Dataset columns:
id: int64 (39 to 79M)
url: string (lengths 31 to 227)
text: string (lengths 6 to 334k)
source: string (lengths 1 to 150)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 71.8k)
subcategories: list (lengths 0 to 30)
1,568,694
https://en.wikipedia.org/wiki/Almonry
An almonry is the place or chamber where alms were distributed to the poor in churches or other ecclesiastical buildings. The person designated to oversee the distribution was called an almoner. Examples in England At Worcester Cathedral the alms are said to have been distributed on stone tables, on each side, within the great porch. In some cases, the duty to take in the poor and the sick was met by setting up an hospitium (inn) or spital (hospital) outside the gate in which about a dozen elderly or sick persons were maintained at the expense of the almoner, who had land allotted to his use from the monastery's estates. At Reading Abbey, the abbey's hospitium, or dormitory for pilgrims, known as the Hospitium of St. John was founded in 1189. The abbey school, which was founded in 1125, moved into the hospitium in 1485 as the Royal Grammar School of King Henry VII. In large monastic establishments, as at Westminster Abbey, it seems to have been a separate building of some importance, either adjoining the gatehouse or near it, so that the establishment might be disturbed as little as possible. Close to the sanctuary, and adjoining its western side, was the eleemosynary or almonry, where the alms of the abbey were daily doled out to the poor and needy. The almonry was a building, analogous to our more prosaic modern alms-houses, erected by King Henry VII and his mother, the Lady Margaret, to the glory of God, for twelve poor men and poor women. The almonry at Evesham was a separate building that was home to the almoner of the Benedictine Abbey of St. Mary and St. Ecgwine. An almonry school was a medieval English monastic charity school supported by a portion of the funds allocated to the almoner. The practice began in the early 14th century when a form of scholarship was established that provided attendance at the cathedral school, housing, and food for boys at least 10 years old who could sing and read. An almonry education could prepare boys for a variety of careers, as well as university. The almonry at Old St Paul's Cathedral was built along the south wall of the nave, not far from the parish church of St Gregory. The choir master was also the almoner, and the almonry housed boy choristers. It also served as a playhouse, in which the boys performed. At the Palace of Whitehall the office of the Royal Almonry was located in Middle Scotland Yard. The Hereditary Grand Almoner, a position instituted by Richard I, distributed alms on the occasion of a coronation. The duties of the High Almoner were more general and included visiting the sick, poor widows, and prisoners and reminding the king to bestow alms, especially on saints' days. A remnant of this custom may be found in the Royal Maundy service. References Church architecture Rooms
Almonry
[ "Engineering" ]
634
[ "Rooms", "Architecture" ]
1,568,897
https://en.wikipedia.org/wiki/OGLE-TR-111
OGLE-TR-111 is a yellow dwarf star approximately 5,000 light-years away in the constellation of Carina (the Keel). Having an apparent magnitude of about 17, this distant and dim star has not yet been cataloged. Because its apparent brightness changes when one of its planets transits, the star has been given the variable star designation V759 Carinae. Planetary system In 2002 the Optical Gravitational Lensing Experiment (OGLE) survey detected that the light from the star periodically dimmed very slightly every 4 days, indicating a planet-sized body transiting the star. But since the mass of the object had not been measured, it was not clear whether it was a true planet, a low-mass red dwarf, or something else. In 2004 radial velocity measurements showed unambiguously that the transiting body is indeed a planet. The planet is probably very similar to the other "hot Jupiters" orbiting nearby stars. Its mass is about half that of Jupiter and it orbits the star at a distance less than 1/20th that of Earth from the Sun. Unconfirmed planet candidate In 2005, evidence of another transit was announced. Planet "OGLE-TR-111c" is a possible extrasolar planet orbiting the star. It was first proposed in 2005 based on preliminary evidence from the Optical Gravitational Lensing Experiment (OGLE) survey. More data is required to confirm this planet candidate. If it is confirmed, OGLE-TR-111 would become one of the first stars known to host a pair of transiting planets. See also OGLE-2005-BLG-390L List of extrasolar planets References External links Carina (constellation) Planetary systems with one confirmed planet Planetary transit variables G-type main-sequence stars Carinae, V759
OGLE-TR-111
[ "Astronomy" ]
362
[ "Carina (constellation)", "Constellations" ]
1,568,958
https://en.wikipedia.org/wiki/Inconel
Inconel is a nickel-chromium-based superalloy often utilized in extreme environments where components are subjected to high temperature, pressure or mechanical loads. Inconel alloys are oxidation- and corrosion-resistant. When heated, Inconel forms a thick, stable, passivating oxide layer protecting the surface from further attack. Inconel retains strength over a wide temperature range, which makes it attractive for high-temperature applications where aluminum and steel would succumb to creep as a result of thermally induced crystal vacancies. Inconel's high-temperature strength is developed by solid solution strengthening or precipitation hardening, depending on the alloy. Inconel alloys are typically used in high temperature applications. Common trade names for various Inconel alloys include: Alloy 625: Inconel 625, Chronin 625, Altemp 625, Sanicro 625, Haynes 625, Nickelvac 625, Nicrofer 6020 and UNS designation N06625. Alloy 600: NA14, BS3076, 2.4816, NiCr15Fe (FR), NiCr15Fe (EU), NiCr15Fe8 (DE) and UNS designation N06600. Alloy 718: Nicrofer 5219, Superimphy 718, Haynes 718, Pyromet 718, Supermet 718, Udimet 718 and UNS designation N07718. History The Inconel family of alloys was first developed before December 1932, when its trademark was registered by the US company International Nickel Company of Delaware and New York. A significant early use was found in support of the development of the Whittle jet engine, during the 1940s, by research teams at Henry Wiggin & Co of Hereford, England, a subsidiary of the Mond Nickel Company, which merged with Inco in 1928. The Hereford Works and its properties, including the Inconel trademark, were acquired in 1998 by Special Metals Corporation. Specific data Composition Inconel alloys vary widely in their compositions, but all are predominantly nickel, with chromium as the second element. Properties When heated, Inconel forms a thick and stable passivating oxide layer protecting the surface from further attack. Inconel retains strength over a wide temperature range, which makes it attractive for high-temperature applications where aluminium and steel would succumb to creep as a result of thermally induced crystal vacancies (see Arrhenius equation). Inconel's high temperature strength is developed by solid solution strengthening or precipitation strengthening, depending on the alloy. In age-hardening or precipitation-strengthening varieties, small amounts of niobium combine with nickel to form the intermetallic compound Ni3Nb or gamma double prime (γ″). A related phase, gamma prime (γ′), forms small cubic crystals that inhibit slip and creep effectively at elevated temperatures. The formation of gamma-prime crystals increases over time, especially after three hours of a heat exposure of , and continues to grow after 72 hours of exposure. Strengthening mechanisms The most prevalent hardening mechanisms for Inconel alloys are precipitate strengthening and solid solution strengthening. In Inconel alloys, one of the two often dominates. For alloys like Inconel 718, precipitate strengthening is the main strengthening mechanism. The majority of strengthening comes from the presence of gamma double prime (γ″) precipitates. Inconel alloys have a γ matrix phase with an FCC structure. γ″ precipitates are made of Ni and Nb, specifically with a Ni3Nb composition. These precipitates are fine, coherent, disk-shaped, intermetallic particles with a tetragonal structure. Secondary precipitate strengthening comes from gamma prime (γ') precipitates. The γ' phase can appear in multiple compositions such as Ni3(Al, Ti). 
The precipitate phase is coherent and has an FCC structure, like the γ matrix; the γ' phase is much less prevalent than γ″. The volume fractions of the γ″ and γ' phases are approximately 15% and 4% after precipitation, respectively. Because of the coherency between the γ matrix and the γ' and γ″ precipitates, strain fields exist that obstruct the motion of dislocations. The presence of MX carbides, with M = (Nb, Ti) and X = (C, N), also helps to strengthen the material. For precipitate strengthening, elements like niobium, titanium, and tantalum play a crucial role. Because the γ″ phase is metastable, over-aging can result in the transformation of γ″ phase precipitates to delta (δ) phase precipitates, their stable counterparts. The δ phase has an orthorhombic structure, a Ni3(Nb, Mo, Ti) composition, and is incoherent. As a result, the transformation of γ″ to δ in Inconel alloys leads to the loss of coherency strengthening, making for a weaker material. That being said, in appropriate quantities, the δ phase is responsible for grain boundary pinning and strengthening. Another common phase in Inconel alloys is the Laves intermetallic phase. Its compositions are (Ni, Cr, Fe)x(Nb, Mo, Ti)y and NiyNb; it is brittle, and its presence can be detrimental to the mechanical behavior of Inconel alloys. Sites with large amounts of Laves phase are prone to crack propagation because of their higher potential for stress concentration. Additionally, due to its high Nb, Mo, and Ti content, the Laves phase can deplete the matrix of these elements, ultimately making precipitate and solid-solution strengthening more difficult. For alloys like Inconel 625, solid-solution hardening is the main strengthening mechanism. Elements like Mo are important in this process. Nb and Ta can also contribute to solid solution strengthening to a lesser extent. In solid solution strengthening, Mo atoms are substituted into the γ matrix of Inconel alloys. Because Mo atoms have a significantly larger radius than those of Ni (209 pm and 163 pm, respectively), the substitution creates strain fields in the crystal lattice, which hinder the motion of dislocations, ultimately strengthening the material. The combination of elemental composition and strengthening mechanisms is why Inconel alloys can maintain their favorable mechanical and physical properties, such as high strength and fatigue resistance, at elevated temperatures, specifically those up to 650 °C. Machining Inconel is a difficult metal to shape and to machine using traditional cold forming techniques due to rapid work hardening. After the first machining pass, work hardening tends to plastically deform either the workpiece or the tool on subsequent passes. For this reason, age-hardened Inconels such as 718 are typically machined using an aggressive but slow cut with a hard tool, minimizing the number of passes required. Alternatively, the majority of the machining can be performed with the workpiece in a "solutionized" form, with only the final steps being performed after age hardening. However, some claim that Inconel can be machined extremely quickly with very fast spindle speeds using a multifluted ceramic tool with a small width of cut at high feed rates, as this causes localized heating and softening in front of the flute. External threads are machined using a lathe to "single-point" the threads or by rolling the threads in the solution treated condition (for hardenable alloys) using a screw machine. 
Inconel 718 can also be roll-threaded after full aging by using induction heating, without increasing the grain size. Holes with internal threads are made by threadmilling. Internal threads can also be formed using sinker electrical discharge machining (EDM). Joining Welding of some Inconel alloys (especially the gamma prime precipitation hardened family; e.g., Waspaloy and X-750) can be difficult due to cracking and microstructural segregation of alloying elements in the heat-affected zone. However, several alloys such as 625 and 718 have been designed to overcome these problems. The most common welding methods are gas tungsten arc welding and electron-beam welding. Uses Inconel is often encountered in extreme environments. It is common in gas turbine blades, seals, and combustors, as well as turbocharger rotors and seals, electric submersible well pump motor shafts, high temperature fasteners, chemical processing and pressure vessels, heat exchanger tubing, steam generators and core components in nuclear pressurized water reactors, natural gas processing with contaminants such as H2S and CO2, firearm sound suppressor blast baffles, and Formula One, NASCAR, NHRA, and APR, LLC exhaust systems. It is also used in the turbo system of the 3rd generation Mazda RX7, and the exhaust systems of high powered Wankel engine and Norton motorcycles where exhaust temperatures reach more than . Inconel is increasingly used in the boilers of waste incinerators. The Joint European Torus and DIII-D tokamaks' vacuum vessels are made of Inconel. Inconel 718 is commonly used for cryogenic storage tanks, downhole shafts, wellhead parts, and in the aerospace industry, where it has become a prime candidate material for constructing heat resistant turbines. Aerospace The Space Shuttle used Inconel studs to secure the solid rocket boosters to the launch platform; four studs per booster, eight in total, supported the entire weight of the ready-to-fly Shuttle system. Eight frangible nuts are encased on the outside of the solid rocket boosters; at launch, explosives separated the nuts, releasing the Shuttle from its launch platform. North American Aviation constructed the skin of the North American X-15 rocket-powered aircraft out of Inconel X-750 alloy. Rocketdyne used Inconel X-750 for the thrust chamber of the F-1 rocket engine used in the first stage of the Saturn V booster. SpaceX uses Inconel (Inconel 718) in the engine manifold of their Merlin engine which powers the Falcon 9 launch vehicle. In a first for 3D printing, the SpaceX SuperDraco rocket engine that provides the launch escape system for the Dragon V2 crew-carrying space capsule is fully printed. In particular, the engine combustion chamber is printed from Inconel using a process of direct metal laser sintering, and operates at very high temperature and a chamber pressure of . SpaceX cast the Raptor rocket engine manifolds from SX300, later SX500, which are nickel superalloys (an improvement over older Inconel alloys). Automotive Tesla claims to use Inconel in place of steel in the main battery pack contactor of its Model S so that it remains springy under the heat of heavy current. Tesla claims that this allows these upgraded vehicles to safely increase the maximum pack output from 1300 to 1500 amperes, allowing for an increase in power output (acceleration) that Tesla refers to as "Ludicrous Mode". Ford Motor Company is using Inconel to make the turbine wheel in the turbocharger of its EcoBlue diesel engines introduced in 2016. 
The exhaust valves on NHRA Top Fuel and Funny Car drag racing engines are often made of Inconel. Ford Australia used Inconel valves in their turbocharged Barra engines. These valves have proven very reliable, holding in excess of 1900 horsepower. BMW has used Inconel in the exhaust manifold of its high performance luxury car, the BMW M5 E34 with the S38 engine, withstanding higher temperatures and reducing backpressure. Jaguar Cars has fitted its Jaguar F-Type SVR high performance sports car with a new lightweight Inconel titanium exhaust system as standard, which withstands higher peak temperatures, reduces backpressure and eliminates of mass from the vehicle. DeLorean Motor Company offers Inconel replacements for failure prone OE trailing arm bolts on the DMC-12. Failure of these bolts can result in loss of the vehicle. Rolled Inconel was frequently used as the recording medium, by engraving, in black box recorders on aircraft. Alternatives to the use of Inconel in chemical applications such as scrubbers, columns, reactors, and pipes are Hastelloy, perfluoroalkoxy (PFA) lined carbon steel or fiber reinforced plastic. Inconel alloys Alloys of Inconel include: Inconel 188: Readily fabricated for commercial gas turbine and aerospace applications. Inconel 230: Alloy 230 plate and sheet mainly used by the power, aerospace, chemical processing and industrial heating industries. Inconel 600: excels in high-temperature and corrosion resistance. Inconel 601 Inconel 617: Solid solution strengthened (nickel-chromium-cobalt-molybdenum), high-temperature strength, corrosion and oxidation resistant, high workability and weldability. Incorporated in the ASME Boiler and Pressure Vessel Code for high temperature nuclear applications such as molten salt reactors in April 2020. Inconel 625: Acid resistant, good weldability. The LCF version is typically used in bellows. It is commonly used for applications in the aeronautic, aerospace, marine, chemical and petrochemical industries. It is also used for reactor-core and control-rod components in pressurized water reactors and as heat exchanger tubes in ammonia cracker plants for heavy water production. Inconel 690: Low cobalt content for nuclear applications, and low resistivity Inconel 706 Inconel 713C: Precipitation hardenable nickel-chromium base cast alloy Inconel 718: Gamma double prime strengthened with good weldability Inconel 738 Inconel X-750: Commonly used for gas turbine components, including blades, seals and rotors. Inconel 751: Increased aluminum content for improved rupture strength in the 1600 °F range Inconel 792: Increased aluminum content for improved high temperature corrosion resistant properties, used especially in gas turbines Inconel 907 Inconel 909 Inconel 925: A nonstabilized austenitic stainless steel with low carbon content. Inconel 939: Gamma prime strengthened to increase weldability In age hardening or precipitation strengthening varieties, alloying additions of aluminum and titanium combine with nickel to form the intermetallic compound Ni3(Al, Ti), or gamma prime (γ′). Gamma prime forms small cubic crystals that inhibit slip and creep effectively at elevated temperatures. See also Hastelloy Incoloy Monel Nichrome Nimonic Stellite References Nickel–chromium alloys Refractory metals Superalloys Aerospace materials Nickel alloys Chromium alloys
Inconel
[ "Chemistry", "Engineering" ]
3,036
[ "Nickel alloys", "Aerospace materials", "Refractory metals", "Superalloys", "Alloys", "Aerospace engineering", "Chromium alloys" ]
1,569,089
https://en.wikipedia.org/wiki/Membrane%20gas%20separation
Gas mixtures can be effectively separated by synthetic membranes made from polymers such as polyamide or cellulose acetate, or from ceramic materials. While polymeric membranes are economical and technologically useful, their performance is bounded by a trade-off known as the Robeson limit (permeability must be sacrificed for selectivity and vice versa). This limit affects polymeric membrane use for CO2 separation from flue gas streams, since mass transport becomes limiting and CO2 separation becomes very expensive due to low permeabilities. Membrane materials have expanded into the realm of silica, zeolites, metal-organic frameworks, and perovskites due to their strong thermal and chemical resistance as well as high tunability (ability to be modified and functionalized), leading to increased permeability and selectivity. Membranes can be used to separate gas mixtures by acting as a permeable barrier through which different compounds move at different rates or do not move at all. The membranes can be nanoporous, polymeric, etc., and the gas molecules penetrate according to their size, diffusivity, or solubility. Basic process Gas separation across a membrane is a pressure-driven process, where the driving force is the difference in pressure between the inlet of raw material and the outlet of product. The membrane used in the process is a generally non-porous layer, so there will not be a severe leakage of gas through the membrane. The performance of the membrane depends on permeability and selectivity. Permeability is affected by the penetrant size. Larger gas molecules have a lower diffusion coefficient. The polymer chain flexibility and free volume in the polymer of the membrane material influence the diffusion coefficient, as the space within the permeable membrane must be large enough for the gas molecules to diffuse across. The solubility is expressed as the ratio of the concentration of the gas in the polymer to the pressure of the gas in contact with it. Permeability is the ability of the membrane to allow the permeating gas to diffuse through the material of the membrane as a consequence of the pressure difference over the membrane, and can be measured in terms of the permeate flow rate, membrane thickness and area, and the pressure difference across the membrane. The selectivity of a membrane is a measure of the ratio of permeability of the relevant gases for the membrane. It can be calculated as the ratio of permeability of two gases in binary separation. The membrane gas separation equipment typically pumps gas into the membrane module and the targeted gases are separated based on differences in diffusivity and solubility. For example, oxygen will be separated from the ambient air and collected at the upstream side, and nitrogen at the downstream side. As of 2016, membrane technology was reported as capable of producing 10 to 25 tonnes of 25 to 40% oxygen per day. Membrane governing methodology There are three main diffusion mechanisms. The first, Knudsen diffusion, holds at very low pressures, where lighter molecules can move across a membrane faster than heavy ones in a material with reasonably large pores. The second, molecular sieving, is the case where the pores of the membrane are too small to let one component pass; this is typically not practical in gas applications, as the molecules are too small to design relevant pores. 
Where the pores are large enough, the movement of molecules is best described by pressure-driven convective flow through capillaries, which is quantified by Darcy's law. However, the more general model in gas applications is solution-diffusion, in which particles first dissolve into the membrane and then diffuse through it, each at a different rate. This model is employed when the pores in the polymer membrane appear and disappear faster relative to the movement of the particles. In a typical membrane system the incoming feed stream is separated into two components: permeate and retentate. The permeate is the gas that travels across the membrane and the retentate is what is left of the feed. On both sides of the membrane, a gradient of chemical potential is maintained by a pressure difference which is the driving force for the gas molecules to pass through. The ease of transport of each species is quantified by the permeability, Pi. With the assumptions of ideal mixing on both sides of the membrane, the ideal gas law, a constant diffusion coefficient and Henry's law, the flux of a species can be related to the pressure difference by Fick's law: where (Ji) is the molar flux of species i across the membrane, (l) is membrane thickness, (Pi) is permeability of species i, (Di) is diffusivity, (Ki) is the Henry coefficient, and (pi') and (pi") represent the partial pressures of the species i at the feed and permeate sides respectively. The product of DiKi is often expressed as the permeability of the species i, on the specific membrane being used. The flow of a second species, j, can be defined as: With the expression above, a membrane system for a binary mixture can be sufficiently defined. It can be seen that the total flow across the membrane is strongly dependent on the relation between the feed and permeate pressures. The ratio of feed pressure (p') over permeate pressure (p") is defined as the membrane pressure ratio (θ). It is clear from the above that a flow of species i or j across the membrane can only occur when: In other words, the membrane will experience flow across it when there exists a concentration gradient between feed and permeate. If the gradient is positive, the flow will go from the feed to the permeate and species i will be separated from the feed. Therefore, the maximum separation of species i results from: Another important coefficient when choosing the optimum membrane for a separation process is the membrane selectivity αij, defined as the ratio of permeability of species i with relation to the species j. This coefficient is used to indicate the level to which the membrane is able to separate species i from j. It is obvious from the expression above that a membrane selectivity of 1 indicates the membrane has no potential to separate the two gases, the reason being that both gases diffuse equally through the membrane. In the design of a separation process, normally the pressure ratio and the membrane selectivity are prescribed by the pressures of the system and the permeability of the membrane. The level of separation achieved by the membrane (concentration of the species to be separated) needs to be evaluated based on the aforementioned design parameters in order to evaluate the cost-effectiveness of the system. Membrane performance The concentration of species i and j across the membrane can be evaluated based on their respective diffusion flows across it. 
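The transport relations used in this evaluation have a standard form; the block below is a reconstruction built from the definitions in the text (a textbook restatement, not the article's own equations), using the symbols defined above:

```latex
% Solution-diffusion flux of species i (Fick's law form), with permeability P_i = D_i K_i:
J_i = \frac{D_i K_i}{l}\,(p_i' - p_i'') = \frac{P_i}{l}\,(p_i' - p_i'')

% Membrane pressure ratio and selectivity:
\theta = \frac{p'}{p''}, \qquad \alpha_{ij} = \frac{P_i}{P_j}

% With mole fractions n_i' (feed side) and n_i'' (permeate side), a positive flux of i
% requires p' n_i' > p'' n_i'', i.e.
\frac{n_i''}{n_i'} < \theta
% so the enrichment of species i can never exceed the pressure ratio.
```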
In the case of a binary mixture, the concentration of species i across the membrane: This can be further expanded to obtain an expression of the form: Using the relations: The expression can be rewritten as: The solution to the above quadratic expression can be expressed as: Finally, an expression for the permeate concentration is obtained by the following: Along the separation unit, the feed concentration decays with the diffusion across the membrane, causing the concentration at the membrane to drop accordingly. As a result, the total permeate flow (q"out) results from the integration of the diffusion flow across the membrane from the feed inlet (q'in) to feed outlet (q'out). A mass balance across a differential length of the separation unit is therefore: where: Because of the binary nature of the mixture, only one species needs to be evaluated. Prescribing a function n'i=n'i(x), the species balance can be rewritten as: Where: Lastly, the area required per unit membrane length can be obtained by the following expression: Membrane materials for carbon capture in flue gas streams The material of the membrane plays an important role in its ability to provide the desired performance characteristics. It is optimal to have a membrane with a high permeability and sufficient selectivity, and it is also important to match the membrane properties to the system operating conditions (for example pressures and gas composition). Synthetic membranes are made from a variety of polymers including polyethylene, polyamides, polyimides, cellulose acetate, polysulphone and polydimethylsiloxane. Polymer membranes Polymeric membranes are a common option for use in the capture of CO2 from flue gas because of the maturity of the technology in a variety of industries, namely petrochemicals. The ideal polymer membrane has both a high selectivity and permeability. Polymer membranes are examples of systems that are dominated by the solution-diffusion mechanism. The membrane is considered to have holes into which the gas can dissolve (solubility) and the molecules can move from one cavity to the other (diffusion). It was discovered by Robeson in the early 1990s that polymers with a high selectivity have a low permeability and the opposite is true: materials with a low selectivity have a high permeability. This is best illustrated in a Robeson plot where the selectivity is plotted as a function of the CO2 permeation. In this plot, the upper bound of selectivity is approximately a linear function of the permeability. It was found that the solubility in polymers is mostly constant but the diffusion coefficients vary significantly and this is where the engineering of the material occurs. Somewhat intuitively, the materials with the highest diffusion coefficients have a more open pore structure, thus losing selectivity. There are two methods that researchers are using to break the Robeson limit; one of these is the use of glassy polymers whose phase transition and changes in mechanical properties make it appear that the material is absorbing molecules and thus surpasses the upper limit. The second method of pushing the boundaries of the Robeson limit is by the facilitated transport method. As previously stated, the solubility of polymers is typically fairly constant but the facilitated transport method uses a chemical reaction to enhance the permeability of one component without changing the selectivity. 
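Returning to the binary-mixture performance relations above, the permeate composition can also be obtained numerically from the ratio of the two component fluxes rather than from the closed-form quadratic. The sketch below is illustrative only: the function name and the bisection scheme are assumptions of this sketch, and the physics is just the solution-diffusion flux relation restated in code.

```python
def permeate_fraction(x_feed: float, selectivity: float, pressure_ratio: float,
                      tol: float = 1e-10) -> float:
    """Mole fraction of the faster-permeating gas i in the permeate (binary mixture).

    Assumes solution-diffusion transport with perfect mixing on both sides:
        J_i ~ P_i (p' x - p'' y),   J_j ~ P_j (p' (1 - x) - p'' (1 - y))
    together with the steady-state condition y / (1 - y) = J_i / J_j.
    Here selectivity = P_i / P_j and pressure_ratio = p' / p''.
    """
    theta, alpha = pressure_ratio, selectivity

    def residual(y: float) -> float:
        # Component fluxes with common factors dropped (they cancel in the ratio).
        j_i = alpha * (theta * x_feed - y)
        j_j = theta * (1.0 - x_feed) - (1.0 - y)
        # Zero when the permeate composition equals the flux ratio.
        return y * j_j - (1.0 - y) * j_i

    # The root lies between the feed fraction and the pressure-ratio limit theta * x.
    lo, hi = x_feed, min(1.0 - 1e-12, theta * x_feed)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    # Example: 20% CO2 in the feed, CO2/N2 selectivity of 50, pressure ratio of 5
    # (the practical limit cited later in the article); prints roughly 0.76.
    print(f"Permeate CO2 fraction: {permeate_fraction(0.20, 50.0, 5.0):.3f}")
```

The bracketing follows from the two constraints already stated in the text: the permeate is enriched relative to the feed, and enrichment can never exceed the pressure ratio.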
Nanoporous membranes Nanoporous membranes are fundamentally different from polymer-based membranes in that their chemistry is different and that they do not follow the Robeson limit for a variety of reasons. A nanoporous membrane can be pictured as a structure of cavities connected by narrower windows, with the remaining volume occupied by the walls of the structure. In the engineering of these membranes, the size of the cavity (Lcy x Lcz) and window region (Lwy x Lwz) can be modified so that the desired permeation is achieved. It has been shown that the permeability of a membrane is the product of adsorption and diffusion. In low loading conditions, the adsorption can be computed by the Henry coefficient. If the assumption is made that the energy of a particle does not change when moving through this structure, only the entropy of the molecules changes based on the size of the openings. If we first consider changes to the cavity geometry: the larger the cavity, the larger the entropy of the adsorbed molecules, which thus makes the Henry coefficient larger. For diffusion, an increase in entropy will lead to a decrease in free energy which in turn leads to a decrease in the diffusion coefficient. Conversely, changing the window geometry will primarily affect the diffusion of the molecules and not the Henry coefficient. In summary, by using the above simplified analysis, it is possible to understand why the upper limit of the Robeson line does not hold for nanostructures. In the analysis, both the diffusion and Henry coefficients can be modified independently, which allows the material to exceed the upper limit for polymer membranes. Silica membranes Silica membranes are mesoporous and can be made with high uniformity (the same structure throughout the membrane). The high porosity of these membranes gives them very high permeabilities. Synthesized membranes have smooth surfaces and can be modified on the surface to drastically improve selectivity. Functionalizing silica membrane surfaces with amine-containing molecules (on the surface silanol groups) allows the membranes to separate CO2 from flue gas streams more effectively. Surface functionalization (and thus chemistry) can be tuned to be more efficient for wet flue gas streams as compared to dry flue gas streams. While silica membranes were previously impractical due to their technical scalability and cost (they are very difficult to produce in an economical manner on a large scale), there have been demonstrations of a simple method of producing silica membranes on hollow polymeric supports. These demonstrations indicate that economical materials and methods can effectively separate CO2 and N2. Ordered mesoporous silica membranes have shown considerable potential for surface modification that allows for ease of CO2 separation. Surface functionalization with amines leads to the reversible formation of carbamates (during CO2 flow), increasing CO2 selectivity significantly. Zeolite membranes Zeolites are crystalline aluminosilicates with a regular repeating structure of molecular-sized pores. Zeolite membranes selectively separate molecules based on pore size and polarity and are thus highly tunable to specific gas separation processes. In general, smaller molecules and those with stronger zeolite-adsorption properties are adsorbed onto zeolite membranes with larger selectivity. 
The capacity to discriminate based on both molecular size and adsorption affinity makes zeolite membranes an attractive candidate for CO2 separation from N2, CH4, and H2. Scientists have found that the gas-phase enthalpy (heat) of adsorption on zeolites increases as follows: H2 < CH4 < N2 < CO2. It is generally accepted that CO2 has the largest adsorption energy because it has the largest quadrupole moment, thereby increasing its affinity for charged or polar zeolite pores. At low temperatures, zeolite adsorption-capacity is large and the high concentration of adsorbed CO2 molecules blocks the flow of other gases. Therefore, at lower temperatures, CO2 selectively permeates through zeolite pores. Several recent research efforts have focused on developing new zeolite membranes that maximize the CO2 selectivity by taking advantage of the low-temperature blocking phenomena. Researchers have synthesized Y-type (Si:Al>3) zeolite membranes which achieve room-temperature separation factors of 100 and 21 for CO2/N2 and CO2/CH4 mixtures respectively. DDR-type and SAPO-34 membranes have also shown promise in separating CO2 and CH4 at a variety of pressures and feed compositions. The SAPO-34 membranes, being nitrogen selective, are also strong contenders for the natural gas sweetening process. Researchers have also made an effort to utilize zeolite membranes for the separation of H2 from hydrocarbons. Hydrogen can be separated from larger hydrocarbons such as C4H10 with high selectivity. This is due to the molecular sieving effect since zeolites have pores much larger than H2, but smaller than these large hydrocarbons. Smaller hydrocarbons such as CH4, C2H6, and C3H8 are small enough to not be separated by molecular sieving. Researchers achieved a higher selectivity of hydrogen when performing the separation at high temperatures, likely as a result of a decrease in the competitive adsorption effect. Metal-organic framework (MOF) membranes There have been advances in zeolitic-imidazolate frameworks (ZIFs), a subclass of metal-organic frameworks (MOFs), that have allowed them to be useful for carbon dioxide separation from flue gas streams. Extensive modeling has been performed to demonstrate the value of using MOFs as membranes. MOF materials are adsorption-based, and thus can be tuned to achieve selectivity. The drawback to MOF systems is their limited stability in water and other compounds present in flue gas streams. Select materials, such as ZIF-8, have demonstrated stability in water and benzene, contents often present in flue gas mixtures. ZIF-8 can be synthesized as a membrane on a porous alumina support and has proven to be effective at separating CO2 from flue gas streams. At similar CO2/CH4 selectivity to Y-type zeolite membranes, ZIF-8 membranes achieve unprecedented CO2 permeance, two orders of magnitude above the previous standard. Perovskite membranes Perovskites are mixed metal oxides with a well-defined cubic structure and a general formula of ABO3, where A is an alkaline earth or lanthanide element and B is a transition metal. These materials are attractive for CO2 separation because of the tunability of the metal sites as well as their stabilities at elevated temperatures. The separation of CO2 from N2 was investigated with an α-alumina membrane impregnated with BaTiO3. It was found that adsorption of CO2 was favorable at high temperatures due to an endothermic interaction between CO2 and the material, promoting mobile CO2 that enhanced CO2 adsorption-desorption rate and surface diffusion. 
The experimental separation factor of CO2 to N2 was found to be 1.1-1.2 at 100 °C to 500 °C, which is higher than the separation factor limit of 0.8 predicted by Knudsen diffusion. Though the separation factor was low due to pinholes observed in the membrane, this demonstrates the potential of perovskite materials in their selective surface chemistry for CO2 separation. Other membrane technologies In special cases other materials can be utilized; for example, palladium membranes permit transport solely of hydrogen. In addition to palladium membranes (which are typically palladium silver alloys to stop embrittlement of the alloy at lower temperature) there is also a significant research effort looking into finding non-precious metal alternatives, although slow kinetics of exchange on the surface of the membrane and the tendency for the membranes to crack or disintegrate after a number of duty cycles or during cooling are problems yet to be fully solved. Construction Membranes are typically contained in one of three modules: Hollow fibre bundles in a metal module Spiral wound bundles in a metal module Plate and frame module constructed like a plate and frame heat exchanger Uses Membranes are employed in: The separation of nitrogen or oxygen from air (generally only up to 99.5%) Separation of hydrogen from gases like nitrogen and methane Recovery of hydrogen from product streams of ammonia plants Recovery of hydrogen in oil refinery processes Separation of methane from the other components of biogas Enrichment of air by oxygen for medical or metallurgical purposes. One of the methods used for commercial production of nitrox breathing gas for underwater diving. Enrichment of ullage by nitrogen in inerting systems designed to prevent fuel tank explosions Removal of water vapor from natural gas and other gases Removal of SO2, CO2 and H2S from natural gas (polyamide membranes) Removal of volatile organic liquids (VOL) from air of exhaust streams Air separation Oxygen-enriched air is in high demand for a range of medical and industrial applications including chemical and combustion processes. Cryogenic distillation is the mature technology for commercial air separation for the production of large quantities of high purity oxygen and nitrogen. However, it is a complex process, is energy-intensive, and is generally not suitable for small-scale production. Pressure swing adsorption is also commonly used for air separation and can also produce high purity oxygen at medium production rates, but it still requires considerable space, high investment and high energy consumption. The membrane gas separation method is a relatively low environmental impact and sustainable process providing continuous production, simple operation, lower pressure/temperature requirements, and compact space requirements. Current status of CO2 capture with membranes A great deal of research has been undertaken to utilize membranes instead of absorption or adsorption for carbon capture from flue gas streams; however, no current projects exist that utilize membranes. Process engineering, along with new developments in materials, has shown that membranes have the greatest potential for low energy penalty and cost compared to competing technologies. Background Today, membranes are used for commercial separations involving: N2 from air, H2 from ammonia in the Haber-Bosch process, natural gas purification, and tertiary-level enhanced oil recovery supply. 
Single-stage membrane operations involve a single membrane with one selectivity value. Single-stage membranes were first used in natural gas purification, separating CO2 from methane. A disadvantage of single-stage membranes is the loss of product in the permeate due to the constraints imposed by the single selectivity value. Increasing the selectivity reduces the amount of product lost in the permeate, but comes at the cost of requiring a larger pressure difference to process an equivalent amount of a flue stream. In practice, the maximum pressure ratio economically possible is around 5:1. To combat the loss of product in the membrane permeate, engineers use "cascade processes" in which the permeate is recompressed and interfaced with additional, higher selectivity membranes. The retentate streams can be recycled, which achieves a better yield of product. Need for multi-stage process Single-stage membrane devices are not feasible for obtaining a high concentration of separated material in the permeate stream. This is due to the pressure ratio limit that is economically unrealistic to exceed. Therefore, the use of multi-stage membranes is required to concentrate the permeate stream. The use of a second stage allows for less membrane area and power to be used. This is because of the higher concentration that passes the second stage, as well as the lower volume of gas for the pump to process. Other measures, such as adding another stage that uses air to concentrate the stream, further reduce cost by increasing the concentration within the feed stream. Additional methods, such as combining multiple types of separation processes, allow for variation in creating economical process designs. Membrane use in hybrid processes Hybrid processes have a long-standing history in gas separation. Typically, membranes are integrated into already existing processes such that they can be retrofitted into existing carbon capture systems. MTR (Membrane Technology and Research Inc.) and UT Austin have worked to create hybrid processes, utilizing both absorption and membranes, for CO2 capture. First, an absorption column using piperazine as a solvent absorbs about half the carbon dioxide in the flue gas, then the use of a membrane results in 90% capture. A parallel setup is also possible, with the membrane and absorption processes occurring simultaneously. Generally, these processes are most effective when the highest content of carbon dioxide enters the amine absorption column. Incorporating hybrid design processes allows for retrofitting into fossil fuel power plants. Hybrid processes can also use cryogenic distillation and membranes. For example, hydrogen and carbon dioxide can be separated, first using cryogenic gas separation, whereby most of the carbon dioxide exits first, then using a membrane process to separate the remaining carbon dioxide, after which it is recycled for further attempts at cryogenic separation. Cost analysis Cost limits the pressure ratio in a membrane CO2 separation stage to a value of 5; higher pressure ratios eliminate any economic viability for CO2 capture using membrane processes. Recent studies have demonstrated that multi-stage CO2 capture/separation processes using membranes can be economically competitive with older and more common technologies such as amine-based absorption. Currently, both membrane and amine-based absorption processes can be designed to yield a 90% CO2 capture rate. 
For carbon capture at an average 600 MW coal-fired power plant, the cost of CO2 capture using amine-based absorption is in the $40–100 per ton of CO2 range, while the cost of CO2 capture using current membrane technology (including current process design schemes) is about $23 per ton of CO2. Additionally, running an amine-based absorption process at an average 600 MW coal-fired power plant consumes about 30% of the energy generated by the power plant, while running a membrane process requires about 16% of the energy generated. CO2 transport (e.g. to geologic sequestration sites, or to be used for EOR) costs about $2–5 per ton of CO2. This cost is the same for all types of CO2 capture/separation processes such as membrane separation and absorption. In terms of dollars per ton of captured CO2, the least expensive membrane processes being studied at this time are multi-step counter-current flow/sweep processes. See also References Separation processes Gas technologies Membrane technology Industrial gases
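To make the energy-penalty comparison concrete, a back-of-the-envelope calculation follows; the 600 MW plant size and the 30% versus 16% parasitic loads are the figures quoted above, and everything else is simple arithmetic rather than data from the article:

```python
def parasitic_load(plant_mw: float, energy_fraction: float) -> float:
    """Power consumed by the capture process, in MW."""
    return plant_mw * energy_fraction

plant_mw = 600.0
amine_mw = parasitic_load(plant_mw, 0.30)     # ~180 MW consumed by amine absorption
membrane_mw = parasitic_load(plant_mw, 0.16)  # ~96 MW consumed by the membrane process

# Net electricity left for sale in each case, and the difference.
print(f"Amine absorption: {plant_mw - amine_mw:.0f} MW net output")
print(f"Membrane process: {plant_mw - membrane_mw:.0f} MW net output")
print(f"Membrane advantage: {amine_mw - membrane_mw:.0f} MW")
```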
Membrane gas separation
[ "Chemistry" ]
5,084
[ "Separation processes", "Membrane technology", "Industrial gases", "nan", "Chemical process engineering" ]
1,569,100
https://en.wikipedia.org/wiki/OGLE-TR-10
OGLE-TR-10 is a distant, magnitude 16 star in the constellation of Sagittarius. It is located near the Galactic Center. This star is listed as an eclipsing type variable star with the eclipse due to the passage of the planet as noted in the discovery papers. Planetary system This star is home to OGLE-TR-10b, a transiting planet found by the Optical Gravitational Lensing Experiment (OGLE) survey in 2002. See also Optical Gravitational Lensing Experiment or OGLE List of extrasolar planets References External links Planetary transit variables Sagittarius (constellation) G-type main-sequence stars Planetary systems with one confirmed planet Sagittarii, V5125
OGLE-TR-10
[ "Astronomy" ]
145
[ "Sagittarius (constellation)", "Constellations" ]
1,569,192
https://en.wikipedia.org/wiki/HD%2027894
HD 27894 is a single star with a system of orbiting exoplanets, located in the southern constellation of Reticulum. It is too faint to be seen with the naked eye at an apparent visual magnitude of 9.36. This system lies at a distance of 142.5 light years from the Sun, as determined via parallax measurements, and is drifting further away with a radial velocity of 83 km/s. The spectrum of HD 27894 presents as a K-type main-sequence star, an orange dwarf, with a stellar classification of K2 V. This is a quiescent solar-type star that displays no significant magnetic activity in its chromosphere and is spinning slowly with a rotation period of roughly 44 days. The abundance of iron in the star is much higher than in the Sun, an indicator that it is metal-rich. It has 83% of the mass of the Sun and 79% of the Sun's radius. The star is radiating 33% of the luminosity of the Sun from its photosphere at an effective temperature of 4,923 K. Planetary system In 2005, the Geneva Extrasolar Planet Search Team announced the discovery of an extrasolar planet orbiting the star. In 2017, the discovery of two additional exoplanets was announced. One is very close to the star like the one discovered earlier, while the other one orbits the star at a much larger distance. It is the first system where such a large gap between orbital distances has been found. In 2022, the inclination and true mass of HD 27894 d were measured via astrometry. The study only found strong evidence for planets b and d. See also List of extrasolar planets References K-type main-sequence stars Planetary systems with three confirmed planets Reticulum Durchmusterung objects 027894 020277
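The quoted radius, temperature, and luminosity for HD 27894 can be cross-checked with the Stefan-Boltzmann relation L/Lsun = (R/Rsun)^2 (T/Tsun)^4. The short sketch below is only a consistency check; the solar effective temperature of 5772 K is an assumed reference value, not a figure from the article.

```python
T_SUN_K = 5772.0  # assumed solar effective temperature (IAU nominal value)

def luminosity_ratio(radius_ratio: float, t_eff_k: float) -> float:
    """L/L_sun from the Stefan-Boltzmann law, given R/R_sun and T_eff in kelvin."""
    return radius_ratio**2 * (t_eff_k / T_SUN_K)**4

# Values quoted in the article: R = 0.79 R_sun, T_eff = 4923 K.
print(f"L/L_sun = {luminosity_ratio(0.79, 4923.0):.2f}")  # ~0.33, matching the quoted 33%
```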
HD 27894
[ "Astronomy" ]
383
[ "Reticulum", "Constellations" ]
1,569,217
https://en.wikipedia.org/wiki/Veronese%20surface
In mathematics, the Veronese surface is an algebraic surface in five-dimensional projective space, and is realized by the Veronese embedding, the embedding of the projective plane given by the complete linear system of conics. It is named after Giuseppe Veronese (1854–1917). Its generalization to higher dimension is known as the Veronese variety. The surface admits an embedding in the four-dimensional projective space defined by the projection from a general point in the five-dimensional space. Its general projection to three-dimensional projective space is called a Steiner surface. Definition The Veronese surface is the image of the mapping given by where denotes homogeneous coordinates. The map is known as the Veronese embedding. Motivation The Veronese surface arises naturally in the study of conics. A conic is a degree 2 plane curve, thus defined by an equation: The pairing between coefficients and variables is linear in coefficients and quadratic in the variables; the Veronese map makes it linear in the coefficients and linear in the monomials. Thus for a fixed point the condition that a conic contains the point is a linear equation in the coefficients, which formalizes the statement that "passing through a point imposes a linear condition on conics". Veronese map The Veronese map or Veronese variety generalizes this idea to mappings of general degree d in n+1 variables. That is, the Veronese map of degree d is the map with m given by the multiset coefficient, or more familiarly the binomial coefficient, as: The map sends to all possible monomials of total degree d (of which there are ); we have since there are variables to choose from; and we subtract since the projective space has coordinates. The second equality shows that for fixed source dimension n, the target dimension is a polynomial in d of degree n and leading coefficient For low degree, is the trivial constant map to and is the identity map on so d is generally taken to be 2 or more. One may define the Veronese map in a coordinate-free way, as where V is any vector space of finite dimension, and are its symmetric powers of degree d. This is homogeneous of degree d under scalar multiplication on V, and therefore passes to a mapping on the underlying projective spaces. If the vector space V is defined over a field K which does not have characteristic zero, then the definition must be altered to be understood as a mapping to the dual space of polynomials on V. This is because for fields with finite characteristic p, the pth powers of elements of V are not rational normal curves, but are of course a line. (See, for example additive polynomial for a treatment of polynomials over a field of finite characteristic). Rational normal curve For the Veronese variety is known as the rational normal curve, of which the lower-degree examples are familiar. For the Veronese map is simply the identity map on the projective line. For the Veronese variety is the standard parabola in affine coordinates For the Veronese variety is the twisted cubic, in affine coordinates Biregular The image of a variety under the Veronese map is again a variety, rather than simply a constructible set; furthermore, these are isomorphic in the sense that the inverse map exists and is regular – the Veronese map is biregular. More precisely, the images of open sets in the Zariski topology are again open. See also The Veronese surface is the only Severi variety of dimension 2 References Joe Harris, Algebraic Geometry, A First Course, (1992) Springer-Verlag, New York. 
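The maps described above have standard explicit forms; the following is a textbook restatement (not text recovered from this copy of the article), covering the degree-2 Veronese embedding of the plane and the general dimension count:

```latex
% Veronese surface: the image of the degree-2 embedding of the projective plane,
% nu : P^2 -> P^5,
\nu : [x:y:z] \longmapsto [x^2 : y^2 : z^2 : xy : xz : yz]

% General Veronese map of degree d on P^n, sending a point to all monomials of
% total degree d; the target dimension m is given by the binomial coefficient
\nu_d : \mathbb{P}^n \to \mathbb{P}^m, \qquad m = \binom{n+d}{d} - 1

% Rational normal curves (the case n = 1):
% d = 1: the identity on P^1;
% d = 2: the conic [x:y] -> [x^2 : xy : y^2], the standard parabola y = x^2
%        in affine coordinates;
% d = 3: the twisted cubic [x:y] -> [x^3 : x^2 y : x y^2 : y^3],
%        i.e. t -> (t, t^2, t^3) in affine coordinates.
```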
Algebraic varieties Algebraic surfaces Complex surfaces Tensors
Veronese surface
[ "Engineering" ]
770
[ "Tensors" ]
1,569,246
https://en.wikipedia.org/wiki/HD%20216770
HD 216770 is a star with an orbiting exoplanet in the southern constellation of Piscis Austrinus. With an apparent visual magnitude of 8.11, it is too faint to be visible to the naked eye. It is located at a distance of 120 light years from the Sun, as determined by parallax measurements, and is drifting further away with a radial velocity of 31.1 km/s. The star shows a high proper motion, traversing the celestial sphere at an angular rate of . The spectrum of HD 216770 presents as a late G-type main-sequence star, a yellow dwarf, with a stellar classification of G9VCN+1, where the suffix notation indicates an anomalously strong CN band. The star is smaller than the Sun, with 74% of the Sun's mass and 93% of the Sun's radius. It is about three billion years old and is spinning slowly with a rotation period of 35.6 days. The abundance of iron, a measure of the metallicity of the star, is higher than solar. The star is radiating 79% of the luminosity of the Sun from its photosphere at an effective temperature of 5,399 K. In 2003, the Geneva Extrasolar Planet Search team announced an exoplanet orbiting the star. As the inclination of the orbital plane is unknown, only a lower bound on the mass of the object can be determined. It has at least 65% of the mass of Jupiter. The body has an eccentric orbit with a period of 118.5 days. See also HD 10647 HD 108874 HD 111232 HD 142415 HD 169830 HD 41004 HD 65216 Lists of exoplanets References External links G-type main-sequence stars Planetary systems with one confirmed planet Piscis Austrinus Durchmusterung objects 216770 113238
HD 216770
[ "Astronomy" ]
396
[ "Piscis Austrinus", "Constellations" ]
1,569,290
https://en.wikipedia.org/wiki/Nalidixic%20acid
Nalidixic acid (tradenames Nevigramon, NegGram, Wintomylon and WIN 18,320) is the first of the synthetic quinolone antibiotics. In a technical sense, it is a naphthyridone, not a quinolone: its ring structure is a 1,8-naphthyridine nucleus that contains two nitrogen atoms, unlike quinoline, which has a single nitrogen atom. Synthetic quinolone antibiotics were discovered by George Lesher and coworkers as a byproduct of chloroquine manufacture in the 1960s; nalidixic acid itself was used clinically, starting in 1967. Nalidixic acid is effective primarily against Gram-negative bacteria, with minor anti-Gram-positive activity. In lower concentrations, it acts in a bacteriostatic manner; that is, it inhibits growth and reproduction. In higher concentrations, it is bactericidal, meaning that it kills bacteria instead of merely inhibiting their growth. It has historically been used for treating urinary tract infections, caused, for example, by Escherichia coli, Proteus, Shigella, Enterobacter, and Klebsiella. It is no longer clinically used for this indication in the US as less toxic and more effective agents are available. The marketing authorization for nalidixic acid has been suspended throughout the EU. It is also used as a tool in studies of the regulation of bacterial division. It selectively and reversibly blocks DNA replication in susceptible bacteria. Nalidixic acid and related antibiotics inhibit a subunit of DNA gyrase and topoisomerase IV and induce formation of cleavage complexes. It also inhibits the nicking-closing activity on the subunit of DNA gyrase that releases the positive binding stress on the supercoiled DNA. Adverse effects Hives, rash, intense itching, or fainting soon after a dose may be a sign of anaphylaxis. Common adverse effects include rash, itchy skin, blurred or double vision, halos around lights, changes in color vision, nausea, vomiting, and diarrhea. Nalidixic acid may also cause convulsions and hyperglycemia, photosensitivity reactions, and sometimes hemolytic anemia, thrombocytopenia or leukopenia. Increased intracranial pressure has occasionally been reported, particularly in infants and young children. Overdose In case of overdose, the patient experiences headache, visual disturbances, balance disorders, mental confusion, metabolic acidosis and seizures. Spectrum of bacterial susceptibility and resistance Aeromonas hydrophila, Clostridium and Haemophilus are generally susceptible to nalidixic acid, while other bacteria such as Bifidobacteria, Lactobacillus, Pseudomonas and Staphylococcus are resistant. Salmonella enterica serovar Typhimurium strain ATCC14028 acquires nalidixic acid resistance when the gyrB gene is mutated (strain IR715). See also Amfonelic acid Oxolinic acid References External links Quinolone antibiotics Naphthyridines WIN compounds Carboxylic acids Topoisomerase inhibitors
Nalidixic acid
[ "Chemistry" ]
674
[ "Carboxylic acids", "Functional groups" ]
1,569,292
https://en.wikipedia.org/wiki/Stochastic%20electrodynamics
Stochastic electrodynamics (SED) extends classical electrodynamics (CED) of theoretical physics by adding the hypothesis of a classical Lorentz invariant radiation field having statistical properties similar to that of the electromagnetic zero-point field (ZPF) of quantum electrodynamics (QED). Key ingredients Stochastic electrodynamics combines two conventional classical ideas – electromagnetism derived from point charges obeying Maxwell's equations and particle motion driven by Lorentz forces – with one unconventional hypothesis: the classical field has radiation even at T=0. This zero-point radiation is inferred from observations of the (macroscopic) Casimir effect forces at low temperatures. As temperature approaches zero, experimental measurements of the force between two uncharged, conducting plates in a vacuum do not go to zero as classical electrodynamics would predict. Taking this result as evidence of classical zero-point radiation leads to the stochastic electrodynamics model. Brief history Stochastic electrodynamics is a term for a collection of research efforts of many different styles based on the ansatz that there exists a Lorentz invariant random electromagnetic radiation. The basic ideas have been around for a long time, but Marshall (1963) and Braffort seem to have originated the more concentrated efforts that started in the 1960s. Thereafter Timothy Boyer, Luis de la Peña and Ana María Cetto were perhaps the most prolific contributors in the 1970s and beyond. Others have made contributions, alterations, and proposals concentrating on applying SED to problems in QED. A separate thread has been the investigation of an earlier proposal by Walther Nernst attempting to use the SED notion of a classical ZPF to explain inertial mass as due to a vacuum reaction. In 2010, Cavalleri et al. introduced SEDS ('pure' SED, as they call it, plus spin) as a fundamental improvement that they claim potentially overcomes all the known drawbacks of SED. They also claim SEDS resolves four observed effects that are so far unexplained by QED, i.e., 1) the physical origin of the ZPF and its natural upper cutoff; 2) an anomaly in experimental studies of the neutrino rest mass; 3) the origin and quantitative treatment of 1/f noise; and 4) the high-energy tail (~10^21 eV) of cosmic rays. Two double-slit electron diffraction experiments are proposed to discriminate between QM and SEDS. In 2013, Auñon et al. showed that Casimir and Van der Waals interactions are a particular case of stochastic forces from electromagnetic sources when the broad Planck's spectrum is chosen, and the wavefields are non-correlated. Addressing fluctuating partially coherent light emitters with a tailored spectral energy distribution in the optical range, this establishes the link between stochastic electrodynamics and coherence theory; henceforth putting forward a way to optically create and control both such zero-point fields as well as Lifshitz forces of thermal fluctuations. In addition, this opens the path to build many more stochastic forces on employing narrow-band light sources for bodies with frequency-dependent responses. Scope of SED SED has been used in attempts to provide a classical explanation for effects previously considered to require quantum mechanics (here restricted to the Schrödinger equation and the Dirac equation and QED) for their explanation. It has also motivated a classical ZPF-based underpinning for gravity and inertia. 
There is no universal agreement on the successes and failures of SED, either in its congruence with standard theories of quantum mechanics, QED, and gravity or in its compliance with observation. The following SED-based explanations are relatively uncontroversial and are free of criticism at the time of writing: The Van der Waals force Diamagnetism The Unruh effect The following SED-based calculations and SED-related claims are more controversial, and some have been subject to published criticism: The ground state of the harmonic oscillator The ground state of the hydrogen atom De Broglie waves Inertia Gravitation See also References Fringe physics Quantum field theory Emergence
Stochastic electrodynamics
[ "Physics" ]
875
[ "Quantum field theory", "Quantum mechanics" ]
1,569,338
https://en.wikipedia.org/wiki/HD%20192263
HD 192263 is a star with an orbiting exoplanet in the equatorial constellation of Aquila. The system is located at a distance of 64 light years from the Sun based on parallax measurements, and is drifting closer with a radial velocity of −10.7 km/s. It has an absolute magnitude of 6.36, but at that distance the apparent visual magnitude is 7.79. It is too faint to be viewed with the naked eye, but with good binoculars or a small telescope it should be easy to spot. In the late 1990s, Klaus G. Strassmeier et al. discovered that HD 192263 is a variable star while conducting a search for stars that would be good candidates for Doppler imaging. It was given its variable star designation, V1703 Aquilae, in 2006. The spectrum of HD 192263 matches a K-type main-sequence star, an orange dwarf, with a stellar classification of K1/2 V. This is a BY Draconis variable, with variations in luminosity being caused by star spots on a rotating stellar atmosphere. It has a high level of magnetic activity in its chromosphere. The star is being viewed almost equator-on, with a projected rotational velocity of 2 km/s. It has 65% of the mass of the Sun, 74% of the Sun's radius, and is roughly 6.6 billion years old. The star is radiating 30% of the luminosity of the Sun from its photosphere at an effective temperature of 4,955 K. The star HD 192263 is named Phoenicia. The name was selected in the NameExoWorlds campaign by Lebanon, during the 100th anniversary of the IAU. Phoenicia was an ancient thalassocratic civilisation of the Mediterranean that originated from the area of modern-day Lebanon. Various companions for the star have been reported, but all of them are probably line-of-sight optical components or just spurious observations. Planetary system On 28 September 1999, an exoplanet around HD 192263 was found by the Geneva Extrasolar Planet Search team using the CORALIE spectrograph on the 1.2m Euler Swiss Telescope at La Silla Observatory; it was discovered independently by Vogt et al. The exoplanet is named Beirut after the capital and largest city of Lebanon. See also List of exoplanets discovered before 2000 - HD 192263 b / Beirut References External links K-type main-sequence stars BY Draconis variables Planetary systems with one confirmed planet Aquila (constellation) Durchmusterung objects 192263 099711 Aquilae, V1703
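The distance, absolute magnitude and apparent magnitude quoted above can be checked against each other with the standard distance modulus. The short sketch below (in Python; the distance and absolute magnitude are taken from the text, the parsec conversion factor is standard) is only a consistency check, not part of the cited measurements:

import math

LY_PER_PARSEC = 3.2616   # light-years per parsec (standard conversion)

distance_ly = 64.0       # distance from the article text
absolute_mag = 6.36      # absolute magnitude from the article text

distance_pc = distance_ly / LY_PER_PARSEC
apparent_mag = absolute_mag + 5 * math.log10(distance_pc) - 5   # distance modulus

print(round(apparent_mag, 2))   # about 7.82, consistent with the quoted 7.79 to within rounding of the distance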
HD 192263
[ "Astronomy" ]
562
[ "Aquila (constellation)", "Constellations" ]
1,569,480
https://en.wikipedia.org/wiki/TransManche%20Link
TransManche Link (Cross Channel Link) or TML was a British-French construction consortium responsible for building the Channel Tunnel under the English Channel between Cheriton in England, and Coquelles in France. History In April 1985 the British and French governments invited proposals for the construction of a link between the two countries to be privately funded. In January 1986 the two governments selected the Channel Tunnel Group/France Manche proposal for the construction of two undersea tunnels. At Canterbury Cathedral on 12 February 1986 the governments signed a treaty approving construction of the Channel Tunnel. In March the concession for the operation of the tunnel was given to Channel Tunnel Group (CTG) and France Manche (FM). Following the award of this concession CTG was subsumed by the newly formed Eurotunnel plc and FM was similarly replaced with Eurotunnel SA, together these formed the Eurotunnel Group. In July 1985 the British contractors formed Translink Contractors and the French consortium formed Transmanche Construction. On 18 October 1985 these two groups were merged to create TransManche Link (TML). TML was thus contracted to build the tunnel for its customer, Eurotunnel, who would own and operate it. TML senior management were employees of the partner companies seconded to the new organisation. In October 1986 Eurotunnel was partially floated and the contractors and banks no longer exercised control over the company. Beginning in 1987 relations between TML and Eurotunnel deteriorated, with significant and increasingly public rows erupting over cost and programme management. With the completion of the Channel Tunnel TML ceased to exist. Organisation The participants were as follows: Channel Tunnel Group (later Translink Contractors) Balfour Beatty Costain Tarmac Construction Taylor Woodrow Construction Wimpey International Construction NatWest Midland Bank France Manche (later Transmanche Construction) Bouygues Dumez Société Auxiliaire d’Entreprise Société Générale d’Entreprises Spie Batignolles Crédit Lyonnais Banque Nationale de Paris Banque Indosuez References Channel Tunnel Construction and civil engineering companies of the United Kingdom Tunnelling organizations Construction and civil engineering companies established in 1985 British companies established in 1985 Construction and civil engineering companies disestablished in the 20th century
TransManche Link
[ "Engineering" ]
463
[ "Tunnelling organizations", "Civil engineering organizations" ]
1,569,600
https://en.wikipedia.org/wiki/Thermal%20expansion
Thermal expansion is the tendency of matter to increase in length, area, or volume, changing its size and density, in response to an increase in temperature (usually excluding phase transitions). Substances usually contract with decreasing temperature (thermal contraction), with rare exceptions within limited temperature ranges (negative thermal expansion). Temperature is a monotonic function of the average molecular kinetic energy of a substance. As the energy of the particles increases, they move faster, weakening the intermolecular forces between them and therefore expanding the substance. When a substance is heated, molecules begin to vibrate and move more, usually creating more distance between themselves. The relative expansion (also called strain) divided by the change in temperature is called the material's coefficient of linear thermal expansion and generally varies with temperature. Prediction If an equation of state is available, it can be used to predict the values of the thermal expansion at all the required temperatures and pressures, along with many other state functions. Contraction effects (negative expansion) A number of materials contract on heating within certain temperature ranges; this is usually called negative thermal expansion, rather than "thermal contraction". For example, the coefficient of thermal expansion of water drops to zero as it is cooled to about 4 °C and then becomes negative below this temperature; this means that water has a maximum density at this temperature, and this leads to bodies of water maintaining this temperature at their lower depths during extended periods of sub-zero weather. Other materials are also known to exhibit negative thermal expansion. Fairly pure silicon has a negative coefficient of thermal expansion over a limited range of low temperatures. ALLVAR Alloy 30, a titanium alloy, exhibits anisotropic negative thermal expansion across a wide range of temperatures. Factors Unlike gases or liquids, solid materials tend to keep their shape when undergoing thermal expansion. Thermal expansion generally decreases with increasing bond energy, which also has an effect on the melting point of solids, so high melting point materials are more likely to have lower thermal expansion. In general, liquids expand slightly more than solids. The thermal expansion of glasses is slightly higher compared to that of crystals. At the glass transition temperature, rearrangements that occur in an amorphous material lead to characteristic discontinuities of the coefficient of thermal expansion and the specific heat. These discontinuities allow detection of the glass transition temperature, where a supercooled liquid transforms to a glass. Absorption or desorption of water (or other solvents) can change the size of many common materials; many organic materials change size much more due to this effect than due to thermal expansion. Common plastics exposed to water can, in the long term, expand by many percent. Effect on density Thermal expansion changes the space between particles of a substance, which changes the volume of the substance while negligibly changing its mass (the negligible amount comes from mass–energy equivalence), thus changing its density, which has an effect on any buoyant forces acting on it. This plays a crucial role in convection of unevenly heated fluid masses, notably making thermal expansion partly responsible for wind and ocean currents.
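As a quick numerical illustration of the definitions above, the sketch below applies a typical handbook coefficient for aluminium (an assumed value of about 23 × 10^−6 per kelvin) to a 2 m bar and a 50 K temperature rise, and also shows the corresponding drop in density for an isotropic solid; all numbers are chosen purely for illustration:

alpha_L = 23e-6        # linear expansion coefficient of aluminium, 1/K (assumed handbook value)
L0 = 2.0               # original length, m
dT = 50.0              # temperature rise, K

dL = alpha_L * L0 * dT                        # change in length = coefficient * length * temperature change
print(f"length change: {dL * 1000:.2f} mm")   # about 2.30 mm

# Effect on density: for an isotropic solid the volume grows roughly by a factor
# (1 + 3*alpha_L*dT), so the density falls by about the same fraction.
rho0 = 2700.0          # kg/m^3, assumed room-temperature density of aluminium
rho = rho0 / (1 + 3 * alpha_L * dT)
print(f"density after heating: {rho:.1f} kg/m^3")   # slightly below 2700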
Coefficients The coefficient of thermal expansion describes how the size of an object changes with a change in temperature. Specifically, it measures the fractional change in size per degree change in temperature at a constant pressure, such that lower coefficients describe lower propensity for change in size. Several types of coefficients have been developed: volumetric, area, and linear. The choice of coefficient depends on the particular application and which dimensions are considered important. For solids, one might only be concerned with the change along a length, or over some area. The volumetric thermal expansion coefficient is the most basic thermal expansion coefficient, and the most relevant for fluids. In general, substances expand or contract when their temperature changes, with expansion or contraction occurring in all directions. Substances that expand at the same rate in every direction are called isotropic. For isotropic materials, the area and volumetric thermal expansion coefficient are, respectively, approximately twice and three times larger than the linear thermal expansion coefficient. In the general case of a gas, liquid, or solid, the volumetric coefficient of thermal expansion is given by The subscript "p" to the derivative indicates that the pressure is held constant during the expansion, and the subscript V stresses that it is the volumetric (not linear) expansion that enters this general definition. In the case of a gas, the fact that the pressure is held constant is important, because the volume of a gas will vary appreciably with pressure as well as temperature. For a gas of low density this can be seen from the ideal gas law. For various materials This section summarizes the coefficients for some common materials. For isotropic materials the coefficients linear thermal expansion α and volumetric thermal expansion αV are related by . For liquids usually the coefficient of volumetric expansion is listed and linear expansion is calculated here for comparison. For common materials like many metals and compounds, the thermal expansion coefficient is inversely proportional to the melting point. In particular, for metals the relation is: for halides and oxides In the table below, the range for α is from 10−7 K−1 for hard solids to 10−3 K−1 for organic liquids. The coefficient α varies with the temperature and some materials have a very high variation; see for example the variation vs. temperature of the volumetric coefficient for a semicrystalline polypropylene (PP) at different pressure, and the variation of the linear coefficient vs. temperature for some steel grades (from bottom to top: ferritic stainless steel, martensitic stainless steel, carbon steel, duplex stainless steel, austenitic steel). The highest linear coefficient in a solid has been reported for a Ti-Nb alloy. (The formula is usually used for solids.) In solids When calculating thermal expansion it is necessary to consider whether the body is free to expand or is constrained. If the body is free to expand, the expansion or strain resulting from an increase in temperature can be simply calculated by using the applicable coefficient of thermal expansion. If the body is constrained so that it cannot expand, then internal stress will be caused (or changed) by a change in temperature. 
This stress can be calculated by considering the strain that would occur if the body were free to expand and the stress required to reduce that strain to zero, through the stress/strain relationship characterised by the elastic or Young's modulus. In the special case of solid materials, external ambient pressure does not usually appreciably affect the size of an object and so it is not usually necessary to consider the effect of pressure changes. Common engineering solids usually have coefficients of thermal expansion that do not vary significantly over the range of temperatures where they are designed to be used, so where extremely high accuracy is not required, practical calculations can be based on a constant, average, value of the coefficient of expansion. Length Linear expansion means change in one dimension (length) as opposed to change in volume (volumetric expansion). To a first approximation, the change in length measurements of an object due to thermal expansion is related to temperature change by a coefficient of linear thermal expansion (CLTE). It is the fractional change in length per degree of temperature change. Assuming negligible effect of pressure, one may write: where is a particular length measurement and is the rate of change of that linear dimension per unit change in temperature. The change in the linear dimension can be estimated to be: This estimation works well as long as the linear-expansion coefficient does not change much over the change in temperature , and the fractional change in length is small . If either of these conditions does not hold, the exact differential equation (using ) must be integrated. Effects on strain For solid materials with a significant length, like rods or cables, an estimate of the amount of thermal expansion can be described by the material strain, given by and defined as: where is the length before the change of temperature and is the length after the change of temperature. For most solids, thermal expansion is proportional to the change in temperature: Thus, the change in either the strain or temperature can be estimated by: where is the difference of the temperature between the two recorded strains, measured in degrees Fahrenheit, degrees Rankine, degrees Celsius, or kelvin, and is the linear coefficient of thermal expansion in "per degree Fahrenheit", "per degree Rankine", "per degree Celsius", or "per kelvin", denoted by , , , or , respectively. In the field of continuum mechanics, thermal expansion and its effects are treated as eigenstrain and eigenstress. Area The area thermal expansion coefficient relates the change in a material's area dimensions to a change in temperature. It is the fractional change in area per degree of temperature change. Ignoring pressure, one may write: where is some area of interest on the object, and is the rate of change of that area per unit change in temperature. The change in the area can be estimated as: This equation works well as long as the area expansion coefficient does not change much over the change in temperature , and the fractional change in area is small . If either of these conditions does not hold, the equation must be integrated. Volume For a solid, one can ignore the effects of pressure on the material, and the volumetric (or cubical) thermal expansion coefficient can be written: where is the volume of the material, and is the rate of change of that volume with temperature. This means that the volume of a material changes by some fixed fractional amount. 
For example, a steel block with a volume of 1 cubic meter might expand to 1.002 cubic meters when the temperature is raised by 50 K. This is an expansion of 0.2%. If a block of steel has a volume of 2 cubic meters, then under the same conditions, it would expand to 2.004 cubic meters, again an expansion of 0.2%. The volumetric expansion coefficient would be 0.2% for 50 K, or 0.004% K−1. If the expansion coefficient is known, the change in volume can be calculated where is the fractional change in volume (e.g., 0.002) and is the change in temperature (50 °C). The above example assumes that the expansion coefficient did not change as the temperature changed and the increase in volume is small compared to the original volume. This is not always true, but for small changes in temperature, it is a good approximation. If the volumetric expansion coefficient does change appreciably with temperature, or the increase in volume is significant, then the above equation will have to be integrated: where is the volumetric expansion coefficient as a function of temperature T, and and are the initial and final temperatures respectively. Isotropic materials For isotropic materials the volumetric thermal expansion coefficient is three times the linear coefficient: This ratio arises because volume is composed of three mutually orthogonal directions. Thus, in an isotropic material, for small differential changes, one-third of the volumetric expansion is in a single axis. As an example, take a cube of steel that has sides of length . The original volume will be and the new volume, after a temperature increase, will be We can easily ignore the terms as ΔL is a small quantity which on squaring gets much smaller and on cubing gets smaller still. So The above approximation holds for small temperature and dimensional changes (that is, when and are small), but it does not hold if trying to go back and forth between volumetric and linear coefficients using larger values of . In this case, the third term (and sometimes even the fourth term) in the expression above must be taken into account. Similarly, the area thermal expansion coefficient is two times the linear coefficient: This ratio can be found in a way similar to that in the linear example above, noting that the area of a face on the cube is just . Also, the same considerations must be made when dealing with large values of . Put more simply, if the length of a cubic solid expands from 1.00 m to 1.01 m, then the area of one of its sides expands from 1.00 m2 to 1.02 m2 and its volume expands from 1.00 m3 to 1.03 m3. Anisotropic materials Materials with anisotropic structures, such as crystals (with less than cubic symmetry, for example martensitic phases) and many composites, will generally have different linear expansion coefficients in different directions. As a result, the total volumetric expansion is distributed unequally among the three axes. If the crystal symmetry is monoclinic or triclinic, even the angles between these axes are subject to thermal changes. In such cases it is necessary to treat the coefficient of thermal expansion as a tensor with up to six independent elements. A good way to determine the elements of the tensor is to study the expansion by x-ray powder diffraction. The thermal expansion coefficient tensor for the materials possessing cubic symmetry (for e.g. FCC, BCC) is isotropic. 
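The approximation that the volumetric coefficient is about three times the linear coefficient, discussed above, can be checked numerically. The sketch below compares the exact fractional volume change of a cube, (1 + αLΔT)^3 − 1, with the linearized value 3αLΔT, using an assumed steel-like coefficient; the last temperature step is deliberately unrealistic, included only to show how the higher-order terms grow:

alpha_L = 1.2e-5    # linear coefficient, 1/K (assumed, roughly steel-like)

for dT in (50.0, 500.0, 5000.0):
    exact = (1 + alpha_L * dT) ** 3 - 1      # exact fractional volume change of a cube
    approx = 3 * alpha_L * dT                # linearized estimate, alpha_V ~ 3 * alpha_L
    rel_err = (approx - exact) / exact
    print(f"dT = {dT:6.0f} K   exact = {exact:.6e}   3*alpha*dT = {approx:.6e}   error = {rel_err:.2%}")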
Temperature dependence Thermal expansion coefficients of solids usually show little dependence on temperature (except at very low temperatures) whereas liquids can expand at different rates at different temperatures. There are some exceptions: for example, cubic boron nitride exhibits significant variation of its thermal expansion coefficient over a broad range of temperatures. Another example is paraffin which in its solid form has a thermal expansion coefficient that is dependent on temperature. In gases Since gases fill the entirety of the container which they occupy, the volumetric thermal expansion coefficient at constant pressure, , is the only one of interest. For an ideal gas, a formula can be readily obtained by differentiation of the ideal gas law, . This yields where is the pressure, is the molar volume (, with the total number of moles of gas), is the absolute temperature and is equal to the gas constant. For an isobaric thermal expansion, , so that and the isobaric thermal expansion coefficient is: which is a strong function of temperature; doubling the temperature will halve the thermal expansion coefficient. Absolute zero computation From 1787 to 1802, it was determined by Jacques Charles (unpublished), John Dalton, and Joseph Louis Gay-Lussac that, at constant pressure, ideal gases expanded or contracted their volume linearly (Charles's law) by about 1/273 parts per degree Celsius of temperature's change up or down, between 0° and 100 °C. This suggested that the volume of a gas cooled at about −273 °C would reach zero. In October 1848, William Thomson, a 24 year old professor of Natural Philosophy at the University of Glasgow, published the paper On an Absolute Thermometric Scale. In a footnote Thomson calculated that "infinite cold" (absolute zero) was equivalent to −273 °C (he called the temperature in °C as the "temperature of the air thermometers" of the time). This value of "−273" was considered to be the temperature at which the ideal gas volume reaches zero. By considering a thermal expansion linear with temperature (i.e. a constant coefficient of thermal expansion), the value of absolute zero was linearly extrapolated as the negative reciprocal of 0.366/100 °C – the accepted average coefficient of thermal expansion of an ideal gas in the temperature interval 0–100 °C, giving a remarkable consistency to the currently accepted value of −273.15 °C. In liquids The thermal expansion of liquids is usually higher than in solids because the intermolecular forces present in liquids are relatively weak and its constituent molecules are more mobile. Unlike solids, liquids have no definite shape and they take the shape of the container. Consequently, liquids have no definite length and area, so linear and areal expansions of liquids only have significance in that they may be applied to topics such as thermometry and estimates of sea level rising due to global climate change. Sometimes, αL is still calculated from the experimental value of αV. In general, liquids expand on heating, except cold water; below 4 °C it contracts, leading to a negative thermal expansion coefficient. At higher temperatures it shows more typical behavior, with a positive thermal expansion coefficient. Apparent and absolute The expansion of liquids is usually measured in a container. When a liquid expands in a vessel, the vessel expands along with the liquid. Hence the observed increase in volume (as measured by the liquid level) is not the actual increase in its volume. 
The expansion of the liquid relative to the container is called its apparent expansion, while the actual expansion of the liquid is called real expansion or absolute expansion. The ratio of the apparent increase in volume of the liquid per unit rise of temperature to the original volume is called its coefficient of apparent expansion. The absolute expansion can be measured by a variety of techniques, including ultrasonic methods. Historically, this phenomenon complicated the experimental determination of thermal expansion coefficients of liquids, since a direct measurement of the change in height of a liquid column generated by thermal expansion is a measurement of the apparent expansion of the liquid. Thus the experiment simultaneously measures two coefficients of expansion, and measurement of the expansion of a liquid must account for the expansion of the container as well. For example, when a flask with a long narrow stem, containing enough liquid to partially fill the stem itself, is placed in a heat bath, the height of the liquid column in the stem will initially drop, followed immediately by a rise of that height until the whole system of flask, liquid and heat bath has warmed through. The initial drop in the height of the liquid column is not due to an initial contraction of the liquid, but rather to the expansion of the flask as it contacts the heat bath first. Soon after, the liquid in the flask is heated by the flask itself and begins to expand. Since liquids typically have a greater percent expansion than solids for the same temperature change, the expansion of the liquid in the flask eventually exceeds that of the flask, causing the level of liquid in the flask to rise. For small and equal rises in temperature, the increase in volume (real expansion) of a liquid is equal to the sum of the apparent increase in volume (apparent expansion) of the liquid and the increase in volume of the containing vessel. The absolute expansion of the liquid is the apparent expansion corrected for the expansion of the containing vessel. Examples and applications The expansion and contraction of materials must be considered when designing large structures, when using tape or chain to measure distances for land surveys, when designing molds for casting hot material, and in other engineering applications when large changes in dimension due to temperature are expected. Thermal expansion is also used in mechanical applications to fit parts over one another, e.g. a bushing can be fitted over a shaft by making its inner diameter slightly smaller than the diameter of the shaft, then heating it until it fits over the shaft, and allowing it to cool after it has been pushed over the shaft, thus achieving a 'shrink fit'. Induction shrink fitting is a common industrial method to pre-heat metal components to between 150 °C and 300 °C, thereby causing them to expand and allow for the insertion or removal of another component. There exist some alloys with a very small linear expansion coefficient, used in applications that demand very small changes in physical dimension over a range of temperatures. One of these is Invar 36, with a linear expansion coefficient of approximately 0.6 × 10^−6 K^−1 (0.6 ppm/K). These alloys are useful in aerospace applications where wide temperature swings may occur. Pullinger's apparatus is used to determine the linear expansion of a metallic rod in the laboratory. The apparatus consists of a metal cylinder closed at both ends (called a steam jacket). It is provided with an inlet and outlet for the steam.
The steam for heating the rod is supplied by a boiler which is connected by a rubber tube to the inlet. The center of the cylinder contains a hole to insert a thermometer. The rod under investigation is enclosed in a steam jacket. One of its ends is free, but the other end is pressed against a fixed screw. The position of the rod is determined by a micrometer screw gauge or spherometer. To determine the coefficient of linear thermal expansion of a metal, a pipe made of that metal is heated by passing steam through it. One end of the pipe is fixed securely and the other rests on a rotating shaft, the motion of which is indicated by a pointer. A suitable thermometer records the pipe's temperature. This enables calculation of the relative change in length per degree temperature change. The control of thermal expansion in brittle materials is a key concern for a wide range of reasons. For example, both glass and ceramics are brittle and uneven temperature causes uneven expansion which again causes thermal stress and this might lead to fracture. Ceramics need to be joined or work in concert with a wide range of materials and therefore their expansion must be matched to the application. Because glazes need to be firmly attached to the underlying porcelain (or other body type) their thermal expansion must be tuned to 'fit' the body so that crazing or shivering do not occur. Good example of products whose thermal expansion is the key to their success are CorningWare and the spark plug. The thermal expansion of ceramic bodies can be controlled by firing to create crystalline species that will influence the overall expansion of the material in the desired direction. In addition or instead the formulation of the body can employ materials delivering particles of the desired expansion to the matrix. The thermal expansion of glazes is controlled by their chemical composition and the firing schedule to which they were subjected. In most cases there are complex issues involved in controlling body and glaze expansion, so that adjusting for thermal expansion must be done with an eye to other properties that will be affected, and generally trade-offs are necessary. Thermal expansion can have a noticeable effect on gasoline stored in above-ground storage tanks, which can cause gasoline pumps to dispense gasoline which may be more compressed than gasoline held in underground storage tanks in winter, or less compressed than gasoline held in underground storage tanks in summer. Heat-induced expansion has to be taken into account in most areas of engineering. A few examples are: Metal-framed windows need rubber spacers. Rubber tires need to perform well over a range of temperatures, being passively heated or cooled by road surfaces and weather, and actively heated by mechanical flexing and friction. Metal hot water heating pipes should not be used in long straight lengths. Large structures such as railways and bridges need expansion joints in the structures to avoid sun kink. A gridiron pendulum uses an arrangement of different metals to maintain a more temperature stable pendulum length. A power line on a hot day is droopy, but on a cold day it is tight. This is because the metals expand under heat. Expansion joints absorb the thermal expansion in a piping system. Precision engineering nearly always requires the engineer to pay attention to the thermal expansion of the product. 
For example, when using a scanning electron microscope, small changes in temperature (such as 1 degree) can cause a sample to change its position relative to the focus point. Liquid thermometers contain a liquid (usually mercury or alcohol) in a tube, which constrains it to flow in only one direction when its volume expands due to changes in temperature. A bi-metal mechanical thermometer uses a bimetallic strip, which bends due to the differing thermal expansion of the two metals. See also References External links Glass Thermal Expansion Thermal expansion measurement, definitions, thermal expansion calculation from the glass composition Water thermal expansion calculator DoITPoMS Teaching and Learning Package on Thermal Expansion and the Bi-material Strip Engineering Toolbox – List of coefficients of Linear Expansion for some common materials Article on how αV is determined MatWeb: Free database of engineering properties for over 79,000 materials USA NIST Website – Temperature and Dimensional Measurement workshop Hyperphysics: Thermal expansion Understanding Thermal Expansion in Ceramic Glazes Thermal Expansion Calculators Thermal expansion via density calculator Thermodynamics Heat transfer Physical properties Building defects
Thermal expansion
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
4,905
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Thermodynamics", "Building defects", "Mechanical failure", "Physical properties", "Dynamical systems" ]
1,569,663
https://en.wikipedia.org/wiki/Leo%20Kadanoff
Leo Philip Kadanoff (January 14, 1937 – October 26, 2015) was an American physicist. He was a professor of physics (emeritus from 2004) at the University of Chicago and a former president of the American Physical Society (APS). He contributed to the fields of statistical physics, chaos theory, and theoretical condensed matter physics. Biography Kadanoff was raised in New York City. He received his undergraduate degree and doctorate in physics (1960) from Harvard University. After a post-doctorate at the Niels Bohr Institute in Copenhagen, he joined the physics faculty at the University of Illinois in 1965. Kadanoff's early research focused upon superconductivity. In the late 1960s, he studied the organization of matter in phase transitions. Kadanoff demonstrated that sudden changes in material properties (such as the magnetization of a magnet or the boiling of a fluid) could be understood in terms of scaling and universality. With his collaborators, he showed how all the experimental data then available for the changes, called second-order phase transitions, could be understood in terms of these two ideas. These same ideas have now been extended to apply to a broad range of scientific and engineering problems, and have found numerous and important applications in urban planning, computer science, hydrodynamics, biology, applied mathematics and geophysics. In recognition of these achievements, he won the Buckley Prize of the American Physical Society (1977), the Wolf Prize in Physics (1980), the 1989 Boltzmann Medal of the International Union of Pure and Applied Physics, and the 2006 Lorentz Medal. In 1969 he moved to Brown University. He exploited mathematical analogies between solid state physics and urban growth to shed insights into the latter field, so much so that he contributed substantially to the statewide planning program in Rhode Island. In 1978 he moved to the University of Chicago, where he became the John D. and Catherine T. MacArthur Distinguished Service Professor of Physics and Mathematics. Much of his work in the second half of his career involved contributions to chaos theory, in both mechanical and fluid systems. He was elected a Fellow of the American Academy of Arts and Sciences in 1982. He was one of the recipients of the 1999 National Medal of Science, awarded by President Clinton. He was a member of the National Academy of Sciences and of the American Philosophical Society as well as being a Fellow of the American Physical Society and of the American Association for the Advancement of Science. During the last decade, he has received the Quantrell Award (for excellence in teaching) from the University of Chicago, the Centennial Medal of Harvard University, the Lars Onsager Prize of the American Physical Society, and the Grande Medaille d'Or of the Académie des sciences de l'Institut de France. His textbook with Gordon Baym, Quantum Statistical Mechanics (), is a prominent text in the field and has been widely translated. With Leo Irakliotis, Kadanoff established the Center for Presentation of Science at the University of Chicago. In June 2013, it was stated that anonymous donors had provided a $3.5 million gift to establish the Leo Kadanoff Center for Theoretical Physics at the University of Chicago. He died after complications from an illness on October 26, 2015. In 2018 the American Physical Society established the Leo P. Kadanoff Prize in his honor. Publications (selection) "Scaling laws for Ising models near ", Physics 2(263), 1966. 
(The seminal paper for the development of renormalization group theory; see History of renormalization group theory.) "Operator Algebra and the Determination of Critical Indices", Phys. Rev. Lett. 23(1430), 1969. (The seminal paper for the development of conformal field theory; see History of conformal field theory.) References External links "Leo P. Kadanoff" at the University of Chicago "Publications of Leo P. Kadanoff" Video of Leo Kadanoff on the opening panel at the Quantum to Cosmos festival 1937 births 2015 deaths Fellows of the American Academy of Arts and Sciences Fellows of the American Physical Society Lorentz Medal winners Members of the United States National Academy of Sciences National Medal of Science laureates Oliver E. Buckley Condensed Matter Prize winners Wolf Prize in Physics laureates Jewish American physicists Harvard University alumni University of Chicago faculty 21st-century American physicists 20th-century American physicists Brown University faculty Members of the American Philosophical Society Presidents of the American Physical Society Statistical physicists
Leo Kadanoff
[ "Physics" ]
908
[ "Statistical physicists", "Statistical mechanics" ]
1,569,732
https://en.wikipedia.org/wiki/Entry%20point
In computer programming, an entry point is the place in a program where the execution of a program begins, and where the program has access to command line arguments. To start a program's execution, the loader or operating system passes control to its entry point. (During booting, the operating system itself is the program). This marks the transition from load time (and dynamic link time, if present) to run time. For some operating systems and programming languages, the entry point is in a runtime library, a set of support functions for the language. The library code initializes the program and then passes control to the program proper. In other cases, the program may initialize the runtime library itself. In simple systems, execution begins at the first statement, which is common in interpreted languages, simple executable formats, and boot loaders. In other cases, the entry point is at some other known memory address which can be an absolute address or relative address (offset). Alternatively, execution of a program can begin at a named point, either with a conventional name defined by the programming language or operating system or at a caller-specified name. In many C-family languages, this is a function called main; as a result, the entry point is often known as the main function. In JVM languages, such as Java, the entry point is a static method called main; in CLI languages such as C# the entry point is a static method named Main. Usage Entry points apply both to source code and to executable files. However, in day-to-day software development, programmers specify the entry points only in source code, which makes them much better known. Entry points in executable files depend on the application binary interface (ABI) of the actual operating system, and are generated by the compiler or linker (if not fixed by the ABI). Other linked object files may also have entry points, which are used later by the linker when generating entry points of an executable file. Entry points are capable of passing on command arguments, variables, or other information as a local variable used by the Main() method. This way, specific options may be set upon execution of the program, and then interpreted by the program. Many programs use this as an alternative way to configure different settings, or perform a set variety of actions using a single program. Contemporary In most of today's popular programming languages and operating systems, a computer program usually only has a single entry point. In C, C++, D, Zig, Rust and Kotlin programs this is a function named main; in Java it is a static method named main (although the class must be specified at the invocation time), and in C# it is a static method named Main. In many major operating systems, the standard executable format has a single entry point. In the Executable and Linkable Format (ELF), used in Unix and Unix-like systems such as Linux, the entry point is specified in the e_entry field of the ELF header. In the GNU Compiler Collection (gcc), the entry point used by the linker is the _start symbol. Similarly, in the Portable Executable format, used in Microsoft Windows, the entry point is specified by the AddressOfEntryPoint field, which is inherited from COFF. In COM files, the entry point is at the fixed offset of 0100h. One exception to the single-entry-point paradigm is Android. Android applications do not have a single entry point there is no special main function. 
Instead, they have essential components (activities and services) which the system can load and run as needed. An occasionally used technique is the fat binary, which consists of several executables for different targets packaged in a single file. Most commonly, this is implemented by a single overall entry point, which is compatible with all targets and branches to the target-specific entry point. Alternative techniques include storing separate executables in separate forks, each with its own entry point, which is then selected by the operating system. Historical Historically, and in some contemporary legacy systems, such as VMS and OS/400, computer programs have a multitude of entry points, each corresponding to the different functionalities of the program. The usual way to denote entry points, as used system-wide in VMS and in PL/I and MACRO programs, is to append them at the end of the name of the executable image, delimited by a dollar sign ($), e.g. directory.exe$make. The Apple I computer also used this to some degree. For example, an alternative entry point in Apple I's BASIC would keep the BASIC program useful when the reset button was accidentally pushed. Exit point In general, programs can exit at any time by returning to the operating system or crashing. Programs in interpreted languages return control to the interpreter, but programs in compiled languages must return to the operating system, otherwise the processor will simply continue executing beyond the end of the program, resulting in undefined behavior. Usually, there is not a single exit point specified in a program. However, in other cases runtimes ensure that programs always terminate in a structured way via a single exit point, which is guaranteed unless the runtime itself crashes; this allows cleanup code to be run, such as atexit handlers. This can be done by either requiring that programs terminate by returning from the main function, by calling a specific exit function, or by the runtime catching exceptions or operating system signals. Programming languages In many programming languages, the main function is where a program starts its execution. It enables high-level organization of the program's functionality, and typically has access to the command arguments given to the program when it was executed. The main function is generally the first programmer-written function that runs when a program starts, and is invoked directly from the system-specific initialization contained in the runtime environment (crt0 or equivalent). However, some languages can execute user-written functions before main runs, such as the constructors of C++ global objects. In other languages, notably many interpreted languages, execution begins at the first statement in the program. A non-exhaustive list of programming languages follows, describing their way of defining the main entry point: APL In APL, when a workspace is loaded, the contents of "quad LX" (latent expression) variable is interpreted as an APL expression and executed. C and C++ In C and C++, the function prototype of the main function must be equivalent to one of the following: int main(); int main(void); int main(int argc, char **argv); The main function is the entry point for application programs written in ISO-standard C or C++. Low-level system programming (such as for a bare-metal embedded system) might specify a different entry point (for example via a reset interrupt vector) using functionality not defined by the language standard. 
The parameters argc, argument count, and argv, argument vector, respectively give the number and values of the program's command-line arguments. The names of argc and argv may be any valid identifier, but it is common convention to use these names. Other platform-dependent formats are also allowed by the C and C++ standards, except that in C++ the return type must always be int; for example, Unix (though not POSIX.1) and Windows have a third argument giving the program's environment, otherwise accessible through getenv in stdlib.h: int main(int argc, char **argv, char **envp); Darwin-based operating systems, such as macOS, have a fourth parameter containing arbitrary OS-supplied information, such as the path to the executing binary: int main(int argc, char **argv, char **envp, char **apple); The value returned from the main function becomes the exit status of the process, though the C standard only ascribes specific meaning to two values: EXIT_SUCCESS (traditionally 0) and EXIT_FAILURE. The meaning of other possible return values is implementation-defined. In case a return value is not defined by the programmer, an implicit return 0; at the end of the main() function is inserted by the compiler; this behavior is required by the C++ standard. It is guaranteed that argc is non-negative and that argv[argc] is a null pointer. By convention, the command-line arguments specified by argc and argv include the name of the program as the first element if argc is greater than 0; if a user types a command of "rm file", the shell will initialise the rm process with argc = 2 and argv = {"rm", "file", NULL}. As argv[0] is the name that processes appear under in ps, top etc., some programs, such as daemons or those running within an interpreter or virtual machine (where argv[0] would be the name of the host executable), may choose to alter their argv to give a more descriptive argv[0], usually by means of the exec system call. The main() function is special; normally every C and C++ program must define it exactly once. If declared, main() must be declared as if it has external linkage; it cannot be declared static or inline. In C++, main() must be in the global namespace (i.e. ::main), cannot be overloaded, and cannot be a member function, although the name is not otherwise reserved, and may be used for member functions, classes, enumerations, or non-member functions in other namespaces. In C++ (unlike C) main() cannot be called recursively and cannot have its address taken. C# When executing a program written in C#, the CLR searches for a static method marked with the .entrypoint IL directive, which takes either no arguments, or a single argument of type string[], and has a return type of void or int, and executes it. static void Main(); static void Main(string[] args); static int Main(); static int Main(string[] args); Command-line arguments are passed in args, similar to how it is done in Java. For versions of Main() returning an integer, similar to both C and C++, it is passed back to the environment as the exit status of the process. Since C#7.1 there are four more possible signatures of the entry point, which allow asynchronous execution in the Main() Method. static async Task Main() static async Task<int> Main() static async Task Main(string[]) static async Task<int> Main(string[]) The Task and Task<int> types are the asynchronous equivalents of void and int. async is required to allow the use of asynchrony (the await keyword) inside the method. 
Clean Clean is a functional programming language based on graph rewriting. The initial node is named Start and is of type *World -> *World if it changes the world or some fixed type if the program only prints the result after reducing Start. Start :: *World -> *World Start world = startIO ... Or even simpler Start :: String Start = "Hello, world!" One tells the compiler which option to use to generate the executable file. Common Lisp ANSI Common Lisp does not define a main function; instead, the code is read and evaluated from top to bottom in a source file. However, the following code will emulate a main function. (defun hello-main () (format t "Hello World!~%")) (hello-main) D In D, the function prototype of the main function looks like one of the following: void main(); void main(string[] args); int main(); int main(string[] args); Command-line arguments are passed in args, similar to how it is done in C# or Java. For versions of main() returning an integer, similar to both C and C++, it is passed back to the environment as the exit status of the process. Dart Dart is a general-purpose programming language that is often used for building web and mobile applications. Like many other programming languages, Dart has an entry point that serves as the starting point for a Dart program. The entry point is the first function that is executed when a program runs. In Dart, the entry point is typically a function named main . When a Dart program is run, the Dart runtime looks for a function named main and executes it. Any Dart code that is intended to be executed when the program starts should be included in the main function. Here is an example of a simple main function in Dart: void main() { print("Hello, world!"); } In this example, the main function simply prints the text Hello, world! to the console when the program is run. This code will be executed automatically when the Dart program is run. It is important to note that while the main function is the default entry point for a Dart program, it is possible to specify a different entry point if needed. This can be done using the @pragma("vm:entry-point") annotation in Dart. However, in most cases, the main function is the entry point that should be used for Dart programs. FORTRAN FORTRAN does not have a main subroutine or function. Instead a PROGRAM statement as the first line can be used to specify that a program unit is a main program, as shown below. The PROGRAM statement cannot be used for recursive calls. PROGRAM HELLO PRINT *, "Cint!" END PROGRAM HELLO Some versions of Fortran, such as those on the IBM System/360 and successor mainframes, do not support the PROGRAM statement. Many compilers from other software manufacturers will allow a fortran program to be compiled without a PROGRAM statement. In these cases, whatever module that has any non-comment statement where no SUBROUTINE, FUNCTION or BLOCK DATA statement occurs, is considered to be the Main program. GNAT Using GNAT, the programmer is not required to write a function named main; a source file containing a single subprogram can be compiled to an executable. The binder will however create a package ada_main, which will contain and export a C-style main function. Go In Go programming language, program execution starts with the main function of the package main package main import "fmt" func main() { fmt.Println("Hello, World!") } There is no way to access arguments or a return code outside of the standard library in Go. 
These can be accessed via os.Args and os.Exit respectively, both of which are included in the "os" package. Haskell A Haskell program must contain a name main bound to a value of type IO t, for some type t; which is usually IO (). IO is a monad, which organizes side-effects in terms of purely functional code. The main value represents the side-effects-ful computation done by the program. The result of the computation represented by main is discarded; that is why main usually has type IO (), which indicates that the type of the result of the computation is (), the unit type, which contains no information. main :: IO () main = putStrLn "Hello, World!" Command line arguments are not given to main; they must be fetched using another IO action, such as System.Environment.getArgs. Java Java programs start executing at the main method of a class, which has one of the following method headings: public static void main(String[] args) public static void main(String... args) public static void main(String args[]) void main() Command-line arguments are passed in args. As in C and C++, the name "main()" is special. Java's main methods do not return a value directly, but one can be passed by using the System.exit() method. Unlike C, the name of the program is not included in args, because it is the name of the class that contains the main method, so it is already known. Also unlike C, the number of arguments need not be included, since arrays in Java have a field that keeps track of how many elements there are. The main function must be included within a class. This is because in Java everything has to be contained within a class. For instance, a hello world program in Java may look like: public class HelloWorld { public static void main(String[] args) { System.out.println("Hello, world!"); } } To run this program, one must call java HelloWorld in the directory where the compiled class file HelloWorld.class) exists. Alternatively, executable JAR files use a manifest file to specify the entry point in a manner that is filesystem-independent from the user's perspective. LOGO In FMSLogo, the procedures when loaded do not execute. To make them execute, it is necessary to use this code: to procname ... ; Startup commands (such as print [Welcome]) end make "startup [procname] The variable startup is used for the startup list of actions, but the convention is that this calls a procedure that runs the actions. That procedure may be of any name. OCaml OCaml has no main function. Programs are evaluated from top to bottom. Command-line arguments are available in an array named Sys.argv and the exit status is 0 by default. Example: print_endline "Hello World" Pascal In Pascal, the main procedure is the only unnamed block in the program. Because Pascal programs define procedures and functions in a more rigorous bottom-up order than C, C++ or Java programs, the main procedure is usually the last block in the program. Pascal does not have a special meaning for the name "main" or any similar name. program Hello(Output); begin writeln('Hello, world!'); end. Command-line arguments are counted in ParamCount and accessible as strings by ParamStr(n), with n between 0 and ParamCount. Versions of Pascal that support units or modules may also contain an unnamed block in each, which is used to initialize the module. These blocks are executed before the main program entry point is called. Perl In Perl, there is no main function. 
Statements are executed from top to bottom, although statements in a BEGIN block are executed before normal statements. Command-line arguments are available in the special array @ARGV. Unlike C, @ARGV does not contain the name of the program, which is $0. PHP PHP does not have a "main" function. Starting from the first line of a PHP script, any code not encapsulated by a function header is executed as soon as it is seen. Pike In Pike syntax is similar to that of C and C++. The execution begins at main. The "argc" variable keeps the number of arguments passed to the program. The "argv" variable holds the value associated with the arguments passed to the program. Example: int main(int argc, array(string) argv) Python Python programs are evaluated top-to-bottom, as is usual in scripting languages: the entry point is the start of the source code. Since definitions must precede use, programs are typically structured with definitions at the top and the code to execute at the bottom (unindented), similar to code for a one-pass compiler, such as in Pascal. Alternatively, a program can be structured with an explicit main function containing the code to be executed when a program is executed directly, but which can also be invoked by importing the program as a module and calling the function. This can be done by the following idiom, which relies on the internal variable __name__ being set to __main__ when a program is executed, but not when it is imported as a module (in which case it is instead set to the module name); there are many variants of this structure: import sys def main(argv): n = int(argv[1]) print(n + 1) if __name__ == '__main__': sys.exit(main(sys.argv)) In this idiom, the call to the named entry point main is explicit, and the interaction with the operating system (receiving the arguments, calling system exit) are done explicitly by library calls, which are ultimately handled by the Python runtime. This contrasts with C, where these are done implicitly by the runtime, based on convention. QB64 The QB64 language has no main function, the code that is not within a function, or subroutine is executed first, from top to bottom: print "Hello World! a ="; a = getInteger(1.8d): print a function getInteger(n as double) getInteger = int(n) end function Command line arguments (if any) can be read using the function: dim shared commandline as string commandline = COMMAND$ 'Several space-separated command line arguments can be read using COMMAND$(n) commandline1 = COMMAND$(2) Ruby In Ruby, there is no distinct main function. Instead, code written outside of any class .. end or module .. end scope is executed in the context of a special "main" object. This object can be accessed using self: irb(main):001:0> self => main It has the following properties: irb(main):002:0> self.class => Object irb(main):003:0> self.class.ancestors => [Object, Kernel, BasicObject] Methods defined outside of a class or module scope are defined as private methods of the "main" object. 
Since the class of "main" is Object, such methods become private methods of almost every object: irb(main):004:0> def foo irb(main):005:1> 42 irb(main):006:1> end => nil irb(main):007:0> foo => 42 irb(main):008:0> [].foo NoMethodError: private method `foo' called for []:Array from (irb):8 from /usr/bin/irb:12:in `<main>' irb(main):009:0> false.foo NoMethodError: private method `foo' called for false:FalseClass from (irb):9 from /usr/bin/irb:12:in `<main>' The number and values of command-line arguments can be determined using the ARGV constant array: $ irb /dev/tty foo bar tty(main):001:0> ARGV => ["foo", "bar"] tty(main):002:0> ARGV.size => 2 The first element of ARGV, ARGV[0], contains the first command-line argument, not the name of the program executed, as in C. The name of the program is available using $0 or $PROGRAM_NAME. Similar to Python, one could use: if __FILE__ == $PROGRAM_NAME # Put "main" code here end to execute some code only if its file was specified in the ruby invocation. Rust In Rust, the entry point of a program is a function named main. Typically, this function is situated in a file called main.rs or lib.rs. // In `main.rs` fn main() { println!("Hello, World!"); } Additionally, as of Rust 1.26.0, the main function may return a Result: fn main() -> Result<(), std::io::Error> { println!("Hello, World!"); Ok(()) // Return a type `Result` of value `Ok` with the content `()`, i.e. an empty tuple. } Swift When run in an Xcode Playground, Swift behaves like a scripting language, executing statements from top to bottom; top-level code is allowed. // HelloWorld.playground let hello = "hello" let world = "world" let helloWorld = hello + " " + world print(helloWorld) // hello world Cocoa- and Cocoa Touch-based applications written in Swift are usually initialized with the @NSApplicationMain and @UIApplicationMain attributes, respectively. Those attributes are equivalent in their purpose to the main.m file in Objective-C projects: they implicitly declare the main function that calls UIApplicationMain(_:_:_:_:) which creates an instance of UIApplication. The following code is the default way to initialize a Cocoa Touch-based iOS app and declare its application delegate. // AppDelegate.swift import UIKit @UIApplicationMain class AppDelegate: UIResponder, UIApplicationDelegate { var window: UIWindow? func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool { return true } } Visual Basic In Visual Basic, when a project contains no forms, the startup object may be the Main() procedure. The Command$ function can be optionally used to access the argument portion of the command line used to launch the program: Sub Main() Debug.Print "Hello World!" MsgBox "Arguments if any are: " & Command$ End Sub Xojo In Xojo, there are two different project types, each with a different main entry point. Desktop (GUI) applications start with the App.Open event of the project's Application object. Console applications start with the App.Run event of the project's ConsoleApplication object. In both instances, the main function is automatically generated, and cannot be removed from the project. See also crt0, a set of execution startup routines linked into a C program Runtime system References External links Hello from a libc-free world! (Part 1), March 16, 2010 How main method works in Java Control flow Computer programming Software
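Relating back to the executable-format discussion earlier in this article: the entry-point address stored in an ELF header (the e_entry field) can be read directly from the first few bytes of the file. The following is a minimal sketch in Python; the helper name and command-line usage are illustrative only, and the offsets assume a well-formed header as laid out in the ELF specification:

import struct
import sys

def elf_entry_point(path):
    """Return the e_entry virtual address from an ELF header (hypothetical helper)."""
    with open(path, "rb") as f:
        header = f.read(32)
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    ei_class = header[4]                 # 1 = 32-bit, 2 = 64-bit
    ei_data = header[5]                  # 1 = little-endian, 2 = big-endian
    endian = "<" if ei_data == 1 else ">"
    if ei_class == 1:
        (entry,) = struct.unpack(endian + "I", header[24:28])   # 32-bit e_entry
    else:
        (entry,) = struct.unpack(endian + "Q", header[24:32])   # 64-bit e_entry
    return entry

if __name__ == "__main__":
    print(hex(elf_entry_point(sys.argv[1])))

For Portable Executable files the equivalent field is AddressOfEntryPoint in the optional header, which would need a different (and somewhat longer) parse.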
Entry point
[ "Technology", "Engineering" ]
5,711
[ "Computer programming", "Computers", "Software engineering", "Computer science", "nan", "Software" ]
1,569,737
https://en.wikipedia.org/wiki/Loudspeaker%20acoustics
Loudspeaker acoustics is a subfield of acoustical engineering concerned with the design of loudspeakers. It focuses on the reproduction of sound and the parameters involved in doing so in actual equipment. Engineers measure the performance of drivers and complete speaker systems to characterize their behavior, often in an anechoic chamber, outdoors, or using time-windowed measurement systems, all to avoid including room effects (e.g., reverberation) in the measurements. Designers use models (from electrical filter theory) to predict the performance of drive units in different enclosures, now almost always based on the work of A N Thiele and Richard Small. Important driver characteristics are: Frequency response Off-axis response dispersion pattern, lobing Sensitivity (dB SPL for 1 watt input) Maximum power handling Non-linear distortion Colouration (i.e., more or less, delayed resonance). It is the performance of a loudspeaker/listening room combination that really matters, as the two interact in multiple ways. There are two approaches to high-quality reproduction. One ensures the listening room is reasonably 'alive' with reverberant sound at all frequencies, in which case the speakers should ideally have equal dispersion at all frequencies in order to equally excite the reverberant fields created by reflections off room surfaces. The other attempts to arrange the listening room to be 'dead' acoustically, so that listeners hear mostly direct sound from the speakers; in that case the dispersion of the speakers need only be sufficient to cover the listening positions. A dead or inert acoustic may be best, especially if properly filled with 'surround' reproduction, so that the reverberant field of the original space is reproduced realistically. This is currently quite hard to achieve, and so the ideal loudspeaker systems for stereo reproduction would have a uniform dispersion at all frequencies. Listening to sound in an anechoic "dead" room is quite different from listening in a conventional room, and, while revealing about loudspeaker behaviour, it has an unnatural sonic character that some listeners find uncomfortable. Conventional stereo reproduction is more natural if the listening environment has some acoustically reflective surfaces. It is in large part the directional properties of speaker systems, which vary with frequency, that make them sound different, even when they measure similarly well on-axis. Acoustical engineering in this instance is concerned with adapting these variations to each other. Notable experts In the 1930s, one of the leading experts on loudspeaker acoustics was N. W. McLachlan, author of Loud Speakers: Theory, Performance, Testing and Design. See also Audio quality measurement Acoustic lobing Loudspeaker time alignment Digital room correction Directional Sound Impulse response Loudspeaker Loudspeaker measurement MLSSA Sound quality Spectrogram References External links Conversion of sensitivity in dB per watt and meter to energy efficiency in percent of passive loudspeakers Acoustics Loudspeaker technology Sound
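As a rough illustration of how the sensitivity figure mentioned above relates to playback level, the following Python sketch estimates free-field sound pressure level from sensitivity (dB SPL at 1 W and 1 m), amplifier power, and listening distance. It uses the standard point-source approximation, ignores room reflections, and the speaker and listening figures are purely hypothetical examples, not data from any particular product.

import math

def estimated_spl(sensitivity_db, power_w, distance_m):
    # sensitivity_db: dB SPL produced at 1 m for 1 W input
    # power_w: electrical power actually delivered, in watts
    # distance_m: listening distance in metres
    power_gain = 10 * math.log10(power_w / 1.0)        # +10 dB for each tenfold power increase
    distance_loss = 20 * math.log10(distance_m / 1.0)  # inverse-square law: -6 dB per doubling of distance
    return sensitivity_db + power_gain - distance_loss

# Hypothetical speaker: 87 dB SPL (1 W / 1 m), driven with 50 W, heard at 3 m
print(round(estimated_spl(87, 50, 3), 1), "dB SPL")    # about 94.4 dB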
Loudspeaker acoustics
[ "Physics" ]
602
[ "Classical mechanics", "Acoustics" ]
1,569,785
https://en.wikipedia.org/wiki/Arcadia%20%28utopia%29
Arcadia () refers to a vision of pastoralism and harmony with nature. The term is derived from the Greek province of the same name which dates to antiquity; the province's mountainous topography and sparse population of pastoralists later caused the word Arcadia to develop into a poetic byword for an idyllic vision of unspoiled wilderness. Arcadia is a poetic term associated with bountiful natural splendor and harmony. The 'Garden' is often inhabited by shepherds. The concept also figures in Renaissance mythology. Although commonly thought of as being in line with Utopian ideals, Arcadia differs from that tradition in that it is more often specifically regarded as unattainable. Furthermore, it is seen as a lost, Edenic form of life, contrasting to the progressive nature of Utopian desires. The inhabitants were often regarded as having continued to live after the manner of the Golden Age, without the pride and avarice that corrupted other regions. It is also sometimes referred to in English poetry as Arcady. The inhabitants of this region bear an obvious connection to the figure of the noble savage, both being regarded as living close to nature, uncorrupted by civilization, and virtuous. In antiquity According to Greek mythology, Arcadia of Peloponnesus was the domain of Pan, a virgin wilderness home to the god of the forest and his court of dryads, nymphs and other spirits of nature. It was one version of paradise, though only in the sense of being the abode of supernatural entities, not an afterlife for deceased mortals. In the 3rd century BCE the Greek poet Theocritus wrote idealised views of the lives of peasants in Arcadia for his fellow educated inhabitants of the squalid and disease-ridden city of Alexandria. Greek mythology and the poetry of Theocritus inspired the Roman poet Virgil to write his Eclogues, a series of poems with references to Arcadia as the home of Pan, pipes and singing. In the Renaissance Arcadia has remained a popular artistic subject since antiquity, both in visual arts and literature. As Renaissance artists turned to classical antiquity for inspiration, artistic references to Arcadia underwent a revival. Images of beautiful nymphs frolicking in lush forests have been a frequent source of inspiration for painters and sculptors. Because of the influence of Virgil in medieval European literature, e. g. in Divine Comedy, Arcadia became a symbol of pastoral simplicity. European Renaissance writers (for instance, the Spanish poet Garcilaso de la Vega) often revisited the theme, and the name came to apply to any idyllic location or paradise. Of particular note is Et in Arcadia Ego by Nicolas Poussin. In 1502 Jacopo Sannazaro published his long poem Arcadia that fixed the Early Modern perception of Arcadia as a lost world of idyllic bliss, remembered in regretful dirges. In the 1580s Sir Philip Sidney circulated copies of his influential heroic romance poem The Countess of Pembroke's Arcadia, which established Arcadia as an icon of the Renaissance; although the story is plentifully supplied with shepherds and other pastoral characters, the primary characters are all royal visitors of the countryside. In 1598 the Spanish playwright and poet Lope de Vega published Arcadia: Prose and Verse, which was a bestseller at the time. Though depicted as contemporary, this pastoral form is often connected with the Golden Age. 
It may be suggested that its inhabitants have merely continued to live as persons did in the Golden Age, and all other nations have less pleasant lives because they have allowed themselves to depart from original simplicity. Acadia The 16th-century Italian explorer Giovanni da Verrazzano applied the name "Arcadia" to the entire North American Atlantic coast north of Virginia. In time, this mutated to Acadia. The Dictionary of Canadian Biography says: "Arcadia, the name Verrazzano gave to Maryland or Virginia 'on account of the beauty of the trees', made its first cartographical appearance in the 1548 Gastaldo map and is the only name on that map to survive in Canadian usage. . . . In the 17th century Champlain fixed its present orthography, with the 'r' omitted, and Ganong has shown its gradual progress northwards, in a succession of maps, to its resting place in the Atlantic Provinces". Revival of Mi'kmaq language has provided strong reason to believe that Verrazzano was informed by the name the Mi'kmaq gave to this place. The name Acadie may be derived from the Mi'kmaq, because in their language the word "cadie" means "place of abundance" and can be found in names such as "Tracadie" and "Shubenacadie". In 19th-century art In 1848, Judge Samuel Treat of St. Louis described the life of the early settlers in the Midwest with the sentence "Each family produced whatever was necessary for its own consumption, and lived in almost Arcadian simplicity." Dramatist W. S. Gilbert used the concept of Arcadia in his musicals Happy Arcadia (1872) and Iolanthe (1882). Around 1880, the German painter Wilhelm von Kaulbach produced an etching, named "Faust und Helena in Arkadien". Faust and Helena are shown in the Arcadian grove, at the place of cheerful poetry, where they produced a son, Euphorion. He represents the spirit of antiquity married to the Nordic-German spirit, as an allegory of German-Greek poetry. The American painter Thomas Eakins produced a series of Arcadian works in the 1880s, including the painting "In Arcadia", described as an "unusual venture into mythology, tackled using the most modern of methods: the camera", and a relief, with nearly 20 sculptures, paintings and photographs connected with these works. The atmosphere of the relief has been described as a "vespertinal mixture of sadness and tranquility", a "sylvan realm far removed from the realities in 1883 Philadelphia". New York magazine critic Mark Stevens wrote "His [Eakins] joy in the natural body rarely made its way into his major paintings, perhaps because the subject was so personally complex for him. Only in his great "Swimming", which shows naked young men at a swimming hole, did he create an American Arcadia." Eakins' student Thomas Pollock Anshutz (1851-1912) had a long preoccupation with painting "Arcadian subjects". In popular culture One of the most popular Edwardian musical comedies is The Arcadians (1909). Pastoral science fiction Pastoral science fiction is a subgenre of science fiction which uses bucolic, rural settings, like other forms of pastoral literature. Since it is a subgenre of science fiction, authors may set stories either on Earth or another habitable planet or moon, sometimes including a terraformed planet or moon. Unlike most genres of science fiction, pastoral science fiction works downplay the role of futuristic technologies.
The pioneer is author Clifford Simak (1904–1988), a science fiction Grand Master whose output included stories written in the 1950s and 1960s about rural people who have contact with extraterrestrial beings who hide their alien identity. Pastoral science fiction stories typically show a reverence for the land, its life-giving food harvests, the cycle of the seasons, and the role of the community. While fertile agrarian environments on Earth or Earth-like planets are common settings, some works may be set in ocean or desert planets or habitable moons. The rural dwellers, such as farmers and small-townspeople, are depicted sympathetically, albeit with the tendency to portray them as conservative and suspicious of change. The simple, peaceful rural life is often contrasted with the negative aspects of noisy, dirty, fast-paced cities. Some works take a Luddite tone, criticizing mechanization and industrialization and showing the ills of urbanization and over-reliance on advanced technologies. See also Acadia Arcadia (region of Greece) Et in Arcadia ego (Guercino), painting by Italian artist Giovanni Francesco Barbieri Garden of Eden Locus amoenus Millennialism Neverland Olam Haba Otherworld Notes References External links Net in Arcadia Virtual Museum of Contemporary Classicism Greek mythology Mythical utopias Mythological kingdoms, empires, and countries Renaissance art Conceptions of heaven Utopia Visual motifs
Arcadia (utopia)
[ "Mathematics" ]
1,687
[ "Symbols", "Visual motifs" ]
1,569,856
https://en.wikipedia.org/wiki/Gross%20margin
Gross margin, or gross profit margin, is the difference between revenue and cost of goods sold (COGS), divided by revenue. Gross margin is expressed as a percentage. Generally, it is calculated as the selling price of an item, less the cost of goods sold (e.g., production or acquisition costs, not including indirect fixed costs like office expenses, rent, or administrative costs), then divided by the same selling price. "Gross margin" is often used interchangeably with "gross profit"; however, the terms are different: "gross profit" is technically an absolute monetary amount, and "gross margin" is technically a percentage or ratio. Gross margin is a kind of profit margin, specifically a form of profit divided by net revenue, e.g., gross (profit) margin, operating (profit) margin, net (profit) margin, etc. Purpose The purpose of calculating margins is "to determine the value of incremental sales, and to guide pricing and promotion decision." "Margin on sales represents a key factor behind many of the most fundamental business considerations, including budgets and forecasts. All managers should, and generally do, know their approximate business margins. Managers differ widely, however, in the assumptions they use in calculating margins and in the ways they analyze and communicate these important figures." Percentage margins and unit margins Gross margin can be expressed as a percentage or in total financial terms. If the latter, it can be reported on a per-unit basis or on a per-period basis for a business. "Margin (on sales) is the difference between selling price and cost. This difference is typically expressed either as a percentage of selling price or on a per-unit basis. Managers need to know margins for almost all marketing decisions. Margins represent a key factor in pricing, return on marketing spending, earnings forecasts, and analyses of customer profitability." In a survey of nearly 200 senior marketing managers, 78 percent responded that they found the "margin %" metric very useful while 65 percent found "unit margin" very useful. "A fundamental variation in the way people talk about margins lies in the difference between percentage margins and unit margins on sales. The difference is easy to reconcile, and managers should be able to switch back and forth between the two." Definition of "Unit" "Every business has its own notion of a 'unit,' ranging from a ton of margarine, to 64 ounces of cola, to a bucket of plaster. Many industries work with multiple units and calculate margin accordingly... Marketers must be prepared to shift between varying perspectives with little effort because decisions can be rounded in any of these perspectives." Investopedia defines "gross margin" as: Gross margin (%) = (Revenue − Cost of goods sold) / Revenue × 100. In contrast, "gross profit" is defined as: Gross profit = Revenue − Cost of goods sold, or as the ratio of gross profit to revenue, usually as a percentage: Gross profit margin (%) = Gross profit / Revenue × 100. Cost of sales, also denominated "cost of goods sold" (COGS), includes variable costs and fixed costs directly related to the sale, e.g., material costs, labor, supplier profit, shipping-in costs (cost of transporting the product to the point of sale, as opposed to shipping-out costs which are not included in COGS), etc. It excludes indirect fixed costs, e.g., office expenses, rent, and administrative costs. Higher gross margins for a manufacturer indicate greater efficiency in turning raw materials into income. For a retailer it would be the difference between its markup and the wholesale price.
Larger gross margins are generally considered ideal for most businesses, with the exception of discount retailers who instead rely on operational efficiency and strategic financing to remain competitive with businesses that have lower margins. Two related metrics are unit margin and margin percent: "Percentage margins can also be calculated using total sales revenue and total costs. When working with either percentage or unit margins, marketers can perform a simple check by verifying that the individual parts sum to the total." To verify a unit margin ($): Selling price per unit = Unit margin + Cost per Unit To verify a margin (%): Cost as % of sales = 100% − Margin % "When considering multiple products with different revenues and costs, we can calculate overall margin (%) on either of two bases: Total revenue and total costs for all products, or the dollar-weighted average of the percentage margins of the different products." Use in sales Retailers can measure their profit by using two basic methods, namely markup and margin, both of which describe gross profit. Markup expresses profit as a percentage of the cost of the product to the retailer. Margin expresses profit as a percentage of the selling price of the product that the retailer determines. These methods produce different percentages, yet both percentages are valid descriptions of the profit. It is important to specify which method is used when referring to a retailer's profit as a percentage. Some retailers use margins because profits are easily calculated from the total of sales. If margin is 30%, then 30% of the total of sales is the profit. If markup is 30%, the percentage of daily sales that are profit will not be the same percentage. Some retailers use markups because it is easier to calculate a sales price from a cost. If markup is 40%, then sales price will be 40% more than the cost of the item. If margin is 40%, then sales price will not be equal to 40% over cost; in fact, it will be approximately 67% more than the cost of the item. Markup The equation for calculating the monetary value of gross margin is: Gross margin = Net sales − Cost of goods sold. A simple way to keep markup and gross margin factors straight is to remember that: Percent of markup is 100 times the price difference divided by the cost. Percent of gross margin is 100 times the price difference divided by the selling price. Gross margin (as a percentage of revenue) Most people find it easier to work with gross margin because it directly tells you how much of the sales revenue, or price, is profit: If an item is sold for twice what it costs to produce, the price includes a 100% markup, which represents a 50% gross margin. Gross margin is just the percentage of the selling price that is profit. In this case, half of the selling price is profit. In a more complex example, if an item costs $204 to produce and is sold for a price of $340, the price includes a 67% markup ($136) which represents a 40% gross margin. This means that 40% of the $340 price is profit. Again, gross margin is just the direct percentage of profit in the sale price. In accounting, the gross margin refers to sales minus cost of goods sold. It is not necessarily profit as other expenses such as sales, administrative, and financial costs must be deducted. A higher gross margin indicates that a company is reducing its cost of production or passing its costs on to customers. The higher the ratio, all other things being equal, the better for the retailer.
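To make the markup/margin distinction above concrete, here is a minimal Python sketch; the cost and price figures are only illustrative, not taken from any real business.

def gross_margin_pct(price, cost):
    # Gross margin: profit as a percentage of the selling price
    return (price - cost) / price * 100

def markup_pct(price, cost):
    # Markup: profit as a percentage of the cost
    return (price - cost) / cost * 100

# Illustrative figures only
for cost, price in [(100, 200), (204, 340)]:
    print(f"cost {cost}, price {price}: markup {markup_pct(price, cost):.1f}%, margin {gross_margin_pct(price, cost):.1f}%")
# cost 100, price 200: markup 100.0%, margin 50.0%
# cost 204, price 340: markup 66.7%, margin 40.0%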
Converting between gross margin and markup (gross profit) Converting markup to gross margin: Gross margin = Markup / (1 + Markup). Examples: Markup = 100% = 1, so Gross margin = 1 / (1 + 1) = 50%; Markup = 66.7% = 0.667, so Gross margin = 0.667 / 1.667 = 40%. Converting gross margin to markup: Markup = Gross margin / (1 − Gross margin). Examples: Gross margin = 50% = 0.5, so Markup = 0.5 / 0.5 = 100%; Gross margin = 40% = 0.4, so Markup = 0.4 / 0.6 = 66.7%. Using gross margin to calculate selling price Given the cost of an item, one can compute the selling price required to achieve a specific gross margin: Selling price = Cost / (1 − Gross margin). For example, if your product costs $100 and the required gross margin is 40%, then Selling price = $100 / (1 − 0.4) = $166.67. Gross margin tools to measure retail performance Some of the tools that are useful in retail analysis are GMROII, GMROS and GMROL. GMROII: Gross Margin Return On Inventory Investment GMROS: Gross Margin Return On Space GMROL: Gross Margin Return On Labor Differences between industries In some industries, like clothing for example, profit margins are expected to be near the 40% mark, as the goods need to be bought from suppliers at a certain rate before they are resold. In other industries such as software product development, the gross profit margin can be higher than 80% in many cases. In the agriculture industry, particularly within the European Union, Standard Gross Margin is used to assess farm profitability. References "Relationship between Markup and Gross Margin" Accounting terminology Corporate finance Financial ratios Management accounting Profit
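The conversion identities and the required-selling-price calculation described above can be checked with a short script; this is a sketch using the same example values as in the text.

def margin_from_markup(markup):
    # Convert markup (a fraction of cost) to gross margin (a fraction of price)
    return markup / (1 + markup)

def markup_from_margin(margin):
    # Convert gross margin (a fraction of price) to markup (a fraction of cost)
    return margin / (1 - margin)

def price_for_margin(cost, margin):
    # Selling price needed to achieve a target gross margin
    return cost / (1 - margin)

print(margin_from_markup(1.0))               # 0.5: a 100% markup is a 50% margin
print(round(markup_from_margin(0.4), 3))     # 0.667: a 40% margin is about a 66.7% markup
print(round(price_for_margin(100, 0.4), 2))  # 166.67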
Gross margin
[ "Mathematics" ]
1,691
[ "Financial ratios", "Quantity", "Metrics" ]
1,569,872
https://en.wikipedia.org/wiki/List%20of%20countries%20by%20total%20fertility%20rate
This is a list of all sovereign states and dependencies by total fertility rate (TFR): the expected number of children born per woman in her child-bearing years. Methodology The first lists show the most recent year where there is published total fertility rate (TFR) data ranked by sovereign states and dependencies, and are ordered by organization type – intergovernmental, governmental, or non-governmental organization that searched, organized, and published the data. Countries are ranked by most recent years’ lists of the following types: International organizations’ ranking lists The United Nations Population Fund ranking is based on the data for 2024 published online. The United Nations Population Fund (formerly the United Nations Fund for Population Activities - UNFPA) is an UN agency aimed at improving reproductive and maternal health worldwide. This agency collects and analyses information on demography issues based on its own work and national sources. The World Bank ranking list is based on the data for the year 2020 published online. The World Bank is a United Nations international financial institution, a component of the World Bank Group, and a member of the United Nations Development Group, but it also collects and analyses information on demography issues based on international and national sources: (1) United Nations Population Division: World Population Prospects, (2) United Nations Statistics Division: Population and Vital Statistics Report (various years), (3) Census reports and other statistical publications from national statistical offices, (4) Eurostat: Demographic Statistics, (5) Secretariat of the Pacific Community: Statistics and Demography Programme, and (6) U.S. Census Bureau International Database. Note: Sometimes the World Bank changes its figures of fertility rates for a certain year due to more accurate and updated information from sources. Because of that, sometimes it is necessary to update World Bank figures for fertility rates more than once for the same year. Governmental organizations ranking lists The CIA ranking list is sourced from the CIA World Factbook for the most recent year unless otherwise specified. Sovereign states and countries are ranked. Some countries might not be listed because they are not fully recognized as countries at the time of this census. The INED - Institut National d'Études Démographiques is based on the online publication Population & Sociétés - Tous les pays du monde (2019), number 569, September 2019. Non-governmental organizations ranking lists The Population Reference Bureau (PRB) ranking list is based on the data of the 2024 World Population Data Sheet published online. The PRB is a private, nonprofit organization which informs people around the world about population, health and the environment for research or academic purposes. It was founded in 1929. World Population Data Sheets are double-sided wallcharts (now published online) that present detailed information on demographic, health and environment indicators for more than 200 countries. The Our World in Data (OWID) Country ranking 2019 list is sourced and based on the OWID website (on the clickable map and quoted sources). OWID is an online publication that presents empirical research and data that show how living conditions around the world are changing. The aim is to show how the world is changing and why. The publication is developed at the University of Oxford and authored by social historian and development economist Max Roser. 
Notes: 1- Changes in figures of fertility rates by country from one year to another do not always reflect an actual increase or decrease of fertility rates in a certain country, but instead reflect a change made due to more accurate and updated information from sources. 2- Figures of fertility rates by country and their ranking are based on single referenced sources, from organizations that investigate demographic issues. In several instances, they do not correspond with other sources, such as other organizations and sources that are referenced in the individual demographics by country, which can be accessed by clicking on the names of the countries. These differences can be due to several factors, including primary sources, data quality, and methodology. Replacement rates Replacement fertility is the total fertility rate at which women give birth to enough babies to sustain population levels, assuming that mortality rates remain constant and net migration is zero. If replacement level fertility is sustained over a sufficiently long period, each generation will exactly replace itself. The replacement fertility rate is 2.1 births per female for most developed countries (in the United Kingdom, for example), but can be as high as 3.5 in undeveloped countries because of higher mortality rates, especially child mortality. The global average for the replacement total fertility rate, eventually leading to a stable global population, for the contemporary period, 2010–2015, is 2.3 children per female. Comparison ranking lists: The Our World in Data (OWID) Country ranking and comparison by TFR: 1950 and 2015 list is sourced and based on the OWID website (on the clickable map and quoted sources). Our World in Data (OWID) is an online publication that aims to show how and why the world is changing using empirical research and data. The publication is developed at the University of Oxford and authored by social historian and development economist Max Roser. The World Bank Country ranking and comparison by TFR: 1960 and 2015 list is sourced and based on the online published demographic data of the World Bank website (on the clickable map and quoted sources). The Population Reference Bureau (PRB) Country ranking and comparison by TFR: 1970 and 2013 list is sourced and based on the data of the 2014 World Population Data Sheet, which was published online. Forecast/prediction ranking lists: The UN ranking list is sourced from the United Nations World Population Prospects. Figures are from the 2015 revision of the United Nations World Population Prospects report, for the period 2015–2020, using the medium assumption. and from the 2019 revision United Nations World Population Prospects report, for the period 2020–2025, using the medium assumption. The United Nations Population Division, part of the DESA - Department of Economic and Social Affairs, ranking list is based on the forecast/prediction for the years 2015-2020 and 2020-2025. Only countries/territories with a population of 100,000 or more in 2019 are included. Rankings are based on the 2015–2020 and 2020-2025 figures. Country ranking by most recent year Country ranking by international organizations Note: (-) Data unavailable, inapplicable, not collected, or country or dependent territory not included. Sovereign states and dependent territories listed by alphabetical order, not ranked. Country ranking by governmental organizations Note: (-) Data unavailable, inapplicable, not collected, or country or dependent territory not included. 
Sovereign states and dependent territories listed by alphabetical order, not ranked. Country ranking by non-governmental organizations Note: (-) Data unavailable, inapplicable, not collected, or country or dependent territory not included. Sovereign states and dependent territories listed by alphabetical order, not ranked. Country ranking and comparison of TFR by year 1950 and 2015 Notes: (→) Country that changed name and flag, dependent territory that is now an independent country (sovereign state) from another current or extinct (dissolved) state or empire, former dependent territory from a sovereign state or empire that was included in another sovereign state. (-) Data unavailable, inapplicable, not collected, or country or dependent territory not included. Sovereign states and dependent territories listed by alphabetical order, not ranked. 1960 and 2015 Notes: (→) Country that changed name and flag, dependent territory that is now an independent country (sovereign state) from another current or extinct (dissolved) state or empire, former dependent territory from a sovereign state or empire that was included in another sovereign state. (-) Data unavailable, inapplicable, not collected, or country or dependent territory not included. Sovereign states and dependent territories listed by alphabetical order, not ranked. 1970 and 2014 Notes: (→) Country that changed name and flag, dependent territory that is now an independent country (sovereign state) from another current or extinct (dissolved) state or empire, former dependent territory from a sovereign state or empire that was included in another sovereign state. (-) Data unavailable, inapplicable, not collected, or country or dependent territory not included. Sovereign states and dependent territories listed by alphabetical order, not ranked. Country ranking by TFR forecast Note: (-) Data unavailable, inapplicable, not collected, or country or dependent territory not included. Sovereign states and dependent territories listed by alphabetical order, not ranked. See also Total fertility rate List of countries by past fertility rate List of countries by number of births List of countries by birth rate List of countries by net reproduction rate List of people with the most children List of population concern organizations Population growth Sub-replacement fertility Fertility and intelligence Case studies: Ageing of Europe Aging of Japan Christian population growth Muslim population growth References Fertility rate Fertility rate Human geography Fertility Demographic economics
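As a reminder of what the single TFR figure in these rankings summarises, the sketch below computes a total fertility rate from age-specific fertility rates in the usual way (the sum of the rates multiplied by the width of each age group). The rates shown are invented purely for illustration and are not taken from any of the sources listed above.

# Age-specific fertility rates: births per woman per year, by five-year age group.
# These numbers are made up solely to illustrate the arithmetic.
asfr = {
    "15-19": 0.020,
    "20-24": 0.090,
    "25-29": 0.110,
    "30-34": 0.085,
    "35-39": 0.045,
    "40-44": 0.012,
    "45-49": 0.002,
}

AGE_GROUP_WIDTH = 5  # years in each age group

tfr = sum(rate * AGE_GROUP_WIDTH for rate in asfr.values())
print(f"Total fertility rate: {tfr:.2f} children per woman")  # 1.82 for these example rates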
List of countries by total fertility rate
[ "Environmental_science" ]
1,795
[ "Environmental social science", "Human geography" ]
1,570,072
https://en.wikipedia.org/wiki/Mathematical%20chemistry
Mathematical chemistry is the area of research engaged in novel applications of mathematics to chemistry; it concerns itself principally with the mathematical modeling of chemical phenomena. Mathematical chemistry has also sometimes been called computer chemistry, but should not be confused with computational chemistry. Major areas of research in mathematical chemistry include chemical graph theory, which deals with topology such as the mathematical study of isomerism and the development of topological descriptors or indices which find application in quantitative structure-property relationships; and chemical aspects of group theory, which finds applications in stereochemistry and quantum chemistry. Another important area is molecular knot theory and circuit topology that describe the topology of folded linear molecules such as proteins and nucleic acids. The history of the approach may be traced back to the 19th century. Georg Helm published a treatise titled "The Principles of Mathematical Chemistry: The Energetics of Chemical Phenomena" in 1894. Some of the more contemporary periodical publications specializing in the field are MATCH Communications in Mathematical and in Computer Chemistry, first published in 1975, and the Journal of Mathematical Chemistry, first published in 1987. In 1986 a series of annual conferences MATH/CHEM/COMP taking place in Dubrovnik was initiated by the late Ante Graovac. The basic models for mathematical chemistry are molecular graph and topological index. In 2005 the International Academy of Mathematical Chemistry (IAMC) was founded in Dubrovnik (Croatia) by Milan Randić. The Academy has 82 members (2009) from all over the world, including six scientists awarded with a Nobel Prize. See also Bibliography Molecular Descriptors for Chemoinformatics, by R. Todeschini and V. Consonni, Wiley-VCH, Weinheim, 2009. Mathematical Chemistry Series, by D. Bonchev, D. H. Rouvray (Eds.), Gordon and Breach Science Publisher, Amsterdam, 2000. Chemical Graph Theory, by N. Trinajstic, CRC Press, Boca Raton, 1992. Mathematical Concepts in Organic Chemistry, by I. Gutman, O. E. Polansky, Springer-Verlag, Berlin, 1986. Chemical Applications of Topology and Graph Theory, ed. by R. B. King, Elsevier, 1983. "Topological approach to the chemistry of conjugated molecules", by A. Graovac, I. Gutman, and N. Trinajstic, Lecture Notes in Chemistry, no.4, Springer-Verlag, Berlin, 1977. Notes References N. Trinajstić, I. Gutman, Mathematical Chemistry, Croatica Chemica Acta, 75(2002), pp. 329–356. A. T. Balaban, Reflections about Mathematical Chemistry, Foundations of Chemistry, 7(2005), pp. 289–306. G. Restrepo, J. L. Villaveces, Mathematical Thinking in Chemistry, HYLE, 18(2012), pp. 3–22. Advances in Mathematical Chemistry and Applications. Volume 2. Basak S. C., Restrepo G., Villaveces J. L. (Bentham Science eBooks, 2015) External links Journal of Mathematical Chemistry MATCH Communications in Mathematical and in Computer Chemistry International Academy of Mathematical Chemistry Chemistry Theoretical chemistry Application-specific graphs Cheminformatics
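As a concrete illustration of the "molecular graph and topological index" idea mentioned above, the sketch below computes the Wiener index, one of the oldest topological descriptors, for the hydrogen-suppressed carbon skeleton of 2-methylbutane. It is a minimal pure-Python example and is not tied to any particular cheminformatics package.

from collections import deque

def wiener_index(adjacency):
    # Sum of shortest-path distances over all unordered vertex pairs of a molecular graph
    def distances_from(start):
        dist = {start: 0}
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for w in adjacency[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        return dist

    vertices = list(adjacency)
    total = 0
    for i, v in enumerate(vertices):
        d = distances_from(v)
        for w in vertices[i + 1:]:
            total += d[w]
    return total

# Carbon skeleton of 2-methylbutane: chain C1-C2-C3-C4 with a methyl branch C5 on C2
skeleton = {1: [2], 2: [1, 3, 5], 3: [2, 4], 4: [3], 5: [2]}
print(wiener_index(skeleton))  # 18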
Mathematical chemistry
[ "Chemistry", "Mathematics" ]
666
[ "Drug discovery", "Applied mathematics", "Theoretical chemistry", "Mathematical chemistry", "Molecular modelling", "Computational chemistry", "nan", "Cheminformatics" ]
1,570,138
https://en.wikipedia.org/wiki/Phenotypic%20switching
Phenotypic switching is switching between multiple cellular morphologies. David R. Soll described two such systems: a first high-frequency switching system between several morphological stages, and a second high-frequency switching system between opaque and white cells. The latter is an epigenetic switching system. The term phenotypic switching in Candida albicans is often used to refer to this epigenetic white-to-opaque switching system. C. albicans needs this switch for sexual mating. Besides the two switching systems mentioned above, many other switching systems are known in C. albicans. A second example occurs in melanoma, where malignantly transformed pigment cells switch back-and-forth between phenotypes of proliferation and invasion in response to changing microenvironments, driving metastatic progression. See also Polyphenism References External links Cell biology
Phenotypic switching
[ "Biology" ]
175
[ "Cell biology" ]
1,570,527
https://en.wikipedia.org/wiki/First%20Friday%20%28public%20event%29
"First Friday" is a name for various public events in some cities (particularly in the United States) that occur on the first Friday of every month. These citywide events may take on many purposes, including art gallery openings, and social and political networking. American cities have promoted such events to bring people to historic areas perceived as dangerous, using the "safety in numbers" mentality to combat urban decay. In some cities this monthly event may occur on the first Saturday of each month instead of Friday or on "Third Thursdays". Additionally, these are "see and be seen" events that serve as a block party or social gathering open to the general public. Some of these events may be centered on political networking by Republicans and Democrats, but usually First Fridays are art and entertainment destinations. They may involve pub crawling, other retail establishments such as cafes and restaurants, and performances by fire twirling acts, jazz musicians, belly dancing, street musicians, or others. "First Fridays" is also a nationally recognized networking event targeting Black professionals held on the first Friday of every month in various cities throughout North America. These events started in 1987 and provide urban professionals an opportunity to socially network, exchange and share ideas on professional, educational, political and social issues. Art gallery openings Some cities hold "gallery hops" and "art walks", in which a number of the town's art galleries, museums, or artists' studios, both commercial and non-profit 501c3 organizations, will open their doors on Friday evening. The idea is that galleries will attract people to the downtown and enrich the art community by pooling their openings together, sometimes, as in the case of art6 and Artspace in Richmond, Virginia (when in Jackson Ward at 6 East Broad Street) into one monthly evening in a historically located designated arts district . Among the cities with art-oriented First Friday events are: Albany, Anchorage, Augusta, Bellingham, Binghamton, Boston, Burlington, Chicago, Columbia (Missouri), Columbus, Denver, Fort Collins, Honolulu, Hood River, Indianapolis, Ithaca, Juneau, Kalamazoo, Kansas City, Knoxville, Lincoln, Louisville, Las Vegas, Miami, Missoula, Peoria (Illinois), Scranton, Oakland, Oklahoma City, Oklahoma, Olympia, Philadelphia, Phoenix, Pittsburgh, Portland (Maine), Raleigh, Richmond, Rochester, San Antonio, San Jose, Santa Cruz, Santa Rosa, Spokane, Tallahassee, Tulsa, York (PA), Marietta, Ohio, and Ypsilanti. Richmond, Virginia is among the largest First Fridays art walks in the nation. It draws nearly 20,000 people from all over the state of Virginia and nation, showcasing its artistic side and opening restaurants and art galleries all over Broad Street, Manchester area, and Downtown Richmond. Artspace and Art6 Gallery were two of the first anchor galleries for First Fridays on Broad Street in Richmond. One of the oldest First Friday's is located in Boston's SoWa Arts District where more than 80 artists open their studios to the public every First Friday. The SoWa Arts District is located at Harrison Ave and Thayer Street in Boston's South End. In 2015, USA Today's 10BEST readers poll voted the SoWa as the second best arts district in the country. Social networking Some First Fridays promote arts and culture combined with social networking, like First Fridays in Tucson, AZ. 
Other smaller-scale First Fridays serve as social gatherings for groups of friends or people new to an area and may involve no art. They may also include the large First Friday events such as those in Phoenix, Arizona, attracting up to 20,000 attendees to hundreds of spaces. Various cities and areas have First Fridays centered on politically conservative networking. These events were pioneered in Washington, D.C., but similar events have found success in Virginia, Nevada, and Arizona. In many cities, First Fridays events place an emphasis on African American networking and business opportunities for African American professionals. First Friday is the top networking event for African American professionals and consistently attracts over 16,000 people each month across North America according to First Fridays United. The First Fridays monthly events originated in 1987 as an outlet for African American professionals to mix, mingle and network. During the 1980s it was common for an individual to be the only black professional working in their company. First Fridays happy hours became a way for these professionals to meet in a social atmosphere while exchanging useful information. The concept spread rapidly and First Fridays chapters now operate in 39 cities across North America and seven countries worldwide, including Austin, Binghamton, Birmingham, Boston, Charlotte, Charleston, Chicago, Cincinnati, Cleveland, Detroit, Fort Myers, Fort Lauderdale, Hattiesburg, Hartford, Hong Kong, Houston, Indianapolis, Jackson, Kansas City, Kingston, Las Vegas, Los Angeles, Long Beach, London, Louisville, Memphis, Miami, Nashville, Nassau, New Orleans, New York, Newark, Oakland, Orlando, Philadelphia, Phoenix, Pittsburgh, Raleigh, Richmond, Rio de Janeiro, Sacramento, San Antonio, San Diego, San Francisco, St. Louis, Normandy, Scranton, South Bend, Syracuse, Tallahassee, Tokyo, Toronto, Washington, D.C., and, since May 2016, also Biel/Bienne, Switzerland. In 2002, several First Fridays operators created First Fridays United, which is a company founded to organize the existing First Fridays chapters in 30 cities into one group to share information, resources and solicit corporate sponsors and advertisers. The organization sponsors a series of international events in addition to the monthly networking happy hours. Today, First Fridays reaches over 450,000 urban professionals through email, internet, and event marketing and has had numerous Fortune 1000 corporate clients. Chanin Walsh was one of the first to create a marketing plan for First Friday in Doylestown, which turned the "art gallery model" into an event benefiting all merchants. References External links Zak, Dan. "Off the clock and it's still party time on Capitol Hill with dueling happy hours". Washington Post. First Friday New Jersey First Friday Biel Bienne Switzerland Republic Bank First Friday Hop in Louisville KY First Friday dot Art Community building Culture of the United States Urban planning
First Friday (public event)
[ "Engineering" ]
1,265
[ "Urban planning", "Architecture" ]
1,570,530
https://en.wikipedia.org/wiki/Tetractys
The tetractys (), or tetrad, or the tetractys of the decad is a triangular figure consisting of ten points arranged in four rows: one, two, three, and four points in each row, which is the geometrical representation of the fourth triangular number. As a mystical symbol, it was very important to the secret worship of Pythagoreanism. There were four seasons, and the number was also associated with planetary motions and music. Pythagorean symbol The first four numbers symbolize the musica universalis and the Cosmos as: Monad – Unity Dyad – Power – Limit/Unlimited (peras/apeiron) Triad – Harmony Tetrad – Kosmos The four rows add up to ten, which was unity of a higher order (The Dekad). The Tetractys symbolizes the four classical elements—air, fire, water, and earth. The Tetractys represented the organization of space: the first row represented zero dimensions (a point) the second row represented one dimension (a line of two points) the third row represented two dimensions (a plane defined by a triangle of three points) the fourth row represented three dimensions (a tetrahedron defined by four points) A prayer of the Pythagoreans shows the importance of the Tetractys (sometimes called the "Mystic Tetrad"), as the prayer was addressed to it. The Pythagorean oath also mentioned the Tetractys: By that pure, holy, four lettered name on high, nature's eternal fountain and supply, the parent of all souls that living be, by him, with faith find oath, I swear to thee. It is said that the Pythagorean musical system was based on the Tetractys as the rows can be read as the ratios of 4:3 (perfect fourth), 3:2 (perfect fifth), 2:1 (octave), forming the basic intervals of the Pythagorean scales. That is, Pythagorean scales are generated from combining pure fourths (in a 4:3 relation), pure fifths (in a 3:2 relation), and the simple ratios of the unison 1:1 and the octave 2:1. Note that the diapason, 2:1 (octave), and the diapason plus diapente, 3:1 (compound fifth or perfect twelfth), are consonant intervals according to the tetractys of the decad, but that the diapason plus diatessaron, 8:3 (compound fourth or perfect eleventh), is not. Kabbalist symbol In the work by anthropologist Raphael Patai entitled The Hebrew Goddess, the author argues that the tetractys and its mysteries influenced the early Kabbalah. A Hebrew tetractys has the letters of the Tetragrammaton inscribed on the ten positions of the tetractys, from right to left. It has been argued that the Kabbalistic Tree of Life, with its ten spheres of emanation, is in some way connected to the tetractys, but its form is not that of a triangle. The occultist Dion Fortune writes: The point is assigned to Kether; the line to Chokmah; the two-dimensional plane to Binah; consequently the three-dimensional solid naturally falls to Chesed. The relationship between geometrical shapes and the first four Sephirot is analogous to the geometrical correlations in Tetraktys, shown above under #Pythagorean symbol, and unveils the relevance of the Tree of Life with the Tetraktys. Occurrence The tetractys occurs (generally coincidentally) in the following: the baryon decuplet an archbishop's coat of arms the arrangement of bowling pins in ten-pin bowling the arrangement of billiard balls in ten-ball pool a Chinese checkers board the "Christmas Tree" formation in association football In poetry In English-language poetry, a tetractys is a syllable-counting form with five lines. 
The first line has one syllable, the second has two syllables, the third line has three syllables, the fourth line has four syllables, and the fifth line has ten syllables. A sample tetractys would look like this: Mantrum Your / fury / confuses / us all greatly. / Volatile, big-bodied tots are selfish. // The tetractys was created by Ray Stebbing, who said the following about his newly created form: "The tetractys could be Britain's answer to the haiku. Its challenge is to express a complete thought, profound or comic, witty or wise, within the narrow compass of twenty syllables." See also Pascal's triangle References Further reading von Franz, Marie-Louise. Number and Time: Reflections Leading Towards a Unification of Psychology and Physics. Rider & Company, London, 1974. Fideler, D. ed. The Pythagorean Sourcebook and Library. Phanes Press, 1987. The Theoretic Arithmetic of the Pythagoreans – Thomas Taylor External links Examples of Tetractys poems Dot patterns Genres of poetry Greek mathematics History of mathematics History of poetry Kabbalah Mathematical symbols Poetic forms Pythagorean symbols Tarot Concepts in ancient Greek metaphysics
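For the Pythagorean material earlier in the article, the following small Python sketch prints the four rows of the figure, confirms that they sum to the fourth triangular number (10), and lists the adjacent-row ratios that Pythagorean tuning reads as musical intervals. It is only an illustration of the arithmetic described above.

from fractions import Fraction

rows = [1, 2, 3, 4]

# Print the triangular arrangement of ten points
for n in rows:
    print(("* " * n).center(2 * len(rows)))

print("Total points:", sum(rows))  # 10, the fourth triangular number

# Ratios between successive rows: 2:1 (octave), 3:2 (perfect fifth), 4:3 (perfect fourth)
names = {Fraction(2, 1): "octave", Fraction(3, 2): "perfect fifth", Fraction(4, 3): "perfect fourth"}
for a, b in zip(rows[1:], rows):
    ratio = Fraction(a, b)
    print(f"{a}:{b} -> {names[ratio]}")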
Tetractys
[ "Mathematics" ]
1,111
[ "Pythagorean symbols", "Symbols", "Mathematical symbols" ]
1,570,613
https://en.wikipedia.org/wiki/Pharming
Pharming is a cyberattack intended to redirect a website's traffic to another, fake site by installing a malicious program on the victim's computer in order to gain access to it. Pharming can be conducted either by changing the hosts file on a victim's computer or by exploitation of a vulnerability in DNS server software. DNS servers are computers responsible for resolving Internet names into their real IP addresses. Compromised DNS servers are sometimes referred to as "poisoned". Pharming requires unprotected access to target a computer, such as altering a customer's home computer, rather than a corporate business server. The term "pharming" is a neologism based on the words "farming" and "phishing". Phishing is a type of social-engineering attack to obtain access credentials, such as user names and passwords. In recent years, both pharming and phishing have been used to gain information for online identity theft. Pharming has become of major concern to businesses hosting ecommerce and online banking websites. Sophisticated measures known as anti-pharming are required to protect against this serious threat. Antivirus software and spyware removal software cannot protect against pharming. Vulnerabilities While malicious domain-name resolution can result from compromises in the large numbers of trusted nodes from a name lookup, the most vulnerable points of compromise are near the leaves of the Internet. For instance, incorrect entries in a desktop computer's hosts file, which circumvents name lookup with its own local name to IP address mapping, is a popular target for malware. Once rewritten, a legitimate request for a sensitive website can direct the user to a fraudulent copy. Personal computers such as desktops and laptops are often better targets for pharming because they receive poorer administration than most Internet servers. More worrisome than host-file attacks is the compromise of a local network router. Since most routers specify a trusted DNS to clients as they join the network, misinformation here will spoil lookups for the entire LAN. Unlike host-file rewrites, local-router compromise is difficult to detect. Routers can pass bad DNS information in two ways: misconfiguration of existing settings or wholesale rewrite of embedded software (aka firmware). Many routers allow the administrator to specify a particular, trusted DNS in place of the one suggested by an upstream node (e.g., the ISP). An attacker could specify a DNS server under his control instead of a legitimate one. All subsequent resolutions would go through the bad server. Alternatively, many routers have the ability to replace their firmware (i.e. the internal software that executes the device's more complex services). Like malware on desktop systems, a firmware replacement can be very difficult to detect. A stealthy implementation will appear to behave the same as the manufacturer's firmware; the administration page will look the same, settings will appear correct, etc. This approach, if well executed, could make it difficult for network administrators to discover the reconfiguration, if the device appears to be configured as the administrators intend but actually redirects DNS traffic in the background. Pharming is only one of many attacks that malicious firmware can mount; others include eavesdropping, active man in the middle attacks, and traffic logging. Like misconfiguration, the entire LAN is subject to these actions. By themselves, these pharming approaches have only academic interest. 
However, the ubiquity of consumer grade wireless routers presents a massive vulnerability. Administrative access can be available wirelessly on most of these devices. Moreover, since these routers often work with their default settings, administrative passwords are commonly unchanged. Even when altered, many are guessed quickly through dictionary attacks, since most consumer grade routers don't introduce timing penalties for incorrect login attempts. Once administrative access is granted, all of the router's settings including the firmware itself may be altered. These attacks are difficult to trace because they occur outside the home or small office and outside the Internet. Instances of pharming On 15 January 2005, the domain name for a large New York ISP, Panix, was hijacked to point to a website in Australia. No financial losses are known. The domain was later restored on 17 January, and ICANN's review blames Melbourne IT (now known as "Arq Group") "as a result of a failure of Melbourne IT to obtain express authorization from the registrant in accordance with ICANN's Inter-Registrar Transfer Policy." In February 2007, a pharming attack affected at least 50 financial companies in the U.S., Europe, and Asia. Attackers created a similar page for each targeted financial company, which requires effort and time. Victims clicked on a specific website that had a malicious code. This website forced consumers' computers to download a Trojan horse. Subsequent login information from any of the targeted financial companies was collected. The number of individuals affected is unknown but the incident continued for three days. In January 2008, Symantec reported a drive-by pharming incident, directed against a Mexican bank, in which the DNS settings on a customer's home router were changed after receipt of an e-mail that appeared to be from a legitimate Spanish-language greeting-card company. Defences Traditional methods for combating pharming include: Server-side software, DNS protection, and web browser add-ins such as toolbars. Server-side software is typically used by enterprises to protect their customers and employees who use internal or private web-based systems from being pharmed and phished, while browser add-ins allow individual users to protect themselves from phishing. DNS protection mechanisms help ensure that a specific DNS server cannot be hacked and thereby become a facilitator of pharming attacks. Spam filters typically do not provide users with protection against pharming. Currently the most efficient way to prevent pharming is for end users to make sure they are using secure web connections (HTTPS) to access privacy sensitive sites such as those for banking or taxing, and only accept the valid public key certificates issued by trusted sources. A certificate from an unknown organisation or an expired certificate should not be accepted all the time for crucial business. So-called active cookies provide for a server-side detection tool. Legislation also plays an essential role in anti-pharming. In March 2005, U.S. Senator Patrick Leahy (D-VT) introduced the Anti-Phishing Act of 2005, a bill that proposes a five-year prison sentence and/or fine for individuals who execute phishing attacks and use information garnered through online fraud such as phishing and pharming to commit crimes such as identity theft. 
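Since the hosts file is one of the redirection points described under Vulnerabilities, a simple defensive habit is to inspect it for unexpected entries. The Python sketch below merely lists every active mapping in that file and flags addresses other than the usual local ones; the path shown is the common Unix location and would need to be adjusted on Windows, and the script is an illustration rather than a complete anti-pharming tool.

HOSTS_PATH = "/etc/hosts"  # adjust for your operating system

# Addresses that are normal for local entries; anything else deserves a closer look
EXPECTED_LOCAL = {"127.0.0.1", "::1", "0.0.0.0"}

with open(HOSTS_PATH, encoding="utf-8") as f:
    for line in f:
        line = line.split("#", 1)[0].strip()  # drop comments and blank lines
        if not line:
            continue
        address, *hostnames = line.split()
        flag = "" if address in EXPECTED_LOCAL else "  <-- unexpected address, verify"
        print(f"{address:20} {' '.join(hostnames)}{flag}")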
For home users of consumer-grade routers and wireless access points, perhaps the single most effective defense is to change the password on the router to something other than the default, replacing it with a password that is not susceptible to a dictionary attack. Controversy over the use of the term The term "pharming" has been controversial within the field. At a conference organized by the Anti-Phishing Working Group, Phillip Hallam-Baker denounced the term as "a marketing neologism designed to convince banks to buy a new set of security services". See also Phishing DNS spoofing IT risk Mutual authentication Trusteer Notes References Sources External links After Phishing? Pharming! Types of malware Computer security exploits
Pharming
[ "Technology" ]
1,572
[ "Computer security exploits" ]
1,570,968
https://en.wikipedia.org/wiki/Isopropyl%20%CE%B2-D-1-thiogalactopyranoside
Isopropyl β-D-1-thiogalactopyranoside (IPTG) is a molecular biology reagent. This compound is a molecular mimic of allolactose, a lactose metabolite that triggers transcription of the lac operon, and it is therefore used to induce protein expression where the gene is under the control of the lac operator. Mechanism of action Like allolactose, IPTG binds to the lac repressor and releases the tetrameric repressor from the lac operator in an allosteric manner, thereby allowing the transcription of genes in the lac operon, such as the gene coding for beta-galactosidase, a hydrolase enzyme that catalyzes the hydrolysis of β-galactosides into monosaccharides. But unlike allolactose, IPTG has a sulfur (S) atom that creates a chemical bond which is non-hydrolyzable by the cell, preventing the cell from metabolizing or degrading the inducer. Therefore, its concentration remains constant during an experiment. IPTG uptake by E. coli can be independent of the action of lactose permease, since other transport pathways are also involved. At low concentration, IPTG enters cells through lactose permease, but at high concentrations (typically used for protein induction), IPTG can enter the cells independently of lactose permease. Use in laboratory When stored as a powder at 4 °C or below, IPTG is stable for 5 years. It is significantly less stable in solution; Sigma recommends storage for no more than a month at room temperature. IPTG is an effective inducer of protein expression in the concentration range of 100 μmol/L to 3.0 mmol/L. Typically, a sterile, filtered 1 mol/L solution of IPTG is added 1:1000 to an exponentially growing bacterial culture, to give a final concentration of 1 mmol/L. The concentration used depends on the strength of induction required, as well as the genotype of cells or plasmid used. If lacIq, a mutant that over-produces the lac repressor, is present, then a higher concentration of IPTG may be necessary. In blue-white screen, IPTG is used together with X-gal. Blue-white screen allows colonies that have been transformed with the recombinant plasmid rather than a non-recombinant one to be identified in cloning experiments. References External links IPTG bound to proteins in the PDB Carbohydrates Molecular biology Isopropyl compounds Organosulfur compounds
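The 1:1000 addition mentioned above is just the usual C1V1 = C2V2 dilution arithmetic; the short Python sketch below computes how much of a 1 mol/L stock to add for a chosen culture volume and target concentration. The volumes are example values for illustration, not a protocol.

def stock_volume_needed(stock_conc_mM, final_conc_mM, culture_volume_mL):
    # Volume of IPTG stock (in mL) to add so the culture reaches the target concentration.
    # Uses C1 * V1 = C2 * V2 and ignores the tiny volume change from the addition.
    return final_conc_mM * culture_volume_mL / stock_conc_mM

stock = 1000.0   # 1 mol/L stock expressed in mmol/L
final = 1.0      # target concentration of 1 mmol/L in the culture
culture = 50.0   # example: 50 mL culture

v = stock_volume_needed(stock, final, culture)
print(f"Add {v * 1000:.0f} microlitres of stock to {culture:.0f} mL of culture")  # 50 microlitres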
Isopropyl β-D-1-thiogalactopyranoside
[ "Chemistry", "Biology" ]
581
[ "Biomolecules by chemical classification", "Carbohydrates", "Organosulfur compounds", "Organic compounds", "Carbohydrate chemistry", "Molecular biology", "Biochemistry" ]
166,689
https://en.wikipedia.org/wiki/Interferometry
Interferometry is a technique which uses the interference of superimposed waves to extract information. Interferometry typically uses electromagnetic waves and is an important investigative technique in the fields of astronomy, fiber optics, engineering metrology, optical metrology, oceanography, seismology, spectroscopy (and its applications to chemistry), quantum mechanics, nuclear and particle physics, plasma physics, biomolecular interactions, surface profiling, microfluidics, mechanical stress/strain measurement, velocimetry, optometry, and making holograms. Interferometers are devices that extract information from interference. They are widely used in science and industry for the measurement of microscopic displacements, refractive index changes and surface irregularities. In most interferometers, light from a single source is split into two beams that travel in different optical paths, which are then combined again to produce interference; two incoherent sources can also be made to interfere under some circumstances. The resulting interference fringes give information about the difference in optical path lengths. In analytical science, interferometers are used to measure lengths and the shape of optical components with nanometer precision; they are the highest-precision length measuring instruments in existence. In Fourier transform spectroscopy they are used to analyze light containing features of absorption or emission associated with a substance or mixture. An astronomical interferometer consists of two or more separate telescopes that combine their signals, offering a resolution equivalent to that of a telescope of diameter equal to the largest separation between its individual elements. Basic principles Interferometry makes use of the principle of superposition to combine waves in a way that will cause the result of their combination to have some meaningful property that is diagnostic of the original state of the waves. This works because when two waves with the same frequency combine, the resulting intensity pattern is determined by the phase difference between the two waves—waves that are in phase will undergo constructive interference while waves that are out of phase will undergo destructive interference. Waves which are not completely in phase nor completely out of phase will have an intermediate intensity pattern, which can be used to determine their relative phase difference. Most interferometers use light or some other form of electromagnetic wave. Typically (see Fig. 1, the well-known Michelson configuration) a single incoming beam of coherent light will be split into two identical beams by a beam splitter (a partially reflecting mirror). Each of these beams travels a different route, called a path, and they are recombined before arriving at a detector. The path difference, the difference in the distance traveled by each beam, creates a phase difference between them. It is this introduced phase difference that creates the interference pattern between the initially identical waves. If a single beam has been split along two paths, then the phase difference is diagnostic of anything that changes the phase along the paths. This could be a physical change in the path length itself or a change in the refractive index along the path. As seen in Fig. 2a and 2b, the observer has a direct view of mirror M1 seen through the beam splitter, and sees a reflected image M'2 of mirror M2.
The fringes can be interpreted as the result of interference between light coming from the two virtual images 1 and 2 of the original source S. The characteristics of the interference pattern depend on the nature of the light source and the precise orientation of the mirrors and beam splitter. In Fig. 2a, the optical elements are oriented so that 1 and 2 are in line with the observer, and the resulting interference pattern consists of circles centered on the normal to M1 and M'2. If, as in Fig. 2b, M1 and 2 are tilted with respect to each other, the interference fringes will generally take the shape of conic sections (hyperbolas), but if 1 and 2 overlap, the fringes near the axis will be straight, parallel, and equally spaced. If S is an extended source rather than a point source as illustrated, the fringes of Fig. 2a must be observed with a telescope set at infinity, while the fringes of Fig. 2b will be localized on the mirrors. Use of white light will result in a pattern of colored fringes (see Fig. 3). The central fringe representing equal path length may be light or dark depending on the number of phase inversions experienced by the two beams as they traverse the optical system. (See Michelson interferometer for a discussion of this.) History The law of interference of light was described by Thomas Young in his 1803 Bakerian Lecture to the Royal Society of London. In preparation for the lecture, Young performed a double-aperture experiment that demonstrated interference fringes. His interpretation in terms of the interference of waves was rejected by most scientists at the time because of the dominance of Isaac Newton's corpuscular theory of light proposed a century before. The French engineer Augustin-Jean Fresnel, unaware of Young's results, began working on a wave theory of light and interference and was introduced to François Arago. Between 1816 and 1818, Fresnel and Arago performed interference experiments at the Paris Observatory. During this time, Arago designed and built the first interferometer, using it to measure the refractive index of moist air relative to dry air, which posed a potential problem for astronomical observations of star positions. The success of Fresnel's wave theory of light was established in his prize-winning memoire of 1819 that predicted and measured diffraction patterns. The Arago interferometer was later employed in 1850 by Leon Foucault to measure the speed of light in air relative to water, and it was used again in 1851 by Hippolyte Fizeau to measure the effect of Fresnel drag on the speed of light in moving water. Jules Jamin developed the first single-beam interferometer (not requiring a splitting aperture as the Arago interferometer did) in 1856. In 1881, the American physicist Albert A. Michelson, while visiting Hermann von Helmholtz in Berlin, invented the interferometer that is named after him, the Michelson Interferometer, to search for effects of the motion of the Earth on the speed of light. Michelson's null results performed in the basement of the Potsdam Observatory outside of Berlin (the horse traffic in the center of Berlin created too many vibrations), and his later more-accurate null results observed with Edward W. Morley at Case College in Cleveland, Ohio, contributed to the growing crisis of the luminiferous ether. Einstein stated that it was Fizeau's measurement of the speed of light in moving water using the Arago interferometer that inspired his theory of the relativistic addition of velocities. 
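The dependence of the combined intensity on the phase difference, described in the basic-principles discussion above, can be illustrated numerically. The sketch below evaluates the standard two-beam interference relation I = I1 + I2 + 2·sqrt(I1·I2)·cos(Δφ) for a few optical path differences; it assumes two ideal, mutually coherent monochromatic beams, and the wavelength and beam intensities are arbitrary example values.

```python
import numpy as np

def two_beam_intensity(i1, i2, phase_difference):
    """Intensity of two superposed beams of the same frequency vs. their phase difference (radians)."""
    return i1 + i2 + 2 * np.sqrt(i1 * i2) * np.cos(phase_difference)

wavelength = 633e-9                      # example wavelength (HeNe-like)
opd = np.linspace(0, wavelength, 5)      # optical path differences from 0 to one wavelength
phase = 2 * np.pi * opd / wavelength
print(two_beam_intensity(1.0, 1.0, phase))
# approximately [4, 2, 0, 2, 4]: constructive interference at zero and one full wavelength
# of path difference, destructive interference at half a wavelength
```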
Categories Interferometers and interferometric techniques may be categorized by a variety of criteria: Homodyne versus heterodyne detection In homodyne detection, the interference occurs between two beams at the same wavelength (or carrier frequency). The phase difference between the two beams results in a change in the intensity of the light on the detector. The resulting intensity of the light after mixing of these two beams is measured, or the pattern of interference fringes is viewed or recorded. Most of the interferometers discussed in this article fall into this category. The heterodyne technique is used for (1) shifting an input signal into a new frequency range as well as (2) amplifying a weak input signal (assuming use of an active mixer). A weak input signal of frequency f1 is mixed with a strong reference frequency f2 from a local oscillator (LO). The nonlinear combination of the input signals creates two new signals, one at the sum f1 + f2 of the two frequencies, and the other at the difference f1 − f2. These new frequencies are called heterodynes. Typically only one of the new frequencies is desired, and the other signal is filtered out of the output of the mixer. The output signal will have an intensity proportional to the product of the amplitudes of the input signals. The most important and widely used application of the heterodyne technique is in the superheterodyne receiver (superhet), invented in 1917-18 by U.S. engineer Edwin Howard Armstrong and French engineer Lucien Lévy. In this circuit, the incoming radio frequency signal from the antenna is mixed with a signal from a local oscillator (LO) and converted by the heterodyne technique to a lower fixed frequency signal called the intermediate frequency (IF). This IF is amplified and filtered, before being applied to a detector which extracts the audio signal, which is sent to the loudspeaker. Optical heterodyne detection is an extension of the heterodyne technique to higher (visible) frequencies. While optical heterodyne interferometry is usually done at a single point, it is also possible to perform this widefield. Double path versus common path A double-path interferometer is one in which the reference beam and sample beam travel along divergent paths. Examples include the Michelson interferometer, the Twyman–Green interferometer, and the Mach–Zehnder interferometer. After being perturbed by interaction with the sample under test, the sample beam is recombined with the reference beam to create an interference pattern which can then be interpreted. A common-path interferometer is a class of interferometer in which the reference beam and sample beam travel along the same path. Fig. 4 illustrates the Sagnac interferometer, the fibre optic gyroscope, the point diffraction interferometer, and the lateral shearing interferometer. Other examples of common path interferometers include the Zernike phase-contrast microscope, Fresnel's biprism, the zero-area Sagnac, and the scatterplate interferometer. Wavefront splitting versus amplitude splitting Wavefront splitting interferometers A wavefront splitting interferometer divides a light wavefront emerging from a point or a narrow slit (i.e. spatially coherent light) and, after allowing the two parts of the wavefront to travel through different paths, allows them to recombine. Fig. 5 illustrates Young's interference experiment and Lloyd's mirror.
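Returning briefly to the heterodyne detection described at the start of this section, the sum and difference frequencies produced by mixing can be verified with a short numerical experiment. The idealized multiplying mixer and the frequencies below are assumptions chosen for clarity, not a model of a real receiver.

```python
import numpy as np

fs = 10_000.0                             # sample rate in Hz (example value)
t = np.arange(int(fs)) / fs               # one second of samples
f1, f2 = 440.0, 500.0                     # weak signal and local-oscillator frequencies (example values)

mixed = np.cos(2 * np.pi * f1 * t) * np.cos(2 * np.pi * f2 * t)   # idealized multiplying mixer

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(mixed.size, 1 / fs)
print(freqs[spectrum > 0.5 * spectrum.max()])   # [ 60. 940.]: the difference f2 - f1 and the sum f1 + f2
```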
Other examples of wavefront splitting interferometers include the Fresnel biprism, the Billet Bi-Lens, diffraction-grating Michelson interferometer, and the Rayleigh interferometer. In 1803, Young's interference experiment played a major role in the general acceptance of the wave theory of light. If white light is used in Young's experiment, the result is a white central band of constructive interference corresponding to equal path length from the two slits, surrounded by a symmetrical pattern of colored fringes of diminishing intensity. In addition to continuous electromagnetic radiation, Young's experiment has been performed with individual photons, with electrons, and with buckyball molecules large enough to be seen under an electron microscope. Lloyd's mirror generates interference fringes by combining direct light from a source (blue lines) and light from the source's reflected image (red lines) from a mirror held at grazing incidence. The result is an asymmetrical pattern of fringes. The band of equal path length, nearest the mirror, is dark rather than bright. In 1834, Humphrey Lloyd interpreted this effect as proof that the phase of a front-surface reflected beam is inverted. Amplitude-splitting interferometers An amplitude splitting interferometer uses a partial reflector to divide the amplitude of the incident wave into separate beams which are separated and recombined. The Fizeau interferometer is shown as it might be set up to test an optical flat. A precisely figured reference flat is placed on top of the flat being tested, separated by narrow spacers. The reference flat is slightly beveled (only a fraction of a degree of beveling is necessary) to prevent the rear surface of the flat from producing interference fringes. Separating the test and reference flats allows the two flats to be tilted with respect to each other. By adjusting the tilt, which adds a controlled phase gradient to the fringe pattern, one can control the spacing and direction of the fringes, so that one may obtain an easily interpreted series of nearly parallel fringes rather than a complex swirl of contour lines. Separating the plates, however, necessitates that the illuminating light be collimated. Fig. 6 shows a collimated beam of monochromatic light illuminating the two flats and a beam splitter allowing the fringes to be viewed on-axis. The Mach–Zehnder interferometer is a more versatile instrument than the Michelson interferometer. Each of the well separated light paths is traversed only once, and the fringes can be adjusted so that they are localized in any desired plane. Typically, the fringes would be adjusted to lie in the same plane as the test object, so that fringes and test object can be photographed together. If it is decided to produce fringes in white light, then, since white light has a limited coherence length, on the order of micrometers, great care must be taken to equalize the optical paths or no fringes will be visible. As illustrated in Fig. 6, a compensating cell would be placed in the path of the reference beam to match the test cell. Note also the precise orientation of the beam splitters. The reflecting surfaces of the beam splitters would be oriented so that the test and reference beams pass through an equal amount of glass. In this orientation, the test and reference beams each experience two front-surface reflections, resulting in the same number of phase inversions.
The result is that light traveling an equal optical path length in the test and reference beams produces a white light fringe of constructive interference. The heart of the Fabry–Pérot interferometer is a pair of partially silvered glass optical flats spaced several millimeters to centimeters apart with the silvered surfaces facing each other. (Alternatively, a Fabry–Pérot etalon uses a transparent plate with two parallel reflecting surfaces.) As with the Fizeau interferometer, the flats are slightly beveled. In a typical system, illumination is provided by a diffuse source set at the focal plane of a collimating lens. A focusing lens produces what would be an inverted image of the source if the paired flats were not present, i.e., in the absence of the paired flats, all light emitted from point A passing through the optical system would be focused at point A'. In Fig. 6, only one ray emitted from point A on the source is traced. As the ray passes through the paired flats, it is multiply reflected to produce multiple transmitted rays which are collected by the focusing lens and brought to point A' on the screen. The complete interference pattern takes the appearance of a set of concentric rings. The sharpness of the rings depends on the reflectivity of the flats. If the reflectivity is high, resulting in a high Q factor (i.e., high finesse), monochromatic light produces a set of narrow bright rings against a dark background. In Fig. 6, the low-finesse image corresponds to a reflectivity of 0.04 (i.e., unsilvered surfaces) versus a reflectivity of 0.95 for the high-finesse image. Fig. 6 illustrates the Fizeau, Mach–Zehnder, and Fabry–Pérot interferometers. Other examples of amplitude splitting interferometer include the Michelson, Twyman–Green, Laser Unequal Path, and Linnik interferometer. Michelson-Morley Michelson and Morley (1887) and other early experimentalists using interferometric techniques in an attempt to measure the properties of the luminiferous aether, used monochromatic light only for initially setting up their equipment, always switching to white light for the actual measurements. The reason is that measurements were recorded visually. Monochromatic light would result in a uniform fringe pattern. Lacking modern means of environmental temperature control, experimentalists struggled with continual fringe drift even though the interferometer might be set up in a basement. Since the fringes would occasionally disappear due to vibrations by passing horse traffic, distant thunderstorms and the like, it would be easy for an observer to "get lost" when the fringes returned to visibility. The advantages of white light, which produced a distinctive colored fringe pattern, far outweighed the difficulties of aligning the apparatus due to its low coherence length. This was an early example of the use of white light to resolve the "2 pi ambiguity". Applications Physics and astronomy In physics, one of the most important experiments of the late 19th century was the famous "failed experiment" of Michelson and Morley which provided evidence for special relativity. Recent repetitions of the Michelson–Morley experiment perform heterodyne measurements of beat frequencies of crossed cryogenic optical resonators. Fig 7 illustrates a resonator experiment performed by Müller et al. in 2003. Two optical resonators constructed from crystalline sapphire, controlling the frequencies of two lasers, were set at right angles within a helium cryostat. 
A frequency comparator measured the beat frequency of the combined outputs of the two resonators. The precision by which anisotropy of the speed of light can be excluded in resonator experiments is currently at the 10⁻¹⁷ level. Michelson interferometers are used in tunable narrow band optical filters and as the core hardware component of Fourier transform spectrometers. When used as a tunable narrow band filter, Michelson interferometers exhibit a number of advantages and disadvantages when compared with competing technologies such as Fabry–Pérot interferometers or Lyot filters. Michelson interferometers have the largest field of view for a specified wavelength, and are relatively simple in operation, since tuning is via mechanical rotation of waveplates rather than via high voltage control of piezoelectric crystals or lithium niobate optical modulators as used in a Fabry–Pérot system. Compared with Lyot filters, which use birefringent elements, Michelson interferometers have a relatively low temperature sensitivity. On the negative side, Michelson interferometers have a relatively restricted wavelength range and require use of prefilters which restrict transmittance. Fig. 8 illustrates the operation of a Fourier transform spectrometer, which is essentially a Michelson interferometer with one mirror movable. (A practical Fourier transform spectrometer would substitute corner cube reflectors for the flat mirrors of the conventional Michelson interferometer, but for simplicity, the illustration does not show this.) An interferogram is generated by making measurements of the signal at many discrete positions of the moving mirror. A Fourier transform converts the interferogram into an actual spectrum. Fig. 9 shows a Doppler image of the solar corona made using a tunable Fabry–Pérot interferometer to recover scans of the solar corona at a number of wavelengths near the FeXIV green line. The picture is a color-coded image of the Doppler shift of the line, which may be associated with the coronal plasma velocity towards or away from the satellite camera. Fabry–Pérot thin-film etalons are used in narrow bandpass filters capable of selecting a single spectral line for imaging; for example, the H-alpha line or the Ca-K line of the Sun or stars. Fig. 10 shows an Extreme ultraviolet Imaging Telescope (EIT) image of the Sun at 195 Ångströms (19.5 nm), corresponding to a spectral line of multiply-ionized iron atoms. EIT used multilayer coated reflective mirrors that were coated with alternate layers of a light "spacer" element (such as silicon), and a heavy "scatterer" element (such as molybdenum). Approximately 100 layers of each type were placed on each mirror, with a thickness of around 10 nm each. The layer thicknesses were tightly controlled so that at the desired wavelength, reflected photons from each layer interfered constructively. The Laser Interferometer Gravitational-Wave Observatory (LIGO) uses two 4-km Michelson–Fabry–Pérot interferometers for the detection of gravitational waves. In this application, the Fabry–Pérot cavity is used to store photons for almost a millisecond while they bounce up and down between the mirrors. This increases the time a gravitational wave can interact with the light, which results in a better sensitivity at low frequencies. Smaller cavities, usually called mode cleaners, are used for spatial filtering and frequency stabilization of the main laser. The first observation of gravitational waves occurred on September 14, 2015.
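The statement above that a Fourier transform converts the interferogram into a spectrum can be demonstrated with a toy calculation. The sketch below synthesizes the interferogram of a source containing two narrow emission lines and recovers them with a fast Fourier transform; the wavenumbers and scan length are arbitrary example values, and real instruments also require apodization and phase correction, which are omitted here.

```python
import numpy as np

opd_cm = np.arange(4096) / 4096.0          # optical path difference axis, 0 to ~1 cm
lines_cm1 = [1500.0, 1520.0]               # two emission lines, wavenumbers in cm^-1 (example values)
interferogram = sum(np.cos(2 * np.pi * nu * opd_cm) for nu in lines_cm1)

spectrum = np.abs(np.fft.rfft(interferogram))
wavenumbers = np.fft.rfftfreq(len(opd_cm), d=opd_cm[1] - opd_cm[0])

strongest = wavenumbers[np.argsort(spectrum)[-2:]]
print(np.sort(strongest))                  # [1500. 1520.]: both lines recovered from the interferogram
```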
The Mach–Zehnder interferometer's relatively large and freely accessible working space, and its flexibility in locating the fringes has made it the interferometer of choice for visualizing flow in wind tunnels, and for flow visualization studies in general. It is frequently used in the fields of aerodynamics, plasma physics and heat transfer to measure pressure, density, and temperature changes in gases. Mach–Zehnder interferometers are also used to study one of the most counterintuitive predictions of quantum mechanics, the phenomenon known as quantum entanglement. An astronomical interferometer achieves high-resolution observations using the technique of aperture synthesis, mixing signals from a cluster of comparatively small telescopes rather than a single very expensive monolithic telescope. Early radio telescope interferometers used a single baseline for measurement. Later astronomical interferometers, such as the Very Large Array illustrated in Fig 11, used arrays of telescopes arranged in a pattern on the ground. A limited number of baselines will result in insufficient coverage. This was alleviated by using the rotation of the Earth to rotate the array relative to the sky. Thus, a single baseline could measure information in multiple orientations by taking repeated measurements, a technique called Earth-rotation synthesis. Baselines thousands of kilometers long were achieved using very long baseline interferometry. Astronomical optical interferometry has had to overcome a number of technical issues not shared by radio telescope interferometry. The short wavelengths of light necessitate extreme precision and stability of construction. For example, spatial resolution of 1 milliarcsecond requires 0.5 μm stability in a 100 m baseline. Optical interferometric measurements require high sensitivity, low noise detectors that did not become available until the late 1990s. Astronomical "seeing", the turbulence that causes stars to twinkle, introduces rapid, random phase changes in the incoming light, requiring data collection rates to be faster than the rate of turbulence. Despite these technical difficulties, three major facilities are now in operation offering resolutions down to the fractional milliarcsecond range. This linked video shows a movie assembled from aperture synthesis images of the Beta Lyrae system, a binary star system approximately 960 light-years (290 parsecs) away in the constellation Lyra, as observed by the CHARA array with the MIRC instrument. The brighter component is the primary star, or the mass donor. The fainter component is the thick disk surrounding the secondary star, or the mass gainer. The two components are separated by 1 milli-arcsecond. Tidal distortions of the mass donor and the mass gainer are both clearly visible. The wave character of matter can be exploited to build interferometers. The first examples of matter interferometers were electron interferometers, later followed by neutron interferometers. Around 1990 the first atom interferometers were demonstrated, later followed by interferometers employing molecules. Electron holography is an imaging technique that photographically records the electron interference pattern of an object, which is then reconstructed to yield a greatly magnified image of the original object. This technique was developed to enable greater resolution in electron microscopy than is possible using conventional imaging techniques. 
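Stepping back to the astronomical baselines discussed above: the angular resolution of a two-element interferometer scales roughly as λ/B, which is where the quoted figure of about a milliarcsecond for a 100 m optical baseline comes from. The snippet below is only an order-of-magnitude check; the radio case uses an assumed VLBI baseline purely as an example.

```python
import math

def resolution_mas(wavelength_m, baseline_m):
    """Approximate resolution of a two-element interferometer, theta ~ lambda / B, in milliarcseconds."""
    return math.degrees(wavelength_m / baseline_m) * 3600.0 * 1000.0

print(f"{resolution_mas(0.5e-6, 100.0):.1f} mas")   # visible light on a 100 m baseline -> ~1 mas
print(f"{resolution_mas(0.21, 8.0e6):.1f} mas")     # 21 cm radio on an assumed ~8000 km VLBI baseline
```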
The resolution of conventional electron microscopy is not limited by electron wavelength, but by the large aberrations of electron lenses. Neutron interferometry has been used to investigate the Aharonov–Bohm effect, to examine the effects of gravity acting on an elementary particle, and to demonstrate a strange behavior of fermions that is at the basis of the Pauli exclusion principle: Unlike macroscopic objects, when fermions are rotated by 360° about any axis, they do not return to their original state, but develop a minus sign in their wave function. In other words, a fermion needs to be rotated 720° before returning to its original state. Atom interferometry techniques are reaching sufficient precision to allow laboratory-scale tests of general relativity. Interferometers are used in atmospheric physics for high-precision measurements of trace gases via remote sounding of the atmosphere. There are several examples of interferometers that utilize either absorption or emission features of trace gases. A typical use would be in continual monitoring of the column concentration of trace gases such as ozone and carbon monoxide above the instrument. Engineering and applied science Newton (test plate) interferometry is frequently used in the optical industry for testing the quality of surfaces as they are being shaped and figured. Fig. 13 shows photos of reference flats being used to check two test flats at different stages of completion, showing the different patterns of interference fringes. The reference flats are resting with their bottom surfaces in contact with the test flats, and they are illuminated by a monochromatic light source. The light waves reflected from both surfaces interfere, resulting in a pattern of bright and dark bands. The surface in the left photo is nearly flat, indicated by a pattern of straight parallel interference fringes at equal intervals. The surface in the right photo is uneven, resulting in a pattern of curved fringes. Each pair of adjacent fringes represents a difference in surface elevation of half a wavelength of the light used, so differences in elevation can be measured by counting the fringes. The flatness of the surfaces can be measured to millionths of an inch by this method. To determine whether the surface being tested is concave or convex with respect to the reference optical flat, any of several procedures may be adopted. One can observe how the fringes are displaced when one presses gently on the top flat. If one observes the fringes in white light, the sequence of colors becomes familiar with experience and aids in interpretation. Finally one may compare the appearance of the fringes as one moves ones head from a normal to an oblique viewing position. These sorts of maneuvers, while common in the optical shop, are not suitable in a formal testing environment. When the flats are ready for sale, they will typically be mounted in a Fizeau interferometer for formal testing and certification. Fabry-Pérot etalons are widely used in telecommunications, lasers and spectroscopy to control and measure the wavelengths of light. Dichroic filters are multiple layer thin-film etalons. In telecommunications, wavelength-division multiplexing, the technology that enables the use of multiple wavelengths of light through a single optical fiber, depends on filtering devices that are thin-film etalons. Single-mode lasers employ etalons to suppress all optical cavity modes except the single one of interest. 
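The etalons used here for wavelength selection, and the low- and high-finesse ring patterns described earlier for the Fabry–Pérot interferometer, are governed by the Airy transmission function of the cavity. The sketch below evaluates the finesse for the two reflectivities compared above (0.04 and 0.95), assuming an ideal lossless etalon with identical mirrors.

```python
import numpy as np

def airy_transmission(round_trip_phase, reflectivity):
    """Transmission of an ideal lossless Fabry-Perot etalon vs. round-trip phase."""
    f = 4 * reflectivity / (1 - reflectivity) ** 2        # coefficient of finesse
    return 1.0 / (1.0 + f * np.sin(round_trip_phase / 2) ** 2)

def finesse(reflectivity):
    """Ratio of fringe spacing to fringe width for an ideal etalon."""
    return np.pi * np.sqrt(reflectivity) / (1 - reflectivity)

for r in (0.04, 0.95):                                    # the reflectivities compared above
    print(f"R = {r:.2f}: finesse ~ {finesse(r):.1f}")
# R = 0.04 gives a finesse of roughly 0.7 (broad, washed-out rings);
# R = 0.95 gives roughly 61 (narrow bright rings on a dark background).
print(f"{airy_transmission(np.pi, 0.95):.1e}")            # transmission far from resonance at R = 0.95
```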
The Twyman–Green interferometer, invented by Twyman and Green in 1916, is a variant of the Michelson interferometer widely used to test optical components. The basic characteristics distinguishing it from the Michelson configuration are the use of a monochromatic point light source and a collimator. Michelson (1918) criticized the Twyman–Green configuration as being unsuitable for the testing of large optical components, since the light sources available at the time had limited coherence length. Michelson pointed out that constraints on geometry forced by limited coherence length required the use of a reference mirror of equal size to the test mirror, making the Twyman–Green impractical for many purposes. Decades later, the advent of laser light sources answered Michelson's objections. (A Twyman–Green interferometer using a laser light source and unequal path length is known as a Laser Unequal Path Interferometer, or LUPI.) Fig. 14 illustrates a Twyman–Green interferometer set up to test a lens. Light from a monochromatic point source is expanded by a diverging lens (not shown), then is collimated into a parallel beam. A convex spherical mirror is positioned so that its center of curvature coincides with the focus of the lens being tested. The emergent beam is recorded by an imaging system for analysis. Mach–Zehnder interferometers are being used in integrated optical circuits, in which light interferes between two branches of a waveguide that are externally modulated to vary their relative phase. A slight tilt of one of the beam splitters will result in a path difference and a change in the interference pattern. Mach–Zehnder interferometers are the basis of a wide variety of devices, from RF modulators to sensors to optical switches. The latest proposed extremely large astronomical telescopes, such as the Thirty Meter Telescope and the Extremely Large Telescope, will be of segmented design. Their primary mirrors will be built from hundreds of hexagonal mirror segments. Polishing and figuring these highly aspheric and non-rotationally symmetric mirror segments presents a major challenge. Traditional means of optical testing compares a surface against a spherical reference with the aid of a null corrector. In recent years, computer-generated holograms (CGHs) have begun to supplement null correctors in test setups for complex aspheric surfaces. Fig. 15 illustrates how this is done. Unlike the figure, actual CGHs have line spacing on the order of 1 to 10 μm. When laser light is passed through the CGH, the zero-order diffracted beam experiences no wavefront modification. The wavefront of the first-order diffracted beam, however, is modified to match the desired shape of the test surface. In the illustrated Fizeau interferometer test setup, the zero-order diffracted beam is directed towards the spherical reference surface, and the first-order diffracted beam is directed towards the test surface in such a way that the two reflected beams combine to form interference fringes. The same test setup can be used for the innermost mirrors as for the outermost, with only the CGH needing to be exchanged. Ring laser gyroscopes (RLGs) and fibre optic gyroscopes (FOGs) are interferometers used in navigation systems. They operate on the principle of the Sagnac effect. 
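The Sagnac effect that both devices exploit produces a phase shift proportional to the enclosed area and the rotation rate, Δφ = 8πNAΩ/(λc) for a loop of N turns. The sketch below evaluates this for an invented fibre-coil geometry to show the order of magnitude involved in sensing Earth's rotation; the coil dimensions and wavelength are assumptions, not the parameters of any real gyroscope.

```python
import math

def sagnac_phase_shift(area_m2, turns, rotation_rate_rad_s, wavelength_m):
    """Sagnac phase shift: delta_phi = 8 * pi * N * A * Omega / (lambda * c)."""
    c = 299_792_458.0
    return 8 * math.pi * turns * area_m2 * rotation_rate_rad_s / (wavelength_m * c)

# Assumed example geometry: 1000 m of fibre wound on a 10 cm diameter coil, 1550 nm light
radius = 0.05
turns = 1000.0 / (2 * math.pi * radius)
earth_rate = 7.292e-5                      # Earth's rotation rate, rad/s
phi = sagnac_phase_shift(math.pi * radius ** 2, turns, earth_rate, 1.55e-6)
print(f"{phi:.1e} rad")                    # ~1e-4 rad: small, but measurable with enough fibre
```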
The distinction between RLGs and FOGs is that in a RLG, the entire ring is part of the laser while in a FOG, an external laser injects counter-propagating beams into an optical fiber ring, and rotation of the system then causes a relative phase shift between those beams. In a RLG, the observed phase shift is proportional to the accumulated rotation, while in a FOG, the observed phase shift is proportional to the angular velocity. In telecommunication networks, heterodyning is used to move frequencies of individual signals to different channels which may share a single physical transmission line. This is called frequency division multiplexing (FDM). For example, a coaxial cable used by a cable television system can carry 500 television channels at the same time because each one is given a different frequency, so they don't interfere with one another. Continuous wave (CW) doppler radar detectors are basically heterodyne detection devices that compare transmitted and reflected beams. Optical heterodyne detection is used for coherent Doppler lidar measurements capable of detecting very weak light scattered in the atmosphere and monitoring wind speeds with high accuracy. It has application in optical fiber communications, in various high resolution spectroscopic techniques, and the self-heterodyne method can be used to measure the linewidth of a laser. Optical heterodyne detection is an essential technique used in high-accuracy measurements of the frequencies of optical sources, as well as in the stabilization of their frequencies. Until a relatively few years ago, lengthy frequency chains were needed to connect the microwave frequency of a cesium or other atomic time source to optical frequencies. At each step of the chain, a frequency multiplier would be used to produce a harmonic of the frequency of that step, which would be compared by heterodyne detection with the next step (the output of a microwave source, far infrared laser, infrared laser, or visible laser). Each measurement of a single spectral line required several years of effort in the construction of a custom frequency chain. Currently, optical frequency combs have provided a much simpler method of measuring optical frequencies. If a mode-locked laser is modulated to form a train of pulses, its spectrum is seen to consist of the carrier frequency surrounded by a closely spaced comb of optical sideband frequencies with a spacing equal to the pulse repetition frequency (Fig. 16). The pulse repetition frequency is locked to that of the frequency standard, and the frequencies of the comb elements at the red end of the spectrum are doubled and heterodyned with the frequencies of the comb elements at the blue end of the spectrum, thus allowing the comb to serve as its own reference. In this manner, locking of the frequency comb output to an atomic standard can be performed in a single step. To measure an unknown frequency, the frequency comb output is dispersed into a spectrum. The unknown frequency is overlapped with the appropriate spectral segment of the comb and the frequency of the resultant heterodyne beats is measured. One of the most common industrial applications of optical interferometry is as a versatile measurement tool for the high precision examination of surface topography. 
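Before turning to surface metrology, note that the comb-based frequency measurement just described reduces to simple bookkeeping once the repetition rate, offset frequency and beat note are known: the unknown optical frequency is an integer multiple of the repetition rate, plus the offset, plus or minus the measured beat. All of the numbers below are invented example values used only to show the arithmetic.

```python
f_rep = 250e6               # comb repetition rate, Hz (example value)
f_ceo = 20e6                # carrier-envelope offset frequency, Hz (example value)
n = 1_888_000               # index of the comb tooth nearest the unknown laser (example value)
f_beat = 35e6               # measured beat between the laser and that tooth, Hz (example value)

tooth = n * f_rep + f_ceo
for candidate in (tooth + f_beat, tooth - f_beat):
    print(f"{candidate / 1e12:.6f} THz")
# The sign ambiguity is resolved in practice, for example by nudging f_rep slightly
# and observing which way the beat note moves.
```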
Popular interferometric measurement techniques include Phase Shifting Interferometry (PSI) and Vertical Scanning Interferometry (VSI), also known as scanning white light interferometry (SWLI) or by the ISO term coherence scanning interferometry (CSI). CSI exploits coherence to extend the range of capabilities for interference microscopy. These techniques are widely used in micro-electronic and micro-optic fabrication. PSI uses monochromatic light and provides very precise measurements; however, it is only usable for surfaces that are very smooth. CSI often uses white light and high numerical apertures, and rather than looking at the phase of the fringes, as does PSI, looks for the best position of maximum fringe contrast or some other feature of the overall fringe pattern. In its simplest form, CSI provides less precise measurements than PSI but can be used on rough surfaces. Some configurations of CSI, variously known as Enhanced VSI (EVSI), high-resolution SWLI or Frequency Domain Analysis (FDA), use coherence effects in combination with interference phase to enhance precision. Phase Shifting Interferometry addresses several issues associated with the classical analysis of static interferograms. Classically, one measures the positions of the fringe centers. As seen in Fig. 13, fringe deviations from straightness and equal spacing provide a measure of the aberration. Errors in determining the location of the fringe centers provide the inherent limit to precision of the classical analysis, and any intensity variations across the interferogram will also introduce error. There is a trade-off between precision and number of data points: closely spaced fringes provide many data points of low precision, while widely spaced fringes provide a low number of high precision data points. Since fringe center data is all that one uses in the classical analysis, all of the other information that might theoretically be obtained by detailed analysis of the intensity variations in an interferogram is thrown away. Finally, with static interferograms, additional information is needed to determine the polarity of the wavefront: in Fig. 13, one can see that the tested surface on the right deviates from flatness, but one cannot tell from this single image whether this deviation from flatness is concave or convex. Traditionally, this information would be obtained using non-automated means, such as by observing the direction that the fringes move when the reference surface is pushed. Phase shifting interferometry overcomes these limitations by not relying on finding fringe centers, but rather by collecting intensity data from every point of the CCD image sensor. As seen in Fig. 17, multiple interferograms (at least three) are analyzed with the reference optical surface shifted by a precise fraction of a wavelength between each exposure using a piezoelectric transducer (PZT). Alternatively, precise phase shifts can be introduced by modulating the laser frequency. The captured images are processed by a computer to calculate the optical wavefront errors. The precision and reproducibility of PSI is far greater than possible in static interferogram analysis, with measurement repeatabilities of a hundredth of a wavelength being routine. Phase shifting technology has been adapted to a variety of interferometer types such as Twyman–Green, Mach–Zehnder, laser Fizeau, and even common path configurations such as point diffraction and lateral shearing interferometers.
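One widely used realization of the phase-shifting approach described above is the four-step algorithm, in which four frames are captured with the reference shifted in quarter-wavelength steps and the phase at each pixel is recovered with an arctangent. The sketch below applies it to synthetic data so the result can be checked against a known input; it illustrates the principle only, and the four-step formula is just one of several algorithms in use.

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wavefront phase from four frames with reference phase shifts of 0, 90, 180 and 270 degrees."""
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic check against a known phase map (arbitrary background and modulation values)
rng = np.random.default_rng(0)
true_phase = rng.uniform(-np.pi, np.pi, size=(4, 4))
background, modulation = 2.0, 0.8
frames = [background + modulation * np.cos(true_phase + k * np.pi / 2) for k in range(4)]

recovered = four_step_phase(*frames)
print(np.allclose(recovered, true_phase))   # True: the phase is recovered at every pixel
```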
More generally, phase shifting techniques can be adapted to almost any system that uses fringes for measurement, such as holographic and speckle interferometry. In coherence scanning interferometry, interference is only achieved when the path length delays of the interferometer are matched within the coherence time of the light source. CSI monitors the fringe contrast rather than the phase of the fringes. Fig. 17 illustrates a CSI microscope using a Mirau interferometer in the objective; other forms of interferometer used with white light include the Michelson interferometer (for low magnification objectives, where the reference mirror in a Mirau objective would interrupt too much of the aperture) and the Linnik interferometer (for high magnification objectives with limited working distance). The sample (or alternatively, the objective) is moved vertically over the full height range of the sample, and the position of maximum fringe contrast is found for each pixel. The chief benefit of coherence scanning interferometry is that systems can be designed that do not suffer from the 2 pi ambiguity of coherent interferometry, and as seen in Fig. 18, which scans a 180μm x 140μm x 10μm volume, it is well suited to profiling steps and rough surfaces. The axial resolution of the system is determined in part by the coherence length of the light source. Industrial applications include in-process surface metrology, roughness measurement, 3D surface metrology in hard-to-reach spaces and in hostile environments, profilometry of surfaces with high aspect ratio features (grooves, channels, holes), and film thickness measurement (semi-conductor and optical industries, etc.). Fig. 19 illustrates a Twyman–Green interferometer set up for white light scanning of a macroscopic object. Holographic interferometry is a technique which uses holography to monitor small deformations in single wavelength implementations. In multi-wavelength implementations, it is used to perform dimensional metrology of large parts and assemblies and to detect larger surface defects. Holographic interferometry was discovered by accident as a result of mistakes committed during the making of holograms. Early lasers were relatively weak and photographic plates were insensitive, necessitating long exposures during which vibrations or minute shifts might occur in the optical system. The resultant holograms, which showed the holographic subject covered with fringes, were considered ruined. Eventually, several independent groups of experimenters in the mid-60s realized that the fringes encoded important information about dimensional changes occurring in the subject, and began intentionally producing holographic double exposures. The main Holographic interferometry article covers the disputes over priority of discovery that occurred during the issuance of the patent for this method. Double- and multi- exposure holography is one of three methods used to create holographic interferograms. A first exposure records the object in an unstressed state. Subsequent exposures on the same photographic plate are made while the object is subjected to some stress. The composite image depicts the difference between the stressed and unstressed states. Real-time holography is a second method of creating holographic interferograms. A holograph of the unstressed object is created. 
This holograph is illuminated with a reference beam to generate a hologram image of the object directly superimposed over the original object itself while the object is being subjected to some stress. The object waves from this hologram image will interfere with new waves coming from the object. This technique allows real time monitoring of shape changes. The third method, time-average holography, involves creating a holograph while the object is subjected to a periodic stress or vibration. This yields a visual image of the vibration pattern. Interferometric synthetic aperture radar (InSAR) is a radar technique used in geodesy and remote sensing. Satellite synthetic aperture radar images of a geographic feature are taken on separate days, and changes that have taken place between radar images taken on the separate days are recorded as fringes similar to those obtained in holographic interferometry. The technique can monitor centimeter- to millimeter-scale deformation resulting from earthquakes, volcanoes and landslides, and also has uses in structural engineering, in particular for the monitoring of subsidence and structural stability. Fig 20 shows Kilauea, an active volcano in Hawaii. Data acquired using the space shuttle Endeavour's X-band Synthetic Aperture Radar on April 13, 1994 and October 4, 1994 were used to generate interferometric fringes, which were overlaid on the X-SAR image of Kilauea. Electronic speckle pattern interferometry (ESPI), also known as TV holography, uses video detection and recording to produce an image of the object upon which is superimposed a fringe pattern which represents the displacement of the object between recordings. (see Fig. 21) The fringes are similar to those obtained in holographic interferometry. When lasers were first invented, laser speckle was considered to be a severe drawback in using lasers to illuminate objects, particularly in holographic imaging because of the grainy image produced. It was later realized that speckle patterns could carry information about the object's surface deformations. Butters and Leendertz developed the technique of speckle pattern interferometry in 1970, and since then, speckle has been exploited in a variety of other applications. A photograph is made of the speckle pattern before deformation, and a second photograph is made of the speckle pattern after deformation. Digital subtraction of the two images results in a correlation fringe pattern, where the fringes represent lines of equal deformation. Short laser pulses in the nanosecond range can be used to capture very fast transient events. A phase problem exists: In the absence of other information, one cannot tell the difference between contour lines indicating a peak versus contour lines indicating a trough. To resolve the issue of phase ambiguity, ESPI may be combined with phase shifting methods. A method of establishing precise geodetic baselines, invented by Yrjö Väisälä, exploited the low coherence length of white light. Initially, white light was split in two, with the reference beam "folded", bouncing back-and-forth six times between a mirror pair spaced precisely 1 m apart. Only if the test path was precisely 6 times the reference path would fringes be seen. Repeated applications of this procedure allowed precise measurement of distances up to 864 meters. Baselines thus established were used to calibrate geodetic distance measurement equipment, leading to a metrologically traceable scale for geodetic networks measured by these instruments. 
(This method has been superseded by GPS.) Other uses of interferometers have been to study dispersion of materials, measurement of complex indices of refraction, and thermal properties. They are also used for three-dimensional motion mapping including mapping vibrational patterns of structures. Biology and medicine Optical interferometry, applied to biology and medicine, provides sensitive metrology capabilities for the measurement of biomolecules, subcellular components, cells and tissues. Many forms of label-free biosensors rely on interferometry because the direct interaction of electromagnetic fields with local molecular polarizability eliminates the need for fluorescent tags or nanoparticle markers. At a larger scale, cellular interferometry shares aspects with phase-contrast microscopy, but comprises a much larger class of phase-sensitive optical configurations that rely on optical interference among cellular constituents through refraction and diffraction. At the tissue scale, partially-coherent forward-scattered light propagation through the micro aberrations and heterogeneity of tissue structure provides opportunities to use phase-sensitive gating (optical coherence tomography) as well as phase-sensitive fluctuation spectroscopy to image subtle structural and dynamical properties. Optical coherence tomography (OCT) is a medical imaging technique using low-coherence interferometry to provide tomographic visualization of internal tissue microstructures. As seen in Fig. 22, the core of a typical OCT system is a Michelson interferometer. One interferometer arm is focused onto the tissue sample and scans the sample in an X-Y longitudinal raster pattern. The other interferometer arm is bounced off a reference mirror. Reflected light from the tissue sample is combined with reflected light from the reference. Because of the low coherence of the light source, interferometric signal is observed only over a limited depth of sample. X-Y scanning therefore records one thin optical slice of the sample at a time. By performing multiple scans, moving the reference mirror between each scan, an entire three-dimensional image of the tissue can be reconstructed. Recent advances have striven to combine the nanometer phase retrieval of coherent interferometry with the ranging capability of low-coherence interferometry. Phase contrast and differential interference contrast (DIC) microscopy are important tools in biology and medicine. Most animal cells and single-celled organisms have very little color, and their intracellular organelles are almost totally invisible under simple bright field illumination. These structures can be made visible by staining the specimens, but staining procedures are time-consuming and kill the cells. As seen in Figs. 24 and 25, phase contrast and DIC microscopes allow unstained, living cells to be studied. DIC also has non-biological applications, for example in the analysis of planar silicon semiconductor processing. Angle-resolved low-coherence interferometry (a/LCI) uses scattered light to measure the sizes of subcellular objects, including cell nuclei. This allows interferometry depth measurements to be combined with density measurements. Various correlations have been found between the state of tissue health and the measurements of subcellular objects. For example, it has been found that as tissue changes from normal to cancerous, the average cell nuclei size increases. Phase-contrast X-ray imaging (Fig. 
26) refers to a variety of techniques that use phase information of a coherent x-ray beam to image soft tissues. (For an elementary discussion, see Phase-contrast x-ray imaging (introduction). For a more in-depth review, see Phase-contrast X-ray imaging.) It has become an important method for visualizing cellular and histological structures in a wide range of biological and medical studies. There are several technologies being used for x-ray phase-contrast imaging, all utilizing different principles to convert phase variations in the x-rays emerging from an object into intensity variations. These include propagation-based phase contrast, Talbot interferometry, Moiré-based far-field interferometry, refraction-enhanced imaging, and x-ray interferometry. These methods provide higher contrast compared to normal absorption-contrast x-ray imaging, making it possible to see smaller details. A disadvantage is that these methods require more sophisticated equipment, such as synchrotron or microfocus x-ray sources, x-ray optics, or high resolution x-ray detectors. See also Coherence Coherence scanning interferometry Fine Guidance Sensor (HST) (HST FGS are interferometers) Holography Interferometric visibility Interference lithography List of types of interferometers Ramsey interferometry Seismic interferometry Superposition principle Very-long-baseline interferometry Zero spacing flux References Optical instruments Plasma diagnostics Articles containing video clips
Interferometry
[ "Physics", "Technology", "Engineering" ]
10,098
[ "Plasma diagnostics", "Measuring instruments", "Plasma physics" ]
166,697
https://en.wikipedia.org/wiki/Greek%20numerals
Greek numerals, also known as Ionic, Ionian, Milesian, or Alexandrian numerals, is a system of writing numbers using the letters of the Greek alphabet. In modern Greece, they are still used for ordinal numbers and in contexts similar to those in which Roman numerals are still used in the Western world. For ordinary cardinal numbers, however, modern Greece uses Arabic numerals. History The Minoan and Mycenaean civilizations' Linear A and Linear B alphabets used a different system, called Aegean numerals, which included number-only symbols for powers of ten:  = 1,  = 10,  = 100,  = 1000, and  = 10000. Attic numerals composed another system that came into use perhaps in the 7th century BC. They were acrophonic, derived (after the initial one) from the first letters of the names of the numbers represented. They ran  = 1,  = 5,  = 10,  = 100,  = 1,000, and  = 10,000. The numbers 50, 500, 5,000, and 50,000 were represented by the letter with minuscule powers of ten written in the top right corner: , , , and . One-half was represented by (left half of a full circle) and one-quarter by ɔ (right side of a full circle). The same system was used outside of Attica, but the symbols varied with the local alphabets, for example, 1,000 was in Boeotia. The present system probably developed around Miletus in Ionia. 19th century classicists placed its development in the 3rd century BC, the occasion of its first widespread use. More thorough modern archaeology has caused the date to be pushed back at least to the 5th century BC, a little before Athens abandoned its pre-Eucleidean alphabet in favour of Miletus's in 402 BC, and it may predate that by a century or two. The present system uses the 24 letters adopted under Eucleides, as well as three Phoenician and Ionic ones that had not been dropped from the Athenian alphabet (although kept for numbers): digamma, koppa, and sampi. The position of those characters within the numbering system imply that the first two were still in use (or at least remembered as letters) while the third was not. The exact dating, particularly for sampi, is problematic since its uncommon value means the first attested representative near Miletus does not appear until the 2nd century BC, and its use is unattested in Athens until the 2nd century CE. (In general, Athenians resisted using the new numerals for the longest of any Greek state, but had fully adopted them by .) Description Greek numerals are decimal, based on powers of 10. The units from 1 to 9 are assigned to the first nine letters of the old Ionic alphabet from alpha to theta. Instead of reusing these numbers to form multiples of the higher powers of ten, however, each multiple of ten from 10 to 90 was assigned its own separate letter from the next nine letters of the Ionic alphabet from iota to koppa. Each multiple of one hundred from 100 to 900 was then assigned its own separate letter as well, from rho to sampi. (That this was not the traditional location of sampi in the Ionic alphabetical order has led classicists to conclude that sampi had fallen into disuse as a letter by the time the system was created.) This alphabetic system operates on the additive principle in which the numeric values of the letters are added together to obtain the total. For example, 241 was represented as  (200 + 40 + 1). (It was not always the case that the numbers ran from highest to lowest: a 4th-century BC inscription at Athens placed the units to the left of the tens. This practice continued in Asia Minor well into the Roman period.) 
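The additive scheme just described maps directly onto a short conversion routine: pick the letter for each decimal digit according to its position (units, tens, hundreds) and concatenate. A minimal sketch for 1–999, using the standard letter values with stigma, koppa and sampi for 6, 90 and 900; modern lowercase forms are used for convenience, and the keraia and thousands marks described below are omitted.

```python
UNITS    = ["", "α", "β", "γ", "δ", "ε", "ϛ", "ζ", "η", "θ"]
TENS     = ["", "ι", "κ", "λ", "μ", "ν", "ξ", "ο", "π", "ϟ"]
HUNDREDS = ["", "ρ", "σ", "τ", "υ", "φ", "χ", "ψ", "ω", "ϡ"]

def to_greek(n: int) -> str:
    """Additive Greek (Ionic) numeral for 1-999."""
    if not 1 <= n <= 999:
        raise ValueError("this sketch only handles 1-999")
    return HUNDREDS[n // 100] + TENS[n // 10 % 10] + UNITS[n % 10]

print(to_greek(241))   # σμα: 200 + 40 + 1, the example given above
print(to_greek(666))   # χξϛ: 600 + 60 + 6
```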
In ancient and medieval manuscripts, these numerals were eventually distinguished from letters using overbars: , , , etc. In medieval manuscripts of the Book of Revelation, the number of the Beast 666 is written as  (600 + 60 + 6). (Numbers larger than 1,000 reused the same letters but included various marks to note the change.) Fractions were indicated as the denominator followed by a keraia (ʹ); γʹ indicated one third, δʹ one fourth and so on. As an exception, special symbol ∠ʹ indicated one half, and γ°ʹ or γoʹ was two-thirds. These fractions were additive (also known as Egyptian fractions); for example indicated . Although the Greek alphabet began with only majuscule forms, surviving papyrus manuscripts from Egypt show that uncial and cursive minuscule forms began early. These new letter forms sometimes replaced the former ones, especially in the case of the obscure numerals. The old Q-shaped koppa (Ϙ) began to be broken up ( and ) and simplified ( and ). The numeral for 6 changed several times. During antiquity, the original letter form of digamma (Ϝ) came to be avoided in favour of a special numerical one (). By the Byzantine era, the letter was known as episemon and written as or . This eventually merged with the sigma-tau ligature stigma ϛ ( or ). In modern Greek, a number of other changes have been made. Instead of extending an over bar over an entire number, the keraia (, lit. "hornlike projection") is marked to its upper right, a development of the short marks formerly used for single numbers and fractions. The modern keraia () is a symbol similar to the acute accent (´), the tonos (U+0384,΄) and the prime symbol (U+02B9, ʹ), but has its own Unicode character as U+0374. Alexander the Great's father Philip II of Macedon is thus known as in modern Greek. A lower left keraia (Unicode: U+0375, "Greek Lower Numeral Sign") is now standard for distinguishing thousands: 2019 is represented as ͵ΒΙΘʹ (). The declining use of ligatures in the 20th century also means that stigma is frequently written as the separate letters ΣΤʹ, although a single keraia is used for the group. Isopsephy The practice of adding up the number values of Greek letters of words, names and phrases, thus connecting the meaning of words, names and phrases with others with equivalent numeric sums, is called isopsephy. Similar practices for the Hebrew and English are called gematria and English Qaballa, respectively. Table Alternatively, sub-sections of manuscripts are sometimes numbered by lowercase characters (αʹ. βʹ. γʹ. δʹ. εʹ. ϛʹ. ζʹ. ηʹ. θʹ.). In Ancient Greek, myriad notation is used for multiples of 10,000, for example for 20,000 or (also written on the line as Μ ) for 1,234,567. Higher numbers In his text The Sand Reckoner, the natural philosopher Archimedes gives an upper bound of the number of grains of sand required to fill the entire universe, using a contemporary estimation of its size. This would defy the then-held notion that it is impossible to name a number greater than that of the sand on a beach or on the entire world. In order to do that, he had to devise a new numeral scheme with much greater range. Pappus of Alexandria reports that Apollonius of Perga developed a simpler system based on powers of the myriad; was 10,000, was 10,0002 = 100,000,000, was 10,0003 = 1012 and so on. 
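Apollonius's scheme of successive powers of the myriad amounts to writing a number in base 10,000. The sketch below performs only the arithmetic decomposition; the Greek notation used to write the coefficients and the myriad marker is not reproduced here.

```python
def myriad_decomposition(n: int):
    """Coefficients of successive powers of the myriad (10,000), most significant first."""
    parts = []
    power = 0
    while n:
        n, coefficient = divmod(n, 10_000)
        parts.append((coefficient, power))
        power += 1
    return parts[::-1] or [(0, 0)]

print(myriad_decomposition(20_000))      # [(2, 1), (0, 0)]: two myriads
print(myriad_decomposition(1_234_567))   # [(123, 1), (4567, 0)]: 123 myriads and 4,567
```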
Zero Hellenistic astronomers extended alphabetic Greek numerals into a sexagesimal positional numbering system by limiting each position to a maximum value of 50 + 9 and including a special symbol for zero, which was only used alone for a whole table cell, rather than combined with other digits, like today's modern zero, which is a placeholder in positional numeric notation. This system was probably adapted from Babylonian numerals by Hipparchus . It was then used by Ptolemy (), Theon () and Theon's daughter Hypatia (). The symbol for zero is clearly different from that of the value for 70, omicron or "ο". In the 2nd-century papyrus shown here, one can see the symbol for zero in the lower right, and a number of larger omicrons elsewhere in the same papyrus. In Ptolemy's table of chords, the first fairly extensive trigonometric table, there were 360 rows, portions of which looked as follows: Each number in the first column, labeled ["regions"] is the number of degrees of arc on a circle. Each number in the second column, labeled ["straight lines" or "segments"] is the length of the corresponding chord of the circle, when the diameter is 120. Thus represents an 84° arc, and the ∠′ after it means one-half, so that πδ∠′ means °. In the next column we see  , meaning . That is the length of the chord corresponding to an arc of ° when the diameter of the circle is 120. The next column, labeled for "sixtieths", is the number to be added to the chord length for each 1' increase in the arc, over the span of the next 1°. Thus that last column was used for linear interpolation. The Greek sexagesimal placeholder or zero symbol changed over time: The symbol used on papyri during the second century was a very small circle with an overbar several diameters long, terminated or not at both ends in various ways. Later, the overbar shortened to only one diameter, similar to the modern o-macron (ō) which was still being used in late medieval Arabic manuscripts whenever alphabetic numerals were used, later the overbar was omitted in Byzantine manuscripts, leaving a bare ο (omicron). This gradual change from an invented symbol to ο does not support the hypothesis that the latter was the initial of meaning "nothing". Note that the letter ο was still used with its original numerical value of 70; however, there was no ambiguity, as 70 could not appear in the fractional part of a sexagesimal number, and zero was usually omitted when it was the integer. Some of Ptolemy's true zeros appeared in the first line of each of his eclipse tables, where they were a measure of the angular separation between the center of the Moon and either the center of the Sun (for solar eclipses) or the center of Earth's shadow (for lunar eclipses). All of these zeros took the form , where Ptolemy actually used three of the symbols described in the previous paragraph. The vertical bar (|) indicates that the integral part on the left was in a separate column labeled in the headings of his tables as digits (of five arc-minutes each), whereas the fractional part was in the next column labeled minute of immersion, meaning sixtieths (and thirty-six-hundredths) of a digit. The Greek zero was added to Unicode at . See also (acrophonic, not alphabetic, numerals) , based on the Greek system References External links The Greek Number Converter Numeral systems Numerals Numerals
Greek numerals
[ "Mathematics" ]
2,355
[ "Numeral systems", "Numerals", "Mathematical objects", "Numbers" ]
166,716
https://en.wikipedia.org/wiki/Chamomile
Chamomile (American English) or camomile (British English; see spelling differences) ( or ) is the common name for several plants of the family Asteraceae. Two of the species, Matricaria chamomilla and Chamaemelum nobile, are commonly used to make herbal infusions for beverages. There has been limited (though thus far insufficient) research as to whether consuming chamomile in foods or beverages is effective in treating medical conditions. Etymology The word chamomile is derived via French and Latin, from the Greek , from , and . First used in the 13th century, the spelling chamomile corresponds to the Latin and the Greek . The spelling camomile is a British derivation from the French. Species Some commonly used species include: Matricaria chamomilla – often called "German chamomile" or "Water of Youth" Chamaemelum nobile – Roman, English, or garden chamomile; also frequently used (C. nobile Treneague is normally used to create a chamomile lawn) A number of other species' common names include the word chamomile. This does not necessarily mean they are used in the same manner as the species used in the herbal tea known as "chamomile". Plants including the common name chamomile, of the family Asteraceae, are: Anthemis arvensis – corn, scentless or field chamomile Anthemis cotula – stinking chamomile Cladanthus mixtus – Moroccan chamomile Cota tinctoria – dyer's, golden, oxeye, or yellow chamomile Eriocephalus punctulatus – Cape chamomile Matricaria discoidea – wild chamomile or pineapple weed Oncosiphon pilulifer – globe chamomile Tripleurospermum inodorum – wild, scentless or false chamomile Uses Chamomile may be used as a flavouring agent in foods and beverages, mouthwash, soaps, and cosmetics. Chamomile tea is a herbal infusion made from dried flowers and hot water, and may improve sleep quality. Two types of chamomile are used, namely German chamomile (Matricaria recutita) and Roman chamomile (Chamaemelum nobile). Chamomile has historically been used as one of the flavouring ingredients in beer, and is sometimes used by modern brewers. Usually the whole plant is used, adding a bitter flavour component. Chamomile, chiefly Chamaemelum nobile cultivars, is used to "upholster" chamomile seats, raised beds which are about half a meter tall, and designed to be sat upon. Chamomile lawns are also used in sunny areas with light traffic. Research The main compounds of interest in chamomile flowers are coumarins, flavonoids, and polyphenols, including apigenin, quercetin, patuletin, luteolin, and daphnin. It is currently unclear whether chamomile is effective in treating any medical conditions. Chamomile is under preliminary research for its potential anti-anxiety properties. There is no high-quality clinical evidence that it is useful for treating insomnia. Drug interactions The use of chamomile has the potential to cause adverse interactions with numerous herbal products and prescription drugs and may worsen pollen allergies. People who are allergic to ragweed (also in the daisy family) may be allergic to chamomile due to cross-reactivity. Chamomile consists of several ingredients including coumarin, glycoside, herniarin, flavonoid, farnesol, nerolidol and germacranolide. Despite the presence of coumarin, as chamomile's effect on the coagulation system has not yet been studied, it is unknown whether a clinically significant drug–herb interaction exists with anticoagulant drugs. However, until more information is available, it is not recommended to use these substances concurrently. 
Chamomile should not be used by people with past or present cancers of the breast, ovary, or uterus; endometriosis; or uterine fibroids. Pregnancy and breastfeeding Because chamomile has been known to cause uterine contractions that can invoke miscarriage, pregnant women are advised to not consume Roman chamomile (Chamaemelum nobile). Although oral consumption of chamomile is generally recognized as safe in the United States, there is insufficient clinical evidence about its potential for affecting nursing infants. Agriculture The chamomile plant is known to be susceptible to many fungi, insects, and viruses. The following fungi are known to attack this plant: Albugo tragopogonis (white rust), Cylindrosporium matricariae, Erysiphe cichoracearum (powdery mildew), E. polyphage, Halicobasidium purpureum, Peronospora leptosperma, Peronospora radii, Phytophthora cactorum, Puccinia anthemedis, Puccinia matricaiae, Septoria chamomillae, and Sphaerotheca macularis (powdery mildew). Also, yellow virus (Chlorogenus callistephi var. californicus Holmes, Callistephus virus 1A) causes severe damage to this plant. Aphids (Aphis fabae) have been observed feeding on chamomile plants and the moth Autographa chryson causes defoliation.The insect Nysius minor caused shedding of M. chamomilla flowers, Historical descriptions The 11th century part of Old English Illustrated Herbal has an illustrated entry. Nicholas Culpeper's 17th century The Complete Herbal has an illustration and several entries on chamomel. In culture In The Tale of Peter Rabbit by Beatrix Potter (1902), Peter's mother gives him chamomile tea to cure his stomachache. Mary Wesley's 1984 novel The Camomile Lawn features a house in Cornwall with a lawn planted with chamomile rather than grass. In the 2001 No Doubt song "Hey Baby", chamomile is featured in the line "I'm just sippin' on chamomile", sung by Gwen Stefani. Chamomile is the national flower of Russia. In Shakespeare’s Henry IV part 1 Falstaff proclaims “…the camomile grows faster the more it is trodden on“. References External links PLANTS Profile: Anthemis tinctoria L. (golden chamomile), USDA Flower teas Herbal teas Medicinal plants Medicinal plants of Europe Medicinal plants of North America Flora of Mexico Plant common names
Chamomile
[ "Biology" ]
1,464
[ "Plant common names", "Common names of organisms", "Plants" ]
166,760
https://en.wikipedia.org/wiki/Windsock
A windsock (a wind cone or wind sleeve) is a conical textile tube that resembles a giant sock. It can be used as a basic indicator of wind speed and direction, or as decoration. Windsocks are typically used at airports to show the direction and strength of the wind to pilots, and at chemical plants where there is risk of gaseous leakage. They are also sometimes located alongside highways at windy locations. At many airports, windsocks are externally or internally lit at night. Wind direction is opposite the direction in which the windsock is pointing. Wind speed is indicated by the windsock's angle relative to the mounting pole: in low winds it droops; in high winds, it flies horizontally. History Alternating stripes of high-visibility orange and white were initially used to help estimate wind speed, with each stripe adding 3 knots (5.6 km/h; 3.5 mph) to the estimated speed. However, some circular frame mountings cause windsocks to be held open at one end, indicating a velocity of 3 knots even when stripes are not present. A fully extended windsock suggests a wind speed of or greater. Standards Per FAA standards, a properly functioning windsock orients itself to a breeze of at least and fully extends in a wind of . Per Transport Canada standards, a 15-knot wind fully extends the windsock; a wind raises it to 5° below the horizontal; and a wind raises it to 30° below the horizontal. ICAO standards specify a truncated cone-shaped windsock at least long and in diameter at the large end. It should be readable from an altitude of and ideally be of a single colour. If it is necessary to use two colours, they should ideally be orange and white, arranged in five alternating bands, with the first and last darker in tone. In wind speeds of or more, they must indicate wind direction to within ±5°. Other related wind direction indicators Wind tees and wind tetrahedrons are two other commonly used wind direction indicators at airports. Wind tees are shaped like an airplane so that they match the heading of an aircraft ready to take off and land. Wind tetrahedrons always have their pointy ends pointing into the wind. Wind tees and tetrahedrons can swing freely and align themselves with the wind direction, but neither measures the wind speed, unlike a windsock. Since a wind tee or tetrahedron can also be manually set to align with the runway in use, a pilot should also look at the windsock for wind information, if one is available. See also Air sock Anemoscope – meteorological device for measuring wind direction Anemometer – meteorological device for measuring wind speed Draco (military standard) – military standard carried by the Roman cavalry Koinobori – Japanese decorative carp-shaped windsocks Traffic pattern indicator, which may include a windsock at its center Notes References Airport infrastructure Meteorological instrumentation and equipment Socks
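The stripe and extension conventions above lend themselves to a rough rule-of-thumb calculation. The sketch below is illustrative only: it uses just the figures quoted in the text (about 3 knots per inflated stripe, and full extension at roughly 15 knots per the Transport Canada standard), and the function names are invented for this example.

```python
# Rough wind-speed estimates from a striped windsock; breakpoints follow the
# figures quoted in the article, and names are invented for this sketch.

def speed_from_stripes(inflated_stripes: int) -> int:
    """Each fully inflated stripe adds roughly 3 knots."""
    return 3 * inflated_stripes

def speed_from_extension(fully_extended: bool) -> str:
    """Full extension suggests roughly 15 knots or more; otherwise less."""
    return "about 15 knots or more" if fully_extended else "below about 15 knots"

print(speed_from_stripes(4))          # 12 -> roughly 12 knots
print(speed_from_extension(True))     # about 15 knots or more
```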
Windsock
[ "Technology", "Engineering" ]
597
[ "Airport infrastructure", "Meteorological instrumentation and equipment", "Measuring instruments", "Aerospace engineering" ]
166,796
https://en.wikipedia.org/wiki/Fissile%20material
In nuclear engineering, fissile material is material that can undergo nuclear fission when struck by a neutron of low energy. A self-sustaining thermal chain reaction can only be achieved with fissile material. The predominant neutron energy in a system may be typified by either slow neutrons (i.e., a thermal system) or fast neutrons. Fissile material can be used to fuel thermal-neutron reactors, fast-neutron reactors and nuclear explosives. Fissile vs fissionable The term fissile is distinct from fissionable. A nuclide that can undergo nuclear fission (even with a low probability) after capturing a neutron of high or low energy is referred to as fissionable. A fissionable nuclide that can undergo fission with a high probability after capturing a low-energy thermal neutron is referred to as fissile. Fissionable materials include those (such as uranium-238) for which fission can be induced only by high-energy neutrons. As a result, fissile materials (such as uranium-235) are a subset of fissionable materials. Uranium-235 fissions with low-energy thermal neutrons because the binding energy resulting from the absorption of a neutron is greater than the critical energy required for fission; therefore uranium-235 is fissile. By contrast, the binding energy released by uranium-238 absorbing a thermal neutron is less than the critical energy, so the neutron must possess additional energy for fission to be possible. Consequently, uranium-238 is fissionable but not fissile. An alternative definition defines fissile nuclides as those nuclides that can be made to undergo nuclear fission (i.e., are fissionable) and also produce neutrons from such fission that can sustain a nuclear chain reaction in the correct setting. Under this definition, the only nuclides that are fissionable but not fissile are those nuclides that can be made to undergo nuclear fission but produce insufficient neutrons, in either energy or number, to sustain a nuclear chain reaction. As such, while all fissile isotopes are fissionable, not all fissionable isotopes are fissile. In the arms control context, particularly in proposals for a Fissile Material Cutoff Treaty, the term fissile is often used to describe materials that can be used in the fission primary of a nuclear weapon. These are materials that sustain an explosive fast neutron nuclear fission chain reaction. Under all definitions above, uranium-238 () is fissionable, but not fissile. Neutrons produced by fission of have lower energies than the original neutron (they behave as in an inelastic scattering), usually below 1 MeV (i.e., a speed of about 14,000 km/s), the fission threshold to cause subsequent fission of , so fission of does not sustain a nuclear chain reaction. Fast fission of in the secondary stage of a thermonuclear weapon, due to the production of high-energy neutrons from nuclear fusion, contributes greatly to the yield and to fallout of such weapons. Fast fission of tampers has also been evident in pure fission weapons. The fast fission of also makes a significant contribution to the power output of some fast-neutron reactors. Fissile nuclides In general, most actinide isotopes with an odd neutron number are fissile. Most nuclear fuels have an odd atomic mass number ( = the total number of nucleons), and an even atomic number Z. This implies an odd number of neutrons. Isotopes with an odd number of neutrons gain an extra 1 to 2 MeV of energy from absorbing an extra neutron, from the pairing effect which favors even numbers of both neutrons and protons. 
This energy is enough to supply the needed extra energy for fission by slower neutrons, which is important for making fissionable isotopes also fissile. More generally, nuclides with an even number of protons and an even number of neutrons, and located near a well-known curve in nuclear physics of atomic number vs. atomic mass number, are more stable than others; hence, they are less likely to undergo fission. They are more likely to "ignore" the neutron and let it go on its way, or else to absorb the neutron but without gaining enough energy from the process to deform the nucleus enough for it to fission. These "even-even" isotopes are also less likely to undergo spontaneous fission, and they also have relatively much longer partial half-lives for alpha or beta decay. Examples of these isotopes are uranium-238 and thorium-232. On the other hand, other than the lightest nuclides, nuclides with an odd number of protons and an odd number of neutrons (odd Z, odd N) are usually short-lived (a notable exception is neptunium-236, with a half-life of 154,000 years) because they readily decay by beta-particle emission to their isobars with an even number of protons and an even number of neutrons (even Z, even N), becoming much more stable. The physical basis for this phenomenon also comes from the pairing effect in nuclear binding energy, but this time from both proton–proton and neutron–neutron pairing. The relatively short half-life of such odd-odd heavy isotopes means that they are not available in quantity and are highly radioactive. According to the fissility rule proposed by Yigal Ronen, for a heavy element with Z between 90 and 100, an isotope is fissile if and only if 2 × Z − N = 43 ± 2 (where N = number of neutrons and Z = number of protons), with a few exceptions. This rule holds for all but fourteen nuclides – seven that satisfy the criterion but are nonfissile, and seven that are fissile but do not satisfy the criterion. Nuclear fuel To be a useful fuel for nuclear fission chain reactions, the material must: Be in the region of the binding energy curve where a fission chain reaction is possible (i.e., above radium) Have a high probability of fission on neutron capture Release more than one neutron on average per neutron capture (enough of them on each fission to compensate for non-fissions and absorptions in non-fuel material) Have a reasonably long half-life Be available in suitable quantities. Fissile nuclides in nuclear fuels include: Uranium-233, bred from thorium-232 by neutron capture, with intermediate decay steps omitted. Uranium-235, which occurs in natural uranium and enriched uranium. Plutonium-239, bred from uranium-238 by neutron capture, with intermediate decay steps omitted. Plutonium-241, bred from plutonium-240 directly by neutron capture. Fissile nuclides do not have a 100% chance of undergoing fission on absorption of a neutron. The chance is dependent on the nuclide as well as neutron energy. For low and medium-energy neutrons, the neutron capture cross sections for fission (σF), the cross section for neutron capture with emission of a gamma ray (σγ), and the percentage of non-fissions are given in the table at right. Fertile nuclides in nuclear fuels include: Thorium-232, which breeds uranium-233 by neutron capture, with intermediate decay steps omitted. Uranium-238, which breeds plutonium-239 by neutron capture, with intermediate decay steps omitted. Plutonium-240, which breeds plutonium-241 directly by neutron capture.
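The fissility rule stated above is easy to check mechanically against well-known nuclides. The Python sketch below is illustrative only: the proton numbers and mass numbers are ordinary nuclide data, and the dictionary and function names are invented for this example.

```python
# Check Ronen's fissility rule: for 90 <= Z <= 100, an isotope is
# (with a few exceptions) fissile when 2*Z - N = 43 +/- 2, where N = A - Z.
# Nuclide data are standard (Z, mass number A); names are invented here.

NUCLIDES = {
    "Th-232": (90, 232),
    "U-233":  (92, 233),
    "U-235":  (92, 235),
    "U-238":  (92, 238),
    "Pu-239": (94, 239),
    "Pu-241": (94, 241),
}

def satisfies_fissility_rule(z: int, a: int) -> bool:
    """True when 2*Z - N lies within 43 +/- 2 for 90 <= Z <= 100."""
    n = a - z
    return 90 <= z <= 100 and abs(2 * z - n - 43) <= 2

for name, (z, a) in NUCLIDES.items():
    value = 2 * z - (a - z)
    print(f"{name}: 2Z - N = {value}, rule satisfied: {satisfies_fissility_rule(z, a)}")

# U-233, U-235, Pu-239 and Pu-241 satisfy the rule (values 41-43),
# while Th-232 and U-238 (value 38) do not, matching the fissile/fertile
# classification given in the article.
```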
See also Fertile material Fission product Special nuclear material Notes References Nuclear physics Nuclear fission Nuclear weapon design
Fissile material
[ "Physics", "Chemistry" ]
1,558
[ "Explosive chemicals", "Nuclear fission", "Fissile materials", "Nuclear physics" ]
166,803
https://en.wikipedia.org/wiki/Shutter%20speed
In photography, shutter speed or exposure time is the length of time that the film or digital sensor inside the camera is exposed to light (that is, when the camera's shutter is open) when taking a photograph. The amount of light that reaches the film or image sensor is proportional to the exposure time. of a second will let half as much light in as . Introduction The camera's shutter speed, the lens's aperture or f-stop, and the scene's luminance together determine the amount of light that reaches the film or sensor (the exposure). Exposure value (EV) is a quantity that accounts for the shutter speed and the f-number. Once the sensitivity to light of the recording surface (either film or sensor) is set in numbers expressed in "ISOs" (e.g. 200 ISO, 400 ISO), the light emitted by the scene photographed can be controlled through aperture and shutter-speed to match the film or sensor sensitivity to light. This will achieve a good exposure when all the details of the scene are legible on the photograph. Too much light let into the camera results in an overly pale image (or "over-exposure") while too little light will result in an overly dark image (or "under-exposure"). Multiple combinations of shutter speed and f-number can give the same exposure value (E.V.). According to exposure value formula, doubling the exposure time doubles the amount of light (subtracts 1 EV). Reducing the aperture size at multiples of one over the square root of two lets half as much light into the camera, usually at a predefined scale of , , , , , , , , , , and so on. For example, lets four times more light into the camera as does. A shutter speed of  s with an aperture gives the same exposure value as a  s shutter speed with an aperture, and also the same exposure value as a  s shutter speed with an aperture, or  s at . In addition to its effect on exposure, the shutter speed changes the way movement appears in photographs. Very short shutter speeds can be used to freeze fast-moving subjects, for example at sporting events. Very long shutter speeds are used to intentionally blur a moving subject for effect. Short exposure times are sometimes called "fast", and long exposure times "slow". Adjustments to the aperture need to be compensated by changes of the shutter speed to keep the same (right) exposure. In early days of photography, available shutter speeds were not standardized, though a typical sequence might have been  s,  s,  s,  s,  s and  s; neither were apertures or film sensitivity (at least 3 different national standards existed). Soon this problem resulted in a solution consisting in the adoption of a standardized way of choosing aperture so that each major step exactly doubled or halved the amount of light entering the camera (, , , , , , etc.), a standardized 2:1 scale was adopted for shutter speed so that opening one aperture stop and reducing the amount of time of the shutter speed by one step resulted in the identical exposure. The agreed standards for shutter speeds are: . With this scale, each increment roughly doubles the amount of light (longer time) or halves it (shorter time). Camera shutters often include one or two other settings for making very long exposures: B (for bulb) keeps the shutter open as long as the shutter release is held. T (for time) keeps the shutter open (once the shutter-release button had been depressed) until the shutter release is pressed again. 
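The reciprocity between aperture and exposure time described above can be made concrete with the standard exposure-value relation EV = log2(N²/t), where N is the f-number and t the exposure time in seconds. The short Python sketch below is a minimal illustration, not a statement about any particular camera: the example f-stops and shutter speeds are chosen for this sketch, and the function name is invented.

```python
import math

def exposure_value(f_number: float, exposure_time_s: float) -> float:
    """EV = log2(N^2 / t); illustrative helper for equivalent exposures."""
    return math.log2(f_number ** 2 / exposure_time_s)

# Nominal f-stops and shutter speeds are rounded values, so "equivalent"
# settings agree only approximately (all close to EV 13 here).
for f, t in [(8, 1 / 125), (5.6, 1 / 250), (11, 1 / 60)]:
    print(f"f/{f} at 1/{round(1 / t)} s -> EV {exposure_value(f, t):.2f}")
```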
The ability of the photographer to take images without noticeable blurring by camera movement is an important parameter in the choice of the slowest possible shutter speed for a handheld camera. The rough guide used by most 35 mm photographers is that the slowest shutter speed that can be used easily without much blur due to camera shake is the shutter speed numerically closest to the lens focal length. For example, for handheld use of a 35 mm camera with a 50 mm normal lens, the closest shutter speed is  s (closest to "50"), while for a 200 mm lens it is recommended not to choose shutter speeds below  s. This rule can be augmented with knowledge of the intended application for the photograph, an image intended for significant enlargement and closeup viewing would require faster shutter speeds to avoid obvious blur. Through practice and special techniques such as bracing the camera, arms, or body to minimize camera movement, using a monopod or a tripod, slower shutter speeds can be used without blur. If a shutter speed is too slow for hand holding, a camera support, usually a tripod, must be used. Image stabilization on digital cameras or lenses can often permit the use of shutter speeds 3–4 stops slower (exposures 8–16 times longer). Shutter priority refers to a shooting mode used in cameras. It allows the photographer to choose a shutter speed setting and allow the camera to decide the correct aperture. This is sometimes referred to as Shutter Speed Priority Auto Exposure, or TV (time value on Canon cameras) mode, S mode on Nikons and most other brands. Creative utility in photography Shutter speed is one of several methods used to control the amount of light recorded by the camera's digital sensor or film. It is also used to manipulate the visual effects of the final image. Slower shutter speeds are often selected to suggest the movement of an object in a still photograph. Excessively fast shutter speeds can cause a moving subject to appear unnaturally frozen. For instance, a running person may be caught with both feet in the air with all indication of movement lost in the frozen moment. When a slower shutter speed is selected, a longer time passes from the moment the shutter opens till the moment it closes. More time is available for movement in the subject to be recorded by the camera as a blur. A slightly slower shutter speed will allow the photographer to introduce an element of blur, either in the subject, where, in our example, the feet, which are the fastest moving element in the frame, might be blurred while the rest remains sharp; or if the camera is panned to follow a moving subject, the background is blurred while the subject remains relatively sharp. The exact point at which the background or subject will start to blur depends on the speed at which the object is moving, the angle that the object is moving in relation to the camera, the distance it is from the camera and the focal length of the lens in relation to the size of the digital sensor or film. When slower shutter-speeds, in excess of about half a second, are used on running water, the water in the photo will have a ghostly white appearance reminiscent of fog. This effect can be used in landscape photography. Zoom burst is a technique which entails the variation of the focal length of a zoom lens during a longer exposure. In the moment that the shutter is opened, the lens is zoomed in, changing the focal length during the exposure. 
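The reciprocal rule of thumb described above, together with the stated 3–4 stop benefit of image stabilization, can be expressed as a small calculation. This is only a sketch of the guideline, not a guarantee of sharpness; the function name is invented for this example.

```python
def slowest_handheld_time(focal_length_mm: float, stabilization_stops: int = 0) -> float:
    """Rule of thumb: slowest safe handheld exposure is about 1/focal length,
    relaxed by a factor of 2 per stop of image stabilization."""
    return (1.0 / focal_length_mm) * (2 ** stabilization_stops)

print(slowest_handheld_time(200))      # 0.005  -> about 1/200 s, no stabilization
print(slowest_handheld_time(200, 3))   # 0.04   -> about 1/25 s with 3 stops of stabilization
```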
The center of the image remains sharp, while the details away from the center form a radial blur, which causes a strong visual effect, forcing the eye into the center of the image. The following list provides an overview of common photographic uses for standard shutter speeds.  s and less: The fastest speed available in APS-H or APS-C format DSLR cameras () (Canon EOS 1D, Nikon D1, Nikon 1 J2, D1X, and D1H).  s: The fastest speed available in any 35 mm film SLR camera (Minolta Maxxum 9xi).  s: The fastest speed available in production SLR cameras (), and also the fastest speed available in any full-frame DSLR or SLT camera (). Used to take sharp photographs of very fast subjects, such as birds or planes, under good lighting conditions, with an ISO speed of 1,000 or more and a large-aperture lens.  s: The fastest speed available in consumer SLR cameras (); also the fastest speed available in any leaf shutter camera (such as the Sony Cyber-shot DSC-RX1) (). Used to take sharp photographs of fast subjects, such as athletes or vehicles, under good lighting conditions and with an ISO setting of up to 800.  s and  s: Used to take sharp photographs of moderately fast subjects under normal lighting conditions.  s and  s: Used to take sharp photographs of people in motion in everyday situations.  s is the fastest speed useful for panning; it also allows for a smaller aperture (up to ) in motion shots, and hence for a greater depth of field.  s: This speed, and slower ones, are no longer useful for freezing motion.  s is used to obtain greater depth of field and overall sharpness in landscape photography, and is also often used for panning shots.  s: Used for panning shots, for images taken under dim lighting conditions, and for available-light portraits.  s: Used for panning subjects moving slower than and for available-light photography. Images taken at this and slower speeds normally require a tripod or an image-stabilized lens/camera to be sharp.  s and  s: These and slower speeds are useful for photographs other than panning shots where motion blur is employed for deliberate effect, or for taking sharp photographs of immobile subjects under bad lighting conditions with a tripod-supported camera.  s,  s and 1 s: Also mainly used for motion-blur effects and/or low-light photography, but only practical with a tripod-supported camera. B (bulb) (fraction of a second to several hours): Used with a mechanically fixed camera in astrophotography and for certain special effects. Cinematographic shutter formula Motion picture cameras used in traditional film cinematography employ a mechanical rotating shutter. The shutter rotation is synchronized with film being pulled through the gate, hence shutter speed is a function of the frame rate and shutter angle. Where E = shutter speed (reciprocal of exposure time in seconds), F = frames per second, and S = shutter angle: E = F × 360° / S, for E in reciprocal seconds. With a traditional shutter angle of 180°, film is exposed for 1/48 second at 24 frame/s. To avoid the effect of light interference when shooting under artificial lights or when shooting television screens and computer monitors,  s (172.8°) or  s (144°) shutter is often used. Electronic video cameras do not have mechanical shutters and allow setting shutter speed directly in time units. Professional video cameras often allow selecting shutter speed in terms of shutter angle instead of time units, especially those that are capable of overcranking or undercranking.
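The rotating-shutter relation above (E = F × 360° / S, so the exposure time per frame is S / (360 × F)) is straightforward to evaluate. The Python sketch below is illustrative only; the function name is invented for this example.

```python
def exposure_time(frame_rate_fps: float, shutter_angle_deg: float) -> float:
    """Exposure time in seconds for a rotating shutter: t = S / (360 * F)."""
    return shutter_angle_deg / (360.0 * frame_rate_fps)

# A 180-degree shutter at 24 fps exposes each frame for 1/48 s.
print(exposure_time(24, 180))     # 0.02083... = 1/48 s
# Narrower angles give shorter exposures at the same frame rate.
print(exposure_time(24, 172.8))   # 0.02 = 1/50 s
```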
See also Exposure (photography) Exposure value F-number Shutter (photography) Preferred number References Sources Science of photography Durations
Shutter speed
[ "Physics" ]
2,195
[ "Temporal quantities", "Physical quantities", "Durations" ]
166,826
https://en.wikipedia.org/wiki/Johnson%20%26%20Johnson
Johnson & Johnson (J&J) is an American multinational pharmaceutical, biotechnology, and medical technologies corporation headquartered in New Brunswick, New Jersey, and publicly traded on the New York Stock Exchange. Its common stock is a component of the Dow Jones Industrial Average, and the company is ranked No. 40 on the 2023 Fortune 500 list of the largest United States corporations. In 2023, the company was ranked 40th in the Forbes Global 2000. Johnson & Johnson has a global workforce of approximately 130,000 employees who are led by the company's current chairman and chief executive officer, Joaquin Duato. Johnson & Johnson was founded in 1886 by three brothers, Robert Wood Johnson, James Wood Johnson, and Edward Mead Johnson, selling ready-to-use sterile surgical dressings. In 2023, the company split-off its consumer healthcare business sector into a new publicly traded company, Kenvue. The company is exclusively focused on developing and producing pharmaceutical prescription drugs and medical device technologies. Johnson & Johnson is one of the world's most valuable companies and is one of only two U.S.-based companies that has a prime credit rating of AAA. History 1873–1885: Before Johnson & Johnson Robert Wood Johnson began his professional training at age 16 as a pharmaceutical apprentice at an apothecary run by his mother's cousin, James G. Wood, in Poughkeepsie, New York. Johnson co-founded his own company with George Seabury in 1873. The New York-based Seabury & Johnson became known for its medicated plasters. Robert Wood Johnson represented the company at the 1876 World's Fair. There he heard Joseph Lister's explanation of a new procedure: antiseptic surgery. Johnson parted ways with his business partner Seabury in 1885. 1886: Founding of Johnson & Johnson Robert Wood Johnson joined his brothers, James Wood Johnson and Edward Mead Johnson, and created a line of ready-to-use sterile surgical dressings in 1886. They founded Johnson & Johnson in 1886 with 14 employees, eight women and six men. Johnson & Johnson opened its first factory building in the old Janeway and Carpenter factory on Neilson Street in New Brunswick, New Jersey. They manufactured sterile surgical supplies, household products, and medical guides. Those products initially featured a logo that resembled the signature of James Wood Johnson. Robert Wood Johnson served as the first president of the company. 1887–1942: Early history The company sold medicated plasters such as Johnson & Johnson's Black Perfect Taffeta Court Plaster and also manufactured the world's first sterile surgical products, including sutures, absorbent cotton, and gauze. The company published "Modern Methods of Antiseptic Wound Treatment", a guide on how to do sterile surgery using its products, and in 1888, distributed 85,000 copies to doctors and pharmacists across the United States. The manual was translated into three languages and distributed worldwide. The first commercial first aid kit was designed in 1888 to support railroad construction workers, who were often hundreds of miles from medical care. The kits included antiseptic emergency supplies and directions for field use. In 1901, the company published the Handbook of First Aid, a guide on applying first aid. In 1889, the company hired pharmacist Fred Kilmer as its first scientific director, who led its scientific research and wrote educational manuals. Kilmer's first achievement as scientific director was developing the industrial sterilization process. 
He was employed at the company until 1934. Johnson & Johnson had more than 400 employees and 14 buildings by 1894. In 1894, the company began producing Johnson's Baby Powder, the company's first baby product. The company introduced the world's first maternity kit in 1894 to aid at-home births, called Dr. Simpson's Maternity Packet, presumably after Scottish obstetrician James Young Simpson. The kit included a washcloth; safety pins; sterile sutures, sponges, and gauze; antiseptic soap; an obstetric sheet and ligatures; flannel to wrap the baby in; and a chart for keeping birth records. The products were later marketed separately, including "Lister's Towels", the world's first mass-produced sanitary napkins. Kilmer wrote "Hygiene in Maternity", an instructional guide for mothers before and after delivery. In 1904, the company expanded its baby care products with "Lister's Sanitary Diapers", a diaper product for infants. During the Spanish–American War, Johnson & Johnson developed and donated 300,000 packaged compressed surgical dressings for soldiers in the field and created a trauma stretcher for field medics. The company donated its products in disaster relief efforts of the 1900 Galveston hurricane and the 1906 San Francisco earthquake. Johnson & Johnson vaccinated all of its employees against smallpox during the 1901 smallpox epidemic. The firm employed more than 1,200 people by 1910. Women accounted for half of the company's workforce and led a quarter of its departments. Robert Wood Johnson died in 1910, and he was succeeded as president of the company by his brother James Wood Johnson. During World War I, Johnson & Johnson factories increased production to meet wartime demands for sterile surgical products. In 1916, the company acquired Chicopee Manufacturing Company in Chicopee Falls, Massachusetts, to meet demand. Near the end of World War I, the 1918 flu pandemic broke out. The company invented and distributed an epidemic mask that helped prevent the spread of the flu. In 1919, Johnson & Johnson opened the Gilmour Plant near Montreal, its first factory outside the United States, which produced surgical products for international customers. In 1924 the company's first overseas manufacturing facility was opened in Slough, England. In 1920, Earle Dickson combined two Johnson & Johnson products, adhesive tape and gauze, to create the first commercial adhesive bandage. Band-Aid Brand Adhesive Bandages began sales the following year. In 1921, the company released Johnson's Baby Soap. Named after its Massachusetts facility, Johnson & Johnson built a textile mill and company town, Chicopee, outside Gainesville, Georgia. In the 1930s, the company expanded operations to Argentina, Brazil, Mexico, and South Africa. In 1931, Johnson & Johnson introduced the first prescription contraceptive gel marketed as Ortho-Gynol. Robert Wood Johnson II became president of the company in 1932. During The Great Depression Johnson & Johnson kept all its workers employed and raised wages by 5%. In 1933, Robert Wood Johnson II wrote a letter to Franklin D. Roosevelt, calling for a federal law to increase wages and reduce hours for all American workers. The company also opened a new facility in Chicago during that period. Johnson wrote and distributed "Try Reality: A Discussion of Hours, Wages, and The Industrial Future" to persuade business leaders to follow his lead, advocating that business is more than profit and that companies have responsibilities to consumers, employees, and society. 
In "Try Reality", the section titled "An Industrial Philosophy" would later become the company's credo. In 1935, Johnson's Baby Oil was added to its line of baby products. Both male and female Johnson & Johnson employees were drafted and enlisted during World War II. The company ensured no one would lose their job when they returned home. Robert Wood Johnson II was appointed head of the Smaller War Plants Corporation in Washington, D.C. His work ensured U.S. factories with under 500 employees were awarded government contracts. 1943: Credo and going public In 1943, as the company was preparing for its initial public offering (IPO), Robert Wood Johnson wrote what the company would call, "Our Credo", a defining document that has been used to guide the company's decisions over the years. The company completed its IPO and became a public company in 1944. In 1943, Vesta Stoudt identified a need for waterproof tape for ammunition boxes in World War Two. She wrote to Franklin D. Roosevelt with the idea; the president commissioned Revolite, a subsidiary of Johnson & Johnson at the time, to develop and manufacture a cloth-based adhesive tape. 1944–1999: Acquisitions and international expansion In 1944, the company began selling Johnson's Baby Lotion. The same year, the company established Ethicon Suture Laboratories. In 1947, G. F. Merson Ltd. was acquired to expand the company's suture business in the United Kingdom. The company was rebranded and absorbed into Ethicon. Johnson & Johnson chairman of the board, Robert Wood Johnson, published Or Forfeit Freedom, in 1947. The book outlined that businesses need to develop sustainable methods of using natural resources for the future of business and the planet. In 1955, Ethicon developed a micro-point reverse-cutting ophthalmic needle attached to the suture. Micro-point surgical needles and sutures allowed for advances in modern vision surgery. In 1956, the company opened its first Asia-based operating company in the Philippines. The following year, an operating company opened in India. In 1959, Johnson & Johnson acquired McNeil Laboratories. A year later, the company sold Tylenol for the first time without a prescription. In the same year, Cilag Chemie joined Johnson & Johnson as Cilag. In 1961, Johnson & Johnson acquired Janssen Pharmaceuticals, which had been founded in 1953 by Belgian scientist Paul Janssen, the inventor of Fentanyl. In 1963, Philip B. Hofmann succeeded Robert Wood Johnson as chairman and CEO. He was the first non-Johnson family member to become chief executive. Hofmann also helped found the Robert Wood Johnson Foundation. In the same year, the Food and Drug Administration approved a synthetic hormone contraceptive pill, Ortho-Novum. In 1965, Johnson & Johnson acquired Codman & Shurtleff. The acquired company produced neurovascular devices and neurosurgery technologies. In 1968, the company developed the RhoGAM vaccine. The vaccine prevented Rh hemolytic disease in newborns. In 1969, Ortho Diagnostics, a company subsidiary, launched the Sickledex Tube Test for detecting anemia. The same year, the FDA approved the Johnson & Johnson arterial graft. In 1971, the company launched Hapindex Diagnostic Test, a rapid hepatitis B test for blood donors. The test was developed to prevent the spread of hepatitis B through blood transfusions. In the 1970s, Johnson & Johnson hired Henry N. Cobb from Pei Cobb Freed & Partners to design its new headquarters. 
The firm designed Johnson & Johnson Plaza across the railroad tracks from the older section of the Johnson & Johnson campus. In 1973, Richard Sellars became chairman and CEO of Johnson & Johnson. In 1976, James E. Burke became the company's chairman and CEO. During Burke's tenure, he managed the 1982 Tylenol tampering incident. It became a case study on crisis management. Under his leadership, the company recalled 31 million bottles of Tylenol, relaunched the product with a triple tamper-evident seal, and urged consumers not to use if tampered with. These practices became the pharmaceutical and packaged food industry norm. Johnson & Johnson opened operating companies in China and Egypt in 1985. In 1987, Acuvue contact lenses became the first disposable contact lenses available to consumers. The lenses lasted up to one week, reducing the cost of contact lenses. In the same year, the company launched One Touch, a blood glucose monitoring system. In 1989, Ralph S. Larsen was appointed chairman and CEO of the company. After the dissolution of the Soviet Union, Johnson & Johnson expanded into eastern Europe. By 1991, the company had a presence in Hungary, Russia, the Czech Republic, and Poland. In the 1990s, the company acquired many familiar consumer health brands that made up the Johnson & Johnson family of companies. These acquisitions included Clean & Clear, Neutrogena, Motrin, and Aveeno. Johnson & Johnson opened an operating company in Israel in 1996. In 1997, Johnson & Johnson acquired Biosense Webster. DePuy was acquired by Johnson & Johnson in 1998, rolling it into the Medtech business group. 2000–present William C. Weldon was appointed chairman and CEO of the company in 2002. In 2003, Ethicon launched Vicryl Plus Antibacterial Sutures. The products prevent post-surgery infection within stitches. In 2006, Johnson & Johnson acquired Pfizer's consumer healthcare business and merged it with its consumer healthcare business group. The acquisition added brands like Listerine, Bengay, and Neosporin to the company's portfolio. In the same year, Johnson & Johnson's Janssen Pharmaceuticals, launched Prezista, a protease inhibitor for patients with failed previous HIV therapies. In 2008, Johnson & Johnson acquired Mentor Corporation for $1 billion and merge its operations into Ethicon. In 2009, the company acquired HealthMedia, later renamed to Health & Wellness Solutions and the Human Performance Institute. In October 2010, J&J acquired Crucell for $2.4 billion. The subsidiary operates as the centre for vaccines, within Johnson & Johnson pharmaceuticals business group. In 2012, Alex Gorsky became chairman and CEO of Johnson & Johnson. In November 2015, Biosense Webster, Inc. acquired Coherex Medical Inc. expanding the company's range of treatment options for patients with atrial fibrillation. In 2017, Johnson & Johnson acquired Abbott Medical Optics from Abbott Laboratories for $4.325 billion, adding the new division into Johnson & Johnson Vision Care, Inc. in 2017. That same year, Johnson & Johnson acquired Actelion in a $30 billion deal, the largest ever purchase by the company. After the purchase, Johnson & Johnson spun off Actelion's research and development unit into a separate legal entity. In July 2017, Johnson & Johnson Vision Care, Inc acquired TearScience. In September 2017, the company acquired subscription-based contact lens startup Sightbox. 
In September of the same year Johnson & Johnson Medical GmbH acquired Emerging Implant Technologies GmbH, manufacturer of 3D-printed titanium interbody implants for spinal fusion surgery. In March 2019, the FDA approved esketamine for the treatment of severe depression, which is marketed as Spravato by Janssen Pharmaceuticals. In 2019, Johnson & Johnson announced the release of photochromic contact lenses. The lenses adjust to sunlight and help eyes recover from bright light exposure faster. The lenses contain a photochromic additive that adapts visible light amounts filtered to the eyes and are the first to use such additives. In November 2020, Johnson & Johnson acquired Momenta Pharmaceuticals for $6.5 billion. In January 2022, Joaquin Duato became CEO of Johnson & Johnson. In December 2022, Johnson & Johnson acquired cardiovascular medical technology company Abiomed Inc. for $16.6 billion. Johnson & Johnson began the separation of their consumer healthcare business sector in November 2021. In the split, Johnson & Johnson will retain the Johnson & Johnson name for prescription drugs and medical devices, while the second company will sell consumer health products and take over the Neutrogena, Aveeno, Tylenol, Listerine, Johnson's, Band-Aid and other brands. In September 2022, Johnson & Johnson chose Kenvue as the new name for its Consumer Health business. Kenvue went public through an IPO in May 2023, with Johnson & Johnson maintaining a controlling stake of around 91 percent. On July 24, 2023, Johnson & Johnson launched an exchange offer to split-off Kenvue. Following the completion of the exchange offer, Johnson & Johnson will retain approximately 9.5% of the outstanding shares of Kenvue common stock. Johnson & Johnson holds a patent on the tuberculosis-treating drug bedaquiline, with secondary patents in at least 25 out of 43 countries with a high burden of tuberculosis blocking affordable generic versions of the drug, preventing millions of people from accessing the life-saving treatment. Though the patent was set to expire in many countries in 2023, Johnson & Johnson applied to extend the patent. On July 13, 2023, Stop TB Partnership announced that after negotiations with Johnson & Johnson, they had been granted licenses to produce generic versions of the drug. Johnson & Johnson announced several acquisitions in 2024: Ambrx Biopharma for $2 billion (in January), Shockwave Medical for $13.1 billion (in April), and Proteologix for $850 million (in May). Johnson & Johnson announced it would buy neurological drug maker Intra-Cellular Therapies for $14.6 billion. Coronavirus (COVID-19) response Johnson & Johnson committed over $1 billion toward the development of a not-for-profit COVID-19 vaccine in partnership with the Biomedical Advanced Research and Development Authority (BARDA) Office of the Assistant Secretary for Preparedness and Response (ASPR) at the U.S. Department of Health and Human Services (HHS). Paul Stoffels of Johnson & Johnson said, "In order to go fast, the people of Johnson & Johnson are committed to do this and all together we say we're going to do this not for profit. That's the fastest and the best way to find all the collaborations in the world to make this happen so we commit to bring this at a not-for-profit level." Janssen Vaccines, in partnership with Beth Israel Deaconess Medical Center (BIDMC), is responsible for developing the vaccine candidate, based on the same technology used to make its Ebola vaccine. 
The vaccine candidate is expected to enter phase 1 human clinical study in September 2020. Demand for the product Tylenol surged two to four times normal levels in March 2020. In response, the company increased production globally. For example, the Tylenol plant in Puerto Rico ran 24 hours a day, seven days a week. In response to the shortage of ventilators, Ethicon, with Prisma Health, made and distributed the VESper Ventilator Expansion Splitter, which uses 3D printing technology, to allow one ventilator to support two patients. Janssen COVID-19 vaccine In June 2020, Johnson & Johnson and the National Institute of Allergy and Infectious Diseases (NIAID) confirmed its intention to start a clinical trials of J&J's vaccine in September 2020, with the possibility of Phase 1/2a human clinical trials starting at an accelerated pace in the second half of July. On August 5, 2020, the US government agreed to pay more than $1 billion to Johnson & Johnson (medical device company) for the production of 100 million doses of COVID-19 vaccine. As part of the agreed-upon deal, the U.S. can order up to 200 million additional doses of SARS-CoV-2 vaccine. In September 2020, Johnson & Johnson started its 60,000-person phase 3 adenovirus-based vaccine trial. The trial was paused on October 12, 2020, because a volunteer became ill, but the company said it found no evidence that the vaccine had caused the illness and announced on October 23, 2020, that it would resume the trial. In April 2021, the company reported that its COVID-19 vaccine achieved $100 million sales in the first quarter, accounting for less than 1% of its total revenue. Business sectors The company's business is divided into two business sectors: Innovative Medicine and MedTech. Johnson & Johnson Innovation, LLC (JJI) is a subsidiary of Johnson & Johnson. JJI focuses on early-stage, life science, and technology innovations to advance the company's research and development pipeline. JJI provides startups with sourcing, infrastructure, and capital equipment at JLABS, financing & venture capital at JJDC, Inc., and collaborations leading to the potential development of medical device technologies, pharmaceuticals, and therapeutics. There are 4 JJI Innovation Centers located in London, Shanghai, Boston (Cambridge), and the San Francisco Bay Area. There are 13 JLABS incubators located in the Bay Area (San Francisco and South San Francisco), Belgium (Beerse), Boston (Cambridge and Lowell), Houston (TMC), New York City, Philadelphia, San Diego, Shanghai, Toronto, and Washington, D.C. Innovative Medicine The Innovative Medicine (formerly known as pharmaceuticals) segment is focused on six therapeutic areas: immunology (rheumatoid arthritis, inflammatory bowel disease and psoriasis); infectious diseases (HIV/AIDS); neuroscience (mood disorders, neurodegenerative disorders and schizophrenia); oncology (solid tumours including lung cancer, prostate cancer and bladder cancer, and hematologic malignancies); cardiovascular, metabolism, retina (thrombosis and diabetes), and pulmonary hypertension (pulmonary arterial hypertension). MedTech The Cardiovascular & Specialty Solutions Group includes electrophysiology products that diagnose and treat cardiac arrhythmias; devices used in the endovascular treatment of hemorrhagic and ischemic stroke; solutions that focus on breast reconstruction and aesthetics, and ear, nose and throat procedures. 
The orthopaedics portfolio is composed of specialties including joint reconstruction, trauma, extremities, craniomaxillofacial, spinal surgery and sports medicine, in addition to the VELY digital surgery portfolio. The surgery portfolio includes advanced surgical innovations and solutions such as sutures, staplers, energy devices, and advanced hemostats along with interventional ablation, surgical robotics, and digital solutions. The Johnson & Johnson Vision portfolio includes contact lenses, intraocular lens, automated treatment for dry eye, and four brands of laser vision correction systems. Finance For the fiscal year 2023, Johnson & Johnson reported earnings of $35.15billion, with an annual revenue of $85.16billion, an increase of 10.57% over the previous fiscal cycle. Johnson & Johnson's shares traded at over $160 per share, and its market capitalization was valued at over $386.7billion in July 2024. Corporate governance As of 2023, the members of the board of directors of Johnson & Johnson are Joaquin Duato, Darius Adamczyk, Mary C. Beckerle, D. Scott Davis, Jennifer A. Doudna, Marillyn A. Hewson, Paula A. Johnson, Hubert Joly, Mark B. McClellan, Anne M. Mulcahy, Mark A. Weinberger, Nadja Y. West, and Eugene A. Woods. As of 2023, the members of the executive committee of Johnson & Johnson are Joaquin Duato, Vanessa Broadhurst, Peter Fasolo, Liz Forminard, William N. Hait, Tim Schmid, John C. Reed, James Swanson, Jennifer Taubert, Kathy E. Wengel, and Joseph J. Wolk. Joaquin Duato is chairman and chief executive officer. Chairmen Robert Wood Johnson I (1887–1910) James Wood Johnson (1910–1932) Robert Wood Johnson II (1932–1963) Philip B. Hofmann (1963–1973) Richard B. Sellars (1973–1976) James E. Burke (1976–1989) Ralph S. Larsen (1989–2002) William C. Weldon (2002–2012) Alex Gorsky (2012–2022) Joaquin Duato (2023–present) Ownership Johnson & Johnson is mainly owned by institutional investors, with over 70% of shares held. The 10 largest shareholder of Johnson & Johnson in December 2023 were: The Vanguard Group (9.52%) BlackRock (7.73%) State Street Corporation (5.52%) Geode Capital Management (2.13%) Morgan Stanley (1.73%) State Farm (1.32%) JPMorgan Chase (1.24%) Northern Trust (1.23%) Capital International Investors (1.20%) Norges Bank (1.08%) Environmental record Johnson & Johnson has set several positive goals to keep the company environmentally friendly and was ranked third among the United States's largest companies in Newsweeks "Green Rankings". Some examples are the reduction in water use, waste, and energy use and an increased level of transparency. Johnson & Johnson agreed to change its packaging of plastic bottles used in the manufacturing process, switching their packaging of liquids to non-polyvinyl chloride containers. The corporation is working with the Climate Northwest Initiative and the EPA National Environmental Performance Track program. As a member of the national Green Power Partnership, Johnson & Johnson operates the largest solar power generator in Pennsylvania at its site in Fort Washington, Pennsylvania. Recalls and litigation 1982 Chicago Tylenol murders On September 29, 1982, a "Tylenol scare" began when the first of seven individuals died in Chicago metropolitan area, after ingesting Extra Strength Tylenol that had been deliberately laced with cyanide. Within a week, the company pulled 31 million bottles of capsules back from retailers, making it one of the first major recalls in American history. 
The incident led to reforms in the packaging of over-the-counter substances and to federal anti-tampering laws. The case remains unsolved and no suspects have been charged. Johnson & Johnson's quick response, including a nationwide recall, was widely praised by public relations experts and the media and was the gold standard for corporate crisis management. 2010 children's product recall On April 30, 2010, McNeil Consumer Healthcare, a subsidiary of Johnson & Johnson, voluntarily recalled 43 over-the-counter children's medicines, including Tylenol, Tylenol Plus, Motrin, Zyrtec and Benadryl. The recall was conducted after a routine inspection at a manufacturing facility in Fort Washington, Pennsylvania, United States, revealed that some "products may not fully meet the required manufacturing specifications". Affected products may contain a "higher concentration of active ingredients" or exhibit other manufacturing defects. Products shipped to Canada, Dominican Republic, Mexico, Guam, Guatemala, Jamaica, Puerto Rico, Panama, Trinidad and Tobago, the United Arab Emirates, Kuwait and Fiji were included in the recall. In a statement, Johnson & Johnson said "a comprehensive quality assessment across its manufacturing operations" was underway. A dedicated website was established by the company listing affected products and other consumer information. 2010 hip-replacement recall On August 24, 2010, DePuy, a subsidiary of American giant Johnson & Johnson, recalled its ASR (articular surface replacement) hip prostheses from the market. DePuy said the recall was due to unpublished National Joint Registry data showing a 12% revision rate for resurfacing at five years and an ASR XL revision rate of 13%. All hip prostheses fail in some patients, but it is expected that the rate will be about 1% a year. Pathologically, the failing prosthesis had several effects. Metal debris from wear of the implant led to a reaction that destroyed the soft tissues surrounding the joint, leaving some patients with long term disability. Ions of cobalt and chromiumthe metals from which the implant was madewere also released into the blood and cerebral spinal fluid in some patients. In March 2013, a jury in Los Angeles ordered Johnson & Johnson to pay more than $8.3million in damages to a Montana man in the first of more than 10,000 lawsuits pending against the company in connection with the now-recalled DePuy hip. Some lawyers and industry analysts have estimated that the suits ultimately will cost Johnson & Johnson billions of dollars to resolve. 2010 Tylenol recall In 2010 and 2011, Johnson & Johnson voluntarily recalled some over-the-counter products, including Tylenol, due to an odor caused by tribromoanisole. In this case, 2,4,6-tribromophenol was used to treat wooden pallets on which product packaging materials were transported and stored. Shareholders lawsuit In 2010 a group of shareholders sued the board for allegedly failing to take action to prevent serious failings and illegalities since the 1990s, including manufacturing problems, bribing officials, covering up adverse effects and misleading marketing for unapproved uses. The judge initially dismissed the case in September 2011, but allowed the plaintiffs opportunity to refile at a later time. In 2012 Johnson and Johnson proposed a settlement with the shareholders, whereby the company would institute new oversight, quality and compliance procedures binding for five years. 
Illegal marketing of Risperdal Juries in several US states have found J&J guilty of concealing the adverse effects of Janssen Pharmaceuticals' antipsychotic medication Risperdal, produced by its unit, to promote it to doctors and patients as better than cheaper generics, and of falsely marketing it for treating patients with dementia. States that have awarded damages include Texas ($158million), South Carolina ($327million), Louisiana ($258million), and most notably Arkansas ($1.2billion). In 2010, the United States Department of Justice joined a whistleblowers suit accusing the company of illegally marketing Risperdal through Omnicare, the largest company supplying pharmaceuticals to nursing homes. The allegations include that J&J were warned by the FDA to not promote Risperdal as effective and safe for elderly patients, but they did so, and that they paid Omnicare to promote the drug to care home physicians. The settlement was finalized on November 4, 2013, with J&J agreeing to pay a penalty of around $2.2billion, "including criminal fines and forfeiture totaling $485million and civil settlements with the federal government and states totaling $1.72billion". Johnson & Johnson has also been subject to congressional investigations related to payments given to psychiatrists to promote its products and ghost write articles, notably Joseph Biederman and his pediatric bipolar disorder research unit. Foreign bribery In 2011, J&J settled litigation brought by the US Securities and Exchange Commission under the Foreign Corrupt Practices Act and paid around $70M in disgorgement and fines. J&J's employees had given kickbacks and bribes to doctors in Greece, Poland, and Romania to obtain business selling drugs and medical devices and had bribed officials in Iraq to win contracts under the Oil for Food program. J&J fully cooperated with the investigation once the problems came to light. Consumer fraud settlements In May 2017, J&J reached an agreement to pay $33million to several states to settle consumer fraud allegations in some of the company's over-the-counter drugs. Use of the Red Cross symbol Johnson & Johnson registered the Red Cross as a U.S. trademark for "medicinal and surgical plasters" in 1905 and has used the design since 1887. The Geneva Conventions, which reserved the Red Cross emblem for specific uses, were first approved in 1864 and ratified by the United States in 1882. However, the emblem was not protected by U.S. law for the use of the American Red Cross (ARC) and the U.S. military until after Johnson & Johnson had obtained its trademark. A clause in this law (now 18 U.S.C. 706) permits this pre-existing use of the Red Cross to continue. A declaration made by the U.S. upon its ratification of the 1949 Geneva Conventions includes a reservation that pre-1905 U.S. domestic uses of the Red Cross, such as Johnson & Johnson's, would remain lawful as long as the cross is not used on "aircraft, vessels, vehicles, buildings or other structures, or upon the ground", i.e., uses which could be confused with its military uses. This means that the U.S. did not agree to any interpretation of the 1949 Geneva Conventions that would overrule Johnson & Johnson's trademark. The American Red Cross continues to recognize the validity of Johnson & Johnson's trademark. 
In August 2007, Johnson & Johnson filed a lawsuit against the ARC, demanding that the charity halt the use of the red cross symbol on products it sells to the public, though the company takes no issue with the charity's use of the mark for nonprofit purposes. In May 2008, the judge in the case dismissed most of Johnson & Johnson's claims, and a month later the two organizations announced a settlement had been reached in which both parties would continue to use the symbol. Boston Scientific lawsuits Since 2003, Johnson & Johnson and Boston Scientific have both claimed that the other had infringed on their patents covering heart stent medical devices. The litigation was settled when Boston Scientific agreed to pay $716million to Johnson & Johnson in September 2009 and an additional $1.73billion in February 2010. Their dispute was renewed in 2014, now on the grounds of a contract dispute. Patent-infringement case against Abbott In 2007, Johnson & Johnson sued Abbott Laboratories over the development and sale of the arthritis drug Humira, claiming Abbott used technology licensed exclusively to Johnson & Johnson's Centocor division. Johnson & Johnson won the court case, and in 2009 Abbott was ordered to pay Johnson & Johnson $1.17billion in lost revenues and $504million in royalties. The judge also added $175.6million in interest to bring the total to $1.84billion. This was the largest patent-infringement award in U.S. history until the 2013 decision against Teva in favor of Takeda and Pfizer for over $2.1billion. In 2010 Abbott appealed the verdict and in 2011 won the appeal. Vaginal mesh implants Tens of thousands of women worldwide have taken legal action against Johnson & Johnson after suffering serious complications following a vaginal mesh implant procedure. In Australia, more than 700 women began a class action against the company in the Federal Court of Australia in 2017, telling the court they "suffered irreparable, debilitating pain after the devices began to erode into surrounding tissue and organs, causing infections and complications". The class action alleged that Johnson & Johnson, which "aggressively marketed" the implants "failed to properly warn patients and surgeons of the risk, or test the devices adequately". Emails between executives show the company was aware of the risks in 2005 but still went ahead and made the product available. In November 2019 the Federal Court of Australia found Johnson & Johnson negligent. The judgment was appealed, with the appeals court upholding all findings of Justice Anna Katzman. Ethicon then sought a High Court decision but this was not permitted by the High Court of Australia. Subsequently (September 2022) a A$300,000,000 compensation agreement was reached between Shine Lawyers and J&J but this agreement remains subject to approval by the Federal Court of Australia. In the US in 2016 the U.S. states of California and Washington filed a lawsuit against the company, accusing it of deception. In October 2019, the company and its subsidiary, Ethicon, Inc. reached a settlement with 41 states and the District of Columbia, with no admission of liability, in a suit alleging deceptive marketing of transvaginal surgical-mesh devices. The suit also alleges that the company failed to disclose risks associated with the product, which J&J pulled from the US market in 2012. The amount settled in the suit was about $117million. Baby powder J&J has been the subject of over 26,000 lawsuits claiming that its baby powder causes ovarian cancer. 
The lawsuits focus on claims that the talc-based powder is contaminated with asbestos, a known carcinogen commonly found in places where talc is mined. In 2016, J&J was ordered to pay $72million in damages to the family of Jacqueline Fox, a 62-year-old woman who died of ovarian cancer in 2015. The company said it would appeal. A year later, over 1,000 U.S. women had sued J&J for covering up the possible cancer risk from its Baby Powder product. The company says that 70% of its Baby Powder is used by adults. Later that year, a California jury ordered Johnson & Johnson to pay $417million to a woman who claimed she developed ovarian cancer after using the company's talc-based products like Johnson's Baby Powder for feminine hygiene. The verdict included $70million in compensatory damages and $347million in punitive damages. J&J said they would appeal the verdict. The Missouri Eastern District appeals court later negated a $72million jury verdict in the Jacqueline Fox lawsuit, ruling it lacked jurisdiction in Missouri because of a U.S. Supreme Court decision that imposed limits on where injury lawsuit can be filed. Subsequently, this ruling killed three other recent St. Louis jury verdicts of more than $200million combined. Fox, 62, of Birmingham, Alabama, died in 2015, about four months before her trial was held in St. Louis Circuit Court. She was among 65 plaintiffs, of whom only two were from Missouri. A St. Louis jury awarded nearly $4.7billion in damages to 22 women and their families in 2018 after they claimed that asbestos in Johnson & Johnson talcum powder caused their ovarian cancer. In August, J&J said that it removed several chemicals from baby powder products and re-engineered them to make consumers more confident that products were safer for children. The company was forced to release internal documents with 11,700 people suing J&J over cancers allegedly caused by baby powder. The documents showed that the company had known about asbestos contamination since at least as early as 1971 and had spent decades finding ways to conceal the evidence from the public. The company lost its request to reverse a jury verdict that ruled in favor of the accusers, which required the company to pay $4.14billion in punitive damages and $550million in compensatory damages. A large study performed in 2003 found that ovarian cancer risk increased from a baseline of 0.0121% to 0.0161% in people who reported regularly using talc in the genital area. Two more studies over the next twelve years, which also relied on self-reporting, had similar results; however, none of the three studies showed a relationship between how long someone used talc and how much their cancer risk increased, which is expected in experiments with carcinogens and other toxic substances (see dose–response relationship). Conversely, a St. Louis jury ruled in favor of Johnson & Johnson in the case of a single plaintiff who had used the company's talc-containing baby powder for thirty years with a similar claim. The company's CEO, Alex Gorsky, declined to appear at a United States congressional hearing on the safety of J&J's Baby Powder and other talc-based cosmetics. J&J spokesman Ernie Knewitz said that the subcommittee had rejected the company's offers to send a talc testing expert or a J&J executive in charge of consumer products. In response to declining demand, J&J announced it would discontinue the sale of talc-based baby powder in the United States and Canada in 2020, but would continue to sell it in other markets. 
In a statement, the company said that the existing retail inventory of the talc-based powder will sell until it runs out, while the company's cornstarch-based baby powder will continue to sell in the United States and Canada. The Supreme Court of Missouri refused to consider J&J's appeal of a $2.12 billion damages award to women who blamed their ovarian cancer on its talc-based products. The Supreme Court of the United States also refused to consider an appeal from J&J, leaving in place a judgment from a state appeal court that had cut the original award to $2.1 billion. Two of the justices had to recuse: Samuel Alito because either he and/or his wife owning or recently owning stock in J&J, and Brett Kavanaugh, whose father led an industry group lobbying against safety warnings on talc products. Representing the affected women during the trial, Mark Lanier remarked that the Supreme Court's decision sent "a clear message to the rich and powerful: You will be held to account when you cause grievous harm under our system of equal justice under law." J&J had argued that the combined claims in the St. Louis trial were too different, yet the short jury deliberation and identical payouts were, therefore, a violation of the company's due process and also that the high punitive award was unconstitutional. In 2021, Johnson & Johnson subsidiary LTL Management LLC, using a process called a Texas divisional merger, filed for Chapter 11 bankruptcy in North Carolina. The process allowed by Texas law lets a company create a separate subsidiary to take over liabilities, with the existing company operating normally. The new company, with a different name, can locate in a state such as North Carolina where bankruptcy laws are different, and then declare bankruptcy, paying less than the original company would have. In the case of LTL, a $2 billion trust will be created, compared to $25 billion if Johnson & Johnson had declared bankruptcy. According to the filing, a company known as Old JJCI took on the baby powder related liabilities in 1979, while Johnson & Johnson remained a defendant. LTL and New JJCI were created with LTL taking the baby powder related liabilities and some assets, and New JJCI taking the remaining assets. Johnson & Johnson says LTL is now based in New Jersey. The company announced that it would stop making talc-based powder by 2023 and replace it with cornstarch-based powders. The company says the talc-based powder is safe to use and does not contain asbestos. In 2023, the number of lawsuits regarding talc-based baby powder has exceeded 40,000 as more claimants come forward to say that the company's product caused them to have cancer. Johnson & Johnson have now reportedly offered $9 billion to settle all the lawsuits against the company, up from the previous figure of $2 billion. Opioid epidemic By 2018, the company had become embroiled in the opioid epidemic in the United States and had become a target of lawsuits. Over 500 opioid-related cases have been filed as of May 2018 against J&J and its competitors. In Idaho, J&J is part of a lawsuit accusing the company for being partially to blame for opioid-related overdose deaths. The first major trial began in Oklahoma in May 2019. On August 26, 2019, the Oklahoma judge ordered J&J to pay $572million for their part in the opioid crisis, and in October J&J paid $20.4million to two Ohio counties fighting the opioid epidemic. 
In January 2022, Johnson & Johnson agreed to pay up to $5 billion as part of a $26 billion settlement which included McKesson, AmerisourceBergen, and Cardinal Health. Had the states gone to court, the companies could have faced up to $95 billion in penalties. Northeastern Ohio Settlement In October 2019, the company agreed to a settlement of $20.4million with northeastern Ohio's most populous counties of Cuyahoga (containing Cleveland) and Summit (Akron). The settlement allows the company avoidance of a trial accusing J&J and many other pharmaceutical manufacturers of helping to spark the US opioid epidemic. The trial was thought to be an indicator for thousands of opioid-related lawsuits against many drug manufacturers. The arrangement, which contains no admission of liability by the company, provides the counties $10million in cash, $5million for legal expenses and $5.4million in contributions to opioid-related nonprofit organizations in the counties. Public-private engagement Johnson & Johnson and its subsidiaries engage with the public and private sectors in a variety of settings including to promote research and development, academic funding, event sponsorship, philanthropy, and political lobbying. Academia J&J is a matching gift donor to the Institute for Advanced Study. Activism J&J is a corporate partner of Human Rights Campaign, a large LGBT advocacy group. J&J is a financial supporter of Women Deliver. Political lobbying Johnson & Johnson is engaged in various forms of lobbying in the United States, Canada and internationally, including through corporate philanthropy and membership in lobbying organizations. J&J is one of the largest donors to the Foundation for the National Institutes of Health (FNIH), having donated $5–10 million from 2000 to 2020. J&J is a partner of the Pandemic Action Network. J&J is a member company of the Biotechnology Innovation Organization (BIO) and the Pharmaceutical Research and Manufacturers of America (PhRMA), trade associations that lobby the U.S. Government on behalf of the Biotechnology Industry and pharmaceutical industry. BIO and PhRMA have offices in Washington, D.C., with PhRMA also having locations in Japan and the United Arab Emirates. J&J is a member of the Personalized Medicine Coalition, a medical research advocacy group that lobbies on behalf of the pharmaceutical industry to increase funding for personalized medicine research and development. J&J is a member company of the National Pharmaceutical Council (NPC), a nonprofit that advocates for expanded research funding and innovation. Research and development J&J has provided research grants and major funding to the C. D. Howe Institute. See also Zodiac (schooner) References External links 1886 establishments in New Jersey 1940s initial public offerings American companies established in 1886 Companies based in New Brunswick, New Jersey Companies in the Dow Jones Industrial Average Companies in the Dow Jones Global Titans 50 Companies in the S&P 500 Dividend Aristocrats Companies listed on the New York Stock Exchange Conglomerate companies of the United States COVID-19 vaccine producers Dental companies of the United States Health care companies based in New Jersey Life sciences industry Medical technology companies of the United States Multinational companies headquartered in the United States Personal care companies of the United States Pharmaceutical companies based in New Jersey Pharmaceutical companies established in 1886 Pharmaceutical companies of the United States
Johnson & Johnson
[ "Biology" ]
9,475
[ "Life sciences industry" ]
166,890
https://en.wikipedia.org/wiki/Langevin%20equation
In physics, a Langevin equation (named after Paul Langevin) is a stochastic differential equation describing how a system evolves when subjected to a combination of deterministic and fluctuating ("random") forces. The dependent variables in a Langevin equation typically are collective (macroscopic) variables changing only slowly in comparison to the other (microscopic) variables of the system. The fast (microscopic) variables are responsible for the stochastic nature of the Langevin equation. One application is to Brownian motion, which models the fluctuating motion of a small particle in a fluid. Brownian motion as a prototype The original Langevin equation describes Brownian motion, the apparently random movement of a particle in a fluid due to collisions with the molecules of the fluid, Here, is the velocity of the particle, is its damping coefficient, and is its mass. The force acting on the particle is written as a sum of a viscous force proportional to the particle's velocity (Stokes' law), and a noise term representing the effect of the collisions with the molecules of the fluid. The force has a Gaussian probability distribution with correlation function where is the Boltzmann constant, is the temperature and is the i-th component of the vector . The -function form of the time correlation means that the force at a time is uncorrelated with the force at any other time. This is an approximation: the actual random force has a nonzero correlation time corresponding to the collision time of the molecules. However, the Langevin equation is used to describe the motion of a "macroscopic" particle at a much longer time scale, and in this limit the -correlation and the Langevin equation becomes virtually exact. Another common feature of the Langevin equation is the occurrence of the damping coefficient in the correlation function of the random force, which in an equilibrium system is an expression of the Einstein relation. Mathematical aspects A strictly -correlated fluctuating force is not a function in the usual mathematical sense and even the derivative is not defined in this limit. This problem disappears when the Langevin equation is written in integral form Therefore, the differential form is only an abbreviation for its time integral. The general mathematical term for equations of this type is "stochastic differential equation". Another mathematical ambiguity occurs for Langevin equations with multiplicative noise, which refers to noise terms that are multiplied by a non-constant function of the dependent variables, e.g., . If a multiplicative noise is intrinsic to the system, its definition is ambiguous, as it is equally valid to interpret it according to Stratonovich- or Ito- scheme (see Itô calculus). Nevertheless, physical observables are independent of the interpretation, provided the latter is applied consistently when manipulating the equation. This is necessary because the symbolic rules of calculus differ depending on the interpretation scheme. If the noise is external to the system, the appropriate interpretation is the Stratonovich one. Generic Langevin equation There is a formal derivation of a generic Langevin equation from classical mechanics. This generic equation plays a central role in the theory of critical dynamics, and other areas of nonequilibrium statistical mechanics. The equation for Brownian motion above is a special case. An essential step in the derivation is the division of the degrees of freedom into the categories slow and fast. 
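A minimal numerical sketch of the Brownian-motion prototype above, assuming a simple Euler–Maruyama discretization and illustrative parameter values of our own choosing (Python): after relaxing, the ensemble of velocities should satisfy the equipartition value ⟨v²⟩ = kBT/m invoked later in the article.

import numpy as np

rng = np.random.default_rng(0)

m, lam, kB_T = 1.0, 2.0, 1.0          # mass, damping coefficient, thermal energy (illustrative)
dt, n_steps, n_particles = 1e-3, 20_000, 5_000

v = np.zeros(n_particles)              # start the whole ensemble at rest
noise_amp = np.sqrt(2.0 * lam * kB_T * dt) / m   # discretized strength of the random force

for _ in range(n_steps):
    xi = rng.standard_normal(n_particles)         # independent Gaussian kicks
    v += (-lam / m) * v * dt + noise_amp * xi     # Euler–Maruyama update of the velocity

# Equipartition predicts <v^2> = kB*T/m once the ensemble has relaxed.
print("simulated <v^2>:", v.var())
print("equipartition  :", kB_T / m)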
For example, local thermodynamic equilibrium in a liquid is reached within a few collision times, but it takes much longer for densities of conserved quantities like mass and energy to relax to equilibrium. Thus, densities of conserved quantities, and in particular their long wavelength components, are slow variable candidates. This division can be expressed formally with the Zwanzig projection operator. Nevertheless, the derivation is not completely rigorous from a mathematical physics perspective because it relies on assumptions that lack rigorous proof, and instead are justified only as plausible approximations of physical systems. Let denote the slow variables. The generic Langevin equation then reads The fluctuating force obeys a Gaussian probability distribution with correlation function This implies the Onsager reciprocity relation for the damping coefficients . The dependence of on is negligible in most cases. The symbol denotes the Hamiltonian of the system, where is the equilibrium probability distribution of the variables . Finally, is the projection of the Poisson bracket of the slow variables and onto the space of slow variables. In the Brownian motion case one would have , or and . The equation of motion for is exact: there is no fluctuating force and no damping coefficient . Examples Thermal noise in an electrical resistor There is a close analogy between the paradigmatic Brownian particle discussed above and Johnson noise, the electric voltage generated by thermal fluctuations in a resistor. The diagram at the right shows an electric circuit consisting of a resistance R and a capacitance C. The slow variable is the voltage U between the ends of the resistor. The Hamiltonian reads , and the Langevin equation becomes This equation may be used to determine the correlation function which becomes white noise (Johnson noise) when the capacitance becomes negligibly small. Critical dynamics The dynamics of the order parameter of a second order phase transition slows down near the critical point and can be described with a Langevin equation. The simplest case is the universality class "model A" with a non-conserved scalar order parameter, realized for instance in axial ferromagnets, Other universality classes (the nomenclature is "model A",..., "model J") contain a diffusing order parameter, order parameters with several components, other critical variables and/or contributions from Poisson brackets. Harmonic oscillator in a fluid A particle in a fluid is described by a Langevin equation with a potential energy function, a damping force, and thermal fluctuations given by the fluctuation dissipation theorem. If the potential is quadratic then the constant energy curves are ellipses, as shown in the figure. If there is dissipation but no thermal noise, a particle continually loses energy to the environment, and its time-dependent phase portrait (velocity vs position) corresponds to an inward spiral toward 0 velocity. By contrast, thermal fluctuations continually add energy to the particle and prevent it from reaching exactly 0 velocity. Rather, the initial ensemble of stochastic oscillators approaches a steady state in which the velocity and position are distributed according to the Maxwell–Boltzmann distribution. In the plot below (figure 2), the long time velocity distribution (blue) and position distributions (orange) in a harmonic potential () is plotted with the Boltzmann probabilities for velocity (green) and position (red). 
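A minimal sketch, in Python with illustrative parameters, of the comparison just described for the harmonic oscillator in a fluid: an ensemble integrated with a simple Euler–Maruyama scheme should approach the Maxwell–Boltzmann values ⟨v²⟩ = kBT/m and ⟨x²⟩ = kBT/k at long times.

import numpy as np

rng = np.random.default_rng(1)

m, k, gamma, kB_T = 1.0, 4.0, 0.5, 1.0            # illustrative parameters
dt, n_steps, n_particles = 1e-3, 50_000, 4_000

x = np.zeros(n_particles)
v = np.zeros(n_particles)
noise_amp = np.sqrt(2.0 * gamma * kB_T * dt) / m  # fluctuation–dissipation scaling

for _ in range(n_steps):
    xi = rng.standard_normal(n_particles)
    v += (-(k / m) * x - (gamma / m) * v) * dt + noise_amp * xi
    x += v * dt

# Maxwell–Boltzmann equilibrium predicts <v^2> = kB*T/m and <x^2> = kB*T/k.
print("<v^2> simulated vs predicted:", v.var(), kB_T / m)
print("<x^2> simulated vs predicted:", x.var(), kB_T / k)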
In particular, the late time behavior depicts thermal equilibrium. Trajectories of free Brownian particles Consider a free particle of mass with equation of motion described by where is the particle velocity, is the particle mobility, and is a rapidly fluctuating force whose time-average vanishes over a characteristic timescale of particle collisions, i.e. . The general solution to the equation of motion is where is the correlation time of the noise term. It can also be shown that the autocorrelation function of the particle velocity is given by where we have used the property that the variables and become uncorrelated for time separations . Besides, the value of is set to be equal to such that it obeys the equipartition theorem. If the system is initially at thermal equilibrium already with , then for all , meaning that the system remains at equilibrium at all times. The velocity of the Brownian particle can be integrated to yield its trajectory . If it is initially located at the origin with probability 1, then the result is Hence, the average displacement asymptotes to as the system relaxes. The mean squared displacement can be determined similarly: This expression implies that , indicating that the motion of Brownian particles at timescales much shorter than the relaxation time of the system is (approximately) time-reversal invariant. On the other hand, , which indicates an irreversible, dissipative process. Recovering Boltzmann statistics If the external potential is conservative and the noise term derives from a reservoir in thermal equilibrium, then the long-time solution to the Langevin equation must reduce to the Boltzmann distribution, which is the probability distribution function for particles in thermal equilibrium. In the special case of overdamped dynamics, the inertia of the particle is negligible in comparison to the damping force, and the trajectory is described by the overdamped Langevin equation where is the damping constant. The term is white noise, characterized by (formally, the Wiener process). One way to solve this equation is to introduce a test function and calculate its average. The average of should be time-independent for finite , leading to Itô's lemma for the Itô drift-diffusion process says that the differential of a twice-differentiable function is given by Applying this to the calculation of gives This average can be written using the probability density function ; where the second term was integrated by parts (hence the negative sign). Since this is true for arbitrary functions , it follows that thus recovering the Boltzmann distribution Equivalent techniques In some situations, one is primarily interested in the noise-averaged behavior of the Langevin equation, as opposed to the solution for particular realizations of the noise. This section describes techniques for obtaining this averaged behavior that are distinct from—but also equivalent to—the stochastic calculus inherent in the Langevin equation. Fokker–Planck equation A Fokker–Planck equation is a deterministic equation for the time dependent probability density of stochastic variables . The Fokker–Planck equation corresponding to the generic Langevin equation described in this article is the following: The equilibrium distribution is a stationary solution. Klein–Kramers equation The Fokker–Planck equation for an underdamped Brownian particle is called the Klein–Kramers equation. 
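As a numerical check of the free-particle results above, the sketch below (Python, illustrative parameters, simple Euler–Maruyama scheme) compares the simulated mean squared displacement of an initially thermalized ensemble with the standard closed form MSD(t) = (2kBT/(mΓ²))(Γt − 1 + e^(−Γt)), Γ = λ/m, which reduces to the ballistic and diffusive limits discussed in the text.

import numpy as np

rng = np.random.default_rng(2)

m, lam, kB_T = 1.0, 1.0, 1.0                      # illustrative parameters
G = lam / m                                       # relaxation rate
dt, n_steps, n_particles = 1e-3, 10_000, 20_000

v = rng.normal(0.0, np.sqrt(kB_T / m), n_particles)   # thermal (Maxwell) initial velocities
x = np.zeros(n_particles)
noise_amp = np.sqrt(2.0 * lam * kB_T * dt) / m

for _ in range(n_steps):
    v += -G * v * dt + noise_amp * rng.standard_normal(n_particles)
    x += v * dt

t = n_steps * dt
msd_theory = 2.0 * kB_T / (m * G**2) * (G * t - 1.0 + np.exp(-G * t))
print("MSD simulated  :", (x**2).mean())
print("MSD theoretical:", msd_theory)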
If the Langevin equations are written as where is the momentum, then the corresponding Fokker–Planck equation is Here and are the gradient operator with respect to and , and is the Laplacian with respect to . In -dimensional free space, corresponding to on , this equation can be solved using Fourier transforms. If the particle is initialized at with position and momentum , corresponding to initial condition , then the solution is where In three spatial dimensions, the mean squared displacement is Path integral A path integral equivalent to a Langevin equation can be obtained from the corresponding Fokker–Planck equation or by transforming the Gaussian probability distribution of the fluctuating force to a probability distribution of the slow variables, schematically . The functional determinant and associated mathematical subtleties drop out if the Langevin equation is discretized in the natural (causal) way, where depends on but not on . It turns out to be convenient to introduce auxiliary response variables . The path integral equivalent to the generic Langevin equation then reads where is a normalization factor and The path integral formulation allows for the use of tools from quantum field theory, such as perturbation and renormalization group methods. This formulation is typically referred to as either the Martin-Siggia-Rose formalism or the Janssen-De Dominicis formalism after its developers. The mathematical formalism for this representation can be developed on abstract Wiener space. See also Grote–Hynes theory Langevin dynamics Stochastic thermodynamics References Further reading W. T. Coffey (Trinity College, Dublin, Ireland) and Yu P. Kalmykov (Université de Perpignan, France, The Langevin Equation: With Applications to Stochastic Problems in Physics, Chemistry and Electrical Engineering (Third edition), World Scientific Series in Contemporary Chemical Physics – Vol 27. Reif, F. Fundamentals of Statistical and Thermal Physics, McGraw Hill New York, 1965. See section 15.5 Langevin Equation R. Friedrich, J. Peinke and Ch. Renner. How to Quantify Deterministic and Random Influences on the Statistics of the Foreign Exchange Market, Phys. Rev. Lett. 84, 5224–5227 (2000) L.C.G. Rogers and D. Williams. Diffusions, Markov Processes, and Martingales, Cambridge Mathematical Library, Cambridge University Press, Cambridge, reprint of 2nd (1994) edition, 2000. Statistical mechanics Stochastic differential equations
Langevin equation
[ "Physics" ]
2,558
[ "Statistical mechanics" ]
166,891
https://en.wikipedia.org/wiki/Equivalent%20rectangular%20bandwidth
The equivalent rectangular bandwidth or ERB is a measure used in psychoacoustics, which gives an approximation to the bandwidths of the filters in human hearing, using the unrealistic but convenient simplification of modeling the filters as rectangular band-pass filters, or band-stop filters, like in tailor-made notched music training (TMNMT). Approximations For moderate sound levels and young listeners, Moore and Glasberg (1983) suggest that the bandwidth of human auditory filters can be approximated by the polynomial equation ERB(f) = 6.23f² + 93.39f + 28.52, where f is the center frequency of the filter, in kHz, and ERB(f) is the bandwidth of the filter in Hz. The approximation is based on the results of a number of published simultaneous masking experiments and is valid from 0.1 to 6.5 kHz. Seven years later, Glasberg and Moore (1990) published another, simpler approximation, ERB(f) = 24.7 · (4.37f/1000 + 1), where f is in Hz and ERB(f) is also in Hz. The approximation is applicable at moderate sound levels and for values of f between 100 and 10,000 Hz. ERB-rate scale The ERB-rate scale, or ERB-number scale, can be defined as a function ERBS(f) which returns the number of equivalent rectangular bandwidths below the given frequency f. The units of the ERB-number scale are known as ERBs, or as Cams, following a suggestion by Hartmann. The scale can be constructed by solving the differential system of equations ERBS(0) = 0 and d ERBS(f)/df = 1/ERB(f). The solution for ERBS(f) is the integral of the reciprocal of ERB(f) with the constant of integration set in such a way that ERBS(0) = 0. Using the second order polynomial approximation (1983) for ERB(f) yields ERBS(f) = 11.17 · ln((f + 0.312)/(f + 14.675)) + 43.0, where f is in kHz. The VOICEBOX speech processing toolbox for MATLAB implements the conversion and its inverse as ERBS(f) = 11.17268 · ln(1 + 46.06538·f/(f + 14678.49)) and f = 676170.4/(47.06538 − e^(0.08950404·ERBS)) − 14678.49, where f is in Hz. Using the linear approximation (1990) for ERB(f) yields ERBS(f) = 21.4 · log10(1 + 0.00437f), where f is in Hz. See also Critical bands Bark scale References External links Auditory Scales by Giampiero Salvi: shows comparison between Bark, Mel, and ERB scales Acoustics Hearing Signal processing
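For readers who wish to experiment with the linear Glasberg & Moore (1990) approximation, ERB(f) = 24.7·(4.37f/1000 + 1) and ERBS(f) = 21.4·log10(4.37f/1000 + 1), here is a minimal Python sketch; the helper names are our own and not taken from any toolbox.

import math

def erb_bandwidth(f_hz):
    """Equivalent rectangular bandwidth in Hz at centre frequency f_hz (linear approximation)."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def erb_rate(f_hz):
    """ERB-number (Cams) below frequency f_hz, using the linear approximation."""
    return 21.4 * math.log10(4.37 * f_hz / 1000.0 + 1.0)

def erb_rate_inverse(cams):
    """Frequency in Hz whose ERB-number equals cams (inverse of erb_rate)."""
    return (10.0 ** (cams / 21.4) - 1.0) * 1000.0 / 4.37

for f in (100.0, 1000.0, 4000.0):
    print(f, erb_bandwidth(f), erb_rate(f))
print(erb_rate_inverse(erb_rate(1000.0)))   # round-trips to ~1000 Hz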
Equivalent rectangular bandwidth
[ "Physics", "Technology", "Engineering" ]
412
[ "Telecommunications engineering", "Computer engineering", "Signal processing", "Classical mechanics", "Acoustics" ]
166,896
https://en.wikipedia.org/wiki/Fokker%E2%80%93Planck%20equation
In statistical mechanics and information theory, the Fokker–Planck equation is a partial differential equation that describes the time evolution of the probability density function of the velocity of a particle under the influence of drag forces and random forces, as in Brownian motion. The equation can be generalized to other observables as well. The Fokker–Planck equation has multiple applications in information theory, graph theory, data science, finance, economics etc. It is named after Adriaan Fokker and Max Planck, who described it in 1914 and 1917. It is also known as the Kolmogorov forward equation, after Andrey Kolmogorov, who independently discovered it in 1931. When applied to particle position distributions, it is better known as the Smoluchowski equation (after Marian Smoluchowski), and in this context it is equivalent to the convection–diffusion equation. When applied to particle position and momentum distributions, it is known as the Klein–Kramers equation. The case with zero diffusion is the continuity equation. The Fokker–Planck equation is obtained from the master equation through Kramers–Moyal expansion. The first consistent microscopic derivation of the Fokker–Planck equation in the single scheme of classical and quantum mechanics was performed by Nikolay Bogoliubov and Nikolay Krylov. One dimension In one spatial dimension x, for an Itô process driven by the standard Wiener process and described by the stochastic differential equation (SDE) with drift and diffusion coefficient , the Fokker–Planck equation for the probability density of the random variable is In the following, use . Define the infinitesimal generator (the following can be found in Ref.): The transition probability , the probability of going from to , is introduced here; the expectation can be written as Now we replace in the definition of , multiply by and integrate over . The limit is taken on Note now that which is the Chapman–Kolmogorov theorem. Changing the dummy variable to , one gets which is a time derivative. Finally we arrive to From here, the Kolmogorov backward equation can be deduced. If we instead use the adjoint operator of , , defined such that then we arrive to the Kolmogorov forward equation, or Fokker–Planck equation, which, simplifying the notation , in its differential form reads Remains the issue of defining explicitly . This can be done taking the expectation from the integral form of the Itô's lemma: The part that depends on vanished because of the martingale property. Then, for a particle subject to an Itô equation, using it can be easily calculated, using integration by parts, that which bring us to the Fokker–Planck equation: While the Fokker–Planck equation is used with problems where the initial distribution is known, if the problem is to know the distribution at previous times, the Feynman–Kac formula can be used, which is a consequence of the Kolmogorov backward equation. The stochastic process defined above in the Itô sense can be rewritten within the Stratonovich convention as a Stratonovich SDE: It includes an added noise-induced drift term due to diffusion gradient effects if the noise is state-dependent. This convention is more often used in physical applications. Indeed, it is well known that any solution to the Stratonovich SDE is a solution to the Itô SDE. 
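As a sanity check on the one-dimensional equation above: for constant drift μ and diffusion coefficient D = σ²/2, the Fokker–Planck equation turns an initial point mass into a Gaussian of mean x0 + μt and variance σ²t, which can be compared against an Euler–Maruyama ensemble of the underlying SDE. A minimal Python sketch with illustrative constants:

import numpy as np

rng = np.random.default_rng(3)

mu, sigma, x0 = 0.8, 0.5, 0.0                     # illustrative constants
dt, t_final, n_paths = 1e-3, 2.0, 100_000
n_steps = int(t_final / dt)

x = np.full(n_paths, x0)
for _ in range(n_steps):
    x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

# Fokker–Planck prediction: Gaussian with mean x0 + mu*t and variance sigma^2 * t.
print("mean     simulated vs predicted:", x.mean(), x0 + mu * t_final)
print("variance simulated vs predicted:", x.var(), sigma**2 * t_final)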
The zero-drift equation with constant diffusion can be considered as a model of classical Brownian motion: This model has discrete spectrum of solutions if the condition of fixed boundaries is added for : It has been shown that in this case an analytical spectrum of solutions allows deriving a local uncertainty relation for the coordinate-velocity phase volume: Here is a minimal value of a corresponding diffusion spectrum , while and represent the uncertainty of coordinate–velocity definition. Higher dimensions More generally, if where and are -dimensional vectors, is an matrix and is an M-dimensional standard Wiener process, the probability density for satisfies the Fokker–Planck equationwith drift vector and diffusion tensor , i.e. If instead of an Itô SDE, a Stratonovich SDE is considered, the Fokker–Planck equation will read: Generalization In general, the Fokker–Planck equations are a special case to the general Kolmogorov forward equation where the linear operator is the Hermitian adjoint to the infinitesimal generator for the Markov process. Examples Wiener process A standard scalar Wiener process is generated by the stochastic differential equation Here the drift term is zero and the diffusion coefficient is 1/2. Thus the corresponding Fokker–Planck equation is which is the simplest form of a diffusion equation. If the initial condition is , the solution is Boltzmann distribution at the thermodynamic equilibrium The overdamped Langevin equationgives . The Boltzmann distribution is an equilibrium distribution, and assuming grows sufficiently rapidly (that is, the potential well is deep enough to confine the particle), the Boltzmann distribution is the unique equilibrium. Ornstein–Uhlenbeck process The Ornstein–Uhlenbeck process is a process defined as with . Physically, this equation can be motivated as follows: a particle of mass with velocity moving in a medium, e.g., a fluid, will experience a friction force which resists motion whose magnitude can be approximated as being proportional to particle's velocity with . Other particles in the medium will randomly kick the particle as they collide with it and this effect can be approximated by a white noise term; . Newton's second law is written as Taking for simplicity and changing the notation as leads to the familiar form . The corresponding Fokker–Planck equation is The stationary solution () is Plasma physics In plasma physics, the distribution function for a particle species , , takes the place of the probability density function. The corresponding Boltzmann equation is given by where the third term includes the particle acceleration due to the Lorentz force and the Fokker–Planck term at the right-hand side represents the effects of particle collisions. The quantities and are the average change in velocity a particle of type experiences due to collisions with all other particle species in unit time. Expressions for these quantities are given elsewhere. If collisions are ignored, the Boltzmann equation reduces to the Vlasov equation. Smoluchowski diffusion equation Consider an overdamped Brownian particle under external force :where the term is negligible (the meaning of "overdamped"). Thus, it is just . The Fokker–Planck equation for this particle is the Smoluchowski diffusion equation: Where is the diffusion constant and . The importance of this equation is it allows for both the inclusion of the effect of temperature on the system of particles and a spatially dependent diffusion constant. 
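A minimal numerical check of the Ornstein–Uhlenbeck example above (Python, illustrative parameters): for a process of the form dX = −θX dt + σ dW, the stationary solution of the corresponding Fokker–Planck equation is a Gaussian of variance σ²/(2θ), and the autocorrelation decays as e^(−θτ); both are compared here against a long Euler–Maruyama trajectory.

import numpy as np

rng = np.random.default_rng(4)

theta, sigma = 1.5, 0.7                           # illustrative parameters
dt, n_steps = 1e-3, 1_000_000

x = np.empty(n_steps)
x[0] = 0.0
kicks = sigma * np.sqrt(dt) * rng.standard_normal(n_steps - 1)
for i in range(n_steps - 1):
    x[i + 1] = x[i] - theta * x[i] * dt + kicks[i]

xs = x[n_steps // 10:]                            # drop the initial transient
lag = int(0.5 / dt)                               # autocorrelation at time lag 0.5
acf = np.corrcoef(xs[:-lag], xs[lag:])[0, 1]

print("stationary variance simulated vs sigma^2/(2*theta):", xs.var(), sigma**2 / (2 * theta))
print("ACF(0.5) simulated vs exp(-theta*0.5)             :", acf, np.exp(-theta * 0.5))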
Starting with the Langevin Equation of a Brownian particle in external field , where is the friction term, is a fluctuating force on the particle, and is the amplitude of the fluctuation. At equilibrium the frictional force is much greater than the inertial force, . Therefore, the Langevin equation becomes, Which generates the following Fokker–Planck equation, Rearranging the Fokker–Planck equation, Where . Note, the diffusion coefficient may not necessarily be spatially independent if or are spatially dependent. Next, the total number of particles in any particular volume is given by, Therefore, the flux of particles can be determined by taking the time derivative of the number of particles in a given volume, plugging in the Fokker–Planck equation, and then applying Gauss's Theorem. In equilibrium, it is assumed that the flux goes to zero. Therefore, Boltzmann statistics can be applied for the probability of a particles location at equilibrium, where is a conservative force and the probability of a particle being in a state is given as . This relation is a realization of the fluctuation–dissipation theorem. Now applying to and using the Fluctuation-dissipation theorem, Rearranging, Therefore, the Fokker–Planck equation becomes the Smoluchowski equation, for an arbitrary force . Computational considerations Brownian motion follows the Langevin equation, which can be solved for many different stochastic forcings with results being averaged (canonical ensemble in molecular dynamics). However, instead of this computationally intensive approach, one can use the Fokker–Planck equation and consider the probability of the particle having a velocity in the interval when it starts its motion with at time 0. 1-D linear potential example Brownian dynamics in one dimension is simple. Theory Starting with a linear potential of the form the corresponding Smoluchowski equation becomes, Where the diffusion constant, , is constant over space and time. The boundary conditions are such that the probability vanishes at with an initial condition of the ensemble of particles starting in the same place, . Defining and and applying the coordinate transformation, With the Smoluchowki equation becomes, Which is the free diffusion equation with solution, And after transforming back to the original coordinates, Simulation The simulation on the right was completed using a Brownian dynamics simulation. Starting with a Langevin equation for the system, where is the friction term, is a fluctuating force on the particle, and is the amplitude of the fluctuation. At equilibrium the frictional force is much greater than the inertial force, . Therefore, the Langevin equation becomes, For the Brownian dynamic simulation the fluctuation force is assumed to be Gaussian with the amplitude being dependent of the temperature of the system . Rewriting the Langevin equation, where is the Einstein relation. The integration of this equation was done using the Euler–Maruyama method to numerically approximate the path of this Brownian particle. Solution Being a partial differential equation, the Fokker–Planck equation can be solved analytically only in special cases. A formal analogy of the Fokker–Planck equation with the Schrödinger equation allows the use of advanced operator techniques known from quantum mechanics for its solution in a number of cases. 
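A minimal Python sketch of the 1-D linear potential example just described, assuming V(x) = a·x, friction constant γ, and the Einstein relation D = kBT/γ (all numerical values illustrative): the Euler–Maruyama ensemble should match the Smoluchowski solution, a Gaussian drifting with mean x0 − (a/γ)t and spreading with variance 2Dt.

import numpy as np

rng = np.random.default_rng(5)

a, gamma, kB_T, x0 = 1.0, 2.0, 1.0, 0.0           # illustrative parameters
D = kB_T / gamma                                  # Einstein relation
dt, t_final, n_paths = 1e-3, 1.0, 100_000
n_steps = int(t_final / dt)

x = np.full(n_paths, x0)
for _ in range(n_steps):
    x += (-a / gamma) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_paths)

# Smoluchowski solution: Gaussian with mean x0 - (a/gamma)*t and variance 2*D*t.
print("mean     simulated vs predicted:", x.mean(), x0 - a / gamma * t_final)
print("variance simulated vs predicted:", x.var(), 2.0 * D * t_final)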
Furthermore, in the case of overdamped dynamics when the Fokker–Planck equation contains second partial derivatives with respect to all spatial variables, the equation can be written in the form of a master equation that can easily be solved numerically. In many applications, one is only interested in the steady-state probability distribution , which can be found from . The computation of mean first passage times and splitting probabilities can be reduced to the solution of an ordinary differential equation which is intimately related to the Fokker–Planck equation. Particular cases with known solution and inversion In mathematical finance for volatility smile modeling of options via local volatility, one has the problem of deriving a diffusion coefficient consistent with a probability density obtained from market option quotes. The problem is therefore an inversion of the Fokker–Planck equation: Given the density f(x,t) of the option underlying X deduced from the option market, one aims at finding the local volatility consistent with f. This is an inverse problem that has been solved in general by Dupire (1994, 1997) with a non-parametric solution. Brigo and Mercurio (2002, 2003) propose a solution in parametric form via a particular local volatility consistent with a solution of the Fokker–Planck equation given by a mixture model. More information is available also in Fengler (2008), Gatheral (2008), and Musiela and Rutkowski (2008). Fokker–Planck equation and path integral Every Fokker–Planck equation is equivalent to a path integral. The path integral formulation is an excellent starting point for the application of field theory methods. This is used, for instance, in critical dynamics. A derivation of the path integral is possible in a similar way as in quantum mechanics. The derivation for a Fokker–Planck equation with one variable is as follows. Start by inserting a delta function and then integrating by parts: The -derivatives here only act on the -function, not on . Integrate over a time interval , Insert the Fourier integral for the -function, This equation expresses as functional of . Iterating times and performing the limit gives a path integral with action The variables conjugate to are called "response variables". Although formally equivalent, different problems may be solved more easily in the Fokker–Planck equation or the path integral formulation. The equilibrium distribution for instance may be obtained more directly from the Fokker–Planck equation. See also Bogoliubov–Born–Green–Kirkwood–Yvon hierarchy of equations Boltzmann equation Convection–diffusion equation Klein–Kramers equation Kolmogorov backward equation Kolmogorov equation Langevin equation Master equation Mean-field game theory Ornstein–Uhlenbeck process Vlasov equation Notes and references Further reading Stochastic processes Equations Parabolic partial differential equations Max Planck Stochastic calculus Mathematical finance Transport phenomena
Fokker–Planck equation
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
2,671
[ "Transport phenomena", "Physical phenomena", "Applied mathematics", "Chemical engineering", "Mathematical objects", "Equations", "Mathematical finance" ]
166,931
https://en.wikipedia.org/wiki/River%20delta
A river delta is a triangular landform created by the deposition of the sediments that are carried by the waters of a river, where the river merges with a body of slow-moving water or with a body of stagnant water. The creation of a river delta occurs at the river mouth, where the river merges into an ocean, a sea, or an estuary, into a lake, a reservoir, or (more rarely) into another river that cannot carry away the sediment supplied by the feeding river. Etymologically, the term river delta derives from the triangular shape (Δ) of the uppercase Greek letter delta. In hydrology, the dimensions of a river delta are determined by the balance between the watershed processes that supply sediment and the watershed processes that redistribute, sequester, and export the supplied sediment into the receiving basin. River deltas are important in human civilization, as they are major agricultural production centers and population centers. They can provide coastline defence and can impact drinking water supply. They are also ecologically important, with different species' assemblages depending on their landscape position. On geologic timescales, they are also important carbon sinks. Etymology A river delta is so named because the shape of the Nile Delta approximates the triangular uppercase Greek letter delta. The triangular shape of the Nile Delta was known to audiences of classical Athenian drama; the tragedy Prometheus Bound by Aeschylus refers to it as the "triangular Nilotic land", though not as a "delta". Herodotus's description of Egypt in his Histories mentions the Delta fourteen times, as "the Delta, as it is called by the Ionians", including describing the outflow of silt into the sea and the convexly curved seaward side of the triangle. Despite making comparisons to other river systems deltas, Herodotus did not describe them as "deltas". The Greek historian Polybius likened the land between the Rhône and Isère rivers to the Nile Delta, referring to both as islands, but did not apply the word delta. According to the Greek geographer Strabo, the Cynic philosopher Onesicritus of Astypalaea, who accompanied Alexander the Great's conquests in India, reported that Patalene (the delta of the Indus River) was "a delta" (). The Roman author Arrian's Indica states that "the delta of the land of the Indians is made by the Indus river no less than is the case with that of Egypt". As a generic term for the landform at the mouth of the river, the word delta is first attested in the English-speaking world in the late 18th century, in the work of Edward Gibbon. Formation River deltas form when a river carrying sediment reaches a body of water, such as a lake, ocean, or a reservoir. When the flow enters the standing water, it is no longer confined to its channel and expands in width. This flow expansion results in a decrease in the flow velocity, which diminishes the ability of the flow to transport sediment. As a result, sediment drops out of the flow and is deposited as alluvium, which builds up to form the river delta. Over time, this single channel builds a deltaic lobe (such as the bird's-foot of the Mississippi or Ural river deltas), pushing its mouth into the standing water. As the deltaic lobe advances, the gradient of the river channel becomes lower because the river channel is longer but has the same change in elevation (see slope). 
As the gradient of the river channel decreases, the amount of shear stress on the bed decreases, which results in the deposition of sediment within the channel and a rise in the channel bed relative to the floodplain. This destabilizes the river channel. If the river breaches its natural levees (such as during a flood), it spills out into a new course with a shorter route to the ocean, thereby obtaining a steeper, more stable gradient. Typically, when the river switches channels in this manner, some of its flow remains in the abandoned channel. Repeated channel-switching events build up a mature delta with a distributary network. Another way these distributary networks form is from the deposition of mouth bars (mid-channel sand and/or gravel bars at the mouth of a river). When this mid-channel bar is deposited at the mouth of a river, the flow is routed around it. This results in additional deposition on the upstream end of the mouth bar, which splits the river into two distributary channels. A good example of the result of this process is the Wax Lake Delta. In both of these cases, depositional processes force redistribution of deposition from areas of high deposition to areas of low deposition. This results in the smoothing of the planform (or map-view) shape of the delta as the channels move across its surface and deposit sediment. Because the sediment is laid down in this fashion, the shape of these deltas approximates a fan. The more often the flow changes course, the shape develops closer to an ideal fan because more rapid changes in channel position result in a more uniform deposition of sediment on the delta front. The Mississippi and Ural River deltas, with their bird's feet, are examples of rivers that do not avulse often enough to form a symmetrical fan shape. Alluvial fan deltas, as seen by their name, avulse frequently and more closely approximate an ideal fan shape. Most large river deltas discharge to intra-cratonic basins on the trailing edges of passive margins due to the majority of large rivers such as the Mississippi, Nile, Amazon, Ganges, Indus, Yangtze, and Yellow River discharging along passive continental margins. This phenomenon is due mainly to three factors: topography, basin area, and basin elevation. Topography along passive margins tend to be more gradual and widespread over a greater area enabling sediment to pile up and accumulate over time to form large river deltas. Topography along active margins tends to be steeper and less widespread, which results in sediments not having the ability to pile up and accumulate due to the sediment traveling into a steep subduction trench rather than a shallow continental shelf. There are many other lesser factors that could explain why the majority of river deltas form along passive margins rather than active margins. Along active margins, orogenic sequences cause tectonic activity to form over-steepened slopes, brecciated rocks, and volcanic activity resulting in delta formation to exist closer to the sediment source. When sediment does not travel far from the source, sediments that build up are coarser grained and more loosely consolidated, therefore making delta formation more difficult. Tectonic activity on active margins causes the formation of river deltas to form closer to the sediment source which may affect channel avulsion, delta lobe switching, and auto cyclicity. Active margin river deltas tend to be much smaller and less abundant but may transport similar amounts of sediment. 
However, the sediment is never piled up in thick sequences due to the sediment traveling and depositing in deep subduction trenches. At the mouth of a river, the change in flow conditions can cause the river to drop any sediment it is carrying. This sediment deposition can generate a variety of landforms, such as deltas, sand bars, spits, and tie channels. Landforms at the river mouth drastically alter the geomorphology and ecosystem. Types Deltas are typically classified according to the main control on deposition, which is a combination of river, wave, and tidal processes, depending on the strength of each. The other two factors that play a major role are landscape position and the grain size distribution of the source sediment entering the delta from the river. Fluvial-dominated deltas Fluvial-dominated deltas are found in areas of low tidal range and low wave energy. Where the river water is nearly equal in density to the basin water, the delta is characterized by homopycnal flow, in which the river water rapidly mixes with basin water and abruptly dumps most of its sediment load. Where the river water has a higher density than basin water, typically from a heavy load of sediment, the delta is characterized by hyperpycnal flow in which the river water hugs the basin bottom as a density current that deposits its sediments as turbidites. When the river water is less dense than the basin water, as is typical of river deltas on an ocean coastline, the delta is characterized by hypopycnal flow in which the river water is slow to mix with the denser basin water and spreads out as a surface fan. This allows fine sediments to be carried a considerable distance before settling out of suspension. Beds in a hypocynal delta dip at a very shallow angle, around 1 degree. Fluvial-dominated deltas are further distinguished by the relative importance of the inertia of rapidly flowing water, the importance of turbulent bed friction beyond the river mouth, and buoyancy. Outflow dominated by inertia tends to form Gilbert-type deltas. Outflow dominated by turbulent friction is prone to channel bifurcation, while buoyancy-dominated outflow produces long distributaries with narrow subaqueous natural levees and few channel bifurcations. The modern Mississippi River delta is a good example of a fluvial-dominated delta whose outflow is buoyancy-dominated. Channel abandonment has been frequent, with seven distinct channels active over the last 5000 years. Other fluvial-dominated deltas include the Mackenzie delta and the Alta delta. Gilbert deltas A Gilbert delta (named after Grove Karl Gilbert) is a type of fluvial-dominated delta formed from coarse sediments, as opposed to gently sloping muddy deltas such as that of the Mississippi. For example, a mountain river depositing sediment into a freshwater lake would form this kind of delta. It is commonly a result of homopycnal flow. Such deltas are characterized by a tripartite structure of topset, foreset, and bottomset beds. River water entering the lake rapidly deposits its coarser sediments on the submerged face of the delta, forming steeping dipping foreset beds. The finer sediments are deposited on the lake bottom beyond this steep slope as more gently dipping bottomset beds. Behind the delta front, braided channels deposit the gently dipping beds of the topset on the delta plain. 
While some authors describe both lacustrine and marine locations of Gilbert deltas, others note that their formation is more characteristic of the freshwater lakes, where it is easier for the river water to mix with the lakewater faster (as opposed to the case of a river falling into the sea or a salt lake, where less dense fresh water brought by the river stays on top longer). Gilbert himself first described this type of delta on Lake Bonneville in 1885. Elsewhere, similar structures occur, for example, at the mouths of several creeks that flow into Okanagan Lake in British Columbia and form prominent peninsulas at Naramata, Summerland, and Peachland. Wave-dominated deltas In wave-dominated deltas, wave-driven sediment transport controls the shape of the delta, and much of the sediment emanating from the river mouth is deflected along the coastline. The relationship between waves and river deltas is quite variable and largely influenced by the deepwater wave regimes of the receiving basin. With a high wave energy near shore and a steeper slope offshore, waves will make river deltas smoother. Waves can also be responsible for carrying sediments away from the river delta, causing the delta to retreat. For deltas that form further upriver in an estuary, there are complex yet quantifiable linkages between winds, tides, river discharge, and delta water levels. Tide-dominated deltas Erosion is also an important control in tide-dominated deltas, such as the Ganges Delta, which may be mainly submarine, with prominent sandbars and ridges. This tends to produce a "dendritic" structure. Tidal deltas behave differently from river-dominated and wave-dominated deltas, which tend to have a few main distributaries. Once a wave-dominated or river-dominated distributary silts up, it is abandoned, and a new channel forms elsewhere. In a tidal delta, new distributaries are formed during times when there is a lot of water around – such as floods or storm surges. These distributaries slowly silt up at a more or less constant rate until they fizzle out. Tidal freshwater deltas A tidal freshwater delta is a sedimentary deposit formed at the boundary between an upland stream and an estuary, in the region known as the "subestuary". Drowned coastal river valleys that were inundated by rising sea levels during the late Pleistocene and subsequent Holocene tend to have dendritic estuaries with many feeder tributaries. Each tributary mimics this salinity gradient from its brackish junction with the mainstem estuary up to the fresh stream feeding the head of tidal propagation. As a result, the tributaries are considered to be "subestuaries". The origin and evolution of a tidal freshwater delta involves processes that are typical of all deltas as well as processes that are unique to the tidal freshwater setting. The combination of processes that create a tidal freshwater delta result in a distinct morphology and unique environmental characteristics. Many tidal freshwater deltas that exist today are directly caused by the onset of or changes in historical land use, especially deforestation, intensive agriculture, and urbanization. These ideas are well illustrated by the many tidal freshwater deltas prograding into Chesapeake Bay along the east coastline of the United States. Research has demonstrated that the accumulating sediments in this estuary derive from post-European settlement deforestation, agriculture, and urban development. 
Estuaries Other rivers, particularly those on coasts with significant tidal range, do not form a delta but enter into the sea in the form of an estuary. Notable examples include the Gulf of Saint Lawrence and the Tagus estuary. Inland deltas In rare cases, the river delta is located inside a large valley and is called an inverted river delta. Sometimes a river divides into multiple branches in an inland area, only to rejoin and continue to the sea. Such an area is called an inland delta, and often occurs on former lake beds. The term was first coined by Alexander von Humboldt for the middle reaches of the Orinoco River, which he visited in 1800. Other prominent examples include the Inner Niger Delta, Peace–Athabasca Delta, the Sacramento–San Joaquin River Delta, and the Sistan delta of Iran. The Danube has one in the valley on the Slovak–Hungarian border between Bratislava and Iža. In some cases, a river flowing into a flat arid area splits into channels that evaporate as it progresses into the desert. The Okavango Delta in Botswana is one example. See endorheic basin. Mega deltas The generic term mega delta can be used to describe very large Asian river deltas, such as the Yangtze, Pearl, Red, Mekong, Irrawaddy, Ganges-Brahmaputra, and Indus. Sedimentary structure The formation of a delta is complicated, multiple, and cross-cutting over time, but in a simple delta three main types of bedding may be distinguished: the bottomset beds, foreset/frontset beds, and topset beds. This three-part structure may be seen on small scale by crossbedding. The bottomset beds are created from the lightest suspended particles that settle farthest away from the active delta front, as the river flow diminishes into the standing body of water and loses energy. This suspended load is deposited by sediment gravity flow, creating a turbidite. These beds are laid down in horizontal layers and consist of the finest grain sizes. The foreset beds in turn are deposited in inclined layers over the bottomset beds as the active lobe advances. Foreset beds form the greater part of the bulk of a delta, (and also occur on the lee side of sand dunes). The sediment particles within foreset beds consist of larger and more variable sizes, and constitute the bed load that the river moves downstream by rolling and bouncing along the channel bottom. When the bed load reaches the edge of the delta front, it rolls over the edge, and is deposited in steeply dipping layers over the top of the existing bottomset beds. Underwater, the slope of the outermost edge of the delta is created at the angle of repose of these sediments. As the foresets accumulate and advance, subaqueous landslides occur and readjust overall slope stability. The foreset slope, thus created and maintained, extends the delta lobe outward. In cross section, foresets typically lie in angled, parallel bands, and indicate stages and seasonal variations during the creation of the delta. The topset beds of an advancing delta are deposited in turn over the previously laid foresets, truncating or covering them. Topsets are nearly horizontal layers of smaller-sized sediment deposited on the top of the delta and form an extension of the landward alluvial plain. As the river channels meander laterally across the top of the delta, the river is lengthened and its gradient is reduced, causing the suspended load to settle out in nearly horizontal beds over the delta's top. Topset beds are subdivided into two regions: the upper delta plain and the lower delta plain. 
The upper delta plain is unaffected by the tide, while the boundary with the lower delta plain is defined by the upper limit of tidal influence. Existential threats to deltas Human activities in both deltas and the river basins upstream of deltas can radically alter delta environments. Upstream land use changes, such as anti-erosion agricultural practices, and hydrological engineering, such as dam construction in the basins feeding deltas, have reduced river sediment delivery to many deltas in recent decades. As a result, there is less sediment available to maintain delta landforms and to compensate for erosion and sea level rise, causing some deltas to start losing land. Declines in river sediment delivery are projected to continue in the coming decades. The extensive anthropogenic activities in deltas also interfere with geomorphological and ecological delta processes. People living on deltas often construct flood defences which prevent sedimentation from floods, which in turn means that sediment deposition cannot compensate for subsidence and erosion. In addition to interference with delta aggradation, pumping of groundwater, oil, and gas, and constructing infrastructure all accelerate subsidence, increasing relative sea level rise. Anthropogenic activities can also destabilise river channels through sand mining, and cause saltwater intrusion. There are small-scale efforts to correct these issues, improve delta environments and increase environmental sustainability through sedimentation-enhancing strategies. While nearly all deltas have been impacted to some degree by humans, the Nile Delta and Colorado River Delta are some of the most extreme examples of the devastation caused to deltas by damming and diversion of water. Historical records show that during the Roman Empire and Little Ice Age (times when there was considerable anthropogenic pressure), there was significant sediment accumulation in deltas. The industrial revolution has only amplified the impact of humans on delta growth and retreat. Deltas in the economy Ancient deltas benefit the economy due to their well-sorted sand and gravel. Sand and gravel are often quarried from these old deltas and used in concrete for highways, buildings, sidewalks, and landscaping. More than 1 billion tons of sand and gravel are produced in the United States alone. Not all sand and gravel quarries are former deltas, but for ones that are, much of the sorting is already done by the power of water. Urban areas and human habitation tend to be located in lowlands near water access for transportation and sanitation. This makes deltas a common location for civilizations to flourish due to access to flat land for farming, freshwater for sanitation and irrigation, and sea access for trade. Deltas often host extensive industrial and commercial activities as well as agricultural land, and these uses are frequently in conflict. Some of the world's largest regional economies are located on deltas such as the Pearl River Delta, Yangtze River Delta, European Low Countries and the Greater Tokyo Area. Examples The Ganges–Brahmaputra Delta, which spans most of Bangladesh and West Bengal and empties into the Bay of Bengal, is the world's largest delta. The Selenga River delta in the Russian republic of Buryatia is the largest delta emptying into a body of fresh water, in its case Lake Baikal. Deltas on Mars Researchers have found a number of examples of deltas that formed in Martian lakes. Finding deltas is a major sign that Mars once had large amounts of water.
Deltas have been found over a wide geographical range. See also – Complex delta in south-east China References Bibliography Renaud, F. and C. Kuenzer 2012: The Mekong Delta System – Interdisciplinary Analyses of a River Delta, Springer, pp. 7–48 Kuenzer, C. and Renaud, F. 2012: Climate Change and Environmental Change in River Deltas Globally. In: Renaud, F. and C. Kuenzer (eds.): The Mekong Delta System – Interdisciplinary Analyses of a River Delta, Springer, pp. 7–48 Claudia Kuenzer, Valentin Heimhuber, Juliane Huth, Stefan Dech: Remote Sensing for the Quantification of Land Surface Dynamics in Large River Delta Regions – A Review. Remote Sensing, 11(17), 2019, pp. 1–42. doi: 10.3390/rs11171985. ISSN 2072-4292. Michel Leonard Wolters, Claudia Kuenzer: Vulnerability assessments of coastal river deltas – categorization and review. Journal of Coastal Conservation, 3/2015, 2015, pp. 1–24, doi: 10.1007/s11852-015-0396-6. ISSN 1400-0350. External links Louisiana State University Geology – World Deltas http://www.wisdom.eoc.dlr.de WISDOM Water-related Information System for the Sustainable Development of the Mekong Delta Wave-dominated river deltas on coastalwiki.org – A coastalwiki.org page on wave-dominated river deltas Aquatic ecology Ecology Coastal geography Sedimentology Fluvial landforms Water streams Bodies of water Water
River delta
[ "Biology", "Environmental_science" ]
4,644
[ "Hydrology", "Ecology", "Ecosystems", "Water", "Aquatic ecology" ]
166,939
https://en.wikipedia.org/wiki/Newi
Newi is an acronym for NEw World Infrastructure, a software architecture for software componentry, mostly known as Newi Business Objects which coined the term business object. Newi was developed by Oliver Sims at the software engineering company Integrated Object Systems, England. It was one of the first implemented architectures for software components. Overview Newi was what today is called a component container. The concepts behind the Newi middleware can be found in Oliver Sims' book "Business Objects", McGraw-Hill 1994. In spite of the title, the book was about software components. Newi components were language-neutral. That is, a Newi component could be written in one of a variety of languages that was supported by Newi. At its height, Newi supported software components written in Cobol, Ada, C, C++, Rexx, and Java. Platforms supported included Windows 3.1, Win95, WinNT, three varieties of Unix - and a prototype supporting components written in RPG was running on the AS400. Newi components were intended to be "objects in the large". There was a form of sub/supertyping which was implemented by the infrastructure through an intelligent delegation mechanism. For example, a component written in C could be "subtyped" by a component written in Cobol. Component names (or types?) were separated from the code implementation module. Messages (both sync and async) were passed using a proprietary form of "tagged data" (a similar concept to today's XML). There was also a notification service. The various system services (including the GUI framework and communications subsystems) were implemented as Newi components. Throughout, there was a rigorous focus on making the programming of application components as simple as possible, with Newi providing many transparencies. From the start, Newi was targeted at both front-end GUI systems and back-end server systems. The front-end version had a GUI run-time framework implemented as components. The component concept fitted very well with the object-based UI provided. Hence a designer/programmer used the same technical code structure to implement both front-end and back-end business function. History The initial concept behind Newi originated in 1989 when Oliver Sims, then working for IBM, saw the need for an infrastructure whereby a given real-world business concept (process or entity) could be implemented as a software module that could be plugged into a running system. Applications would be created by composing an appropriate set of modules. IBM UK funded development of the concept through collaboration with Softwright, a UK bespoke software company. After several successful prototypes, and an early AS400 production version, a joint venture called Integrated Object Systems (IOS) was created in 1993 to exploit the concept. The first version of Newi was announced and shipped in 1994. In early 1996, IOS was bought by SSA (System Software Associates, Inc), who saw great potential in Newi. The software was significantly further developed within SSA, in particular in its back-end capability, as well as having its underlying communications function moved to a COTS Corba product that provided the communications "wet string" while maintaining the Newi programming model and loosely coupled component interaction. Tools were also significantly expanded. SSA also announced an early and proprietary form of web services, based on the re-developed Newi, called "Semantic Message Gateways", or SMG. 
In 1998, for reasons other than their technology base, SSA had to downsize dramatically; Newi development was halted then abandoned, and the development team (around forty people in UK and US) was dispersed. See also business object Software component References Further reading Peter Eeles and Oliver Sims, Building Business Objects, Wiley 1998. Peter Herzum and Oliver Sims, Business Component Factory, Wiley 2000. Component-based software engineering History of computing in the United Kingdom
Newi
[ "Technology" ]
796
[ "Component-based software engineering", "Components", "History of computing", "History of computing in the United Kingdom" ]
166,969
https://en.wikipedia.org/wiki/Slavoj%20%C5%BDi%C5%BEek
Slavoj Žižek ( , ; born 21 March 1949) is a Slovenian Marxist philosopher, cultural theorist and public intellectual. He is the international director of the Birkbeck Institute for the Humanities at the University of London, Global Distinguished Professor of German at New York University, professor of philosophy and psychoanalysis at the European Graduate School and senior researcher at the Institute for Sociology and Philosophy at the University of Ljubljana. He primarily works on continental philosophy (particularly Hegelianism, psychoanalysis and Marxism) and political theory, as well as film criticism and theology. Žižek is the most famous associate of the Ljubljana School of Psychoanalysis, a group of Slovenian academics working on German idealism, Lacanian psychoanalysis, ideology critique, and media criticism. His breakthrough work was 1989's The Sublime Object of Ideology, his first book in English, which was decisive in the introduction of the Ljubljana School's thought to English-speaking audiences. He has written over 50 books in multiple languages and speaks Slovene, Serbo-Croatian, English, German, and French. The idiosyncratic style of his public appearances, frequent magazine op-eds, and academic works, characterised by the use of obscene jokes and pop cultural examples, as well as politically incorrect provocations, have gained him fame, controversy and criticism both in and outside academia. Life and career Early life Žižek was born in Ljubljana, PR Slovenia, Yugoslavia, into a middle-class family. His father Jože Žižek was an economist and civil servant from the region of Prekmurje in eastern Slovenia. His mother Vesna, a native of the Gorizia Hills in the Slovenian Littoral, was an accountant in a state enterprise. His parents were atheists. He spent most of his childhood in the coastal town of Portorož, where he was exposed to Western film, theory and popular culture. When Žižek was a teenager his family moved back to Ljubljana where he attended Bežigrad High School. Originally wanting to become a filmmaker himself, he abandoned these ambitions and chose to pursue philosophy instead. Education In 1967, during an era of liberalization in Titoist Yugoslavia, Žižek enrolled at the University of Ljubljana and studied philosophy and sociology. Žižek had already begun reading French structuralists prior to entering university, and in 1967 he published the first translation of a text by Jacques Derrida into Slovenian. Žižek frequented the circles of dissident intellectuals, including the Heideggerian philosophers Tine Hribar and Ivo Urbančič, and published articles in alternative magazines, such as Praxis, Tribuna and Problemi, which he also edited. In 1971 he accepted a job as an assistant researcher with the promise of tenure, but was dismissed after his Master's thesis was denounced by the authorities as being "non-Marxist". He graduated from the University of Ljubljana in 1981 with a Doctor of Arts in Philosophy for his dissertation entitled The Theoretical and Practical Relevance of French Structuralism. He spent the next few years in what was described as "professional wilderness", also fulfilling his legal duty of undertaking a year-long national service in the Yugoslav People's Army in Karlovac. Academic career During the 1980s, Žižek edited and translated Jacques Lacan, Sigmund Freud, and Louis Althusser. He used Lacan's work to interpret Hegelian and Marxist philosophy. 
In 1986, Žižek completed a second doctorate (Doctor of Philosophy in psychoanalysis) at the University of Paris VIII under Jacques-Alain Miller, entitled "La philosophie entre le symptôme et le fantasme". Žižek wrote the introduction to Slovene translations of G. K. Chesterton's and John le Carré's detective novels. In 1988, he published his first book dedicated entirely to film theory, Pogled s strani. The following year, he achieved international recognition as a social theorist with the 1989 publication of his first book in English, The Sublime Object of Ideology. Žižek has been publishing in journals such as Lacanian Ink and In These Times in the United States, the New Left Review and The London Review of Books in the United Kingdom, and with the Slovenian left-liberal magazine Mladina and newspapers Dnevnik and Delo. He also cooperates with the Polish leftist magazine Krytyka Polityczna, regional southeast European left-wing journal Novi Plamen, and serves on the editorial board of the psychoanalytical journal Problemi. Žižek is a series editor of the Northwestern University Press series Diaeresis that publishes works that "deal not only with philosophy, but also will intervene at the levels of ideology critique, politics, and art theory". In 2012, Foreign Policy listed Žižek on its list of Top 100 Global Thinkers, calling him "a celebrity philosopher", while elsewhere he has been dubbed the "Elvis of cultural theory" and "the most dangerous philosopher in the West". Žižek has been called "the leading Hegelian of our time", and "the foremost exponent of Lacanian theory". A journal, the International Journal of Žižek Studies, was founded by professors David J. Gunkel and Paul A. Taylor to engage with his work. Political career In the late 1980s, Žižek came to public attention as a columnist for the alternative youth magazine Mladina, which was critical of Tito's policies, Yugoslav politics, especially the militarization of society. He was a member of the League of Communists of Slovenia until October 1988, when he quit in protest against the JBTZ trial together with 32 other Slovenian intellectuals. Between 1988 and 1990, he was actively involved in several political and civil society movements which fought for the democratization of Slovenia, most notably the Committee for the Defence of Human Rights. In the first free elections in 1990, he ran as the Liberal Democratic Party's candidate for the former four-person collective presidency of Slovenia. Žižek is a member of the Democracy in Europe Movement 2025 (DiEM25) founded in 2016. Public life In 2003, Žižek wrote text to accompany Bruce Weber's photographs in a catalog for Abercrombie & Fitch. Questioned as to the seemliness of a major intellectual writing ad copy, Žižek told The Boston Globe, "If I were asked to choose between doing things like this to earn money and becoming fully employed as an American academic, kissing ass to get a tenured post, I would with pleasure choose writing for such journals!" Žižek and his thought have been the subject of several documentaries. The 1996 Liebe Dein Symptom wie Dich selbst! is a German documentary on him. In the 2004 The Reality of the Virtual, Žižek gave an hour-long lecture on his interpretation of Lacan's tripartite thesis of the imaginary, the symbolic, and the real. Zizek! is a 2005 documentary by Astra Taylor on his philosophy. The 2006 The Pervert's Guide to Cinema and 2012 The Pervert's Guide to Ideology also portray Žižek's ideas and cultural criticism. 
Examined Life (2008) features Žižek speaking about his conception of ecology at a garbage dump. He was also featured in the 2011 Marx Reloaded, directed by Jason Barker. Foreign Policy named Žižek one of its 2012 Top 100 Global Thinkers "for giving voice to an era of absurdity". In 2019, Žižek began hosting a mini-series called How to Watch the News with Slavoj Žižek on the RT network. In April, Žižek debated psychology professor Jordan Peterson at the Sony Centre in Toronto, Canada over happiness under capitalism versus Marxism. Personal life Žižek has been married four times and has two adult sons, Tim and Kostja. His second wife was Slovene philosopher and socio-legal theorist Renata Salecl, fellow member of the Ljubljana school of psychoanalysis. His third wife was Argentinian model and Lacanian scholar Analia Hounie, whom he married in 2005. Currently, he is married to Slovene journalist, author and philosopher, Jela Krečič. In early 2018, Žižek experienced Bell's palsy on the right side of his face. He went on to give several lectures and interviews with this condition; on March 9 of that year, during a lecture on political revolutions in London, he commented on the treatment he had been receiving, and used his paralysis as a metaphor for political idleness. Aside from his native Slovene, Žižek is a fluent speaker of Serbo-Croatian, French, German and English. Taste In the 2012 Sight & Sound critics' poll, Žižek listed his 10 favourite films: 3:10 to Yuma, Dune, The Fountainhead, Hero, Hitman, Nightmare Alley, On Dangerous Ground, Opfergang, The Sound of Music, and We the Living. On this list, he clarified: "I opted for pure madness: the list contains only 'guilty pleasures'". In his tour of The Criterion Collection closet, he chose Trouble in Paradise, Sweet Smell of Success, Picnic at Hanging Rock, Murmur of the Heart, The Joke, The Ice Storm, Great Expectations, Roberto Rossellini's History Films, City Lights, a box set of Carl Theodor Dreyer's films, Y tu mamá también and Antichrist. In an article called "My Favourite Classics", Žižek states that Arnold Schoenberg's Gurre-Lieder is the piece of music he would take to a desert island. He goes on to list other favourites, including Beethoven's Fidelio, Schubert's Winterreise, Mussorgsky's Khovanshchina and Donizetti's L'elisir d'amore. He expresses a particular love for Wagner, particularly Das Rheingold and Parsifal. He ranks Schoenberg over Stravinsky, and insists on Eisler's importance among Schoenberg's followers. Žižek often lists Franz Kafka, Samuel Beckett and Andrei Platonov as his "three absolute masters of 20th-century literature". He ranks/prefers Varlam Shalamov over Aleksandr Solzhenitsyn, Marina Tsvetaeva and Osip Mandelstam over Anna Akhmatova, Daphne du Maurier over Virginia Woolf, and Samuel Beckett over James Joyce. His theories have been applied to studying a variety of literature, including Finnegans Wake. Thought and positions Žižek and his thought have been described by many commentators as "Hegelo-Lacanian". In his early career, Žižek claimed "a theoretical space moulded by three centres of gravity: Hegelian dialectics, Lacanian psychoanalytic theory, and contemporary criticism of ideology", designating "the theory of Jacques Lacan" as the fundamental element. In 2010, Žižek instead claimed that for him Hegel is more fundamental than Lacan—"Even Lacan is just a tool for me to read Hegel. 
For me, always it is Hegel, Hegel, Hegel."—while in 2019, he claimed that "For me, in some sense, all of philosophy happened in [the] fifty years" between Immanuel Kant's Critique of Pure Reason (1781) and the death of Georg Wilhelm Friedrich Hegel (1831). Alongside his academic, theoretical works, Žižek is a prolific commentator on current affairs and contemporary political debates. Subjectivity For Žižek, although a subject may take on a symbolic (social) position, it can never be reduced to this attempted symbolisation, since the very "taking on" of this position implies a separate 'I', beyond the symbolic, that does the taking on. Yet, under scrutiny, nothing positive can be said about this subject, this 'I', that eludes symbolisation; it cannot be discerned as anything but "that which cannot be symbolised". Thus, without the initial, attempted, failed symbolisation, subjectivity cannot present itself. As Žižek writes in his first book in English: "the subject of the signifier is a retroactive effect of the failure of its own representation; that is why the failure of representation is the only way to represent it adequately." Žižek attributes this position on the subject to Hegel, particularly his description of man as "the night of the world", and to Lacan, with his description of the barred, split subject, who he sees as developing the Cartesian notion of the cogito. According to Žižek, these thinkers, in insisting on the role of the subject, run counter to "culturalist" or "historicist" positions held by thinkers such as Louis Althusser and Michel Foucault, which posit that "subjects" are bound by and reducible to their historical/cultural(/symbolic) context. Political theory Ideology Žižek's Lacanian-informed theory of ideology is one of his major contributions to political theory; his first book in English, The Sublime Object of Ideology, and the documentary The Pervert's Guide to Ideology, in which he stars, are among the well-known places in which it is discussed. Žižek believes that ideology has been frequently misinterpreted as dualistic and, according to him, this misinterpreted dualism posits that there is a real world of material relations and objects outside of oneself, which is accessible to reason. For Žižek, as for Marx, ideology is made up of fictions that structure political life; in Lacan's terms, ideology belongs to the symbolic order. Žižek argues that these fictions are primarily maintained at an unconscious level, rather than a conscious one. Since, according to psychoanalytic theory, the unconscious can determine one's actions directly, bypassing one's conscious awareness (as in parapraxes), ideology can be expressed in one's behaviour, regardless of one's conscious beliefs. Hence, Žižek breaks with orthodox Marxist accounts that view ideology purely as a system of mistaken beliefs (see False consciousness). Drawing on Peter Sloterdijk's Critique of Cynical Reason, Žižek argues that adopting a cynical perspective is not enough to escape ideology, since, according to Žižek, even though postmodern subjects are consciously cynical about the political situation, they continue to reinforce it through their behaviour. Freedom Žižek claims that (a sense of) political freedom is sustained by a deeper unfreedom, at least under liberal capitalism. 
In a 2002 article, Žižek endorses Lenin's distinction between formal and actual freedom, claiming that liberal society only contains formal freedom, "freedom of choice within the coordinates of the existing power relations", while prohibiting actual freedom, "the site of an intervention that undermines these very coordinates." In an oft-quoted passage from a book published in the same year, he writes that, in these conditions of liberal censorship, "we 'feel free' because we lack the very language to articulate our unfreedom". In a 2019 article, he writes that Marx "made a valuable point with his claim that the market economy combines in a unique way political and personal freedom with social unfreedom: personal freedom (freely selling myself on the market) is the very form of my unfreedom." However, in 2014, he rejects the "pseudo-Marxist" total derision of 'formal freedom', claiming that it is necessary for critique: "When we are formally free, only then we become aware how limited this freedom actually is." Žižek co-signed a petition condemning the "use of disproportionate force and retaliatory brutality by the Hong Kong Police against students in university campuses in Hong Kong" during the 2019–2020 Hong Kong protests. The petition concludes with the statement: "We believe the defence of academic freedom, the freedom of speech, freedom of the press, freedom of assembly and association, and the responsibility to protect the safety of our students are universal causes common to all." Theology Žižek has asserted that "Atheism is a legacy worth fighting for" in The New York Times. However, he nonetheless finds extensive conceptual value in Christianity, particularly Protestantism: the subtitle of his 2000 book The Fragile Absolute is "Or, Why Is the Christian Legacy Worth Fighting For?". Hence, he labels his position 'Christian Atheism', and has written about theology at length. In The Pervert's Guide to Ideology, Žižek suggests that "the only way to be an Atheist is through Christianity", since, he claims, atheism often fails to escape the religious paradigm by remaining faithful to an external guarantor of meaning, simply switching God for natural necessity or evolution. Christianity, on the other hand, in the doctrine of the incarnation, brings God down from the 'beyond' and onto earth, into human affairs; for Žižek, this paradigm is more authentically godless, since the external guarantee is abolished. Communism Although sometimes adopting the title of 'radical leftist', Žižek also controversially insists on identifying as a communist, even though he rejects 20th century communism as a "total failure", and decries "the communism of the 20th century, more specifically all the network of phenomena we refer to as Stalinism as "maybe the worst ideological, political, ethical, social (and so on) catastrophe in the history of humanity." Žižek justifies this choice by claiming that only the term 'communism' signals a genuine step outside of the existing order, in part since the term 'socialism' no longer has radical enough implications, and means nothing more than that one "care[s] for society." In Marx Reloaded, Žižek rejects both 20th-century totalitarianism and "spontaneous local self-organisation, direct democracy, councils, and so on". There, he endorses a definition of communism as "a society where you, everyone would be allowed to dwell in his or her stupidity", an idea with which he credits Fredric Jameson as the inspiration. 
Žižek has labelled himself a "communist in a qualified sense" and a "moderately conservative Communist". When he spoke at a conference on The Idea of Communism, he applied (in qualified form) the 'communist' label to the Occupy Wall Street protestors. Electoral politics In May 2013, during Subversive Festival, Žižek commented: "If they don't support SYRIZA, then, in my vision of the democratic future, all these people will get from me [is] a first-class one-way ticket to [a] gulag." In response, the center-right New Democracy party claimed Žižek's comments should be understood literally, not ironically. Just before the 2017 French presidential election, Žižek stated that one could not choose between Macron and Le Pen, arguing that the neoliberalism of Macron just gives rise to neofascism anyway. This was in response to many on the left calling for support for Macron to prevent a Le Pen victory. In 2022, Žižek expressed his support for the Slovenian political party Levica (The Left) at its 5th annual conference. Support for Donald Trump's election In a 2016 interview with Channel 4, Žižek said that were he American, he would vote for Donald Trump in the 2016 United States presidential election. These views were derisively characterised as accelerationist by Left Voice, and were labelled "regressive" by Noam Chomsky. In 2019 and 2020, Žižek defended his views, saying that Trump's election "created, for the first time in I don't know how many decades, a true American left", citing the boost it gave Bernie Sanders and Alexandria Ocasio-Cortez. However, regarding the 2020 United States presidential election, Žižek reported himself "tempted by changing his position", saying "Trump is a little too much". In another interview, he stood by his 2016 "wager" that Trump's election would lead to a socialist reaction ("maybe I was right"), but claimed that "now with coronavirus: no, no—no Trump. ... difficult as it is for me to say this, but now I would say 'Biden better than Trump', although he is far from ideal." In his 2022 book, Heaven in Disorder, Žižek continued to express a preference for Joe Biden over Donald Trump, stating "Trump was corroding the ethical substance of our lives", while Biden lies and represents big capital more politely. Social issues Žižek's views on social issues such as Eurocentrism, immigration and the LGBT movement have drawn criticism and accusations of bigotry. Europe and multiculturalism In his 1997 article 'Multiculturalism, Or, The Cultural Logic of Multinational Capitalism', Žižek critiqued multiculturalism for privileging a culturally 'neutral' perspective from which all cultures are disaffectedly apprehended in their particularity, because this distancing reproduces the racist procedure of Othering. He further argues that a fixation on particular identities and struggles corresponds to an abandonment of the universal struggle against global capitalism. In his 1998 article 'A Leftist Plea for "Eurocentrism"', he argued that Leftists should 'undermine the global empire of capital, not by asserting particular identities, but through the assertion of a new universality', and that in this struggle the European universalist value of egaliberte (Etienne Balibar's term) should be foregrounded, proposing 'a Leftist appropriation of the European legacy'. Elsewhere, he has also argued, defending Marx, that Europe's destruction of non-European tradition (e.g.
through imperialism and slavery) has opened up the space for a 'double liberation', both from tradition and from European domination. In her 2010 article 'The Two Zizeks', Nivedita Menon criticised Žižek for focusing on differentiation as a colonial project, ignoring how assimilation was also such a project; she also critiqued him for privileging the European Enlightenment Christian legacy as neutral, 'free of the cultural markers that fatally afflict all other religions.' David Pavón Cuéllar, closer to Žižek, also criticised him. In the mid-2010s, over the issue of Eurocentrism, there was a dispute between Žižek and Walter Mignolo, in which Mignolo (supporting a previous article by Hamid Dabashi, which argued against the centrality of European philosophers like Žižek, criticised by Michael Marder) argued, against Žižek, that decolonial struggle should forget European philosophy, purportedly following Frantz Fanon; in response, Žižek pointed out Fanon's European intellectual influences, and his resistance to being confined within the black tradition, and claimed to be following Fanon on this point. In his book Can Non-Europeans Think? (with a foreword by Mignolo), Dabashi also critiqued Žižek for privileging Europe; Žižek argued that Dabashi slanderously and comically misrepresents him through misattribution, a critique supported by Ilan Kapoor. Transgender issues In his 2016 article "The Sexual Is Political", Žižek argued that all subjects are, like transgender subjects, in discord with the sexual position assigned to them. For Žižek, any attempt to escape this antagonism is false and utopian: thus, he rejects both the reactionary attempt to violently impose sexual fixity and the "postgenderist" attempt to escape sexual fixity entirely; he aligns the latter with 'transgenderism', which he claims does not adequately describe the behaviour of actual transgender subjects, who seek a stable "place where they could recognise themselves" (e.g., a bathroom that confirms their identity). Žižek argues for a third bathroom: a "GENERAL GENDER" bathroom that would represent the fact that both sexual positions (Žižek insists on the unavoidable "twoness" of the sexual landscape) are missing something and thus fail to adequately represent the subjects that take them on. In his 2019 article "Transgender dogma is naive and incompatible with Freud", Žižek argued that there is "a tension in LGBT+ ideology between social constructivism and (some kind of biological) determinism", between the idea that gender is a social construct, and the idea that gender is essential and pre-social. He concludes the essay with a "Freudian solution" to this deadlock. Che Gossett criticized Žižek for his use of the "pathologising" term "transgenderism" throughout the 2016 article, and for writing "about trans subjectivity with such assumed authority while ignoring the voices of trans theorists (academics and activists) entirely", as well as for purportedly claiming that a "futuristic" vision underlies so-called "transgenderism", ignoring present-day oppression.
Sam Warren Miell and Chris Coffman, both psychoanalytically inclined, have separately criticized Žižek for conflating transgenderism and postgenderism; Miell further criticised the 2014 article for rehearsing homophobic/transphobic clichés (including Žižek's designation of inter-species marriage as a possible "anti-discriminatory demand"), and misusing Lacanian theory; Coffman argued that Žižek should have engaged with contemporary Lacanian trans studies, which would have shown that psychoanalytic and transgender discourses were aligned, not opposed. In response to the title of the 2019 article, McKenzie Wark had t-shirts made with the transgender flag and "Incompatible with Freud" printed on them. Žižek defended his 2016 article in two follow-up pieces. The first addresses purported misreadings of his position, while the second is a more sustained defence (against Miell) of the article's application of Lacanian theory, to which Miell responded in turn. Douglas Lain also defended Žižek, claiming that context makes it clear that Žižek is "not opposed [to] the struggle of LGBTQ people" but is instead critiquing "a phony liberal ideology that set up the terms of the LGBTQ struggle", "a certain utopian postmodern ideology that seeks to eliminate all limits, to eliminate all binaries, to go beyond norms because the imposition of a limit is patriarchal and oppressive." In a 2023 piece for Compact Magazine, Žižek took a hard stance against access to puberty blockers for trans youth, and against trans adults being sent to prisons matching their gender, citing the case of Isla Bryson, whom he referred to as "a person who identifies itself as a woman using its penis to rape two women". Both of these things were attributed by Žižek to wokeness (the wider subject of the article). Other Žižek wrote that the convention center in which nationalist Slovene writers hold their conventions should be blown up, adding, "Since we live in the time without any sense of irony, I must add I don't mean it literally." In 2013, Žižek corresponded with imprisoned Russian activist and Pussy Riot member Nadezhda Tolokonnikova. He criticized Western military interventions in developing countries and wrote that it was the 2011 military intervention in Libya "which threw the country in chaos" and the U.S.-led invasion of Iraq "which created the conditions for the rise" of the Islamic State. Žižek believes that China is the combination of capitalism and authoritarianism in their extreme forms, and the Chinese Communist Party is the best protector of the interests of capitalists. From the Cultural Revolution to Deng's reforms, "Mao himself created the ideological condition for rapid capitalist development by tearing apart the fabric of traditional society." In an opinion article for The Guardian, Žižek argued in favour of giving full support to Ukraine after the Russian invasion and for creating a stronger NATO in response to Russian aggression, later arguing that it would also be a tragedy for Ukraine to yoke itself to western neoliberalism. He compared the struggle of Ukraine against its occupiers to the Palestinians' struggle against the Israeli occupation. In April 2024, Žižek criticized Israel's actions in the Gaza Strip. Criticism and controversy Inconsistency and ambiguity Žižek's philosophical and political positions have been described as ambiguous, and his work has been criticized for a failure to take a consistent stance. 
While he has claimed to stand by a revolutionary Marxist project, his lack of vision concerning the possible circumstances which could lead to successful revolution makes it unclear what that project consists of. According to John Gray and John Holbo, his theoretical argument often lacks grounding in historical fact, which makes him more provocative than insightful. In a very negative review of Žižek's book Less than Nothing, the British political philosopher John Gray attacked Žižek for his celebrations of violence, his failure to ground his theories in historical facts, and his 'formless radicalism' which, according to Gray, professes to be communist yet lacks the conviction that communism could ever be successfully realized. Gray concluded that Žižek's work, though entertaining, is intellectually worthless: "Achieving a deceptive substance by endlessly reiterating an essentially empty vision, Žižek's work amounts in the end to less than nothing." Žižek's refusal to present an alternative vision has led critics to accuse him of using unsustainable Marxist categories of analysis and having a 19th-century understanding of class. For example, post-Marxist Ernesto Laclau argued that "Žižek uses class as a sort of deus ex machina to play the role of the good guy against the multicultural devils." In his book Living in the End Times, Žižek suggests that the criticism of his positions is itself ambiguous and multilateral. Stylistic confusion Žižek has been criticized for his chaotic and non-systematic style: Harpham calls Žižek's style "a stream of nonconsecutive units arranged in arbitrary sequences that solicit a sporadic and discontinuous attention". O'Neill concurs: "a dizzying array of wildly entertaining and often quite maddening rhetorical strategies are deployed in order to beguile, browbeat, dumbfound, dazzle, confuse, mislead, overwhelm, and generally subdue the reader into acceptance." Noam Chomsky deems Žižek guilty of "using fancy terms like polysyllables and pretending you have a theory when you have no theory whatsoever", adding that his views are often too obscure to be communicated usefully to common people. Conservative thinker Roger Scruton has made similar criticisms of his style. Careless scholarship Žižek has been accused of approaching phenomena without rigour, reductively forcing them to support pre-given theoretical notions. For example, Tania Modleski alleges that "in trying to make Hitchcock 'fit' Lacan, he [Žižek] frequently ends up simplifying what goes on in the films". Similarly, Yannis Stavrakakis criticises Žižek's reading of Antigone, claiming it proceeds without regard for either the play itself or the interpretation, given by Lacan in his 7th Seminar, which Žižek claims to follow. According to Stavrakakis, Žižek mistakenly characterises Antigone's act (illegally burying her brother) as politically radical/revolutionary, when in reality "Her act is a one-off and she couldn't care less about what will happen in the polis after her suicide." Noah Horwitz alleges that Žižek (and the Ljubljana School to which Žižek belongs) mistakenly conflates the insights of Lacan and Hegel, and registers concern that such a move "risks transforming Lacanian psychoanalysis into a discourse of self-consciousness rather than a discourse on the psychoanalytic, Freudian unconscious."
Allegations of plagiarism Žižek's tendency to recycle portions of his own texts in subsequent works resulted in the accusation of self-plagiarism by The New York Times in 2014, after Žižek published an op-ed in the newspaper which contained portions of his writing from an earlier book. In response, Žižek expressed perplexity at the harsh tone of the denunciation, emphasizing that the recycled passages in question only acted as references from his theoretical books to supplement otherwise original writing. In July 2014, Newsweek reported that online bloggers led by Steve Sailer had discovered that in an article published in 2006, Žižek plagiarized long passages from an earlier review by Stanley Hornbeck that first appeared in the journal American Renaissance, a publication condemned by the Southern Poverty Law Center as the organ of a "white nationalist hate group". In response to the allegations, Žižek issued a statement. Works Bibliography Filmography References External links Slavoj Žižek on Big Think Slavoj Žižek Faculty Page at European Graduate School Žižek's entry in the Internet Encyclopedia of Philosophy Žižek bibliography at Lacanian Ink magazine Column archive at The Guardian Column archive at Jacobin Wendy Brown, Costas Douzinas, Stephen Frosh, and Zizek at the London Critical Theory Summer School – Friday Debate 2012 Žižek's blog on Substack 1949 births 20th-century atheists 20th-century essayists 20th-century non-fiction writers 20th-century Slovenian philosophers 20th-century Slovenian writers 21st-century atheists 21st-century essayists 21st-century non-fiction writers 21st-century Slovenian philosophers 21st-century Slovenian writers Academic staff of European Graduate School Academic staff of the University of Ljubljana Academics of Birkbeck, University of London Analysands of Jacques-Alain Miller Anti-capitalists Anti-consumerists Anti-Stalinist left Aphorists Atheist philosophers Atheist theologians Writers on atheism Contemporary philosophers Critical theorists Critics of capitalism Criticism of capitalism Critics of Islamism Critics of multiculturalism Critics of political economy Critics of postmodernism Critics of religions Death of God theologians Deleuze scholars Epistemologists Film theorists Freudo-Marxism Hegelian philosophers Jacques Lacan Liberal Democracy of Slovenia politicians Libertarian Marxists Living people Logicians Maoist theorists Marxist theorists Mass media theorists Materialists Media critics Members of the Slovenian Academy of Sciences and Arts Metaphysicians Ontologists Opinion journalists Paris 8 University Vincennes-Saint-Denis alumni Philosophers of art Philosophers of culture Philosophers of education Philosophers of history Philosophers of logic Philosophers of mind Philosophers of nihilism Philosophers of psychology Philosophers of religion Philosophers of social science Philosophers of science Philosophers of technology Philosophy writers Political philosophers Poststructuralists Slovenian anti-fascists Slovenian atheists Slovenian communists Slovenian essayists Slovenian ethicists Slovenian Marxist writers Slovenian Marxists Slovenian non-fiction writers Slovenian political philosophers Slovenian psychoanalysts Slovenian socialists Slovenian sociologists Slovenian theologians Social philosophers Sociologists of art Theorists on Western civilization University of Ljubljana alumni Writers about activism and social change Writers about globalization Writers about religion and science Writers about the Soviet Union Yugoslav dissidents
Slavoj Žižek
[ "Physics" ]
7,295
[ "Materialism", "Matter", "Materialists" ]
166,971
https://en.wikipedia.org/wiki/Edwin%20Thompson%20Jaynes
Edwin Thompson Jaynes (July 5, 1922 – April 30, 1998) was the Wayman Crow Distinguished Professor of Physics at Washington University in St. Louis. He wrote extensively on statistical mechanics and on foundations of probability and statistical inference, initiating in 1957 the maximum entropy interpretation of thermodynamics as being a particular application of more general Bayesian/information theory techniques (although he argued this was already implicit in the works of Josiah Willard Gibbs). Jaynes strongly promoted the interpretation of probability theory as an extension of logic. In 1963, together with his doctoral student Fred Cummings, he modeled the evolution of a two-level atom in an electromagnetic field, in a fully quantized way. This model is known as the Jaynes–Cummings model. A particular focus of his work was the construction of logical principles for assigning prior probability distributions; see the principle of maximum entropy, the principle of maximum caliber, the principle of transformation groups and Laplace's principle of indifference. Other contributions include the mind projection fallacy. Jaynes' book, Probability Theory: The Logic of Science (2003) gathers various threads of modern thinking about Bayesian probability and statistical inference, develops the notion of probability theory as extended logic, and contrasts the advantages of Bayesian techniques with the results of other approaches. This book, which he dedicated to Harold Jeffreys, was published posthumously in 2003 (from an incomplete manuscript that was edited by Larry Bretthorst). Other of his doctoral students included Joseph H. Eberly and Douglas James Scalapino. See also Differential entropy Limiting density of discrete points References External links Edwin Thompson Jaynes. Probability Theory: The Logic of Science. Cambridge University Press, (2003). . Early (1994) version (fragmentary) of Probability Theory: The Logic of Science. Book no longer downloadable for copyright reasons. A comprehensive web page on E. T. Jaynes's life and work. ET Jaynes' obituary at Washington University http://bayes.wustl.edu/etj/articles/entropy.concentration.pdf Jaynes' analysis of Rudolph Wolf's dice data 1922 births 1998 deaths American agnostics 20th-century American physicists American statisticians Washington University in St. Louis mathematicians Washington University in St. Louis physicists Scientists from Missouri 20th-century American mathematicians Statistical physicists Information theorists American probability theorists Cornell College alumni Bayesian statisticians Philosophers of probability
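The following is an illustrative sketch, not part of the article above: it applies the maximum-entropy principle that Jaynes championed to his well-known dice example, picking, among all distributions on the faces 1 to 6 with a prescribed mean of 4.5, the one of largest Shannon entropy. The use of NumPy/SciPy and the specific target mean are my own choices, made purely for illustration.

```python
# Maximum-entropy distribution on a die constrained to have mean 4.5
# (a numerical version of Jaynes' classic example; assumptions noted above).
import numpy as np
from scipy.optimize import minimize

faces = np.arange(1, 7)
target_mean = 4.5

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)            # avoid log(0) at the boundary
    return float(np.sum(p * np.log(p)))   # minimising this maximises entropy

constraints = (
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},                  # sums to 1
    {"type": "eq", "fun": lambda p: np.sum(p * faces) - target_mean},  # fixed mean
)
result = minimize(neg_entropy, x0=np.full(6, 1 / 6), method="SLSQP",
                  bounds=[(0.0, 1.0)] * 6, constraints=constraints)

p = result.x
print(np.round(p, 4))                 # weights increase toward the high faces
print("mean:", float(p @ faces))      # approximately 4.5
```

The numerical maximiser comes out close to the exponential (Gibbs) form, with probabilities proportional to exp(λk) for some multiplier λ, which is the same structure Jaynes exploited in his maximum-entropy reading of statistical mechanics.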
Edwin Thompson Jaynes
[ "Physics" ]
492
[ "Statistical physicists", "Statistical mechanics" ]
166,980
https://en.wikipedia.org/wiki/Incidence%20algebra
In order theory, a field of mathematics, an incidence algebra is an associative algebra, defined for every locally finite partially ordered set and commutative ring with unity. Subalgebras called reduced incidence algebras give a natural construction of various types of generating functions used in combinatorics and number theory. Definition A locally finite poset is one in which every closed interval [a, b] = {x : a ≤ x ≤ b} is finite. The members of the incidence algebra are the functions f assigning to each nonempty interval [a, b] a scalar f(a, b), which is taken from the ring of scalars, a commutative ring with unity. On this underlying set one defines addition and scalar multiplication pointwise, and "multiplication" in the incidence algebra is a convolution defined by (f*g)(a, b) = Σ f(a, x) g(x, b), where the sum runs over all x with a ≤ x ≤ b. An incidence algebra is finite-dimensional if and only if the underlying poset is finite. Related concepts An incidence algebra is analogous to a group algebra; indeed, both the group algebra and the incidence algebra are special cases of a category algebra, defined analogously; groups and posets being special kinds of categories. Upper-triangular matrices Consider the case of a partial order ≤ over an n-element set S. We enumerate S as a1, ..., an in such a way that the enumeration is compatible with the order ≤ on S, that is, ai ≤ aj implies i ≤ j, which is always possible. Then, functions f as above, from intervals to scalars, can be thought of as matrices M, where Mij = f(ai, aj) whenever ai ≤ aj, and Mij = 0 otherwise. Since we arranged S in a way consistent with the usual order on the indices of the matrices, they will appear as upper-triangular matrices with a prescribed zero-pattern determined by the incomparable elements in S under ≤. The incidence algebra of ≤ is then isomorphic to the algebra of upper-triangular matrices with this prescribed zero-pattern and arbitrary (including possibly zero) scalar entries everywhere else, with the operations being ordinary matrix addition, scaling and multiplication. Special elements The multiplicative identity element of the incidence algebra is the delta function, defined by δ(a, b) = 1 if a = b and δ(a, b) = 0 if a < b. The zeta function of an incidence algebra is the constant function ζ(a, b) = 1 for every nonempty interval [a, b]. Multiplying by ζ is analogous to integration. One can show that ζ is invertible in the incidence algebra (with respect to the convolution defined above). (Generally, a member h of the incidence algebra is invertible if and only if h(x, x) is invertible for every x.) The multiplicative inverse of the zeta function is the Möbius function μ(a, b); every value of μ(a, b) is an integral multiple of 1 in the base ring. The Möbius function can also be defined inductively by the following relation: μ(a, a) = 1 for all a, and μ(a, b) = −Σ μ(a, x) for a < b, where the sum runs over all x with a ≤ x < b. Multiplying by μ is analogous to differentiation, and is called Möbius inversion. The square of the zeta function gives the number of elements in an interval: ζ²(a, b) = |[a, b]|, the number of elements x with a ≤ x ≤ b. Examples Positive integers ordered by divisibility The convolution associated to the incidence algebra for intervals [1, n] becomes the Dirichlet convolution, hence the Möbius function is μ(a, b) = μ(b/a), where the second "μ" is the classical Möbius function introduced into number theory in the 19th century. Finite subsets of some set E, ordered by inclusion The Möbius function is μ(S, T) = (−1)^(|T \ S|) whenever S and T are finite subsets of E with S ⊆ T, and Möbius inversion is called the principle of inclusion-exclusion. Geometrically, this is a hypercube: the Hasse diagram of the poset of subsets of an n-element set is the graph of an n-dimensional hypercube. Natural numbers with their usual order The Möbius function is μ(x, y) = 1 if y = x, μ(x, y) = −1 if y = x + 1, and μ(x, y) = 0 otherwise, and Möbius inversion is called the (backwards) difference operator.
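As a concrete illustration of the definitions above (this sketch is mine, not part of the article): the divisors of 12 ordered by divisibility form a small locally finite poset, and the convolution, delta, zeta and Möbius functions can be written out directly. The choice of poset and of Python are assumptions made purely for illustration.

```python
# Incidence algebra of the divisors of 12 under divisibility: convolution,
# delta, zeta, and the inductively defined Mobius function.
from itertools import product

elements = [1, 2, 3, 4, 6, 12]
leq = lambda a, b: b % a == 0            # the partial order: a divides b

def interval(a, b):
    """All x with a <= x <= b (assumes leq(a, b))."""
    return [x for x in elements if leq(a, x) and leq(x, b)]

def convolve(f, g):
    """(f * g)(a, b) = sum over a <= x <= b of f(a, x) * g(x, b)."""
    return lambda a, b: sum(f(a, x) * g(x, b) for x in interval(a, b))

delta = lambda a, b: 1 if a == b else 0   # multiplicative identity
zeta  = lambda a, b: 1                    # constant 1 on intervals

def mobius(a, b):
    """Inductive definition: mu(a,a) = 1, mu(a,b) = -sum_{a <= x < b} mu(a,x)."""
    if a == b:
        return 1
    return -sum(mobius(a, x) for x in interval(a, b) if x != b)

# mu is the convolution inverse of zeta: mu * zeta = delta on every interval.
mz = convolve(mobius, zeta)
assert all(mz(a, b) == delta(a, b)
           for a, b in product(elements, repeat=2) if leq(a, b))

# zeta^2 counts the elements of an interval, e.g. [1, 12] has 6 divisors.
assert convolve(zeta, zeta)(1, 12) == len(interval(1, 12)) == 6

print([mobius(1, n) for n in elements])   # [1, -1, -1, 0, 1, 0]
```

The printed values agree with the classical number-theoretic Möbius function μ(n), as the article notes for the divisibility order.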
Geometrically, this corresponds to the discrete number line. The convolution of functions in the incidence algebra corresponds to multiplication of formal power series: see the discussion of reduced incidence algebras below. The Möbius function corresponds to the sequence (1, −1, 0, 0, 0, ... ) of coefficients of the formal power series 1 − t, and the zeta function corresponds to the sequence of coefficients (1, 1, 1, 1, ...) of the formal power series 1/(1 − t) = 1 + t + t² + ⋯, which is its inverse. The delta function in this incidence algebra similarly corresponds to the formal power series 1. Finite sub-multisets of some multiset E, ordered by inclusion The above three examples can be unified and generalized by considering a multiset E, and finite sub-multisets S and T of E. The Möbius function is μ(S, T) = (−1)^(|T \ S|) if T \ S is a set (that is, contains no repeated elements), and 0 otherwise. This generalizes the positive integers ordered by divisibility by a positive integer corresponding to its multiset of prime factors with multiplicity, e.g., 12 corresponds to the multiset {2, 2, 3}. This generalizes the natural numbers with their usual order by a natural number corresponding to a multiset of one underlying element and cardinality equal to that number, e.g., 3 corresponds to the multiset {x, x, x}. Subgroups of a finite p-group G, ordered by inclusion The Möbius function is μ(G1, G2) = (−1)^k p^(k(k−1)/2) if G1 is a normal subgroup of G2 and the quotient G2/G1 is isomorphic to (Z/pZ)^k, and it is 0 otherwise. This is a theorem of Weisner (1935). Partitions of a set Partially order the set of all partitions of a finite set by saying σ ≤ τ if σ is a finer partition than τ. In particular, let τ have t blocks which respectively split into s1, ..., st finer blocks of σ, which has a total of s = s1 + ⋅⋅⋅ + st blocks. Then the Möbius function is: μ(σ, τ) = (−1)^(s − t) (s1 − 1)! ⋅⋅⋅ (st − 1)!. Euler characteristic A poset is bounded if it has smallest and largest elements, which we call 0 and 1 respectively (not to be confused with the 0 and 1 of the ring of scalars). The Euler characteristic of a bounded finite poset is μ(0,1). The reason for this terminology is the following: If P has a 0 and 1, then μ(0,1) is the reduced Euler characteristic of the simplicial complex whose faces are chains in P \ {0, 1}. This can be shown using Philip Hall's theorem, relating the value of μ(0,1) to the numbers of chains of each length i. Reduced incidence algebras The reduced incidence algebra consists of functions which assign the same value to any two intervals which are equivalent in an appropriate sense, usually meaning isomorphic as posets. This is a subalgebra of the incidence algebra, and it clearly contains the incidence algebra's identity element and zeta function. Any element of the reduced incidence algebra that is invertible in the larger incidence algebra has its inverse in the reduced incidence algebra. Thus the Möbius function is also in the reduced incidence algebra. Reduced incidence algebras were introduced by Doubilet, Rota, and Stanley to give a natural construction of various rings of generating functions. Natural numbers and ordinary generating functions For the poset (ℕ, ≤) the reduced incidence algebra consists of functions f invariant under translation, f(a + k, b + k) = f(a, b) for all k ≥ 0, so as to have the same value on isomorphic intervals [a+k, b+k] and [a, b]. Let t denote the function with t(a, a+1) = 1 and t(a, b) = 0 otherwise, a kind of invariant delta function on isomorphism classes of intervals. Its powers in the incidence algebra are the other invariant delta functions t^n(a, a+n) = 1 and t^n(x, y) = 0 otherwise. These form a basis for the reduced incidence algebra, and we may write any invariant function as f = Σ f(0, n) t^n, summing over n ≥ 0.
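A small numerical check of the reduced-incidence-algebra picture just described for (ℕ, ≤), again my own sketch rather than the article's: translation-invariant functions are stored by their values on the intervals [0, n], convolution becomes the Cauchy product of coefficient sequences, and Möbius inversion becomes the backward difference operator.

```python
# Reduced incidence algebra of (N, <=): an invariant function f is stored as
# the list F with F[n] = f(a, a+n); convolution is then the Cauchy product.
def convolve_invariant(F, G, upto):
    """(f * g)(a, a+n) = sum_{k=0..n} F[k] * G[n-k]."""
    return [sum(F[k] * G[n - k] for k in range(n + 1)) for n in range(upto)]

N = 8
zeta  = [1] * N                      # zeta(a, b) = 1, i.e. the series 1/(1 - t)
mob   = [1, -1] + [0] * (N - 2)      # Mobius: the series 1 - t
delta = [1] + [0] * (N - 1)          # identity: the series 1

# mu * zeta = delta, mirroring (1 - t) * 1/(1 - t) = 1.
assert convolve_invariant(mob, zeta, N) == delta

# zeta * zeta counts interval sizes: [a, a+n] has n + 1 elements.
assert convolve_invariant(zeta, zeta, N) == [n + 1 for n in range(N)]

# Mobius inversion is the backward difference: if g = f * zeta is the sequence
# of partial sums of f, then f = g * mu recovers the original values.
F = [3, 1, 4, 1, 5, 9, 2, 6]
G = convolve_invariant(F, zeta, N)           # partial sums of F
assert convolve_invariant(G, mob, N) == F    # differences recover F
```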
This notation makes clear the isomorphism between the reduced incidence algebra and the ring of formal power series over the scalars R, also known as the ring of ordinary generating functions. We may write the zeta function as ζ = 1 + t + t² + ⋯ = 1/(1 − t), the reciprocal of the Möbius function μ = 1 − t. Subset poset and exponential generating functions For the Boolean poset of finite subsets ordered by inclusion, the reduced incidence algebra consists of invariant functions defined to have the same value on isomorphic intervals [S, T] and [S′, T′] with |T \ S| = |T′ \ S′|. Again, let t denote the invariant delta function with t(S,T) = 1 for |T \ S| = 1 and t(S,T) = 0 otherwise. Its powers are t^n(S, T) = n! if |T \ S| = n, and 0 otherwise: expanding the convolution, the sum is over all chains S = S0 ⊆ S1 ⊆ ⋅⋅⋅ ⊆ Sn = T, and the only non-zero terms occur for saturated chains with |Si+1 \ Si| = 1; since these correspond to orderings of the n elements of T \ S, we get the unique non-zero value n!. Thus, the invariant delta functions are the divided powers t^n/n!, and we may write any invariant function as f = Σ f(∅, [n]) t^n/n!, where [n] = {1, . . . , n}. This gives a natural isomorphism between the reduced incidence algebra and the ring of exponential generating functions. The zeta function is ζ = Σ t^n/n! = exp(t), with Möbius function: μ = exp(−t) = Σ (−1)^n t^n/n!. Indeed, this computation with formal power series proves that μ(S, T) = (−1)^(|T \ S|). Many combinatorial counting sequences involving subsets or labeled objects can be interpreted in terms of the reduced incidence algebra, and computed using exponential generating functions. Divisor poset and Dirichlet series Consider the poset D of positive integers ordered by divisibility, denoted by a | b. The reduced incidence algebra consists of functions that are invariant under multiplication: f(ka, kb) = f(a, b) for all k > 0. (This multiplicative equivalence of intervals is a much stronger relation than poset isomorphism; e.g., for primes p, the two-element intervals [1,p] are all inequivalent.) For an invariant function, f(a,b) depends only on b/a, so a natural basis consists of invariant delta functions δn defined by δn(a, b) = 1 if b/a = n and 0 otherwise; then any invariant function can be written f = Σ f(1, n) δn, summing over n ≥ 1. The product of two invariant delta functions is (δn δm)(a, b) = δnm(a, b), since the only non-zero term comes from c = na and b = mc = nma. Thus, we get an isomorphism from the reduced incidence algebra to the ring of formal Dirichlet series by sending δn to n^(−s), so that f corresponds to the Dirichlet series Σ f(1, n) n^(−s). The incidence algebra zeta function ζD(a,b) = 1 corresponds to the classical Riemann zeta function ζ(s) = Σ n^(−s), having reciprocal 1/ζ(s) = Σ μ(n) n^(−s), where μ(n) is the classical Möbius function of number theory. Many other arithmetic functions arise naturally within the reduced incidence algebra, and equivalently in terms of Dirichlet series. For example, the divisor function is the square of the zeta function, a special case of the above result that ζ² gives the number of elements in the interval [x,y]; equivalently, ζ(s)² = Σ d(n) n^(−s), where d(n) is the number of divisors of n. The product structure of the divisor poset facilitates the computation of its Möbius function. Unique factorization into primes implies D is isomorphic to an infinite Cartesian product ℕ × ℕ × ⋅⋅⋅, with the order given by coordinatewise comparison: n = p1^(e1) p2^(e2) ⋅⋅⋅, where pk is the kth prime, corresponds to its sequence of exponents (e1, e2, ...). Now the Möbius function of D is the product of the Möbius functions for the factor posets, computed above, giving the classical formula: μ(n) = (−1)^k if n is a product of k distinct primes, and μ(n) = 0 otherwise. The product structure also explains the classical Euler product for the zeta function. The zeta function of D corresponds to a Cartesian product of zeta functions of the factors, computed above as 1/(1 − t), so that ζD corresponds to a product of factors 1/(1 − tk), one for each prime pk, where the right side is a Cartesian product. Applying the isomorphism which sends t in the kth factor to pk^(−s), we obtain the usual Euler product ζ(s) = Π (1 − pk^(−s))^(−1).
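The divisor-poset case can be checked numerically in the same spirit (an illustrative sketch with a small cutoff, not part of the article): convolution of multiplication-invariant functions is Dirichlet convolution, the classical Möbius function inverts the zeta function, and ζ·ζ counts divisors.

```python
# Dirichlet convolution as the reduced incidence algebra of divisibility,
# with functions stored as 1-indexed lists up to a cutoff N (index 0 unused).
N = 60

def dirichlet(F, G):
    """(F * G)(n) = sum over d | n of F(d) * G(n // d)."""
    return [None] + [sum(F[d] * G[n // d] for d in range(1, n + 1) if n % d == 0)
                     for n in range(1, N + 1)]

zeta  = [None] + [1] * N
delta = [None] + [1] + [0] * (N - 1)

def mobius(n):
    """Classical Mobius function via trial-division factorization."""
    m, k = n, 0
    for p in range(2, int(n ** 0.5) + 1):
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0          # square factor
            k += 1
    if m > 1:
        k += 1
    return (-1) ** k

mob = [None] + [mobius(n) for n in range(1, N + 1)]

assert dirichlet(mob, zeta) == delta      # mu * zeta = delta
assert dirichlet(zeta, zeta)[12] == 6     # zeta^2 at 12 = number of divisors of 12
assert dirichlet(zeta, zeta)[1:] == [sum(1 for d in range(1, n + 1) if n % d == 0)
                                     for n in range(1, N + 1)]
```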
See also Graph algebra Incidence coalgebra Path algebra Literature Incidence algebras of locally finite posets were treated in a number of papers of Gian-Carlo Rota beginning in 1964, and by many later combinatorialists. Rota's 1964 paper was: N. Jacobson, Basic Algebra. I, W. H. Freeman and Co., 1974. See section 8.6 for a treatment of Mobius functions on posets Further reading Algebraic combinatorics Order theory
Incidence algebra
[ "Mathematics" ]
2,416
[ "Fields of abstract algebra", "Algebraic combinatorics", "Order theory", "Combinatorics" ]
167,001
https://en.wikipedia.org/wiki/Concrete%20category
In mathematics, a concrete category is a category that is equipped with a faithful functor to the category of sets (or sometimes to another category). This functor makes it possible to think of the objects of the category as sets with additional structure, and of its morphisms as structure-preserving functions. Many important categories have obvious interpretations as concrete categories, for example the category of topological spaces and the category of groups, and trivially also the category of sets itself. On the other hand, the homotopy category of topological spaces is not concretizable, i.e. it does not admit a faithful functor to the category of sets. A concrete category, when defined without reference to the notion of a category, consists of a class of objects, each equipped with an underlying set; and for any two objects A and B a set of functions, called homomorphisms, from the underlying set of A to the underlying set of B. Furthermore, for every object A, the identity function on the underlying set of A must be a homomorphism from A to A, and the composition of a homomorphism from A to B followed by a homomorphism from B to C must be a homomorphism from A to C. Definition A concrete category is a pair (C,U) such that C is a category, and U : C → Set (the category of sets and functions) is a faithful functor. The functor U is to be thought of as a forgetful functor, which assigns to every object of C its "underlying set", and to every morphism in C its "underlying function". It is customary to call the morphisms in a concrete category homomorphisms (e.g., group homomorphisms, ring homomorphisms, etc.) Because of the faithfulness of the functor U, the homomorphisms of a concrete category may be formally identified with their underlying functions (i.e., their images under U); the homomorphisms then regain the usual interpretation as "structure-preserving" functions. A category C is concretizable if there exists a concrete category (C,U); i.e., if there exists a faithful functor U: C → Set. All small categories are concretizable: define U so that its object part maps each object b of C to the set of all morphisms of C whose codomain is b (i.e. all morphisms of the form f: a → b for any object a of C), and its morphism part maps each morphism g: b → c of C to the function U(g): U(b) → U(c) which maps each member f: a → b of U(b) to the composition gf: a → c, a member of U(c). (Item 6 under Further examples expresses the same U in less elementary language via presheaves.) The Counter-examples section exhibits two large categories that are not concretizable. Remarks Contrary to intuition, concreteness is not a property that a category may or may not satisfy, but rather a structure with which a category may or may not be equipped. In particular, a category C may admit several faithful functors into Set. Hence there may be several concrete categories (C, U) all corresponding to the same category C. In practice, however, the choice of faithful functor is often clear and in this case we simply speak of the "concrete category C". For example, "the concrete category Set" means the pair (Set, I) where I denotes the identity functor Set → Set. The requirement that U be faithful means that it maps different morphisms between the same objects to different functions. However, U may map different objects to the same set and, if this occurs, it will also map different morphisms to the same function. 
For example, if S and T are two different topologies on the same set X, then (X, S) and (X, T) are distinct objects in the category Top of topological spaces and continuous maps, but mapped to the same set X by the forgetful functor Top → Set. Moreover, the identity morphism (X, S) → (X, S) and the identity morphism (X, T) → (X, T) are considered distinct morphisms in Top, but they have the same underlying function, namely the identity function on X. Similarly, any set with four elements can be given two non-isomorphic group structures: one isomorphic to the cyclic group Z/4Z, and the other isomorphic to the Klein four-group Z/2Z × Z/2Z.

Further examples
Any group G may be regarded as an "abstract" category with one arbitrary object, ∗, and one morphism for each element of the group. This would not be counted as concrete according to the intuitive notion described at the top of this article. But every faithful G-set (equivalently, every representation of G as a group of permutations) determines a faithful functor G → Set. Since every group acts faithfully on itself, G can be made into a concrete category in at least one way.
Similarly, any poset P may be regarded as an abstract category with a unique arrow x → y whenever x ≤ y. This can be made concrete by defining a functor D : P → Set which maps each object x to its down-set D(x) = {a ∈ P : a ≤ x} and each arrow x → y to the inclusion map D(x) → D(y).
The category Rel whose objects are sets and whose morphisms are relations can be made concrete by taking U to map each set X to its power set P(X) and each relation R ⊆ X × Y to the function U(R) : P(X) → P(Y) defined by U(R)(A) = {y ∈ Y : xRy for some x ∈ A}. Noting that power sets are complete lattices under inclusion, those functions between them arising from some relation R in this way are exactly the supremum-preserving maps. Hence Rel is equivalent to a full subcategory of the category Sup of complete lattices and their sup-preserving maps. Conversely, starting from this equivalence we can recover U as the composite Rel → Sup → Set of the forgetful functor for Sup with this embedding of Rel in Sup.
The category Setop can be embedded into Rel by representing each set as itself and each function f: X → Y as the relation from Y to X formed as the set of pairs (f(x), x) for all x ∈ X; hence Setop is concretizable. The forgetful functor which arises in this way is the contravariant powerset functor Setop → Set.
It follows from the previous example that the opposite of any concretizable category C is again concretizable, since if U is a faithful functor C → Set then Cop may be equipped with the composite Cop → Setop → Set.
If C is any small category, then there exists a faithful functor P : SetCop → Set which maps a presheaf X to the coproduct of its values, that is, the disjoint union of the sets X(c) over all objects c of C. By composing this with the Yoneda embedding Y:C → SetCop one obtains a faithful functor C → Set.
For technical reasons, the category Ban1 of Banach spaces and linear contractions is often equipped not with the "obvious" forgetful functor but the functor U1 : Ban1 → Set which maps a Banach space to its (closed) unit ball.
The category Cat whose objects are small categories and whose morphisms are functors can be made concrete by sending each category C to the set containing its objects and morphisms. Functors can be simply viewed as functions acting on the objects and morphisms.

Counter-examples
The category hTop, where the objects are topological spaces and the morphisms are homotopy classes of continuous functions, is an example of a category that is not concretizable. While the objects are sets (with additional structure), the morphisms are not actual functions between them, but rather classes of functions.
The fact that there does not exist any faithful functor from hTop to Set was first proven by Peter Freyd. In the same article, Freyd cites an earlier result that the category of "small categories and natural equivalence-classes of functors" also fails to be concretizable. Implicit structure of concrete categories Given a concrete category (C, U) and a cardinal number N, let UN be the functor C → Set determined by UN(c) = (U(c))N. Then a subfunctor of UN is called an N-ary predicate and a natural transformation UN → U an N-ary operation. The class of all N-ary predicates and N-ary operations of a concrete category (C,U), with N ranging over the class of all cardinal numbers, forms a large signature. The category of models for this signature then contains a full subcategory which is equivalent to C. Relative concreteness In some parts of category theory, most notably topos theory, it is common to replace the category Set with a different category X, often called a base category. For this reason, it makes sense to call a pair (C, U) where C is a category and U a faithful functor C → X a concrete category over X. For example, it may be useful to think of the models of a theory with N sorts as forming a concrete category over SetN. In this context, a concrete category over Set is sometimes called a construct. Notes References Adámek, Jiří, Herrlich, Horst, & Strecker, George E.; (1990). Abstract and Concrete Categories (4.2MB PDF). Originally publ. John Wiley & Sons. . (now free on-line edition). Freyd, Peter; (1970). Homotopy is not concrete. Originally published in: The Steenrod Algebra and its Applications, Springer Lecture Notes in Mathematics Vol. 168. Republished in a free on-line journal: Reprints in Theory and Applications of Categories, No. 6 (2004), with the permission of Springer-Verlag. Rosický, Jiří; (1981). Concrete categories and infinitary languages. Journal of Pure and Applied Algebra, Volume 22, Issue 3. Category theory
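To make the construction in the Definition section above more tangible (the one showing that every small category is concretizable by sending each object b to the set of morphisms with codomain b, and each morphism to post-composition with it), here is a minimal Python sketch on a toy category with two parallel arrows. It is an illustration only; the data layout and the names U_object, U_morphism, f and g are invented for this example and assume a finite, explicitly listed category.

```python
# A toy small category: objects 'a', 'b'; non-identity arrows f, g : a -> b.
# Each morphism is a triple (name, source, target); composition is a small lookup.
objects = ["a", "b"]
identity = {"a": ("id_a", "a", "a"), "b": ("id_b", "b", "b")}
morphisms = [identity["a"], identity["b"], ("f", "a", "b"), ("g", "a", "b")]

def compose(g, f):
    """Composite g after f; defined only when cod(f) = dom(g).
    In this toy category every composite involves at least one identity."""
    assert f[2] == g[1], "morphisms not composable"
    if f[0].startswith("id_"):
        return g
    if g[0].startswith("id_"):
        return f
    raise ValueError("no non-trivial composites exist in this toy category")

# The faithful functor U : C -> Set from the article's construction:
# U(b) is the set of all morphisms of C with codomain b, and U(g) acts by
# post-composition with g.
def U_object(b):
    return {m for m in morphisms if m[2] == b}

def U_morphism(g):
    return {m: compose(g, m) for m in U_object(g[1])}

if __name__ == "__main__":
    print(U_object("a"))           # {('id_a', 'a', 'a')}
    print(U_object("b"))           # the identity on b together with f and g
    # Faithfulness on this example: the parallel arrows f and g induce
    # different functions U(a) -> U(b).
    f, g = morphisms[2], morphisms[3]
    assert U_morphism(f) != U_morphism(g)
    print("U separates the parallel morphisms f and g")
```

On this example U is faithful because post-composition with f and with g send the identity of a to different elements; the general argument works the same way, using identity morphisms to separate parallel arrows.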
Concrete category
[ "Mathematics" ]
2,127
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Fields of abstract algebra", "Mathematical relations", "Category theory" ]
167,008
https://en.wikipedia.org/wiki/Incidence%20%28epidemiology%29
In epidemiology, incidence reflects the number of new cases of a given medical condition in a population within a specified period of time.

Incidence proportion
Incidence proportion (IP), also known as cumulative incidence, is defined as the probability that a particular event, such as occurrence of a particular disease, has occurred in a specified period:
incidence proportion = (number of new cases during the period) / (number of persons at risk at the start of the period).
For example, if a population contains 1,000 persons and 28 develop a condition from the time the disease first occurred until two years later, the cumulative incidence is 28 cases per 1,000 persons, i.e. 2.8%.

Incidence rate
The incidence rate can be calculated by dividing the number of subjects developing a disease by the total time at risk from all patients:
incidence rate = (number of new cases) / (total person-time at risk).
One of the important advantages of incidence rate is that it doesn't require all subjects to be present for the whole study because it's only interested in the time at risk.

Incidence vs. prevalence
Incidence should not be confused with prevalence, which is the proportion of cases in the population at a given time rather than rate of occurrence of new cases. Thus, incidence conveys information about the risk of contracting the disease, whereas prevalence indicates how widespread the disease is. Prevalence is the proportion of the total number of cases to the total population and is more a measure of the burden of the disease on society with no regard to time at risk or when subjects may have been exposed to a possible risk factor. Prevalence can also be measured with respect to a specific subgroup of a population. Incidence is usually more useful than prevalence in understanding the disease etiology: for example, if the incidence rate of a disease in a population increases, then there is a risk factor that promotes the incidence. For example, consider a disease that takes a long time to cure and was widespread in 2002 but dissipated in 2003. This disease will have both high incidence and high prevalence in 2002, but in 2003 it will have a low incidence yet will continue to have a high prevalence (because it takes a long time to cure, so the fraction of individuals that are affected remains high). In contrast, a disease that has a short duration may have a low prevalence and a high incidence. When the incidence is approximately constant for the duration of the disease, prevalence is approximately the product of disease incidence and average disease duration, so prevalence = incidence × duration. The importance of this equation is in the relation between prevalence and incidence; for example, when the incidence increases, then the prevalence must also increase. Note that this relation does not hold for age-specific prevalence and incidence, where the relation becomes more complicated.

Example
Consider the following example. Say you are looking at a sample population of 225 people, and want to determine the incidence rate of developing HIV over a 10-year period: At the beginning of the study (t=0) you find 25 cases of existing HIV. These people are not counted as they cannot develop HIV a second time. A follow-up at 5 years (t=5 years) finds 20 new cases of HIV. A second follow-up at the end of the study (t=10 years) finds 30 new cases. If you were to measure prevalence you would simply take the total number of cases (25 + 20 + 30 = 75) and divide by your sample population (225). So prevalence would be 75/225 = 0.33 or 33% (by the end of the study).
This tells you how widespread HIV is in your sample population, but little about the actual risk of developing HIV for any person over a coming year. To measure incidence rate you must take into account how many years each person contributed to the study, and when they developed HIV because when a subject develops HIV he stops being at risk. When it is not known exactly when a person develops the disease in question, epidemiologists frequently use the actuarial method, and assume it was developed at a half-way point between follow-ups. In this calculation: At 5 yrs you found 20 new cases, so you assume they developed HIV at 2.5 years, thus contributing (20 * 2.5) = 50 person-years of disease-free life. At 10 years you found 30 new cases. These people did not have HIV at 5 years, but did at 10, so you assume they were infected at 7.5 years, thus contributing (30 * 7.5) = 225 person-years of disease-free life. That is a total of (225 + 50) = 275 person years so far. You also want to account for the 150 people who never had or developed HIV over the 10-year period, (150 * 10) contributing 1500 person-years of disease-free life. That is a total of (1500 + 275) = 1775 person-years of life. Now take the 50 new cases of HIV, and divide by 1775 to get 0.028, or 28 cases of HIV per 1000 population, per year. In other words, if you were to follow 1000 people for one year, you would see 28 new cases of HIV. This is a much more accurate measure of risk than prevalence. See also Attack rate Attributable risk Rate ratio References External links Calculation of standardized incidence rate PAMCOMP Person-Years Analysis and Computation Programme for calculating standardized incidence rates (SIRs) Epidemiology Medical statistics Articles containing video clips Hygiene
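The worked HIV example above can be reproduced with a few lines of code. The sketch below is illustrative rather than canonical: it encodes the actuarial (midpoint) assumption described in the article, and the function name incidence_rate and its argument names are made up for this example.

```python
def incidence_rate(new_cases_by_followup, never_cases, study_years):
    """Person-time incidence rate using the actuarial (midpoint) assumption:
    each new case is assumed to occur halfway between the follow-up visit at
    which it was detected and the previous visit (or the study start).

    new_cases_by_followup: list of (followup_time_in_years, new_case_count)
    never_cases: number of subjects who never developed the condition
    study_years: total length of the study in years
    """
    person_years = never_cases * study_years
    total_new_cases = 0
    previous_time = 0.0
    for followup_time, cases in new_cases_by_followup:
        midpoint = (previous_time + followup_time) / 2.0
        person_years += cases * midpoint      # disease-free time contributed
        total_new_cases += cases
        previous_time = followup_time
    return total_new_cases / person_years, person_years

if __name__ == "__main__":
    # The article's example: 225 subjects, 25 prevalent cases excluded,
    # 20 new cases found at year 5, 30 new cases found at year 10,
    # and 150 subjects who never developed HIV over the 10-year study.
    rate, py = incidence_rate([(5, 20), (10, 30)], never_cases=150, study_years=10)
    print(f"{py:.0f} person-years at risk")                    # 1775 person-years
    print(f"{rate * 1000:.0f} cases per 1000 person-years")    # about 28
```

Running it prints 1775 person-years and roughly 28 cases per 1000 person-years, matching the figures in the article's example.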
Incidence (epidemiology)
[ "Environmental_science" ]
1,086
[ "Epidemiology", "Environmental social science" ]
167,052
https://en.wikipedia.org/wiki/ISO%20639
ISO 639 is a standard by the International Organization for Standardization (ISO) concerned with representation of languages and language groups. It currently consists of four sets (1-3, 5) of code, named after each part which formerly described respective set (part 4 was guidelines without its own coding system); a part 6 was published but withdrawn. It was first approved in 1967 as a single-part ISO Recommendation, ISO/R 639, superseded in 2002 by part 1 of the new series, ISO 639-1, followed by additional parts. All existing parts of the series were consolidated into a single standard in 2023, largely based on the text of ISO 639-4. Use of ISO 639 codes The language codes defined in the several sections of ISO 639 are used for bibliographic purposes and, in computing and internet environments, as a key element of locale data. The codes also find use in various applications, such as Wikipedia URLs for its different language editions. History The early form of ISO's language coding system was manifested by ISO/R 639:1967 titled Symbols for Languages, Countries and Authorities, which aimed chiefly to regulate vocabularies signifying languages, countries, and standardization agencies of ISO member bodies. Its "language symbols" consisted of one- or two-letter variable-length identifiers in capitalized Latin alphabets, e.g. E or En for English; S, Sp, or Es for Spanish; and In for Indonesian. It was also allowed to use (the pre-1993 version of) UDC numeral auxiliaries to indicate languages. After decoupling the country code into ISO 3166 in 1974, the first edition of the standard ISO 639:1988 Code for the representation of names of languages was published with a framework of uniformly two-letter identifiers in lowercase Latin alphabets, mostly identical in format and vocabulary to that of the current ISO 639 Set 1. Since then, the standard has been adopted as a fundamental technology of the rapidly expanding computer industry (RFC 1766), leading to development of more expressive three-letter framework, published as ISO 639-2:1998, largely based on MARC codes for languages. The original two-letter system was redefined as ISO 639-1 in 2001. Seeking for more extensive support of languages for widening applications, separate supersets of the ISO 639-2 namespace that cover individual languages and groups were established as ISO 639-3 and ISO 639-5, respectively. There was also an attempt to code more precise language variants using four-letter identifiers as ISO 639-6, which was later withdrawn and to be reorganized under another framework, ISO 21636. Relatively constant updates in parts of ISO 639 had been handled by each own authority in charge until the publication of ISO 639:2023, which harmonized and reunified the body text of former standards and brought about organizational change with a joint maintenance agency supervising all sets and issuing newsletters. The maintenance agency is located in Ontario, Canada. Current sets and historical parts of the standard Each set of the standard is maintained by a maintenance agency, which adds codes and changes the status of codes when needed. ISO 639-6 was withdrawn in 2014, and not included in ISO 639:2023. Characteristics of individual codes Scopes: Individual languages Macrolanguages (Set 3) Collections of languages (Sets 1, 2, 5). 
Some collections were already in Set 2, and others were added only in Set 5: Remainder groups: 36 collections in both Set 2 and 5 are of this kind — for compatibility with Set 2 when Set 5 was still not published, the remainder groups do not contain any language and collection that was already coded in Set 2 (however new applications compatible with Set 5 may treat these groups inclusively, as long they respect the containment hierarchy published in Set 5 and they use the most specific collection when grouping languages); The only collection which previously assigned with two-alphabet code is Bihari (bh) during the Part 1 era, which deprecated in June 2021. Regular groups: 29 collections in both Sets 2 and 5 are of this kind — for compatibility with Set 2, they can not contain other groups; Families: 50 new collections coded only in Set 5 (including one containing a regular group already coded in Set 2) — for compatibility with Set 2, they may contain other collections except remainder groups. Dialects: they were intended to be covered by former ISO 639-6 (proposed but now withdrawn). Special situations (Sets 2, 3). Reserved for local use (Sets 2, 3). Also used sometimes in applications needing a two-letter code like standard codes in Sets 1 and 2 (where the special code mis is not suitable), or a three-letter code for collections like standard codes in Set 5. Types (for individual languages): Living languages (Sets 2, 3) (except Sanskrit, all other macrolanguages are living languages) Extinct languages (Sets 2, 3) (599, 5 of them are in Set 2: chb, chg, cop, lui, sam; none are in Set 1) Ancient languages (Sets 1, 2, 3) (124, 19 of them are in Set 2; and 5 of them, namely ave, chu, lat, pli and san, also have a code in Set 1: ae, cu, la, pi, sa) Historical languages (Sets 2, 3) (89, 16 of them are in Set 2; none are in Set 1) Constructed languages (Sets 1, 2, 3) (23, 9 of them in Set 2: afh, epo, ido, ile, ina, jbo, tlh, vol, zbl; 5 of them in Set 1: eo, ia, ie, io, vo) Individual languages and macrolanguages with two distinct three-letter codes in Set 2: Bibliographic (some of them were deprecated, none were defined in Set 3): these are legacy codes (based on language names in English). Terminologic (also defined in Set 3): these are the preferred codes (based on native language names, romanized if needed). All others (including collections of languages and special/reserved codes) only have a single three-letter code for both uses. Relations between the sets The different sets of ISO 639 are designed to work together, in such a way that no code means one thing in one set and something else in another. However, not all languages are in all sets, and there is a variety of different ways that specific languages and other elements are treated in the different sets. This depends, for example, whether a language is listed in Sets 1 or 2, whether it has separate B/T codes in Set 2, or is classified as a macrolanguage in Set 3, and so forth. These various treatments are detailed in the following chart. In each group of rows (one for each scope of Set 3), the last four columns contain codes for a representative language that exemplifies a specific type of relation between the sets of ISO 639, the second column provides an explanation of the relationship, and the first column indicates the number of elements that have that type of relationship. For example, there are four elements that have a code in Set 1, have a B/T code, and are classified as macrolanguages in Set 3. 
One representative of these four elements is "Persian" fa/per/fas. These differences are due to the following factors. In ISO 639 Set 2, two distinct codes were assigned to 22 individual languages, namely a bibliographic and a terminology code (B/T codes). B codes were included for historical reasons because previous widely used bibliographic systems used language codes based on the English name for the language. In contrast, the Set 1 codes were based on the native name for the language, and there was also a strong desire to have Set 2 codes (T codes) for these languages which were similar to the corresponding 2-character code in Set 1. For instance, the German language (Set 1: de) has two codes in Set 2: ger (B code) and deu (T code), whereas there is only one code in Set 2, eng, for the English language. 2 former B codes were withdrawn, leaving today only 20 pairs of B/T codes. Individual languages in Set 2 always have a code in Set 3 (only the Set 2 terminology code is reused there) but may or may not have a code in Set 1, as illustrated by the following examples: Set 3 eng corresponds to Set 2 eng and Set 1 en Set 3 ast corresponds to Set 2 ast but lacks a code in Set 1. Some codes (62) in Set 3 are macrolanguages. These are groups containing multiple individual languages that have a good mutual understanding and are commonly mixed or confused. Some macrolanguages developed a default standard form on one of their individual languages (e.g. Mandarin is implied by default for the Chinese macrolanguage, other individual languages may be still distinguished if needed but the specific code cmn for Mandarin is rarely used). 1 macrolanguage has a Set 2 code and a Set 1 code, while its member individual languages also have codes in Set 1 and Set 2: nor/no contains non/nn, nob/nb; or 4 macrolanguages have two Set 2 codes (B/T) and a Set 1 code: per/fas/fa, may/msa/ms, alb/sqi/sq, and chi/zho/zh; 28 macrolanguages have a Set 2 code but no Set 1 code; 29 other macrolanguages only have codes in Set 3. Collective codes in Set 2 have a code in Set 5: e.g. aus in Sets 2 and 5, which stands for Australian languages. Some codes were added in Set 5 but had no code in Set 2: e.g. sqj Sets 2 and 3 also have a reserved range and four special codes: Codes qaa through qtz are reserved for local use. There are four special codes: mis for languages that have no code yet assigned, mul for "multiple languages", und for "undefined", and zxx for "no linguistic content, not applicable". Code space Two-letter code space Two-letter (formerly "Alpha-2") identifiers (for codes composed of 2 letters of the ISO basic Latin alphabet) are used in Set 1. When codes for a wider range of languages were desired, more than 2 letter combinations could cover (a maximum of 262 = 676), Set 2 was developed using three-letter codes. (However, the latter was formally published first.) Three-letter code space Three-letter (formerly "Alpha-3") identifiers (for codes composed of 3 letters of the ISO basic Latin alphabet) are used in Set 2, Set 3, and Set 5. The number of languages and language groups that can be so represented is 263 = 17,576. The common use of three-letter codes by three sets of ISO 639 requires some coordination within a larger system. Set 2 defines four special codes mis, mul, und, zxx, a reserved range qaa-qtz (20 × 26 = 520 codes) and has 20 double entries (the B/T codes), plus 2 entries with deprecated B-codes. 
This sums up to 520 + 22 + 4 = 546 codes that cannot be used in Set 3 to represent languages or in Set 5 to represent language families or groups. The remainder is 17,576 – 546 = 17,030. There are somewhere around six to seven thousand languages on Earth today. So those 17,030 codes are adequate to assign a unique code to each language, although some languages may end up with arbitrary codes that sound nothing like the traditional name(s) of that language. Alpha-4 code space (withdrawn) "Alpha-4" codes (for codes composed of 4 letters of the ISO basic Latin alphabet) were proposed to be used in ISO 639-6, which has been withdrawn. The upper limit for the number of languages and dialects that can be represented is 264 = 456,976. See also IETF language tags (based on ISO 639) ISO 3166 (codes for countries) ISO 15924 (codes for writing systems) Codes for constructed languages Language code Language families and languages List of languages List of official languages Lists of ISO 639 codes Notes and references External links ISO 639 at ISO official website Language Coding Agency websites: The ISO 639 Language Code at Infoterm, LCA for Set 1 (code list provided by Set 2 LCA below) ISO 639-2 at the Library of Congress, LCA for Set 2 ISO 639-3 at SIL International, LCA for Set 3 ISO 693-5 at the Library of Congress, LCA for Set 5 ISO 639 Maintenance Agency reports Common Locale Data Repository which contains translations of ISO 639 codes in other languages in an XML format. The CLDR survey tool also contains a more readable format of the data. 00639 Language identifiers Internationalization and localization 1967 introductions
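The code-space arithmetic in the section above is easy to verify programmatically. The following short Python sketch (not part of the standard itself) recomputes the sizes quoted there and enumerates the reserved local-use range qaa through qtz; the variable names are illustrative.

```python
from string import ascii_lowercase

# Sizes of the two- and three-letter code spaces over the 26-letter Latin alphabet.
alpha2_space = 26 ** 2           # 676 possible two-letter identifiers (Set 1)
alpha3_space = 26 ** 3           # 17,576 possible three-letter identifiers

# Three-letter codes that Set 2 takes out of circulation for Sets 3 and 5:
reserved_local_use = 20 * 26     # qaa ... qtz, 520 codes
bibliographic_duplicates = 22    # 20 current B/T pairs plus 2 deprecated B codes
special_codes = len({"mis", "mul", "und", "zxx"})

unavailable = reserved_local_use + bibliographic_duplicates + special_codes
remaining = alpha3_space - unavailable

print(alpha2_space)              # 676
print(alpha3_space)              # 17576
print(unavailable)               # 546
print(remaining)                 # 17030

# The reserved local-use range can be enumerated directly:
local_use = [f"q{a}{b}" for a in ascii_lowercase[:20] for b in ascii_lowercase]
assert len(local_use) == reserved_local_use
assert local_use[0] == "qaa" and local_use[-1] == "qtz"
```

It reproduces the figures 676, 17,576, 546 and 17,030 given above.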
ISO 639
[ "Technology" ]
2,768
[ "Natural language and computing", "Internationalization and localization" ]
167,053
https://en.wikipedia.org/wiki/Unit%20vector
In mathematics, a unit vector in a normed vector space is a vector (often a spatial vector) of length 1. A unit vector is often denoted by a lowercase letter with a circumflex, or "hat", as in v̂ (pronounced "v-hat"). The normalized vector û of a non-zero vector u is the unit vector in the direction of u, i.e., û = u/‖u‖, where ‖u‖ is the norm (or length) of u. The term normalized vector is sometimes used as a synonym for unit vector. A unit vector is often used to represent directions, such as normal directions. Unit vectors are often chosen to form the basis of a vector space, and every vector in the space may be written as a linear combination of unit vectors.

Orthogonal coordinates
Cartesian coordinates
Unit vectors may be used to represent the axes of a Cartesian coordinate system. For instance, the standard unit vectors in the direction of the x, y, and z axes of a three dimensional Cartesian coordinate system are
x̂ = (1, 0, 0), ŷ = (0, 1, 0), ẑ = (0, 0, 1).
They form a set of mutually orthogonal unit vectors, typically referred to as a standard basis in linear algebra. They are often denoted using common vector notation (e.g., x or x⃗) rather than standard unit vector notation (e.g., x̂). In most contexts it can be assumed that x, y, and z (or x⃗, y⃗, and z⃗) are versors of a 3-D Cartesian coordinate system. The notations (î, ĵ, k̂), (x̂1, x̂2, x̂3), (êx, êy, êz), or (ê1, ê2, ê3), with or without hat, are also used, particularly in contexts where i, j, k might lead to confusion with another quantity (for instance with index symbols such as i, j, k, which are used to identify an element of a set or array or sequence of variables). When a unit vector in space is expressed in Cartesian notation as a linear combination of x, y, z, its three scalar components can be referred to as direction cosines. The value of each component is equal to the cosine of the angle formed by the unit vector with the respective basis vector. This is one of the methods used to describe the orientation (angular position) of a straight line, segment of straight line, oriented axis, or segment of oriented axis (vector).

Cylindrical coordinates
The three orthogonal unit vectors appropriate to cylindrical symmetry are:
ρ̂ (also designated by other symbols in some texts), representing the direction along which the distance of the point from the axis of symmetry is measured;
φ̂, representing the direction of the motion that would be observed if the point were rotating counterclockwise about the symmetry axis;
ẑ, representing the direction of the symmetry axis.
They are related to the Cartesian basis x̂, ŷ, ẑ by:
ρ̂ = cos(φ) x̂ + sin(φ) ŷ
φ̂ = −sin(φ) x̂ + cos(φ) ŷ
ẑ = ẑ.
The vectors ρ̂ and φ̂ are functions of φ and are not constant in direction. When differentiating or integrating in cylindrical coordinates, these unit vectors themselves must also be operated on. The derivatives with respect to φ are:
∂ρ̂/∂φ = −sin(φ) x̂ + cos(φ) ŷ = φ̂
∂φ̂/∂φ = −cos(φ) x̂ − sin(φ) ŷ = −ρ̂
∂ẑ/∂φ = 0.

Spherical coordinates
The unit vectors appropriate to spherical symmetry are: r̂, the direction in which the radial distance from the origin increases; φ̂, the direction in which the angle in the x-y plane counterclockwise from the positive x-axis is increasing; and θ̂, the direction in which the angle from the positive z axis is increasing. To minimize redundancy of representations, the polar angle θ is usually taken to lie between zero and 180 degrees. It is especially important to note the context of any ordered triplet written in spherical coordinates, as the roles of θ and φ are often reversed. Here, the American "physics" convention is used. This leaves the azimuthal angle φ defined the same as in cylindrical coordinates.
The Cartesian relations are:
r̂ = sin(θ) cos(φ) x̂ + sin(θ) sin(φ) ŷ + cos(θ) ẑ
θ̂ = cos(θ) cos(φ) x̂ + cos(θ) sin(φ) ŷ − sin(θ) ẑ
φ̂ = −sin(φ) x̂ + cos(φ) ŷ.
The spherical unit vectors depend on both θ and φ, and hence there are 5 possible non-zero derivatives. For a more complete description, see Jacobian matrix and determinant. The non-zero derivatives are:
∂r̂/∂θ = θ̂
∂r̂/∂φ = sin(θ) φ̂
∂θ̂/∂θ = −r̂
∂θ̂/∂φ = cos(θ) φ̂
∂φ̂/∂φ = −sin(θ) r̂ − cos(θ) θ̂.

General unit vectors
Common themes of unit vectors occur throughout physics and geometry:
Curvilinear coordinates
In general, a coordinate system may be uniquely specified using a number of linearly independent unit vectors (the actual number being equal to the degrees of freedom of the space). For ordinary 3-space, these vectors may be denoted ê1, ê2, ê3. It is nearly always convenient to define the system to be orthonormal and right-handed:
êi · êj = δij and êi · (êj × êk) = εijk,
where δij is the Kronecker delta (which is 1 for i = j, and 0 otherwise) and εijk is the Levi-Civita symbol (which is 1 for permutations ordered as ijk, and −1 for permutations ordered as kji).
Right versor
A unit vector in R3 was called a right versor by W. R. Hamilton, as he developed his quaternions. In fact, he was the originator of the term vector, as every quaternion has a scalar part s and a vector part v. If v is a unit vector in R3, then the square of v in quaternions is –1. Thus by Euler's formula, exp(θv) = cos θ + v sin θ is a versor in the 3-sphere. When θ is a right angle, the versor is a right versor: its scalar part is zero and its vector part v is a unit vector in R3. Thus the right versors extend the notion of imaginary units found in the complex plane, where the right versors now range over the 2-sphere rather than the pair {i, –i} in the complex plane. By extension, a right quaternion is a real multiple of a right versor.
See also Cartesian coordinate system Coordinate system Curvilinear coordinates Four-velocity Jacobian matrix and determinant Normal vector Polar coordinate system Standard basis Unit interval Unit square, cube, circle, sphere, and hyperbola Vector notation Vector of ones Unit matrix Notes References Linear algebra Elementary mathematics 1 (number) Vectors (mathematics and physics)
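The normalization formula û = u/‖u‖ and the spherical-coordinate unit vectors described above lend themselves to a short numerical check. The Python sketch below is illustrative (the helper names normalize and spherical_basis are not from any particular library); it normalizes a sample vector and confirms that the spherical unit vectors, in the physics convention used in the article, are mutually orthogonal unit vectors.

```python
import math

def normalize(v):
    """Return the unit vector v / ||v||; raises on the zero vector."""
    norm = math.sqrt(sum(c * c for c in v))
    if norm == 0:
        raise ValueError("the zero vector has no direction")
    return tuple(c / norm for c in v)

def spherical_basis(theta, phi):
    """Unit vectors r_hat, theta_hat, phi_hat of spherical coordinates
    (physics convention: theta = polar angle from +z, phi = azimuth),
    expressed in Cartesian components."""
    st, ct = math.sin(theta), math.cos(theta)
    sp, cp = math.sin(phi), math.cos(phi)
    r_hat = (st * cp, st * sp, ct)
    theta_hat = (ct * cp, ct * sp, -st)
    phi_hat = (-sp, cp, 0.0)
    return r_hat, theta_hat, phi_hat

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

if __name__ == "__main__":
    u = (3.0, 4.0, 0.0)
    print(normalize(u))                    # (0.6, 0.8, 0.0), which has length 1
    r, t, p = spherical_basis(theta=1.1, phi=0.4)
    # The three spherical unit vectors are mutually orthogonal unit vectors.
    assert abs(dot(r, r) - 1) < 1e-12 and abs(dot(t, t) - 1) < 1e-12
    assert abs(dot(r, t)) < 1e-12 and abs(dot(r, p)) < 1e-12 and abs(dot(t, p)) < 1e-12
    print("spherical basis is orthonormal at this point")
```

The same dot-product checks pass for any θ and φ, reflecting the fact that the spherical basis is orthonormal at every point even though its directions vary from point to point.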
Unit vector
[ "Mathematics" ]
1,228
[ "Linear algebra", "Elementary mathematics", "Algebra" ]
167,079
https://en.wikipedia.org/wiki/Smartphone
A smartphone is a mobile device that combines the functionality of a traditional mobile phone with advanced computing capabilities. It typically has a touchscreen interface, allowing users to access a wide range of applications and services, such as web browsing, email, and social media, as well as multimedia playback and streaming. Smartphones have built-in cameras, GPS navigation, and support for various communication methods, including voice calls, text messaging, and internet-based messaging apps. Smartphones are distinguished from older-design feature phones by their more advanced hardware capabilities and extensive mobile operating systems, access to the internet, business applications, mobile payments, and multimedia functionality, including music, video, gaming, radio, and television. Smartphones typically feature metal–oxide–semiconductor (MOS) integrated circuit (IC) chips, various sensors, and support for multiple wireless communication protocols. These devices leverage sensors such as accelerometers, barometers, gyroscopes, and magnetometers, which can be used by both pre-installed and third-party software to enhance functionality. In addition, smartphones are equipped to support a variety of wireless communication standards, including LTE, 5G NR, Wi-Fi, Bluetooth, and satellite navigation. By the mid-2020s, manufacturers began integrating satellite messaging and emergency services, expanding their utility in remote areas without reliable cellular coverage. Following the rising popularity of the iPhone in the late 2000s, the majority of smartphones have featured thin, slate-like form factors with large, capacitive touch screens with support for multi-touch gestures rather than physical keyboards. Most modern smartphones have the ability for users to download or purchase additional applications from a centralized app store. They often have support for cloud storage and cloud synchronization, and virtual assistants. Smartphones have largely replaced personal digital assistant (PDA) devices, handheld/palm-sized PCs, portable media players (PMP), point-and-shoot cameras, camcorders, and, to a lesser extent, handheld video game consoles, e-reader devices, pocket calculators, and GPS tracking units. Since the early 2010s, improved hardware and faster wireless communication have bolstered the growth of the smartphone industry. , over a billion smartphones are sold globally every year. In 2019 alone, 1.54 billion smartphone units were shipped worldwide. , 75.05 percent of the world population were smartphone users. History Early smartphones were marketed primarily towards the enterprise market, attempting to bridge the functionality of standalone PDA devices with support for cellular telephony, but were limited by their bulky form, short battery life, slow analog cellular networks, and the immaturity of wireless data services. These issues were eventually resolved with the exponential scaling and miniaturization of MOS transistors down to sub-micron levels (Moore's law), the improved lithium-ion battery, faster digital mobile data networks (Edholm's law), and more mature software platforms that allowed mobile device ecosystems to develop independently of data providers. In the 2000s, NTT DoCoMo's i-mode platform, BlackBerry, Nokia's Symbian platform, and Windows Mobile began to gain market traction, with models often featuring QWERTY keyboards or resistive touchscreen input and emphasizing access to push email and wireless internet. 
Forerunner In the early 1990s, IBM engineer Frank Canova realised that chip-and-wireless technology was becoming small enough to use in handheld devices. The first commercially available device that could be properly referred to as a "smartphone" began as a prototype called "Angler" developed by Canova in 1992 while at IBM and demonstrated in November of that year at the COMDEX computer industry trade show. A refined version was marketed to consumers in 1994 by BellSouth under the name Simon Personal Communicator. In addition to placing and receiving cellular calls, the touchscreen-equipped Simon could send and receive faxes and emails. It included an address book, calendar, appointment scheduler, calculator, world time clock, and notepad, as well as other visionary mobile applications such as maps, stock reports and news. The IBM Simon was manufactured by Mitsubishi Electric, which integrated features with its own cellular radio technologies. It featured a liquid-crystal display (LCD) and PC Card support. The Simon was commercially unsuccessful, particularly due to its bulky form factor and limited battery life, using NiCad batteries rather than the nickel–metal hydride batteries commonly used in mobile phones in the 1990s, or lithium-ion batteries used in modern smartphones. The term "smart phone" (in two words) was not coined until a year after the introduction of the Simon, appearing in print as early as 1995, describing AT&T's PhoneWriter Communicator. The term "smartphone" (as one word) was first used by Ericsson in 1997 to describe a new device concept, the GS88. PDA/phone hybrids Beginning in the mid-to-late 1990s, many people who had mobile phones carried a separate dedicated PDA device, running early versions of operating systems such as Palm OS, Newton OS, Symbian or Windows CE/Pocket PC. These operating systems would later evolve into early mobile operating systems. Most of the "smartphones" in this era were hybrid devices that combined these existing familiar PDA OSes with basic phone hardware. The results were devices that were bulkier than either dedicated mobile phones or PDAs, but allowed a limited amount of cellular Internet access. PDA and mobile phone manufacturers competed in reducing the size of devices. The bulk of these smartphones combined with their high cost and expensive data plans, plus other drawbacks such as expansion limitations and decreased battery life compared to separate standalone devices, generally limited their popularity to "early adopters" and business users who needed portable connectivity. In March 1996, Hewlett-Packard released the OmniGo 700LX, a modified HP 200LX palmtop PC with a Nokia 2110 mobile phone piggybacked onto it and ROM-based software to support it. It had a 640 × 200 resolution CGA compatible four-shade gray-scale LCD screen and could be used to place and receive calls, and to create and receive text messages, emails and faxes. It was also 100% DOS 5.0 compatible, allowing it to run thousands of existing software titles, including early versions of Windows. In August 1996, Nokia released the Nokia 9000 Communicator, a digital cellular PDA based on the Nokia 2110 with an integrated system based on the PEN/GEOS 3.0 operating system from Geoworks. The two components were attached by a hinge in what became known as a clamshell design, with the display above and a physical QWERTY keyboard below. 
The PDA provided e-mail; calendar, address book, calculator and notebook applications; text-based Web browsing; and could send and receive faxes. When closed, the device could be used as a digital cellular telephone. In June 1999, Qualcomm released the "pdQ Smartphone", a CDMA digital PCS smartphone with an integrated Palm PDA and Internet connectivity. Subsequent landmark devices included: The Ericsson R380 (December 2000) by Ericsson Mobile Communications, the first phone running the operating system later named Symbian (it ran EPOC Release 5, which was renamed Symbian OS at Release 6). It had PDA functionality and limited Web browsing on a resistive touchscreen utilizing a stylus. While it was marketed as a "smartphone", users could not install their own software on the device. The Kyocera 6035 (February 2001), a dual-nature device with a separate Palm OS PDA operating system and CDMA mobile phone firmware. It supported limited Web browsing with the PDA software treating the phone hardware as an attached modem. The Nokia 9210 Communicator (June 2001), the first phone running Symbian (Release 6) with Nokia's Series 80 platform (v1.0). This was the first Symbian phone platform allowing the installation of additional applications. Like the Nokia 9000 Communicator, it is a large clamshell device with a full physical QWERTY keyboard inside. Handspring's Treo 180 (2002), the first smartphone that fully integrated the Palm OS on a GSM mobile phone having telephony, SMS messaging and Internet access built into the OS. The 180 model had a thumb-type keyboard and the 180g version had a Graffiti handwriting recognition area, instead. Japanese cell phones In 1999, Japanese wireless provider NTT DoCoMo launched i-mode, a new mobile internet platform which provided data transmission speeds up to 9.6 kilobits per second, and access web services available through the platform such as online shopping. NTT DoCoMo's i-mode used cHTML, a language which restricted some aspects of traditional HTML in favor of increasing data speed for the devices. Limited functionality, small screens and limited bandwidth allowed for phones to use the slower data speeds available. The rise of i-mode helped NTT DoCoMo accumulate an estimated 40 million subscribers by the end of 2001, and ranked first in market capitalization in Japan and second globally. Japanese cell phones increasingly diverged from global standards and trends to offer other forms of advanced services and smartphone-like functionality that were specifically tailored to the Japanese market, such as mobile payments and shopping, near-field communication (NFC) allowing mobile wallet functionality to replace smart cards for transit fares, loyalty cards, identity cards, event tickets, coupons, money transfer, etc., downloadable content like musical ringtones, games, and comics, and 1seg mobile television. Phones built by Japanese manufacturers used custom firmware, however, and did not yet feature standardized mobile operating systems designed to cater to third-party application development, so their software and ecosystems were akin to very advanced feature phones. As with other feature phones, additional software and services required partnerships and deals with providers. 
The degree of integration between phones and carriers, unique phone features, non-standardized platforms, and tailoring to Japanese culture made it difficult for Japanese manufacturers to export their phones, especially when demand was so high in Japan that the companies did not feel the need to look elsewhere for additional profits. The rise of 3G technology in other markets and non-Japanese phones with powerful standardized smartphone operating systems, app stores, and advanced wireless network capabilities allowed non-Japanese phone manufacturers to finally break in to the Japanese market, gradually adopting Japanese phone features like emojis, mobile payments, NFC, etc. and spreading them to the rest of the world. Early smartphones Phones that made effective use of any significant data connectivity were still rare outside Japan until the introduction of the Danger Hiptop in 2002, which saw moderate success among U.S. consumers as the T-Mobile Sidekick. Later, in the mid-2000s, business users in the U.S. started to adopt devices based on Microsoft's Windows Mobile, and then BlackBerry smartphones from Research In Motion. American users popularized the term "CrackBerry" in 2006 due to the BlackBerry's addictive nature. In the U.S., the high cost of data plans and relative rarity of devices with Wi-Fi capabilities that could avoid cellular data network usage kept adoption of smartphones mainly to business professionals and "early adopters." Outside the U.S. and Japan, Nokia was seeing success with its smartphones based on Symbian, originally developed by Psion for their personal organisers, and it was the most popular smartphone OS in Europe during the middle to late 2000s. Initially, Nokia's Symbian smartphones were focused on business with the Eseries, similar to Windows Mobile and BlackBerry devices at the time. From 2002 onwards, Nokia started producing consumer-focused smartphones, popularized by the entertainment-focused Nseries. Until 2010, Symbian was the world's most widely used smartphone operating system. The touchscreen personal digital assistant (PDA)derived nature of adapted operating systems like Palm OS, the "Pocket PC" versions of what was later Windows Mobile, and the UIQ interface that was originally designed for pen-based PDAs on Symbian OS devices resulted in some early smartphones having stylus-based interfaces. These allowed for virtual keyboards and handwriting input, thus also allowing easy entry of Asian characters. By the mid-2000s, the majority of smartphones had a physical QWERTY keyboard. Most used a "keyboard bar" form factor, like the BlackBerry line, Windows Mobile smartphones, Palm Treos, and some of the Nokia Eseries. A few hid their full physical QWERTY keyboard in a sliding form factor, like the Danger Hiptop line. Some even had only a numeric keypad using T9 text input, like the Nokia Nseries and other models in the Nokia Eseries. Resistive touchscreens with stylus-based interfaces could still be found on a few smartphones, like the Palm Treos, which had dropped their handwriting input after a few early models that were available in versions with Graffiti instead of a keyboard. Form factor and operating system shifts The late 2000s and early 2010s saw a shift in smartphone interfaces away from devices with physical keyboards and keypads to ones with large finger-operated capacitive touchscreens. The first phone of any kind with a large capacitive touchscreen was the LG Prada, announced by LG in December 2006. 
This was a fashionable feature phone created in collaboration with Italian luxury designer Prada with a 3" 240 x 400 pixel screen, a 2-Megapixel digital camera with 144p video recording ability, an LED flash, and a miniature mirror for self portraits. In January 2007, Apple Computer introduced the iPhone. It had a 3.5" capacitive touchscreen with twice the common resolution of most smartphone screens at the time, and introduced multi-touch to phones, which allowed gestures such as "pinching" to zoom in or out on photos, maps, and web pages. The iPhone was notable as being the first device of its kind targeted at the mass market to abandon the use of a stylus, keyboard, or keypad typical of contemporary smartphones, instead using a large touchscreen for direct finger input as its main means of interaction. The iPhone's operating system was also a shift away from older operating systems (which older phones supported and which were adapted from PDAs and feature phones) to an operative system powerful enough to not require using a limited, stripped down web browser that can only render pages specially formatted using technologies such as WML, cHTML, or XHTML and instead ran a version of Apple's Safari browser that could render full websites not specifically designed for mobile phones. Later Apple shipped a software update that gave the iPhone a built-in on-device App Store allowing direct wireless downloads of third-party software. This kind of centralized App Store and free developer tools quickly became the new main paradigm for all smartphone platforms for software development, distribution, discovery, installation, and payment, in place of expensive developer tools that required official approval to use and a dependence on third-party sources providing applications for multiple platforms. The advantages of a design with software powerful enough to support advanced applications and a large capacitive touchscreen affected the development of another smartphone OS platform, Android, with a more BlackBerry-like prototype device scrapped in favor of a touchscreen device with a slide-out physical keyboard, as Google's engineers thought at the time that a touchscreen could not completely replace a physical keyboard and buttons. Android is based around a modified Linux kernel, again providing more power than mobile operating systems adapted from PDAs and feature phones. The first Android device, the horizontal-sliding HTC Dream, was released in September 2008. In 2012, Asus started experimenting with a convertible docking system named PadFone, where the standalone handset can when necessary be inserted into a tablet-sized screen unit with integrated supportive battery and used as such. In 2013 and 2014, Samsung experimented with the hybrid combination of compact camera and smartphone, releasing the Galaxy S4 Zoom and K Zoom, each equipped with integrated 10× optical zoom lens and manual parameter settings (including manual exposure and focus) years before these were widely adapted among smartphones. The S4 Zoom additionally has a rotary knob ring around the lens and a tripod mount. While screen sizes have increased, manufacturers have attempted to make smartphones thinner at the expense of utility and sturdiness, since a thinner frame is more vulnerable to bending and has less space for components, namely battery capacity. 
Operating system competition The iPhone and later touchscreen-only Android devices together popularized the slate form factor, based on a large capacitive touchscreen as the sole means of interaction, and led to the decline of earlier, keyboard- and keypad-focused platforms. Later, navigation keys such as the home, back, menu, task and search buttons have also been increasingly replaced by nonphysical touch keys, then virtual, simulated on-screen navigation keys, commonly with access combinations such as a long press of the task key to simulate a short menu key press, as with home button to search. More recent "bezel-less" types have their screen surface space extended to the unit's front bottom to compensate for the display area lost for simulating the navigation keys. While virtual keys offer more potential customizability, their location may be inconsistent among systems depending on screen rotation and software used. Multiple vendors attempted to update or replace their existing smartphone platforms and devices to better-compete with Android and the iPhone; Palm unveiled a new platform known as webOS for its Palm Pre in late-2009 to replace Palm OS, which featured a focus on a task-based "card" metaphor and seamless synchronization and integration between various online services (as opposed to the then-conventional concept of a smartphone needing a PC to serve as a "canonical, authoritative repository" for user data). HP acquired Palm in 2010 and released several other webOS devices, including the Pre 3 and HP TouchPad tablet. As part of a proposed divestment of its consumer business to focus on enterprise software, HP abruptly ended development of future webOS devices in August 2011, and sold the rights to webOS to LG Electronics in 2013, for use as a smart TV platform. Research in Motion introduced the vertical-sliding BlackBerry Torch and BlackBerry OS 6 in 2010, which featured a redesigned user interface, support for gestures such as pinch-to-zoom, and a new web browser based on the same WebKit rendering engine used by the iPhone. The following year, RIM released BlackBerry OS 7 and new models in the Bold and Torch ranges, which included a new Bold with a touchscreen alongside its keyboard, and the Torch 9860—the first BlackBerry phone to not include a physical keyboard. In 2013, it replaced the legacy BlackBerry OS with a revamped, QNX-based platform known as BlackBerry 10, with the all-touch BlackBerry Z10 and keyboard-equipped Q10 as launch devices. In 2010, Microsoft unveiled a replacement for Windows Mobile known as Windows Phone, featuring a new touchscreen-centric user interface built around flat design and typography, a home screen with "live tiles" containing feeds of updates from apps, as well as integrated Microsoft Office apps. In February 2011, Nokia announced that it had entered into a major partnership with Microsoft, under which it would exclusively use Windows Phone on all of its future smartphones, and integrate Microsoft's Bing search engine and Bing Maps (which, as part of the partnership, would also license Nokia Maps data) into all future devices. The announcement led to the abandonment of both Symbian, as well as MeeGo—a Linux-based mobile platform it was co-developing with Intel. Nokia's low-end Lumia 520 saw strong demand and helped Windows Phone gain niche popularity in some markets, overtaking BlackBerry in global market share in 2013. In mid-June 2012, Meizu released its mobile operating system, Flyme OS. 
Many of these attempts to compete with Android and iPhone were short-lived. Over the course of the decade, the two platforms became a clear duopoly in smartphone sales and market share, with BlackBerry, Windows Phone, and other operating systems eventually stagnating to little or no measurable market share. In 2015, BlackBerry began to pivot away from its in-house mobile platforms in favor of producing Android devices, focusing on a security-enhanced distribution of the software. The following year, the company announced that it would also exit the hardware market to focus more on software and its enterprise middleware, and began to license the BlackBerry brand and its Android distribution to third-party OEMs such as TCL for future devices. In September 2013, Microsoft announced its intent to acquire Nokia's mobile device business for $7.1 billion, as part of a strategy under CEO Steve Ballmer for Microsoft to be a "devices and services" company. Despite the growth of Windows Phone and the Lumia range (which accounted for nearly 90% of all Windows Phone devices sold), the platform never had significant market share in the key U.S. market, and Microsoft was unable to maintain Windows Phone's momentum in the years that followed, resulting in dwindling interest from users and app developers. After Balmer was succeeded by Satya Nadella (who has placed a larger focus on software and cloud computing) as CEO of Microsoft, it took a $7.6 billion write-off on the Nokia assets in July 2015, and laid off nearly the entire Microsoft Mobile unit in May 2016. Prior to the completion of the sale to Microsoft, Nokia released a series of Android-derived smartphones for emerging markets known as Nokia X, which combined an Android-based platform with elements of Windows Phone and Nokia's feature phone platform Asha, using Microsoft and Nokia services rather than Google. Camera advancements The first commercial camera phone was the Kyocera Visual Phone VP-210, released in Japan in May 1999. It was called a "mobile videophone" at the time, and had a 110,000-pixel front-facing camera. It could send up to two images per second over Japan's Personal Handy-phone System (PHS) cellular network, and store up to 20 JPEG digital images, which could be sent over e-mail. The first mass-market camera phone was the J-SH04, a Sharp J-Phone model sold in Japan in November 2000. It could instantly transmit pictures via cell phone telecommunication. By the mid-2000s, higher-end cell phones commonly had integrated digital cameras. In 2003 camera phones outsold stand-alone digital cameras, and in 2006 they outsold film and digital stand-alone cameras. Five billion camera phones were sold in five years, and by 2007 more than half of the installed base of all mobile phones were camera phones. Sales of separate cameras peaked in 2008. Many early smartphones did not have cameras at all, and earlier models that had them had low performance and insufficient image and video quality that could not compete with budget pocket cameras and fulfill user's needs. By the beginning of the 2010s almost all smartphones had an integrated digital camera. The decline in sales of stand-alone cameras accelerated due to the increasing use of smartphones with rapidly improving camera technology for casual photography, easier image manipulation, and abilities to directly share photos through the use of apps and web-based services. By 2011, cell phones with integrated cameras were selling hundreds of millions per year. 
In 2015, digital camera sales were 35.395 million units or only less than a third of digital camera sales numbers at their peak and also slightly less than film camera sold number at their peak. Contributing to the rise in popularity of smartphones being used over dedicated cameras for photography, smaller pocket cameras have difficulty producing bokeh in images, but nowadays, some smartphones have dual-lens cameras that reproduce the bokeh effect easily, and can even rearrange the level of bokeh after shooting. This works by capturing multiple images with different focus settings, then combining the background of the main image with a macro focus shot. In 2007, the Nokia N95 was notable as a smartphone that had a 5.0 Megapixel (MP) camera, when most others had cameras with around 3 MP or less than 2 MP. Some specialized feature phones like the LG Viewty, Samsung SGH-G800, and Sony Ericsson K850i, all released later that year, also had 5.0 MP cameras. By 2010, 5.0 MP cameras were common; a few smartphones had 8.0 MP cameras and the Nokia N8, Sony Ericsson Satio, and Samsung M8910 Pixon12 feature phone had 12 MP. The main camera of the 2009 Nokia N86 uniquely features a three-level aperture lens. The Altek Leo, a 14-megapixel smartphone with 3x optical zoom lens and 720p HD video camera was released in late 2010. In 2011, the same year the Nintendo 3DS was released, HTC unveiled the Evo 3D, a 3D phone with a dual five-megapixel rear camera setup for spatial imaging, among the earliest mobile phones with more than one rear camera. The 2012 Samsung Galaxy S3 introduced the ability to capture photos using voice commands. In 2012, Nokia announced and released the Nokia 808 PureView, featuring a 41-megapixel 1/1.2-inch sensor and a high-resolution f/2.4 Zeiss all-aspherical one-group lens. The high resolution enables four times of lossless digital zoom at 1080p and six times at 720p resolution, using image sensor cropping. The 2013 Nokia Lumia 1020 has a similar high-resolution camera setup, with the addition of optical image stabilization and manual camera settings years before common among high-end mobile phones, although lacking expandable storage that could be of use for accordingly high file sizes. Mobile optical image stabilization was first introduced by Nokia in 2012 with the Lumia 920, and the earliest known smartphone with an optically stabilized front camera is the HTC 10 from 2016. Optical image stabilization enables prolonged exposure times for low-light photography and smoothing out handheld video shaking, since the appearance of shakes magnifies over a larger display such as a monitor or television set, which would be detrimental to the watching experience. Since 2012, smartphones have become increasingly able to capture photos while filming. The resolution of those photos resolution may vary between devices. Samsung has used the highest image sensor resolution at the video's aspect ratio, which at 16:9 is 6 Megapixels (3264 × 1836) on the Galaxy S3 and 9.6 Megapixels (4128 × 2322) on the Galaxy S4. The earliest iPhones with such functionality, iPhone 5 and 5s, captured simultaneous photos at 0.9 Megapixels (1280 × 720) while filming. Starting in 2013 on the Xperia Z1, Sony experimented with real-time augmented reality camera effects such as floating text, virtual plants, volcano, and a dinosaur walking in the scenery. Apple later did similarly in 2017 with the iPhone X. 
Also in 2013, iOS 7 introduced the later widely implemented viewfinder intuition whereby the exposure value can be adjusted through vertical swiping after focus and exposure have been set by tapping, and even while locked after holding down for a brief moment. On some devices, this behaviour may be restricted by software in video and slow-motion modes and for the front camera. In 2013, Samsung unveiled the Galaxy S4 Zoom smartphone, with the grip shape of a compact camera, a 10× optical zoom lens, a rotary knob ring around the lens as used on higher-end compact cameras, and an ISO 1222 tripod mount. It is equipped with manual parameter settings, including for focus and exposure. Its successor, the 2014 Samsung Galaxy K Zoom, brought resolution and performance enhancements but lacks the rotary knob and tripod mount, allowing a more smartphone-like shape with a less protruding lens. The 2014 Panasonic Lumix DMC-CM1 was another attempt at mixing a mobile phone with a compact camera, so much so that it inherited the Lumix brand. While lacking optical zoom, its image sensor has a 1" format, as used in high-end compact cameras such as the Lumix DMC-LX100 and the Sony CyberShot DSC-RX100 series, with several times the surface area of a typical mobile camera image sensor, as well as support for light sensitivities of up to ISO 25600, well beyond the typical mobile camera sensitivity range. No successor has been released. In 2013 and 2014, HTC experimentally traded pixel count for pixel surface size on its One M7 and M8, both with only four megapixels, marketed as UltraPixel, citing improved brightness and less noise in low light, though the newer One M8 lacks optical image stabilization. The One M8 was additionally one of the earliest smartphones to be equipped with a dual camera setup. Its software can generate visual spatial effects such as 3D panning, weather effects, and focus adjustment ("UFocus"), simulating the post-photographic selective focusing capability of images produced by a light-field camera. HTC returned to a high-megapixel single-camera setup on the 2015 One M9. Meanwhile, in 2014, LG Mobile started experimenting with time-of-flight camera functionality, in which a rear laser beam that measures distance accelerates autofocus. Phase-detection autofocus was increasingly adopted throughout the mid-2010s, allowing for quicker and more accurate focusing than contrast detection. In 2016, Apple introduced the iPhone 7 Plus, one of the phones that popularized a dual camera setup; it included a main 12 MP camera along with a 12 MP telephoto camera. In early 2018, Huawei released a new flagship phone, the Huawei P20 Pro, with one of the first triple-lens camera setups, featuring Leica optics. In late 2018, Samsung released a new mid-range smartphone, the Galaxy A9 (2018), with the world's first quad camera setup. The Nokia 9 PureView was released in 2019 featuring a penta-lens camera system. 2019 saw the commercialization of high-resolution sensors, which use pixel binning to capture more light; 48 MP and 64 MP sensors developed by Sony and Samsung are commonly used by several manufacturers, and 108 MP sensors were first implemented in late 2019 and early 2020.
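The pixel binning used by these high-resolution sensors can be illustrated with a minimal 2×2 binning sketch. Real sensors combine neighbouring pixels within the raw Bayer mosaic, so the plain grayscale array below is a deliberate simplification for illustration only:

```python
# Toy illustration of 2x2 pixel binning: four neighbouring pixels are combined
# into one output pixel, trading resolution for light gathered per pixel.
import numpy as np

def bin_2x2(sensor: np.ndarray) -> np.ndarray:
    h, w = sensor.shape
    assert h % 2 == 0 and w % 2 == 0, "dimensions must be even"
    # Sum each 2x2 block; e.g. a 48 MP readout becomes a 12 MP image this way.
    return sensor.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

raw = np.random.poisson(4.0, size=(8, 8))  # simulated dim, noisy readout
binned = bin_2x2(raw)                      # 4x fewer pixels, ~4x the signal each
```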
Video resolution With increasingly powerful chipsets able to handle the computing workload of higher pixel rates, mobile video resolution and frame rate have caught up with those of dedicated consumer-grade cameras over the years. In 2009, the Samsung Omnia HD became the first mobile phone with 720p HD video recording. Also in 2009, Apple first brought video recording to the iPhone 3GS, at 480p, whereas the 2007 original iPhone and 2008 iPhone 3G lacked video recording entirely. 720p was more widely adopted in 2010, on smartphones such as the original Samsung Galaxy S, Sony Ericsson Xperia X10, iPhone 4, and HTC Desire HD. The early 2010s brought a steep increase in mobile video resolution. 1080p mobile video recording was achieved in 2011 on the Samsung Galaxy S2, HTC Sensation, and iPhone 4s. In 2012 and 2013, select devices with 720p filming at 60 frames per second were released, namely the Asus PadFone 2 and HTC One M7, unlike the flagships of Samsung, Sony, and Apple; however, the 2013 Samsung Galaxy S4 Zoom does support it. In 2013, the Samsung Galaxy Note 3 introduced 2160p (4K) video recording at 30 frames per second, as well as 1080p doubled to 60 frames per second for smoothness. Other vendors adopted 2160p recording in 2014, including on the optically stabilized LG G3. Apple first implemented it in late 2015 on the iPhone 6s and 6s Plus. The frame rate at 2160p was widely doubled to 60 in 2017 and 2018, starting with the iPhone 8, Galaxy S9, LG G7, and OnePlus 6. Sufficient chipset computing performance, image sensor resolution, and sensor readout speed enabled mobile 4320p (8K) filming in 2020, introduced with the Samsung Galaxy S20 and Redmi K30 Pro, though some intermediate resolution levels were skipped over during this development, including 1440p (2.5K), 2880p (5K), and 3240p (6K), except for 1440p on Samsung Galaxy front cameras. Mid-class Among mid-range smartphone series, the introduction of higher video resolutions was initially delayed by two to three years compared to flagship counterparts. 720p was widely adopted in 2012, including on the Samsung Galaxy S3 Mini and Sony Xperia go, and 1080p in 2013 on the Samsung Galaxy S4 Mini and HTC One mini. The proliferation of video resolutions beyond 1080p was postponed by several years: the mid-class Sony Xperia M5 supported 2160p filming in 2016, whereas Samsung's mid-class series such as the Galaxy J and A series were strictly limited to 1080p resolution, and to 30 frames per second at any resolution, for six years until around 2019; it is unclear whether, or to what extent, this was for technical reasons. Setting A lower video resolution setting may be desirable to extend recording time by reducing storage space and power consumption. The camera software of some smartphones is equipped with separate controls for resolution, frame rate, and bit rate; an example of a smartphone with these controls is the LG V10. Slow motion video A distinction between different camera software is the method used to store high-frame-rate video footage: more recent phones retain both the image sensor's original output frame rate and the audio, while earlier phones do not record audio and stretch the video so that it plays back slowly at default speed. The stretched encoding method used on earlier phones enables slow-motion playback on video player software that lacks manual playback speed control, as typically found on older devices. The real-time method used by more recent phones, however, offers greater versatility for video editing, since slowed-down portions of the footage can be freely selected by the user and exported into a separate video; rudimentary video editing software for this purpose is usually pre-installed, and the video can optionally be played back at normal (real-time) speed, acting as a usual video.
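The practical difference between the two storage methods can be shown with a rough back-of-the-envelope calculation; the figures below are made-up example values, not the specifications of any particular phone:

```python
# Toy comparison of the two slow-motion storage methods described above.
# "Stretched" footage is written into a normal-rate container, so it plays back
# slowly (and silently) by default; "real-time" footage keeps the capture rate
# and audio, and is slowed down selectively later in an editor.
capture_fps = 120        # sensor output while recording slow motion
container_fps = 30       # playback rate used by the stretched method
clip_seconds = 5         # real-time duration of the recorded action

frames = capture_fps * clip_seconds            # 600 frames captured
stretched_playback = frames / container_fps    # 20 s of slowed, silent playback
realtime_playback = frames / capture_fps       # 5 s clip, audio intact
print(stretched_playback, realtime_playback)   # 20.0 5.0
```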
Development The earliest smartphone known to feature a slow motion mode is the 2009 Samsung i8000 Omnia II, which can record QVGA (320 × 240) at 120 fps (frames per second). Slow motion is not available on the Galaxy S1, Galaxy S2, Galaxy Note 1, and Galaxy S3 flagships. In early 2012, the HTC One X allowed 768 × 432 pixel slow motion filming at an undocumented frame rate; the output footage has been measured as a third of real-time speed. In late 2012, the Galaxy Note 2 brought back slow motion, with D1 (720 × 480) at 120 fps. In early 2013, the Galaxy S4 and HTC One M7 recorded at that frame rate with 800 × 450, followed by the Note 3 and iPhone 5s with 720p (1280 × 720) in late 2013, the latter of which retains audio and the original sensor frame rate, as with all later iPhones. In early 2014, the Sony Xperia Z2 and HTC One M8 adopted this resolution as well. In late 2014, the iPhone 6 doubled the frame rate to 240 fps, and in late 2015, the iPhone 6s added support for 1080p (1920 × 1080) at 120 frames per second. In early 2015, the Galaxy S6 became the first Samsung mobile phone to retain the sensor frame rate and audio, and in early 2016, the Galaxy S7 became the first Samsung mobile phone with 240 fps recording, also at 720p. In early 2015, the MT6795 chipset by MediaTek promised 1080p at 480 fps video recording; the project's status remains indefinite. Since early 2017, starting with the Sony Xperia XZ, smartphones have been released with a slow motion mode that unsustainably records at frame rates multiple times as high, by temporarily storing frames in the image sensor's internal burst memory. Such a recording lasts a few real-time seconds at most. In late 2017, the iPhone 8 brought 1080p at 240 fps, as well as 2160p at 60 fps, followed by the Galaxy S9 in early 2018. In mid-2018, the OnePlus 6 brought 720p at 480 fps, sustainable for one minute. In early 2021, the OnePlus 9 Pro became the first phone with 2160p at 120 fps. HDR video The first smartphones to record HDR video were the early 2013 Sony Xperia Z and mid-2013 Xperia Z Ultra, followed by the early 2014 Galaxy S5, all at 1080p. Audio recording Mobile phones with multiple microphones usually allow video recording with stereo audio for spatiality, with Samsung, Sony, and HTC initially implementing it in 2012 on the Samsung Galaxy S3, Sony Xperia S, and HTC One X. Apple implemented stereo audio starting with the 2018 iPhone XS family and iPhone XR. Front cameras Photo Emphasis has been put on the front camera since the mid-2010s, with front cameras reaching resolutions as high as those of typical rear cameras, such as on the 2015 LG G4 (8 megapixels), Sony Xperia C5 Ultra (13 megapixels), and 2016 Sony Xperia XA Ultra (16 megapixels, optically stabilized). The 2015 LG V10 brought a dual front camera system in which the second camera has a wider angle for group photography. Samsung has implemented a front-camera sweep panorama (panorama selfie) feature since the Galaxy Note 4 to extend the field of view. Video In 2012, the Galaxy S3 and iPhone 5 brought 720p HD front video recording (at 30 fps). In early 2013, the Samsung Galaxy S4, HTC One M7 and Sony Xperia Z brought 1080p Full HD at that frame rate, and in late 2014, the Galaxy Note 4 introduced 1440p video recording on the front camera. Apple adopted 1080p front camera video with the late 2016 iPhone 7. In 2019, smartphones started adopting 2160p 4K video recording on the front camera, six years after rear camera 2160p commenced with the Galaxy Note 3.
Display advancements In the early 2010s, larger smartphones with large diagonal screen sizes, dubbed "phablets", began to achieve popularity, with the 2011 Samsung Galaxy Note series gaining notably wide adoption. In 2013, Huawei launched the Huawei Mate series, sporting an HD (1280 × 720) IPS+ LCD display that was considered quite large at the time. Some companies began to release smartphones in 2013 incorporating flexible displays to create curved form factors, such as the Samsung Galaxy Round and LG G Flex. By 2014, 1440p displays began to appear on high-end smartphones. In 2015, Sony released the Xperia Z5 Premium, featuring a 4K resolution display, although only images and videos could actually be rendered at that resolution (all other software was shown at 1080p). New trends for smartphone displays began to emerge in 2017, with both LG and Samsung releasing flagship smartphones (the LG G6 and Galaxy S8) utilizing displays with taller aspect ratios than the common 16:9 ratio and a high screen-to-body ratio, also known as a "bezel-less design". These designs allow the display to have a larger diagonal measurement, but with a slimmer width than 16:9 displays of an equivalent screen size. Another trend popularized in 2017 was displays containing tab-like cut-outs at the top centre, colloquially known as a "notch", to contain the front-facing camera and sometimes other sensors typically located along the top bezel of a device. These designs allow for "edge-to-edge" displays that take up nearly the entire height of the device, with little to no bezel along the top, and sometimes a minimal bottom bezel as well. This design characteristic appeared almost simultaneously on the Sharp Aquos S2 and the Essential Phone, each of which featured a small circular tab for its camera, followed just a month later by the iPhone X, which used a wider tab to contain a camera and the facial scanning system known as Face ID. The 2015 LG V10 had a precursor to the concept, with a portion of the screen wrapped around the camera area in the top-left corner; the resulting area was marketed as a "second" display that could be used for various supplemental features. Other variations of the practice later emerged, such as the "hole-punch" camera (as on the Honor View 20 and Samsung's Galaxy A8s and Galaxy S10), which eschews the tabbed "notch" for a circular or rounded-rectangular cut-out within the screen instead, while Oppo released the first "all-screen" phones with no notches at all, including one with a mechanical front camera that pops up from the top of the device (the Find X), and a 2019 prototype for a front-facing camera that can be embedded and hidden below the display, using a special partially translucent screen structure that allows light to reach the image sensor below the panel. The first implementation was the ZTE Axon 20 5G, with a 32 MP sensor manufactured by Visionox. Displays supporting refresh rates higher than 60 Hz (such as 90 Hz or 120 Hz) also began to appear on smartphones in 2017; initially confined to "gaming" smartphones such as the Razer Phone (2017) and Asus ROG Phone (2018), they later became more common on flagship phones such as the Pixel 4 (2019) and Samsung Galaxy S21 series (2021). Higher refresh rates allow for smoother motion and lower input latency, but often at the cost of battery life. As such, the device may offer a means to disable high refresh rates, or be configured to automatically reduce the refresh rate when there is low on-screen motion.
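A minimal sketch of the kind of decision logic such automatic refresh-rate reduction could follow is shown below; the rates, inputs, and thresholds are assumed example values for illustration, not any vendor's actual implementation:

```python
# Illustrative variable-refresh-rate policy: run the panel fast while there is
# motion or touch input, and fall back to a lower rate for static content to
# save power.
def choose_refresh_rate(frame_changed_recently: bool, touch_active: bool) -> int:
    if touch_active or frame_changed_recently:
        return 120  # Hz: smooth scrolling, animation, and gaming
    return 60       # Hz: static content, lower power draw

print(choose_refresh_rate(frame_changed_recently=False, touch_active=False))  # 60
```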
Multi-tasking Early implementations of multiple simultaneous tasks on a smartphone display are the picture-in-picture video playback mode ("pop-up play") and the "live video list" with playing video thumbnails of the 2012 Samsung Galaxy S3, the former of which was later delivered to the 2011 Samsung Galaxy Note through a software update. Later that year, a split-screen mode was implemented on the Galaxy Note 2 and later retrofitted to the Galaxy S3 through the "premium suite upgrade". The earliest implementation of desktop- and laptop-like windowing was on the 2013 Samsung Galaxy Note 3. Foldable smartphones Smartphones utilizing flexible displays had been theorized as possible once manufacturing costs and production processes became feasible. In November 2018, the startup company Royole unveiled the first commercially available foldable smartphone, the Royole FlexPai. Also that month, Samsung presented a prototype phone featuring an "Infinity Flex Display" at its developers conference, with a smaller, outer display on its "cover" and a larger, tablet-sized display when opened. Samsung stated that it also had to develop a new polymer material to coat the display, as opposed to glass. Samsung officially announced the Galaxy Fold, based on the previously demonstrated prototype, in February 2019 for an originally scheduled release in late April. Due to various durability issues with the display and hinge systems encountered by early reviewers, the release of the Galaxy Fold was delayed to September to allow for design changes. In November 2019, Motorola unveiled a variation of the concept with its re-imagining of the Razr, using a horizontally folding display to create a clamshell form factor inspired by its previous feature phone range of the same name. Samsung would unveil a similar device, known as the Galaxy Z Flip, the following February. Other developments in the 2010s The first smartphone with a fingerprint reader was the Motorola Atrix 4G in 2011. In September 2013, the iPhone 5S was unveiled as the first smartphone on a major U.S. carrier since the Atrix to feature this technology, and once again the iPhone popularized the concept. One of the barriers to fingerprint reading amongst consumers was security concerns; however, Apple was able to address these concerns by encrypting the fingerprint data on the A7 processor inside the phone and by ensuring that this information could not be accessed by third-party applications and is not stored in iCloud or on Apple servers. In 2012, Samsung introduced the Galaxy S3 (GT-i9300) with retrofittable wireless charging, pop-up video playback, a quad-core processor, and a 4G LTE variant (GT-i9305). In 2013, Fairphone launched its first "socially ethical" smartphone at the London Design Festival to address concerns regarding the sourcing of materials used in manufacturing, followed by Shiftphone in 2015. In late 2013, QSAlpha commenced production of a smartphone designed entirely around security, encryption and identity protection. In October 2013, Motorola Mobility announced Project Ara, a concept for a modular smartphone platform that would allow users to customize and upgrade their phones with add-on modules that attached magnetically to a frame. Ara was retained by Google following its sale of Motorola Mobility to Lenovo, but was shelved in 2016.
That year, LG and Motorola both unveiled smartphones featuring a limited form of modularity for accessories; the LG G5 allowed accessories to be installed via the removal of its battery compartment, while the Moto Z utilizes accessories attached magnetically to the rear of the device. Microsoft, expanding upon the concept of Motorola's short-lived "Webtop", unveiled functionality for its Windows 10 operating system for phones that allows supported devices to be docked for use with a PC-styled desktop environment. Samsung and LG used to be the "last standing" manufacturers to offer flagship devices with user-replaceable batteries, but in 2015 Samsung succumbed to the minimalism trend set by Apple, introducing the Galaxy S6 without a user-replaceable battery. In addition, Samsung was criticised for pruning long-standing features such as MHL, MicroUSB 3.0, water resistance and MicroSD card support, of which the latter two came back in 2016 with the Galaxy S7 and S7 Edge. The global median for smartphone ownership was 43%. Statista forecast that 2.87 billion people would own smartphones in 2020. Within the same decade, the rapid deployment of LTE cellular networks and the general availability of smartphones increased the popularity of streaming television services and the corresponding mobile TV apps. Major technologies that began to trend in 2016 included a focus on virtual reality and augmented reality experiences catered towards smartphones, the newly introduced USB-C connector, and improving LTE technologies. In 2016, adjustable screen resolution, known from desktop operating systems, was introduced to smartphones for power saving, whereas variable screen refresh rates were popularized in 2020. In 2018, the first smartphones featuring fingerprint readers embedded within OLED displays were announced, followed in 2019 by an implementation using an ultrasonic sensor on the Samsung Galaxy S10. In 2019, the majority of smartphones released had more than one camera, were waterproof with IP67 and IP68 ratings, and unlocked using facial recognition or fingerprint scanners. Designs first implemented by Apple have been replicated by other vendors several times. These include a sealed body that does not allow replacing the battery, the lack of a physical audio connector (since the iPhone 7 from 2016), a screen with a cut-out area at the top for the earphone, front-facing camera and sensors (colloquially known as a "notch"; since the iPhone X from 2017), the exclusion of a charging wall adapter from the scope of delivery (since the iPhone 12 from 2020), and a camera user interface with a circular and usually solid-colour shutter button and a camera mode selector using perpendicular text and separate camera modes for photo and video (since iOS 7 from 2013). Other developments in the 2020s By 2020, the first smartphones featuring high-speed 5G network capability had been announced. Since 2020, smartphones have decreasingly been shipped with rudimentary accessories such as a power adapter and headphones, which had historically been almost invariably within the scope of delivery. This trend was initiated with Apple's iPhone 12, followed by Samsung and Xiaomi on the Galaxy S21 and Mi 11 respectively, months after having mocked the practice in advertisements. The reason cited is reducing the environmental footprint, though reaching the higher charging rates supported by newer models demands a new charger, shipped in separate packaging with its own environmental footprint.
With the development of the PinePhone and Librem 5 in the 2020s, there have been intensified efforts to make open source GNU/Linux for smartphones a major alternative to iOS and Android. Moreover, associated software has enabled convergence (beyond convergent and hybrid apps) by allowing such smartphones to be used like a desktop computer when connected to a keyboard, mouse and monitor. In the early 2020s, manufacturers began to integrate satellite connectivity into smartphone devices for use in remote areas where local terrestrial communication infrastructures, such as landline and cellular networks, are not available. Due to antenna limitations in conventional phones, satellite connectivity was limited in the early stages of implementation to satellite messaging and satellite emergency services. Hardware A typical smartphone contains a number of metal–oxide–semiconductor (MOS) integrated circuit (IC) chips, which in turn contain billions of tiny MOS field-effect transistors (MOSFETs). A typical smartphone contains the following MOS IC chips:
Application processor (CMOS system-on-a-chip)
Flash memory (floating-gate MOS memory)
Cellular modem (baseband RF CMOS)
RF transceiver (RF CMOS)
Phone camera image sensor (CMOS image sensor)
Power management integrated circuit (power MOSFETs)
Display driver (LCD or LED driver)
Wireless communication chips (Wi-Fi, Bluetooth, GPS receiver)
Sound chip (audio codec and power amplifier)
Gyroscope
Capacitive touchscreen controller (ASIC and DSP)
RF power amplifier (LDMOS)
Some smartphones are also equipped with an FM radio receiver, a hardware notification LED, and an infrared transmitter for use as a remote control. A few models have additional sensors such as a thermometer for measuring ambient temperature, a hygrometer for humidity, and a sensor for ultraviolet ray measurement. A few smartphones designed around specific purposes are equipped with uncommon hardware such as a projector (Samsung Beam i8520 and Samsung Galaxy Beam i8530), optical zoom lenses (Samsung Galaxy S4 Zoom and Samsung Galaxy K Zoom), a thermal camera, and even a PMR446 (walkie-talkie radio) transceiver. Central processing unit Smartphones have central processing units (CPUs) similar to those in computers, but optimised to operate in low-power environments. In smartphones, the CPU is typically integrated in a CMOS (complementary metal–oxide–semiconductor) system-on-a-chip (SoC) application processor. The performance of mobile CPUs depends not only on the clock rate (generally given in multiples of hertz) but also on the memory hierarchy. Because of these challenges, the performance of mobile phone CPUs is often more appropriately given by scores derived from various standardized tests measuring the real effective performance in commonly used applications. Buttons Smartphones are typically equipped with a power button and volume buttons; the two volume buttons are sometimes unified as a single rocker. Some devices are equipped with a dedicated camera shutter button. Units for outdoor use may be equipped with an "SOS" emergency call button and a "PTT" (push-to-talk) button. The presence of physical front-side buttons such as the home and navigation buttons has decreased throughout the 2010s, with these increasingly being replaced by capacitive touch sensors and simulated (on-screen) buttons. As with classic mobile phones, early smartphones such as the Samsung Omnia II were equipped with buttons for accepting and declining phone calls.
Due to the advancements of functionality besides phone calls, these have increasingly been replaced by navigation buttons such as "menu" (also known as "options"), "back", and "tasks". Some early 2010s smartphones such as the HTC Desire were additionally equipped with a "Search" button (🔍) for quick access to a web search engine or apps' internal search feature. Since 2013, smartphones' home buttons started integrating fingerprint scanners, starting with the iPhone 5s and Samsung Galaxy S5. Functions may be assigned to button combinations. For example, screenshots can usually be taken using the home and power buttons, with a short press on iOS and one-second holding Android OS, the two most popular mobile operating systems. On smartphones with no physical home button, usually the volume-down button is instead pressed with the power button. Some smartphones have a screenshot and possibly screencast shortcuts in the navigation button bar or the power button menu. Display One of the main characteristics of smartphones is the screen. Depending on the device's design, the screen fills most or nearly all of the space on a device's front surface. Many smartphone displays have an aspect ratio of 16:9, but taller aspect ratios became more common in 2017, as well as the aim to eliminate bezels by extending the display surface to as close to the edges as possible. Screen sizes Screen sizes are measured in diagonal inches. Phones with screens larger than 5.2 inches are often called "phablets". Smartphones with screens over 4.5 inches in size are commonly difficult to use with only a single hand, since most thumbs cannot reach the entire screen surface; they may need to be shifted around in the hand, held in one hand and manipulated by the other, or used in place with both hands. Due to design advances, some modern smartphones with large screen sizes and "edge-to-edge" designs have compact builds that improve their ergonomics, while the shift to taller aspect ratios have resulted in phones that have larger screen sizes whilst maintaining the ergonomics associated with smaller 16:9 displays. Panel types Liquid-crystal displays (LCDs) and organic light-emitting diode (OLED) displays are the most common. Some displays are integrated with pressure-sensitive digitizers, such as those developed by Wacom and Samsung, and Apple's Force Touch system. A few phones, such as the YotaPhone prototype, are equipped with a low-power electronic paper rear display, as used in e-book readers. Alternative input methods Some devices are equipped with additional input methods such as a stylus for higher precision input and hovering detection or a self-capacitive touch screens layer for floating finger detection. The latter has been implemented on few phones such as the Samsung Galaxy S4, Note 3, S5, Alpha, and Sony Xperia Sola, making the Galaxy Note 3 the only smartphone with both so far. Hovering can enable preview tooltips such as on the video player's seek bar, in text messages, and quick contacts on the dial pad, as well as lock screen animations, and the simulation of a hovering mouse cursor on web sites. Some styluses support hovering as well and are equipped with a button for quick access to relevant tools such as digital post-it notes and highlighting of text and elements when dragging while pressed, resembling drag selection using a computer mouse. Some series such as the Samsung Galaxy Note series and LG G Stylus series have an integrated tray to store the stylus in. 
A few devices, such as the iPhone 6s through iPhone XS and the Huawei Mate S, are equipped with a pressure-sensitive touch screen, where the pressure may be used to simulate a gas pedal in video games, to access preview windows and shortcut menus, to control the typing cursor, and as a weight scale, the last of which was rejected by Apple from the App Store. Some early 2010s HTC smartphones, such as the HTC Desire (Bravo) and HTC Legend, are equipped with an optical track pad for scrolling and selection. Notification light Many smartphones, with the exception of Apple iPhones, are equipped with low-power light-emitting diodes beside the screen that are able to notify the user about incoming messages, missed calls, and low battery levels, and that facilitate locating the mobile phone in darkness, all with marginal power consumption. To distinguish between the sources of notifications, the colour combination and blinking pattern can vary; usually three diodes in red, green, and blue (RGB) are able to create a multitude of colour combinations. Sensors Smartphones are equipped with a multitude of sensors to enable system features and third-party applications. Common sensors Accelerometers and gyroscopes enable automatic control of screen rotation; uses by third-party software include bubble level simulation. An ambient light sensor allows for automatic screen brightness and contrast adjustment, and an RGB sensor enables the adaptation of screen colour. Many mobile phones are also equipped with a barometer sensor to measure air pressure, such as Samsung's since 2012 with the Galaxy S3 and Apple's since 2014 with the iPhone 6; it allows estimating and detecting changes in altitude. A magnetometer can act as a digital compass by measuring Earth's magnetic field. Rare sensors Samsung has equipped its flagship smartphones since the 2014 Galaxy S5 and Galaxy Note 4 with a heart rate sensor to assist in fitness-related uses and to act as a shutter key for the front-facing camera. So far, only the 2013 Samsung Galaxy S4 and Note 3 are equipped with an ambient temperature sensor and a humidity sensor, and only the Note 4 with an ultraviolet radiation sensor, which could warn the user about excessive exposure. A rear infrared laser beam for distance measurement can enable time-of-flight camera functionality with accelerated autofocus, as implemented on select LG mobile phones starting with the LG G3 and LG V10. Due to their currently rare occurrence among smartphones, not much software to utilize these sensors has been developed yet. Storage While eMMC (embedded MultiMediaCard) flash storage was most commonly used in mobile phones, its successor, UFS (Universal Flash Storage), with higher transfer rates, emerged throughout the 2010s for upper-class devices. Capacity While the internal storage capacity of mobile phones was near-stagnant during the first half of the 2010s, it increased more steeply during the second half, with Samsung, for example, increasing the available internal storage options of its flagship-class units from 32 GB to 512 GB within only two years, between 2016 and 2018. Memory cards The space for data storage of some mobile phones can be expanded using MicroSD memory cards, whose capacity multiplied throughout the 2010s.
Benefits over USB on the go storage and cloud storage include offline availability and privacy, not reserving and protruding from the charging port, no connection instability or latency, no dependence on voluminous data plans, and preservation of the limited rewriting cycles of the device's permanent internal storage. Large amounts of data can be moved immediately between devices by changing memory cards, large-scale data backups can be created offline, and data can be read externally should the smartphone be inoperable. In case of technical defects which make the device unusable or unbootable as a result of liquid damage, fall damage, screen damage, bending damage, malware, or bogus system updates, etc., data stored on the memory card is likely rescueable externally, while data on the inaccessible internal storage would be lost. A memory card can usually immediately be re-used in a different memory-card-enabled device with no necessity for prior file transfers. Some dual-SIM mobile phones are equipped with a hybrid slot, where one of the two slots can be occupied by either a SIM card or a memory card. Some models, typically of higher end, are equipped with three slots including one dedicated memory card slot, for simultaneous dual-SIM and memory card usage. Physical location The location of both SIM and memory card slots vary among devices, where they might be located accessibly behind the back cover or else behind the battery, the latter of which denies hot swapping. Mobile phones with non-removable rear cover typically house SIM and memory cards in a small tray on the handset's frame, ejected by inserting a needle tool into a pinhole. Some earlier mid-range phones such as the 2011 Samsung Galaxy Fit and Ace have a sideways memory card slot on the frame covered by a cap that can be opened without tool. File transfer Originally, mass storage access was commonly enabled to computers through USB. Over time, mass storage access was removed, leaving the Media Transfer Protocol as protocol for USB file transfer, due to its non-exclusive access ability where the computer is able to access the storage without it being locked away from the mobile phone's software for the duration of the connection, and no necessity for common file system support, as communication is done through an abstraction layer. However, unlike mass storage, Media Transfer Protocol lacks parallelism, meaning that only a single transfer can run at a time, for which other transfer requests need to wait to finish. This, for example, denies browsing photos and playing back videos from the device during an active file transfer. Some programs and devices lack support for MTP. In addition, the direct access and random access of files through MTP is not supported. Any file is wholly downloaded from the device before opened. Sound Some audio quality enhancing features, such as Voice over LTE and HD Voice have appeared and are often available on newer smartphones. Sound quality can remain a problem due to the design of the phone, the quality of the cellular network and compression algorithms used in long-distance calls. Audio quality can be improved using a VoIP application over Wi-Fi. Cellphones have small speakers so that the user can use a speakerphone feature and talk to a person on the phone without holding it to their ear. The small speakers can also be used to listen to digital audio files of music or speech or watch videos with an audio component, without holding the phone close to the ear. 
However, integrated speakers may be small and of restricted sound quality to conserve space. Some mobile phones, such as the HTC One M8 and the Sony Xperia Z2, are equipped with stereophonic speakers to create spatial sound when in horizontal orientation. Audio connector The 3.5 mm headphone receptacle (colloquially, the "headphone jack") allows the immediate operation of passive headphones, as well as connection to other external auxiliary audio appliances. Among devices equipped with the connector, it is more commonly located at the bottom (charging port side) than on the top of the device. The decline of the connector's availability among newly released mobile phones from all major vendors commenced in 2016 with its omission from the Apple iPhone 7. An adapter that occupies the charging port can retrofit the connector. Battery-powered wireless Bluetooth headphones are an alternative; however, those tend to be costlier due to their need for internal hardware such as a Bluetooth transceiver and a battery with a charging controller, and Bluetooth pairing is required before each use. Battery Smartphones typically feature lithium-ion or lithium-polymer batteries due to their high energy densities. Batteries chemically wear down as a result of repeated charging and discharging throughout ordinary usage, losing both energy capacity and output power, which results in a loss of processing speed followed by system outages. Battery capacity may be reduced to 80% after a few hundred recharges, and the drop in performance accelerates with time. Some mobile phones are designed with batteries that can be exchanged upon expiration by the end user, usually by opening the back cover. While such a design had initially been used in most mobile phones, including touch-screen models other than Apple iPhones, it has largely been displaced throughout the 2010s by permanently built-in, non-replaceable batteries, a design practice criticized as planned obsolescence. Charging Due to the limited electrical current that the copper wires of existing USB cables can handle, charging protocols which make use of elevated voltages, such as Qualcomm Quick Charge and MediaTek Pump Express, have been developed to increase the power throughput for faster charging, to maximize usage time without restricted ergonomics, and to minimize the time a device needs to be attached to a power source. The smartphone's integrated charge controller (IC) requests the elevated voltage from a supported charger. "VOOC" by Oppo, also marketed as "Dash Charge", took the opposite approach and increased the current instead, cutting out some of the heat produced by internally regulating the arriving voltage down to the battery's charging terminal voltage in the end device; however, it is incompatible with existing USB cables, as it requires the thicker copper wires of high-current USB cables. Later, USB Power Delivery (USB-PD) was developed with the aim of standardizing the negotiation of charging parameters across devices at up to 100 watts, but it is only supported on cables with USB-C on both ends, due to the connector's dedicated PD channels. While charging rates have been increasing, with 15 watts in 2014, 20 watts in 2016, and 45 watts in 2018, the power throughput may be throttled down significantly during operation of the device. Wireless charging has been widely adopted, allowing for intermittent recharging without wearing down the charging port through frequent reconnection, with Qi being the most common standard, followed by Powermat.
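As a back-of-the-envelope illustration of the wired fast-charging arithmetic described above: delivered power is the product of voltage and current, so raising the negotiated voltage increases throughput over cables whose current is limited. The profiles below are rounded, nominal example values, not the exact specification of any particular charger:

```python
# Power (watts) = voltage (volts) x current (amps); cables cap the current,
# so fast-charging protocols negotiate a higher voltage to push more watts.
def charge_power(volts: float, amps: float) -> float:
    return volts * amps

print(charge_power(5, 0.5))   #  2.5 W  - basic USB 2.0 port
print(charge_power(5, 3))     # 15.0 W  - common fast-charging baseline
print(charge_power(9, 2))     # 18.0 W  - typical elevated-voltage profile
print(charge_power(20, 5))    # 100.0 W - upper bound of USB Power Delivery
```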
Due to the lower efficiency of wireless power transmission, charging rates are below that of wired charging, and more heat is produced at similar charging rates. By the end of 2017, smartphone battery life has become generally adequate; however, earlier smartphone battery life was poor due to the weak batteries that could not handle the significant power requirements of the smartphones' computer systems and color screens. Smartphone users purchase additional chargers for use outside the home, at work, and in cars and by buying portable external "battery packs". External battery packs include generic models which are connected to the smartphone with a cable, and custom-made models that "piggyback" onto a smartphone's case. In 2016, Samsung had to recall millions of the Galaxy Note 7 smartphones due to an explosive battery issue. For consumer convenience, wireless charging stations have been introduced in some hotels, bars, and other public spaces. Power management A technique to minimize power consumption is the panel self-refresh, whereby the image to be shown on the display is not sent at all times from the processor to the integrated controller (IC) of the display component, but only if the information on screen is changed. The display's integrated controller instead memorizes the last screen contents and refreshes the screen by itself. This technology was introduced around 2014 and has reduced power consumption by a few hundred milliwatts. Cameras Cameras have become standard features of smartphones. phone cameras are now a highly competitive area of differentiation between models, with advertising campaigns commonly based on a focus on the quality or capabilities of a device's main cameras. Images are usually saved in the JPEG file format; some high-end phones since the mid-2010s also have RAW imaging capability. Space constraints Typically smartphones have at least one main rear-facing camera and a lower-resolution front-facing camera for "selfies" and video chat. Owing to the limited depth available in smartphones for image sensors and optics, rear-facing cameras are often housed in a "bump" that is thicker than the rest of the phone. Since increasingly thin mobile phones have more abundant horizontal space than the depth that is necessary and used in dedicated cameras for better lenses, there is additionally a trend for phone manufacturers to include multiple cameras, with each optimized for a different purpose (telephoto, wide angle, etc.). Viewed from back, rear cameras are commonly located at the top center or top left corner. A cornered location benefits by not requiring other hardware to be packed around the camera module while increasing ergonomy, as the lens is less likely to be covered when held horizontally. Modern advanced smartphones have cameras with optical image stabilisation (OIS), larger sensors, bright lenses, and even optical zoom plus RAW images. HDR, "Bokeh mode" with multi lenses and multi-shot night modes are now also familiar. Many new smartphone camera features are being enabled via computational photography image processing and multiple specialized lenses rather than larger sensors and lenses, due to the constrained space available inside phones that are being made as slim as possible. Dedicated camera button Some mobile phones such as the Samsung i8000 Omnia 2, some Nokia Lumias and some Sony Xperias are equipped with a physical camera shutter button. Those with two pressure levels resemble the point-and-shoot intuition of dedicated compact cameras. 
The camera button may be used as a shortcut to quickly and ergonomically launch the camera software, as it is located more accessibly inside a pocket than the power button. Back cover materials Back covers of smartphones are typically made of polycarbonate, aluminium, or glass. Polycarbonate back covers may be glossy or matte, and possibly textured, such as the dotted texture on the Galaxy S5 or the faux-leather finish on the Galaxy Note 3 and Note 4. While polycarbonate back covers may be perceived as less "premium" among fashion- and trend-oriented users, their utilitarian strengths and technical benefits include durability and shock absorption, greater elasticity than metal (resisting permanent bending), inability to shatter like glass (which facilitates designing the cover to be removable), better manufacturing cost efficiency, and, unlike metal, no blockage of radio signals or wireless power. Accessories A wide range of accessories are sold for smartphones, including cases, memory cards, screen protectors, chargers, wireless power stations, USB On-The-Go adapters (for connecting USB drives and/or, in some cases, an HDMI cable to an external monitor), MHL adapters, add-on batteries, power banks, headphones, combined headphone-microphones (which, for example, allow a person to privately conduct calls on the device without holding it to the ear), and Bluetooth-enabled powered speakers that enable users to listen to media from their smartphones wirelessly. Cases range from relatively inexpensive rubber or soft plastic cases, which provide moderate protection from bumps and good protection from scratches, to more expensive, heavy-duty cases that combine rubber padding with a hard outer shell. Some cases have a "book"-like form, with a cover that the user opens to use the device; when the cover is closed, it protects the screen. Some "book"-like cases have additional pockets for credit cards, thus enabling people to use them as wallets. Accessories include products sold by the manufacturer of the smartphone and compatible products made by other manufacturers. However, some companies, like Apple, stopped including chargers with smartphones in order to "reduce carbon footprint", causing many customers to pay extra for charging adapters. Software Mobile operating systems A mobile operating system (or mobile OS) is an operating system for phones, tablets, smartwatches, or other mobile devices. Globally, Android and iOS are the two most used mobile operating systems based on usage share, with the former having been the best-selling OS globally on all devices since 2013. Mobile operating systems combine features of a personal computer operating system with other features useful for mobile or handheld use, usually including most of the following, which are considered essential in modern mobile systems: a touchscreen, cellular and Bluetooth connectivity, Wi-Fi Protected Access, Wi-Fi, Global Positioning System (GPS) navigation, video and single-frame picture cameras, speech recognition, a voice recorder, a music player, near-field communication, and an infrared blaster. In Q1 2018, over 383 million smartphones were sold, with 85.9 percent running Android, 14.1 percent running iOS, and a negligible number of smartphones running other OSes. Android alone is more popular than the popular desktop operating system Windows, and in general, smartphone use (even without tablets) exceeds desktop use. Other well-known mobile operating systems are Flyme OS and Harmony OS.
Mobile devices with mobile communications abilities (e.g., smartphones) contain two mobile operating systems: the main user-facing software platform is supplemented by a second low-level proprietary real-time operating system which operates the radio and other hardware. Research has shown that these low-level systems may contain a range of security vulnerabilities permitting malicious base stations to gain high levels of control over the mobile device. Mobile apps A mobile app is a computer program designed to run on a mobile device, such as a smartphone. The term "app" is a short form of the term "software application". Application stores The introduction of Apple's App Store for the iPhone and iPod Touch in July 2008 popularized manufacturer-hosted online distribution for third-party applications (software and computer programs) focused on a single platform. There is a huge variety of apps, including video games, music products and business tools. Up until that point, smartphone application distribution depended on third-party sources providing applications for multiple platforms, such as GetJar, Handango, Handmark, and PocketGear. Following the success of the App Store, other smartphone manufacturers launched application stores, such as Google's Android Market (later renamed the Google Play Store) and RIM's BlackBerry App World, as well as Android-related app stores like Aptoide, Cafe Bazaar, F-Droid, GetJar, and the Opera Mobile Store. In February 2014, 93% of mobile developers were targeting smartphones first for mobile app development. List of current smartphone brands
Asus
Gionee
Google Pixel
Hisense
Honor
HTC
Huawei
Infinix
iPhone
iQOO
Itel
Lava
Lenovo
LG
Meizu
Motorola
Nokia
Nothing
Nubia
OnePlus
Oppo
POCO
Realme
Redmi
Samsung Galaxy
Sharp
Sony Xperia
TCL
Tecno
Umidigi
Vivo
Xiaomi
ZTE
Sales Since 1996, smartphone shipments have had positive growth. In November 2011, 27% of all photographs created were taken with camera-equipped smartphones. In September 2012, a study concluded that 4 out of 5 smartphone owners use the device to shop online. Global smartphone sales surpassed the sales figures for feature phones in early 2013. Worldwide shipments of smartphones topped 1 billion units in 2013, up 38% from 2012's 725 million, while comprising a 55% share of the mobile phone market in 2013, up from 42% in 2012. Smartphone sales growth later stalled: in Q1 2016, shipments dropped by 3 percent year on year for the first time. The situation was caused by the maturing China market. A report by NPD showed that fewer than 10% of US citizens have spent $1,000 or more on a smartphone, as such devices are too expensive for most people without introducing particularly innovative features, and amid Huawei, Oppo and Xiaomi introducing products with similar feature sets at lower prices. In 2019, smartphone sales declined by 3.2%, the largest decline in smartphone history, while China and India were credited with driving most smartphone sales worldwide. It is predicted that widespread adoption of 5G will help drive new smartphone sales. By manufacturer In 2011, Samsung had the highest shipment market share worldwide, followed by Apple. In 2013, Samsung had 31.3% market share, a slight increase from 30.3% in 2012, while Apple was at 15.3%, a decrease from 18.7% in 2012. Huawei, LG and Lenovo were at about 5% each, significantly better than their 2012 figures, while others had about 40%, the same as the previous year's figure.
Only Apple lost market share, although their shipment volume still increased by 12.9%; the rest had significant increases in shipment volumes of 36 to 92%. In Q1 2014, Samsung had a 31% share and Apple had 16%. In Q4 2014, Apple had a 20.4% share and Samsung had 19.9%. In Q2 2016, Samsung had a 22.3% share and Apple had 12.9%. In Q1 2017, IDC reported that Samsung was first placed, with 80 million units, followed by Apple with 50.8 million, Huawei with 34.6 million, Oppo with 25.5 million and Vivo with 22.7 million. Samsung's mobile business is half the size of Apple's, by revenue. Apple business increased very rapidly in the years 2013 to 2017. Realme, a brand owned by Oppo, is the fastest-growing phone brand worldwide since Q2 2019. In China, Huawei and Honor, a brand owned by Huawei, have 46% of market share combined and posted 66% annual growth , amid growing Chinese nationalism. In 2019, Samsung had a 74% market share of 5G smartphones in South Korea. In the first quarter of 2024, global smartphone shipments rose by 7.8% to 289.4 million units. Samsung, with a 20.8% market share, overtook Apple to become the leading smartphone manufacturer. Apple's smartphone shipments dropped 10%. Xiaomi secured the third spot with a 14.1% market share. By operating system Use Contemporary use and convergence The rise in popularity of touchscreen smartphones and mobile apps distributed via app stores along with rapidly advancing network, mobile processor, and storage technologies led to a convergence where separate mobile phones, organizers, and portable media players were replaced by a smartphone as the single device most people carried. Advances in digital camera sensors and on-device image processing software more gradually led to smartphones replacing simpler cameras for photographs and video recording. The built-in GPS capabilities and mapping apps on smartphones largely replaced stand-alone satellite navigation devices, and paper maps became less common. Mobile gaming on smartphones greatly grew in popularity, allowing many people to use them in place of handheld game consoles, and some companies tried creating game console/phone hybrids based on phone hardware and software. People frequently have chosen not to get fixed-line telephone service in favor of smartphones. Music streaming apps and services have grown rapidly in popularity, serving the same use as listening to music stations on a terrestrial or satellite radio. Streaming video services are easily accessed via smartphone apps and can be used in place of watching television. People have often stopped wearing wristwatches in favor of checking the time on their smartphones, and many use the clock features on their phones in place of alarm clocks. Mobile phones can also be used as a digital note taking, text editing and memorandum device whose computerization facilitates searching of entries. Additionally, in many lesser technologically developed regions smartphones are people's first and only means of Internet access due to their portability, with personal computers being relatively uncommon outside of business use. The cameras on smartphones can be used to photograph documents and send them via email or messaging in place of using fax (facsimile) machines. Payment apps and services on smartphones allow people to make less use of wallets, purses, credit and debit cards, and cash. Mobile banking apps can allow people to deposit checks simply by photographing them, eliminating the need to take the physical check to an ATM or teller. 
Guide book apps can take the place of paper travel and restaurant/business guides, museum brochures, and dedicated audio guide equipment. Mobile banking and payment In many countries, mobile phones are used to provide mobile banking services, which may include the ability to transfer cash payments by secure SMS text message. Kenya's M-PESA mobile banking service, for example, allows customers of the mobile phone operator Safaricom to hold cash balances which are recorded on their SIM cards. Cash can be deposited or withdrawn from M-PESA accounts at Safaricom retail outlets located throughout the country and can be transferred electronically from person to person and used to pay bills to companies. Branchless banking has been successful in South Africa and the Philippines. A pilot project in Bali was launched in 2011 by the International Finance Corporation and an Indonesian bank, Bank Mandiri. Another application of mobile banking technology is Zidisha, a US-based nonprofit micro-lending platform that allows residents of developing countries to raise small business loans from Web users worldwide. Zidisha uses mobile banking for loan disbursements and repayments, transferring funds from lenders in the United States to borrowers in rural Africa who have mobile phones and can use the Internet. Mobile payments were first trialled in Finland in 1998 when two Coca-Cola vending machines in Espoo were enabled to work with SMS payments. Eventually, the idea spread and in 1999, the Philippines launched the country's first commercial mobile payments systems with mobile operators Globe and Smart. Some mobile phones can make mobile payments via direct mobile billing schemes, or through contactless payments if the phone and the point of sale support near-field communication (NFC). Enabling contactless payments through NFC-equipped mobile phones requires the co-operation of manufacturers, network operators, and retail merchants. Facsimile Some apps allows for sending and receiving facsimile (fax), over a smartphone, including facsimile data (composed of raster bi-level graphics) generated directly and digitally from document and image file formats. Films Films are increasingly made using smartphones and tablets, leading to the rise of dedicated film festivals for such films, including the SmartFone Flick Fest in Sydney, Australia; Dublin Smartphone Film Festival; the International Mobil Film Festival based in San Diego; the Spanish festival Cinephone – Festival Internacional de Cine con Smartphone; the African Smartphone International Film Festival; Toronto Smartphone Film Festival; New York Mobile Film Festival; and others. Criticism and issues Social impacts Manufacture Cobalt is needed in order to manufacture smartphones' rechargeable batteries. Workers, including children, suffer injuries, amputations, and death as the result of the hazardous working conditions and mine tunnel collapses in the Democratic Republic of the Congo during artisanal mining of cobalt. In 2019 a lawsuit was filed against Apple and other tech companies for the use of child labor in mining cobalt; in 2024 the court ruled that the companies were not liable. Apple announced it would convert to using recycled cobalt by 2025. Use In 2012, University of Southern California study found that unprotected adolescent sexual activity was more common among owners of smartphones. 
A study conducted by the Rensselaer Polytechnic Institute's (RPI) Lighting Research Center (LRC) concluded that smartphones, or any backlit devices, can seriously affect sleep cycles. Some persons might become psychologically attached to smartphones, resulting in anxiety when separated from the devices. A "smombie" (a combination of "smartphone" and "zombie") is a person who walks while using a smartphone, not paying attention as they walk and possibly risking an accident in the process; it is an increasing social phenomenon. The issue of slow-moving smartphone users led to the temporary creation of a "mobile lane" for walking in Chongqing, China. The issue of distracted smartphone users led the city of Augsburg, Germany, to embed pedestrian traffic lights in the pavement. While driving Mobile phone use while driving, including calling, text messaging, playing media, web browsing, gaming, using mapping apps or operating other phone features, is common but controversial, since it is widely considered dangerous due to what is known as distracted driving. Being distracted while operating a motor vehicle has been shown to increase the risk of accidents. In September 2010, the US National Highway Traffic Safety Administration (NHTSA) reported that 995 people were killed by drivers distracted by phones. In March 2011, a US insurance company, State Farm Insurance, announced the results of a study which showed 19% of drivers surveyed accessed the Internet on a smartphone while driving. Many jurisdictions prohibit the use of mobile phones while driving. In Egypt, Israel, Japan, Portugal and Singapore, both handheld and hands-free calling on a mobile phone (such as via a speakerphone) are banned. In other countries, including the UK and France, and in many US states, calling is only banned on handheld phones, while hands-free calling is permitted. A 2011 study reported that over 90% of college students surveyed text (initiate, reply or read) while driving. The scientific literature on the danger of driving while sending a text message from a mobile phone, or texting while driving, is limited. A simulation study at the University of Utah found a sixfold increase in distraction-related accidents when texting. As smartphones have grown more complex, this has introduced additional difficulties for law enforcement officials attempting to distinguish one usage from another among drivers using their devices. This is more apparent in countries which ban both handheld and hands-free usage rather than those which ban handheld use only, as officials cannot easily tell which function of the phone is being used simply by looking at the driver. This can lead to drivers being stopped for using their device illegally for a call when, in fact, they were using the device legally, for example when using the phone's incorporated controls for the car stereo, GPS or satnav. A 2010 study reviewed the incidence of phone use while cycling and its effects on behavior and safety. In 2013, a national survey in the US reported that the number of drivers who reported using their phones to access the Internet while driving had risen to nearly one in four. A study conducted by the University of Vienna examined approaches for reducing inappropriate and problematic use of mobile phones, such as using phones while driving. Accidents involving a driver being distracted by being in a call on a phone have begun to be prosecuted as negligence, similar to speeding.
In the United Kingdom, from 27 February 2007, motorists who are caught using a handheld phone while driving have three penalty points added to their license in addition to the fine of £60. This increase was introduced to try to stem the increase in drivers ignoring the law. Japan prohibits all use of phones while driving, including use of hands-free devices. New Zealand has banned handheld phone use since 1 November 2009. Many states in the United States have banned text messaging on phones while driving. Illinois became the 17th American state to enforce this law. In all, 30 states had banned texting while driving, with Kentucky becoming the most recent addition on July 15. Public Health Law Research maintains a list of distracted driving laws in the United States. This database of laws provides a comprehensive view of the provisions of laws that restrict the use of mobile devices while driving for all 50 states and the District of Columbia between 1992, when the first law was passed, and December 1, 2010. The dataset contains information on 22 dichotomous, continuous or categorical variables including, for example, activities regulated (e.g., texting versus talking, hands-free versus handheld calls, web browsing, gaming), targeted populations, and exemptions. Legal A "patent war" between Samsung and Apple started when the latter claimed that the original Galaxy S Android phone copied the interface, and possibly the hardware, of Apple's iOS for the iPhone 3GS. There has also been smartphone patent licensing and litigation involving Sony Mobile, Google, Apple Inc., Samsung, Microsoft, Nokia, Motorola, HTC, Huawei and ZTE, among others. The conflict is part of the wider "patent wars" between multinational technology and software corporations. To secure and increase market share, companies granted a patent can sue to prevent competitors from using the methods the patent covers. Since the 2010s the number of lawsuits, counter-suits, and trade complaints based on patents and designs in the market for smartphones, and devices based on smartphone OSes such as Android and iOS, has increased significantly. Initial suits, countersuits, rulings, license agreements, and other major events began in 2009, as the smartphone market started to grow more rapidly, and continued through 2012. Medical With the rise in the number of mobile medical apps in the marketplace, government regulatory agencies raised concerns about the safety of the use of such applications. These concerns were transformed into regulation initiatives worldwide with the aim of safeguarding users from untrusted medical advice. According to findings by medical experts in recent years, excessive smartphone use in society may lead to headaches, sleep disorders and insufficient sleep, while severe smartphone addiction may lead to physical health problems, such as a hunched posture, muscle relaxation and uneven nutrition. Impacts on cognition and mental health There is a debate about the beneficial and detrimental impacts of smartphones, or smartphone use, on cognition and mental health. Security Smartphone malware is easily distributed through an insecure app store. Often, malware is hidden in pirated versions of legitimate apps, which are then distributed through third-party app stores. Malware risk also comes from what is known as an "update attack", where a legitimate application is later changed to include a malware component, which users then install when they are notified that the app has been updated.
In addition, one out of three robberies in the United States in 2012 involved the theft of a mobile phone. An online petition has urged smartphone makers to install kill switches in their devices. As of 2014, Apple's "Find My iPhone" and Google's "Android Device Manager" could locate, disable, and wipe the data from phones that had been lost or stolen. With BlackBerry Protect in OS version 10.3.2, devices can be rendered unrecoverable to even BlackBerry's own Operating System recovery tools if incorrectly authenticated or dissociated from their account. Leaked documents from 2013 to 2016, codenamed Vault 7, detail the capabilities of the United States Central Intelligence Agency (CIA) to perform electronic surveillance and cyber warfare, including the ability to compromise the operating systems of most smartphones (including iOS and Android). In 2021, journalists and researchers reported the discovery of spyware, called Pegasus, developed and distributed by a private company, which can be, and has been, used to infect iOS and Android smartphones, often partly via use of 0-day exploits, without the need for any user interaction or significant clues to the user, and then be used to exfiltrate data, track user locations, capture video through the camera, and activate the microphone at any time. Analysis of data traffic from popular smartphones running variants of Android found substantial by-default data collection and sharing, with no opt-out, by pre-installed software. Guidelines for mobile device security were issued by NIST and many other organizations. For conducting a private, in-person meeting, at least one site recommends that the user switch the smartphone off and disconnect the battery. Sleep Using smartphones late at night can disturb sleep, due to the blue light and brightly lit screen, which affects melatonin levels and sleep cycles. In an effort to alleviate these issues, "Night Mode" functionality, which changes the color temperature of the screen to a warmer hue based on the time of day in order to reduce the amount of blue light generated, became available through several apps for Android and through the f.lux software for jailbroken iPhones. iOS 9.3 integrated a similar, system-level feature known as "Night Shift." Several Android device manufacturers bypassed Google's initial reluctance to make Night Mode a standard feature in Android and included software for it on their hardware under varying names, before Android Oreo added it to the OS for compatible devices. It has also been theorized that for some users, addiction to use of their phones, especially before they go to bed, can result in "ego depletion." Many people also use their phones as alarm clocks, which can also lead to loss of sleep. Replacement of dedicated digital cameras As the 2010s commenced, sales figures of dedicated compact cameras decreased sharply, since mobile phone cameras were increasingly perceived as a sufficient surrogate. Increases in computing power in mobile phones enabled fast image processing and high-resolution filming, with 1080p Full HD being achieved in 2011 and the barrier to 2160p 4K being breached in 2013.
However, due to design and space limitations, smartphones lack several features found even on low-budget compact cameras, including a hot-swappable memory card and battery for nearly uninterrupted operation, physical buttons and knobs for focusing, capturing and zooming, a threaded tripod mount, a capacitor-charged xenon flash that exceeds the brightness of smartphones' LED flashlights, and an ergonomic grip for steadier holding during handheld shooting, which enables longer exposure times. Since dedicated cameras can be more spacious, they can house larger image sensors and feature optical zooming. Since the late 2010s, smartphone manufacturers have bypassed the lack of optical zoom to a limited extent by incorporating additional rear cameras with fixed magnification levels. Lifespan In mobile phones released since the second half of the 2010s, operational life span is commonly limited by built-in batteries which are not designed to be interchangeable. The life expectancy of a battery depends on how intensively the powered device is used: longer usage and tasks demanding more energy wear the battery out sooner. Lithium-ion and lithium-polymer batteries, which commonly power portable electronics, additionally wear down faster with fuller charge and deeper discharge cycles, and when left unused for an extended amount of time while depleted, since self-discharge may then push them to a harmful depth of discharge. Manufacturers have prevented some smartphones from operating after repairs by associating components' unique serial numbers with the device, so that it refuses to operate, or disables some functionality, when a mismatch such as would occur after a replacement is detected. Locking of the serial number was first documented in 2015 on the iPhone 6, which would become inoperable from a detected replacement of the "home" button. Later, some functionality was restricted on Apple and Samsung smartphones when a battery replacement not authorized by the vendor was detected. See also Comparison of smartphones Lists of mobile computers List of mobile app distribution platforms Media Transfer Protocol Mobile Internet device Portable media player Second screen Smartphone kill switch Smartphone zombie Notes References External links Cloud clients Consumer electronics Information appliances Personal digital assistants
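The relationship sketched above, with deeper discharge cycles wearing a lithium cell faster, can be illustrated with a toy model. The Python sketch below assumes a hypothetical cell rated for 500 full cycles and a made-up stress exponent; it is not a measured model of any real battery, only an illustration of why shallower cycles tend to yield both more cycles and more total energy delivered over a battery's life.

def estimated_cycle_life(depth_of_discharge, rated_full_cycles=500.0, stress_exponent=1.3):
    """Toy estimate of charge/discharge cycles before a cell fades to end of life.

    Assumes wear per cycle grows super-linearly with depth of discharge (DoD),
    so shallower cycling extends both cycle count and total energy throughput.
    All constants are illustrative assumptions, not datasheet values.
    """
    if not 0.0 < depth_of_discharge <= 1.0:
        raise ValueError("depth of discharge must be in (0, 1]")
    wear_per_cycle = (depth_of_discharge ** stress_exponent) / rated_full_cycles
    return 1.0 / wear_per_cycle

for dod in (1.0, 0.8, 0.5, 0.2):
    cycles = estimated_cycle_life(dod)
    # Energy delivered over the battery's life, expressed in full-capacity equivalents.
    print(f"DoD {dod:.0%}: ~{cycles:,.0f} cycles, ~{cycles * dod:,.0f} full-charge equivalents")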
Smartphone
[ "Technology" ]
19,425
[ "Information appliances", "Computers" ]
167,120
https://en.wikipedia.org/wiki/Gravel
Gravel is a loose aggregation of rock fragments. Gravel occurs naturally on Earth as a result of sedimentary and erosive geological processes; it is also produced in large quantities commercially as crushed stone. Gravel is classified by particle size range and includes size classes from granule- to boulder-sized fragments. In the Udden-Wentworth scale gravel is categorized into granular gravel (2–4 mm) and pebble gravel (4–64 mm). ISO 14688 grades gravels as fine, medium, and coarse, with ranges of 2–6.3 mm for fine and 20–63 mm for coarse. One cubic metre of gravel typically weighs about , or one cubic yard weighs about . Gravel is an important commercial product, with a number of applications. Almost half of all gravel production is used as aggregate for concrete. Much of the rest is used for road construction, either in the road base or as the road surface (with or without asphalt or other binders). Naturally occurring porous gravel deposits have a high hydraulic conductivity, making them important aquifers. Definition and properties Colloquially, the term gravel is often used to describe a mixture of different size pieces of stone mixed with sand and possibly some clay. The American construction industry distinguishes between gravel (a natural material) and crushed stone (produced artificially by mechanical crushing of rock). The technical definition of gravel varies by region and by area of application. Many geologists define gravel simply as loose rounded rock particles over 2 mm in diameter, without specifying an upper size limit. Gravel is sometimes distinguished from rubble, which is loose rock particles in the same size range but angular in shape. The Udden-Wentworth scale, widely used by geologists in the US, defines granular gravel as particles with a size from 2 to 4 mm and pebble gravel as particles with a size from 4 to 64 mm. This corresponds to all particles with sizes between coarse sand and cobbles. The U.S. Department of Agriculture and the Soil Science Society of America define gravel as particles from in size, while the German scale (Atterberg) defines gravel as particles from in size. The U.S. Army Corps of Engineers defines gravel as particles under in size that are retained by a number 4 mesh, which has a mesh spacing of . ISO 14688 for soil engineering grades gravels as fine, medium, and coarse, with ranges of 2–6.3 mm for fine, 6.3–20 mm for medium, and 20–63 mm for coarse. The bulk density of gravel varies from . Natural gravel has a high hydraulic conductivity, sometimes reaching above 1 cm/s. Origin Most gravel is derived from disintegration of bedrock as it weathers. Quartz is the most common mineral found in gravel, as it is hard, chemically inert, and lacks cleavage planes along which the rock easily splits. Most gravel particles consist of multiple mineral grains, since few rocks have mineral grains coarser than about in size. Exceptions include quartz veins, pegmatites, deep intrusions, and high-grade metamorphic rock. The rock fragments are rapidly rounded as they are transported by rivers, often within a few tens of kilometers of their source outcrops. Gravel is deposited as gravel blankets or bars in stream channels; in alluvial fans; in near-shore marine settings, where the gravel is supplied by streams or erosion along the coast; and in the deltas of swift-flowing streams. The upper Mississippi embayment contains extensive chert gravels thought to have their origin less than from the periphery of the embayment. It has been suggested that wind-formed (aeolian) gravel "megaripples" in Argentina have counterparts on the planet Mars.
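As a worked illustration of the size classes described above, the following Python sketch classifies a particle diameter (in millimetres) under the two schemes mentioned: the Udden-Wentworth granule/pebble split and the ISO 14688 fine/medium/coarse grades. The boundary values follow the ranges given above; it is a sketch of the classification logic, not an engineering tool.

def classify_udden_wentworth(d_mm):
    """Classify a particle diameter (mm) against the Udden-Wentworth gravel classes."""
    if d_mm < 2:
        return "finer than gravel (sand, silt or clay)"
    if d_mm < 4:
        return "granular gravel (granule)"
    if d_mm < 64:
        return "pebble gravel (pebble)"
    return "coarser than gravel (cobble or boulder)"

def classify_iso_14688(d_mm):
    """Classify a particle diameter (mm) against the ISO 14688 gravel grades."""
    if d_mm < 2:
        return "finer than gravel"
    if d_mm < 6.3:
        return "fine gravel"
    if d_mm < 20:
        return "medium gravel"
    if d_mm < 63:
        return "coarse gravel"
    return "coarser than gravel"

for d in (1.0, 3.0, 10.0, 40.0, 100.0):
    print(f"{d:6.1f} mm: {classify_udden_wentworth(d)} / {classify_iso_14688(d)}")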
Production and uses Gravel is a major basic raw material in construction. Sand is not usually distinguished from gravel in official statistics, but crushed stone is treated as a separate category. In 2020, sand and gravel together made up 23% of all industrial mineral production in the U.S., with a total value of about $12.6 billion. Some 960 million tons of construction sand and gravel were produced. This greatly exceeds production of industrial sand and gravel (68 million tons), which is mostly sand rather than gravel. It is estimated that almost half of construction sand and gravel is used as aggregate for concrete. Other important uses include road construction (as road base or in blacktop), construction fill, and myriad minor uses. Gravel is widely and plentifully distributed, mostly as river deposits, river flood plains, and glacial deposits, so that environmental considerations and quality dictate whether alternatives, such as crushed stone, are more economical. Crushed stone is already displacing natural gravel in the eastern United States, and recycled gravel is also becoming increasingly important. Etymology The word gravel comes from the Old French gravele or gravelle. Types Different varieties of gravel are distinguished by their composition, origin, and use cases. Types of gravel include: Bank gravel: naturally deposited gravel intermixed with sand or clay, found in and next to rivers and streams. Also known as "bank run" or "river run". Bench gravel: a bed of gravel located on the side of a valley above the present stream bottom, indicating the former location of the stream bed when it was at a higher level. The term is most commonly used in Alaska and the Yukon Territory. Crushed stone: rock crushed and graded by screens and then mixed to a blend of stones and fines. It is widely used as a surfacing for roads and driveways, sometimes with tar applied over it. Crushed stone may be made from granite, limestone, dolomite, and other rocks. Also known as "crusher run", DGA (dense grade aggregate), QP (quarry process), and shoulder stone. Crushed stone is distinguished from gravel by the U.S. Geological Survey. Fine gravel: gravel consisting of particles with a diameter of . Lag gravel: a surface accumulation of coarse gravel produced by the removal of finer particles. Pay gravel: also known as "pay dirt"; a nickname for gravel with a high concentration of gold and other precious metals. The metals are recovered through gold panning. Pea gravel: also known as "pea shingle"; clean gravel similar in size to garden peas. Used for concrete surfaces, walkways, driveways and as a substrate in home aquariums. Piedmont gravel: a coarse gravel carried down from high places by mountain streams and deposited on relatively flat ground, where the water runs more slowly. Plateau gravel: a layer of gravel on a plateau or other region above the height at which stream-terrace gravel is usually found. Shingle: coarse, loose, well-rounded, waterworn sediment, specifically alluvial and beach sediment, that is largely composed of smooth and spheroidal or flattened pebbles, cobbles, and sometimes small boulders, generally measuring in diameter. Relationship to plant life In locales where gravelly soil is predominant, plant life is generally more sparse. This is due to the inferior ability of gravels to retain moisture, as well as the corresponding paucity of mineral nutrients, since finer soils that contain such minerals are present in smaller amounts.
In the geologic record Sediments containing over 30% gravel that become lithified into solid rock are termed conglomerate. Conglomerates are widely distributed in sedimentary rock of all ages, but usually as a minor component, making up less than 1% of all sedimentary rock. Alluvial fans likely contain the largest accumulations of gravel in the geologic record. These include conglomerates of the Triassic basins of eastern North America and the New Red Sandstone of south Devon. See also Construction aggregate Melon gravel Pebble Rock Shingle beach References External links British Geological Survey UKGravelBarriers: Understanding coastal protection by gravel barriers in a changing climate Aggregate (composite) Sedimentology Building stone Natural materials Pavements Gardening aids Stone (material) Soil-based building materials
Gravel
[ "Physics" ]
1,567
[ "Natural materials", "Materials", "Matter" ]
167,181
https://en.wikipedia.org/wiki/Shopping%20hours
Customs and regulations for shopping hours (times that shops are open) vary between countries and between cities. Shopping days and impact of holidays Some countries, particularly those with predominantly Christian populations or histories, do not allow Sunday shopping. In Islamic countries, some shops are closed on Fridays for noontime prayers. In Israel many shops are closed on Friday evenings and Saturdays during the daytime for Shabbat (the Jewish Sabbath). Each state in Australia sets its own standard trading hours, but in most of the country the shops are open seven days a week for at least part of the day. These also depends on their day-to-day needs. For some shops and other businesses in culturally Christian countries, Christmas Day is the only day in the year that they are closed. In the United States and Canada, nearly all retail stores are open every day of the year except for Thanksgiving, Christmas Day, and Easter Sunday. Some suburban and smaller communities often close on Sundays. For example, Bergen County, New Jersey, next to New York City, completely bans Sunday shopping. Nearly all stores in the United States have restricted hours on Sundays (most often 11 am or noon to 5 - 7 pm), and stores close early on important holidays, such as Christmas Eve, New Year's Eve, New Year's Day, and Independence Day. Banks, post offices and other government offices either are closed on weekends, or close early on Saturdays. Many other non-retail establishments remain closed on weekends. In Islamic countries shops may have special opening hours during Ramadan. In Israel, many shops are closed on religious holidays other than Shabbat, especially on Yom Kippur when nearly all businesses are closed. By country Australia Shop trading hours in Australia are regulated by individual states and territories. The Australian Capital Territory, the Northern Territory and the states of New South Wales, Victoria and Tasmania, totally or almost totally deregulate shopping hours. All retail businesses in the two territories, regardless of size or product offer, are allowed to decide their trading hours to suit their individual customer demand. Non-essential shops in the three states are required to remain closed on Christmas Day and Good Friday, ANZAC Day (until 1 pm), and in Tasmania and NSW on Easter Sunday, and in NSW on Boxing Day (outside the Sydney special trading precinct). Shops in the Northern Territory and the Australian Capital Territory can remain open on any public holiday. The two main supermarket operators, Woolworths and Coles, generally trade between 6 am and midnight every day, although some inner-city shops in Sydney and Melbourne operate twenty-four hours. In Canberra, stores such as Woolworths Kmart and Coles were open 24/7; however during the Covid-19 pandemic store closures, these stores did not reopen with 24/7 shopping hours. Melbourne generally has the most relaxed rules. Almost all shopping centres in Melbourne now trade until 9 pm on Thursdays and Fridays as well as being open longer hours on Sundays. Interstate late night trading only occurs on either Thursday or Friday rather than both. Melbourne is also famous for beginning the trend of 36-hour overnight trades in the lead-up to Christmas. Some of the larger shopping centres will open from 8 am December 23 until 6 pm on Christmas Eve. Centres often open to 10 pm or midnight on most other nights in the fortnight before Christmas, and the first few days of the annual Boxing Day Sales. 
Trading hours in the Australian Capital Territory have been deregulated since the repeal of the Trading Hours Act 1996 [ACT] on 29 May 1997. Shopping hours in South Australia are still regulated, but there have been numerous changes to relax the laws. Nonetheless, trading laws remain complicated and confusing: legal trading hours vary depending on size and product offer. Supermarkets that trade with fewer than seven workers and with a trading floor less than 500 m2 are exempt from the laws. Larger supermarkets are required by law to close at 9 pm on Mondays to Fridays, and 5 pm on Saturdays; they are permitted to trade on Sundays and public holidays only from 9 am to 5 pm, except ANZAC Day, which is 12 noon to 5 pm; they must remain closed on Good Friday, Easter Sunday and Christmas Day. In all areas of Queensland, trading hours of major supermarkets are Monday to Saturday from 7 am to 9 pm and Sundays and public holidays from 9 am to 6 pm. Most major shopping centres close at 5 pm every day, except for "late night shopping" on one night a week. Supermarkets in major shopping centres must still cease trading at 9 pm, with special access for just the supermarket. In rural areas of Western Australia below the 27th parallel, local governments nominate shop closing hours to the State government, which, if accepted, are implemented by ministerial order. Shopping hours in the state's capital, Perth, are regulated by laws similar to South Australia's. Trading hours are stipulated in law, and are based on size and product offer. As in South Australia, smaller, independently operated supermarket retailers are exempted. Chain supermarkets are required to close Monday to Friday at 9 pm, Saturdays at 5 pm, and are permitted to trade on Sundays and public holidays only from 11 am to 5 pm. Austria The situation in Austria is very similar to that in Germany, with most public holidays being based on Catholic holidays as the country is predominantly Roman Catholic. Until the 1990s, all shops closed around noon on Saturday and did not reopen until Monday morning. Entrepreneurs such as Richard Lugner lobbied for an expansion of shopping hours, and laws are gradually being changed, with more and more exceptions granted. Meanwhile, as in Germany, filling stations and train stations in big cities have taken on the role of Nahversorger ("local providers", supplying the local population with groceries) outside regular shopping hours. Until very recently, shopping hours remained very restrictive. In 2008 Austria modified its 2003 Öffnungszeitengesetz ("opening times law"). The new regulations allow stores to open from 6:00 a.m. until 9:00 p.m. on weekdays, and on Saturday until 6:00 p.m., but they are restricted to a total of 72 open hours per week. Bakeries can open 30 minutes earlier, at 5:30 a.m. Shops are closed on Sunday, but there are exceptions for tourist locations, train stations, airports, and the Christmas season.
The remaining provinces (Manitoba, Ontario, Quebec, New Brunswick, Prince Edward Island and Newfoundland and Labrador) require stores to close on most major holidays. Furthermore, three provinces have further restrictions on Sunday openings. In Manitoba, stores may open on Sundays only with municipal approval and only between 9 am - 6 pm (although exceptions for essential services apply). New Brunswick allows Sunday opening all year only with both municipal and provincial approval; otherwise it is permitted only from August until the New Year. Some communities in New Brunswick (such as Woodstock, Miramichi, Sussex) restrict Sunday hours of operation to 12 pm - 5 pm. The province of Quebec is the only province in Canada that regulates shopping hours outside of Sundays and holidays. As a general rule, stores are permitted to open only between 8 am and 9 pm weekdays and 8 am - 5 pm weekends, excluding holidays. There are several exceptions, however, notably several supermarkets in Montreal, which are open later hours or 24 hours a day. In practice, few stores in Canada (except a few grocery stores) remain open 24 hours. Most shopping centres open from 10 am-9pm Monday to Friday, 9:30 am - 6 pm (or in some cases 9 pm) on Saturday and 12 pm - 5 pm or 6 pm on Sunday. Many larger stores, such as Walmart Canada, and most major grocery stores remain open 8 am - 10 pm Monday to Saturday and 10 am - 6 pm (in some provinces 8 am - 10 pm) on Sunday, except in provinces where further restrictions apply. The Sobeys chain stays open from 7 am - 11 pm on weekdays and Saturdays, though some locations are open twenty-four hours. Many Loblaws brand stores such as Zehrs Markets and Real Canadian Superstore are open from 7 am - 11 pm, 7 days a week. China Trading hours in China, including Hong Kong and Macau special administrative regions, are commercial decisions and are not regulated. Most shops are open on public holidays. Some convenience stores are open twenty-four hours and every day of the year, but only a few large supermarkets are open twenty-four hours a day. During the Chinese New Year, many shops in China close for a few days, from Chinese New Year's Eve to the first day of the Chinese New Year. Or more often, to the third day of the Chinese New Year. Some shops in Hong Kong and Macau operate on Chinese New Year holidays, especially supermarket chains. Croatia Shopping hours in Croatia are currently unregulated after the Constitutional Court struck down a ban on Sunday shopping, which had been in effect from mid-2008 until mid-2009. Most large out-of-town supermarkets are open between 07:30/08:00-21:00/22:00, Monday to Sunday. Shopping malls usually open at 09:00 and also close at 22:00, every day. Smaller supermarkets close earlier on Sundays, typically at 13:00. Other shops in urban areas are generally closed on Sundays. Bakeries and newspaper kiosks often open very early in the morning, at 05:30 or 06:00, and open every day but not twenty-four hours. Filling stations and convenience stores along major roads as well as some pharmacies (at least one in each major city, five in Zagreb) operate twenty-four hours. Denmark Standard operating hours for most businesses are generally 8:00/8:30 - 17:30. Since 1 October 2012, Danish shops have been allowed to be open every day around the clock, except on public holidays and after 3 pm on Christmas Eve's Day and New Year Eve's Day. Shops with a turnover of less than DKK 32.2 million (2012 figure, indexed) are allowed to be open every day of the year. 
Still in many small towns shops are usually closed on Saturday after 2 pm and on Sunday. Some small shops are closed on Monday. Finland Sunday shopping was first introduced in 1994. In 1989 shops were allowed to be open on Sundays in sparsely populated areas. In the autumn of 1994 the law was extended to apply to the conurbanations i.e. densely populated areas, but only in December and on six specifically designated Sundays. In 1997 it was legislated that the grocery shops could be open on Sundays during the whole summer. At the same time the closing hour was set at later; 21:00. In 2000 small markets — less than 400 m2 in sales area — were encouraged to be open on Sundays all year around, with the exception of four days. Also the legislation concerning (super)markets bigger than 400 m2 in sales area was clarified by discarding the law of six designated Sundays and replacing it with Sunday opening hours from May to August and from November to December. On 15 December 2015, the Finnish parliament voted to remove all opening hour restrictions for all retailers. The law came into effect on 1 January 2016. Many stores are open every day. Larger supermarkets and hypermarkets are open from 7:00 or 8:00 to 21:00 or 22:00, and smaller shops from 10:00 to 18:00 or 19:00. The only stores with regulated hours are the nationally owned Alko alcohol shops, which are open from 9:00 to 21:00 on weekdays and from 9:00 to 18:00 on Saturdays. On Sundays all Alko stores are closed. The sale of alcoholic beverages of over 2.8% is limited from 9:00 to 21:00 on each day of the week. Germany In Germany, shopping days and opening hours were previously regulated by a federal law called the "Shop Closing Law" (Ladenschlussgesetz), first enacted in 1956 and last revised on 13 March 2003. On 7 July 2006, however, the federal government handed over the authority to regulate shopping hours to the sixteen states (Länder). Since then, states have been allowed to pass their own laws regulating opening hours. The federal Ladenschlussgesetz still applies in Bavaria and Saarland, which have not passed their own laws. Under this law, shops may not open prior to 6 am and may not stay open later than 8 p.m. from Monday to Saturday. Shops must also stay closed on Sundays and public holidays (both federal and state), and special rules apply concerning Christmas Eve (December 24) should that day fall on a weekday. There are several exceptions, including petrol stations and shops located in railway stations and airports, which may stay open past the normal hours. Most petrol stations in larger cities, and all situated on Autobahns, are open twenty-four hours. Shops in so-called "tourist zones" may also open outside the normal hours, but they are restricted to selling souvenirs, handcrafted articles and similar tourist items. In connection with fairs and public market days, communities are allowed four days per year (normally Sundays) on which shops may open outside the normal restrictions; such shop openings may not take place during primary church services and they must close by 6 pm. Bakeries may open for business at 5.30 am and may also open for a limited time on Sundays. Restaurants, bars, theatres, and cultural establishments are generally unaffected by the shop opening time restrictions. As most public holidays in Germany are religiously based, and since the religious holidays (Protestant and Catholic) are not uniform across Germany, shops may be closed due to a public holiday in one state, and open in a neighbouring state. 
Bavaria even differentiates between cities with Protestant or Catholic majorities. The Ladenschlussgesetz has been the subject of controversy, as larger stores (and many of their customers) would prefer to have fewer restrictions on shopping hours, while trade unions, small shop owners and the church are opposed to a further loosening of the rules. On June 9, 2004, the German supreme court (Bundesverfassungsgericht) rejected a claim by the German department store chain Kaufhof AG that the shop-closing law was unconstitutional. Among other things, the court cited Article 140 of the German constitution (Grundgesetz) (which in turn invokes Article 139 of the 1919 Weimar Constitution) protecting Sundays and public holidays as days of rest and recuperation. Nonetheless, the court in effect invited the federal parliament (Bundestag) to reconsider whether the states should regulate hours instead of the federal government. So far, no state has passed a regulation that allows general store opening on Sundays. States with no restrictions from Monday to Saturday and varying regulations for Sunday: Baden-Württemberg, Berlin, Brandenburg, Bremen, Hamburg, Hesse, Lower Saxony and Schleswig-Holstein. States with no restrictions from Monday to Friday and varying regulations for Saturday and Sunday: Mecklenburg-Western Pomerania, North Rhine-Westphalia, Saxony-Anhalt and Thuringia. States where shops can open between 6 a.m. and 10 p.m. from Monday to Saturday, with varying regulations for Sunday: Rhineland-Palatinate and Saxony. States with no liberalisation of opening hours exceeding the federal Ladenschlussgesetz: Bavaria and Saarland. Ireland Shops in Ireland may, with few exceptions (such as those involved in the sale of alcohol), open whenever they want, including Sundays and public holidays. Here are typical hours: Monday to Wednesday and Saturday, 8:00/9:00/10:00 - 17:00/18:00/19:00; Thursday and Friday, 8:00/9:00/10:00 - 20:00/21:00/22:00; Sunday, 9:00/10:00/11:00 - 17:00/18:00/19:00. Many supermarkets are open twenty-four hours or have longer opening hours (like 8:00 - 22:00) every day. Large shopping centres are typically open longer hours every day (e.g. 09:00 - 21:00/22:00 weekdays, 09:00 - 19:00 Saturdays, 10:00 - 19:00 Sundays). In the two weeks running up to Christmas, it is common for many shops to have extended opening hours; some may operate twenty-four hours, although 24-hour openings are extremely uncommon and occur mainly in the large cities. On Christmas Eve most shops have shut their doors by 6 pm, and some close by 3 pm. Most shops (other than petrol stations or convenience stores) in smaller towns and villages don't open at all on Sundays. Almost all shops (except certain petrol stations and convenience stores) are closed on Christmas Day, though most are open on all other holidays. Convenience stores and some chemists (drugstores) normally open at 09:00 and close between 18:00 and 21:00. On New Year's Day, shops also keep Sunday hours. In rural areas or in traditional trades, businesses used to take a half day on Wednesdays, but this no longer happens. Alcohol is allowed to be sold only between 10:30 and 22:00 from Monday to Saturday and 12:30 to 22:00 on Sundays, but this does not affect opening hours (supermarkets will often block access to alcoholic products outside of these times). Alcohol can now be sold on Good Friday, and pubs may also open on that day.
Banks are open from 09:00-15:00 on weekdays, and closed on weekends; post offices are open from 09:00-17:00 on weekdays, and closed on weekends. Convenience stores are open round the clock. Mexico In Mexico City, in large shopping centers, stores are generally open from 11 a.m. to 8 p.m., and on Sundays from 11 a.m. to 8 p.m. Restaurant and cinema hours are different, as are those of independent shops and markets. Netherlands Regular opening hours are Monday 11:00 - 18:00; Tuesday-Friday: 09:30 - 18:00; Saturday: 09:30 - 17:00; Sunday (Amsterdam, Rotterdam, The Hague, Utrecht, Almere, Leiden and smaller tourist towns): 12:00 - 18:00. In many other towns shops are open every first Sunday of the month (koopzondag). Shops are allowed to stay open until 22:00 from Monday to Saturday. Except in busy tourist areas, in or near railway stations, or for big box retailers such as Media Markt, most close at 18:00 on weekdays, and 17:00 on Saturdays, unlike Germany where retailers have taken fuller advantage of liberalization laws and most stay open till at least 19:30. Many supermarkets (including outlets of the market leader Albert Heijn, several DIY stores and IKEA) stay open until 20:00, 21:00 or 22:00. Most towns have their weekly shopping evening (koopavond), when shops stay open until 21:00, on Thursday or Friday. In touristic towns (like Amsterdam's city centre) supermarkets are allowed to open on Sundays between 07:00 and 22:00. Many towns have one or more supermarkets (avondwinkels) that are open until later in the evening, occasionally all night. Convenience stores also have longer shopping hours; they are at many larger railway stations ("Albert Heijn to go") and in some busy streets. A regular size supermarket that is open until midnight seven days a week is the Albert Heijn at Schiphol Airport near Amsterdam (in the landside area of the airport, not just for air travelers). On public holidays, shops that close on Sundays are usually also closed, and other shops tend to have Sunday opening hours. On Christmas Day and New Year's Eve almost all shops are closed. For specific opening hours (openingstijden) in the Netherlands there are several websites. Serbia Shopping hours in Serbia are unregulated. Large supermarkets are usually open from 07:00/07:30/08:00 to 22:00 from Monday to Sunday. Shopping malls open at 09:00 or 10:00 and also stay open until 22:00. Smaller supermarkets close earlier on Sundays, at 15:00 or 16:00. Unlike neighbouring Croatia, many fast food outlets, bakeries, kiosks and convenience stores in urban areas operate twenty-four hours. Even some hypermarkets, like Tempo and Metro, are open twenty-four hours. Singapore Shopping hours for shopping malls are usually from 10:00 to 22:00 from Monday to Sunday. Automotive shops like tire outlets are usually from 09.30 to 19:00. Some supermarkets are open twenty-four hours. Most stores do not open on the first day of Chinese New Year because of low demand patronage. However, the COVID-19 pandemic prompted some shopping malls and stores to shorten their operating hours, a trend that has persisted in certain locations. For instance, several outlets now close as early as 8:30 PM, reflecting changes in consumer behavior and cost management strategies. Despite these adjustments, high footfall areas like Orchard Road continue to maintain extended hours in most establishments to accommodate both locals and tourists. 
Sweden In Sweden there is no longer any law regarding shopping hours except for the nationally owned Systembolaget alcohol shops, which close at 20:00 at the latest on weekdays and 15:00 on Saturdays. On Sundays no alcohol is sold at all, although it is served in restaurants. Shopping centres and food shops are generally open every day; grocery stores are often open until 22:00 all days of the week, and shopping centres usually until 20:00 on weekdays and 18:00 on weekends. Usually shopping centres are closed on New Year's Day, Midsummer's Day and Christmas Day, but grocery stores are open even on those days, albeit for fewer hours than usual. Although there are no laws regulating business hours in general, labour laws do not allow work between midnight and 5 am in many professions, including grocery stores and most shops. Switzerland Shopping hours are governed by cantonal law and vary accordingly, the only confederally mandated store holiday being August 1 (the national holiday), as per article 110 III of the Swiss Constitution. Most often, stores will be open from 8 or 9 am to 7 or 8 pm, with late opening until 9 pm one day a week (usually a Thursday or a Friday), depending on the region. On Saturday and the day before public holidays, most stores close at around 4 or 5 pm. Stores are also generally closed on Sundays; see Sunday shopping in Switzerland. United Kingdom In Great Britain, many retail stores are open every day. Some large supermarkets are open twenty-four hours (except on Sundays in England and Wales). Most stores do not open on Easter Sunday, New Year's Day or Christmas Day and have reduced hours on other public and bank holidays. Typical store shopping hours: Mondays - Saturdays: 9:00 am to 5:30 pm, or 10:00 am to 8:00 pm/10:00 pm. Sundays: 10:00 am to 4:00 pm, or 11:00 am to 5:00 pm, or 12 noon to 6:00 pm. Sunday shopping has become more popular, and most but not all shops in towns and cities are open for business. Shops 280 m2 and larger in England and Wales are allowed to trade for only six hours on Sundays; shops in Northern Ireland may open from 1:00 pm to 6:00 pm. In Scotland, in theory, Sunday is considered the same as any other day, and there are no restrictions. In practice, however, some shops do not open on Sunday or open for only four hours in smaller towns. In some Free Church dominated areas, for example Stornoway on the Isle of Lewis, Sunday is considered a day of rest and consequently very few, if any, shops open at all.
Thus, Las Vegas is home to many 24-hour car dealerships, dental clinics, auto mechanics, computer shops, and even some smaller clothing stores. Typical store shopping hours: Monday - Saturday 9 - 10 a.m. to 8 - 10 p.m. (9:00 - 10:00 to 20:00 - 22:00) Sunday 11 - 12 noon to 5 - 7 p.m. (11:00 - 12:00 to 17:00 - 19:00) Supermarkets usually open at earlier hours, between 6 or 7 a.m. to 10 p.m. (7:00 - 22:00) every day. Boutiques and smaller shops often close early at 5 or 6 p.m. (17:00 or 18:00), and usually close once or twice a week, most often on Sunday. Nearly all stores are closed on Easter, Thanksgiving Day and Christmas Day. In recent years, however, several department stores and discount stores have started opening during the evening on Thanksgiving Day; see Black Friday for more details. Early closing (half days) occur on Christmas Eve and New Year's Eve. Some stores might have reduced hours on other major holidays. All malls and department stores, as well as most other stores remain open longer hours between Thanksgiving weekend and Christmas Eve for Christmas and holiday shopping. Many are open until 11 p.m. (23:00), and a few even longer. Few stores remain open twenty-four hours; the main exceptions to this rule are most Walmarts throughout the country (especially Supercenters, which combine a discount store and full supermarket); many convenience stores, especially those that also sell motor fuel; and some drug stores like CVS, especially in larger cities like New York City and Las Vegas. Some stores, especially in suburban and rural areas, might remain closed on Sundays for any reason (such as most retail in Bergen County, New Jersey due to the blue law, which is next to New York City, and home to four major malls and has the largest retail in the nation). See also 24/7 service Blue law Sunday shopping Ladenschlußgesetz References Retail processes and techniques Economics and time Shopping (activity)
Shopping hours
[ "Physics" ]
5,675
[ "Spacetime", "Economics and time", "Physical quantities", "Time" ]
167,183
https://en.wikipedia.org/wiki/Thiotimoline
Thiotimoline is a fictitious chemical compound conceived by American biochemist and science fiction author Isaac Asimov. It was first described in a spoof scientific paper titled "The Endochronic Properties of Resublimated Thiotimoline" in 1948. The major peculiarity of the chemical is its "endochronicity": it starts dissolving before it makes contact with water. Asimov went on to write three additional short stories, each describing different properties or uses of thiotimoline. Chemical properties In Asimov's writings the endochronicity of thiotimoline is explained by the fact that in the thiotimoline molecule, there is at least one carbon atom such that, while two of the carbon's four chemical bonds lie in normal space and time, one of the bonds projects into the future and another into the past. Thiotimoline is derived from the bark of the (fictitious) shrub Rosacea karlsbadensis rufo, and the thiotimoline molecule includes at least fourteen hydroxy groups, two amino groups, and one sulfonic acid group, and possibly one nitro compound group as well. The nature of the hydrocarbon nucleus is unknown, although it seems in part to be an aromatic hydrocarbon. Background In 1947 Asimov was engaged in doctoral research in chemistry and, as part of his experimental procedure, he needed to dissolve catechol in water. As he observed the crystals dissolve as soon as they hit the water's surface, it occurred to him that if catechol were any more soluble, then it would dissolve before it encountered the water. By that time Asimov had been writing professionally for nine years and would soon write a doctoral dissertation. He feared that the experience of writing readable prose for publication (i.e. science fiction) might have impaired his ability to write the turgid prose typical of academic discourse, and decided to practice with a spoof article (including charts, graphs, tables, and citations of fake articles in nonexistent journals) describing experiments on a compound, thiotimoline, that was so soluble that it dissolved in water up to 1.12 seconds before the water was added. Asimov wrote the article on 8 June 1947, but was uncertain as to whether the resulting work of fiction was publishable. John W. Campbell, the editor of Astounding Science Fiction, accepted it for publication on 10 June, agreeing to Asimov's request that it appear under a pseudonym in deference to Asimov's concern that he might alienate potential doctoral examiners at Columbia University if he were revealed as the author. Some months later Asimov was alarmed to see the piece appear in the March 1948 issue of Astounding under his own name, and copies of the issue circulated at the Columbia chemistry department. Asimov believed that Campbell had done so out of greater wisdom. His examiners told him that they accepted his dissertation by asking a final question about thiotimoline, resulting in him having to be led from the room while laughing hysterically with relief. The article made Asimov famous for the first time outside science fiction, as chemists shared copies of the article. He heard that many children went to the New York Public Library trying to find the nonexistent journals. "The Micropsychiatric Applications of Thiotimoline" In 1952, Asimov wrote a second spoof scientific paper on thiotimoline called "The Micropsychiatric Applications of Thiotimoline". 
Like the first, it included charts, graphs, tables, and citations of fake articles from fake journals (along with one real citation: Asimov's own earlier spoof article from Astounding, which was listed tongue-in-cheek as the Journal of Astounding Science Fiction). This second article described the use of thiotimoline to establish a quantitative classification of "certain mental disorders". It also expounds a putative rationale for thiotimoline's behaviour: namely that the chemical bonds in the compound's structural formula are so starved of space that some are forced into the time dimension. According to the second article, thiotimoline's time of solubility varies depending on the determination of the person adding the water. It also claims that one effect is that when people with multiple personalities add the water, some parts of the thiotimoline dissolve before others, due to some of the individual's personalities being more determined than others. "The Micropsychiatric Applications of Thiotimoline" appeared in the December 1953 issue of Astounding. The first two thiotimoline "articles" appeared together in Asimov's first collection of science essays, Only a Trillion (1957), under the joint title "The Marvellous Properties of Thiotimoline". Asimov also included the original article in his 1972 collection The Early Asimov. The first article also appeared in Fifty Years of the Best Science Fiction from Analog (Davis Publications, 1982). "Thiotimoline and the Space Age" Asimov wrote a third thiotimoline article on 14 November 1959 called "Thiotimoline and the Space Age". Instead of a fake scientific paper, this third article took the form of an address by Asimov to the 12th annual meeting of the American Chronochemical Society, a nonexistent scientific society. In his address, Asimov "describes" his first experiments with thiotimoline in July 1947, and timing the compound's dissolution with the original endochronometer, "the same instrument now at the Smithsonian". Asimov laments the skepticism with which chronochemistry has been greeted in America, noting with sorrow that his address has only attracted fifteen attendees. He then contrasts the thriving state of chronochemistry in the Soviet Union, with the research town of Khrushchevsk, nicknamed "Tiotimolingrad", established in the Urals. According to Asimov, two Scottish researchers have developed a "telechronic battery", which uses a series of 77,000 interconnected endochronometers to allow a final sample of thiotimoline to dissolve up to a day before water is added to an initial sample. Asimov says there is "strong, if indirect, evidence that the Soviet Union possesses even more sophisticated devices and is turning them out in commercial quantities". He believes that the Soviets are using telechronic batteries to determine ahead of time whether satellite launches will be successful. Finally, Asimov describes attempts to create a "Heisenberg failure", to get a sample of thiotimoline to dissolve without later adding water to it. In every case where the thiotimoline dissolved, some accident occurred that caused some water to be added to it at the proper time. Several attempts to create a Heisenberg failure in the mid-1950s coincided with a series of hurricanes striking New England in such a manner as to suggest that nature would find a way to add water whatever man decided, if man were to be resolute in not adding water. 
Asimov speculated that Noah's flood might have been brought about by thiotimoline experiments among the ancient Sumerians. He then concludes with some speculation about thiotimoline's potential applications as a weapon of mass destruction by deliberately using it to artificially induce hurricanes. "Thiotimoline and the Space Age" appeared in the October 1960 issue of Astounding, which was then in the final stages of changing its name to Analog. The article was reprinted in full in Opus 100 (1969) and The Asimov Chronicles: Fifty Years of Isaac Asimov (1989). "Thiotimoline to the Stars" Asimov's final piece on thiotimoline was a short story titled "Thiotimoline to the Stars", which he wrote for Harry Harrison's anthology Astounding (1973). In it, Admiral Vernon, Commandant of the Astronautic Academy, gives a speech to the graduating "Class of '22". Vernon's speech explains that thiotimoline was first mentioned in 1948 by a semi-mythical scientist named Azimuth or Asymptote, but that serious study of the compound didn't begin until the 21st century scientist Almirante worked out the theory of hypersteric hindrance. Later scientists worked out ways to form endochronic molecules into polymers, allowing large structures such as spaceships to be built out of endochronic materials. One effect of endochronicity is that if one fails to add water to an object that has reacted to water, the object will travel into the future in search of water to interact with. An individual with sufficient inborn talent, Vernon explains, can perfectly balance a starship's endochronicity with relativistic time dilation, so that a ship traveling at relativistic speeds can age at the same rate as the rest of the universe, allowing it to return to its starting point within months, rather than centuries, of its departure. Vernon emphasizes that starship pilots are expected to match endochronicity with relativity exactly: a sixty-second difference between the two is regarded as barely acceptable, and a 120-second difference is considered grounds for dismissal. Vernon also emphasizes that endochronic molecules are unstable, and must be renewed before each trip, so that an endochronic ship that finds itself lost might not have sufficient endochronicity to return to its proper time. A ship that finds itself in the future might be able to re-endochronize itself if the technology still exists; a ship that finds itself in the past will be marooned there. Finally, Vernon reveals that the auditorium where he is giving his speech is actually an endochronic starship, and that during his speech, they have all flown to the outskirts of the Solar System. The graduates felt no acceleration because canceling out time dilation also caused the canceling out of inertia. When Vernon concludes his speech, the graduates will be landing in the United Nations Port at Lincoln, Nebraska, where they will be spending the weekend. After they land, Vernon receives an awful shock and passes out when his pilot informs him that the ship is surrounded by Indians. Vernon wrongly assumed the pilot meant Red Indians, and thought that they had landed centuries in the past. But the pilot only meant that they had landed at the correct time but near Calcutta, India. Asimov included "Thiotimoline to the Stars" in his 1975 collection Buy Jupiter and Other Stories. 
Other references to thiotimoline In Glen Bever's story "And Silently Vanish Away" a chemist with the unique ability to use psychokinetic catalysis to speed up difficult reactions is shocked by a lab explosion and the mixture he was working on gets changed. Under analysis the structure never appears to be the same twice and when the substance is injected into lab rats they start to silently and suddenly vanish. It is found that one part of the compound is a molecule which spreads out into four dimensions. The four-dimensional molecule is thiotimoline. The story appeared in the November 1971 issue of Analog Science Fiction/Science Fact. Topi H. Barr's story "Antithiotimoline" deals with a chemist who accidentally creates a thiotimoline-like compound which extrudes only into the past, enabling the scientist to create images of past events. The narrator complains that thiotimoline is extremely difficult to obtain, and suspects that the CIA or other agencies are controlling the supply for their own reasons. The story appeared in the December 1977 issue of Analog. Spider Robinson's story "Mirror mirror, off the wall", published in Time Travelers Strictly Cash in 1981, also references thiotimoline. In Robert Silverberg's 1989 story "The Asenion Solution", thiotimoline is used to send excess quantities of plutonium-186 to the end of time, where they will fall over the brink into anti-time and lead to the Big Bang. "The Asenion Solution" appeared in the Asimov festschrift Foundation's Friends. The November/December 2001 and March/April 2002 issues of the IEEE Design & Test of Computers included spoof articles on the use of thiotimoline for debugging computers. In the game We Happy Few, a mysterious liquid called "motilene" acts as the primary source of electrical power in the setting, and is pumped throughout the city in pipes in lieu of a traditional electrical grid, or alternatively placed into special jars to act as portable batteries. A research note can be found in one location which makes reference to "thiomotilene crystals" and their "endochronic properties", which in turn strongly suggests motilene's name be derived from thiotimoline. See also Tachyons in fiction List of fictional elements, materials, isotopes and atomic particles Pâté de Foie Gras, another Asimov scientific spoof about a goose which actually laid golden eggs References External links "The Endochronic Properties of Resublimated Thiotimoline", "The Micropsychiatric Applications of Thiotimoline", and "Thiotimoline and the Space Age" on the Internet Archive Fictional materials Hoaxes in science Short fiction about time travel Short stories by Isaac Asimov False documents
Thiotimoline
[ "Physics" ]
2,758
[ "Materials", "Fictional materials", "Matter" ]
167,184
https://en.wikipedia.org/wiki/Rapid%20eye%20movement%20sleep
Rapid eye movement sleep (REM sleep or REMS) is a unique phase of sleep in mammals (including humans) and birds, characterized by random rapid movement of the eyes, accompanied by low muscle tone throughout the body, and the propensity of the sleeper to dream vividly. The core body and brain temperatures increase during REM sleep and skin temperature decreases to lowest values. The REM phase is also known as paradoxical sleep (PS) and sometimes desynchronized sleep or dreamy sleep, because of physiological similarities to waking states including rapid, low-voltage desynchronized brain waves. Electrical and chemical activity regulating this phase seem to originate in the brain stem, and is characterized most notably by an abundance of the neurotransmitter acetylcholine, combined with a nearly complete absence of monoamine neurotransmitters histamine, serotonin and norepinephrine. Experiences of REM sleep are not transferred to permanent memory due to absence of norepinephrine. REM sleep is physiologically different from the other phases of sleep, which are collectively referred to as non-REM sleep (NREM sleep, NREMS, synchronized sleep). The absence of visual and auditory stimulation (sensory deprivation) during REM sleep can cause hallucinations. REM and non-REM sleep alternate within one sleep cycle, which lasts about 90 minutes in adult humans. As sleep cycles continue, they shift towards a higher proportion of REM sleep. The transition to REM sleep brings marked physical changes, beginning with electrical bursts called "ponto-geniculo-occipital waves" (PGO waves) originating in the brain stem. REM sleep occurs 4 times in a 7-hour sleep. Organisms in REM sleep suspend central homeostasis, allowing large fluctuations in respiration, thermoregulation and circulation which do not occur in any other modes of sleeping or waking. The body abruptly loses muscle tone, a state known as REM atonia. In 1953, Professor Nathaniel Kleitman and his student Eugene Aserinsky defined rapid eye movement and linked it to dreams. REM sleep was further described by researchers, including William Dement and Michel Jouvet. Many experiments have involved awakening test subjects whenever they begin to enter the REM phase, thereby producing a state known as REM deprivation. Subjects allowed to sleep normally again usually experience a modest REM rebound. Techniques of neurosurgery, chemical injection, electroencephalography, positron emission tomography, and reports of dreamers upon waking have all been used to study this phase of sleep. Physiology Electrical activity in the brain REM sleep is called "paradoxical" because of its similarities to wakefulness. Although the body is paralyzed, the brain acts as if it is somewhat awake, with cerebral neurons firing with the same overall intensity as in wakefulness. Electroencephalography during REM deep sleep reveals fast, low amplitude, desynchronized neural oscillation (brainwaves) that resemble the pattern seen during wakefulness, which differ from the slow δ (delta) waves pattern of NREM deep sleep. An important element of this contrast is the 3–10 Hz theta rhythm in the hippocampus and 40–60 Hz gamma waves in the cortex; patterns of EEG activity similar to these rhythms are also observed during wakefulness. The cortical and thalamic neurons in the waking and REM sleeping brain are more depolarized (fire more readily) than in the NREM deep sleeping brain. Human theta wave activity predominates during REM sleep in both the hippocampus and the cortex. 
During REM sleep, electrical connectivity among different parts of the brain manifests differently than during wakefulness. Frontal and posterior areas are less coherent in most frequencies, a fact which has been cited in relation to the chaotic experience of dreaming. However, the posterior areas are more coherent with each other; as are the right and left hemispheres of the brain, especially during lucid dreams. Brain energy use in REM sleep, as measured by oxygen and glucose metabolism, equals or exceeds energy use in waking. The rate in non-REM sleep is 11–40% lower. Brain stem Neural activity during REM sleep seems to originate in the brain stem, especially the pontine tegmentum and locus coeruleus. REM sleep is punctuated and immediately preceded by PGO (ponto-geniculo-occipital) waves, bursts of electrical activity originating in the brain stem. (PGO waves have long been measured directly in cats but not in humans because of constraints on experimentation; however, comparable effects have been observed in humans during "phasic" events which occur during REM sleep, and the existence of similar PGO waves is thus inferred.) These waves occur in clusters about every 6 seconds for 1–2 minutes during the transition from deep to paradoxical sleep. They exhibit their highest amplitude upon moving into the visual cortex and are a cause of the "rapid eye movements" in paradoxical sleep. Other muscles may also contract under the influence of these waves. Forebrain Research in the 1990s using positron emission tomography (PET) confirmed the role of the brain stem and suggested that, within the forebrain, the limbic and paralimbic systems showed more activation than other areas. The areas activated during REM sleep are approximately inverse to those activated during non-REM sleep and display greater activity than in quiet waking. The "anterior paralimbic REM activation area" (APRA) includes areas linked with emotion, memory, fear and sex, and may thus relate to the experience of dreaming during REMS. More recent PET research has indicated that the distribution of brain activity during REM sleep varies in correspondence with the type of activity seen in the prior period of wakefulness. The superior frontal gyrus, medial frontal areas, intraparietal sulcus, and superior parietal cortex, areas involved in sophisticated mental activity, show equal activity in REM sleep as in wakefulness. The amygdala is also active during REM sleep and may participate in generating the PGO waves, and experimental suppression of the amygdala results in less REM sleep. The amygdala may also regulate cardiac function in lieu of the less active insular cortex. Chemicals in the brain Compared to slow-wave sleep, both waking and paradoxical sleep involve higher use of the neurotransmitter acetylcholine, which may cause the faster brainwaves. The monoamine neurotransmitters norepinephrine, serotonin and histamine are completely unavailable. Injections of acetylcholinesterase inhibitor, which effectively increases available acetylcholine, have been found to induce paradoxical sleep in humans and other animals already in slow-wave sleep. Carbachol, which mimics the effect of acetylcholine on neurons, has a similar influence. In waking humans, the same injections produce paradoxical sleep only if the monoamine neurotransmitters have already been depleted. Two other neurotransmitters, orexin and gamma-Aminobutyric acid (GABA), seem to promote wakefulness, diminish during deep sleep, and inhibit paradoxical sleep. 
Unlike the abrupt transitions in electrical patterns, the chemical changes in the brain show continuous periodic oscillation. Models of REM regulation According to the activation-synthesis hypothesis proposed by Robert McCarley and Allan Hobson in 1975–1977, control over REM sleep involves pathways of "REM-on" and "REM-off" neurons in the brain stem. REM-on neurons are primarily cholinergic (i.e., involve acetylcholine); REM-off neurons activate serotonin and noradrenaline, which among other functions suppress the REM-on neurons. McCarley and Hobson suggested that the REM-on neurons actually stimulate REM-off neurons, thereby serving as the mechanism for the cycling between REM and non-REM sleep. They used Lotka–Volterra equations to describe this cyclical inverse relationship. Kazuya Sakai and Michel Jouvet advanced a similar model in 1981. Whereas acetylcholine manifests in the cortex equally during wakefulness and REM, it appears in higher concentrations in the brain stem during REM. The withdrawal of orexin and GABA may cause the absence of the other excitatory neurotransmitters; researchers in recent years increasingly include GABA regulation in their models. Eye movements Most of the eye movements in "rapid eye movement" sleep are in fact less rapid than those normally exhibited by waking humans. They are also shorter in duration and more likely to loop back to their starting point. About seven such loops take place over one minute of REM sleep. In slow-wave sleep, the eyes can drift apart; however, the eyes of the paradoxical sleeper move in tandem. These eye movements follow the ponto-geniculo-occipital waves originating in the brain stem. The eye movements themselves may relate to the sense of vision experienced in the dream, but a direct relationship remains to be clearly established. Congenitally blind people, who do not typically have visual imagery in their dreams, still move their eyes in REM sleep. An alternative explanation suggests that the functional purpose of REM sleep is for procedural memory processing, and the rapid eye movement is only a side effect of the brain processing the eye-related procedural memory. Circulation, respiration, and thermoregulation Generally speaking, the body suspends homeostasis during paradoxical sleep. Heart rate, cardiac pressure, cardiac output, arterial pressure, and breathing rate quickly become irregular when the body moves into REM sleep. In general, respiratory reflexes such as response to hypoxia diminish. Overall, the brain exerts less control over breathing; electrical stimulation of respiration-linked brain areas does not influence the lungs, as it does during non-REM sleep and in waking. Erections of the penis (nocturnal penile tumescence or NPT) normally accompany REM sleep in rats and humans. If a male has erectile dysfunction (ED) while awake, but has NPT episodes during REM, it would suggest that the ED is from a psychological rather than a physiological cause. In females, erection of the clitoris (nocturnal clitoral tumescence or NCT) causes enlargement, with accompanying vaginal blood flow and transudation (i.e. lubrication). During a normal night of sleep, the penis and clitoris may be erect for a total of one hour to as long as three and a half hours during REM. Body temperature is not well regulated during REM sleep, and thus organisms become more sensitive to temperatures outside their thermoneutral zone. 
Cats and other small furry mammals will shiver and breathe faster to regulate temperature during NREMS—but not during REMS. With the loss of muscle tone, animals lose the ability to regulate temperature through body movement. (However, even cats with pontine lesions preventing muscle atonia during REM did not regulate their temperature by shivering.) Neurons that typically activate in response to cold temperatures—triggers for neural thermoregulation—simply do not fire during REM sleep, as they do in NREM sleep and waking. Consequently, hot or cold environmental temperatures can reduce the proportion of REM sleep, as well as amount of total sleep. In other words, if at the end of a phase of deep sleep, the organism's thermal indicators fall outside of a certain range, it will not enter paradoxical sleep lest deregulation allow temperature to drift further from the desirable value. This mechanism can be 'fooled' by artificially warming the brain. Muscle REM atonia, an almost complete paralysis of the body, is accomplished through the inhibition of motor neurons. When the body shifts into REM sleep, motor neurons throughout the body undergo a process called hyperpolarization: their already-negative membrane potential decreases by another 2–10 millivolts, thereby raising the threshold which a stimulus must overcome to excite them. Muscle inhibition may result from unavailability of monoamine neurotransmitters (restraining the abundance of acetylcholine in the brainstem) and perhaps from mechanisms used in waking muscle inhibition. The medulla oblongata, located between pons and spine, seems to have the capacity for organism-wide muscle inhibition. Some localized twitching and reflexes can still occur. Pupils contract. Lack of REM atonia causes REM behavior disorder, where those affected physically act out their dreams, or conversely "dream out their acts", under an alternative theory on the relationship between muscle impulses during REM and associated mental imagery (which would also apply to people without the condition, except that commands to their muscles are suppressed). This is different from conventional sleepwalking, which takes place during slow-wave sleep, not REM. Narcolepsy, by contrast, seems to involve excessive and unwanted REM atonia: cataplexy and excessive daytime sleepiness while awake, hypnagogic hallucinations before entering slow-wave sleep, or sleep paralysis while waking. Other psychiatric disorders including depression have been linked to disproportionate REM sleep. Patients with suspected sleep disorders are typically evaluated by polysomnogram. Lesions of the pons to prevent atonia have induced functional "REM behavior disorder" in animals. Psychology Dreaming Rapid eye movement sleep (REM) has since its discovery been closely associated with dreaming. Waking up sleepers during a REM phase is a common experimental method for obtaining dream reports; 80% of people can give some kind of dream report under these circumstances. Sleepers awakened from REM tend to give longer, more narrative descriptions of the dreams they were experiencing, and to estimate the duration of their dreams as longer. Lucid dreams are reported far more often in REM sleep. (In fact these could be considered a hybrid state combining essential elements of REM sleep and waking consciousness.) 
The mental events which occur during REM most commonly have dream hallmarks including narrative structure, convincingness (e.g., experiential resemblance to waking life), and incorporation of instinctual themes. Sometimes, they include elements of the dreamer's recent experience taken directly from episodic memory. By one estimate, 80% of dreams occur during REM. Hobson and McCarley proposed that the PGO waves characteristic of "phasic" REM might supply the visual cortex and forebrain with electrical excitement which amplifies the hallucinatory aspects of dreaming. However, people woken up during sleep do not report significantly more bizarre dreams during phasic REMS, compared to tonic REMS. Another possible relationship between the two phenomena could be that the higher threshold for sensory interruption during REM sleep allows the brain to travel further along unrealistic and peculiar trains of thought. Some dreaming can take place during non-REM sleep. "Light sleepers" can experience dreaming during stage 2 non-REM sleep, whereas "deep sleepers", upon awakening in the same stage, are more likely to report "thinking" but not "dreaming". Certain scientific efforts to assess the uniquely bizarre nature of dreams experienced while asleep were forced to conclude that waking thought could be just as bizarre, especially in conditions of sensory deprivation. Because of non-REM dreaming, some sleep researchers have strenuously contested the importance of connecting dreaming to the REM sleep phase. The prospect that well-known neurological aspects of REM do not themselves cause dreaming suggests the need to re-examine the neurobiology of dreaming per se. Some researchers (Dement, Hobson, Jouvet, for example) tend to resist the idea of disconnecting dreaming from REM sleep. Effects of SSRIs Previous research has shown that selective serotonin reuptake inhibitors (SSRIs) have an important effect on REM sleep neurobiology and dreaming. A study at Harvard Medical School in 2000 tested the effects of paroxetine and fluvoxamine on healthy young adult male and females for 31 days: a drug-free baseline week, 19 days on either paroxetine or fluvoxamine with morning and evening doses, and 5 days of absolute discontinuation. Results showed that SSRI treatment decreased the average amount of dream recall frequency in comparison to baseline measurements as a result of serotonergic REM suppression. Fluvoxamine increased the length of dream reporting, bizarreness of dreams as well as the intensity of REM sleep. These effects were the greatest during acute discontinuation compared to treatment and baseline days. However, the subjective intensity of dreaming increased and the proclivity to enter REM sleep was decreased during SSRI treatment compared to baseline and discontinuation days. Creativity After waking from REM sleep, the mind seems "hyperassociative"—more receptive to semantic priming effects. People awakened from REM have performed better on tasks like anagrams and creative problem solving. Sleep aids the process by which creativity forms associative elements into new combinations that are useful or meet some requirement. This occurs in REM sleep rather than in NREM sleep. Rather than being due to memory processes, this has been attributed to changes during REM sleep in cholinergic and noradrenergic neuromodulation. 
High levels of acetylcholine in the hippocampus suppress feedback from hippocampus to the neocortex, while lower levels of acetylcholine and norepinephrine in the neocortex encourage the uncontrolled spread of associational activity within neocortical areas. This is in contrast to waking consciousness, where higher levels of norepinephrine and acetylcholine inhibit recurrent connections in the neocortex. REM sleep through this process adds creativity by allowing "neocortical structures to reorganise associative hierarchies, in which information from the hippocampus would be reinterpreted in relation to previous semantic representations or nodes." Timing In the ultradian sleep cycle, an organism alternates between deep sleep (slow, large, synchronized brain waves) and paradoxical sleep (faster, desynchronized waves). Sleep happens in the context of the larger circadian rhythm, which influences sleepiness and physiological factors based on timekeepers within the body. Sleep can be distributed throughout the day or clustered during one part of the rhythm: in nocturnal animals, during the day, and in diurnal animals, at night. The organism returns to homeostatic regulation almost immediately after REM sleep ends. During a night of sleep, humans usually experience about four or five periods of REM sleep; they are shorter (~15 min) at the beginning of the night and longer (~25 min) toward the end. Many animals and some people tend to wake, or experience a period of very light sleep, for a short time immediately after a bout of REM. The relative amount of REM sleep varies considerably with age. A newborn baby spends more than 80% of total sleep time in REM. REM sleep typically occupies 20–25% of total sleep in adult humans: about 90–120 minutes of a night's sleep. The first REM episode occurs about 70 minutes after falling asleep. Cycles of about 90 minutes each follow, with each cycle including a larger proportion of REM sleep. (The increased REM sleep later in the night is connected with the circadian rhythm and occurs even in people who did not sleep in the first part of the night.) In the weeks after a human baby is born, as its nervous system matures, neural patterns in sleep begin to show a rhythm of REM and non-REM sleep. (In faster-developing mammals, this process occurs in utero.) Infants spend more time in REM sleep than adults. The proportion of REM sleep then decreases significantly in childhood. Older people tend to sleep less overall, but sleep in REM for about the same absolute time (and therefore spend a greater proportion of sleep in REM). Rapid eye movement sleep can be subclassified into tonic and phasic modes. Tonic REM is characterized by theta rhythms in the brain; phasic REM is characterized by PGO waves and actual "rapid" eye movements. Processing of external stimuli is heavily inhibited during phasic REM, and recent evidence suggests that sleepers are more difficult to arouse from phasic REM than in slow-wave sleep. Deprivation effects Selective REMS deprivation causes a significant increase in the number of attempts to go into REM stage while asleep. On recovery nights, an individual will usually move to stage 3 and REM sleep more quickly and experience a REM rebound, which refers to an increase in the time spent in REM stage over normal levels. These findings are consistent with the idea that REM sleep is biologically necessary. However, the "rebound" REM sleep usually does not last fully as long as the estimated length of the missed REM periods. 
After the deprivation is complete, mild psychological disturbances, such as anxiety, irritability, hallucinations, and difficulty concentrating, may develop and appetite may increase. There are also positive consequences of REM deprivation. Some symptoms of depression are found to be suppressed by REM deprivation; aggression may increase, and eating behavior may be disrupted. Higher norepinephrine is a possible cause of these results. Whether and how long-term REM deprivation has psychological effects remains a matter of controversy. Several reports have indicated that REM deprivation increases aggression and sexual behavior in laboratory test animals. Rats deprived of paradoxical sleep die in 4–6 weeks (twice the time before death in case of total sleep deprivation). Mean body temperature falls continually during this period. It has been suggested that acute REM sleep deprivation can improve certain types of depression—when depression appears to be related to an imbalance of certain neurotransmitters. Although sleep deprivation in general annoys most of the population, it has repeatedly been shown to alleviate depression, albeit temporarily. More than half the individuals who experience this relief report it to be rendered ineffective after sleeping the following night. Thus, researchers have devised methods such as altering the sleep schedule for a span of days following a REM deprivation period and combining sleep-schedule alterations with pharmacotherapy to prolong this effect. Antidepressants (including selective serotonin reuptake inhibitors, tricyclics, and monoamine oxidase inhibitors) and stimulants (such as amphetamine, methylphenidate and cocaine) interfere with REM sleep by stimulating the monoamine neurotransmitters which must be suppressed for REM sleep to occur. Administered at therapeutic doses, these drugs may stop REM sleep entirely for weeks or months. Withdrawal causes a REM rebound. Sleep deprivation stimulates hippocampal neurogenesis much as antidepressants do, but whether this effect is driven by REM sleep in particular is unknown. In other animals Although it manifests differently in different animals, REM sleep or something like it occurs in all land mammals—as well as in birds. The primary criteria used to identify REM are the change in electrical activity, measured by EEG, and loss of muscle tone, interspersed with bouts of twitching in phasic REM. The amount of REM sleep and cycling varies among animals; predators experience more REM sleep than prey. Larger animals also tend to stay in REM for longer, possibly because higher thermal inertia of their brains and bodies allows them to tolerate longer suspension of thermoregulation. The period (full cycle of REM and non-REM) lasts for about 90 minutes in humans, 22 minutes in cats, and 12 minutes in rats. In utero, mammals spend more than half (50–80%) of a 24-hour day in REM sleep. Sleeping reptiles do not seem to have PGO waves or the localized brain activation seen in mammalian REM. However, they do exhibit sleep cycles with phases of REM-like electrical activity measurable by EEG. A recent study found periodic eye movements in the central bearded dragon of Australia, leading its authors to speculate that the common ancestor of amniotes may therefore have manifested some precursor to REMS. 
Observations of jumping spiders in their nocturnal resting position also suggest a REM sleep-like state characterized by bouts of twitching and retinal movements and hints of muscle atonia (legs curling up as a result of pressure loss caused by muscle atonia in the prosoma). Sleep deprivation experiments on non-human animals can be set up differently than those on humans. The "flower pot" method involves placing a laboratory animal above water on a platform so small that it falls off upon losing muscle tone. The naturally rude awakening which results may elicit changes in the organism which necessarily exceed the simple absence of a sleep phase. This method also stops working after about 3 days as the subjects (typically rats) lose their will to avoid the water. Another method involves computer monitoring of brain waves, complete with automatic mechanized shaking of the cage when the test animal drifts into REM sleep. Possible functions Some researchers argue that the perpetuation of a complex brain process such as REM sleep indicates that it serves an important function for the survival of mammalian and avian species. It fulfills important physiological needs vital for survival to the extent that prolonged REM sleep deprivation leads to death in experimental animals. In both humans and experimental animals, REM sleep loss leads to several behavioral and physiological abnormalities. Loss of REM sleep has been noticed during various natural and experimental infections. Survivability of the experimental animals decreases when REM sleep is totally attenuated during infection; this leads to the possibility that the quality and quantity of REM sleep is generally essential for normal body physiology. Further, the existence of a "REM rebound" effect suggests the possibility of a biological need for REM sleep. While the precise function of REM sleep is not well understood, several theories have been proposed. Memory Sleep in general aids memory. REM sleep may favor the preservation of certain types of memories: specifically, procedural memory, spatial memory, and emotional memory. In rats, REM sleep increases following intensive learning, especially several hours after, and sometimes for multiple nights. Experimental REM sleep deprivation has sometimes inhibited memory consolidation, especially regarding complex processes (e.g., how to escape from an elaborate maze). In humans, the best evidence for REM's improvement of memory pertains to learning of procedures—new ways of moving the body (such as trampoline jumping), and new techniques of problem solving. REM deprivation seemed to impair declarative (i.e., factual) memory only in more complex cases, such as memories of longer stories. REM sleep apparently counteracts attempts to suppress certain thoughts. According to the dual-process hypothesis of sleep and memory, the two major phases of sleep correspond to different types of memory. "Night half" studies have tested this hypothesis with memory tasks either begun before sleep and assessed in the middle of the night, or begun in the middle of the night and assessed in the morning. Slow-wave sleep, part of non-REM sleep, appears to be important for declarative memory. Artificial enhancement of the non-REM sleep improves the next-day recall of memorized pairs of words. Tucker et al. demonstrated that a daytime nap containing solely non-REM sleep enhances declarative memory—but not procedural memory. According to the sequential hypothesis, the two types of sleep work together to consolidate memory. 
Sleep researcher Jerome Siegel has observed that extreme REM deprivation does not significantly interfere with memory. One case study of an individual who had little or no REM sleep due to a shrapnel injury to the brainstem did not find the individual's memory to be impaired. Antidepressants, which suppress REM sleep, show no evidence of impairing memory and may improve it. Graeme Mitchison and Francis Crick proposed in 1983 that by virtue of its inherent spontaneous activity, the function of REM sleep "is to remove certain undesirable modes of interaction in networks of cells in the cerebral cortex"—a process they characterize as "unlearning". As a result, those memories which are relevant (whose underlying neuronal substrate is strong enough to withstand such spontaneous, chaotic activation) are further strengthened, whilst weaker, transient, "noise" memory traces disintegrate. Memory consolidation during paradoxical sleep is specifically correlated with the periods of rapid eye movement, which do not occur continuously. One explanation for this correlation is that the PGO electrical waves, which precede the eye movements, also influence memory. REM sleep could provide a unique opportunity for "unlearning" to occur in the basic neural networks involved in homeostasis, which are protected from this "synaptic downscaling" effect during deep sleep. Neural ontogeny REM sleep prevails most after birth, and diminishes with age. According to the "ontogenetic hypothesis", REM (also known in neonates as active sleep) aids the developing brain by providing the neural stimulation that newborns need to form mature neural connections. Sleep deprivation studies have shown that deprivation early in life can result in behavioral problems, permanent sleep disruption, and decreased brain mass. The strongest evidence for the ontogenetic hypothesis comes from experiments on REM deprivation, and from the development of the visual system in the lateral geniculate nucleus and primary visual cortex. Defensive immobilization Ioannis Tsoukalas of Stockholm University has hypothesized that REM sleep is an evolutionary transformation of a well-known defensive mechanism, the tonic immobility reflex. This reflex, also known as animal hypnosis or death feigning, functions as the last line of defense against an attacking predator and consists of the total immobilization of the animal so that it appears dead. Tsoukalas argues that the neurophysiology and phenomenology of this reaction shows striking similarities to REM sleep; for example, both reactions exhibit brainstem control, cholinergic neurotransmission, paralysis, hippocampal theta rhythm, and thermoregulatory changes. Shift of gaze According to "scanning hypothesis", the directional properties of REM sleep are related to a shift of gaze in dream imagery. Against this hypothesis is that such eye movements occur in those born blind and in fetuses in spite of lack of vision. Also, binocular REMs are non-conjugated (i.e., the two eyes do not point in the same direction at a time) and so lack a fixation point. In support of this theory, research finds that in goal-oriented dreams, eye gaze is directed towards the dream action, determined from correlations in the eye and body movements of REM sleep behavior disorder patients who enact their dreams. Oxygen supply to cornea Dr. David M. 
Maurice, an eye specialist and former adjunct professor at Columbia University, proposed that REM sleep was associated with oxygen supply to the cornea, and that aqueous humor, the liquid between cornea and iris, was stagnant if not stirred. Among the supportive evidence, he calculated that if aqueous humor was stagnant, oxygen from the iris had to reach the cornea by diffusion through aqueous humor, which was not sufficient. According to the theory, when the organism is awake, eye movement (or cool environmental temperature) enables the aqueous humor to circulate. When the organism is sleeping, REM provides the much needed stir to aqueous humor. This theory is consistent with the observation that fetuses, as well as eye-sealed newborn animals, spend much time in REM sleep, and that during a normal sleep, a person's REM sleep episodes become progressively longer deeper into the night. However, owls experience REM sleep, but do not move their head more than in non-REM sleep and it is well known that owls' eyes are nearly immobile. Other theories Another theory suggests that monoamine shutdown is required so that the monoamine receptors in the brain can recover to regain full sensitivity. The sentinel hypothesis of REM sleep was put forward by Frederick Snyder in 1966. It is based upon the observation that REM sleep in several mammals (the rat, the hedgehog, the rabbit, and the rhesus monkey) is followed by a brief awakening. This does not occur for either cats or humans, although humans are more likely to wake from REM sleep than from NREM sleep. Snyder hypothesized that REM sleep activates an animal periodically, to scan the environment for possible predators. This hypothesis does not explain the muscle paralysis of REM sleep; however, a logical analysis might suggest that the muscle paralysis exists to prevent the animal from fully waking up unnecessarily, and allowing it to return easily to deeper sleep. Jim Horne, a sleep researcher at Loughborough University, has suggested that REM in modern humans compensates for the reduced need for wakeful food foraging. Other theories are that REM sleep warms the brain, stimulates and stabilizes the neural circuits that have not been activated during waking, or creates internal stimulation to aid development of the CNS; while some argue that REM lacks any purpose, and simply results from random brain activation. Furthermore, eye movements are also theorized to play a role in certain psychotherapies such as eye movement desensitization and reprocessing (EMDR). See also Neuroscience of sleep Pedunculopontine nucleus (PPN) Sleep and learning References Further reading External links PBS' NOVA episode "What Are Dreams?" Video and Transcript LSDBase – an open sleep research database with images of REM sleep recordings. Dream Sleep physiology Neurophysiology Articles containing video clips
Rapid eye movement sleep
[ "Biology" ]
7,079
[ "Dream", "Behavior", "Sleep physiology", "Sleep" ]
167,258
https://en.wikipedia.org/wiki/Dome
A dome () is an architectural element similar to the hollow upper half of a sphere. There is significant overlap with the term cupola, which may also refer to a dome or a structure on top of a dome. The precise definition of a dome has been a matter of controversy and there are a wide variety of forms and specialized terms to describe them. A dome can rest directly upon a rotunda wall, a drum, or a system of squinches or pendentives used to accommodate the transition in shape from a rectangular or square space to the round or polygonal base of the dome. The dome's apex may be closed or may be open in the form of an oculus, which may itself be covered with a roof lantern and cupola. Domes have a long architectural lineage that extends back into prehistory. Domes were built in ancient Mesopotamia, and they have been found in Persian, Hellenistic, Roman, and Chinese architecture in the ancient world, as well as among a number of indigenous building traditions throughout the world. Dome structures were common in both Byzantine architecture and Sasanian architecture, which influenced that of the rest of Europe and Islam in the Middle Ages. The domes of European Renaissance architecture spread from Italy in the early modern period, while domes were frequently employed in Ottoman architecture at the same time. Baroque and Neoclassical architecture took inspiration from Roman domes. Advancements in mathematics, materials, and production techniques resulted in new dome types. Domes have been constructed over the centuries from mud, snow, stone, wood, brick, concrete, metal, glass, and plastic. The symbolism associated with domes includes mortuary, celestial, and governmental traditions that have likewise altered over time. The domes of the modern world can be found over religious buildings, legislative chambers, sports stadiums, and a variety of functional structures. Etymology The English word "dome" ultimately derives from the ancient Greek and Latin domus ("house"), which, up through the Renaissance, labeled a revered house, such as a Domus Dei, or "House of God", regardless of the shape of its roof. This is reflected in the uses of the Italian word duomo, the German/Icelandic/Danish word dom ("cathedral"), and the English word dome as late as 1656, when it meant a "Town-House, Guild-Hall, State-House, and Meeting-House in a city." The French word dosme came to acquire the meaning of a cupola vault, specifically, by 1660. This French definition gradually became the standard usage of the English dome in the eighteenth century as many of the most impressive Houses of God were built with monumental domes, and in response to the scientific need for more technical terms. Definitions Across the ancient world, curved-roof structures that would today be called domes had a number of different names reflecting a variety of shapes, traditions, and symbolic associations. The shapes were derived from traditions of pre-historic shelters made from various impermanent pliable materials and were only later reproduced as vaulting in more durable materials. The hemispherical shape often associated with domes today derives from Greek geometry and Roman standardization, but other shapes persisted, including a pointed and bulbous tradition inherited by some early Islamic mosques. Modern academic study of the topic has been controversial and confused by inconsistent definitions, such as those for cloister vaults and domical vaults. Dictionary definitions of the term "dome" are often general and imprecise. 
Generally-speaking, it "is non-specific, a blanket-word to describe an hemispherical or similar spanning element." Published definitions include: hemispherical roofs alone; revolved arches; and vaults on a circular base alone, circular or polygonal base, circular, elliptical, or polygonal base, or an undefined area. Definitions specifying vertical sections include: semicircular, pointed, or bulbous; semicircular, segmental or pointed; semicircular, segmental, pointed, or bulbous; semicircular, segmental, elliptical, or bulbous; and high profile, hemispherical, or flattened. Sometimes called "false" domes, corbel domes achieve their shape by extending each horizontal layer of stones inward slightly farther than the lower one until they meet at the top. A "false" dome may also refer to a wooden dome. The Italian use of the term finto, meaning "false", can be traced back to the 17th century in the use of vaulting made of reed mats and gypsum mortar. "True" domes are said to be those whose structure is in a state of compression, with constituent elements of wedge-shaped voussoirs, the joints of which align with a central point. The validity of this is unclear, as domes built underground with corbelled stone layers are in compression from the surrounding earth. The precise definition of "pendentive" has also been a source of academic contention, such as whether or not corbelling is permitted under the definition and whether or not the lower portions of a sail vault should be considered pendentives. Domes with pendentives can be divided into two kinds: simple and compound. In the case of the simple dome, the pendentives are part of the same sphere as the dome itself; however, such domes are rare. In the case of the more common compound dome, the pendentives are part of the surface of a larger sphere below that of the dome itself and form a circular base for either the dome or a drum section. The fields of engineering and architecture have lacked common language for domes, with engineering focused on structural behavior and architecture focused on form and symbolism. Additionally, new materials and structural systems in the 20th century have allowed for large dome-shaped structures that deviate from the traditional compressive structural behavior of masonry domes. Popular usage of the term has expanded to mean "almost any long-span roofing system". Elements The word "cupola" is another word for "dome", and is usually used for a small dome upon a roof or turret. "Cupola" has also been used to describe the inner side of a dome. The top of a dome is the "crown". The inner side of a dome is called the "intrados" and the outer side is called the "extrados". As with arches, the "springing" of a dome is the base level from which the dome rises and the "haunch" is the part that lies roughly halfway between the base and the top. Domes can be supported by an elliptical or circular wall called a "drum". If this structure extends to ground level, the round building may be called a "rotunda". Drums are also called "tholobates" and may or may not contain windows. A "tambour" or "lantern" is the equivalent structure over a dome's oculus, supporting a cupola. When the base of the dome does not match the plan of the supporting walls beneath it (for example, a dome's circular base over a square bay), techniques are employed to bridge the two. 
One technique is to use corbelling, progressively projecting horizontal layers from the top of the supporting wall to the base of the dome, such as the corbelled triangles often used in Seljuk and Ottoman architecture. The simplest technique is to use diagonal lintels across the corners of the walls to create an octagonal base. Another is to use arches to span the corners, which can support more weight. A variety of these techniques use what are called "squinches". A squinch can be a single arch or a set of multiple projecting nested arches placed diagonally over an internal corner. Squinch forms also include trumpet arches, niche heads (or half-domes), trumpet arches with "anteposed" arches, and muqarnas arches. Squinches transfer the weight of a dome across the gaps created by the corners and into the walls. Pendentives are triangular sections of a sphere, like concave spandrels between arches, and transition from the corners of a square bay to the circular base of a dome. The curvature of the pendentives is that of a sphere with a diameter equal to the diagonal of the square bay. Pendentives concentrate the weight of a dome into the corners of the bay. Materials The earliest domes in the Middle East were built with mud-brick and, eventually, with baked brick and stone. Domes of wood allowed for wide spans due to the relatively light and flexible nature of the material and were the normal method for domed churches by the 7th century, although most domes were built with the other less flexible materials. Wooden domes were protected from the weather by roofing, such as copper or lead sheeting. Domes of cut stone were more expensive and never as large, and timber was used for large spans where brick was unavailable. Roman concrete used an aggregate of stone with a powerful mortar. The aggregate transitioned over the centuries to pieces of fired clay, then to Roman bricks. By the sixth century, bricks with large amounts of mortar were the principal vaulting materials. Pozzolana appears to have only been used in central Italy. Brick domes were the favored choice for large-space monumental coverings until the Industrial Age, due to their convenience and dependability. Ties and chains of iron or wood could be used to resist stresses. In the Middle East and Central Asia, domes and drums constructed from mud brick and baked brick were sometimes covered with brittle ceramic tiles on the exterior to protect against rain and snow. The new building materials of the 19th century and a better understanding of the forces within structures from the 20th century opened up new possibilities. Iron and steel beams, steel cables, and pre-stressed concrete eliminated the need for external buttressing and enabled much thinner domes. Whereas earlier masonry domes may have had a radius to thickness ratio of 50, the ratio for modern domes can be in excess of 800. The lighter weight of these domes not only permitted far greater spans, but also allowed for the creation of large movable domes over modern sports stadiums. Experimental rammed earth domes were made as part of work on sustainable architecture at the University of Kassel in 1983. Shapes and internal forces A masonry dome produces thrusts downward and outward. 
They are thought of in terms of two kinds of forces at right angles from one another: meridional forces (like the meridians, or lines of longitude, on a globe) are compressive only, and increase towards the base, while hoop forces (like the lines of latitude on a globe) are in compression at the top and tension at the base, with the transition in a hemispherical dome occurring at an angle of 51.8 degrees from the top. The thrusts generated by a dome are directly proportional to the weight of its materials. Grounded hemispherical domes generate significant horizontal thrusts at their haunches. The outward thrusts in the lower portion of a hemispherical masonry dome can be counteracted with the use of chains incorporated around the circumference or with external buttressing, although cracking along the meridians is natural. For small or tall domes with less horizontal thrust, the thickness of the supporting arches or walls can be enough to resist deformation, which is why drums tend to be much thicker than the domes they support. Unlike voussoir arches, which require support for each element until the keystone is in place, domes are stable during construction as each level is made a complete and self-supporting ring. The upper portion of a masonry dome is always in compression and is supported laterally, so it does not collapse except as a whole unit and a range of deviations from the ideal in this shallow upper cap are equally stable. Because voussoir domes have lateral support, they can be made much thinner than corresponding arches of the same span. For example, a hemispherical dome can be 2.5 times thinner than a semicircular arch, and a dome with the profile of an equilateral arch can be thinner still. The optimal shape for a masonry dome of equal thickness provides for perfect compression, with none of the tension or bending forces against which masonry is weak. For a particular material, the optimal dome geometry is called the funicular surface, the comparable shape in three dimensions to a catenary curve for a two-dimensional arch. Adding a weight to the top of a pointed dome, such as the heavy cupola at the top of Florence Cathedral, changes the optimal shape to more closely match the actual pointed shape of the dome. The pointed profiles of many Gothic domes more closely approximate the optimal dome shape than do hemispheres, which were favored by Roman and Byzantine architects due to the circle being considered the most perfect of forms. Symbolism According to E. Baldwin Smith, from the late Stone Age the dome-shaped tomb was used as a reproduction of the ancestral, god-given shelter made permanent as a venerated home of the dead. The instinctive desire to do this resulted in widespread domical mortuary traditions across the ancient world, from the stupas of India to the tholos tombs of Iberia. By Hellenistic and Roman times, the domical tholos had become the customary cemetery symbol. Domes and tent-canopies were also associated with the heavens in Ancient Persia and the Hellenistic-Roman world. A dome over a square base reflected the geometric symbolism of those shapes. The circle represented perfection, eternity, and the heavens. The square represented the earth. An octagon was intermediate between the two. The distinct symbolism of the heavenly or cosmic tent stemming from the royal audience tents of Achaemenid and Indian rulers was adopted by Roman rulers in imitation of Alexander the Great, becoming the imperial baldachin. 
This probably began with Nero, whose "Golden House" also made the dome a feature of palace architecture. The dual sepulchral and heavenly symbolism was adopted by early Christians in both the use of domes in architecture and in the ciborium, a domical canopy like the baldachin used as a ritual covering for relics or the church altar. The celestial symbolism of the dome, however, was the preeminent one by the Christian era. In the early centuries of Islam, domes were closely associated with royalty. A dome built in front of the mihrab of a mosque, for example, was at least initially meant to emphasize the place of a prince during royal ceremonies. Over time such domes became primarily focal points for decoration or the direction of prayer. The use of domes in mausoleums can likewise reflect royal patronage or be seen as representing the honor and prestige that domes symbolized, rather than having any specific funerary meaning. The wide variety of dome forms in medieval Islam reflected dynastic, religious, and social differences as much as practical building considerations. Acoustics Because domes are concave from below, they can reflect sound and create echoes. A dome may have a "whispering gallery" at its base that at certain places transmits distinct sound to other distant places in the gallery. The half-domes over the apses of Byzantine churches helped to project the chants of the clergy. Although this can complement music, it may make speech less intelligible, leading Francesco Giorgi in 1535 to recommend vaulted ceilings for the choir areas of a church, but a flat ceiling filled with as many coffers as possible for where preaching would occur. Cavities in the form of jars built into the inner surface of a dome may serve to compensate for this interference by diffusing sound in all directions, eliminating echoes while creating a "divine effect in the atmosphere of worship." This technique was written about by Vitruvius in his Ten Books on Architecture, which describes bronze and earthenware resonators. The material, shape, contents, and placement of these cavity resonators determine the effect they have: reinforcing certain frequencies or absorbing them. Types Beehive dome Also called a corbelled dome, cribbed dome, or false dome, these are different from a 'true dome' in that they consist of purely horizontal layers. As the layers get higher, each is slightly cantilevered, or corbeled, toward the center until meeting at the top. A monumental example is the Mycenaean Treasury of Atreus from the late Bronze Age. Braced dome A single or double layer space frame in the form of a dome, a braced dome is a generic term that includes ribbed, Schwedler, three-way grid, lamella or Kiewitt, lattice, and geodesic domes. The different terms reflect different arrangements in the surface members. Braced domes often have a very low weight and are usually used to cover spans of up to 150 meters. Often prefabricated, their component members can either lie on the dome's surface of revolution, or be straight lengths with the connecting points or nodes lying upon the surface of revolution. Single-layer structures are called frame or skeleton types and double-layer structures are truss types, which are used for large spans. When the covering also forms part of the structural system, it is called a stressed skin type. The formed surface type consists of sheets joined at bent edges to form the structure. 
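As a purely illustrative aside on how the nodes of such a braced frame can be laid out, the short Python sketch below subdivides one triangular face of an icosahedron and projects the resulting grid onto the unit sphere, which is the basic geometric step behind single-layer, geodesic-style braced domes. The function name geodesic_nodes, the freq parameter and the use of NumPy are assumptions made here for illustration; they are not drawn from the article or from any particular design tool.

import numpy as np

def geodesic_nodes(a, b, c, freq):
    # Barycentrically subdivide the flat triangle (a, b, c) at the given
    # frequency, then push every grid point out to the unit sphere. The
    # points are candidate nodes of a braced dome; straight struts between
    # neighbouring nodes would form the single-layer frame.
    a, b, c = (np.asarray(v, dtype=float) for v in (a, b, c))
    nodes = []
    for i in range(freq + 1):
        for j in range(freq + 1 - i):
            k = freq - i - j
            p = (i * a + j * b + k * c) / freq   # point on the flat face
            nodes.append(p / np.linalg.norm(p))  # projected onto the sphere
    return np.array(nodes)

# Example: one face of an icosahedron (golden-ratio coordinates), frequency 3.
phi = (1 + 5 ** 0.5) / 2
face = [(0, 1, phi), (0, -1, phi), (phi, 0, 1)]
print(geodesic_nodes(*face, freq=3).round(3))

Repeating this for all twenty faces and merging the duplicated edge nodes yields the 92-node, frequency-3 geodesic sphere; a geodesic dome is simply the part of such a sphere above a chosen cut plane.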
Cloister vault Also called domical vaults (a term sometimes also applied to sail vaults), polygonal domes, coved domes, gored domes, segmental domes (a term sometimes also used for saucer domes), paneled vaults, or pavilion vaults, these are domes that maintain a polygonal shape in their horizontal cross section. The component curved surfaces of these vaults are called severies, webs, or cells. The earliest known examples date to the first century BC, such as the Tabularium of Rome from 78 BC. Others include the Baths of Antoninus in Carthage (145–160) and the Palatine Chapel at Aachen (13th – 14th century). The most famous example is the Renaissance octagonal dome of Filippo Brunelleschi over the Florence Cathedral. Thomas Jefferson, the third president of the United States, installed an octagonal dome above the West front of his plantation house, Monticello. Compound dome Also called domes on pendentives or pendentive domes (a term also applied to sail vaults), compound domes have pendentives that support a smaller diameter dome immediately above them, as in the Hagia Sophia, or a drum and dome, as in many Renaissance and post-Renaissance domes, with both forms resulting in greater height. Crossed-arch dome One of the earliest types of ribbed vault, the first known examples are found in the Great Mosque of Córdoba in the 10th century. Rather than meeting in the center of the dome, the ribs characteristically intersect one another off-center, forming an empty polygonal space in the center. Geometry is a key element of the designs, with the octagon being perhaps the most popular shape used. Whether the arches are structural or purely decorative remains a matter of debate. The type may have an eastern origin, although the issue is also unsettled. Examples are found in Spain, North Africa, Armenia, Iran, France, and Italy. Ellipsoidal dome The ellipsoidal dome is a surface formed by the rotation around a vertical axis of a semi-ellipse. Like other "rotational domes" formed by the rotation of a curve around a vertical axis, ellipsoidal domes have circular bases and horizontal sections and are a type of "circular dome" for that reason. Geodesic dome Geodesic domes are the upper portion of geodesic spheres. They are composed of a framework of triangles in a polyhedron pattern. The structures are named for geodesics and are based upon geometric shapes such as icosahedrons, octahedrons or tetrahedrons. Such domes can be created using a limited number of simple elements and joints and efficiently resolve a dome's internal forces. Their efficiency is said to increase with size. Although not first invented by Buckminster Fuller, they are associated with him because he designed many geodesic domes and patented them in the United States. Hemispherical dome The hemispherical dome is a surface formed by the rotation around a vertical axis of a semicircle. Like other "rotational domes" formed by the rotation of a curve around a vertical axis, hemispherical domes have circular bases and horizontal sections and are a type of "circular dome" for that reason. They experience vertical compression along their meridians, but horizontally experience compression only in the portion above 51.8 degrees from the top. Below this point, hemispherical domes experience tension horizontally, and usually require buttressing to counteract it. According to E. Baldwin Smith, it was a shape likely known to the Assyrians, defined by Greek theoretical mathematicians, and standardized by Roman builders. 
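The 51.8 degree figure quoted above for hemispherical domes can be recovered from thin-shell membrane theory. The following is a standard textbook sketch written under simplifying assumptions chosen here (a thin hemispherical shell of radius a loaded only by its own weight q per unit surface area, with the angle \varphi measured down from the crown); it is not taken from the article's sources.

N_\varphi = -\frac{aq}{1+\cos\varphi}, \qquad N_\theta = aq\left(\frac{1}{1+\cos\varphi} - \cos\varphi\right)

The meridional force N_\varphi is compressive for every \varphi, while the hoop force N_\theta changes from compression to tension where \frac{1}{1+\cos\varphi} = \cos\varphi, that is \cos^{2}\varphi + \cos\varphi - 1 = 0, giving \cos\varphi = \frac{\sqrt{5}-1}{2} \approx 0.618 and \varphi \approx 51.8^\circ. Below that level the hoop tension must be resisted by the masonry itself, by circumferential chains, or by external buttressing, as described above.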
Onion dome Bulbous domes bulge out beyond their base diameters, offering a profile greater than a hemisphere. An onion dome is a greater than hemispherical dome with a pointed top in an ogee profile. They are found in the Near East, Middle East, Persia, and India and may not have had a single point of origin. Their appearance in northern Russian architecture predates the Tatar occupation of Russia and so is not easily explained as the result of that influence. They became popular in the second half of the 15th century in the Low Countries of Northern Europe, possibly inspired by the finials of minarets in Egypt and Syria, and developed in the 16th and 17th centuries in the Netherlands before spreading to Germany, becoming a popular element of the baroque architecture of Central Europe. German bulbous domes were also influenced by Russian and Eastern European domes. The examples found in various European architectural styles are typically wooden. Examples include Kazan Church in Kolomenskoye and the Brighton Pavilion by John Nash. In Islamic architecture, they are typically made of masonry, rather than timber, with the thick and heavy bulging portion serving to buttress against the tendency of masonry domes to spread at their bases. The Taj Mahal is a famous example. Oval dome An oval dome is a dome of oval shape in plan, profile, or both. The term comes from the Latin ovum, meaning "egg". The earliest oval domes were used by convenience in corbelled stone huts as rounded but geometrically undefined coverings, and the first examples in Asia Minor date to around 4000 B.C. The geometry was eventually defined using combinations of circular arcs, transitioning at points of tangency. If the Romans created oval domes, it was only in exceptional circumstances. The Roman foundations of the oval plan Church of St. Gereon in Cologne point to a possible example. Domes in the Middle Ages also tended to be circular, though the church of Santo Tomás de las Ollas in Spain has an oval dome over its oval plan. Other examples of medieval oval domes can be found covering rectangular bays in churches. Oval plan churches became a type in the Renaissance and popular in the Baroque style. The dome built for the basilica of Vicoforte by Francesco Gallo was one of the largest and most complex ever made. Although the ellipse was known, in practice, domes of this shape were created by combining segments of circles. Popular in the 16th and 17th centuries, oval and elliptical plan domes can vary their dimensions in three axes or two axes. A sub-type with the long axis having a semicircular section is called a Murcia dome, as in the Chapel of the Junterones at Murcia Cathedral. When the short axis has a semicircular section, it is called a Melon dome. Paraboloid dome A paraboloid dome is a surface formed by the rotation around a vertical axis of a sector of a parabola. Like other "rotational domes" formed by the rotation of a curve around a vertical axis, paraboloid domes have circular bases and horizontal sections and are a type of "circular dome" for that reason. Because of their shape, paraboloid domes experience only compression, both radially and horizontally. 
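The compression-only behaviour of the paraboloid can be connected to the funicular idea mentioned in the section on internal forces. The derivation below is a simplified illustration under assumptions chosen here (a load w uniform per unit of horizontal span for the plane case, and a load p uniform over the horizontal projection for the shell); it is not a claim about any particular building.

For a perfectly flexible cable with horizontal tension H carrying a load w per unit of horizontal span, vertical equilibrium gives H\,y''(x) = w, so y(x) = \frac{w}{2H}x^{2} + C: the funicular of a plan-uniform load is a parabola. Inverting the curve exchanges pure tension for pure compression, and revolving it about its axis gives the paraboloid. For the corresponding thin shell z = \frac{r^{2}}{4f} under a load p uniform over its horizontal projection, membrane theory gives

N_\varphi = -\frac{p}{2}\sqrt{r^{2}+4f^{2}}, \qquad N_\theta = -\frac{2pf^{2}}{\sqrt{r^{2}+4f^{2}}},

both compressive at every radius and equal to -pf at the crown, consistent with the statement above.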
Sail dome Also called sail vaults, handkerchief vaults, domical vaults (a term sometimes also applied to cloister vaults), pendentive domes (a term that has also been applied to compound domes), Bohemian vaults, or Byzantine domes, this type can be thought of as pendentives that, rather than merely touching each other to form a circular base for a drum or compound dome, smoothly continue their curvature to form the dome itself. The dome gives the impression of a square sail pinned down at each corner and billowing upward. These can also be thought of as saucer domes upon pendentives. Sail domes are based upon the shape of a hemisphere and are not to be confused with elliptic parabolic vaults, which appear similar but have different characteristics. In addition to semicircular sail vaults there are variations in geometry such as a low rise to span ratio or covering a rectangular plan. Sail vaults of all types have a variety of thrust conditions along their borders, which can cause problems, but have been widely used from at least the sixteenth century. The second floor of the Llotja de la Seda is covered by a series of nine meter wide sail vaults. Saucer dome Also called segmental domes (a term sometimes also used for cloister vaults), or calottes, these have profiles of less than half a circle. Because they reduce the portion of the dome in tension, these domes are strong but have increased radial thrust. Many of the largest existing domes are of this shape. Masonry saucer domes, because they exist entirely in compression, can be built much thinner than other dome shapes without becoming unstable. The trade-off between the proportionately increased horizontal thrust at their abutments and their decreased weight and quantity of materials may make them more economical, but they are more vulnerable to damage from movement in their supports. Umbrella dome Also called gadrooned, fluted, organ-piped, pumpkin, melon, ribbed, parachute, scalloped, or lobed domes, these are a type of dome divided at the base into curved segments, which follow the curve of the elevation. "Fluted" may refer specifically to this pattern as an external feature, such as was common in Mamluk Egypt. The "ribs" of a dome are the radial lines of masonry that extend from the crown down to the springing. The central dome of the Hagia Sophia uses the ribbed method, which accommodates a ring of windows between the ribs at the base of the dome. The central dome of St. Peter's Basilica also uses this method. History Early history and simple domes Cultures from pre-history to modern times constructed domed dwellings using local materials. Although it is not known when the first dome was created, sporadic examples of early domed structures have been discovered. The earliest discovered may be four small dwellings made of Mammoth tusks and bones. The first was found by a farmer in Mezhirich, Ukraine, in 1965 while he was digging in his cellar and archaeologists unearthed three more. They date from 19,280 – 11,700 BC. In modern times, the creation of relatively simple dome-like structures has been documented among various indigenous peoples around the world. The wigwam was made by Native Americans using arched branches or poles covered with grass or hides. The Efé people of central Africa construct similar structures, using leaves as shingles. Another example is the igloo, a shelter built from blocks of compact snow and used by the Inuit, among others. 
The Himba people of Namibia construct "desert igloos" of wattle and daub for use as temporary shelters at seasonal cattle camps, and as permanent homes by the poor. Extraordinarily thin domes of sun-baked clay 20 feet in diameter, 30 feet high, and nearly parabolic in curve, are known from Cameroon. The historical development from structures like these to more sophisticated domes is not well documented. That the dome was known to early Mesopotamia may explain the existence of domes in both China and the West in the first millennium BC. Another explanation, however, is that the use of the dome shape in construction did not have a single point of origin and was common in virtually all cultures long before domes were constructed with enduring materials. Corbelled stone domes have been found from the Neolithic period in the ancient Near East, and in the Middle East to Western Europe from antiquity. The kings of Achaemenid Persia held audiences and festivals in domical tents derived from the nomadic traditions of central Asia. Simple domical mausoleums existed in the Hellenistic period. Indian bas-relief sculptures from Sāñcī (1st century BC), Bhārhut (2nd century BC), and Amarāvatī (2nd century BC), show domed huts, shrines, and pavilions. The remains of a large domed circular hall in the Parthian capital city of Nyssa has been dated to perhaps the first century AD, showing "...the existence of a monumental domical tradition in Central Asia that had hitherto been unknown and which seems to have preceded Roman Imperial monuments or at least to have grown independently from them." It likely had a wooden dome. Persian domes Persian architecture likely inherited an architectural tradition of dome-building dating back to the earliest Mesopotamian domes. Due to the scarcity of wood in many areas of the Iranian plateau and Greater Iran, domes were an important part of vernacular architecture throughout Persian history. The Persian invention of the squinch, a series of concentric arches forming a half-cone over the corner of a room, enabled the transition from the walls of a square chamber to an octagonal base for a dome in a way reliable enough for large constructions and domes moved to the forefront of Persian architecture as a result. Pre-Islamic domes in Persia are commonly semi-elliptical, with pointed domes and those with conical outer shells being the majority of the domes in the Islamic periods. The area of north-eastern Iran was, along with Egypt, one of two areas notable for early developments in Islamic domed mausoleums, which appear in the tenth century. The Samanid Mausoleum in Transoxiana dates to no later than 943 and is the first to have squinches create a regular octagon as a base for the dome, which then became the standard practice. Cylindrical or polygonal plan tower tombs with conical roofs over domes also exist beginning in the 11th century. The Seljuk Empire's notables built tomb-towers, called "Turkish Triangles", as well as cube mausoleums covered with a variety of dome forms. Seljuk domes included conical, semi-circular, and pointed shapes in one or two shells. Shallow semi-circular domes are mainly found from the Seljuk era. The double-shell domes were either discontinuous or continuous. The domed enclosure of the Jameh Mosque of Isfahan, built in 1086-7 by Nizam al-Mulk, was the largest masonry dome in the Islamic world at that time, had eight ribs, and introduced a new form of corner squinch with two quarter domes supporting a short barrel vault. 
In 1088 Tāj-al-Molk, a rival of Nizam al-Mulk, built another dome at the opposite end of the same mosque with interlacing ribs forming five-pointed stars and pentagons. This is considered the landmark Seljuk dome, and may have inspired subsequent patterning and the domes of the Il-Khanate period. The use of tile and of plain or painted plaster to decorate dome interiors, rather than brick, increased under the Seljuks. Beginning in the Ilkhanate, Persian domes achieved their final configuration of structural supports, zone of transition, drum, and shells, and subsequent evolution was restricted to variations in form and shell geometry. Characteristic of these domes are the use of high drums and several types of discontinuous double-shells, and the development of triple-shells and internal stiffeners occurred at this time. The construction of tomb towers decreased. The 7.5 meter wide double dome of Soltan Bakht Agha Mausoleum (1351–1352) is the earliest known example in which the two shells of the dome have significantly different profiles, which spread rapidly throughout the region. The development of taller drums also continued into the Timurid period. The large, bulbous, fluted domes on tall drums that are characteristic of 15th century Timurid architecture were the culmination of the Central Asian and Iranian tradition of tall domes with glazed tile coverings in blue and other colors. The domes of the Safavid dynasty (1501–1732) are characterized by a distinctive bulbous profile and are considered the last generation of Persian domes. They are generally thinner than earlier domes and are decorated with a variety of colored glazed tiles and complex vegetal patterns, and they were influential on those of other Islamic styles, such as the Mughal architecture of India. An exaggerated style of onion dome on a short drum, as can be seen at the Shah Cheragh (1852–1853), first appeared in the Qajar period. Domes have remained important in modern mausoleums, and domed cisterns and icehouses remain common sights in the countryside. East Asian domes Very little has survived of ancient Chinese architecture, due to the extensive use of timber as a building material. Brick and stone vaults used in tomb construction have survived, and the corbeled dome was used, rarely, in tombs and temples. The earliest true domes found in Chinese tombs were shallow cloister vaults, called simian jieding, derived from the Han use of barrel vaulting. Unlike the cloister vaults of western Europe, the corners are rounded off as they rise. The first known example is a brick tomb dating from the end of the Western Han period, near the modern city of Xiangcheng in Henan Province. These four-sided domes used small interlocking bricks and enabled a square space near the entrance of a tomb large enough for several people that may have been used for funeral ceremonies. The interlocking brick technique was rapidly adopted and four-sided domes became widespread outside Henan by the end of the first century AD. A model of a tomb found with a shallow true dome from the late Han dynasty (206 BC – 220 AD) can be seen at the Guangzhou Museum (Canton). Another, the Lei Cheng Uk Han Tomb, found in Hong Kong in 1955, has a design common among Eastern Han dynasty (25 AD – 220 AD) tombs in South China: a barrel vaulted entrance leading to a domed front hall with barrel vaulted chambers branching from it in a cross shape. It is the only such tomb that has been found in Hong Kong and is exhibited as part of the Hong Kong Museum of History. 
During the Three Kingdoms period (220–280), the "cross-joint dome" (siyuxuanjinshi) was developed under the Wu and Western Jin dynasties south of the Yangtze River, with arcs building out from the corners of a square room until they met and joined at the center. These domes were stronger, had a steeper angle, and could cover larger areas than the relatively shallow cloister vaults. Over time, they were made taller and wider. There were also corbel vaults, called diese, although these are the weakest type. Some tombs of the Song dynasty (960–1279) have beehive domes. The Seokguram Grotto (751), built in the Korean city of Gyeongju during the Unified Silla period, includes a domed chamber 7.2 meters wide covering a statue of the Buddha. The dome is made from blocks of granite, with the flat cap of the dome decorated with a lotus flower motif. The dome is unique in north-east Asia. The Buddhist monastery Baoguo near Ningbo has three domes dated to 1013. The Daoist monastery Yongle Gong in Shanxi has domes in its Hall of the Three Purities, from the 13th century. The Fenghuang Mosque in Hangzhou has three domes along its back wall dating to the Yuan dynasty. The central dome is 8 meters in diameter and covered by an octagonal roof. The north and south flanking domes are 6.8 meters and 7.2 meters wide, respectively, and covered by hexagonal roofs. The zones of transition under the domes use a tiered system similar to muqarnas or the corner bracketing found in Chinese temples. Roman and Byzantine domes Roman domes are found in baths, villas, palaces, and tombs. Oculi are common features. They are customarily hemispherical in shape and partially or totally concealed on the exterior. To buttress the horizontal thrusts of a large hemispherical masonry dome, the supporting walls were built up beyond the base to at least the haunches of the dome, and the dome was then also sometimes covered with a conical or polygonal roof. Domes reached monumental size in the Roman Imperial period. Roman baths played a leading role in the development of domed construction in general, and monumental domes in particular. Modest domes in baths dating from the 2nd and 1st centuries BC are seen in Pompeii, in the cold rooms of the Terme Stabiane and the Terme del Foro. However, the extensive use of domes did not occur before the 1st century AD. The growth of domed construction increased under Emperor Nero and the Flavians in the 1st century AD, and during the 2nd century. Centrally-planned halls became increasingly important parts of palace and palace villa layouts beginning in the 1st century, serving as state banqueting halls, audience rooms, or throne rooms. The Pantheon, a temple in Rome completed by Emperor Hadrian as part of the Baths of Agrippa, is the most famous, best preserved, and largest Roman dome. Segmented domes, made of radially concave wedges or of alternating concave and flat wedges, appear under Hadrian in the 2nd century and most preserved examples of this style date from this period. In the 3rd century, Imperial mausoleums began to be built as domed rotundas, rather than as tumulus structures or other types, following similar monuments by private citizens. The technique of building lightweight domes with interlocking hollow ceramic tubes further developed in North Africa and Italy in the late third and early fourth centuries. In the 4th century, Roman domes proliferated due to changes in the way domes were constructed, including advances in centering techniques and the use of brick ribbing. 
The material of choice in construction gradually transitioned during the 4th and 5th centuries from stone or concrete to lighter brick in thin shells. Baptisteries began to be built in the manner of domed mausoleums during the 4th century in Italy. The octagonal Lateran baptistery or the baptistery of the Holy Sepulchre may have been the first, and the style spread during the 5th century. By the 5th century, structures with small-scale domed cross plans existed across the Christian world. With the end of the Western Roman Empire, domes became a signature feature of the church architecture of the surviving Eastern Roman — or "Byzantine" — Empire. 6th-century church building by the Emperor Justinian used the domed cross unit on a monumental scale, and his architects made the domed brick-vaulted central plan standard throughout the Roman east. This divergence with the Roman west from the second third of the 6th century may be considered the beginning of a "Byzantine" architecture. Justinian's Hagia Sophia was an original and innovative design with no known precedents in the way it covers a basilica plan with dome and semi-domes. Periodic earthquakes in the region have caused three partial collapses of the dome and necessitated repairs. "Cross-domed units", a more secure structural system created by bracing a dome on all four sides with broad arches, became a standard element on a smaller scale in later Byzantine church architecture. The Cross-in-square plan, with a single dome at the crossing or five domes in a quincunx pattern, became widely popular in the Middle Byzantine period (c. 843–1204). It is the most common church plan from the tenth century until the fall of Constantinople in 1453. Resting domes on circular or polygonal drums pierced with windows eventually became the standard style, with regional characteristics. In the Byzantine period, domes were normally hemispherical and had, with occasional exceptions, windowed drums. All of the surviving examples in Constantinople are ribbed or pumpkin domes, with the divisions corresponding to the number of windows. Roofing for domes ranged from simple ceramic tile to more expensive, more durable, and more form-fitting lead sheeting. Metal clamps between stone cornice blocks, metal tie rods, and metal chains were also used to stabilize domed construction. The technique of using double shells for domes, although revived in the Renaissance, originated in Byzantine practice. Arabic and Western European domes The Syria and Palestine area has a long tradition of domical architecture, including wooden domes in shapes described as "conoid", or similar to pine cones. When the Arab Muslim forces conquered the region, they employed local craftsmen for their buildings and, by the end of the 7th century, the dome had begun to become an architectural symbol of Islam. In addition to religious shrines, such as the Dome of the Rock, domes were used over the audience and throne halls of Umayyad palaces, and as part of porches, pavilions, fountains, towers and the calderia of baths. Blending the architectural features of both Byzantine and Persian architecture, the domes used both pendentives and squinches and were made in a variety of shapes and materials. Although architecture in the region would decline following the movement of the capital to Iraq under the Abbasids in 750, mosques built after a revival in the late 11th century usually followed the Umayyad model. 
Early versions of bulbous domes can be seen in mosaic illustrations in Syria dating to the Umayyad period. They were used to cover large buildings in Syria after the eleventh century. Italian church architecture from the late sixth century to the end of the eighth century was influenced less by the trends of Constantinople than by a variety of Byzantine provincial plans. With the crowning of Charlemagne as a new Roman Emperor, Byzantine influences were largely replaced in a revival of earlier Western building traditions. Occasional exceptions include examples of early quincunx churches at Milan and near Cassino. Another is the Palatine Chapel. Its domed octagon design was influenced by Byzantine models. It was the largest dome north of the Alps at that time. Venice, Southern Italy and Sicily served as outposts of Middle Byzantine architectural influence in Italy. The Great Mosque of Córdoba contains the first known examples of the crossed-arch dome type. The use of corner squinches to support domes was widespread in Islamic architecture by the 10th and 11th centuries. After the ninth century, mosques in North Africa often have a small decorative dome over the mihrab. Additional domes are sometimes used at the corners of the mihrab wall, at the entrance bay, or on the square tower minarets. Egypt, along with north-eastern Iran, was one of two areas notable for early developments in Islamic mausoleums, beginning in the 10th century. Fatimid mausoleums were mostly simple square buildings covered by a dome. Domes were smooth or ribbed and had a characteristic Fatimid "keel" shape profile. Domes in Romanesque architecture are generally found within crossing towers at the intersection of a church's nave and transept, which conceal the domes externally. They are typically octagonal in plan and use corner squinches to translate a square bay into a suitable octagonal base. They appear "in connection with basilicas almost throughout Europe" between 1050 and 1100. The Crusades, beginning in 1095, also appear to have influenced domed architecture in Western Europe, particularly in the areas around the Mediterranean Sea. The Knights Templar, headquartered at the site, built a series of centrally planned churches throughout Europe modeled on the Church of the Holy Sepulchre, with the Dome of the Rock also an influence. In southwest France, there are over 250 domed Romanesque churches in the Périgord region alone. The use of pendentives to support domes in the Aquitaine region, rather than the squinches more typical of western medieval architecture, strongly implies a Byzantine influence. Gothic domes are uncommon due to the use of rib vaults over naves, and with church crossings usually focused instead by a tall steeple, but there are examples of small octagonal crossing domes in cathedrals as the style developed from the Romanesque. Star-shaped domes found at the Moorish palace of the Alhambra in Granada, Spain, the Hall of the Abencerrajes (c. 1333–91) and the Hall of the two Sisters (c. 1333–54), are extraordinarily developed examples of muqarnas domes. In the first half of the fourteenth century, stone blocks replaced bricks as the primary building material in the dome construction of Mamluk Egypt and, over the course of 250 years, around 400 domes were built in Cairo to cover the tombs of Mamluk sultans and emirs. Dome profiles were varied, with "keel-shaped", bulbous, ogee, stilted domes, and others being used. 
On the drum, angles were chamfered, or sometimes stepped, externally, and triple windows were used in a tri-lobed arrangement on the faces. Bulbous cupolas on minarets were used in Egypt beginning around 1330, spreading to Syria in the following century. In the fifteenth century, pilgrimages to and flourishing trade relations with the Near East exposed the Low Countries of northwest Europe to the use of bulbous domes in the architecture of the Orient, and such domes apparently became associated with the city of Jerusalem. Multi-story spires with truncated bulbous cupolas supporting smaller cupolas or crowns became popular in the sixteenth century. Russian domes The multidomed church is a typical form of Russian church architecture that distinguishes Russia from other Orthodox nations and Christian denominations. Indeed, the earliest Russian churches, built just after the Christianization of Kievan Rus', were multi-domed, which has led some historians to speculate about how Russian pre-Christian pagan temples might have looked. Examples of these early churches are the 13-domed wooden Saint Sophia Cathedral in Novgorod (989) and the 25-domed stone Desyatinnaya Church in Kiev (989–996). The number of domes typically has a symbolic meaning in Russian architecture: for example, 13 domes symbolize Christ with the 12 Apostles, while 25 domes signify the same with an additional 12 Prophets of the Old Testament. The multiple domes of Russian churches were often comparatively smaller than Byzantine domes. Plentiful timber in Russia made wooden domes common and at least partially contributed to the popularity of onion domes, which were easier to shape in wood than in masonry. The earliest stone churches in Russia featured Byzantine-style domes; however, by the Early Modern era the onion dome had become the predominant form in traditional Russian architecture. The onion dome is a dome whose shape resembles an onion, after which it is named. Such domes are often larger in diameter than the drums they sit on, and their height usually exceeds their width. The whole bulbous structure tapers smoothly to a point. Though the earliest preserved Russian domes of this type date from the 16th century, illustrations from older chronicles indicate they have existed since the late 13th century. Like tented roofs—which were combined with, and sometimes replaced, domes in Russian architecture from the 16th century—onion domes were initially used only in wooden churches. Builders introduced them into stone architecture much later, and continued to make their carcasses of either wood or metal on top of masonry drums. Russian domes are often gilded or brightly painted. A dangerous technique of chemical gilding using mercury had been applied on some occasions until the mid-19th century, most notably in the giant dome of Saint Isaac's Cathedral. The more modern and safer method of gold electroplating was applied for the first time in gilding the domes of the Cathedral of Christ the Saviour in Moscow, the tallest Eastern Orthodox church in the world. Ukrainian domes The domes of the Saint Sophia Cathedral and Dormition Cathedral were remodeled in the helmet-shaped baroque style in the early 18th century by Ivan Mazepa, who also paid for the gilding of the domes. Mazepa's reign also included the construction of an octagonal western bay with a baroque dome (1672) and five helmet-shaped domes over Boris and Gleb Cathedral in Chernihiv, which were removed in the 20th century by the Soviet government. 
Ottoman domes The rise of the Ottoman Empire and its spread in Asia Minor and the Balkans coincided with the decline of the Seljuk Turks and the Byzantine Empire. Early Ottoman buildings, for almost two centuries after 1300, were characterized by a blending of Ottoman culture and indigenous architecture, and the pendentive dome was used throughout the empire. The Byzantine dome form was adopted and further developed. Ottoman architecture made exclusive use of the semi-spherical dome for vaulting over even very small spaces, influenced by the earlier traditions of both Byzantine Anatolia and Central Asia. The smaller the structure, the simpler the plan, but mosques of medium size were also covered by single domes. Early experiments with large domes include the domed square mosques of Çine and Mudurnu under Bayezid I, and the later domed "zawiya-mosques" at Bursa. The Üç Şerefeli Mosque at Edirne developed the idea of the central dome being a larger version of the domed modules used throughout the rest of the structure to generate open space. This idea became important to the Ottoman style as it developed. The Bayezid II Mosque (1501–1506) in Istanbul begins the classical period in Ottoman architecture, in which the great imperial mosques, with variations, resemble the former Byzantine basilica of Hagia Sophia in having a large central dome with semi-domes of the same span to the east and west. Hagia Sophia's central dome arrangement is largely reproduced in three Ottoman mosques in Istanbul: the Bayezid II Mosque, the Kılıç Ali Pasha Mosque, and the Süleymaniye Mosque. Other Imperial mosques in Istanbul added semi-domes to the north and south, doing away with the basilica plan, starting with the Şehzade Mosque and seen again in later examples such as the Sultan Ahmed I Mosque and the Yeni Cami. The classical period lasted into the 17th century but its peak is associated with the architect Mimar Sinan in the 16th century. In addition to large imperial mosques, he designed hundreds of other monuments, including medium-sized mosques such as the Mihrimah Sultan Mosque, Sokollu Mehmed Pasha Mosque, and Rüstem Pasha Mosque and the tomb of Suleiman the Magnificent, with its double-shell dome. The Süleymaniye Mosque, built from 1550 to 1557, has a main dome 53 meters high with a diameter of 26.5 meters. At the time it was built, the dome was the highest in the Ottoman Empire when measured from sea level, but lower from the floor of the building and smaller in diameter than that of the nearby Hagia Sophia. Another classical domed mosque type is, like the Byzantine church of Sergius and Bacchus, the domed polygon within a square. Octagons and hexagons were common, such as those of the Üç Şerefeli Mosque (1437–1447) and the Selimiye Mosque in Edirne. The Selimiye Mosque was the first structure built by the Ottomans that had a larger dome than that of the Hagia Sophia. The dome rises above a square bay. Corner semi-domes convert this into an octagon, which muqarnas transition to a circular base. The dome has an average internal diameter of about 31.5 meters, while that of Hagia Sophia averages 31.3 meters. Designed and built by architect Mimar Sinan between 1568 and 1574, when he finished it he was 86 years old, and he considered the mosque his masterpiece. Italian Renaissance domes Filippo Brunelleschi's octagonal brick domical vault over Florence Cathedral was built between 1420 and 1436 and the lantern surmounting the dome was completed in 1467. The dome is 42 meters wide and made of two shells. 
The dome is not itself Renaissance in style, although the lantern is closer. A combination of dome, drum, pendentives, and barrel vaults developed as the characteristic structural forms of large Renaissance churches following a period of innovation in the later fifteenth century. Florence was the first Italian city to develop the new style, followed by Rome and then Venice. Brunelleschi's domes at San Lorenzo and the Pazzi Chapel established them as a key element of Renaissance architecture. His plan for the dome of the Pazzi Chapel in Florence's Basilica of Santa Croce (1430–52) illustrates the Renaissance enthusiasm for geometry and for the circle as geometry's supreme form. This emphasis on geometric essentials would be very influential. De re aedificatoria, written by Leon Battista Alberti around 1452, recommends vaults with coffering for churches, as in the Pantheon, and the first design for a dome at St. Peter's Basilica in Rome is usually attributed to him, although the recorded architect is Bernardo Rossellino. This would culminate in Bramante's 1505–06 projects for a wholly new St. Peter's Basilica, marking the beginning of the displacement of the Gothic ribbed vault with the combination of dome and barrel vault, which proceeded throughout the sixteenth century. Bramante's initial design was for a Greek cross plan with a large central hemispherical dome and four smaller domes around it in a quincunx pattern. Work began in 1506 and continued under a succession of builders over the next 120 years. The dome was completed by Giacomo della Porta and Domenico Fontana. The publication of Sebastiano Serlio's treatise, one of the most popular architectural treatises ever published, was responsible for the spread of the oval in late Renaissance and Baroque architecture throughout Italy, Spain, France, and central Europe. The Villa Capra, also known as "La Rotonda", was built by Andrea Palladio from 1565 to 1569 near Vicenza. Its highly symmetrical square plan centers on a circular room covered by a dome, and it proved highly influential on the Georgian architects of 18th-century England, architects in Russia, and architects in America, Thomas Jefferson among them. Palladio's two domed churches in Venice are San Giorgio Maggiore (1565–1610) and Il Redentore (1577–92), the latter built in thanksgiving for the end of a severe outbreak of plague in the city. The spread of the Renaissance-style dome outside of Italy began with central Europe, although there was often a stylistic delay of a century or two. South Asian domes Hemispherical rock-cut tombs appear to imitate in stone the early bamboo or timber-roofed domed huts with central poles known from the pre-Buddhist period. Examples include Sudama cave (3rd century BC) in Bihar, a similar domed chamber at Cannanora in Malabar, and a cave at Guntpalle (1st century BC). A rock-cut hemispherical chamber at Manappuram in Kerala retained a thin central pillar with no structural function. The hemispherical shape of Buddhist stupas, likely refined forms of burial mounds, may also reflect earlier wooden dome roof construction, such as at Ghantasala. Islamic rule over northern and central India brought with it the use of domes constructed with stone, brick and mortar, and iron dowels and cramps. Centering was made from timber and bamboo. The use of iron cramps to join together adjacent stones was known in pre-Islamic India, and was used at the base of domes for hoop reinforcement. 
The synthesis of styles created by this introduction of new forms to the Hindu tradition of trabeate construction created a distinctive architecture. Domes in pre-Mughal India have a standard squat circular shape with a lotus design and bulbous finial at the top, derived from Hindu architecture. Because the Hindu architectural tradition did not include arches, flat corbels were used to transition from the corners of the room to the dome, rather than squinches. In contrast to Persian and Ottoman domes, the domes of Indian tombs tend to be more bulbous. The earliest examples include the half-domes of the late 13th century tomb of Balban and the small dome of the tomb of Khan Shahid, which were made of roughly cut material and would have needed covering surface finishes. Under the Lodi dynasty there was a large proliferation of tomb building, with octagonal plans reserved for royalty and square plans used for others of high rank, and the first double dome was introduced to India in this period. The first major Mughal building is the domed tomb of Humayun, built between 1562 and 1571 by a Persian architect. The central double dome covers an octagonal central chamber about 15 meters wide and is accompanied by small domed chattri made of brick and faced with stone. Chatris, the domed kiosks on pillars characteristic of Mughal roofs, were adopted from their Hindu use as cenotaphs. The fusion of Persian and Indian architecture can be seen in the dome shape of the Taj Mahal: the bulbous shape derives from Persian Timurid domes, and the finial with lotus leaf base is derived from Hindu temples. The Gol Gumbaz, or Round Dome, is one of the largest masonry domes in the world. It has an internal diameter of 41.15 meters and a height of 54.25 meters. The dome was the most technically advanced built in the Deccan. The last major Islamic tomb built in India was the tomb of Safdar Jang (1753–54). The central dome is reportedly triple-shelled, with two relatively flat inner brick domes and an outer bulbous marble dome, although it may actually be that the marble and second brick domes are joined everywhere but under the lotus leaf finial at the top. Early modern period domes In the early sixteenth century, the lantern of the Italian dome spread to Germany, gradually adopting the bulbous cupola from the Netherlands. Russian architecture strongly influenced the many bulbous domes of the wooden churches of Bohemia and Silesia and, in Bavaria, bulbous domes less resemble Dutch models than Russian ones. Domes like these gained in popularity in central and southern Germany and in Austria in the seventeenth and eighteenth centuries, particularly in the Baroque style, and influenced many bulbous cupolas in Poland and Eastern Europe in the Baroque period. However, many bulbous domes in eastern Europe were replaced over time in the larger cities during the second half of the eighteenth century in favor of hemispherical or stilted cupolas in the French or Italian styles. The construction of domes in the sixteenth and seventeenth centuries relied primarily on empirical techniques and oral traditions rather than the architectural treatises of the times, which avoided practical details. This was adequate for domes up to medium size, with diameters in the range of 12 to 20 meters. Materials were considered homogeneous and rigid, with compression taken into account and elasticity ignored. The weight of materials and the size of the dome were the key references. 
Lateral tensions in a dome were counteracted with horizontal rings of iron, stone, or wood incorporated into the structure. Over the course of the seventeenth and eighteenth centuries, developments in mathematics and the study of statics led to a more precise formalization of the ideas of the traditional constructive practices of arches and vaults, and there was a diffusion of studies on the most stable form for these structures: the catenary curve. Robert Hooke, who first articulated that a catenary arch was comparable to an inverted hanging chain, may have advised Wren on how to achieve the crossing dome of St. Paul's Cathedral. Wren's structural system became the standard for large domes well into the 19th century. The ribs in Guarino Guarini's San Lorenzo and Il Sidone were shaped as catenary arches. The idea of a large oculus in a solid dome revealing a second dome originated with him. He also established the oval dome as a reconciliation of the longitudinal plan church favored by the liturgy of the Counter-Reformation and the centralized plan favored by idealists. Because of the imprecision of oval domes in the Rococo period, drums were problematic and the domes instead often rested directly on arches or pendentives. In the eighteenth century, the study of dome structures changed radically, with domes being considered as a composition of smaller elements, each subject to mathematical and mechanical laws and easier to analyse individually, rather than being considered as whole units unto themselves. Although never very popular in domestic settings, domes were used in a number of 18th century homes built in the Neo-Classical style. In the United States, most public buildings in the late 18th century were only distinguishable from private residences because they featured cupolas. Modern period domes The historicism of the 19th century led to many domes being re-translations of the great domes of the past, rather than further stylistic developments, especially in sacred architecture. New production techniques allowed for cast iron and wrought iron to be produced both in larger quantities and at relatively low prices during the Industrial Revolution. Russia, which had large supplies of iron, has some of the earliest examples of iron's architectural use. Excluding those that simply imitated multi-shell masonry, metal framed domes such as the elliptical dome of Royal Albert Hall in London (57 to 67 meters in diameter) and the circular dome of the Halle au Blé in Paris may represent the century's chief development of the simple domed form. Cast-iron domes were particularly popular in France. The practice of building rotating domes for housing large telescopes was begun in the 19th century, with early examples using papier-mâché to minimize weight. Unique glass domes springing straight from ground level were used for hothouses and winter gardens. Elaborate covered shopping arcades included large glazed domes at their cross intersections. The large domes of the 19th century included exhibition buildings and functional structures such as gasometers and locomotive sheds. The "first fully triangulated framed dome" was built in Berlin in 1863 by Johann Wilhelm Schwedler and, by the start of the 20th century, similarly triangulated frame domes had become fairly common. Vladimir Shukhov was also an early pioneer of what would later be called gridshell structures and in 1897 he employed them in domed exhibit pavilions at the All-Russia Industrial and Art Exhibition. 
Domes built with steel and concrete were able to achieve very large spans. In the late 19th and early 20th centuries, the Guastavino family, a father and son team who worked on the eastern seaboard of the United States, further developed the masonry dome, using tiles set flat against the surface of the curve and fast-setting Portland cement, which allowed mild steel bars to be used to counteract tension forces. The thin domical shell was further developed with the construction by Walther Bauersfeld of two planetarium domes in Jena, Germany in the early 1920s. They consisted of a triangulated frame of light steel bars and mesh covered by a thin layer of concrete. These are generally taken to be the first modern architectural thin shells, and they are also considered the first geodesic domes. Geodesic domes have been used for radar enclosures, greenhouses, housing, and weather stations. Architectural shells had their heyday in the 1950s and 1960s, peaking in popularity shortly before the widespread adoption of computers and the finite element method of structural analysis. The first permanent air-supported membrane domes were the radar domes designed and built by Walter Bird after World War II. Their low cost eventually led to the development of permanent versions using Teflon-coated fiberglass, and by 1985 the majority of the domed stadiums around the world used this system. Tensegrity domes, patented by Buckminster Fuller in 1962, are membrane structures consisting of radial trusses made from steel cables under tension with vertical steel pipes spreading the cables into the truss form. They have been made circular, elliptical, and other shapes to cover stadiums from Korea to Florida. Tension membrane design has depended upon computers, and the increasing availability of powerful computers resulted in many developments being made in the last three decades of the 20th century. The higher expense of rigid large-span domes made them relatively rare, although rigid moving panels are the most popular system for sports stadiums with retractable roofing. See also Lists of domes Cupola Vault (architecture) Rotunda (architecture) Monolithic dome Copper domes Dome car Excerpts References Bibliography Arches and vaults Architectural elements Ancient Roman architectural elements Byzantine architecture Church architecture Mosque architecture Baroque architectural features Ceilings Roofs
Dome
[ "Technology", "Engineering" ]
13,248
[ "Structural engineering", "Building engineering", "Architecture", "Structural system", "Architectural elements", "Ceilings", "Roofs", "Components" ]
167,334
https://en.wikipedia.org/wiki/Adult
An adult is an animal that has reached full growth. In its biological sense, the word refers to an animal that has reached sexual maturity and is thus capable of reproduction. In the human context, the term adult has meanings associated with social and legal concepts. In contrast to a non-adult or "minor", a legal adult is a person who has attained the age of majority and is therefore regarded as independent, self-sufficient, and responsible. They may also be regarded as "majors". The typical age of attaining legal adulthood is 18, although definitions may vary by legal rights, country, and psychological development. Human adulthood encompasses psychological adult development. Definitions of adulthood are often inconsistent and contradictory; a person may be biologically an adult, and have adult behavior, but still be treated as a child if they are under the legal age of majority. Conversely, one may legally be an adult but possess none of the maturity and responsibility that may define an adult character. In different cultures, there are events that mark the passage from being a child to becoming an adult, or coming of age. This often encompasses passing a series of tests to demonstrate that a person is prepared for adulthood, or reaching a specified age, sometimes in conjunction with demonstrating preparation. Most modern societies determine legal adulthood based on reaching a legally specified age without requiring a demonstration of physical maturity or preparation for adulthood. Biological adulthood Historically and cross-culturally, adulthood has been determined primarily by the start of puberty (the appearance of secondary sex characteristics such as menstruation and the development of breasts in women, ejaculation, the development of facial hair, and a deeper voice in men, and pubic hair in both sexes). In the past, a person usually moved from the status of child directly to the status of adult, often with this shift being marked by some type of coming-of-age test or ceremony. During the Industrial Revolution, children went to work as soon as they could in order to help provide for their family. There was not a large emphasis on school or education in general, and many children could get a job without being required to have experience, as adults are nowadays. In more recent years, as adulthood has been studied, it has come to be associated with a list of characteristic markers that go far beyond physical maturity alone. These markers of a full, mentally developed adult include traits of personal responsibility in multiple aspects of life. Although few or no established dictionaries provide a definition for the two-word term biological adult, the first definition of adult in multiple dictionaries includes "the stage of the life cycle of an animal after reproductive capacity has been attained". Thus, the base definition of the word adult is the period beginning at physical sexual maturity, which occurs sometime after the onset of puberty. Although this is the primary definition of the base word "adult", the term is also frequently used to refer to social adults. The two-word term biological adult stresses or clarifies that the original definition, based on physical maturity (i.e. having reached reproductive competency), is being used. The time of puberty varies from child to child, but usually begins between 10 and 12 years old. Girls typically begin the process of puberty at age 10 or 11, and boys at age 11 or 12. Girls generally complete puberty by ages 15–17, and boys by age 16 or 17. 
Nutrition, genetics and environment also usually play a part in the onset of puberty. Girls will go through a growth spurt and gain weight in several areas of their body. Boys will go through similar spurts in growth, though it is usually not in a similar style or time frame. This is due to the natural processes of puberty, but genetics also plays a part in how much weight they gain or how much taller they get. One recent area of debate within the science of brain development is the most likely chronological age for full mental maturity, or indeed, if such an age even exists. Common claims repeated in the media since 2005 (based upon interpretations of imaging data) have commonly suggested an "end-point" of 25, referring to the prefrontal cortex as one area that is not yet fully mature at the age of 18. However, this is based on an interpretation of a brain imaging study by Jay Giedd, dating back to 2004 or 2005, where the only participants were aged up to 21 years, and Giedd assumed this maturing process would be done by the age of 25 years, whereas more recent studies show prefrontal cortex maturation continuing well past the age of 30 years, marking this interpretation as incorrect and outdated. Legal adulthood Legally, adulthood typically means that one has reached the age of majority – when parents lose parental rights and responsibilities regarding the person concerned. Depending on one's jurisdiction, the age of majority may or may not be set independently of and should not be confused with the minimum ages applicable to other activities, such as engaging in a contract, marriage, voting, having a job, serving in the military, buying/possessing firearms, driving, traveling abroad, involvement with alcoholic beverages, smoking, sexual activity, gambling, being a model or actor in pornography, running for president, etc. Admission of a young person to a place may be restricted because of danger for that person, concern that the place may lead the person to immoral behavior, or because of the risk that the young person causes damage (for example, at an exhibition of fragile items). One can distinguish the legality of acts of a young person, or of enabling a young person to carry out that act, by selling, renting out, showing, permitting entrance, allowing participation, etc. There may be distinction between commercially and socially enabling. Sometimes there is the requirement of supervision by a legal guardian, or just by an adult. Sometimes there is no requirement, but rather a recommendation. Using the example of pornography, one can distinguish between: being allowed inside an adult establishment being allowed to purchase pornography being allowed to possess pornography another person being allowed to sell, rent out, or show the young person pornography, see disseminating pornography to a minor being a pornographic actor: rules for the young person, and for other people, regarding production, possession, etc. (see child pornography) With regard to films with violence, etc.: another person being allowed to sell, rent out, or show the young person a film; a cinema being allowed to let a young person enter The age of majority ranges internationally from ages 15 to 21, with 18 being the most common age. Nigeria, Mali, Democratic Republic of Congo and Cameroon define adulthood at age 15, but marriage of girls at an earlier age is common. 
In most of the world, the legal adult age is 18 for most purposes, with some notable exceptions: The legal age of adulthood in British Columbia, New Brunswick, Newfoundland and Labrador, Northwest Territories, Nova Scotia, Nunavut, and Yukon in Canada is 19 (though there are some exceptions in which Canadians may be considered legal adults in certain situations, such as sexual consent, which is age 16, and criminal law, federal elections, and the military, which is 18). The legal age of adulthood in Nebraska and Alabama in the United States is 19. The legal age of adulthood in South Korea is 19. The legal age of adulthood in Mississippi and Puerto Rico in the U.S. is 21. Prior to the 1970s, young people were not classed as adults until 21 in most western nations. For example, in the United States, young citizens could not vote in many elections until age 21; this changed in July 1971, when the 26th Amendment was ratified, mandating that the right to vote cannot be abridged for anyone 18 or older. The voting age was lowered in response to the fact that young men between the ages of 18 and 21 were drafted into the army to fight in the Vietnam War, hence the popular slogan "old enough to fight, old enough to vote". Young people under 21 in the US could also not purchase alcohol, purchase handguns, sign a binding contract, or marry without permission from parents. After the voting age was lowered, many states also moved to lower the drinking age (with most states having a minimum age of 18 or 19) and also to lower the age of legal majority (adulthood) to 18. However, there are still some exceptions where 21 (or even higher) remains the benchmark for certain rights or responsibilities. For example, in the US the Gun Control Act of 1968 prohibits those under 21 from purchasing a handgun from a federally licensed dealer (although federal law makes an exception for individuals between the ages of 18 and 20 to obtain one from a private dealer if state law permits). In July 1984, the National Minimum Drinking Age Act mandated that all states raise their respective drinking ages to 21 to create a uniform standard for legally purchasing, drinking, or publicly possessing alcohol, with exceptions made for consumption only in private residences under parental supervision and permission. This was done to reduce the number of drunk-driving fatalities prevalent among young drivers. States that choose not to comply can lose up to 10% of their federal highway funding. The Credit Card Act of 2009 imposed tougher safeguards for young adults between the ages of 18 and 20 obtaining a credit card. Young adults under the age of 21 must either have a co-signer 21 or older or show proof (usually a source of income) that they can repay their credit card balance. Unless that requirement is met, one must wait until 21 to be approved for a credit card on one's own. The Affordable Care Act of 2010 allows young adults to remain on their parents' health insurance plan up to age 26. The federal government has since raised the legal age to purchase tobacco and vaping products from 18 to 21. In states where recreational marijuana is legalized, the default age is also 21, though those younger may be able to obtain medical marijuana prescriptions or cards upon seeing a physician. 
The gambling age also varies from 18 to 21 depending on the state, and many rental car companies do not rent cars to those under 21 and add surcharges for drivers under 25 (although this is not codified and is company policy). In 2020, the legislature of Quebec, Canada raised the age at which one can purchase recreational marijuana from 18 to 21, stepping out of line with most of the country, which set a minimum age of 19 (except Alberta, where it is 18). The Quebec government cited the risk that marijuana poses to the brain development of people under 21 as justification for the age raise. In March 2021, in a 5–4 decision, justices of the Supreme Court of the State of Washington tossed out the life-without-parole sentences of a 19-year-old and a 20-year-old who had been convicted in separate cases of first-degree aggravated murder decades ago, saying that, as with juveniles, courts must first consider the age of those under 21 before sentencing them to die behind bars. This comes at a time when there are ongoing debates about whether those between 18 and 20 should be exempted from the death penalty. In Germany, courts largely sentence defendants under the age of 21 according to juvenile law in a bid to help them reintegrate into society and mete out punishments that fit the crime as well as the offender. In May 2021, the state of Texas raised the age at which one can work as an exotic dancer or work at and patronize sexually oriented businesses from 18 to 21. In the UK, there have been many proposals to raise the age at which one can buy tobacco from 18 to 21 in an attempt to curb teen and young adult use and reach a "smoke-free" UK by 2030. All of these laws reflect the growing awareness that young adults, while not children, are still in a transitional stage between adolescence and full adulthood, and that there should be policy adjustments or restrictions where necessary, especially where they pertain to activities that carry certain degrees of risk or harm to young adults themselves or to others. At the same time, however, even though the generally accepted age of majority is 18 in most nations, there are rights or privileges afforded to adolescents who have not yet reached legal adulthood. In the United States, youth are able to get a part-time job at 14 provided they have a work permit. At 16, one is able to obtain a driver's permit or license depending on state laws, work most jobs (except ones requiring heavy machinery), and consent to sexual activity (depending on the state). At 17, one is able to enlist in the armed forces with parental consent, although they cannot be deployed in combat roles until age 18. The voting age for local elections in most American cities is 18, but in five localities nationwide, four of which are in Maryland, 16- and 17-year-olds are eligible to vote. The four Maryland cities are Takoma Park, Riverdale, Greenbelt, and Hyattsville. In 2020, students 16 or older in Oakland, California gained the right to vote in school board elections. There is a growing movement to lower the voting age in the US and many other countries from 18 to 16 in hopes of engaging the youth vote and encouraging greater electoral participation. Some countries already have a voting age of 16, including Austria, Scotland, Argentina, Brazil, Wales, Cuba, and Ecuador. In Germany, one can purchase beer and wine at the age of 16, although spirits and hard liquor cannot be purchased until 18. The age of consent in Germany is 14 if both partners are under 18. 
Sexual activity with a person under 18 is punishable if the adult is a person of authority over the minor in upbringing, education, care, or employment. Social construction of adulthood In contrast to biological perspectives of aging and adulthood, social scientists conceptualize adulthood as socially constructed. While aging is an established biological process, the attainment of adulthood is social in its criteria. In contrast to other perspectives that conceptualize aging and the attainment of adulthood as a largely universal development regardless of context, nation, generation, gender, race, or social class, social scientists regard these aspects as paramount in cultural definitions of adulthood. Further evidence of adulthood as a social construction is illustrated by the changing criteria of adulthood over time. Historically, adulthood in the U.S. has rested on completing one's education, moving away from the family of origin, and beginning one's career. Other key historical criteria include entering a marriage and becoming a parent. These criteria are social and subjective; they are organized by gender, race, ethnicity, and social class, among other key identity markers. As a result, particular populations feel adult earlier in the life course than do others. Contemporary experiences of, and research on, young adults substitute more seemingly subjective criteria for adulthood, which resonate more soundly with young adults' experiences of aging. The criteria are marked by a growing "importance of individualistic criteria and the irrelevance of the demographic markers of normative conceptions of adulthood." In particular, younger cohorts' attainment of adulthood centers on three criteria: gaining a sense of responsibility, independent decision-making, and financial independence. Jeffrey Arnett, a psychologist and professor at Clark University in Massachusetts, has studied the development of adults and argues that there is a new and distinct period of development in between adolescence and adulthood. This stage, which he calls "emerging adulthood", occurs between the ages of 18 and 25. Arnett describes these individuals as able to take some responsibility for their lives, but still not completely feeling like adults. Arnett articulates five distinct features that are unique to this period of development: identity exploration, feeling in between, instability, self-focus, and having possibilities. Arnett makes it clear that these five aspects of emerging adulthood are only relevant during the life stage of emerging adulthood. The first feature, identity exploration, describes emerging adults making decisions for themselves about their career, education, and love life. This is a time of life when a young person has yet to finalize these decisions but is pondering them, making them feel somewhere in between adolescent and adult. This leads into a second feature of this phase of life, feeling in between. Emerging adults feel that they are taking on responsibilities but do not feel like a 'full' adult quite yet. Next, the instability feature notes that emerging adults often move around after their high school years, whether that is to college, friends' houses, or living with a romantic partner, as well as moving back home with their parents or guardians for a time. This moving around often ends once the individual's family and career have been set. Closely related to the instability feature is self-focus. 
Emerging adults, being away from their parental and societal routines, are now able to do what they want when they want and where they want before they are put back into a routine when they start a marriage, family, and career. Arnett's last feature of emerging adulthood, an age of possibilities, characterizes this stage as one where "optimism reigns". These individuals believe they have a good chance of turning out better than their parents did. Religion According to Jewish tradition, adulthood is reached at age 13 for Jewish boys and 12 for Jewish girls in accordance with the Bar or Bat Mitzvah; they are expected to demonstrate preparation for adulthood by learning the Torah and other Jewish practices. The Christian Bible and Jewish scripture contain no age requirement for adulthood or marrying, which includes engaging in sexual activity. The 1983 Code of Canon Law states, "A man before he has completed his sixteenth year of age, and likewise a woman before she has completed her fourteenth year of age, cannot enter a valid marriage". According to The Disappearance of Childhood by Neil Postman, the Christian Church of the Middle Ages considered the age of accountability, when a person could be tried and even executed as an adult, to be age 7. While certain religions have their guidelines on what it means to be an adult, generally speaking, there are trends that occur regarding religiosity as individuals transition from adolescence to adulthood. The role of religion in one's life can impact development during adolescence. The National Library of Medicine (NCBI) highlights some studies that show rates of religiosity declining as people move out of the house and live on their own. Oftentimes when people live on their own, they change their life goals and religion tends to be less important as they discover who they are. Other studies from the NCBI show that as adults get married and have children they settle down, and as they do, there tends to be an increase in religiosity. Everyone's level of religiosity builds at a different pace, meaning that religion relative to adult development varies across cultures and time. See also References Biological concepts Juvenile law
Adult
[ "Biology" ]
3,807
[ "nan" ]
167,394
https://en.wikipedia.org/wiki/Nicolas%20Bourbaki
Nicolas Bourbaki () is the collective pseudonym of a group of mathematicians, predominantly French alumni of the (ENS). Founded in 1934–1935, the Bourbaki group originally intended to prepare a new textbook in analysis. Over time the project became much more ambitious, growing into a large series of textbooks published under the Bourbaki name, meant to treat modern pure mathematics. The series is known collectively as the Éléments de mathématique (Elements of Mathematics), the group's central work. Topics treated in the series include set theory, abstract algebra, topology, analysis, Lie groups and Lie algebras. Bourbaki was founded in response to the effects of the First World War which caused the death of a generation of French mathematicians; as a result, young university instructors were forced to use dated texts. While teaching at the University of Strasbourg, Henri Cartan complained to his colleague André Weil of the inadequacy of available course material, which prompted Weil to propose a meeting with others in Paris to collectively write a modern analysis textbook. The group's core founders were Cartan, Claude Chevalley, Jean Delsarte, Jean Dieudonné and Weil; others participated briefly during the group's early years, and membership has changed gradually over time. Although former members openly discuss their past involvement with the group, Bourbaki has a custom of keeping its current membership secret. The group's name derives from the 19th century French general Charles-Denis Bourbaki, who had a career of successful military campaigns before suffering a dramatic loss in the Franco-Prussian War. The name was therefore familiar to early 20th-century French students. Weil remembered an ENS student prank in which an upperclassman posed as a professor and presented a "theorem of Bourbaki"; the name was later adopted. The Bourbaki group holds regular private conferences for the purpose of drafting and expanding the Éléments. Topics are assigned to subcommittees, drafts are debated, and unanimous agreement is required before a text is deemed fit for publication. Although slow and labor-intensive, the process results in a work which meets the group's standards for rigour and generality. The group is also associated with the Séminaire Bourbaki, a regular series of lectures presented by members and non-members of the group, also published and disseminated as written documents. Bourbaki maintains an office at the ENS. Nicolas Bourbaki was influential in 20th-century mathematics, particularly during the middle of the century when volumes of the Éléments appeared frequently. The group is noted among mathematicians for its rigorous presentation and for introducing the notion of a mathematical structure, an idea related to the broader, interdisciplinary concept of structuralism. Bourbaki's work informed the New Math, a trend in elementary math education during the 1960s. Although the group remains active, its influence is considered to have declined due to infrequent publication of new volumes of the Éléments. However, since 2012 the group has published four new (or significantly revised) volumes, the most recent in 2023 (treating spectral theory). Moreover, at least three further volumes are under preparation. Background Charles-Denis Sauter Bourbaki was a successful general during the era of Napoleon III, serving in the Crimean War and other conflicts. 
During the Franco-Prussian war however, Charles-Denis Bourbaki suffered a major defeat in which the Armée de l'Est, under his command, retreated across the Swiss border and was disarmed. The general unsuccessfully attempted suicide. The dramatic story of his defeat was known in the French popular consciousness following his death. In the early 20th century, the First World War affected Europeans of all professions and social classes, including mathematicians and male students who fought and died in the front. For example, the French mathematician Gaston Julia, a pioneer in the study of fractals, lost his nose during the war and wore a leather strap over the affected part of his face for the rest of his life. The deaths of ENS students resulted in a lost generation in the French mathematical community; the estimated proportion of ENS mathematics students (and French students generally) who died in the war ranges from one-quarter to one-half, depending on the intervals of time (c. 1900–1918, especially 1910–1916) and populations considered. Furthermore, Bourbaki founder André Weil remarked in his memoir Apprenticeship of a Mathematician that France and Germany took different approaches with their intelligentsia during the war: while Germany protected its young students and scientists, France instead committed them to the front, owing to the French culture of egalitarianism. A succeeding generation of mathematics students attended the ENS during the 1920s, including Weil and others, the future founders of Bourbaki. During his time as a student, Weil recalled a prank in which an upperclassman, , posed as a professor and gave a math lecture, ending with a prompt: "Theorem of Bourbaki: you are to prove the following...". Weil was also aware of a similar stunt around 1910 in which a student claimed to be from the fictional, impoverished nation of "Poldevia" and solicited the public for donations. Weil had strong interests in languages and Indian culture, having learned Sanskrit and read the Bhagavad Gita. After graduating from the ENS and obtaining his doctorate, Weil took a teaching stint at the Aligarh Muslim University in India. While there, Weil met the mathematician Damodar Kosambi, who was engaged in a power struggle with one of his colleagues. Weil suggested that Kosambi write an article with material attributed to one "Bourbaki", in order to show off his knowledge to the colleague. Kosambi took the suggestion, attributing the material discussed in the article to "the little-known Russian mathematician D. Bourbaki, who was poisoned during the Revolution." It was the first article in the mathematical literature with material attributed to the eponymous "Bourbaki". Weil's stay in India was short-lived; he attempted to revamp the mathematics department at Aligarh, without success. The university administration planned to fire Weil and promote his colleague Vijayaraghavan to the vacated position. However, Weil and Vijayaraghavan respected one another. Rather than play any role in the drama, Vijayaraghavan instead resigned, later informing Weil of the plan. Weil returned to Europe to seek another teaching position. He ended up at the University of Strasbourg, joining his friend and colleague Henri Cartan. The Bourbaki collective Founding During their time together at Strasbourg, Weil and Cartan regularly complained to each other regarding the inadequacy of available course material for calculus instruction. 
In his memoir Apprenticeship, Weil described his solution in the following terms: "One winter day toward the end of 1934, I came upon a great idea that would put an end to these ceaseless interrogations by my comrade. 'We are five or six friends', I told him some time later, 'who are in charge of the same mathematics curriculum at various universities. Let us all come together and regulate these matters once and for all, and after this, I shall be delivered of these questions.' I was unaware of the fact that Bourbaki was born at that instant." Cartan confirmed the account. The first, unofficial meeting of the Bourbaki collective took place at noon on Monday, 10 December 1934, at the Café Grill-Room A. Capoulade, Paris, in the Latin Quarter. Six mathematicians were present: Henri Cartan, Claude Chevalley, Jean Delsarte, Jean Dieudonné, René de Possel, and André Weil. Most of the group were based outside Paris and were in town to attend the Julia Seminar, a conference prepared with the help of Gaston Julia at which several future Bourbaki members and associates presented. The group resolved to collectively write a treatise on analysis, for the purpose of standardizing calculus instruction in French universities. The project was especially meant to supersede the text of Édouard Goursat, which the group found to be badly outdated, and to improve its treatment of Stokes' Theorem. The founders were also motivated by a desire to incorporate ideas from the Göttingen school, particularly from exponents Hilbert, Noether and B.L. van der Waerden. Further, in the aftermath of World War I, there was a certain nationalist impulse to save French mathematics from decline, especially in competition with Germany. As Dieudonné stated in an interview, "Without meaning to boast, I can say that it was Bourbaki that saved French mathematics from extinction." Jean Delsarte was particularly favorable to the collective aspect of the proposed project, observing that such a working style could insulate the group's work against potential later individual claims of copyright. As various topics were discussed, Delsarte also suggested that the work begin in the most abstract, axiomatic terms possible, treating all of mathematics prerequisite to analysis from scratch. The group agreed to the idea, and this foundational area of the proposed work was referred to as the "Abstract Packet" (Paquet Abstrait). Working titles were adopted: the group styled itself as the Committee for the Treatise on Analysis, and their proposed work was called the Treatise on Analysis (Traité d'analyse). In all, the collective held ten preliminary biweekly meetings at A. Capoulade before its first official, founding conference in July 1935. During this early period, Paul Dubreil, Jean Leray and Szolem Mandelbrojt joined and participated. Dubreil and Leray left the meetings before the following summer, and were respectively replaced by new participants Jean Coulomb and Charles Ehresmann. The group's official founding conference was held in Besse-en-Chandesse, from 10 to 17 July 1935. At the time of the official founding, the membership consisted of the six attendees at the first lunch of 10 December 1934, together with Coulomb, Ehresmann and Mandelbrojt. On 16 July, the members took a walk to alleviate the boredom of unproductive proceedings. During the malaise, some decided to skinny-dip in the nearby Lac Pavin, repeatedly yelling "Bourbaki!" 
At the close of the first official conference, the group renamed itself "Bourbaki", in reference to the general and prank as recalled by Weil and others. During 1935, the group also resolved to establish the mathematical personhood of their collective pseudonym by getting an article published under its name. A first name had to be decided; a full name was required for publication of any article. To this end, René de Possel's wife Eveline "baptized" the pseudonym with the first name of Nicolas, becoming Bourbaki's "godmother". This allowed for the publication of a second article with material attributed to Bourbaki, this time under "his" own name. Henri Cartan's father Élie Cartan, also a mathematician and supportive of the group, presented the article to the publishers, who accepted it. At the time of Bourbaki's founding, René de Possel and his wife Eveline were in the process of divorcing. Eveline remarried to André Weil in 1937, and de Possel left the Bourbaki collective some time later. This sequence of events has caused speculation that de Possel left the group because of the remarriage, however this suggestion has also been criticized as possibly historically inaccurate, since de Possel is supposed to have remained active in Bourbaki for years after André's marriage to Eveline. World War II Bourbaki's work slowed significantly during the Second World War, though the group survived and later flourished. Some members of Bourbaki were Jewish and therefore forced to flee from certain parts of Europe at certain times. Weil, who was Jewish, spent the summer of 1939 in Finland with his wife Eveline, as guests of Lars Ahlfors. Due to their travel near the border, the couple were suspected as Soviet spies by Finnish authorities near the onset of the Winter War, and André was later arrested. According to an anecdote, Weil was to have been executed but for the passing mention of his case to Rolf Nevanlinna, who asked that Weil's sentence be commuted. However, the accuracy of this detail is dubious. Weil reached the United States in 1941, later taking another teaching stint in São Paulo from 1945 to 1947 before settling at the University of Chicago from 1947 to 1958 and finally the Institute for Advanced Study in Princeton, where he spent the remainder of his career. Although Weil remained in touch with the Bourbaki collective and visited Europe and the group periodically following the war, his level of involvement with Bourbaki never returned to that at the time of founding. Second-generation Bourbaki member Laurent Schwartz was also Jewish and found pickup work as a math teacher in rural Vichy France. Moving from village to village, Schwartz planned his movements in order to evade capture by the Nazis. On one occasion Schwartz found himself trapped overnight in a certain village, as his expected transportation home was unavailable. There were two inns in town: a comfortable, well-appointed one, and a very poor one with no heating and bad beds. Schwartz's instinct told him to stay at the poor inn; overnight, the Nazis raided the good inn, leaving the poor inn unchecked. Meanwhile, Jean Delsarte, a Catholic, was mobilized in 1939 as the captain of an audio reconnaissance battery. He was forced to lead the unit's retreat from the northeastern part of France toward the south. While passing near the Swiss border, Delsarte overheard a soldier say "We are the army of Bourbaki"; the 19th-century general's retreat was known to the French. 
Delsarte had coincidentally led a retreat similar to that of the collective's namesake. Postwar until the present Following the war, Bourbaki had solidified the plan of its work and settled into a productive routine. Bourbaki regularly published volumes of the Éléments during the 1950s and 1960s, and enjoyed its greatest influence during this period. Over time the founding members gradually left the group, slowly being replaced with younger newcomers including Jean-Pierre Serre and Alexander Grothendieck. Serre, Grothendieck and Laurent Schwartz were awarded the Fields Medal during the postwar period, in 1954, 1966 and 1950 respectively. Later members Alain Connes and Jean-Christophe Yoccoz also received the Fields Medal, in 1982 and 1994 respectively. The later practice of accepting scientific awards contrasted with some of the founders' views. During the 1930s, Weil and Delsarte petitioned against a French national scientific "medal system" proposed by the Nobel physics laureate Jean Perrin. Weil and Delsarte felt that the institution of such a system would increase unconstructive pettiness and jealousy in the scientific community. Despite this, the Bourbaki group had previously successfully petitioned Perrin for a government grant to support its normal operations. Like the founders, Grothendieck was also averse to awards, albeit for pacifist reasons. Although Grothendieck was awarded the Fields Medal in 1966, he declined to attend the ceremony in Moscow, in protest of the Soviet government. In 1988, Grothendieck rejected the Crafoord Prize outright, citing no personal need to accept prize money, lack of recent relevant output, and general distrust of the scientific community. Born to Jewish anarchist parentage, Grothendieck survived the Holocaust and advanced rapidly in the French mathematical community, despite poor education during the war. Grothendieck's teachers included Bourbaki's founders, and so he joined the group. During Grothendieck's membership, Bourbaki reached an impasse concerning its foundational approach. Grothendieck advocated for a reformulation of the group's work using category theory as its theoretical basis, as opposed to set theory. The proposal was ultimately rejected in part because the group had already committed itself to a rigid track of sequential presentation, with multiple already-published volumes. Following this, Grothendieck left Bourbaki "in anger". Biographers of the collective have described Bourbaki's unwillingness to start over in terms of category theory as a missed opportunity. However, Bourbaki has in 2023 announced that a book on category theory is currently under preparation (see below the last paragraph of this section). During the founding period, the group chose the Parisian publisher Hermann to issue installments of the Éléments. Hermann was led by Enrique Freymann, a friend of the founders willing to publish the group's project, despite financial risk. During the 1970s, Bourbaki entered a protracted legal battle with Hermann over matters of copyright and royalty payment. Although the Bourbaki group won the suit and retained collective copyright of the Éléments, the dispute slowed the group's productivity. Former member Pierre Cartier described the lawsuit as a pyrrhic victory, saying: "As usual in legal battles, both parties lost and the lawyer got rich." Later editions of the Éléments were published by Masson, and modern editions are published by Springer. 
From the 1980s through the 2000s, Bourbaki published very infrequently, with the result that in 1998 Le Monde pronounced the collective "dead". However, in 2012 Bourbaki resumed the publication of the Éléments with a revised chapter 8 of algebra, the first four chapters of a new book on algebraic topology, and two volumes on spectral theory (the first of which is an expanded and revised version of the edition of 1967, while the latter consists of three new chapters). Moreover, the text of the two latest volumes announces that books on category theory and modular forms are currently under preparation (in addition to the latter part of the book on algebraic topology). Working method Bourbaki holds periodic conferences for the purpose of expanding the Éléments; these conferences are the central activity of the group's working life. Subcommittees are assigned to write drafts on specific material, and the drafts are later presented, vigorously debated, and re-drafted at the conferences. Unanimous agreement is required before any material is deemed acceptable for publication. A given piece of material may require six or more drafts over a period of several years, and some drafts are never developed into completed work. Bourbaki's writing process has therefore been described as "Sisyphean". Although the method is slow, it yields a final product which satisfies the group's standards for mathematical rigour, one of Bourbaki's main priorities in the treatise. Bourbaki's emphasis on rigour was a reaction to the style of Henri Poincaré, who stressed the importance of free-flowing mathematical intuition at the cost of thorough presentation. During the project's early years, Dieudonné served as the group's scribe, authoring several final drafts which were ultimately published. For this purpose, Dieudonné adopted an impersonal writing style which was not his own, but which was used to craft material acceptable to the entire group. Dieudonné reserved his personal style for his own work; like all members of Bourbaki, Dieudonné also published material under his own name, including the nine-volume Éléments d'analyse, a work explicitly focused on analysis and of a piece with Bourbaki's initial intentions. Most of the final drafts of Bourbaki's Éléments carefully avoided using illustrations, favoring a formal presentation based only on text and formulas. An exception to this was the treatment of Lie groups and Lie algebras (especially in chapters 4–6), which did make use of diagrams and illustrations. The inclusion of illustration in this part of the work was due to Armand Borel. Borel was minority-Swiss in a majority-French collective, and self-deprecated as "the Swiss peasant", explaining that visual learning was important to the Swiss national character. When asked about the dearth of illustration in the work, former member Pierre Cartier replied: The conferences have historically been held in quiet rural areas. These locations contrast with the lively, sometimes heated debates which have occurred. Laurent Schwartz reported an episode in which Weil slapped Cartan on the head with a draft. The hotel's proprietor saw the incident and assumed that the group would split up, but according to Schwartz, "peace was restored within ten minutes." The historical, confrontational style of debate within Bourbaki has been partly attributed to Weil, who believed that new ideas have a better chance of being born in confrontation than in an orderly discussion.
Schwartz related another illustrative incident: Dieudonné was adamant that topological vector spaces must appear in the work before integration, and whenever anyone suggested that the order be reversed, he would loudly threaten his resignation. This became an in-joke among the group; Roger Godement's wife Sonia attended a conference, aware of the idea, and asked for proof. As Sonia arrived at a meeting, a member suggested that integration must appear before topological vector spaces, which triggered Dieudonné's usual reaction. Despite the historical culture of heated argument, Bourbaki thrived during the middle of the twentieth century. Bourbaki's ability to sustain such a collective, critical approach has been described as "something unusual", surprising even its own members. In founder Henri Cartan's words, "That a final product can be obtained at all is a kind of miracle that none of us can explain." It has been suggested that the group survived because its members believed strongly in the importance of their collective project, despite personal differences. When the group overcame difficulties or developed an idea that they liked, they would sometimes say l'esprit a soufflé ("the spirit breathes"). Historian Liliane Beaulieu noted that the "spirit"—which might be an avatar, the group mentality in action, or Bourbaki "himself"—was part of an internal culture and mythology which the group used to form its identity and perform work. Humor Humor has been an important aspect of the group's culture, beginning with Weil's memories of the student pranks involving "Bourbaki" and "Poldevia". For example, in 1939 the group released a wedding announcement for the marriage of "Betti Bourbaki" (daughter of Nicolas) to one "H. Pétard" (H. "Firecrackers" or "Hector Pétard"), a "lion hunter". Hector Pétard was itself a pseudonym, but not one originally coined by the Bourbaki members. The Pétard moniker was originated by Ralph P. Boas, Frank Smithies and other Princeton mathematicians who were aware of the Bourbaki project; inspired by them, the Princeton mathematicians published an article on the "mathematics of lion hunting". After meeting Boas and Smithies, Weil composed the wedding announcement, which contained several mathematical puns. Bourbaki's internal newsletter La Tribu has sometimes been issued with humorous subtitles to describe a given conference, such as "The Extraordinary Congress of Old Fogies" (where anyone older than 30 was considered a fogy) or "The Congress of the Motorization of the Trotting Ass" (an expression used to describe the routine unfolding of a mathematical proof, or process). During the 1940s–1950s, the American Mathematical Society received applications for individual membership from Bourbaki. They were rebuffed by J.R. Kline who understood the entity to be a collective, inviting them to re-apply for institutional membership. In response, Bourbaki floated a rumor that Ralph Boas was not a real person, but a collective pseudonym of the editors of Mathematical Reviews with which Boas had been affiliated. The reason for targeting Boas was because he had known the group in its earlier days when they were less strict with secrecy, and he'd described them as a collective in an article for the Encyclopædia Britannica. In November 1968, a mock obituary of Nicolas Bourbaki was released during one of the seminars. The group developed some variants of the word "Bourbaki" for internal use. The noun "Bourbaki" might refer to the group proper or to an individual member, e.g. 
"André Weil was a Bourbaki." "Bourbakist" is sometimes used to refer to members but also denotes associates, supporters, and enthusiasts. To "bourbakize" meant to take a poor existing text and to improve it through an editing process. Bourbaki's culture of humor has been described as an important factor in the group's social cohesion and capacity to survive, smoothing over tensions of heated debate. As of , a Twitter account registered to "Betty_Bourbaki" provides regular updates on the group's activity. Works Bourbaki's work includes a series of textbooks, a series of printed lecture notes, journal articles, and an internal newsletter. The textbook series Éléments de mathématique (Elements of mathematics) is the group's central work. The Séminaire Bourbaki is a lecture series held regularly under the group's auspices, and the talks given are also published as lecture notes. Journal articles have been published with authorship attributed to Bourbaki, and the group publishes an internal newsletter La Tribu (The Tribe) which is distributed to current and former members. Éléments de mathématique The content of the Éléments is divided into books—major topics of discussion, volumes—individual, physical books, and chapters, together with certain summaries of results, historical notes, and other details. The volumes of the Éléments have had a complex publication history. Material has been revised for new editions, published chronologically out of order of its intended logical sequence, grouped together and partitioned differently in later volumes, and translated into English. For example, the second book on Algebra was originally released in eight French volumes: the first in 1942 being chapter 1 alone, and the last in 1980 being chapter 10 alone. This presentation was later condensed into five volumes with chapters 1–3 in the first volume, chapters 4–7 in the second, and chapters 8–10 each remaining the third through fifth volumes of that portion of the work. The English edition of Bourbaki's Algebra consists of translations of the three volumes consisting of chapters 1–3, 4–7 and 8, with chapters 9 and 10 unavailable in English as of . When Bourbaki's founders began working on the Éléments, they originally conceived of it as a "treatise on analysis", the proposed work having a working title of the same name (Traité d'analyse). The opening part was to comprehensively deal with the foundations of mathematics prior to analysis, and was referred to as the "Abstract Packet". Over time, the members developed this proposed "opening section" of the work to the point that it would instead run for several volumes and comprise a major part of the work, covering set theory, abstract algebra, and topology. Once the project's scope expanded far beyond its original purpose, the working title Traité d'analyse was dropped in favor of Éléments de mathématique. The unusual, singular "Mathematic" was meant to connote Bourbaki's belief in the unity of mathematics. The first six books of the Éléments, representing the first half of the work, are numbered sequentially and ordered logically, with a given statement being established only on the basis of earlier results. This first half of the work bore the subtitle Les structures fondamentales de l’analyse (Fundamental Structures of Analysis), covering established mathematics (algebra, analysis) in the group's style. 
The second half of the work consists of unnumbered books treating modern areas of research (Lie groups, commutative algebra), each presupposing the first half as a shared foundation but without dependence on each other. This second half of the work, consisting of newer research topics, does not have a corresponding subtitle. The volumes of the Éléments published by Hermann were indexed by chronology of publication and referred to as fascicules: installments in a large work. Some volumes did not consist of the normal definitions, proofs, and exercises in a math textbook, but contained only summaries of results for a given topic, stated without proof. These volumes were referred to as Fascicules de résultats, with the result that fascicule may refer to a volume of Hermann's edition, or to one of the "summary" sections of the work (e.g. Fascicules de résultats is translated as "Summary of Results" rather than "Installment of Results", referring to the content rather than a specific volume). The first volume of Bourbaki's Éléments to be published was the Summary of Results in the Theory of Sets, in 1939. Similarly, one of the work's later books, Differential and Analytic Manifolds, consisted only of two volumes of summaries of results, with no chapters of content having been published. Later installments of the Éléments appeared infrequently during the 1980s and 1990s. A volume of Commutative Algebra (chapters 8–9) was published in 1983, and no other volumes were issued until the appearance of the same book's tenth chapter in 1998. During the 2010s, Bourbaki increased its productivity. A re-written and expanded version of the eighth chapter of Algebra appeared in 2012, the first four chapters of a new book treating Algebraic Topology were published in 2016, and the first two chapters of a revised and expanded edition of Spectral Theory were issued in 2019, while the remaining three (completely new) chapters appeared in 2023. Séminaire Bourbaki The Séminaire Bourbaki has been held regularly since 1948, and lectures are presented by non-members and members of the collective. As of the Séminaire Bourbaki has run to over a thousand recorded lectures in its written incarnation, denoted chronologically by simple numbers. At the time of a June 1999 lecture given by Jean-Pierre Serre on the topic of Lie groups, the total lectures given in the series numbered 864, corresponding to roughly 10,000 pages of printed material. Articles Several journal articles have appeared in the mathematical literature with material or authorship attributed to Bourbaki; unlike the Éléments, they were typically written by individual members and not crafted through the usual process of group consensus. Despite this, Jean Dieudonné's essay "The Architecture of Mathematics" has become known as Bourbaki's manifesto. Dieudonné addressed the issue of overspecialization in mathematics, to which he opposed the inherent unity of mathematic (as opposed to mathematics) and proposed mathematical structures as useful tools which can be applied to several subjects, showing their common features. To illustrate the idea, Dieudonné described three different systems in arithmetic and geometry and showed that all could be described as examples of a group, a specific kind of (algebraic) structure. Dieudonné described the axiomatic method as "the 'Taylor system' for mathematics" in the sense that it could be used to solve problems efficiently.
Such a procedure would entail identifying relevant structures and applying established knowledge about the given structure to the specific problem at hand. Reprinted in Kosambi attributed material in the article to "D. Bourbaki", the first mention of the eponymous Bourbaki in the literature. Presumptive author: André Weil. Presumptive author: Jean Dieudonné. Presumptive author: Jean Dieudonné. Second in a series of three articles. Presumptive author: Jean Dieudonné or André Weil. Presumptive author: Jean Dieudonné. Presumptive author: André Weil. Presumptive author: Henri Cartan or Jean Dieudonné. Presumptive author: Jean Dieudonné. Authorized translation of the book chapter L'architecture des mathématiques, appearing in English as a journal article. Presumptive authors: Jean Dieudonné and Laurent Schwartz. La Tribu La Tribu is Bourbaki's internal newsletter, distributed to current and former members. The newsletter usually documents recent conferences and activity in a humorous, informal way, sometimes including poetry. Member Pierre Samuel wrote the newsletter's narrative sections for several years. Early editions of La Tribu and related documents have been made publicly available by Bourbaki. Historian Liliane Beaulieu examined La Tribu and Bourbaki's other writings, describing the group's humor and private language as an "art of memory" which is specific to the group and its chosen methods of operation. Because of the group's secrecy and informal organization, individual memories are sometimes recorded in a fragmentary way, and may not have significance to other members. On the other hand, the predominantly French, ENS background of the members, together with stories of the group's early period and successes, create a shared culture and mythology which is drawn upon for group identity. La Tribu usually lists the members present at a conference, together with any visitors, family members or other friends in attendance. Humorous descriptions of location or local "props" (cars, bicycles, binoculars, etc.) can also serve as mnemonic devices. Membership As of 2000, Bourbaki has had "about forty" members. Historically the group has numbered about ten to twelve members at any given point, although it was briefly (and officially) limited to nine members at the time of founding. Bourbaki's membership has been described in terms of generations: After the first three generations there were roughly twenty later members, not including current participants. Bourbaki has a custom of keeping its current membership secret, a practice meant to ensure that its output is presented as a collective, unified effort under the Bourbaki pseudonym, not attributable to any one author (e.g. for purposes of copyright or royalty payment). This secrecy is also intended to deter unwanted attention which could disrupt normal operations. However, former members freely discuss Bourbaki's internal practices upon departure. Prospective members are invited to conferences and styled as guinea pigs, a process meant to vet the newcomer's mathematical ability. In the event of agreement between the group and the prospect, the prospect eventually becomes a full member. The group is supposed to have an age limit: active members are expected to retire at (or about) 50 years of age. At a 1956 conference, Cartan read a letter from Weil which proposed a "gradual disappearance" of the founding members, forcing younger members to assume full responsibility for Bourbaki's operations. 
This rule is supposed to have resulted in a complete change of personnel by 1958. However, historian Liliane Beaulieu has been critical of the claim. She reported never having found written affirmation of the rule, and has indicated that there have been exceptions. The age limit is thought to express the founders' intent that the project should continue indefinitely, operated by people at their best mathematical ability—in the mathematical community, there is a widespread belief that mathematicians produce their best work while young. Among full members there is no official hierarchy; all operate as equals, having the ability to interrupt conference proceedings at any point, or to challenge any material presented. However, André Weil has been described as "first among equals" during the founding period, and was given some deference. On the other hand, the group has also poked fun at the idea that older members should be afforded greater respect. Bourbaki conferences have also been attended by members' family, friends, visiting mathematicians, and other non-members of the group. Bourbaki is not known ever to have had any female members. Influence and criticism Bourbaki was influential in 20th-century mathematics and had some interdisciplinary impact on the humanities and the arts, although the extent of the latter influence is a matter of dispute. The group has been praised and criticized for its method of presentation, its working style, and its choice of mathematical topics. Influence Bourbaki introduced several mathematical notations which have remained in use. Weil took the letter Ø of the Norwegian alphabet and used it to denote the empty set, ∅. This notation first appeared in the Summary of Results on the Theory of Sets, and remains in use. The words injective, surjective and bijective were introduced to refer to functions which satisfy certain properties. Bourbaki used simple language for certain geometric objects, naming them pavés (paving stones) and boules (balls) as opposed to "parallelotopes" or "hyperspheroids". Similarly, in its treatment of topological vector spaces, Bourbaki defined a barrel as a set which is convex, balanced, absorbing, and closed. The group were proud of this definition, believing that the shape of a wine barrel typified the mathematical object's properties. Bourbaki also employed a "dangerous bend" symbol in the margins of its text to indicate an especially difficult piece of material. Bourbaki enjoyed its greatest influence during the 1950s and 1960s, when installments of the Éléments were published frequently. Bourbaki had some interdisciplinary influence on other fields, including anthropology and psychology. This influence was in the context of structuralism, a school of thought in the humanities which stresses the relationships between objects over the objects themselves, pursued in various fields by other French intellectuals. In 1943, André Weil met the anthropologist Claude Lévi-Strauss in New York, where the two undertook a brief collaboration. At Lévi-Strauss' request, Weil wrote a brief appendix describing marriage rules for four classes of people within Aboriginal Australian society, using a mathematical model based on group theory. The result was published as an appendix in Lévi-Strauss' Elementary Structures of Kinship, a work examining family structures and the incest taboo in human cultures. In 1952, Jean Dieudonné and Jean Piaget participated in an interdisciplinary conference on mathematical and mental structures.
Dieudonné described mathematical "mother structures" in terms of Bourbaki's project: composition, neighborhood, and order. Piaget then gave a talk on children's mental processes, and considered that the psychological concepts he had just described were very similar to the mathematical ones just described by Dieudonné. According to Piaget, the two were "impressed with each other". The psychoanalyst Jacques Lacan liked Bourbaki's collaborative working style and proposed a similar collective group in psychology, an idea which did not materialize. Bourbaki was also cited by post-structuralist philosophers. In their joint work Anti-Oedipus, Gilles Deleuze and Félix Guattari presented a criticism of capitalism. The authors cited Bourbaki's use of the axiomatic method (with the purpose of establishing truth) as a distinct counter-example to management processes which instead seek economic efficiency. The authors said of Bourbaki's axiomatics that "they do not form a Taylor system", inverting the phrase used by Dieudonné in "The Architecture of Mathematics". In The Postmodern Condition, Jean-François Lyotard criticized the "legitimation of knowledge", the process by which statements become accepted as valid. As an example, Lyotard cited Bourbaki as a group which produces knowledge within a given system of rules. Lyotard contrasted Bourbaki's hierarchical, "structuralist" mathematics with the catastrophe theory of René Thom and the fractals of Benoit Mandelbrot, expressing preference for the latter "postmodern science" which problematized mathematics with "fracta, catastrophes, and pragmatic paradoxes". Although biographer Amir Aczel stressed Bourbaki's influence on other disciplines during the mid-20th century, Maurice Mashaal moderated the claims of Bourbaki's influence in the following terms: The impact of "structuralism" on mathematics itself was also criticized. The mathematical historian Leo Corry argued that Bourbaki's use of mathematical structures was unimportant within the Éléments, having been established in Theory of Sets and cited infrequently afterwards. Corry described the "structural" view of mathematics promoted by Bourbaki as an "image of knowledge"—a conception about a scientific discipline—as opposed to an item in the discipline's "body of knowledge", which refers to the actual scientific results in the discipline itself. Bourbaki also had some influence in the arts. The literary collective Oulipo was founded on 24 November 1960 under circumstances similar to Bourbaki's founding, with the members initially meeting in a restaurant. Although several members of Oulipo were mathematicians, the group's purpose was to create experimental literature by playing with language. Oulipo frequently employed mathematically-based constrained writing techniques, such as the S+7 method. Oulipo member Raymond Queneau attended a Bourbaki conference in 1962. In 2016, an anonymous group of economists collaboratively wrote a note alleging academic misconduct by the authors and editor of a paper published in the American Economic Review. The note was published under the name Nicolas Bearbaki in homage to Nicolas Bourbaki. In 2018, the American musical duo Twenty One Pilots released a concept album named Trench. The album's conceptual framework was the mythical city of "Dema" ruled by nine "bishops"; one of the bishops was named "Nico", short for Nicolas Bourbaki. Another of the bishops was named Andre, which may refer to André Weil. 
Following the album's release, there was a spike in internet searches for "Nicolas Bourbaki". Praise Bourbaki's work has been praised by some mathematicians. In a book review, Emil Artin described the Éléments in broad, positive terms: Among the volumes of the Éléments, Bourbaki's work on Lie Groups and Lie Algebras has been identified as "excellent", having become a standard reference on the topic. In particular, former member Armand Borel described the volume with chapters 4–6 as "one of the most successful books by Bourbaki". The success of this part of the work has been attributed to the fact that the books were composed while leading experts on the topic were Bourbaki members. Jean-Pierre Bourguignon expressed appreciation for the Séminaire Bourbaki, saying that he'd learned a large amount of material at its lectures, and referred to its printed lecture notes regularly. He also praised the Éléments for containing "some superb and very clever proofs". Criticism Bourbaki has also been criticized by several mathematicians—including its own former members—for a variety of reasons. Criticisms have included the choice of presentation of certain topics within the Éléments at the expense of others, dislike of the method of presentation for given topics, dislike of the group's working style, and a perceived elitist mentality around Bourbaki's project and its books, especially during the collective's most productive years in the 1950s and 1960s. Bourbaki's deliberations on the Éléments resulted in the inclusion of some topics, while others were not treated. When asked in a 1997 interview about topics left out of the Éléments, former member Pierre Cartier replied: Although Bourbaki had resolved to treat mathematics from its foundations, the group's eventual solution in terms of set theory was attended by several problems. Bourbaki's members were mathematicians as opposed to logicians, and therefore the collective had a limited interest in mathematical logic. As Bourbaki's members themselves said of the book on set theory, it was written "with pain and without pleasure, but we had to do it." Dieudonné personally remarked elsewhere that ninety-five percent of mathematicians "don't care a fig" for mathematical logic. In response, logician Adrian Mathias harshly criticized Bourbaki's foundational framework, noting that it did not take Gödel's results into account. Bourbaki also influenced the New Math, a failed reform in Western mathematics education at the elementary and secondary levels, which stressed abstraction over concrete examples. During the mid-20th century, reform in basic math education was spurred by a perceived need to create a mathematically literate workforce for the modern economy, and also to compete with the Soviet Union. In France, this led to the Lichnerowicz Commission of 1967, headed by André Lichnerowicz and including some (then-current and former) Bourbaki members. Although Bourbaki members had previously (and individually) reformed math instruction at the university level, they had less direct involvement with implementation of the New Math at the primary and secondary levels. New Math reforms resulted in instructional material which was incomprehensible to both students and teachers, failing to meet the cognitive needs of younger students. The attempted reform was harshly criticized by Dieudonné and also by brief founding Bourbaki participant Jean Leray. 
Apart from French mathematicians, the French reforms also met with harsh criticism from Soviet-born mathematician Vladimir Arnold, who argued that in his time as a student and teacher in Moscow, the teaching of mathematics was firmly rooted in analysis and geometry, and interwoven with problems from classical mechanics; hence, the French reforms could not be a legitimate attempt to emulate Soviet scientific education. In 1997, while speaking to a conference on mathematical teaching in Paris, he commented on Bourbaki by stating: "genuine mathematicians do not gang up, but the weak need gangs in order to survive," and suggested that Bourbaki's bonding over "super-abstractness" was similar to groups of mathematicians in the 19th century who had bonded over anti-Semitism. Dieudonné later regretted that Bourbaki's success had contributed to a snobbery for pure mathematics in France, at the expense of applied mathematics. In an interview, he said: "It is possible to say that there was no serious applied mathematics in France for forty years after Poincaré. There was even a snobbery for pure math. When one noticed a talented student, one would tell him 'You should do pure math.' On the other hand, one would advise a mediocre student to do applied math while thinking, 'It's all that he can do!' ... The truth is actually the reverse. You can't do good work in applied math until you can do good work in pure math." Claude Chevalley confirmed an elitist culture within Bourbaki, describing it as "an absolute certainty of our superiority over other mathematicians." Alexander Grothendieck also confirmed an elitist mentality within Bourbaki. Some mathematicians, especially geometers and applied mathematicians, found Bourbaki's influence to be stifling. Benoit Mandelbrot's decision to emigrate to the United States in 1958 was motivated in part by a desire to escape Bourbaki's influence in France. Several related criticisms of the Éléments have concerned its target audience and the intent of its presentation. Volumes of the Éléments begin with a note to the reader which says that the series "takes up mathematics at the beginning, and gives complete proofs" and that "the method of exposition we have chosen is axiomatic and abstract, and normally proceeds from the general to the particular." Despite the opening language, Bourbaki's intended audience are not absolute beginners in mathematics, but rather undergraduates, graduate students, and professors who are familiar with mathematical concepts. Claude Chevalley said that the Éléments are "useless for a beginner", and Pierre Cartier clarified that "The misunderstanding was that it should be a textbook for everybody. That was the big disaster." The work is divided into two halves. While the first half—the Structures fondamentales de l’analyse—treats established subjects, the second half deals with modern research areas like commutative algebra and spectral theory. This divide in the work is related to a historical change in the intent of the treatise. The Éléments' content consists of theorems, proofs, exercises and related commentary, common material in math textbooks. Despite this presentation, the first half was not written as original research but rather as a reorganized presentation of established knowledge. In this sense, the Éléments' first half was more akin to an encyclopedia than a textbook series. As Cartier remarked, "The misunderstanding was that many people thought it should be taught the way it was written in the books.
You can think of the first books of Bourbaki as an encyclopedia of mathematics... If you consider it as a textbook, it's a disaster." The strict, ordered presentation of material in the Éléments' first half was meant to form the basis for any further additions. However, developments in modern mathematical research have proven difficult to adapt in terms of Bourbaki's organizational scheme. This difficulty has been attributed to the fluid, dynamic nature of ongoing research which, being new, is not settled or fully understood. Bourbaki's style has been described as a particular scientific paradigm which has been superseded in a paradigm shift. For example, Ian Stewart cited Vaughan Jones' novel work in knot theory as an example of topology which was done without dependence on Bourbaki's system. Bourbaki's influence has declined over time; this decline has been partly attributed to the absence of certain modern topics—such as category theory—from the treatise. Although multiple criticisms have pointed to shortcomings in the collective's project, one has also pointed to its strength: Bourbaki was a "victim of its own success" in the sense that it accomplished what it set out to do, achieving its original goal of presenting a thorough treatise on modern mathematics. These factors prompted biographer Maurice Mashaal to conclude his treatment of Bourbaki in the following terms: See also Bourbaki–Witt theorem Jacobson–Bourbaki theorem Secret society Other collective mathematical pseudonyms Arthur Besse Blanche Descartes John Rainwater G. W. Peck Notes References Bibliography Presumptive author: Jean Dieudonné. Authorized translation of the book chapter L'architecture des mathématiques, appearing in English as a journal article. External links Official Website of L'Association des Collaborateurs de Nicolas Bourbaki Archives of the association 1934 establishments in France 1935 establishments in France Academic shared pseudonyms French mathematicians Large-scale mathematical formalization projects Organizations established in 1934 Organizations established in 1935 Pseudonymous mathematicians Secret societies in France Collaborative non-fiction
Nicolas Bourbaki
[ "Mathematics" ]
10,579
[ "Large-scale mathematical formalization projects", "Mathematical logic" ]
167,489
https://en.wikipedia.org/wiki/Performance%20Rating
The PR (performance rating, P-rating, or Pentium rating) system was a figure of merit developed by AMD, Cyrix, IBM Microelectronics and SGS-Thomson in the mid-1990s as a method of comparing their x86 processors to those of rival Intel. The idea was to consider instructions per cycle (IPC) in addition to the clock speed, so that the processors could be compared meaningfully with Intel's Pentium line, which ran at higher clock speeds but with lower overall IPC. Branding The first use of the PR system was in 1995, when AMD used it to assert that their AMD 5x86 processor was as fast as a Pentium running at 75 MHz. The designation "P75" was added to the chip to denote this. Later that year, Cyrix also adopted the PR system for its 6x86 and 6x86MX line of processors. These processors were faster than Pentiums of the same speed in some benchmarks, so Cyrix gave them a Performance Rating higher than their actual clock speed. Some AMD K5 models also used the PR system. AMD initially branded its AMD K6 processors with a "PR2" rating but dropped this after consumer confusion. AMD revived the branding for its Athlon XP, which was released in 2001. The efficient Athlon XP chips could perform better than similarly-clocked chips from Intel's competing Pentium 4 line-up, which depended on high clock speeds to overcome their low IPC. AMD therefore believed consumers would be swayed by the megahertz myth. These chips were rated against the Athlon Thunderbird but were popularly compared to the Pentium 4. As a result, the branding became colloquially known as a "Pentium Rating". Maximum PC criticized this as making it more difficult for power users to differentiate between the various Athlon XP chips. For example, two chips could be given the same "PR" branding but have significantly different engineering (cache size, bus speed, etc.), which would affect their performance at different tasks. See also iCOMP (index) References External links Processor Performance Rating (P-rating) Specification, February 1996. Uses Winstone 96. P-rating on wikichip AMD Rating systems Computer performance
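The logic of the rating can be illustrated with a small calculation. The Python sketch below is illustrative only: the linear throughput model (performance proportional to IPC times clock) and the IPC figures in the example are assumptions chosen for demonstration, not measured data and not the benchmark-based procedure the vendors actually used to assign PR numbers.

```python
# Hedged sketch: treats sustained throughput as IPC x clock, a simplification
# of how PR numbers were really derived (from benchmark suites).

def p_rating(chip_ipc: float, chip_clock_mhz: float, pentium_ipc: float) -> float:
    """Return the clock (MHz) of a reference Pentium with equal throughput."""
    chip_throughput = chip_ipc * chip_clock_mhz      # instructions per microsecond
    return chip_throughput / pentium_ipc             # equivalent Pentium MHz

# Hypothetical chip with 25% higher IPC than the reference Pentium:
# a 120 MHz part would merit a rating of about PR150 under this toy model.
print(round(p_rating(chip_ipc=1.25, chip_clock_mhz=120, pentium_ipc=1.0)))
```

Under this toy model, a chip that does more work per cycle earns a rating above its physical clock speed, which is the comparison the PR branding was intended to convey.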
Performance Rating
[ "Technology" ]
468
[ "Computer performance" ]
167,506
https://en.wikipedia.org/wiki/Echo%20sounding
Echo sounding or depth sounding is the use of sonar for ranging, normally to determine the depth of water (bathymetry). It involves transmitting acoustic waves into water and recording the time interval between emission and return of a pulse; the resulting time of flight, along with knowledge of the speed of sound in water, allows determining the distance between sonar and target. This information is then typically used for navigation or to obtain depths for charting purposes. Echo sounding can also be used for ranging to other targets, such as fish schools. Hydroacoustic assessments have traditionally employed mobile surveys from boats to evaluate fish biomass and spatial distributions. Conversely, fixed-location techniques use stationary transducers to monitor passing fish. The word sounding is used for all types of depth measurements, including those that don't use sound, and is unrelated in origin to the word sound in the sense of noise or tones. Echo sounding is a more rapid method of measuring depth than the previous technique of lowering a sounding line until it touched bottom. History German inventor Alexander Behm was granted German patent No. 282009 for the invention of echo sounding (device for measuring depths of the sea and distances and headings of ships or obstacles by means of reflected sound waves) on 22 July 1913. Meanwhile, in France, physicist Paul Langevin (connected with Marie Curie and better known for his research work in nuclear physics) was recruited by French Navy laboratories at the beginning of the First World War and conducted (then secret) research on active sonars for anti-submarine warfare (using a piezoelectric transmitter). His work was developed and implemented by other scientists and technicians such as Chilowski, Florisson and Pierre Marti. Though a fully operational échosondeur (sonar) was not ready for use in wartime, there were successful trials both off Toulon and in the English Channel as early as 1920, and French patents taken for civilian uses. Oceanographic ships and French high-sea fishing assistance vessels were equipped with Langevin-Florisson and Langevin-Marti recording sonars as early as the mid/late 1920s. One of the first commercial echo sounding units was the Fessenden Fathometer, which used the Fessenden oscillator to generate sound waves. This was first installed by the Submarine Signal Company in 1924 on the M&M liner SS Berkshire. Technique Distance is measured by multiplying half the time from the signal's outgoing pulse to its return by the speed of sound in water, which is approximately 1.5 kilometres per second. The speed of sound will vary slightly depending on temperature, pressure and salinity; and for precise applications of echosounding, such as hydrography, the speed of sound must also be measured, typically by deploying a sound velocity probe in the water. Echo sounding is a special purpose application of sonar used to locate the bottom. Since a historical pre-SI unit of water depth was the fathom, an instrument used for determining water depth is sometimes called a fathometer. Most charted ocean depths are based on an average or standard sound speed. Where greater accuracy is required, average and even seasonal standards may be applied to ocean regions. For high accuracy depths, usually restricted to special purpose or scientific surveys, a sensor may be lowered to measure the temperature, pressure and salinity.
These factors are used to estimate more accurately the actual sound speed in the local water column. This technique is often used by the US Office of Coast Survey for navigational surveys of US coastal waters. Types Single beam A single-beam echo sounder is one of the simplest and most fundamental types of underwater sonar. They are ubiquitous in the boating world and used on a number of different marine robotic vehicles. It operates by using a transducer to emit a pulse through the water and listen for echos to return. Using that data, it's able to determine the distance from the strongest echo, which can be the seafloor, a concrete structure, or other larger obstacle. A fishfinder is an echo sounding device used by both recreational and commercial fishers. Multibeam Common use As well as an aid to navigation (most larger vessels will have at least a simple depth sounder), echo sounding is commonly used for fishing. Variations in elevation often represent places where fish congregate. Schools of fish will also register. Hydrography In areas where detailed bathymetry is required, a precise echo sounder may be used for the work of hydrography. There are many considerations when evaluating such a system, not limited to the vertical accuracy, resolution, acoustic beamwidth of the transmit/receive beam and the acoustic frequency of the transducer. The majority of hydrographic echosounders are dual frequency, meaning that a low frequency pulse (typically around 24 kHz) can be transmitted at the same time as a high frequency pulse (typically around 200 kHz). As the two frequencies are discrete, the two return signals do not typically interfere with each other. Dual frequency echosounding has many advantages, including the ability to identify a vegetation layer or a layer of soft mud on top of a layer of rock. Most hydrographic operations use a 200 kHz transducer, which is suitable for inshore work up to 100 metres in depth. Deeper water requires a lower frequency transducer as the acoustic signal of lower frequencies is less susceptible to attenuation in the water column. Commonly used frequencies for deep water sounding are 33 kHz and 24 kHz. The beamwidth of the transducer is also a consideration for the hydrographer, as to obtain the best resolution of the data gathered a narrow beamwidth is preferable. The higher the operating frequency, the narrower the beamwidth. Therefore, it is especially important when sounding in deep water, as the resulting footprint of the acoustic pulse can be very large once it reaches a distant sea floor. A multispectral multibeam echosounder is an extension of a dual frequency vertical beam echosounder in that, as well as measuring two soundings directly below the sonar at two different frequencies; it measures multiple soundings at multiple frequencies, at multiple different grazing angles, and multiple different locations on the seabed. These systems are detailed further in the section called multibeam echosounder. Echo sounders are used in laboratory applications to monitor sediment transport, scour and erosion processes in scale models (hydraulic models, flumes etc.). These can also be used to create plots of 3D contours. Standards for hydrographic echo sounding The required precision and accuracy of the hydrographic echo sounder is defined by the requirements of the International Hydrographic Organization (IHO) for surveys that are to be undertaken to IHO standards. These values are contained within IHO publication S44. 
In order to meet these standards, the surveyor must consider not only the vertical and horizontal accuracy of the echo sounder and transducer, but the survey system as a whole. A motion sensor may be used, specifically its heave component in single-beam echosounding, to reduce soundings for the motion of the vessel experienced on the water's surface. Once all of the uncertainties of each sensor are established, the hydrographer will create an uncertainty budget to determine whether the survey system meets the requirements laid down by IHO. Different hydrographic organisations will have their own set of field procedures and manuals to guide their surveyors to meet the required standards. Two examples are the US Army Corps of Engineers publication EM 1110-2-1003, and the NOAA 'Field Procedures Manual'. See also Acoustical oceanography Alexander Behm – inventor AUV Bathymeter Depth gauge Fessenden oscillator Fisheries acoustics Hydroacoustics Hydrographic survey Sonar Depth sounding Underwater acoustics References External links "How Echoes Tell Depth of Water Under Ship" Popular Mechanics Monthly, July 1930 – drawing of details of early depth finders using echoes ELAC (1982) An Introduction to Echosounding. Honeywell-ELAC-Nautik GmbH, Kiel, 88 pp, (pdf 27.5 MB) Surveying Oceanographic instrumentation
Echo sounding
[ "Technology", "Engineering" ]
1,664
[ "Surveying", "Civil engineering", "Oceanographic instrumentation", "Measuring instruments" ]
167,513
https://en.wikipedia.org/wiki/Sphalerite
Sphalerite is a sulfide mineral with the chemical formula (Zn,Fe)S. It is the most important ore of zinc. Sphalerite is found in a variety of deposit types, but it occurs primarily in sedimentary exhalative, Mississippi-Valley type, and volcanogenic massive sulfide deposits. It is found in association with galena, chalcopyrite, pyrite (and other sulfides), calcite, dolomite, quartz, rhodochrosite, and fluorite. German geologist Ernst Friedrich Glocker discovered sphalerite in 1847, naming it based on the Greek word sphaleros, meaning "deceiving", due to the difficulty of identifying the mineral. In addition to zinc, sphalerite is an ore of cadmium, gallium, germanium, and indium. Miners have been known to refer to sphalerite as zinc blende, black-jack, and ruby blende. Marmatite is an opaque black variety with a high iron content. Crystal habit and structure Sphalerite crystallizes in the face-centered cubic zincblende crystal structure, which is named after the mineral. This structure is a member of the hextetrahedral crystal class (space group F4̄3m, No. 216). In the crystal structure, both the sulfur and the zinc or iron ions occupy the points of a face-centered cubic lattice, with the two lattices displaced from each other such that the zinc and iron are tetrahedrally coordinated to the sulfur ions, and vice versa. Minerals similar to sphalerite include those in the sphalerite group, consisting of sphalerite, coloradoite, hawleyite, metacinnabar, stilleite and tiemannite. The structure is closely related to the structure of diamond. The hexagonal polymorph of sphalerite is wurtzite, and the trigonal polymorph is matraite. Wurtzite is the higher temperature polymorph, stable at temperatures above . The lattice constant for zinc sulfide in the zinc blende crystal structure is 0.541 nm. Sphalerite has been found as a pseudomorph, taking the crystal structure of galena, tetrahedrite, barite and calcite. Sphalerite can have Spinel Law twins, where the twin axis is [111]. The chemical formula of sphalerite is (Zn,Fe)S; the iron content generally increases with increasing formation temperature and can reach up to 40%. The material can be considered a ternary compound between the binary endpoints ZnS and FeS with composition ZnxFe(1-x)S, where x can range from 1 (pure ZnS) to 0.6. All natural sphalerite contains concentrations of various impurities, which generally substitute for zinc in the cation position in the lattice; the most common cation impurities are cadmium, mercury and manganese, but gallium, germanium and indium may also be present in relatively high concentrations (hundreds to thousands of ppm). Cadmium can replace up to 1% of the zinc, and manganese is generally found in sphalerite with high iron abundances. Sulfur in the anion position can be substituted by selenium and tellurium. The abundances of these impurities are controlled by the conditions under which the sphalerite formed; formation temperature, pressure, element availability and fluid composition are important controls. Properties Physical properties Sphalerite possesses perfect dodecahedral cleavage, having six cleavage planes. In pure form, it is a semiconductor, but transitions to a conductor as the iron content increases. It has a hardness of 3.5 to 4 on the Mohs scale of mineral hardness. It can be distinguished from similar minerals by its perfect cleavage, its distinctive resinous luster, and the reddish-brown streak of the darker varieties.
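As a rough illustration of the ZnxFe(1-x)S composition range described under Crystal habit and structure, the short sketch below converts the zinc mole fraction x on the cation site into an iron mass fraction. This is a minimal example: the atomic masses are standard values, and reading the quoted 40% iron limit as the cation-site mole fraction of Fe (i.e. x = 0.6) is an assumption rather than a statement from the text.

```python
# Minimal sketch: iron mass fraction of Zn(x)Fe(1-x)S for a given zinc mole
# fraction x on the cation site (x ranges from 1, pure ZnS, down to about 0.6).
ZN, FE, S = 65.38, 55.845, 32.06  # approximate standard atomic masses, g/mol

def iron_mass_fraction(x):
    """Mass fraction of Fe in Zn(x)Fe(1-x)S."""
    molar_mass = x * ZN + (1 - x) * FE + S
    return (1 - x) * FE / molar_mass

print(round(iron_mass_fraction(1.0), 3))  # 0.0   (pure ZnS contains no iron)
print(round(iron_mass_fraction(0.6), 3))  # ~0.239, i.e. roughly 24 wt% Fe
```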
Optical properties Pure zinc sulfide is a wide-bandgap semiconductor, with a bandgap of about 3.54 electron volts, which makes the pure material transparent in the visible spectrum. Increasing iron content will make the material opaque, while various impurities can give the crystal a variety of colors. In thin section, sphalerite exhibits very high positive relief and appears colorless to pale yellow or brown, with no pleochroism. The refractive index of sphalerite (as measured via sodium light, average wavelength 589.3 nm) ranges from 2.37 when it is pure ZnS to 2.50 when there is 40% iron content. Sphalerite is isotropic under cross-polarized light; however, sphalerite can experience birefringence if intergrown with its polymorph wurtzite; the birefringence can increase from 0 (0% wurtzite) up to 0.022 (100% wurtzite). Depending on the impurities, sphalerite will fluoresce under ultraviolet light. Sphalerite can also be triboluminescent, with a characteristic yellow-orange triboluminescence. Typically, specimens cut into end-slabs are ideal for displaying this property. Varieties Gemmy, colorless to pale green sphalerite crystals from Franklin, New Jersey (see Franklin Furnace) are highly fluorescent orange and/or blue under longwave ultraviolet light and are known as cleiophane, an almost pure ZnS variety. Cleiophane contains less than 0.1% of iron in the sphalerite crystal structure. Marmatite or christophite is an opaque black variety of sphalerite and its coloring is due to high quantities of iron, which can reach up to 25%; marmatite is named after the Marmato mining district in Colombia and christophite is named for the St. Christoph mine in Breitenbrunn, Saxony. Neither marmatite nor cleiophane is recognized by the International Mineralogical Association (IMA). Red, orange or brownish-red sphalerite is termed ruby blende or ruby zinc, whereas dark colored sphalerite is termed black-jack. Deposit types Sphalerite is amongst the most common sulfide minerals and is found worldwide. Its wide distribution reflects the many types of deposits in which it appears: it is found in skarns, hydrothermal deposits, sedimentary beds, volcanogenic massive sulfide deposits (VMS), Mississippi-Valley type deposits (MVT), granite and coal. Sedimentary exhalative Approximately 50% of zinc (from sphalerite) and lead come from sedimentary exhalative (SEDEX) deposits, which are stratiform Pb-Zn sulfides that form at seafloor vents. The metals precipitate from hydrothermal fluids and are hosted by shales, carbonates and organic-rich siltstones in back-arc basins and failed continental rifts. The main ore minerals in SEDEX deposits are sphalerite, galena, pyrite, pyrrhotite and marcasite, with minor sulfosalts such as tetrahedrite-freibergite and boulangerite; the zinc + lead grade typically ranges between 10 and 20%. Important SEDEX mines are Red Dog in Alaska, Sullivan Mine in British Columbia, Mount Isa and Broken Hill in Australia and Mehdiabad in Iran. Mississippi-Valley type Similar to SEDEX, Mississippi-Valley type (MVT) deposits are also Pb-Zn deposits which contain sphalerite. However, they only account for 15–20% of zinc and lead, are 25% smaller in tonnage than SEDEX deposits and have lower grades of 5–10% Pb + Zn. MVT deposits form from the replacement of carbonate host rocks such as dolostone and limestone by ore minerals; they are located in platforms and foreland thrust belts.
Furthermore, they are stratabound, typically Phanerozoic in age and epigenetic (forming after the lithification of the carbonate host rocks). The ore minerals are the same as SEDEX deposits: sphalerite, galena, pyrite, pyrrhotite and marcasite, with minor sulfosalts. Mines that contain MVT deposits include Polaris in the Canadian Arctic, Mississippi River in the United States, Pine Point in the Northwest Territories, and Admiral Bay in Australia. Volcanogenic massive sulfide Volcanogenic massive sulfide (VMS) deposits can be Cu-Zn- or Zn-Pb-Cu-rich, and account for 25% of Zn in reserves. There are various types of VMS deposits with a range of regional contexts and host rock compositions; a common characteristic is that they are all hosted by submarine volcanic rocks. They form from metals such as copper and zinc being transferred by hydrothermal fluids (modified seawater) which leach them from volcanic rocks in the oceanic crust; the metal-saturated fluid rises through fractures and faults to the surface, where it cools and deposits the metals as a VMS deposit. The most abundant ore minerals are pyrite, chalcopyrite, sphalerite and pyrrhotite. Mines that contain VMS deposits include Kidd Creek in Ontario, the Urals in Russia, Troodos in Cyprus, and Besshi in Japan. Localities The top producers of sphalerite include the United States, Russia, Mexico, Germany, Australia, Canada, China, Ireland, Peru, Kazakhstan and England. Sources of high quality crystals include: Uses Metal ore Sphalerite is an important ore of zinc; around 95% of all primary zinc is extracted from sphalerite ore. However, due to its variable trace element content, sphalerite is also an important source of several other metals such as cadmium, gallium, germanium, and indium which replace zinc. The ore was originally called blende by miners (from German blind or deceiving) because it resembles galena but yields no lead. Brass and bronze The zinc in sphalerite is used to produce brass, an alloy of copper with 3–45% zinc. Major element alloy compositions of brass objects provide evidence that sphalerite was being used to produce brass in the Islamic world as far back as the medieval period, between the 7th and 16th centuries CE. Sphalerite may have also been used during the cementation process of brass in Northern China during the 12th–13th centuries CE (Jin Dynasty). Besides brass, the zinc in sphalerite can also be used to produce certain types of bronze; bronze is dominantly copper which is alloyed with other metals such as tin, zinc, lead, nickel, iron and arsenic. Other Yule Marble – sphalerite is found as inclusions in Yule marble, which is used as a building material for the Lincoln Memorial and the Tomb of the Unknown Soldier. Galvanized iron – zinc from sphalerite is used as a protective coating to prevent corrosion and rusting; it is used on power transmission towers, nails and automobiles. Batteries. Gemstone. Gallery See also List of minerals References Further reading Dana's Manual of Mineralogy Webster, R., Read, P. G. (Ed.) (2000). Gems: Their sources, descriptions and identification (5th ed.), p. 386. Butterworth-Heinemann, Great Britain. External links The sphalerite structure Possible relation of Sphalerite to origins of life and precursor chemicals in 'Primordial Soup' Minerals.net Minerals of Franklin, NJ Gemstones Sulfide minerals Zinc minerals Cubic minerals Minerals in space group 216 Luminescent minerals Zincblende crystal structure Minerals described in 1847 Blendes
Sphalerite
[ "Physics", "Chemistry" ]
2,418
[ "Luminescence", "Luminescent minerals", "Materials", "Gemstones", "Matter" ]
167,520
https://en.wikipedia.org/wiki/Guyot
In marine geology, a guyot (), also called a tablemount, is an isolated underwater volcanic mountain (seamount) with a flat top more than below the surface of the sea. The diameters of these flat summits can exceed . Guyots are most commonly found in the Pacific Ocean, but they have been identified in all the oceans except the Arctic Ocean. They are analogous to tables (such as mesas) on land. History Guyots were first recognized in 1945 by Harry Hammond Hess, who collected data using echo-sounding equipment on a ship he commanded during World War II. His data showed that some undersea mountains had flat tops. Hess called these undersea mountains "guyots", after the 19th-century geographer Arnold Henry Guyot. Hess postulated that they were once volcanic islands that had been beheaded by wave action, even though they now lie deep below sea level. This idea was used to help bolster the theory of plate tectonics. Formation Guyots show evidence of having once been above the surface, with gradual subsidence through stages from fringing-reefed mountain, to coral atoll, and finally to a flat-topped submerged mountain. Seamounts are made by extrusion of lavas piped upward in stages from sources within the Earth's mantle, usually hotspots, to vents on the seafloor. The volcanism invariably ceases after a time, and other processes dominate. When an undersea volcano grows high enough to be near or breach the ocean surface, wave action or coral reef growth tend to create a flat-topped edifice. However, all ocean crust and guyots form from hot magma or rock, which cools over time. As the lithosphere that the future guyot rides on slowly cools, it becomes denser and sinks lower into Earth's mantle, through the process of isostasy. In addition, the erosive effects of waves and currents are found mostly near the surface: the tops of guyots generally lie below this higher-erosion zone. This is the same process that gives rise to higher seafloor topography at oceanic ridges, such as the Mid-Atlantic Ridge in the Atlantic Ocean, and deeper ocean at abyssal plains and oceanic trenches, such as the Mariana Trench. Thus, the island or shoal that will eventually become a guyot slowly subsides over millions of years. In the right climatic regions, coral growth can sometimes keep pace with the subsidence, resulting in coral atoll formation, but eventually the corals dip too deep to grow and the island becomes a guyot. The greater the amount of time that passes, the deeper the guyots become. Seamounts provide data on movements of tectonic plates on which they ride, and on the rheology of the underlying lithosphere. The trend of a seamount chain traces the direction of motion of the lithospheric plate over a more or less fixed heat source in the underlying asthenosphere, the part of the Earth's mantle beneath the lithosphere. There are thought to be an estimated 50,000 seamounts in the Pacific basin. The Hawaiian–Emperor seamount chain is an excellent example of an entire volcanic chain undergoing this process, from active volcanism, to coral reef growth, to atoll formation, to subsidence of the islands and becoming guyots. Characteristics The steepness gradient of most guyots is about 20 degrees. To technically be considered a guyot or tablemount, they must stand at least tall. One guyot in particular, the Great Meteor Tablemount in the Northeast Atlantic Ocean, stands at more than high, with a diameter of . However, there are many undersea mounts that can range from just less than to around .
Very large oceanic volcanic constructions, hundreds of kilometres across, are called oceanic plateaus. Guyots have a mean area of , which is much larger than typical seamounts, which have a mean area of . There are 283 known guyots in the world's oceans, with the North Pacific having 119, the South Pacific 77, the South Atlantic 43, the Indian Ocean 28, the North Atlantic eight, the Southern Ocean six, and the Mediterranean Sea two; there are none known in the Arctic Ocean, though one is found along the Fram Strait off northeastern Greenland. Guyots are also associated with specific lifeforms and varying amounts of organic matter. Local increases in chlorophyll a, enhanced carbon incorporation rates and changes in phytoplankton species composition are associated with guyots and other seamounts. See also Evolution of Hawaiian volcanoes Kodiak–Bowie Seamount chain New England Seamounts References External links NOAA: What is a guyot? Physical oceanography Plate tectonics Seamounts
Guyot
[ "Physics" ]
979
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
167,540
https://en.wikipedia.org/wiki/Protein%20production
Protein production is the biotechnological process of generating a specific protein. It is typically achieved by the manipulation of gene expression in an organism such that it expresses large amounts of a recombinant gene. This includes the transcription of the recombinant DNA to messenger RNA (mRNA), the translation of mRNA into polypeptide chains, which are ultimately folded into functional proteins and may be targeted to specific subcellular or extracellular locations. Protein production systems (also known as expression systems) are used in the life sciences, biotechnology, and medicine. Molecular biology research uses numerous proteins and enzymes, many of which are from expression systems, particularly DNA polymerase for PCR, reverse transcriptase for RNA analysis and restriction endonucleases for cloning; expression systems are also used to make proteins that are screened in drug discovery as biological targets or as potential drugs themselves. There are also significant applications for expression systems in industrial fermentation, notably the production of biopharmaceuticals such as human insulin to treat diabetes, and to manufacture enzymes. Protein production systems Commonly used protein production systems include those derived from bacteria, yeast, baculovirus/insect, mammalian cells, and more recently filamentous fungi such as Myceliophthora thermophila. When biopharmaceuticals are produced with one of these systems, process-related impurities termed host cell proteins also end up in the final product in trace amounts. Cell-based systems The oldest and most widely used expression systems are cell-based and may be defined as the "combination of an expression vector, its cloned DNA, and the host for the vector that provide a context to allow foreign gene function in a host cell, that is, produce proteins at a high level". Overexpression is an abnormally and excessively high level of gene expression which produces a pronounced gene-related phenotype. There are many ways to introduce foreign DNA to a cell for expression, and many different host cells may be used for expression; each expression system has distinct advantages and liabilities. Expression systems are normally referred to by the host and the DNA source or the delivery mechanism for the genetic material. For example, common hosts are bacteria (such as E. coli, B. subtilis), yeast (such as S. cerevisiae) or eukaryotic cell lines. Common DNA sources and delivery mechanisms are viruses (such as baculovirus, retrovirus, adenovirus), plasmids, artificial chromosomes and bacteriophage (such as lambda). The best expression system depends on the gene involved; for example, Saccharomyces cerevisiae is often preferred for proteins that require significant posttranslational modification. Insect or mammal cell lines are used when human-like splicing of mRNA is required. Nonetheless, bacterial expression has the advantage of easily producing large amounts of protein, which is required for X-ray crystallography or nuclear magnetic resonance experiments for structure determination. Because bacteria are prokaryotes, they are not equipped with the full enzymatic machinery to accomplish the required post-translational modifications or molecular folding. Hence, multi-domain eukaryotic proteins expressed in bacteria are often non-functional. Also, many proteins become insoluble as inclusion bodies that are difficult to recover without harsh denaturants and subsequent cumbersome protein-refolding.
To address these concerns, expression systems using various eukaryotic cells were developed for applications requiring proteins folded as in, or closer to, eukaryotic organisms: cells of plants (e.g. tobacco), of insects or of mammals (e.g. bovines) are transfected with genes and cultured in suspension and even as tissues or whole organisms, to produce fully folded proteins. Mammalian in vivo expression systems have, however, low yield and other limitations (they are time-consuming and can be toxic to host cells). To combine the high yield/productivity and scalable protein features of bacteria and yeast with the advanced epigenetic features of plant, insect and mammalian systems, other protein production systems have been developed using unicellular eukaryotes (e.g. non-pathogenic Leishmania cells). Bacterial systems Escherichia coli E. coli is one of the most widely used expression hosts, and DNA is normally introduced in a plasmid expression vector. The techniques for overexpression in E. coli are well developed and work by increasing the number of copies of the gene or increasing the binding strength of the promoter region, thereby assisting transcription. For example, a DNA sequence for a protein of interest could be cloned or subcloned into a high copy-number plasmid containing the lac (often LacUV5) promoter, which is then transformed into the bacterium E. coli. Addition of IPTG (a lactose analog) activates the lac promoter and causes the bacteria to express the protein of interest. E. coli strains BL21 and BL21(DE3) are commonly used for protein production. As members of the B lineage, they lack the Lon and OmpT proteases, protecting the produced proteins from degradation. The DE3 prophage found in BL21(DE3) provides T7 RNA polymerase (driven by the LacUV5 promoter), allowing for vectors with the T7 promoter to be used instead. Corynebacterium Non-pathogenic species of the gram-positive Corynebacterium are used for the commercial production of various amino acids. The C. glutamicum species is widely used for producing glutamate and lysine, components of human food, animal feed and pharmaceutical products. Expression of functionally active human epidermal growth factor has been achieved in C. glutamicum, thus demonstrating a potential for industrial-scale production of human proteins. Expressed proteins can be targeted for secretion through either the general, secretory pathway (Sec) or the twin-arginine translocation pathway (Tat). Unlike gram-negative bacteria, the gram-positive Corynebacterium lack lipopolysaccharides that function as antigenic endotoxins in humans. Pseudomonas fluorescens The non-pathogenic, gram-negative bacterium Pseudomonas fluorescens is used for high-level production of recombinant proteins, commonly for the development of bio-therapeutics and vaccines. P. fluorescens is a metabolically versatile organism, allowing for high-throughput screening and rapid development of complex proteins. P. fluorescens is best known for its ability to rapidly and successfully produce high titers of active, soluble protein. Eukaryotic systems Yeasts Expression systems using either S. cerevisiae or Pichia pastoris allow stable and lasting production of proteins that are processed similarly to those in mammalian cells, at high yield, in chemically defined media. Filamentous fungi Filamentous fungi, especially Aspergillus and Trichoderma, have long been used to produce diverse industrial enzymes from their own genomes ("native", "homologous") and from recombinant DNA ("heterologous").
More recently, Myceliophthora thermophila C1 has been developed into an expression platform for screening and production of native and heterologous proteins. The expression system C1 shows a low viscosity morphology in submerged culture, enabling the use of complex growth and production media. C1 also does not "hyperglycosylate" heterologous proteins, as Aspergillus and Trichoderma tend to do. Baculovirus-infected cells Baculovirus-infected insect cells (Sf9, Sf21, High Five strains) or mammalian cells (HeLa, HEK 293) allow production of glycosylated or membrane proteins that cannot be produced using fungal or bacterial systems. It is useful for production of proteins in high quantity. Genes are not expressed continuously because infected host cells eventually lyse and die during each infection cycle. Non-lytic insect cell expression Non-lytic insect cell expression is an alternative to the lytic baculovirus expression system. In non-lytic expression, vectors are transiently or stably transfected into the chromosomal DNA of insect cells for subsequent gene expression. This is followed by selection and screening of recombinant clones. The non-lytic system has been used to give higher protein yield and quicker expression of recombinant genes compared to baculovirus-infected cell expression. Cell lines used for this system include: Sf9, Sf21 from Spodoptera frugiperda cells, Hi-5 from Trichoplusia ni cells, and Schneider 2 cells and Schneider 3 cells from Drosophila melanogaster cells. With this system, cells do not lyse and several cultivation modes can be used. Additionally, protein production runs are reproducible. This system gives a homogeneous product. A drawback of this system is the requirement of an additional screening step for selecting viable clones. Excavata Leishmania tarentolae (which cannot infect mammals) expression systems allow stable and lasting production of proteins at high yield, in chemically defined media. Produced proteins exhibit fully eukaryotic post-translational modifications, including glycosylation and disulfide bond formation. Mammalian systems The most common mammalian expression systems are Chinese hamster ovary (CHO) and human embryonic kidney (HEK) cells. Chinese hamster ovary cell Mouse myeloma lymphoblastoid (e.g. NS0 cell) Fully Human Human embryonic kidney cells (HEK-293) Human embryonic retinal cells (Crucell's Per.C6) Human amniocyte cells (Glycotope and CEVEC) Cell-free systems Cell-free production of proteins is performed in vitro using purified RNA polymerase, ribosomes, tRNA and ribonucleotides. These reagents may be produced by extraction from cells or from a cell-based expression system. Due to the low expression levels and high cost of cell-free systems, cell-based systems are more widely used. See also Cellosaurus, a database of cell lines Gene expression Single-cell protein Protein purification Precision fermentation Host cell protein List of recombinant proteins References Further reading External links Gene expression Biotechnology
Protein production
[ "Chemistry", "Biology" ]
2,178
[ "Gene expression", "Biotechnology", "Molecular genetics", "Cellular processes", "nan", "Molecular biology", "Biochemistry" ]
167,544
https://en.wikipedia.org/wiki/Transcription%20%28biology%29
Transcription is the process of copying a segment of DNA into RNA for the purpose of gene expression. Some segments of DNA are transcribed into RNA molecules that can encode proteins, called messenger RNA (mRNA). Other segments of DNA are transcribed into RNA molecules called non-coding RNAs (ncRNAs). Both DNA and RNA are nucleic acids, which use base pairs of nucleotides as a complementary language. During transcription, a DNA sequence is read by an RNA polymerase, which produces a complementary, antiparallel RNA strand called a primary transcript. In virology, the term transcription is used when referring to mRNA synthesis from a viral RNA molecule. The genome of many RNA viruses is composed of negative-sense RNA which acts as a template for positive-sense viral messenger RNA, a necessary step in the synthesis of viral proteins needed for viral replication. This process is catalyzed by a viral RNA dependent RNA polymerase. Background A DNA transcription unit encoding for a protein may contain both a coding sequence, which will be translated into the protein, and regulatory sequences, which direct and regulate the synthesis of that protein. The regulatory sequence before (upstream from) the coding sequence is called the five prime untranslated region (5'UTR); the sequence after (downstream from) the coding sequence is called the three prime untranslated region (3'UTR). As opposed to DNA replication, transcription results in an RNA complement that includes the nucleotide uracil (U) in all instances where thymine (T) would have occurred in a DNA complement. Only one of the two DNA strands serves as a template for transcription. The antisense strand of DNA is read by RNA polymerase from the 3' end to the 5' end during transcription (3' → 5'). The complementary RNA is created in the opposite direction, in the 5' → 3' direction, matching the sequence of the sense strand except switching uracil for thymine. This directionality is because RNA polymerase can only add nucleotides to the 3' end of the growing mRNA chain. This use of only the 3' → 5' DNA strand eliminates the need for the Okazaki fragments that are seen in DNA replication. This also removes the need for an RNA primer to initiate RNA synthesis, as is the case in DNA replication. The non-template (sense) strand of DNA is called the coding strand, because its sequence is the same as the newly created RNA transcript (except for the substitution of uracil for thymine). This is the strand that is used by convention when presenting a DNA sequence. Transcription has some proofreading mechanisms, but they are fewer and less effective than the controls for copying DNA. As a result, transcription has a lower copying fidelity than DNA replication. Major steps Transcription is divided into initiation, promoter escape, elongation, and termination. Setting up for transcription Enhancers, transcription factors, Mediator complex, and DNA loops in mammalian transcription Setting up for transcription in mammals is regulated by many cis-regulatory elements, including core promoter and promoter-proximal elements that are located near the transcription start sites of genes. Core promoters combined with general transcription factors are sufficient to direct transcription initiation, but generally have low basal activity. Other important cis-regulatory modules are localized in DNA regions that are distant from the transcription start sites. These include enhancers, silencers, insulators and tethering elements.
Among this constellation of elements, enhancers and their associated transcription factors have a leading role in the initiation of gene transcription. An enhancer localized in a DNA region distant from the promoter of a gene can have a very large effect on gene transcription, with some genes undergoing up to 100-fold increased transcription due to an activated enhancer. Enhancers are regions of the genome that are major gene-regulatory elements. Enhancers control cell-type-specific gene transcription programs, most often by looping through long distances to come in physical proximity with the promoters of their target genes. While there are hundreds of thousands of enhancer DNA regions, for a particular type of tissue only specific enhancers are brought into proximity with the promoters that they regulate. In a study of brain cortical neurons, 24,937 loops were found, bringing enhancers to their target promoters. Multiple enhancers, each often tens or hundreds of thousands of nucleotides distant from their target genes, loop to their target gene promoters and can coordinate with each other to control transcription of their common target gene. The schematic illustration in this section shows an enhancer looping around to come into close physical proximity with the promoter of a target gene. The loop is stabilized by a dimer of a connector protein (e.g. dimer of CTCF or YY1), with one member of the dimer anchored to its binding motif on the enhancer and the other member anchored to its binding motif on the promoter (represented by the red zigzags in the illustration). Several cell function specific transcription factors (there are about 1,600 transcription factors in a human cell) generally bind to specific motifs on an enhancer and a small combination of these enhancer-bound transcription factors, when brought close to a promoter by a DNA loop, govern the level of transcription of the target gene. Mediator (a complex usually consisting of about 26 proteins in an interacting structure) communicates regulatory signals from enhancer DNA-bound transcription factors directly to the RNA polymerase II (pol II) enzyme bound to the promoter. Enhancers, when active, are generally transcribed from both strands of DNA with RNA polymerases acting in two different directions, producing two enhancer RNAs (eRNAs) as illustrated in the Figure. An inactive enhancer may be bound by an inactive transcription factor. Phosphorylation of the transcription factor may activate it and that activated transcription factor may then activate the enhancer to which it is bound (see small red star representing phosphorylation of a transcription factor bound to an enhancer in the illustration). An activated enhancer begins transcription of its RNA before activating transcription of messenger RNA from its target gene. CpG island methylation and demethylation Transcription regulation at about 60% of promoters is also controlled by methylation of cytosines within CpG dinucleotides (where 5' cytosine is followed by 3' guanine or CpG sites). 5-methylcytosine (5-mC) is a methylated form of the DNA base cytosine (see Figure). 5-mC is an epigenetic marker found predominantly within CpG sites. About 28 million CpG dinucleotides occur in the human genome. In most tissues of mammals, on average, 70% to 80% of CpG cytosines are methylated (forming 5-methylCpG or 5-mCpG). However, unmethylated cytosines within 5'cytosine-guanine 3' sequences often occur in groups, called CpG islands, at active promoters.
About 60% of promoter sequences have a CpG island while only about 6% of enhancer sequences have a CpG island. CpG islands constitute regulatory sequences, since if CpG islands are methylated in the promoter of a gene this can reduce or silence gene transcription. DNA methylation regulates gene transcription through interaction with methyl binding domain (MBD) proteins, such as MeCP2, MBD1 and MBD2. These MBD proteins bind most strongly to highly methylated CpG islands. These MBD proteins have both a methyl-CpG-binding domain as well as a transcription repression domain. They bind to methylated DNA and guide or direct protein complexes with chromatin remodeling and/or histone modifying activity to methylated CpG islands. MBD proteins generally repress local chromatin such as by catalyzing the introduction of repressive histone marks, or creating an overall repressive chromatin environment through nucleosome remodeling and chromatin reorganization. As noted in the previous section, transcription factors are proteins that bind to specific DNA sequences in order to regulate the expression of a gene. The binding sequence for a transcription factor in DNA is usually about 10 or 11 nucleotides long. As summarized in 2009, Vaquerizas et al. indicated there are approximately 1,400 different transcription factors encoded in the human genome by genes that constitute about 6% of all human protein encoding genes. About 94% of transcription factor binding sites (TFBSs) that are associated with signal-responsive genes occur in enhancers while only about 6% of such TFBSs occur in promoters. EGR1 protein is a particular transcription factor that is important for regulation of methylation of CpG islands. An EGR1 transcription factor binding site is frequently located in enhancer or promoter sequences. There are about 12,000 binding sites for EGR1 in the mammalian genome and about half of EGR1 binding sites are located in promoters and half in enhancers. The binding of EGR1 to its target DNA binding site is insensitive to cytosine methylation in the DNA. While only small amounts of EGR1 transcription factor protein are detectable in cells that are unstimulated, translation of the EGR1 gene into protein at one hour after stimulation is drastically elevated. Production of EGR1 transcription factor proteins, in various types of cells, can be stimulated by growth factors, neurotransmitters, hormones, stress and injury. In the brain, when neurons are activated, EGR1 proteins are up-regulated and they bind to (recruit) the pre-existing TET1 enzymes that are produced in high amounts in neurons. TET enzymes can catalyse demethylation of 5-methylcytosine. When EGR1 transcription factors bring TET1 enzymes to EGR1 binding sites in promoters, the TET enzymes can demethylate the methylated CpG islands at those promoters. Upon demethylation, these promoters can then initiate transcription of their target genes. Hundreds of genes in neurons are differentially expressed after neuron activation through EGR1 recruitment of TET1 to methylated regulatory sequences in their promoters. The methylation of promoters is also altered in response to signals. The three mammalian DNA methyltransferases (DNMT1, DNMT3A, and DNMT3B) catalyze the addition of methyl groups to cytosines in DNA. While DNMT1 is a maintenance methyltransferase, DNMT3A and DNMT3B can carry out new methylations. There are also two splice protein isoforms produced from the DNMT3A gene: DNA methyltransferase proteins DNMT3A1 and DNMT3A2.
The splice isoform DNMT3A2 behaves like the product of a classical immediate-early gene and, for instance, it is robustly and transiently produced after neuronal activation. Where the DNA methyltransferase isoform DNMT3A2 binds and adds methyl groups to cytosines appears to be determined by histone post-translational modifications. On the other hand, neural activation causes degradation of DNMT3A1 accompanied by reduced methylation of at least one evaluated targeted promoter. Initiation Transcription begins with the RNA polymerase and one or more general transcription factors binding to a DNA promoter sequence to form an RNA polymerase-promoter closed complex. In the closed complex, the promoter DNA is still fully double-stranded. RNA polymerase, assisted by one or more general transcription factors, then unwinds approximately 14 base pairs of DNA to form an RNA polymerase-promoter open complex. In the open complex, the promoter DNA is partly unwound and single-stranded. The exposed, single-stranded DNA is referred to as the "transcription bubble". RNA polymerase, assisted by one or more general transcription factors, then selects a transcription start site in the transcription bubble, binds to an initiating NTP and an extending NTP (or a short RNA primer and an extending NTP) complementary to the transcription start site sequence, and catalyzes bond formation to yield an initial RNA product. In bacteria, RNA polymerase holoenzyme consists of five subunits: 2 α subunits, 1 β subunit, 1 β' subunit, and 1 ω subunit. In bacteria, there is one general RNA transcription factor known as a sigma factor. RNA polymerase core enzyme binds to the bacterial general transcription (sigma) factor to form RNA polymerase holoenzyme and then binds to a promoter. (RNA polymerase is called a holoenzyme when the sigma subunit is attached to the core enzyme, which consists of 2 α subunits, 1 β subunit and 1 β' subunit only). Unlike eukaryotes, the initiating nucleotide of nascent bacterial mRNA is not capped with a modified guanine nucleotide. The initiating nucleotide of bacterial transcripts bears a 5′ triphosphate (5′-PPP), which can be used for genome-wide mapping of transcription initiation sites. In archaea and eukaryotes, RNA polymerase contains subunits homologous to each of the five RNA polymerase subunits in bacteria and also contains additional subunits. In archaea and eukaryotes, the functions of the bacterial general transcription factor sigma are performed by multiple general transcription factors that work together. In archaea, there are three general transcription factors: TBP, TFB, and TFE. In eukaryotes, in RNA polymerase II-dependent transcription, there are six general transcription factors: TFIIA, TFIIB (an ortholog of archaeal TFB), TFIID (a multisubunit factor in which the key subunit, TBP, is an ortholog of archaeal TBP), TFIIE (an ortholog of archaeal TFE), TFIIF, and TFIIH. TFIID is the first component to bind to DNA due to the binding of TBP, while TFIIH is the last component to be recruited. In archaea and eukaryotes, the RNA polymerase-promoter closed complex is usually referred to as the "preinitiation complex". Transcription initiation is regulated by additional proteins, known as activators and repressors, and, in some cases, associated coactivators or corepressors, which modulate formation and function of the transcription initiation complex. Promoter escape After the first bond is synthesized, the RNA polymerase must escape the promoter.
During this time there is a tendency to release the RNA transcript and produce truncated transcripts. This is called abortive initiation, and is common for both eukaryotes and prokaryotes. Abortive initiation continues to occur until an RNA product of a threshold length of approximately 10 nucleotides is synthesized, at which point promoter escape occurs and a transcription elongation complex is formed. Mechanistically, promoter escape occurs through DNA scrunching, providing the energy needed to break interactions between RNA polymerase holoenzyme and the promoter. In bacteria, it was historically thought that the sigma factor is definitely released after promoter clearance occurs. This theory had been known as the obligate release model. However, later data showed that upon and following promoter clearance, the sigma factor is released according to a stochastic model known as the stochastic release model. In eukaryotes, at an RNA polymerase II-dependent promoter, upon promoter clearance, TFIIH phosphorylates serine 5 on the carboxy terminal domain of RNA polymerase II, leading to the recruitment of capping enzyme (CE). The exact mechanism of how CE induces promoter clearance in eukaryotes is not yet known. Elongation One strand of the DNA, the template strand (or noncoding strand), is used as a template for RNA synthesis. As transcription proceeds, RNA polymerase traverses the template strand and uses base pairing complementarity with the DNA template to create an RNA copy (which elongates during the traversal). Although RNA polymerase traverses the template strand from 3' → 5', the coding (non-template) strand and newly formed RNA can also be used as reference points, so transcription can be described as occurring 5' → 3'. This produces an RNA molecule from 5' → 3', an exact copy of the coding strand (except that thymines are replaced with uracils, and the nucleotides are composed of a ribose (5-carbon) sugar whereas DNA has deoxyribose (one fewer oxygen atom) in its sugar-phosphate backbone). mRNA transcription can involve multiple RNA polymerases on a single DNA template and multiple rounds of transcription (amplification of particular mRNA), so many mRNA molecules can be rapidly produced from a single copy of a gene. The characteristic elongation rates in prokaryotes and eukaryotes are about 10–100 nts/sec. In eukaryotes, however, nucleosomes act as major barriers to transcribing polymerases during transcription elongation. In these organisms, the pausing induced by nucleosomes can be regulated by transcription elongation factors such as TFIIS. Elongation also involves a proofreading mechanism that can replace incorrectly incorporated bases. In eukaryotes, this may correspond with short pauses during transcription that allow appropriate RNA editing factors to bind. These pauses may be intrinsic to the RNA polymerase or due to chromatin structure. Double-strand breaks in actively transcribed regions of DNA are repaired by homologous recombination during the S and G2 phases of the cell cycle. Since transcription enhances the accessibility of DNA to exogenous chemicals and internal metabolites that can cause recombinogenic lesions, homologous recombination of a particular DNA sequence may be strongly stimulated by transcription. Termination Bacteria use two different strategies for transcription termination – Rho-independent termination and Rho-dependent termination. 
In Rho-independent transcription termination, RNA transcription stops when the newly synthesized RNA molecule forms a G-C-rich hairpin loop followed by a run of Us. When the hairpin forms, the mechanical stress breaks the weak rU-dA bonds, now filling the DNA–RNA hybrid. This pulls the poly-U transcript out of the active site of the RNA polymerase, terminating transcription. In Rho-dependent termination, Rho, a protein factor, destabilizes the interaction between the template and the mRNA, thus releasing the newly synthesized mRNA from the elongation complex. Transcription termination in eukaryotes is less well understood than in bacteria, but involves cleavage of the new transcript followed by template-independent addition of adenines at its new 3' end, in a process called polyadenylation. Beyond termination by a terminator sequence (which is a part of a gene), transcription may also need to be terminated when it encounters conditions such as DNA damage or an active replication fork. In bacteria, the Mfd ATPase can remove an RNA polymerase stalled at a lesion by prying open its clamp. It also recruits nucleotide excision repair machinery to repair the lesion. Mfd is proposed to also resolve conflicts between DNA replication and transcription. In eukaryotes, the ATPase TTF2 helps to suppress the action of RNAP I and II during mitosis, preventing errors in chromosomal segregation. In archaea, the Eta ATPase is proposed to play a similar role. Transcription increases susceptibility to DNA damage Genome damage occurs with a high frequency, with an estimated tens of thousands to hundreds of thousands of DNA damages arising in each cell every day. The process of transcription is a major source of DNA damage, due to the formation of single-strand DNA intermediates that are vulnerable to damage. The regulation of transcription by processes using base excision repair and/or topoisomerases to cut and remodel the genome also increases the vulnerability of DNA to damage. Role of RNA polymerase in post-transcriptional changes in RNA RNA polymerase plays a crucial role in all steps, including post-transcriptional changes in RNA. As shown in the image on the right, the CTD (C-terminal domain) is a tail that changes its shape; this tail is used as a carrier of splicing, capping and polyadenylation, as shown in the image on the left. Inhibitors Transcription inhibitors can be used as antibiotics against, for example, pathogenic bacteria (antibacterials) and fungi (antifungals). An example of such an antibacterial is rifampicin, which inhibits bacterial transcription of DNA into mRNA by inhibiting DNA-dependent RNA polymerase by binding its beta-subunit, while 8-hydroxyquinoline is an antifungal transcription inhibitor. The effects of histone methylation may also work to inhibit the action of transcription. Potent, bioactive natural products like triptolide, which inhibit mammalian transcription via inhibition of the XPB subunit of the general transcription factor TFIIH, have recently been reported as glucose conjugates for targeting hypoxic cancer cells with increased glucose transporter production. Endogenous inhibitors In vertebrates, the majority of gene promoters contain a CpG island with numerous CpG sites. When many of a gene's promoter CpG sites are methylated the gene becomes inhibited (silenced). Colorectal cancers typically have 3 to 6 driver mutations and 33 to 66 hitchhiker or passenger mutations.
However, transcriptional inhibition (silencing) may be of more importance than mutation in causing progression to cancer. For example, in colorectal cancers about 600 to 800 genes are transcriptionally inhibited by CpG island methylation (see regulation of transcription in cancer). Transcriptional repression in cancer can also occur by other epigenetic mechanisms, such as altered production of microRNAs. In breast cancer, transcriptional repression of BRCA1 may occur more frequently by over-produced microRNA-182 than by hypermethylation of the BRCA1 promoter (see Low expression of BRCA1 in breast and ovarian cancers). Transcription factories Active transcription units are clustered in the nucleus, in discrete sites called transcription factories or euchromatin. Such sites can be visualized by allowing engaged polymerases to extend their transcripts in tagged precursors (Br-UTP or Br-U) and immuno-labeling the tagged nascent RNA. Transcription factories can also be localized using fluorescence in situ hybridization or marked by antibodies directed against polymerases. There are ~10,000 factories in the nucleoplasm of a HeLa cell, among which are ~8,000 polymerase II factories and ~2,000 polymerase III factories. Each polymerase II factory contains ~8 polymerases. As most active transcription units are associated with only one polymerase, each factory usually contains ~8 different transcription units. These units might be associated through promoters and/or enhancers, with loops forming a "cloud" around the factor. History A molecule that allows the genetic material to be realized as a protein was first hypothesized by François Jacob and Jacques Monod. Severo Ochoa won a Nobel Prize in Physiology or Medicine in 1959 for developing a process for synthesizing RNA in vitro with polynucleotide phosphorylase, which was useful for cracking the genetic code. RNA synthesis by RNA polymerase was established in vitro by several laboratories by 1965; however, the RNA synthesized by these enzymes had properties that suggested the existence of an additional factor needed to terminate transcription correctly. Roger D. Kornberg won the 2006 Nobel Prize in Chemistry "for his studies of the molecular basis of eukaryotic transcription". Measuring and detecting Transcription can be measured and detected in a variety of ways: G-Less Cassette transcription assay: measures promoter strength Run-off transcription assay: identifies transcription start sites (TSS) Nuclear run-on assay: measures the relative abundance of newly formed transcripts KAS-seq: measures single-stranded DNA generated by RNA polymerases; can work with 1,000 cells. RNase protection assay and ChIP-Chip of RNAP: detect active transcription sites RT-PCR: measures the absolute abundance of total or nuclear RNA levels, which may however differ from transcription rates DNA microarrays: measures the relative abundance of the global total or nuclear RNA levels; however, these may differ from transcription rates In situ hybridization: detects the presence of a transcript MS2 tagging: by incorporating RNA stem loops, such as MS2, into a gene, these become incorporated into newly synthesized RNA. The stem loops can then be detected using a fusion of GFP and the MS2 coat protein, which has a high affinity, sequence-specific interaction with the MS2 stem loops. The recruitment of GFP to the site of transcription is visualized as a single fluorescent spot. 
This new approach has revealed that transcription occurs in discontinuous bursts, or pulses (see Transcriptional bursting). With the notable exception of in situ techniques, most other methods provide cell population averages, and are not capable of detecting this fundamental property of genes. Northern blot: the traditional method, and until the advent of RNA-Seq, the most quantitative RNA-Seq: applies next-generation sequencing techniques to sequence whole transcriptomes, which allows the measurement of relative abundance of RNA, as well as the detection of additional variations such as fusion genes, post-transcriptional edits and novel splice sites Single cell RNA-Seq: amplifies and reads partial transcriptomes from isolated cells, allowing for detailed analyses of RNA in tissues, embryos, and cancers Reverse transcription Some viruses (such as HIV, the cause of AIDS) have the ability to transcribe RNA into DNA. HIV has an RNA genome that is reverse transcribed into DNA. The resulting DNA can be merged with the DNA genome of the host cell. The main enzyme responsible for synthesis of DNA from an RNA template is called reverse transcriptase. In the case of HIV, reverse transcriptase is responsible for synthesizing a complementary DNA strand (cDNA) to the viral RNA genome. The enzyme ribonuclease H then digests the RNA strand, and reverse transcriptase synthesizes a complementary strand of DNA to form a double helix DNA structure (cDNA). The cDNA is integrated into the host cell's genome by the enzyme integrase, which causes the host cell to generate viral proteins that reassemble into new viral particles. In HIV, subsequent to this, the host T cell undergoes programmed cell death, or apoptosis. However, in other retroviruses, the host cell remains intact as the virus buds out of the cell. Some eukaryotic cells contain an enzyme with reverse transcription activity called telomerase. Telomerase carries an RNA template from which it synthesizes a telomere, a repeating sequence of DNA, onto the ends of linear chromosomes. It is important because every time a linear chromosome is duplicated, it is shortened. With the telomere at the ends of chromosomes, the shortening eliminates some of the non-essential, repeated sequence, rather than the protein-encoding DNA sequence farther away from the chromosome end. Telomerase is often activated in cancer cells to enable cancer cells to duplicate their genomes indefinitely without losing important protein-coding DNA sequence. Activation of telomerase could be part of the process that allows cancer cells to become immortal. Telomere lengthening by telomerase, an immortalizing factor in cancer, has been proven to occur in 90% of all carcinogenic tumors in vivo, with the remaining 10% using an alternative telomere maintenance route called ALT, or Alternative Lengthening of Telomeres. See also Life Cell (biology) Cell division DBTSS Gene Gene regulation Epigenetics Genome Long non-coding RNA Missense mRNA Splicing – process of removing introns from precursor messenger RNA (pre-mRNA) to make messenger RNA (mRNA) Transcriptomics Translation (biology) Notes References External links Interactive Java simulation of transcription initiation. From Center for Models of Life at the Niels Bohr Institute. Interactive Java simulation of transcription interference—a game of promoter dominance in bacterial virus. From Center for Models of Life at the Niels Bohr Institute.
Virtual Cell Animation Collection, Introducing Transcription Gene expression Molecular biology Cellular processes
Transcription (biology)
[ "Chemistry", "Biology" ]
5,937
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
167,570
https://en.wikipedia.org/wiki/Microarray
A microarray is a multiplex lab-on-a-chip. Its purpose is to detect thousands of biological interactions, such as gene expression, simultaneously. It is a two-dimensional array on a solid substrate—usually a glass slide or silicon thin-film cell—that assays (tests) large amounts of biological material using high-throughput, miniaturized, multiplexed and parallel processing and detection methods. The concept and methodology of microarrays were first introduced and illustrated in antibody microarrays (also referred to as antibody matrix) by Tse Wen Chang in 1983 in a scientific publication and a series of patents. The "gene chip" industry started to grow significantly after the 1995 Science magazine article by the Ron Davis and Pat Brown labs at Stanford University. With the establishment of companies, such as Affymetrix, Agilent, Applied Microarrays, Arrayjet, Illumina, and others, the technology of DNA microarrays has become the most sophisticated and the most widely used, while the use of protein, peptide and carbohydrate microarrays is expanding. Types of microarrays include: DNA microarrays, such as cDNA microarrays, oligonucleotide microarrays, BAC microarrays and SNP microarrays MMChips, for surveillance of microRNA populations Protein microarrays Peptide microarrays, for detailed analyses or optimization of protein–protein interactions Tissue microarrays Cellular microarrays (also called transfection microarrays) Chemical compound microarrays Antibody microarrays Glycan arrays (carbohydrate arrays) Phenotype microarrays Reverse phase protein lysate microarrays, microarrays of lysates or serum Interferometric reflectance imaging sensor (IRIS) People in the field of CMOS biotechnology are developing new kinds of microarrays. Once fed magnetic nanoparticles, individual cells can be moved independently and simultaneously on a microarray of magnetic coils. A microarray of nuclear magnetic resonance microcoils is under development. Fabrication and operation of microarrays A large number of technologies underlie the microarray platform, including the material substrates, spotting of biomolecular arrays, and the microfluidic packaging of the arrays. Microarrays can be categorized by how they physically isolate each element of the array: by spotting (making small physical wells), on-chip synthesis (synthesizing the target DNA probes adhered directly on the array), or bead-based methods (adhering samples to barcoded beads randomly distributed across the array). Production process The initial publication on the microarray production process dates back to 1995, when 48 cDNAs of a plant were printed on a glass slide of the kind typically used for light microscopy; modern microarrays, on the other hand, now include thousands of probes and use different carriers with coatings. The fabrication of a microarray requires both biological and physical resources, including sample libraries, printers, and slide substrates, though all procedures and solutions depend on the fabrication technique employed. The basic principle of the microarray is the printing of small spots of solutions containing different probe species on a slide, several thousand times. Modern printers are HEPA-filtered and have controlled humidity and temperature surroundings, typically around 25 °C and 50% humidity. Early microarrays were directly printed onto the surface by using printer pins which deposit the samples in a user-defined pattern on the slide.
Modern methods are faster, generate less cross-contamination, and produce better spot morphology. The surface onto which the probes are printed must be clean and dust-free and, for high-density microarrays, hydrophobic. Slide coatings include poly-L-lysine, aminosilane, epoxy and others, including manufacturers' proprietary solutions, and are chosen based on the type of sample used. Ongoing efforts to advance microarray technology aim to create uniform, dense arrays while reducing the necessary volume of solution and minimizing contamination or damage. For the manufacturing process, a sample library which contains all relevant information is needed. In the early stages of microarray technology, the sole sample used was DNA, obtained from commonly available clone libraries and acquired through DNA amplification via bacterial vectors. Modern approaches are no longer limited to DNA samples, but also use proteins, antibodies, antigens, glycans, cell lysates and other small molecules. All samples used are presynthesized, regularly updated, and more straightforward to maintain. Array fabrication techniques include contact printing, lithography, non-contact printing and cell-free methods. Contact printing Contact printing methods include pin printing, microstamping and flow printing. Pin printing is the oldest and still most widely adopted methodology in DNA microarray contact printing. This technique uses pin types such as solid, split or quill pins to load and deliver the sample solution directly onto solid microarray surfaces. Microstamping offers an alternative to the commonly used pin printing and is also referred to as soft lithography, a term that covers a family of related pattern-transfer technologies using patterned polymer substrates, of which microstamping is the most prominent. In contrast to pin printing, microstamping is a more parallel deposition method with less flexibility for individual spots: stamps are loaded with reagents and transfer the same reagent pattern identically each time. Lithography Lithography encompasses several methods, including photolithography, interference lithography, laser writing, electron-beam lithography and dip-pen lithography. The most widely used and researched method remains photolithography, in which photolithographic masks are used to target specific nucleotides to the surface. UV light is passed through a mask that acts as a filter, either transmitting or blocking the light from reaching the chemically protected microarray surface. If the UV light has been blocked, the area will remain protected from the addition of nucleotides, whereas in areas which were exposed to UV light, further nucleotides can be added. With this method, high-quality custom arrays can be produced with a very high density of DNA features by using a compact device with few moving parts. Non-contact Non-contact printing methods include photochemistry-based printing, electro-printing and droplet dispensing. In contrast to the other methods, non-contact printing does not involve contact between the surface and the stamp, pin, or other dispenser. The main advantages are reduced contamination, less cleaning and higher throughput, which continues to increase. Many of the methods are able to load the probes in parallel, allowing multiple arrays to be produced simultaneously. Cell free In cell-free systems, transcription and translation are carried out in situ, making the cloning and expression of proteins in host cells unnecessary because no intact cells are needed. The molecule of interest is synthesized directly onto the solid surface.
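Returning to the photolithographic approach described above, its mask-based logic can be sketched in a few lines; this is only a toy model under stated assumptions (the masks, positions and resulting sequences below are hypothetical), not an instrument protocol. Each synthesis cycle couples one nucleotide, and a binary mask determines which positions are deprotected by UV light and therefore receive that nucleotide.

```python
# Toy model (assumptions only): mask-based photolithographic on-chip synthesis.
# In each cycle, only the UV-exposed (deprotected) positions receive the
# nucleotide coupled in that cycle, so different probe sequences grow in
# parallel at different array positions.
def synthesize(n_positions, cycles):
    """cycles: list of (base, exposed_positions) pairs; returns the grown sequences."""
    probes = ["" for _ in range(n_positions)]
    for base, exposed in cycles:
        for pos in exposed:
            probes[pos] += base   # coupling happens only where the mask let UV through
    return probes

# Four positions and four hypothetical mask cycles.
cycles = [("A", {0, 1}), ("C", {1, 2, 3}), ("G", {0, 2}), ("T", {0, 1, 2, 3})]
print(synthesize(4, cycles))   # ['AGT', 'ACT', 'CGT', 'CT']
```

Real arrays repeat many such cycles, in principle up to four mask exposures (one per base) for each position added to the sequence, to build probes tens of nucleotides long.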
Cell-free assays of this kind allow high-throughput analysis in a controlled environment without the interference associated with intact cells. See also Microarray databases Microarray analysis techniques DNA Microarray Biochip Notes
Microarray
[ "Chemistry", "Materials_science", "Biology" ]
1,481
[ "Biochemistry methods", "Genetics techniques", "Microtechnology", "Microarrays", "Bioinformatics", "Molecular biology techniques" ]
167,578
https://en.wikipedia.org/wiki/Chemical%20weapons%20in%20World%20War%20I
The use of toxic chemicals as weapons dates back thousands of years, but the first large-scale use of chemical weapons was during World War I. They were primarily used to demoralize, injure, and kill entrenched defenders, against whom the indiscriminate and generally very slow-moving or static nature of gas clouds would be most effective. The types of weapons employed ranged from disabling chemicals, such as tear gas, to lethal agents like phosgene, chlorine, and mustard gas. These chemical weapons caused medical problems. This chemical warfare was a major component of the first global war and first total war of the 20th century. Gas attack left a strong psychological impact, and estimates go up to about 90,000 fatalities and a total of about 1.3 million casualties. However, this would amount to only 3-3.5% of overall casualties, and gas was unlike most other weapons of the period because it was possible to develop countermeasures, such as gas masks. In the later stages of the war, as the use of gas increased, its overall effectiveness diminished. The widespread use of these agents of chemical warfare, and wartime advances in the composition of high explosives, gave rise to an occasionally expressed view of World War I as "the chemist's war" and also the era where weapons of mass destruction were created. The use of poison gas by all major belligerents throughout World War I constituted war crimes as its use violated the 1899 Hague Declaration Concerning Asphyxiating Gases and the 1907 Hague Convention on Land Warfare, which prohibited the use of "poison or poisoned weapons" in warfare. Widespread horror and public revulsion at the use of gas and its consequences led to far less use of chemical weapons by combatants during World War II. Use of poison gas 1914: Tear gas The most frequently used chemicals during World War I were tear-inducing irritants rather than fatal or disabling poison. During World War I, the French Army was the first to employ tear gas, using 26 mm grenades filled with ethyl bromoacetate in August 1914. The small quantities of gas delivered, roughly per cartridge, were not even detected by the Germans. The stocks were rapidly consumed and by November a new order was placed by the French military. As bromine was scarce among the Entente allies, the active ingredient was changed to chloroacetone. In October 1914, German troops fired fragmentation shells filled with a chemical irritant against British positions at Neuve Chapelle; the concentration achieved was so small that it too was barely noticed. None of the combatants considered the use of tear gas to be in conflict with the Hague Treaty of 1899, which specifically prohibited the launching of projectiles containing asphyxiating or poisonous gas. 1915: Large-scale use and lethal gases The first instance of large-scale use of gas as a weapon was on 31 January 1915, when Germany fired 18,000 artillery shells containing liquid xylyl bromide tear gas on Russian positions on the Rawka River, west of Warsaw during the Battle of Bolimov. Instead of vaporizing, the chemical froze and failed to have the desired effect. The first killing agent was chlorine, used by the German military. Chlorine is a powerful irritant that can inflict damage to the eyes, nose, throat and lungs. At high concentrations and prolonged exposure it can cause death by asphyxiation. German chemical companies BASF, Hoechst and Bayer (which formed the IG Farben conglomerate in 1925) had been making chlorine as a by-product of their dye manufacturing. 
In cooperation with Fritz Haber of the Kaiser Wilhelm Institute for Chemistry in Berlin, they began developing methods of discharging chlorine gas against enemy trenches. It may appear from a feldpost letter of Major Karl von Zingler that the first chlorine gas attack by German forces took place before 2 January 1915: "In other war theatres it does not go better and it has been said that our Chlorine is very effective. 140 English officers have been killed. This is a horrible weapon ...". This letter must be discounted as evidence for early German use of chlorine, however, because the date "2 January 1915" may have been hastily scribbled instead of the intended "2 January 1916," the sort of common typographical error that is often made at the beginning of a new year. The deaths of so many English officers from gas at this time would certainly have been met with outrage, but a recent, extensive study of British reactions to chemical warfare says nothing of this supposed attack. Perhaps this letter was referring to the chlorine-phosgene attack on British troops at Wieltje near Ypres, on 19 December 1915 (see below). By 22 April 1915, the German Army had 167 tons of chlorine deployed in 5,730 cylinders from Langemark-Poelkapelle, north of Ypres. At 17:30, in a slight easterly breeze, the liquid chlorine was siphoned from the tanks, producing gas which formed a grey-green cloud that drifted across positions held by troops of the 45th Infantry Division (France), specifically the 1st Tirailleurs and the 2nd Zouaves from Algeria. Faced with an unfamiliar threat these troops broke ranks, abandoning their trenches and creating an 8,000-yard (7 km) gap in the Allied line. The German infantry were also wary of the gas and, lacking reinforcements, failed to exploit the break before the 1st Canadian Division and assorted French troops reformed the line in scattered, hastily prepared positions apart. The Entente governments claimed the attack was a flagrant violation of international law but Germany argued that the Hague treaty had only banned chemical shells, rather than the use of gas projectors. In what became the Second Battle of Ypres, the Germans used gas on three more occasions; on 24 April against the 1st Canadian Division, on 2 May near Mouse Trap Farm and on 5 May against the British at Hill 60. The British Official History stated that at Hill 60, "90 men died from gas poisoning in the trenches or before they could be got to a dressing station; of the 207 brought to the nearest dressing stations, 46 died almost immediately and 12 after long suffering." On 6 August, German troops under Field Marshal Paul von Hindenburg used chlorine gas against Russian troops defending Osowiec Fortress. Surviving defenders drove back the attack and retained the fortress. The event would later be called the Attack of the Dead Men. Germany used chemical weapons on the Eastern Front in an attack at Rawka (river), west of Warsaw. The Russian Army took 9,000 casualties, with more than 1,000 fatalities. In response, the artillery branch of the Russian Army organised a commission to study the delivery of poison gas in shells. Effectiveness and countermeasures It quickly became evident that the men who stayed in their places suffered less than those who ran away, as any movement worsened the effects of the gas, and that those who stood up on the fire step suffered less—indeed they often escaped any serious effects—than those who lay down or sat at the bottom of a trench. 
Men who stood on the parapet suffered least, as the gas was denser near the ground. The worst sufferers were the wounded lying on the ground, or on stretchers, and the men who moved back with the cloud. Chlorine was less effective as a weapon than the Germans had hoped, particularly as soon as simple countermeasures were introduced. The gas produced a visible greenish cloud and strong odour, making it easy to detect. It was water-soluble, so the simple expedient of covering the mouth and nose with a damp cloth was effective at reducing the effect of the gas. It was thought to be even more effective to use urine rather than water, as it was known at the time that chlorine reacted with urea (present in urine) to form dichloro urea. Chlorine required a concentration of 1,000 parts per million to be fatal, destroying tissue in the lungs, likely through the formation of hypochlorous and hydrochloric acids when dissolved in the water in the lungs. Despite its limitations, chlorine was an effective psychological weapon—the sight of an oncoming cloud of the gas was a continual source of dread for the infantry. Countermeasures were quickly introduced in response to the use of chlorine. The Germans issued their troops with small gauze pads filled with cotton waste, and bottles of a bicarbonate solution with which to dampen the pads. Immediately following the use of chlorine gas by the Germans, instructions were sent to British and French troops to hold wet handkerchiefs or cloths over their mouths. Simple pad respirators similar to those issued to German troops were soon proposed by Lieutenant-Colonel N. C. Ferguson, the Assistant Director Medical Services of the 28th Division. These pads were intended to be used damp, preferably dipped into a solution of bicarbonate kept in buckets for that purpose; other liquids were also used. Because such pads could not be expected to arrive at the front for several days, army divisions set about making them for themselves. Locally available muslin, flannel and gauze were used, officers were sent to Paris to buy more and local French women were employed making up rudimentary pads with string ties. Other units used lint bandages manufactured in the convent at Poperinge. Pad respirators were sent up with rations to British troops in the line as early as the evening of 24 April. In Britain the Daily Mail newspaper encouraged women to manufacture cotton pads, and within one month a variety of pad respirators were available to British and French troops, along with motoring goggles to protect the eyes. The response was enormous and a million gas masks were produced in a day. The Mails design was useless when dry and caused suffocation when wet—the respirator was responsible for the deaths of scores of men. By 6 July 1915, the entire British army was equipped with the more effective "smoke helmet" designed by Major Cluny MacPherson, Newfoundland Regiment, which was a flannel bag with a celluloid window, which entirely covered the head. The race was then on between the introduction of new and more effective poison gases and the production of effective countermeasures, which marked gas warfare until the armistice in November 1918. British gas attacks The British expressed outrage at Germany's use of poison gas at Ypres and responded by developing their own gas warfare capability. The commander of II Corps, Lieutenant General Sir Charles Ferguson, said of gas: The first use of gas by the British was at the Battle of Loos, 25 September 1915, but the attempt was a disaster. 
Chlorine, codenamed Red Star, was the agent to be used (140 tons arrayed in 5,100 cylinders), and the attack was dependent on a favourable wind. On this occasion the wind proved fickle, and the gas either lingered in no man's land or, in places, blew back on the British trenches. This was compounded when the gas could not be released from all the British canisters because the wrong turning keys were sent with them. Subsequent retaliatory German shelling hit some of those unused full cylinders, releasing gas among the British troops. Exacerbating the situation were the primitive flannel gas masks distributed to the British. The masks got hot, and the small eye-pieces misted over, reducing visibility. Some of the troops lifted the masks to get fresh air, causing them to be gassed. 1915: More deadly gases The deficiencies of chlorine were overcome with the introduction of phosgene, which was prepared by a group of French chemists led by Victor Grignard and first used by France in 1915. Colourless and having an odour likened to "mouldy hay," phosgene was difficult to detect, making it a more effective weapon. Phosgene was sometimes used on its own, but was more often used mixed with an equal volume of chlorine, with the chlorine helping to spread the denser phosgene. The Allies called this combination White Star after the marking painted on shells containing the mixture. German phosgene came in the form of diphosgene, codenamed Grün Kreuz (Green cross). This was less effective than its allied counterpart, being less toxic and slower to evaporate, but was easier to handle in shell manufacture early in the war. Phosgene was a potent killing agent, deadlier than chlorine. It had a potential drawback in that some of the symptoms of exposure took 24 hours or more to manifest. This meant that the victims were initially still capable of putting up a fight; this could also mean that apparently fit troops would be incapacitated by the effects of the gas on the following day. In the first combined chlorine–phosgene attack by Germany, against British troops at Wieltje near Ypres, Belgium on 19 December 1915, 88 tons of the gas were released from cylinders causing 1069 casualties and 69 deaths. The British P gas helmet, issued at the time, was impregnated with sodium phenolate and partially effective against phosgene. The modified PH Gas Helmet, which was impregnated with phenate hexamine and hexamethylene tetramine (urotropine) to improve the protection against phosgene, was issued in January 1916. Around 36,600 tons of phosgene were manufactured during the war, out of a total of 190,000 tons for all chemical weapons, making it second only to chlorine (93,800 tons) in the quantity manufactured: Germany 18,100 tons France 15,700 tons United Kingdom 1,400 tons (also used French stocks) United States 1,400 tons (also used French stocks) 1916: Austrian use On 29 June 1916, the Austro-Hungarian Army attacked the Royal Italian Army's Brigade "Ferrara" on Monte San Michele with a mix of phosgene and chlorine gas. Thousands of Italian soldiers died in this first chemical weapons attack on the Italian Front. 1917: Mustard gas The most widely reported chemical agent of the First World War was mustard gas. Despite the name it is not a gas but a volatile oily liquid, and is dispersed as a fine mist of liquid droplets. It was introduced as a vesicant by Germany on July 12, 1917, weeks prior to the Third Battle of Ypres. 
The Germans marked their shells yellow for mustard gas and green for chlorine and phosgene; hence they called the new gas Yellow Cross. It was known to the British as HS (Hun Stuff), and the French called it Yperite (named after Ypres). Mustard gas is not an effective killing agent (though in high enough doses it is fatal) but can be used to harass and disable the enemy and pollute the battlefield. Delivered in artillery shells, mustard gas was heavier than air, and it settled to the ground as an oily liquid. Once in the soil, mustard gas remained active for several days, weeks, or even months, depending on the weather conditions. The skin of victims of mustard gas blistered, their eyes became very sore and they began to vomit. Mustard gas caused internal and external bleeding and attacked the bronchial tubes, stripping off the mucous membrane. This was extremely painful. Fatally injured victims sometimes took four or five weeks to die of mustard gas exposure. One nurse, Vera Brittain, wrote: "I wish those people who talk about going on with this war whatever it costs could see the soldiers suffering from mustard gas poisoning. Great mustard-coloured blisters, blind eyes, all sticky and stuck together, always fighting for breath, with voices a mere whisper, saying that their throats are closing and they know they will choke." The polluting nature of mustard gas meant that it was not always suitable for supporting an attack as the assaulting infantry would be exposed to the gas when they advanced. When Germany launched Operation Michael on 21 March 1918, they saturated the Flesquières salient with mustard gas instead of attacking it directly, believing that the harassing effect of the gas, coupled with threats to the salient's flanks, would make the British position untenable. Gas never reproduced the dramatic success of 22 April 1915; it became a standard weapon which, combined with conventional artillery, was used to support most attacks in the later stages of the war. Gas was employed primarily on the Western Front—the static, confined trench system was ideal for achieving an effective concentration. Germany also used gas against Russia on the Eastern Front, where the lack of effective countermeasures resulted in deaths of over 56,000 Russians, while Britain experimented with gas in Palestine during the Second Battle of Gaza. Russia began manufacturing chlorine gas in 1916, with phosgene being produced later in the year. Most of the manufactured gas was never used. The British Army first used mustard gas in November 1917 at Cambrai, after their armies had captured a stockpile of German mustard gas shells. It took the British more than a year to develop their own mustard gas weapon, with production of the chemicals centred on Avonmouth Docks. (The only option available to the British was the Despretz–Niemann–Guthrie process.) This was used first in September 1918 during the breaking of the Hindenburg Line with the Hundred Days' Offensive. The Allies mounted more gas attacks than the Germans in 1917 and 1918 because of a marked increase in production of gas from the Allied nations. Germany was unable to keep up with this pace despite creating various new gases for use in battle, mostly as a result of very costly methods of production. Entry into the war by the United States allowed the Allies to increase mustard gas production far more than Germany. 
Also the prevailing wind on the Western Front was blowing from west to east, which meant the Allies more frequently had favourable conditions for a gas release than did the Germans. When the United States entered the war, it was already mobilizing resources from academic, industry and military sectors for research and development into poison gas. A Subcommittee on Noxious Gases was created by the National Research Committee, a major research centre was established at Camp American University, and the 1st Gas Regiment was recruited. The 1st Gas Regiment eventually served in France, where it used phosgene gas in several attacks. The Artillery used mustard gas with significant effect during the Meuse-Argonne Offensive on at least three occasions. The United States began large-scale production of an improved vesicant gas known as Lewisite, for use in an offensive planned for early 1919. By the time of the armistice on 11 November, a plant near Willoughby, Ohio was producing 10 tons per day of the substance, for a total of about 150 tons. It is uncertain what effect this new chemical would have had on the battlefield, as it degrades in moist conditions. Post-war By the end of the war, chemical weapons had lost much of their effectiveness against well trained and equipped troops. By that time, chemical weapon agents had inflicted an estimated 1.3 million casualties. Nevertheless, in the following years, chemical weapons were used in several, mainly colonial, wars where one side had an advantage in equipment over the other. The British used poison gas, possibly adamsite, against Russian revolutionary troops beginning on 27 August 1919 and contemplated using chemical weapons against Iraqi insurgents in the 1920s; Bolshevik troops used poison gas to suppress the Tambov Rebellion in 1920, Spain used chemical weapons in Morocco against Rif tribesmen throughout the 1920s and Italy used mustard gas in Libya in 1930 and again during its invasion of Ethiopia in 1936. In 1925, a Chinese warlord, Zhang Zuolin, contracted a German company to build him a mustard gas plant in Shenyang, which was completed in 1927. Public opinion had by then turned against the use of such weapons which led to the Geneva Protocol, an updated and extensive prohibition of poison weapons. The Protocol, which was signed by most First World War combatants in 1925, bans the use (but not the stockpiling or production) of lethal gas and bacteriological weapons among signatories in international armed conflicts. Most countries that signed ratified it within around five years; a few took much longer—Brazil, Japan, Uruguay, and the United States did not do so until the 1970s, and Nicaragua ratified it in 1990. The signatory nations agreed not to use poison gas against each other in the future, both stating "the use in war of asphyxiating, poisonous or other gases, and of all analogous liquids, materials or devices, has been justly condemned by the general opinion of the civilized world" and "the High Contracting Parties ... agree to be bound as between themselves according to the terms of this declaration." Chemical weapons have been used in at least a dozen wars since the end of the First World War; they were not used in combat on a large scale until Iraq used mustard gas and the more deadly nerve agents in the Halabja chemical attack near the end of the eight-year Iran–Iraq War. 
The full conflict's use of such weaponry killed around 20,000 Iranian troops (and injured another 80,000), around a quarter of the number of deaths caused by chemical weapons during the First World War. The Geneva Protocol, 1925 The Geneva Protocol, signed by 132 nations on June 17, 1925, was a treaty established to ban the use of chemical and biological weapons among signatories in international armed conflicts. As stated by Coupland and Leins, "it was fostered in part by a 1918 appeal in which the International Committee of the Red Cross (ICRC) described the use of poisonous gas against soldiers as a barbarous invention which science is bringing to perfection". Chemical warfare agents that contained bromine, nitroaromatic compounds, and chlorine were dismantled and destroyed. The destruction and disposal of the chemicals did not consider the long-term and adverse impacts on the environment. The Protocol does not ban the stockpiling or production of chemical weapons, nor their use against non-ratifying states or in internal disturbances or conflicts, and it permits reservations that allow signatories to adopt a policy of no first use. As a result, the Chemical Weapons Convention (CWC), which prohibits the development, production, stockpiling, and use of chemical weapons, was drafted in 1993. Despite there being an international ban on chemical warfare, the CWC "allows domestic law enforcement agencies of the signing countries to use chemical weapons on their citizens". Effect on World War II All major combatants stockpiled chemical weapons during the Second World War, but the only reported uses of them in the conflict were the Japanese use of relatively small amounts of mustard gas and lewisite in China, Italy's use of gas in Ethiopia (in what is more often considered to be the Second Italo-Ethiopian War), and very rare occurrences in Europe (for example some mustard gas bombs were dropped on Warsaw on 3 September 1939, which Germany acknowledged in 1942 but indicated had been accidental). Mustard gas was the agent of choice, with the British stockpiling 40,719 tons, the Soviets 77,400 tons, the Americans over 87,000 tons and the Germans 27,597 tons. The destruction of an American cargo ship containing mustard gas led to many casualties in Bari, Italy, in December 1943. In both Axis and Allied nations, children in school were taught to wear gas masks in case of gas attack. Germany developed the poison gases tabun, sarin, and soman during the war, and used Zyklon B in its extermination camps. Neither Germany nor the Allied nations used any of their war gases in combat, despite maintaining large stockpiles and occasional calls for their use. Poison gas played an important role in the Holocaust. Britain made plans to use mustard gas on the landing beaches in the event of an invasion of the United Kingdom in 1940. The United States considered using gas to support its planned invasion of Japan. Casualties A range of authors have attempted to estimate the casualties from chemical weapons in WWI. This is hampered by incomplete data. British casualties were best recorded, while estimates of gas casualties amongst Russians on the Eastern Front have been described as "pure guesswork", a major issue as it is often claimed that a large proportion of casualties occurred there. A commonly used estimate claims 90,000 fatalities and 1.3 million casualties. Of this, 26,600 deaths and 652,000 casualties come from the UK, France, Germany and the US where more dependable data exists.
Of the rest, historian L. F. Haber suggests the usual estimates are likely too high, but concedes "we shall never know". It is generally agreed that the contribution of gas weapons to the total casualty figures was relatively minor. British figures, which were accurately maintained from 1916, recorded that 3% of gas casualties were fatal, 2% were permanently invalid and 70% were fit for duty again within six weeks. Death by gas was often slow and painful. According to Denis Winter (Death's Men, 1978), a fatal dose of phosgene eventually led to "shallow breathing and retching, pulse up to 120, an ashen face and the discharge of four pints (2 litres) of yellow liquid from the lungs each hour for the 48 hours of the drowning spasms." A common fate of those exposed to gas was blindness, chlorine gas or mustard gas being the main causes. One of the most famous First World War paintings, Gassed by John Singer Sargent, captures such a scene of mustard gas casualties which he witnessed at a dressing station at Le Bac-du-Sud near Arras in July 1918. (The gases used during that battle (tear gas) caused temporary blindness and/or a painful stinging in the eyes. The eye bandages seen in the painting were normally water-soaked to provide a rudimentary form of pain relief to the eyes of casualties before they reached more organized medical help.) The proportion of mustard gas fatalities to total casualties was low; 2% of mustard gas casualties died and many of these succumbed to secondary infections rather than the gas itself. Once it was introduced at the Third Battle of Ypres, mustard gas produced 90% of all British gas casualties and 14% of battle casualties of any type. Mustard gas was a source of extreme dread. In The Anatomy of Courage (1945), Lord Moran, who had been a medical officer during the war, wrote: Mustard gas did not need to be inhaled to be effective—any contact with skin was sufficient. Exposure to 0.1 ppm was enough to cause massive blisters. Higher concentrations could burn flesh to the bone. It was particularly effective against the soft skin of the eyes, nose, armpits and groin, since it dissolved in the natural moisture of those areas. Typical exposure would result in swelling of the conjunctiva and eyelids, forcing them closed and rendering the victim temporarily blind. Where it contacted the skin, moist red patches would immediately appear which after 24 hours would have formed into blisters. Other symptoms included severe headache, elevated pulse and temperature (fever), and pneumonia (from blistering in the lungs). Many of those who survived a gas attack were scarred for life. Respiratory disease and failing eyesight were common post-war afflictions. Of the Canadians who, without any effective protection, had withstood the first chlorine attacks during Second Ypres, 60% of the casualties had to be repatriated and half of these were still unfit by the end of the war, over three years later. Many of those who were fairly soon recorded as fit for service were left with scar tissue in their lungs. This tissue was susceptible to tuberculosis attack. It was from this tuberculosis that many of the 1918 gas casualties eventually died, around the time of the Second World War, shortly before sulfa drugs became widely available for its treatment. British testimony A British nurse treating mustard gas cases recorded: A postmortem account from the British official medical history records one of the British casualties: Case four. Aged 39 years. Gassed 29 July 1917. Admitted to casualty clearing station the same day. Died about ten days later.
Brownish pigmentation present over large surfaces of the body. A white ring of skin where the wrist watch was. Marked superficial burning of the face and scrotum. The larynx much congested. The whole of the trachea was covered by a yellow membrane. The bronchi contained abundant gas. The lungs fairly voluminous. The right lung showing extensive collapse at the base. Liver congested and fatty. Stomach showed numerous submucous haemorrhages. The brain substance was unduly wet and very congested. Civilian casualties The belligerents avoided deliberate attacks on civilians, but gas cloud casualties were not limited to the front. Nearby towns were at risk from winds blowing the poison gases through, with only the French taking special precautions in planning gas attacks. Later in the war, the British provided warnings and issued civilians working near the front with gas masks, leading to improved preparedness. Eventually, everyone within 8 kilometers of the front was to carry a respirator at all times. Regardless, a significant number would be exposed, with the most serious case in Armentières, where lingering mustard gas residue from heavy German bombardment in July 1917 led to 675 civilian casualties (including 86 killed). Hundreds of shells rained down per minute, and while civilians had shelters and gas masks, the particular dangers of mustard gas were not yet known. British and French records list a total of around 1,325 civilian casualties, including over a hundred deaths from German gas weapons. This is an underestimate as smaller incidents of exposure have not been recorded, and there is no German record of civilian casualties from Allied weapons. In addition, around 4,000 civilians working in chemical weapons production and shell filling in France, Britain and the United States were injured due to accidental exposure. Similar figures for Germany are not available, though it is known that there were a number of deaths. The British did not publicise incidents of civilians being gassed by Germans due to fears about the effect on morale at home. Countermeasures None of the First World War's combatants were prepared for the introduction of poison gas as a weapon. Once gas was introduced, development of gas protection began and the process continued for much of the war, producing a series of increasingly effective gas masks. Even at Second Ypres, Germany, still unsure of the weapon's effectiveness, only issued breathing masks to the engineers handling the gas. At Ypres, a Canadian medical officer who was also a chemist quickly identified the gas as chlorine and recommended that the troops urinate on a cloth and hold it over the mouth and nose; the urine was left to sit for a period so that ammonia would form, which neutralized some of the chemicals in the chlorine gas. This improvisation helped delay the German advance at Ypres, giving the Allies time to reinforce the area after French and other colonial troops had retreated. The first official equipment issued was similarly crude; a pad of material, usually impregnated with a chemical, tied over the lower face. To protect the eyes from tear gas, soldiers were issued with gas goggles. The next advance was the introduction of the gas helmet—basically a bag placed over the head. The fabric of the bag was impregnated with a chemical to neutralize the gas—the chemical would wash out into the soldier's eyes whenever it rained. Eye-pieces, which were prone to fog up, were initially made from talc.
When going into combat, soldiers typically wore gas helmets rolled up on top of the head, to be pulled down and secured about the neck when the gas alarm was given. The first British version was the hypo helmet, the fabric of which was soaked in sodium hyposulfite (commonly known as "hypo"). The British P gas helmet, partially effective against phosgene and with which all infantry were equipped at Loos, was impregnated with sodium phenolate. A mouthpiece was added through which the wearer would breathe out to prevent carbon dioxide build-up. The adjutant of the 1/23rd Battalion, The London Regiment, recalled his experience of the P helmet at Loos: A modified version of the P helmet, called the PH helmet, was issued in January 1916, and was impregnated with hexamethylenetetramine to improve protection against phosgene. Self-contained box respirators represented the culmination of gas mask development during the First World War. Box respirators used a two-piece design; a mouthpiece connected via a hose to a box filter. The box filter contained granules of chemicals that neutralised the gas, delivering clean air to the wearer. Separating the filter from the mask enabled a bulky but efficient filter to be supplied. Nevertheless, the first version, known as the large box respirator (LBR) or "Harrison's Tower", was deemed too bulky—the box canister needed to be carried on the back. The LBR had no mask, just a mouthpiece and nose clip; separate gas goggles had to be worn. It continued to be issued to the artillery gun crews but the infantry were supplied with the "small box respirator" (SBR). The Small Box Respirator featured a single-piece, close-fitting rubberized mask with eye-pieces. The box filter was compact and could be worn around the neck. The SBR could be readily upgraded as more effective filter technology was developed. The British-designed SBR was also adopted for use by the American Expeditionary Force. The SBR was the prized possession of the ordinary infantryman; when the British were forced to retreat during the German spring offensive of 1918, it was found that while some troops had discarded their rifles, hardly any had left behind their respirators. Horses and mules were important means of transport that could be endangered if they came into close contact with gas. This was not so much of a problem until it became common to deliver gas over great distances. This caused researchers to develop masks that could be used on animals such as dogs, horses, mules, and even carrier pigeons. For mustard gas, which could cause severe damage by simply making contact with skin, no effective countermeasure was found during the war. The kilt-wearing Scottish regiments were especially vulnerable to mustard gas injuries due to their bare legs. At Nieuwpoort in Flanders some Scottish battalions took to wearing women's tights beneath the kilt as a form of protection. Gas alert procedure became routine for the front-line soldier. To warn of a gas attack, a bell would be rung, often made from a spent artillery shell. At the noisy batteries of the siege guns, a compressed air strombus horn was used, which could be heard nine miles (14 km) away. Notices would be posted on all approaches to an affected area, warning people to take precautions. Other British attempts at countermeasures were not so effective. An early plan was to use 100,000 fans to disperse the gas. Burning coal or carborundum dust was tried.
A proposal was made to equip front-line sentries with diving helmets, air being pumped to them through a 100 ft (30 m) hose. The effectiveness of all countermeasures is apparent. In 1915, when poison gas was relatively new, less than 3% of British gas casualties died. In 1916, the proportion of fatalities jumped to 17%. By 1918, the figure was back below 3%, though the total number of British gas casualties was now nine times the 1915 levels. Delivery systems The first system employed for the mass delivery of gas involved releasing the gas cylinders in a favourable wind such that it was carried over the enemy's trenches. The Hague Convention of 1899 prohibited the use of poison gasses delivered by projectiles. The main advantage of this method was that it was relatively simple and, in suitable atmospheric conditions, produced a concentrated cloud capable of overwhelming the gas mask defences. The disadvantages of cylinder releases were numerous. First and foremost, delivery was at the mercy of the wind. If the wind was fickle, as was the case at Loos, the gas could backfire, causing friendly casualties. Gas clouds gave plenty of warning, allowing the enemy time to protect themselves, though many soldiers found the sight of a creeping gas cloud unnerving. Gas clouds had limited penetration, only capable of affecting the front-line trenches before dissipating. Finally, the cylinders had to be emplaced at the very front of the trench system so that the gas was released directly over no man's land. This meant that the cylinders had to be manhandled through communication trenches, often clogged and sodden, and stored at the front where there was always the risk that cylinders would be prematurely breached during a bombardment. A leaking cylinder could issue a telltale wisp of gas that, if spotted, would be sure to attract shellfire. A British chlorine cylinder, known as an "oojah", weighed 190 lb (86 kg), of which 60 lb (27 kg) was chlorine gas, and required two men to carry. Phosgene gas was introduced later in a cylinder, known as a "mouse", that weighed 50 lb (23 kg). Delivering gas via artillery shell overcame many of the risks of dealing with gas in cylinders. The Germans, for example, used artillery shells. Gas shells were independent of the wind and increased the effective range of gas, making anywhere within reach of the guns vulnerable. Gas shells could be delivered without warning, especially the clear, nearly odourless phosgene—there are numerous accounts of gas shells, landing with a "plop" rather than exploding, being initially dismissed as dud HE or shrapnel shells, giving the gas time to work before the soldiers were alerted and took precautions. The main flaw associated with delivering gas via artillery was the difficulty of achieving a killing concentration. Each shell had a small gas payload and an area would have to be subjected to a saturation bombardment to produce a cloud to match cylinder delivery. Mustard gas did not need to form a concentrated cloud and hence artillery was the ideal vehicle for delivery of this battlefield pollutant. The solution to achieving a lethal concentration without releasing from cylinders was the "gas projector", essentially a large-bore mortar that fired the entire cylinder as a missile. The British Livens projector (invented by Captain W.H. 
Livens in 1917) was a simple device; an diameter tube sunk into the ground at an angle, a propellant was ignited by an electrical signal, firing the cylinder containing 30 or 40 lb (14 or 18 kg) of gas up to 1,900 metres. By arranging a battery of these projectors and firing them simultaneously, a dense concentration of gas could be achieved. The Livens was first used at Arras on 4 April 1917. On 31 March 1918 the British conducted their largest ever "gas shoot", firing 3,728 cylinders at Lens. Unexploded weapons Over of France had to be cordoned off at the end of the war because of unexploded ordnance. About 20% of the chemical shells were duds, and approximately 13 million of these munitions were left in place. This has been a serious problem in former battle areas from immediately after the end of the War until the present. Shells may be, for instance, uncovered when farmers plough their fields (termed the 'iron harvest'), and are also regularly discovered when public works or construction work is done. After the armistice, people sought unexploded weapons for their metal value, as well as preventing the danger that they posed to civilians. Toxic chemicals were emptied from shells, resulting in many deaths and health defects. Another difficulty is the current stringency of environmental legislation. In the past, a common method of getting rid of unexploded chemical ammunition was to detonate or dump it at sea; this is currently prohibited in most countries. The problems are especially acute in some northern regions of France. The French government no longer disposes of chemical weapons at sea. For this reason, piles of untreated chemical weapons accumulated. In 2001, it became evident that the pile stored at a depot in Vimy was unsafe; the inhabitants of the neighbouring town were evacuated, and the pile moved, using refrigerated trucks and under heavy guard, to a military camp in Suippes. The capacity of the plant is meant to be 25 tons per year (extensible to 80 tons at the beginning), for a lifetime of 30 years. Germany has to deal with unexploded ammunition and polluted lands resulting from the explosion of an ammunition train in 1919. Aside from unexploded shells, there have been claims that poison residues have remained in the local environment for an extended period, though this is unconfirmed; well known but unverified anecdotes claim that as late as the 1960s trees in the area retained enough mustard gas residue to injure farmers or construction workers who were clearing them. Disposal methods of chemical weapons After World War I, the United States, Germany, the United Kingdom and other nations had stockpiles of unfired weapons. It has been estimated that 125 million tons of toxic gases were used to manufacture bombs, grenades and shells. The remaining weapons were destroyed, dismantled, and disposed of in oceans and seas. It was believed that the chemicals would be diluted when disposed of in the ocean, and therefore ocean and sea dumping was a "safe and convenient" practice. Hundreds of thousands of tons of chemical agents, such as sulphur mustard, cyanogen chloride and arsine oil, were disposed of at sea. Chemical weapons have since washed up on shorelines and been found by fishers, causing injuries and, in some cases, death. Other disposal methods included land burials and incineration. After World War 1, "chemical shells made up 35 percent of French and German ammunition supplies, 25 percent British and 20 percent American". 
Weapons that contained chemicals such as bromine, chlorine and nitroaromatic were burned. The thermal destruction of chemical weapons negatively impacted the ecological environment of disposal sites. For example, in Verdun, France, the thermal destruction of weapons "resulted in severe metal contamination of upper 4–10 cm of topsoil" at the Place à Gas disposal site. Gases used Long-term health effects Soldiers who claimed to have been exposed to chemical warfare often presented unusual medical conditions which has led to much controversy. The lack of information left doctors, patients, and their families in the dark in terms of prognosis and treatment. Nerve agents such as sarin, tabun, and soman are believed to have had the most significant long-term health effects. Chronic fatigue and memory loss were reported to last up to three years after exposure. In the years following World War One, there were many conferences held in attempts to abolish the use of chemical weapons altogether, such as the Washington Naval Conference (1921–22), Geneva Conference (1923–25) and the World Disarmament Conference (1933). The United States was an original signatory of the Geneva Protocol in 1925, but the US Senate did not ratify it until 1975. Although the health effects are generally chronic in nature, the exposures were generally acute. A positive correlation has been proven between exposure to mustard agents and skin cancers, other respiratory and skin conditions, leukemia, several eye conditions, bone marrow depression and subsequent immunosuppression, psychological disorders and sexual dysfunction. Chemicals used in the production of chemical weapons also left residues in the soil where the weapons were used. The chemicals that were detected can cause cancer and can affect the brain, blood, liver, kidneys and skin. The development and production of chemical weapons threatened public health and introduced a new set of challenges. Not only did war gasses like mustard and chlorine endanger the lives of soldiers, but also threatened the safety of workers who manufactured them. Explanatory notes References Further reading Cook, Tim. "‘Against God-Inspired Conscience’: The Perception of Gas Warfare as a Weapon of Mass Destruction, 1915–1939." War & Society 18.1 (2000): 47-69. Dorsey, M. Girard. Holding Their Breath: How the Allies Confronted the Threat of Chemical Warfare in World War II (Cornell UP, 2023) online. Fitzgerald, Gerard J. "Chemical warfare and medical response during World War I." American journal of public health 98.4 (2008): 611–625. online Jones, Edgar. "Terror weapons: The British experience of gas and its treatment in the First World War." War in History 21.3 (2014): 355–375. online Padley, Anthony Paul. "Gas: the greatest terror of the Great War." Anaesthesia and intensive care 44.1_suppl (2016): 24–30. online Smith, Susan I. Toxic Exposures: Mustard Gas and the Health Consequences of World War II in the United States (Rutgers University Press, 2017) online book review External links Faith, Thomas I.: Gas Warfare, in: 1914–1918-online. International Encyclopedia of the First World War. 
Chemical Weapons in World War I Gas Warfare Gas-Poisoning, by Arthur Hurst, M.A., MD (Oxon), FRCP 1917 effects of chlorine gas poisoning Understanding Chemical Weapons in the First World War World War I Environmental impact of war World War I crimes World War I crimes by Austria-Hungary World War I crimes by Imperial Germany World War I crimes by the British Empire and Commonwealth World War I crimes by the Third French Republic World War I crimes by the United States Italian war crimes United Kingdom chemical weapons program
Chemical weapons in World War I
[ "Chemistry" ]
9,517
[ "World War I chemical weapons", "Chemical warfare by conflict", "Chemical weapons" ]
167,585
https://en.wikipedia.org/wiki/Washington%20Monument
The Washington Monument is an obelisk on the National Mall in Washington, D.C., built to commemorate George Washington, a Founding Father of the United States, victorious commander-in-chief of the Continental Army from 1775 to 1783 in the American Revolutionary War, and the first president of the United States from 1789 to 1797. Standing east of the Reflecting Pool and the Lincoln Memorial, the monument is made of bluestone gneiss for the foundation and of granite for the construction. The outside facing consists, due to the interrupted building process, of three different kinds of white marble: in the lower third, marble from Baltimore County, Maryland, followed by a narrow zone of marble from Sheffield, Berkshire County, Massachusetts, and, in the upper part, the so-called Cockeysville Marble. Both "Maryland Marbles" came from the "lost" Irish Quarry Town of "New Texas". It is both the world's tallest predominantly stone structure and the world's tallest obelisk, standing tall, according to U.S. National Geodetic Survey measurements in 2013–2014. It is the tallest monumental column in the world if all are measured above their pedestrian entrances. It was the world's tallest structure between 1884 and 1889, after which it was overtaken by the Eiffel Tower in Paris. Previously, the tallest structures were Lincoln Cathedral (1311–1548; 525 ft/160 m) and Cologne Cathedral (1880–1884; 515 ft/157 m). Construction of the presidential memorial began in 1848. The construction was suspended from 1854 to 1877 due to funding challenges, a struggle for control over the Washington National Monument Society, and the American Civil War. The stone structure was completed in 1884, and the internal ironwork, the knoll, and installation of memorial stones were completed in 1888. A difference in shading of the marble, visible about 27% of the way up, shows where construction was halted and later resumed with marble from a different source. The original design was by Robert Mills from South Carolina, but his proposed colonnade was omitted for lack of funds, and construction proceeded instead with a bare obelisk. The cornerstone was laid on July 4, 1848; the first stone was laid atop the unfinished stump on August 7, 1880; the capstone was set on December 6, 1884; the completed monument was dedicated on February 21, 1885; it opened on October 9, 1888. The Washington Monument is a hollow Egyptian-style stone obelisk with a column surmounted by a pyramidion. Its walls are thick at its base and thick at their top. The marble pyramidion's walls are thick, supported by six arches: two between opposite walls, which cross at the center of the pyramidion, and four smaller arches in the corners. The top of the pyramidion is a large marble capstone with a small aluminum pyramid at its apex, with inscriptions on all four sides. The bottom of the walls, built during the first phase from 1848 to 1854, is composed of a pile of bluestone gneiss rubble stones (not finished stones) held together by a large amount of mortar with a facade of semi-finished marble stones about thick. The upper of the walls, built in the second phase, 1880–1884, are of finished marble surface stones, half of which project into the walls, partly backed by finished granite stones. The interior is occupied by iron stairs that spiral up the walls, with an elevator in the center, each supported by four iron columns, which do not support the stone structure.
The stairs are in fifty sections, most on the north and south walls, with many long landings stretching between them along the east and west walls. These landings allowed many inscribed memorial stones of various materials and sizes to be easily viewed while the stairs were accessible (until 1976), plus one memorial stone between stairs that is difficult to view. The pyramidion has eight observation windows, two per side, and eight red aircraft warning lights, two per side. Two aluminum lightning rods, connected by the elevator support columns to groundwater, protect the monument. The monument's present foundation is thick, consisting of half of its original bluestone gneiss rubble encased in concrete. At the northeast corner of the foundation, below ground, is the marble cornerstone, including a zinc case filled with memorabilia. Fifty American flags fly on a large circle of poles centered on the monument. In 2001, a temporary screening facility was added to the entrance to prevent a terrorist attack. A Virginia-centered earthquake in 2011 slightly damaged the monument, and it was closed until 2014. The monument was closed for elevator repairs, security upgrades, and mitigation of soil contamination in August 2016 before reopening again fully in September 2019. History Rationale George Washington (1732–1799), hailed as the father of his country, and as the leader who was "first in war, first in peace and first in the hearts of his countrymen", as Maj. Gen. 'Light-Horse Harry' Lee eulogized at Washington's December 26, 1799, funeral, was the dominant military and political leader of the new United States of America from 1775 to 1799. At Washington's death in 1799, he was the unchallenged public icon of American military and civic patriotism. He was also identified with the Federalist Party, which lost control of the national government in 1800 to the Jeffersonian Republicans, who were reluctant to celebrate the hero of the opposition party. Proposals Starting with victory in the American Revolutionary War, there were many proposals to build a monument to Washington, beginning with an authorization in 1783 by the old Confederation Congress to erect an equestrian statue of the General in a future American national capital city. After his December 1799 death, the United States Congress authorized a suitable memorial in the planned national capital then under construction since 1791, but the decision was reversed when the Democratic-Republican Party (Jeffersonian Republicans) took control of Congress in 1801 after the pivotal 1800 Election, with the first change of power between opposing political parties. The Republicans were dismayed that Washington had become the symbol of the Federalist Party; furthermore the values of Republicanism seemed hostile to the idea of building monuments to powerful men. They also blocked his image on coins or the celebration of his birthday. Further political squabbling, along with the North–South division on the Civil War, blocked the completion of the Washington Monument until the late 19th century. By that time, Washington had the image of a national hero who could be celebrated by both North and South, and memorials to him were no longer controversial. As early as 1783, the old Confederation Congress (successors after 1781 to the earlier Second Continental Congress) had resolved "That an equestrian statue of George Washington be erected at the place where the residence of Congress shall be established". 
The proposal called for an engraving on the statue explaining that it had been erected "in honor of George Washington, the illustrious Commander-in-Chief of the Armies of the United States of America during the war which vindicated and secured their liberty, sovereignty, and independence". Currently, there are two equestrian statues of President Washington in the national capital city of Washington, D.C. One is located in Washington Circle at the intersection of the Foggy Bottom and West End neighborhoods at the north end of the George Washington University campus, and the other is in the gardens of the National Cathedral of the Episcopal Church on Mount St. Alban in northwest Washington. On December 24, 1799, 10 days after Washington's death, a U.S. Congressional committee recommended a different type of monument. John Marshall (1755–1835), a Representative from Virginia who later became Chief Justice of the United States (1801–1835), proposed that a tomb be erected within the Capitol, and a crypt for this purpose was later designed below the rotunda of the great dome. However, a lack of funds, disagreement over what type of memorial would best honor the country's first president, and the Washington family's reluctance to move his body from Mount Vernon prevented progress on any project. Design Progress toward a memorial finally began in 1833. That year a large group of citizens, including Eliza Hamilton, Dolley Madison, and Louisa Adams, formed the Washington National Monument Society. Three years later, in 1836, after they had raised $28,000 in donations, they announced a competition for the design of the memorial. On September 23, 1835, the board of managers of the society described their expectations: The society held a competition for designs in 1836. In 1845, the winner was announced to be architect Robert Mills, supposedly the first native-born American to be professionally trained as an architect. The citizens of Baltimore had chosen him in 1814 to build one of the first monuments to George Washington, originally planned for the former courthouse square in their port city, and he had designed a tall, elaborately decorated Greek column with balconies, surmounted by a statue of the President. Mills' Baltimore monument, with cornerstone laid and construction begun in 1815, was later simplified to a plain column shaft with a statue of a toga-clad Washington at the top when it was completed in 1829; its site had been moved (because of its height) to the then rural hills to the north, where the city's growth would later extend. Mills also knew the capital well: it lay only southwest of Baltimore, and he had just been chosen Architect of Public Buildings for Washington. His design called for a circular colonnaded building in diameter and high from which sprang a four-sided obelisk high, for a total elevation of . A massive cylindrical pillar in diameter supported the obelisk at the center of the building. The obelisk was to be square at the base and square at the top with a slightly peaked roof. Both the obelisk and the pillar were hollow, and a railway spiraled up inside them. The obelisk had no doorway—instead its interior was entered from the interior of the pillar upon which it was mounted. The pillar had an "arched way" at its base. The top of the portico of the building would feature Washington standing in a chariot holding the reins of six horses.
Inside the colonnade would be statues of 30 prominent Revolutionary War heroes as well as statues of the 56 signers of the Declaration of Independence. Criticism of Mills's design arose as early as 1847, when architect Henry Robinson Searle of Rochester presented an alternative concept, backed by three objections to Mills's project. Moreover, the estimated price tag of more than $1 million (in 1848 money) caused the society to hesitate. On April 11, 1848, the society decided, due to a lack of funds, to build only a simple plain obelisk. Mills's 1848 obelisk was to be tall, square at the base and square at the top. It had two massive doorways, each high and wide, on the east and west sides of its base. Surrounding each doorway were raised jambs, a heavy pediment, and an entablature within which was carved an Egyptian-style winged sun and asps. This original design conformed to a massive temple which was to have surrounded the base of the obelisk, but because it was never built, the architect of the second phase of construction, Thomas Lincoln Casey, smoothed down the projecting jambs, pediment and entablature in 1885, walled up the west entrance with marble forming an alcove, and reduced the east entrance to high. The western alcove has contained a bronze statue of Washington since 1992–93. Also during 1992–93, a limestone surround decorated with a winged sun and asps was installed at the east elevator entrance to mimic Mills's 1848 design. Construction The Washington Monument was originally intended to be located at the point at which a line running directly south from the center of the White House crossed a line running directly west from the center of the U.S. Capitol on Capitol Hill. French-born military engineer Pierre (Peter) Charles L'Enfant's 1791 visionary "Plan of the city intended for the permanent seat of the government of the United States ..." designated this point as the location of the proposed central equestrian statue of George Washington that the old Confederation Congress had, in 1783 at the end of the American Revolutionary War (1775–1783), voted to erect in a future American national capital city. The ground at the intended location proved to be too unstable to support a structure as heavy as the planned obelisk, so the monument's location was moved east-southeast. At that originally intended site there now stands a small monolith called the Jefferson Pier. Consequently, the McMillan Plan specified that the Lincoln Memorial should be "placed on the main axis of the Capitol and the Monument", about 1° south of due west of the Capitol or the monument, rather than due west of either. Excavation and initial construction Construction of the monument finally began three years later in 1848 with the excavation of the site, the laying of the cornerstone on the prepared bed, and the laying of the original foundation around and on top of the cornerstone, before the construction of its massive walls began the next year. Regarding modern claims of slave labor being used in construction, Washington Monument historian John Steele Gordon stated "I can't say for certain, but the stonemasonry was pretty highly skilled, so it's unlikely that slaves would've been doing it. The stones were cut by stonecutters, which is highly skilled work; and the stones were hoisted by means of steam engines, so you'd need a skilled engineer and foreman for stuff like that. Tending the steam engine, building the cast-iron staircase inside—that wasn't grunt work. ... 
The early quarries were in Maryland, so slave labor was undoubtedly used to quarry and haul the stone." Abraham Riesman, who quoted Gordon, states "there were plenty of people who worked as skilled laborers while enslaved in antebellum America. Indeed, there were enslaved people who worked as stonemasons. So the possibility remains that there were slaves who performed some of the necessary skilled labor for the monument." According to historian Jesse Holland, it is very likely that African American slaves were among the construction workers, given that slavery prevailed in Washington and its surrounding states at that time, and slaves were commonly used in public and private construction. Gordon's arguments apply to the second phase (1879–1888), after slavery was abolished, when every stone laid required dressing and polishing by a skilled stonemason; the same holds for the iron staircase, which was constructed in 1885–86. That the stonecutters in the quarries were slaves is supported by the fact that all quarry workers were slaves during the construction of the United States Capitol during the 1790s. Holland's views, however, apply to the first phase, because most of its construction required only unskilled manual labor. No information survives concerning the method used to lift stones that weighed several tons each during the first phase, whether by a manual winch or a steam engine. The surviving information concerning slaves who built the core of the United States Capitol during the 1790s is not much help. At the time, the District of Columbia outside of Georgetown was sparsely populated, so the federal government rented slaves from their owners, who were paid a fee for their slaves' normal daily labor. Any overtime for Sundays, holidays, and nights was paid directly to the slaves, which they could use for daily needs or to save to buy their freedom. Conversely, the first phase of the monument was constructed by a private entity, the Washington National Monument Society, which may not have been as magnanimous as the federal government, but most information was lost during the 1850s while two Societies vied for control of the monument. Useful information concerning the use of slaves during the major expansion of the Capitol during the 1850s, nearly contemporaneous with the monument's first phase, does not exist. 
Almost all the marble stones of the first and second phases were Cockeysville Marble, obtained from quarries north of downtown Baltimore in rural Baltimore County, where the stone for Baltimore's own Washington Monument had also been obtained. On Independence Day, July 4, 1848, the Freemasons, the same organization to which Washington belonged, laid the cornerstone (symbolically, not physically). According to Joseph R. Chandler: Two years later, on a torrid July 4, 1850, George Washington Parke Custis (1781–1857), the adopted son of George Washington and grandson of Martha Washington (1731–1802), dedicated a stone from the people of the District of Columbia to the Monument at a ceremony that 12th President Zachary Taylor (1784–1850, served 1849–1850) attended, just five days before he died from food poisoning. Donations run out Construction continued until 1854, when donations ran out and the monument had reached a height of . At that time a memorial stone contributed by Pope Pius IX, called the Pope's Stone, was destroyed by members of the anti-Catholic, nativist American Party, better known as the "Know-Nothings", during the early morning hours of (a priest replaced it in 1982 using the Latin phrase "A Roma Americae" instead of the original stone's English phrase "Rome to America"). Economic and political conditions of the time caused public contributions to the Washington National Monument Society to cease, so they appealed to Congress for money. The request had just reached the floor of the House of Representatives when the Know-Nothing Party seized control of the Society on February 22, 1855, a year after construction funds ran out. Congress immediately tabled its expected contribution of $200,000 to the Society, effectively halting the federal appropriation. During its tenure, the Know-Nothing Society added only two courses of masonry, or , to the monument using rejected masonry it found on site, increasing the height of the shaft to . The original Society refused to recognize the takeover, so the two rival Societies existed side by side until 1858. With the Know-Nothing Party disintegrating and unable to secure contributions for the monument, the Know-Nothing Society surrendered possession of the monument to the original Society three and a half years later on . To prevent future takeovers, the U.S. Congress incorporated the Society on with a stated charter and set of rules and procedures. Post–Civil War The American Civil War (1861–1865) halted all work on the monument, but interest grew after the war's end. Engineers studied the foundation several times to determine if it was strong enough for continued construction after 20 years of effective inactivity. In 1876, the centennial year of the Declaration of Independence, Congress agreed to appropriate another $200,000 to resume construction. Before work could begin again, arguments about the most appropriate design resumed. Many people thought a simple obelisk, one without the colonnade, would be too bare. Architect Mills was reputed to have said omitting the colonnade would make the monument look like "a stalk of asparagus"; another critic said it offered "little ... to be proud of". This attitude led people to submit alternative designs. Both the Washington National Monument Society and Congress held discussions about how the monument should be finished. 
The Society considered five new designs and an anonymous "interesting project of California" (which later turned out to be by Arthur Frank Mathews), concluding that the one by William Wetmore Story seemed "vastly superior in artistic taste and beauty". Congress deliberated over those five proposals (among them designs by Paul Schulze, who built Boylston Hall, and John Fraser) as well as Mills's original. While it was deciding, it ordered work on the obelisk to continue. Finally, the members of the society agreed to abandon the colonnade and alter the obelisk so it conformed to classical Egyptian proportions. Resumption Construction resumed in 1879 under the direction of Lieutenant Colonel Thomas Lincoln Casey of the United States Army Corps of Engineers. Casey redesigned the foundation, strengthening it so it could support a structure that ultimately weighed more than 40,000 tons. The first stone atop the unfinished stump was laid on August 7, 1880, in a small ceremony attended by President Rutherford B. Hayes, Casey and a few others. The president placed a small coin on which he had scratched his initials and the date in the bed of wet cement at the level before the first stone was laid on top of it. Casey found 92 memorial stones ("presented stones") already inlaid into the interior walls of the first phase of construction. Before construction continued he temporarily removed eight stones at the level so that the walls at that level could be sloped outward, producing thinner second-phase walls. He inserted those stones and most of the remaining memorial stones stored in the lapidarium into the interior walls during 1885–1889. The bottom third of the monument is a slightly lighter shade than the rest of the construction because the marble was obtained from different quarries. The building of the monument proceeded quickly after Congress had provided sufficient funding. In four years, it was completed, with the 100-ounce (2.83 kg) aluminum apex/lightning-rod being put in place on December 6, 1884. The apex was the largest single piece of aluminum cast at the time, when aluminum commanded a price comparable to silver. Two years later, the Hall–Héroult process made aluminum easier to produce and the price of aluminum plummeted, though the metal was still expected to provide a lustrous, non-rusting apex. The monument opened to the public on October 9, 1888. Dedication The Monument was dedicated on February 21, 1885. Over 800 people were present on the monument grounds on a frigid day to hear speeches by Ohio Senator John Sherman (1823–1900), the Rev. Henderson Suter, William Wilson Corcoran of the Washington National Monument Society (whose speech was read by Dr. James C. Welling because Corcoran was unable to attend), Freemason Myron M. Parker, Col. Thomas Lincoln Casey of the Army Corps of Engineers, and President Chester A. Arthur. President Arthur proclaimed: I do now ... in behalf of the people, receive this monument ... and declare it dedicated from this time forth to the immortal name and memory of George Washington. After the speeches, Lieutenant-General Philip Sheridan (1831–1888), a Civil War cavalry veteran and then General-in-Chief of the United States Army, led a procession, which included the dignitaries and the crowd, past the Executive Mansion, now the White House, then via Pennsylvania Avenue to the east main entrance of the Capitol Building, where President Arthur (1829–1886, served 1881–1885) received passing troops. 
Then, in the House of Representatives Chamber at the Capitol, the president, his Cabinet, diplomats and others listened to Representative John Davis Long (1838–1915; former Lieutenant Governor and Governor of Massachusetts and future Secretary of the Navy) read a speech written a few months earlier by Robert C. Winthrop (1809–1894), who had been Speaker of the House of Representatives when the cornerstone was laid 37 years earlier in 1848 but was now too ill to deliver the speech himself. A final speech was given by John W. Daniel (1842–1910) of Virginia, a well-regarded lawyer, author, Representative (congressman), and Senator. The festivities concluded that evening with fireworks, both aerial and ground displays. Later history Upon completion, it was the world's tallest structure, a distinction it held until the Eiffel Tower was completed in Paris four years later in 1889. It is still the tallest building in Washington, D.C. The Heights of Buildings Act of 1910 restricts new building heights to no more than a fixed amount greater than the width of the adjacent street. This monument is taller than the obelisks around the capitals of Europe and in Egypt and Ethiopia, but ordinary antique obelisks were quarried as a monolithic block of stone, and were therefore seldom taller than approximately . The Washington Monument attracted enormous crowds before it officially opened. For six months after its dedication, 10,041 people climbed the 900 steps and 47 large landings to the top. After the elevator that had been used to raise building materials was altered to carry passengers, the number of visitors grew rapidly, and an average of 55,000 people per month were going to the top by 1888, only three years after its completion and dedication. The annual visitor count peaked at an average of 1.1 million people between 1979 and 1997. From 2005 to 2010, when restrictions were placed on the number of visitors allowed per day, the Washington Monument had an annual average of 631,000 visitors. As with all historic areas administered by the National Park Service (an agency of the U.S. Department of the Interior), the national memorial was listed on the National Register of Historic Places on October 15, 1966. In the early 1900s, material started oozing out between the outer stones of the first construction period below the mark, and was referred to by tourists as "geological tuberculosis". This was caused by the weathering of the cement and rubble filler between the outer and inner walls. As the lower section of the monument was exposed to alternating cold and hot, damp and dry weather conditions, the material dissolved and worked its way through the cracks between the stones of the outer wall, solidifying as it dripped down their outer surface. For ten hours in December 1982, the Washington Monument and eight tourists were held hostage by a nuclear arms protester, Norman Mayer, who claimed to have explosives in a van he drove to the monument's base. United States Park Police shot and killed Mayer. The monument was undamaged in the incident, and it was discovered later that Mayer did not have explosives. After this incident, the surrounding grounds were modified in places to restrict the possible unauthorized approach of motor vehicles. The monument underwent an extensive restoration project between 1998 and 2001. During this time it was completely covered in scaffolding designed by the American architect Michael Graves (who was also responsible for the interior changes). 
The project included cleaning, repairing and repointing the monument's exterior and interior stonework. The stone in publicly accessible interior spaces was encased in glass to prevent vandalism, while new windows with narrower frames were installed (to increase the viewing space). New exhibits celebrating the life of George Washington, and the monument's place in history, were also added. A temporary interactive visitor center, dubbed the "Discovery Channel Center" was also constructed during the project. The center provided a simulated ride to the top of the monument, and shared information with visitors during phases in which the monument was closed. The majority of the project's phases were completed by summer 2000, allowing the monument to reopen July 31, 2000. The monument temporarily closed again on December 4, 2000, to allow a new elevator cab to be installed, completing the final phase of the restoration project. The new cab included glass windows, allowing visitors to see some of the 194 memorial stones with their inscriptions embedded in the monument's walls. The installation of the cab took much longer than anticipated, and the monument did not reopen until February 22, 2002. The final cost of the restoration project was $10.5 million. On September 7, 2004, the monument closed for a $15 million renovation, which included numerous security upgrades and redesign of the monument grounds by landscape architect Laurie Olin (b. 1938). The renovations were due partly to security concerns following the September 11, 2001 attacks and the start of the War on Terror. The monument reopened April 1, 2005, while the surrounding grounds remained closed until the landscape was finished later that summer. 2011 earthquake damage On August 23, 2011, the Washington Monument sustained damage during the 5.8 magnitude 2011 Virginia earthquake; over 150 cracks were found in the monument. A National Park Service spokesperson reported that inspectors discovered a crack near the top of the structure, and announced that the monument would be closed indefinitely. A block in the pyramidion also was partially dislodged, and pieces of stone, stone chips, mortar, and paint chips came free of the monument and "littered" the interior stairs and observation deck. The Park Service said it was bringing in two structural engineering firms (Wiss, Janney, Elstner Associates, Inc. and Tipping Mar Associates) with extensive experience in historic buildings and earthquake-damaged structures to assess the monument. Officials said an examination of the monument's exterior revealed a "debris field" of mortar and pieces of stone around the base of the monument, and several "substantial" pieces of stone had fallen inside the memorial. A crack in the central stone of the west face of the pyramidion was wide and long. Park Service inspectors also discovered that the elevator system had been damaged, and was operating only to the level, but was soon repaired. On September 27, 2011, Denali National Park ranger Brandon Latham arrived to assist four climbers belonging to a "difficult access" team from Wiss, Janney, Elstner Associates. The reason for the inspection was the park agency's suspicion that there were more cracks on the monument's upper section not visible from the inside. The agency said it filled the cracks that occurred on August 23. After Hurricane Irene hit the area on August 27, water was discovered inside the memorial, leading the Park Service to suspect there was more undiscovered damage. 
The rappellers used radios to report what they found to engineering experts on the ground. Wiss, Janney, Elstner climber Dave Megerle took three hours to set up the rappelling equipment and set up a barrier around the monument's lightning rod system atop the pyramidion; it was the first time the hatch in the pyramidion had been open since 2000. The external inspection of the monument was completed on October 5, 2011. In addition to the long west crack, the inspection found several corner cracks and surface spalls (pieces of stone broken loose) at or near the top of the monument, and more loss of joint mortar lower down the monument. The full report was issued in December 2011. Bob Vogel, Superintendent of the National Mall and Memorial Parks, emphasized that the monument was not in danger of collapse. "It's structurally sound and not going anywhere", he told the national media at a press conference on September 26, 2011. More than $200,000 was spent between August 24 and September 26 inspecting the structure. The National Park Service said that it would soon begin sealing the exterior cracks on the monument to protect it from rain and snow. On July 9, 2012, the National Park Service announced that the monument would be closed for repairs until 2014. The National Park Service hired construction management firm Hill International in conjunction with joint-venture partner Louis Berger Group to provide coordination between the designer, Wiss, Janney, and Elstner Associates, the general contractor Perini, and numerous stakeholders. NPS said a portion of the plaza at the base of the monument would be removed and scaffolding constructed around the exterior. In July 2013, lighting was added to the scaffolding. Some stone pieces saved during the 2011 inspection would be refastened to the monument, while "Dutchman patches" would be used in other places. Several of the stone lips that help hold the pyramidion's exterior slabs in place were also damaged, so engineers installed stainless steel brackets to more securely fasten them to the monument. The National Park Service reopened the Washington Monument to visitors on May 12, 2014, eight days ahead of schedule. Repairs to the monument cost $15 million, with taxpayers funding $7.5 million of the cost and David Rubenstein funding the other $7.5 million. At the reopening Interior Secretary Sally Jewell, Today show weatherman Al Roker, and American Idol Season 12 winner Candice Glover were present. Subsequent problems and repairs The monument continued to be plagued by problems after the earthquake, including in January 2017 when the lights illuminating it went out. The monument was closed again in September 2016 due to reliability issues with the elevator system. On December 2, 2016, the National Park Service announced that the monument would be closed until 2019 in order to modernize the elevator. The $2–3 million project was to correct the elevator's ongoing mechanical, electrical and computer issues, which had shuttered the monument since August 17. The National Park Service requested funding in its FY 2017 President's Budget Request to construct a permanent screening facility for the Washington Monument. The final months of closure were for mitigation of possibly contaminated underground soil thought to have been introduced in the 1880s. The monument reopened September 19, 2019. Repeated closures After reopening in September 2019, the Washington Monument was closed on March 14, 2020, because of the COVID-19 pandemic. 
It reopened on October 1, 2020, and remained open through the remainder of that year, except for brief closures. On January 11, 2021, a few days after the January 6 United States Capitol attack, the National Park Service announced a two-week closure of the monument until after the presidential inauguration due to "credible threats to visitors and park resources". Following a lack of violence, the closure was extended due to a revival of COVID-19 fears. The monument then reopened on July 14, 2021, only to close yet again on August 16 for two weeks due to lightning strikes which damaged some electrical systems. On September 20, 2022, the monument was closed for one evening because a man was defacing the monument with red paint and graffiti. He was arrested and charged with vandalism, to which he pleaded guilty, and later sentenced to a year of probation and ordered to pay restitution to the Park Service. Components Cornerstone The cornerstone was laid with great ceremony at the northeast corner of the lowest course or step of the old foundation on . Robert Mills, the architect of the monument, stated in September 1848, "The foundations are now brought up nearly to the surface of the ground; the second step being nearly completed, which covers up the corner stone." Therefore, the cornerstone was laid below the 1848 ground level. In 1880, the ground level was raised to the base of the shaft by the addition of a wide earthen embankment encircling the reinforced foundation, widened another 30 feet in 1881, and then the knoll was constructed in 1887–88. If the cornerstone had not been moved during the strengthening of the foundation in 1879–80, its upper surface would now be below the pavement just outside the northeast corner of the shaft. It would now be sandwiched between the concrete slab under the old foundation and the concrete buttress completely encircling what remains of the old foundation. During the strengthening process, about half by volume of the periphery of the lowest seven of eight courses or steps of the old foundation (gneiss rubble) was removed to provide good footing for the buttress. Although a few diagrams, pictures and descriptions of this process exist, the fate of the cornerstone is not mentioned. The cornerstone was a marble block high and square with a large hole for a zinc case filled with memorabilia. The hole was covered by a copper plate inscribed with the date of the Declaration of Independence (July 4, 1776), the date the cornerstone was laid (July 4, 1848), and the names of the managers of the Washington National Monument Society. The memorabilia in the zinc case included items associated with the monument, the city of Washington, the national government, state governments, benevolent societies, and George Washington, plus miscellaneous publications, both governmental and commercial, a coin set, and a Bible, totaling 73 items or collections of items, as well as 71 newspapers containing articles relating to George Washington or the monument. The ceremony began with a parade of dignitaries in carriages, marching troops, fire companies, and benevolent societies. A long oration was delivered by the Speaker of the House of Representatives Robert C. Winthrop. Then, the cornerstone was pronounced sound after a Masonic ceremony using George Washington's Masonic gavel, apron and sash, as well as other Masonic symbols. In attendance were President James K. Polk and other federal, state and local government officials, Elizabeth Schuyler Hamilton, Mrs. Dolley Madison, Mrs. 
John Quincy Adams, and George Washington Parke Custis, among 15,000 to 20,000 others, including a bald eagle. The ceremony ended with fireworks that evening. Memorial stones States, cities, foreign countries, benevolent societies, other organizations, and individuals have contributed 194 memorial stones, all inserted into the east and west interior walls above stair landings or levels for easy viewing, except one on the south interior wall between stairs that is difficult to view. The sources disagree on the number of stones for two reasons: whether one or both "height stones" are included, and the fact that stones not yet on display at the time of a source's publication could not be counted. The "height stones" refer to two stones that indicate height: during the first phase of construction a stone with an inscription that includes the phrase "from the foundation to this height 100 feet" was installed just below the stairway and high above the stairway; during the second phase of construction a stone with a horizontal line and the phrase "top of statue on Capitol" was installed on the level. The Historic Structure Report (HSR, 2004) named 194 "memorial stones" by level, including both height stones. Jacob (2005) described in detail and pictured 193 "commemorative stones", including the 100-foot stone but not the Capitol stone. The Historic American Buildings Survey (HABS, 1994) showed the location of 193 "memorial stones" but did not describe or name any. HABS showed both height stones but did not show one stone not yet installed in 1994. Olszewski (1971) named 190 "memorial stones" by level, including the Capitol stone but not the 100-foot stone. Olszewski did not include three stones not yet installed in 1971. Of the 194 stones, 94 are marble, 40 are granite, 29 are limestone, and 8 are sandstone, with 23 of miscellaneous types, including stones with two types of material and those whose materials are not identified. Unusual materials include native copper (Michigan), pipestone (Minnesota), petrified wood (Arizona), and jadeite (Alaska). The stones vary in size from about square (Carthage) to about (Philadelphia and New York City). Utah contributed one stone as a territory and another as a state, both with inscriptions that include its pre-territorial name, Deseret, both located on the level. A stone at the level of the monument is inscribed in Welsh (translated: "My Language, My Country, My Nation, Welsh forever"). The stone, imported from Wales, was donated by Welsh citizens of New York. Two other stones were presented by the Sunday Schools of the Methodist Episcopal Church in New York and the Sabbath School children of the Methodist Episcopal Church in Philadelphia—the former quotes from the Bible verse Proverbs 10:7, "The memory of the just is blessed". Ottoman Sultan Abdul Mejid I donated $30,000 toward the construction of the Washington Monument, the largest single donation toward its building. The Sultan's intention was to strengthen the friendship between the Ottomans and the Americans. The stone containing the Turkish inscriptions commemorating this event is on the level. The abbreviated translation of the inscriptions states, "So as to strengthen the friendship between the two countries. Abdul-Mejid Kahn has also had his name written on the monument to Washington." 
It combines the works of two eminent calligraphers: an imperial tughra by Mustafa Rakım's student Haşim Efendi, and an inscription in jalī ta'līq script by Kazasker Mustafa Izzet Efendi, the calligrapher who wrote the giant medallions at Hagia Sophia in Istanbul. One stone was donated by the Ryukyu Kingdom and brought back by Commodore Matthew C. Perry, but never arrived in Washington (it was replaced in 1989). Many of the stones donated for the monument carried inscriptions that did not commemorate George Washington. For example, one from the Templars of Honor and Temperance stated "We will not make, buy, sell, or use as a beverage, any spiritous or malt liquors, Wine, Cider, or any other Alcoholic Liquor." (George Washington himself had owned a whiskey distillery which operated at Mount Vernon after he left the presidency.) Aluminum apex The aluminum apex, composed of a metal that at the time was as rare and valuable as silver, was cast by William Frishmuth of Philadelphia. At the time of casting, it was the largest piece of aluminum in the world. Before the installation, it was put on public display at Tiffany's in New York City and stepped over by visitors who could say they had "stepped over the top of the Washington Monument". It was tall before a small amount of metal was vaporized from its tip by lightning strikes during 1885–1934, when it was protected from further damage by tall lightning rods surrounding it. Its base is square. The angle between opposite sides at its tip is 34°48'. It weighed before lightning strikes removed a small amount of aluminum from its tip and sides. Spectral analysis in 1934 showed that it was composed of 97.87% aluminum with the rest impurities. It has a shallow depression in its base to match a slightly raised area atop the small upper surface of the marble capstone, which aligns the sides of the apex with those of the capstone, and the downward protruding lip around that area prevents water from entering the joint. It has a large hole in the center of its base to receive a threaded diameter copper rod which attaches it to the monument and which used to form part of the lightning protection system. In 2015 the National Geodetic Survey reported the coordinates of the 1 mm dimple atop the aluminum apex as (WGS 84). The four faces of the external aluminum apex all bear inscriptions in cursive writing (Snell Round hand), which are incised into the aluminum. The apex was inscribed on site after it was delivered. Most inscriptions are the original 1884 inscriptions, except for the top three lines on the east face, which were added in 1934. From 1885 to 1934 a wide gold-plated copper band that held eight short lightning rods, two per side but not at its corners, covered most of the inscriptions, which were damaged and illegible, as documented in a 1934 photograph. A new band including eight long lightning rods, one at each corner and one at the middle of each side, was added in 1934 and removed and discarded in 2013. The inscriptions that it covered were still damaged and illegible in 2013. Only the top four and bottom two lines of the north face, the first and last lines of the west face, the top four lines of the south face, and the top three lines of the east face are still legible. Even though the inscriptions are no longer covered, no attempt was made to repair them when the apex was accessible in 2013. 
The inscriptions occupy the lower portions of triangles, thus the inscribed upper lines are necessarily shorter than some lower lines. Although most printed sources, Harvey (1903), Olszewski (1971), Torres (1984), and the Historic Structure Report (2004), refer to the original 1884 inscriptions, the National Geodetic Survey (2015) refers to both the 1884 and 1934 inscriptions. All sources print them according to their own editorial rules, resulting in excessive capitalization (Harvey, Olszewski, and NGS) and inappropriate line breaks. No printed source uses cursive writing, although pictures of the apex clearly show that it was used for both the 1884 and 1934 inscriptions. A replica displayed on the 490-foot level uses totally different line breaks from those on the external apex—it also omits the 1934 inscriptions. In October 2007, it was discovered that the display of this replica was positioned so that the Laus Deo (Latin for "praise be to God") inscription could not be seen and Laus Deo was omitted from the placard describing the apex. The National Park Service rectified the omission by creating a new display. Lightning protection The pyramidion, the pointed top of the monument, was originally designed with an tall inscribed aluminum apex which served as a single lightning rod, installed . Six months later on lightning damaged the marble blocks of the pyramidion, so a net of gold-plated copper rods supporting 200 gold-plated, platinum-tipped copper points spaced every was installed over the entire pyramidion. The original net included a gold-plated copper band attached to the aluminum apex by four large set screws which supported eight closely spaced vertical points that did not protrude above the apex. In 1934 these eight short points were lengthened to extend them above the apex by . In 2013 this original system was removed and discarded. It was replaced by only two thick solid aluminum lightning rods protruding above the tip of the apex by about attached to the east and west sides of the marble capstone just below the apex. Until it was removed, the original lightning protection system was connected to the tops of the four iron columns supporting the elevator with large copper rods. Even though the aluminum apex is still connected to the columns with large copper rods, it is no longer part of the lightning protection system because it is now disconnected from the present lightning rods which shield it. The two lightning rods present since 2013 are connected to the iron columns with two large braided aluminum cables leading down the surface of the pyramidion near its southeast and northwest corners. They enter the pyramidion at its base, where they are tied together (electrically shorted) via large braided aluminum cables encircling the pyramidion above its base. The bottom of the iron columns are connected to ground water below the monument via four large copper rods that pass through a square well half filled with sand in the center of the foundation. The effectiveness of the lightning protection system has not been affected by a significant draw down of the water table since 1884 because the soil's water content remains roughly 20% both above and below the height of the water table. Walls During the first phase of construction (1848–1854), the walls were built with bluestone gneiss rubble, ranging from very large irregular stones having a cross section of about down to spalls (broken pieces of stone) all embedded in a large amount of mortar. 
The outer surface is marble stones thick in high courses or rows horizontally encircling the monument. Although each course contains both stretchers (stones parallel to the wall) and headers (stones projecting into the wall), about two to three times as many stretchers as headers were used. Their joints were so thin that some stones pressed on bare stone below them, breaking off many pieces in the years since construction. The batter or slope of the outer surface is 0.247 inches per foot (2.06 cm/m, 1°11'). The inner surface has disorderly rows of smaller roughly dressed bluestone gneiss. The base of the first phase walls has an outer dimension of square and a thickness of . The interior well is square and has square corners. The weight of the first phase walls up to is . During the second phase (1879–1884), the walls were constructed of smoothly dressed (ashlar) large marble and granite blocks (rectangular cuboids) laid down in an orderly manner (Flemish bond) with thick joints. Two-foot high marble surface stones, using an equal number of stretchers and headers, were backed by granite blocks from the 152-foot level (the first course above the rubble) to the 218-foot level, where marble headers become increasingly visible on the internal surface of the walls up to the 450-foot level, above which only marble stones are used. Between the 150- and 160-foot levels the inner walls rapidly slope outward, increasing the shaft well from 25 feet 1 inch square to square with a corresponding decrease in the thickness of the walls and their weight. The second phase walls at the 160-foot level were thick, which, combined with the larger shaft well, yields an outer dimension of square at that level. The top of the second phase walls is square and thick. The second phase interior walls have rounded corners ( radii). The weight of the second phase walls (from 150 feet to 500 feet) is . The walls of the entire shaft (combined first and second phases) are high. The first phase of the walls was constructed under the direction of William Dougherty. Its white Cockeysville marble exterior came from the Texas quarry now adjacent to and east of north I-83 near the Warren Road exit in Cockeysville, Maryland. The quarry was named for the Texas Station (no longer extant) and 19th-century town on the Northern Central Railway. During the first phase it was operated by Thomas Symington, but is now operated by Martin Marietta Materials and no longer produces building stone. The second phase of construction was under the direction of Lt Col/Col Thomas Lincoln Casey of the United States Army Corps of Engineers, who removed two defective courses added by the Know-Nothings and the last 152-foot course added by Dougherty before Casey began his construction. The next three courses of white marble came from Sheffield, Massachusetts, while all courses above them came from the Beaver Dam quarry just west of the 19th-century town of Cockeysville. The latter quarry is located on Beaver Dam Road near its intersection with McCormick Road. During the second phase the quarry was operated by Hugh Sisson, but is now flooded, is called Beaverdam Pond, and is the home of the Beaver Dam Swimming Club. Both 19th-century towns are now within the city limits of Cockeysville. Pyramidion The marble capstone of the pyramidion is a truncated pyramid with a cubical keystone projecting from its base and a deep groove surrounding the keystone. The aluminum apex replaces its truncated top. 
The inside upper edges of the topmost slabs on the four faces of the pyramidion rest on the keystone and in the groove. It has a large vertical hole through which a threaded copper rod passes and screws into the base of the apex, which used to form part of its lightning protection system. The keystone and groove occupy so much of its base that only a small horizontal area near its outer edge remains. The weight of the capstone is transferred to both the inner and outer portions of the shiplap upper edges of the slabs. It weighs , is high from its base to its top, and is square at its base. The marble pyramidion has an extremely complex construction to save weight yet remain strong. Its surface slabs or panels are usually only thick (with small thick and thin portions) and generally do not support the weight of slabs above them, instead transferring their own weight via wide internal marble ribs to the shaft's walls. The slabs are generally wide and high with a vertical overlap (shiplap) to prevent water from entering the horizontal joints. Twelve such courses, the internal ribs, the marble capstone, and the aluminum apex comprise the pyramidion. Its height is . Its weight is . The slope of the walls of the pyramidion is 17°24' from the vertical. There are twelve ribs, three per wall, which spring from the level, all being integrated into the walls up to the level. All are free standing above 500 feet, relying on mortise and tenon joints to attach neighboring stones. The eight corner ribs terminate six courses above the shaft, each corner rib resting on its neighboring corner rib via a miter joint, forming four corner arches. Each such arch supports a pair of square corner stones, one above the other totaling one course in height. Each corner rib is linked to the nearest center rib at the sixth course via a marble tie beam. The four center ribs terminate eight courses above the shaft at a marble cruciform (cross shaped) keystone, forming two main arches that cross each other. Two stones, each one course high, are mounted on each of the four ribs, supporting two additional courses above the cruciform keystone, leaving two courses to support the capstone's weight by themselves. The observation floor (nominally the 500-foot level) is above the entry lobby floor or lowest landing level. It is above the marble base of the pyramidion and the top of the shaft walls. Four pairs of wide observation windows are provided, spaced apart, inner stone edge to edge, all just above the lowest course of slabs (504-foot level). Six are high while two on the east face are high for easier egress. All were originally provided with thin marble shutters in a bronze frame each of which could be opened inward, one left and the other right per wall. After two people committed suicide by jumping through the open windows in the 1920s, hinged horizontal iron bars were added to them in 1929. A ninth opening in a slab on the south face just below the capstone is provided for access to the outside of the pyramidion. It is covered by a stone slab which is internally removable. In 1931, four red aircraft warning lights were installed, one per face in one of its observation windows. Pilots complained that they could not be easily seen, so the monument was floodlit on all sides as well. In 1958, eight diameter holes for new red aircraft warning lights were bored, one above each window near the top edge of the fourth course of slabs (516-foot level) in the pyramidion. 
In 1958 the observation windows were glazed with shatterproof glass. In 1974–1976, they were glazed with bulletproof glass and the shutters removed. New bulletproof glass was installed during 1997–2000. The pyramidion has two inscriptions, neither of which is regarded as a memorial stone. One is the year "1884" on the underside of the cruciform keystone; the other is at the same level as that keystone on the north face of the west center rib containing the names and titles of the four highest ranked builders. Its inscription () is almost identical to the inscription on the south face of the aluminum apex except for "U.S.", which is part of the phrase "14th U.S. Infantry" in the inscription inside the pyramidion, but the apex has only "14th Infantry". Additionally, the internal inscription does not use cursive writing and all letters in all names are capitals. Foundation The first phase began with the excavation of about of topsoil down to a level of loam, consisting of equal parts of sand and clay, hard enough to require picks to break it up. On this "bed of the foundation" the cornerstone was laid at the northeast corner of the proposed foundation. The rest of the foundation was then constructed of bluestone gneiss rubble and spalls, with every crevice filled with lime mortar. The dimensions of this old foundation were high, square at the base, and square at the top, laid down in eight steps, similar to a truncated step pyramid. At the center of the foundation a brick-lined square well was dug to a depth of below the bed of the foundation to keep it dry and to supply water during construction. During the second phase, after determining that the proposed weight of the monument was too great for the old foundation to safely bear, the thickness of the walls atop the unfinished stump was reduced and the foundation was strengthened by adding a large unreinforced concrete slab below the perimeter of the old foundation to increase the monument's load bearing area two and one half times. The slab was thick, with an outer perimeter square, an inner perimeter square, with undisturbed loam inside the inner perimeter except for the water well. The area at the base of the second phase foundation is . The strengthened foundation (old foundation and concrete slab) has a total depth of below the bottom of the lowest course of marble blocks (now below ground), and below the entry lobby floor. Casey reported that nowhere did the load exceed and did not exceed near the outer perimeter. To properly distribute the load from the shaft to slab, about half by volume of the outer periphery of the old rubble foundation below its top step was removed. A continuous sloping unreinforced concrete buttress encircles what remains. The buttress is square at its base, square at its top, and high. The perimeter of the original top step of the old rubble foundation rests on the larger top of the concrete buttress. Its slope (lower external angle from the vertical) is 49°. This buttress rests in a depression (triangular cross-section) on the top surface of the concrete slab. The slab was constructed by digging pairs of wide drifts on opposite sides of the monument's center line to keep the monument properly balanced. The drifts were filled with unreinforced concrete with depressions or dowel stones on their sides to interlock the sections. 
An earthen terrace wide with its top at the base of the walls and steep sides was constructed in 1880–81 over the reinforced foundation while the rest of the monument was being constructed. During 1887–88, a knoll was constructed around the terrace tapering out roughly onto the surrounding terrain. This earthen terrace and knoll serves as an additional buttress for the foundation. The weight of the foundation is , including earth and gneiss rubble above the concrete foundation that is within its outer perimeter. Stairs and elevator The monument is filled with ironwork, consisting of its stairs, elevator columns and associated tie beams, none of which supports the weight of the stonework. It was redesigned in 1958 to reduce congestion and improve the flow of visitors. Originally, visitors entered and exited the west side of the elevator on the observation floor, causing congestion. So the large landing at the 490-foot level was expanded to a full floor and the original spiral stair in the northeast corner between the levels was replaced by two spiral stairs in the northeast and southeast corners. Now visitors exit the elevator on the observation floor, then walk down either spiral stair before reboarding the elevator for their trip back down. The main stairs spiral up the interior walls from the entry lobby floor to the elevator reboarding floor at the level. The elevator occupies the center of the shaft well from the entry lobby to the observation floor, with an elevator machine room (installed 1925–26) whose floor is above the observation floor and an elevator pit (excavated 1879) whose floor is below the entry lobby floor. The stairs and elevator are supported by four wrought iron columns each. The four supporting the stairs extend from the entry lobby floor to the observation floor and were set at the corners of a square. The four supporting the elevator extend from the floor of the elevator pit to above the observation floor and were set at the corners of a square. The weight of the ironwork is . Cast iron, wrought iron, and steel were all used. The two small spiral stairs installed in 1958 are aluminum. Most landings occupy the entire east and west interior walls every from and including the east landing at the level up to the west landing at the level, east then west alternately. Three stairs with small landings rise from the entry lobby floor to the level successively along the north, west and south interior walls. Landings from the level up to the level are by , while landings from the level to the level are by . All stairs are on the north and south walls except for the aforementioned west stair between the levels, and the two spiral stairs. About one fourth of visitors chose to ascend the monument using the stairs when they were available. They were closed to up traffic in 1971, and then closed to all traffic except by special arrangement in 1976. The stairs had 898 steps until 1958, consisting of 18 risers in each of the 49 main stairs plus 16 risers in the spiral stair. Since 1958 the stairs have had 897 risers if only one spiral stair is counted because both spiral stairs now have 15 risers each. These figures do not include two additional steps in the entry passage that were covered up in 1975 by a ramp and its inward horizontal extension to meet the higher (since 1886) entry lobby floor. One step was away from the outer walls and the other was at the end of the passage, away from the outer walls. 
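The step totals quoted above can be checked with a quick bit of arithmetic, using only the riser counts already stated in this section (no additional measurements are assumed):

\[
49 \times 18 + 16 = 898 \quad \text{(total steps until 1958)}, \qquad 49 \times 18 + 15 = 897 \quad \text{(since 1958, counting one spiral stair)}.
\]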
As initially constructed, the interior was relatively open with two-rail handrails, but a couple of suicides and an accidental fall prompted the addition of tall wire screening (with a large diamond mesh) on the inside edge of the stairs and landings in 1929. The original steam-powered elevator, which took 10 to 12 minutes to ascend to the observation floor, was replaced in 1901 by an electric elevator powered by an on-site dynamo, which took five minutes to ascend. The monument was connected to the electrical grid in 1923, allowing the installation of a modern electric elevator in 1925–26, which took 70 seconds. The latter was replaced in 1958 and again in 1998 by 70-second elevators. From 1997 to 2000, the wire screening at three platforms was replaced by large glass panels to allow visitors on the elevator to view three clusters of memorial stones that were synchronously lit as the elevator automatically slowed while passing them during its descent. Flags Fifty American flags (not state flags), one for each state, are now flown 24 hours a day around a large circle centered on the monument. Forty-eight American flags (one for each state then in existence) were flown on wooden flag poles on Washington's birthday beginning in 1920, and later also on Independence Day, Memorial Day, and other special occasions, until early 1958. Both the flags and flag poles were removed and stored between these days. In 1958 fifty tall aluminum flag poles (anticipating Alaska and Hawaii) were installed, evenly spaced around a diameter circle. During 2004–05, the diameter of the circle was reduced to . Beginning on Washington's birthday in 1958, 48 American flags were flown on a daily basis, increasing to 49 flags on , and then to 50 flags since . When 48 or 49 flags were flown, only 48 or 49 of the available 50 flag poles were placed into base receptacles. All flags were removed and stored overnight. Since , 50 American flags have flown 24 hours a day. Approximate vesica piscis During the 2004 grounds renovation, two partially overlapping large circles were added to the landscaping with the obelisk in their intersection. The lens shape formed by such an intersection is called a vesica piscis when two same-radius circles overlap, with the center of each lying on the perimeter of the other, which is not the case on the monument grounds. Miscellaneous details The total cost of the monument from 1848 to 1888 was $1,409,500. The weight of the above ground portion of the monument is , whereas its total weight, including the foundation below ground and any earth above it that is within its outer perimeter, is . The total number of blocks in the monument, including all marble, granite and gneiss blocks, whether externally or internally visible or hidden from view within the walls or old foundation, is over 36,000. The number of marble blocks externally visible is about 10,000. The monument stands tall according to the National Geodetic Survey (measured 2013–14) or tall according to the National Park Service (measured 1884). In 1975, a ramp covered two steps at the entrance to the monument, so the ground next to the ramp was raised to match its height, reducing the remaining height to the monument's apex. It is both the world's tallest predominantly stone structure and the world's tallest obelisk. It is the tallest monumental column in the world if all are measured above their pedestrian entrances, but two are taller when measured above ground, though they are neither all stone nor true obelisks. 
The tallest masonry structure in the world is the brick Anaconda Smelter Stack in Montana at tall. But this includes a non-masonry concrete foundation, leaving the stack's brick chimney at tall, only about taller than the monument's 2015 height. If the monument's aluminum apex is also discounted, then the stack's masonry portion is taller than the monument's masonry portion. Security In 2001, a temporary visitor security screening center was added to the east entrance of the Washington Monument in the wake of the September 11 attacks. The one-story facility was designed to reduce the ability of a terrorist attack on the interior of the monument, or an attempt to seize and hold it. Visitors obtained their timed-entry tickets from the Monument Lodge east of the memorial and passed through metal detectors and bomb-sniffing sensors prior to entering the monument. After exiting the monument, they passed through a turnstile to prevent them from re-entering. This facility, a one-story cube of wood around a metal frame, was intended to be temporary until a new screening facility could be designed. On March 6, 2014, the National Capital Planning Commission approved a new visitor screening facility to replace the temporary one. The facility will be two stories high and contain space for screening 20 to 25 visitors at a time. The exterior walls (which will be slightly frosted to prevent viewing of the security screening process) will consist of an outer sheet of bulletproof glass or polycarbonate, a metal mesh insert, and another sheet of bulletproof glass. The inner sheet will consist of two sheets (slightly separated) of laminated glass. A airspace will exist between the inner and outer glass walls to help insulate the facility. Two (possibly three) geothermal heat pumps will be built on the north side of the monument to provide heating and cooling of the facility. The new facility will also provide an office for National Park Service and United States Park Police staff. The structure is designed so that it may be removed without damaging the monument. The United States Commission of Fine Arts approved the aesthetic design of the screening facility in June 2013. A recessed trench wall known as a ha-ha has been built to minimize the visual impact of a security barrier surrounding the monument. After the September 11 attacks and another unrelated terror threat at the monument, authorities had put up a circle of temporary Jersey barriers to prevent large motor vehicles from approaching. The unsightly barrier was replaced by a less-obtrusive low granite stone wall that doubles as a seating bench and also incorporates lighting. Designed by the famed landscape architect Laurie Olin, the installation received the 2005 Park/Landscape Award of Merit from the American Society of Landscape Architects. See also Washington Monument syndrome Architecture of Washington, D.C. 
List of national memorials of the United States List of public art in Washington, D.C., Ward 2 List of tallest freestanding structures List of tallest towers List of tallest structures built before the 20th century Adams Memorial (proposed) Bunker Hill Monument Benjamin Franklin National Memorial Jefferson Memorial James Madison Memorial Building George Mason Memorial Memorial to the 56 Signers of the Declaration of Independence Presidential memorials in the United States Tuckahoe marble Yule Marble Notes References External links Official NPS website: Washington Monument Harper's Weekly cartoon, February 21, 1885, the day of formal dedication Today in History: December 6 Prehistory on the Mall at the Washington Monument 1888 establishments in Washington, D.C. Buildings and structures completed in 1888 Former world's tallest buildings Historic American Buildings Survey in Washington, D.C. Historic American Engineering Record in Washington, D.C. Historic Civil Engineering Landmarks IUCN protected area errors Monuments and memorials on the National Register of Historic Places in Washington, D.C. Monuments and memorials to George Washington in the United States National Mall and Memorial Parks National memorials of the United States Obelisks in the United States Robert Mills buildings Terminating vistas in the United States Towers in Washington, D.C.
Washington Monument
[ "Engineering" ]
14,042
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
167,611
https://en.wikipedia.org/wiki/Mailing%20list
A mailing list is a collection of names and addresses used by an individual or an organization to send material to multiple recipients. The term is often extended to include the people subscribed to such a list, so the group of subscribers is referred to as "the mailing list", or simply "the list". Types At least two types of mailing lists can be defined: an announcement list is closer to the original sense, where a "mailing list" of people was used as a recipient for newsletters, periodicals or advertising. Traditionally this was done through the postal system, but with the rise of email, the electronic mailing list became popular. This type of list is used primarily as a one-way conduit of information and may only be "posted to" by selected people. This may also be referred to by the term newsletter. Newsletter and promotional emailing lists are employed in various sectors as parts of direct marketing campaigns. a "discussion list" allows subscribing members (sometimes even people outside the list) to post their own items which are broadcast to all of the other mailing list members. Recipients may answer in a similar fashion, thus, actual discussion and information exchanges can occur. Mailing lists of this type are usually topic-oriented (for example, politics, scientific discussion, health problems, joke contests), and the topic may range from extremely narrow to "whatever you think could interest us." In this they are similar to Usenet newsgroups, another form of discussion group that may have an aversion to off-topic messages. Historically mailing lists preceded email/web forums; both can provide analogous functionalities. When used in that fashion, mailing lists are sometimes known as discussion lists or discussion forums. Discussion lists provide some advantages over typical web forums, so they are still used in various projects, notably Git and Debian. The advantages over web forums include the ability to work offline, the ability to sign/encrypt posts via GPG, and the ability to use an e-mail client's features, such as filters. Tracking Mailers want to know when items are delivered, partly to know how to staff call centers. Salting (or seeding) their lists enables them to compare delivery times, especially when time-of-year affects arrival delays. It may also provide information about poor handling of samples. Having seeded entries in an eMail list simplifies tracking who may have "borrowed" the list without permission. More definitions When similar or identical material is sent out to all subscribers on a mailing list, it is often referred to as a mailshot or a blast. A list for such use can also be referred to as a distribution list. On legitimate (non-spam) mailing lists, individuals can subscribe or unsubscribe themselves. Mailing lists are often rented or sold. If rented, the renter agrees to use the mailing list only at contractually agreed-upon times. The mailing list owner typically enforces this by "salting" (known as "seeding" in direct mail) the mailing list with fake addresses and creating new salts for each time the list is rented. Unscrupulous renters may attempt to bypass salts by renting several lists and merging them to find common, valid addresses. Mailing list brokers exist to help organizations rent their lists. For some list owners, such as specialized niche publications or charitable groups, their lists may be some of their most valuable assets, and mailing list brokers help them maximize the value of their lists. 
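The salting technique described above lends itself to a brief illustration. The following Python sketch is hypothetical (the domain, renter identifier, and function names are invented for this example and do not refer to any real mailing-list product), but it shows the basic idea: each rental receives its own unique seed addresses, so mail that later arrives at a seed address points back to the rental that leaked or reused the list.

```python
# Hypothetical sketch of list salting: every rental of the list gets its own
# traceable seed addresses, recorded so that misuse can be attributed later.
import secrets

OWNER_DOMAIN = "example.org"  # placeholder domain controlled by the list owner

def salt_list(addresses, rental_id, n_seeds=3):
    """Return the rented copy of the list plus the seed addresses planted in it."""
    seeds = [f"seed-{rental_id}-{secrets.token_hex(4)}@{OWNER_DOMAIN}"
             for _ in range(n_seeds)]
    return list(addresses) + seeds, seeds

def trace_leak(received_at, seed_registry):
    """Look up which rental a seed address was planted in (None if unknown)."""
    return seed_registry.get(received_at)

# Example: salt a list for one renter, then trace a message received at a seed.
registry = {}
rented_copy, seeds = salt_list(["alice@example.com", "bob@example.com"], "renter-2024-03")
registry.update({s: "renter-2024-03" for s in seeds})
print(trace_leak(seeds[0], registry))  # prints: renter-2024-03
```

In practice the registry would live in a database and the seed mailboxes would be monitored automatically, but the tracing logic amounts to this simple lookup.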
Transmission may be paper-based or electronic. Each has its strengths, although a 2022 article claimed that compared to email, "direct mail still brings in the lion's share of revenue for most organizations." A mailing list is simply a list of e-mail addresses of people who are interested in the same subject, are members of the same work group, or who are taking classes together. When a member of the list sends a note to the group's special address, the e-mail is broadcast to all of the members of the list. The key advantage of a mailing list over media such as web-based discussion is that new messages are delivered to the participants' mailboxes as soon as they become available. A mailing list can sometimes also include information such as phone numbers, postal addresses, and fax numbers. Electronic mailing list An electronic mailing list or email list is a special use of email that allows for widespread distribution of information to many Internet users. It is similar to a traditional mailing list – a list of names and addresses – as might be kept by an organization for sending publications to its members or customers, but typically refers to four things: a list of email addresses, the people ("subscribers") receiving mail at those addresses, thus defining a community gathered around a topic of interest, the publications (email messages) sent to those addresses, and a reflector, which is a single email address that, when designated as the recipient of a message, will send a copy of that message to all of the subscribers. Mechanism Electronic mailing lists are usually fully or partially automated through the use of special mailing list software and a reflector address set up on a server capable of receiving email. Incoming messages sent to the reflector address are processed by the software, and, depending on their content, are acted upon internally (in the case of messages containing commands directed at the software itself) or are distributed to all email addresses subscribed to the mailing list. A web-based interface is often available to allow people to subscribe, unsubscribe, and change their preferences. However, mailing list servers existed long before the World Wide Web, so most also accept commands over email to a special email address. This allows subscribers (or those who want to be subscribers) to perform such tasks as subscribing and unsubscribing, temporarily halting the sending of messages to them, or changing available preferences – all via email. The common format for sending these commands is to send an email that contains simply the command followed by the name of the electronic mailing list the command pertains to. Examples: subscribe anylist or subscribe anylist John Doe. Electronic mailing list servers may be set to forward messages to subscribers of a particular mailing list either individually as they are received by the list server, or in digest form in which all messages received on a particular day by the list server are combined into one email that is sent once per day to subscribers. Some mailing lists allow individual subscribers to decide how they prefer to receive messages from the list server (individual or digest). History The first mailing lists were scholarly mailing lists. The genealogy of mailing lists as a communication tool between scientists can be traced back to the times of the fledgling Arpanet. The aim of the computer scientists involved in this project was to develop protocols for the communication between computers. 
In so doing, they have also built the first tools of human computer-mediated communication. Broadly speaking, the scholarly mailing lists can even be seen as the modern version of the salons of the Enlightenment ages, designed by scholars for scholars. The "threaded conversation" structure (where the header of a first post defines the topic of a series of answers thus constituting a thread) is a typical and ubiquitous structure of discourse within lists and fora of the Internet. It is pivotal to the structure and topicality of debates within mailing lists as an arena, or public sphere in Habermas wording. The flame wars (as the liveliest episodes) give valuable and unique information to historians to comprehend what is at stake in the communities gathered around lists. Anthropologists, sociologists and historians have used mailing lists as fieldwork. Topics include TV series fandom, online culture, or scientific practices among many other academic studies. From the historian's point of view, the issue of the preservation of mailing lists heritage (and Internet fora heritage in general) is essential. Not only the text of the corpus of messages has yet to be perennially archived, but also their related metadata, timestamps, headers that define topics, etc. Mailing lists archives are a unique opportunity for historians to explore interactions, debates, even tensions that reveal a lot about communities. List security On both discussion lists and newsletter lists precautions are taken to avoid spamming. Discussion lists often require every message to be approved by a moderator before being sent to the rest of the subscribers (moderated lists), although higher-traffic lists typically only moderate messages from new subscribers. Companies sending out promotional newsletters have the option of working with whitelist mail distributors, which agree to standards and high fines from ISPs should any of the opt-in subscribers complain. In exchange for their compliance and agreement to prohibitive fines, the emails sent by whitelisted companies are not blocked by spam filters, which often can reroute these legitimate, non-spam emails. Subscription Some mailing lists are open to anyone who wants to join them, while others require an approval from the list owner before one may join. Joining a mailing list is called "subscribing" and leaving a list is called "unsubscribing". Archives A mailing list archive is a collection of past messages from one or more electronic mailing lists. Such archives often include searching and indexing functionality. Many archives are directly associated with the mailing list, but some organizations, such as Gmane, collect archives from multiple mailing lists hosted at different organizations; thus, one message sent to one popular mailing list may end up in many different archives. Gmane had over 9,000 mailing list archives as of 16 January 2007. Some popular free software programs for collecting mailing list archives are Hypermail, MHonArc, FUDforum, and public-inbox (which is notably used for archiving the Linux kernel mailing list along with many other software development mailing lists and has a web-service API used by search-and-retrieval tools intended for use by the Linux kernel development community). Listwashing Listwashing is the process through which individual entries in mailing lists are to be removed. These mailing lists typically contain email addresses or phone numbers of those that have not voluntarily subscribed. Only complainers are removed via this process. 
Because most of those that have not voluntarily subscribed stay on the list, this helps spammers to maintain a low-complaint list of spammable email addresses. Internet service providers who forward complaints to the spamming party are often seen as assisting the spammer in list washing, or, in short, helping spammers. Most legitimate list holders provide their customers with listwashing and data deduplication service regularly for no charge or a small fee. See also CAN-SPAM Act of 2003 Computational Chemistry List Dgroups eGroups Direct digital marketing Direct marketing Distribution list Bulk email software Google Groups List of mailing list software Linux kernel mailing list LISTSERV MSN Groups Netiquette Newsletter Online consultation Robinson list Squeeze page Usenet Yahoo! Groups References Direct marketing Email Internet culture Postal systems Social information processing Spamming Virtual communities
Mailing list
[ "Technology" ]
2,311
[ "Transport systems", "Postal systems" ]
167,632
https://en.wikipedia.org/wiki/Chaperone%20%28protein%29
In molecular biology, molecular chaperones are proteins that assist the conformational folding or unfolding of large proteins or macromolecular protein complexes. There are a number of classes of molecular chaperones, all of which function to assist large proteins in proper protein folding during or after synthesis, and after partial denaturation. Chaperones are also involved in the translocation of proteins for proteolysis. The first molecular chaperones discovered were a type of assembly chaperones which assist in the assembly of nucleosomes from folded histones and DNA. One major function of molecular chaperones is to prevent the aggregation of misfolded proteins, thus many chaperone proteins are classified as heat shock proteins, as the tendency for protein aggregation is increased by heat stress. The majority of molecular chaperones do not convey any steric information for protein folding, and instead assist in protein folding by binding to and stabilizing folding intermediates until the polypeptide chain is fully translated. The specific mode of function of chaperones differs based on their target proteins and location. Various approaches have been applied to study the structure, dynamics and functioning of chaperones. Bulk biochemical measurements have informed us on the protein folding efficiency, and prevention of aggregation when chaperones are present during protein folding. Recent advances in single-molecule analysis have brought insights into structural heterogeneity of chaperones, folding intermediates and affinity of chaperones for unstructured and structured protein chains. Functions of molecular chaperones Many chaperones are heat shock proteins, that is, proteins expressed in response to elevated temperatures or other cellular stresses. Heat shock protein chaperones are classified based on their observed molecular weights into Hsp60, Hsp70, Hsp90, Hsp104, and small Hsps. The Hsp60 family of protein chaperones are termed chaperonins, and are characterized by a stacked double-ring structure and are found in prokaryotes, in the cytosol of eukaryotes, and in mitochondria. Some chaperone systems work as foldases: they support the folding of proteins in an ATP-dependent manner (for example, the GroEL/GroES or the DnaK/DnaJ/GrpE system). Although most newly synthesized proteins can fold in absence of chaperones, a minority strictly requires them for the same. Other chaperones work as holdases: they bind folding intermediates to prevent their aggregation, for example DnaJ or Hsp33. Chaperones can also work as disaggregases, which interact with aberrant protein assemblies and revert them to monomers. Some chaperones can assist in protein degradation, leading proteins to protease systems, such as the ubiquitin-proteasome system in eukaryotes. Chaperone proteins participate in the folding of over half of all mammalian proteins. Macromolecular crowding may be important in chaperone function. The crowded environment of the cytosol can accelerate the folding process, since a compact folded protein will occupy less volume than an unfolded protein chain. However, crowding can reduce the yield of correctly folded protein by increasing protein aggregation. Crowding may also increase the effectiveness of the chaperone proteins such as GroEL, which could counteract this reduction in folding efficiency. Some highly specific 'steric chaperones' convey unique structural information onto proteins, which cannot be folded spontaneously. 
Such proteins violate Anfinsen's dogma, requiring protein dynamics to fold correctly. Other types of chaperones are involved in transport across membranes, for example membranes of the mitochondria and endoplasmic reticulum (ER) in eukaryotes. A bacterial translocation-specific chaperone SecB maintains newly synthesized precursor polypeptide chains in a translocation-competent (generally unfolded) state and guides them to the translocon. New functions for chaperones continue to be discovered, such as bacterial adhesin activity, induction of aggregation towards non-amyloid aggregates, suppression of toxic protein oligomers via their clustering, and in responding to diseases linked to protein aggregation and cancer maintenance. Human chaperone proteins In human cell lines, chaperone proteins were found to compose ~10% of the gross proteome mass, and are ubiquitously and highly expressed across human tissues. Chaperones are found extensively in the endoplasmic reticulum (ER), since protein synthesis often occurs in this area. Endoplasmic reticulum In the endoplasmic reticulum (ER) there are general, lectin- and non-classical molecular chaperones that moderate protein folding. General chaperones: GRP78/BiP, GRP94, GRP170. Lectin chaperones: calnexin and calreticulin Non-classical molecular chaperones: HSP47 and ERp29 Folding chaperones: Protein disulfide isomerase (PDI), Peptidyl prolyl cis-trans isomerase (PPI), Prolyl isomerase ERp57 Nomenclature and examples of chaperone families There are many different families of chaperones; each family acts to aid protein folding in a different way. In bacteria like E. coli, many of these proteins are highly expressed under conditions of high stress, for example, when the bacterium is placed in high temperatures, thus heat shock protein chaperones are the most extensive. A variety of nomenclatures are in use for chaperones. As heat shock proteins, the names are classically formed by "Hsp" followed by the approximate molecular mass in kilodaltons; such names are commonly used for eukaryotes such as yeast. The bacterial names have more varied forms, and refer directly to their apparent function at discovery. For example, "GroEL" originally stands for "phage growth defect, overcome by mutation in phage gene E, large subunit". Hsp10 and Hsp60 Hsp10/60 (GroEL/GroES complex in E. coli) is the best characterized large (~ 1 MDa) chaperone complex. GroEL (Hsp60) is a double-ring 14mer with a hydrophobic patch at its opening; it is so large it can accommodate native folding of 54-kDa GFP in its lumen. GroES (Hsp10) is a single-ring heptamer that binds to GroEL in the presence of ATP or ADP. GroEL/GroES may not be able to undo previous aggregation, but it does compete in the pathway of misfolding and aggregation. Also acts in the mitochondrial matrix as a molecular chaperone. Hsp70 and Hsp40 Hsp70 (DnaK in E. coli) is perhaps the best characterized small (~ 70 kDa) chaperone. The Hsp70 proteins are aided by Hsp40 proteins (DnaJ in E. coli), which increase the ATP consumption rate and activity of the Hsp70s. The two proteins are named "Dna" in bacteria because they were initially identified as being required for E. coli DNA replication. It has been noted that increased expression of Hsp70 proteins in the cell results in a decreased tendency toward apoptosis. 
Although a precise mechanistic understanding has yet to be determined, it is known that Hsp70s have a high-affinity bound state to unfolded proteins when bound to ADP, and a low-affinity state when bound to ATP. It is thought that many Hsp70s crowd around an unfolded substrate, stabilizing it and preventing aggregation until the unfolded molecule folds properly, at which time the Hsp70s lose affinity for the molecule and diffuse away. Hsp70 also acts as a mitochondrial and chloroplastic molecular chaperone in eukaryotes. Hsp90 Hsp90 (HtpG in E. coli) may be the least understood chaperone. Its molecular weight is about 90 kDa, and it is necessary for viability in eukaryotes (possibly for prokaryotes as well). Heat shock protein 90 (Hsp90) is a molecular chaperone essential for activating many signaling proteins in the eukaryotic cell. Each Hsp90 has an ATP-binding domain, a middle domain, and a dimerization domain. Originally thought to clamp onto their substrate protein (also known as a client protein) upon binding ATP, the recently published structures by Vaughan et al. and Ali et al. indicate that client proteins may bind externally to both the N-terminal and middle domains of Hsp90. Hsp90 may also require co-chaperones-like immunophilins, Sti1, p50 (Cdc37), and Aha1, and also cooperates with the Hsp70 chaperone system. Hsp100 Hsp100 (Clp family in E. coli) proteins have been studied in vivo and in vitro for their ability to target and unfold tagged and misfolded proteins. Proteins in the Hsp100/Clp family form large hexameric structures with unfoldase activity in the presence of ATP. These proteins are thought to function as chaperones by processively threading client proteins through a small 20 Å (2 nm) pore, thereby giving each client protein a second chance to fold. Some of these Hsp100 chaperones, like ClpA and ClpX, associate with the double-ringed tetradecameric serine protease ClpP; instead of catalyzing the refolding of client proteins, these complexes are responsible for the targeted destruction of tagged and misfolded proteins. Hsp104, the Hsp100 of Saccharomyces cerevisiae, is essential for the propagation of many yeast prions. Deletion of the HSP104 gene results in cells that are unable to propagate certain prions. Bacteriophage The genes of bacteriophage (phage) T4 that encode proteins with a role in determining phage T4 structure were identified using conditional lethal mutants. Most of these proteins proved to be either major or minor structural components of the completed phage particle. However among the gene products (gps) necessary for phage assembly, Snustad identified a group of gps that act catalytically rather than being incorporated themselves into the phage structure. These gps were gp26, gp31, gp38, gp51, gp28, and gp4 [gene 4 is synonymous with genes 50 and 65, and thus the gp can be designated gp4(50)(65)]. The first four of these six gene products have since been recognized as being chaperone proteins. Additionally, gp40, gp57A, gp63 and gpwac have also now been identified as chaperones. Phage T4 morphogenesis is divided into three independent pathways: the head, the tail and the long tail fiber pathways as detailed by Yap and Rossman. With regard to head morphogenesis, chaperone gp31 interacts with the bacterial host chaperone GroEL to promote proper folding of the major head capsid protein gp23. Chaperone gp40 participates in the assembly of gp20, thus aiding in the formation of the connector complex that initiates head procapsid assembly. 
Gp4(50)(65), although not specifically listed as a chaperone, acts catalytically as a nuclease that appears to be essential for morphogenesis by cleaving packaged DNA to enable the joining of heads to tails. During overall tail assembly, chaperone proteins gp26 and gp51 are necessary for baseplate hub assembly. Gp57A is required for correct folding of gp12, a structural component of the baseplate short tail fibers. Synthesis of the long tail fibers depends on the chaperone protein gp57A that is needed for the trimerization of gp34 and gp37, the major structural proteins of the tail fibers. The chaperone protein gp38 is also required for the proper folding of gp37. Chaperone proteins gp63 and gpwac are employed in attachment of the long tail fibers to the tail baseplate. History The investigation of chaperones has a long history. The term "molecular chaperone" appeared first in the literature in 1978, and was invented by Ron Laskey to describe the ability of a nuclear protein called nucleoplasmin to prevent the aggregation of folded histone proteins with DNA during the assembly of nucleosomes. The term was later extended by R. John Ellis in 1987 to describe proteins that mediated the post-translational assembly of protein complexes. In 1988, it was realised that similar proteins mediated this process in both prokaryotes and eukaryotes. The details of this process were determined in 1989, when the ATP-dependent protein folding was demonstrated in vitro. Clinical significance There are many disorders associated with mutations in genes encoding chaperones (i.e. multisystem proteinopathy) that can affect muscle, bone and/or the central nervous system. See also Biological machines Chaperome Chaperonin Chemical chaperones Heat shock protein Heat shock factor 1 Molecular chaperone therapy Pharmacoperone Proteasome Protein dynamics Notes References Protein biosynthesis
Chaperone (protein)
[ "Chemistry" ]
2,804
[ "Protein biosynthesis", "Gene expression", "Biosynthesis" ]
167,647
https://en.wikipedia.org/wiki/Seamount
A seamount is a large submarine landform that rises from the ocean floor without reaching the water surface (sea level), and thus is not an island, islet, or cliff-rock. Seamounts are typically formed from extinct volcanoes that rise abruptly and are usually found rising from the seafloor to in height. They are defined by oceanographers as independent features that rise to at least above the seafloor, characteristically of conical form. The peaks are often found hundreds to thousands of meters below the surface, and are therefore considered to be within the deep sea. During their evolution over geologic time, the largest seamounts may reach the sea surface where wave action erodes the summit to form a flat surface. After they have subsided and sunk below the sea surface, such flat-top seamounts are called "guyots" or "tablemounts". Earth's oceans contain more than 14,500 identified seamounts, of which 9,951 seamounts and 283 guyots, covering a total area of , have been mapped but only a few have been studied in detail by scientists. Seamounts and guyots are most abundant in the North Pacific Ocean, and follow a distinctive evolutionary pattern of eruption, build-up, subsidence and erosion. In recent years, several active seamounts have been observed, for example Kamaʻehuakanaloa (formerly Lōʻihi) in the Hawaiian Islands. Because of their abundance, seamounts are one of the most common marine ecosystems in the world. Interactions between seamounts and underwater currents, as well as their elevated position in the water, attract plankton, corals, fish, and marine mammals alike. Their aggregational effect has been noted by the commercial fishing industry, and many seamounts support extensive fisheries. There are ongoing concerns on the negative impact of fishing on seamount ecosystems, and well-documented cases of stock decline, for example with the orange roughy (Hoplostethus atlanticus). 95% of ecological damage is done by bottom trawling, which scrapes whole ecosystems off seamounts. Because of their large numbers, many seamounts remain to be properly studied, and even mapped. Bathymetry and satellite altimetry are two technologies working to close the gap. There have been instances where naval vessels have collided with uncharted seamounts; for example, Muirfield Seamount is named after the ship that struck it in 1973. However, the greatest danger from seamounts are flank collapses; as they get older, extrusions seeping in the seamounts put pressure on their sides, causing landslides that have the potential to generate massive tsunamis. Geography Seamounts can be found in every ocean basin in the world, distributed extremely widely both in space and in age. A seamount is technically defined as an isolated rise in elevation of or more from the surrounding seafloor, and with a limited summit area, of conical form. There are more than 14,500 seamounts. In addition to seamounts, there are more than 80,000 small knolls, ridges and hills less than 1,000 m in height in the world's oceans. Most seamounts are volcanic in origin, and thus tend to be found on oceanic crust near mid-ocean ridges, mantle plumes, and island arcs. Overall, seamount and guyot coverage is greatest as a proportion of seafloor area in the North Pacific Ocean, equal to 4.39% of that ocean region. The Arctic Ocean has only 16 seamounts and no guyots, and the Mediterranean and Black seas together have only 23 seamounts and 2 guyots. The 9,951 seamounts which have been mapped cover an area of . 
Seamounts have an average area of , with the smallest seamounts found in the Arctic Ocean and the Mediterranean and Black Seas; whilst the largest mean seamount size, , occurs in the Indian Ocean. The largest seamount has an area of and it occurs in the North Pacific. Guyots cover a total area of and have an average area of , more than twice the average size of seamounts. Nearly 50% of guyot area and 42% of the number of guyots occur in the North Pacific Ocean, covering . The largest three guyots are all in the North Pacific: the Kuko Guyot (estimated ), Suiko Guyot (estimated ) and the Pallada Guyot (estimated ). Grouping Seamounts are often found in groupings or submerged archipelagos, a classic example being the Emperor Seamounts, an extension of the Hawaiian Islands. Formed millions of years ago by volcanism, they have since subsided far below sea level. This long chain of islands and seamounts extends thousands of kilometers northwest from the island of Hawaii. There are more seamounts in the Pacific Ocean than in the Atlantic, and their distribution can be described as comprising several elongate chains of seamounts superimposed on a more or less random background distribution. Seamount chains occur in all three major ocean basins, with the Pacific having the greatest number and the most extensive seamount chains. These include the Hawaiian (Emperor), Mariana, Gilbert, Tuamotu and Austral Seamounts (and island groups) in the north Pacific and the Louisville and Sala y Gomez ridges in the southern Pacific Ocean. In the North Atlantic Ocean, the New England Seamounts extend from the eastern coast of the United States to the mid-ocean ridge. Craig and Sandwell noted that clusters of larger Atlantic seamounts tend to be associated with other evidence of hotspot activity, such as on the Walvis Ridge, Vitória-Trindade Ridge, Bermuda Islands and Cape Verde Islands. The Mid-Atlantic Ridge and spreading ridges in the Indian Ocean are also associated with abundant seamounts. Otherwise, seamounts tend not to form distinctive chains in the Indian and Southern Oceans, but rather their distribution appears to be more or less random. Isolated seamounts and those without clear volcanic origins are less common; examples include Bollons Seamount, Eratosthenes Seamount, Axial Seamount and Gorringe Ridge. If all known seamounts were collected into one area, they would make a landform the size of Europe. Their overall abundance makes them one of the most common, and least understood, marine structures and biomes on Earth, a sort of exploratory frontier. Geology Geochemistry and evolution Most seamounts are built by one of two volcanic processes, although some, such as the Christmas Island Seamount Province near Australia, are more enigmatic. Volcanoes near plate boundaries and mid-ocean ridges are built by decompression melting of rock in the upper mantle. The lower-density magma rises through the crust to the surface. Volcanoes formed near or above subduction zones are created because the subducting tectonic plate adds volatiles to the overriding plate, lowering its melting point. Which of these two processes was involved in the formation of a seamount has a profound effect on its eruptive materials. Lava flows from mid-ocean ridge and plate boundary seamounts are mostly basaltic (both tholeiitic and alkalic), whereas flows from subducting ridge volcanoes are mostly calc-alkaline lavas. 
Compared to mid-ocean ridge seamounts, subduction zone seamounts generally have more sodium, alkali, and volatile abundances, and less magnesium, resulting in more explosive, viscous eruptions. All volcanic seamounts follow a particular pattern of growth, activity, subsidence and eventual extinction. The first stage of a seamount's evolution is its early activity, building its flanks and core up from the sea floor. This is followed by a period of intense volcanism, during which the new volcano erupts almost all (e.g. 98%) of its total magmatic volume. The seamount may even grow above sea level to become an oceanic island (for example, the 2009 eruption of Hunga Tonga). After a period of explosive activity near the ocean surface, the eruptions slowly die away. With eruptions becoming infrequent and the seamount losing its ability to maintain itself, the volcano starts to erode. After finally becoming extinct (possibly after a brief rejuvenated period), they are ground back down by the waves. Seamounts are built in a far more dynamic oceanic setting than their land counterparts, resulting in horizontal subsidence as the seamount moves with the tectonic plate towards a subduction zone. Here it is subducted under the plate margin and ultimately destroyed, but it may leave evidence of its passage by carving an indentation into the opposing wall of the subduction trench. The majority of seamounts have already completed their eruptive cycle, so access to early flows by researchers is limited by late volcanic activity. Ocean-ridge volcanoes in particular have been observed to follow a certain pattern in terms of eruptive activity, first observed with Hawaiian seamounts but now shown to be the process followed by all seamounts of the ocean-ridge type. During the first stage the volcano erupts basalt of various types, caused by various degrees of mantle melting. In the second, most active stage of its life, ocean-ridge volcanoes erupt tholeiitic to mildly alkalic basalt as a result of a larger area melting in the mantle. This is finally capped by alkalic flows late in its eruptive history, as the link between the seamount and its source of volcanism is cut by crustal movement. Some seamounts also experience a brief "rejuvenated" period after a hiatus of 1.5 to 10 million years, the flows of which are highly alkalic and produce many xenoliths. In recent years, geologists have confirmed that a number of seamounts are active undersea volcanoes; two examples are Kamaʻehuakanaloa (formerly Lo‘ihi) in the Hawaiian Islands and Vailulu'u in the Manu'a Group (Samoa). Lava types The most apparent lava flows at a seamount are the eruptive flows that cover their flanks, however igneous intrusions, in the forms of dikes and sills, are also an important part of seamount growth. The most common type of flow is pillow lava, named so after its distinctive shape. Less common are sheet flows, which are glassy and marginal, and indicative of larger-scale flows. Volcaniclastic sedimentary rocks dominate shallow-water seamounts. They are the products of the explosive activity of seamounts that are near the water's surface, and can also form from mechanical wear of existing volcanic rock. Structure Seamounts can form in a wide variety of tectonic settings, resulting in a very diverse structural bank. Seamounts come in a wide variety of structural shapes, from conical to flat-topped to complexly shaped. 
Some are built very large and very low, such as Koko Guyot and Detroit Seamount; others are built more steeply, such as Kamaʻehuakanaloa Seamount and Bowie Seamount. Some seamounts also have a carbonate or sediment cap. Many seamounts show signs of intrusive activity, which is likely to lead to inflation, steepening of volcanic slopes, and ultimately, flank collapse. There are also several sub-classes of seamounts. The first are guyots, seamounts with a flat top. These tops must be or more below the surface of the sea; the diameters of these flat summits can be over . Knolls are isolated elevation spikes measuring less than . Lastly, pinnacles are small pillar-like seamounts. Ecology Ecological role of seamounts Seamounts are exceptionally important to their biome ecologically, but their role in their environment is poorly understood. Because they project out above the surrounding sea floor, they disturb standard water flow, causing eddies and associated hydrological phenomena that ultimately result in water movement in an otherwise still ocean bottom. Currents have been measured at up to 0.9 knots, or 48 centimeters per second. Because of this upwelling, seamounts often carry above-average plankton populations; they are thus centers where the fish that feed on them aggregate, in turn falling prey to further predation, making seamounts important biological hotspots. Seamounts provide habitats and spawning grounds for these larger animals, including numerous fish. Some species, including black oreo (Allocyttus niger) and blackstripe cardinalfish (Apogon nigrofasciatus), have been shown to occur more often on seamounts than anywhere else on the ocean floor. Marine mammals, sharks, tuna, and cephalopods all congregate over seamounts to feed, as well as some species of seabirds when the features are particularly shallow. Seamounts often project upwards into shallower zones more hospitable to sea life, providing habitats for marine species that are not found on or around the surrounding deeper ocean bottom. Because seamounts are isolated from each other, they form "undersea islands", creating the same kind of biogeographical interest as true islands. As they are formed from volcanic rock, the substrate is much harder than the surrounding sedimentary deep sea floor. This causes a different type of fauna to exist than on the seafloor, and leads to a theoretically higher degree of endemism. However, recent research, especially centered at Davidson Seamount, suggests that seamounts may not be especially endemic, and discussions are ongoing on the effect of seamounts on endemicity. They have, however, been confidently shown to provide a habitat to species that have difficulty surviving elsewhere. The volcanic rocks on the slopes of seamounts are heavily populated by suspension feeders, particularly corals, which capitalize on the strong currents around the seamount to supply them with food. These corals are therefore host to numerous other organisms in a commensal relationship, for example brittle stars, which climb the coral to get themselves off the seafloor, helping them to catch food particles or small zooplankton as they drift by. This is in sharp contrast with the typical deep-sea habitat, where deposit-feeding animals rely on food they get off the ground. In tropical zones extensive coral growth results in the formation of coral atolls late in the seamount's life. 
In addition soft sediments tend to accumulate on seamounts, which are typically populated by polychaetes (annelid marine worms) oligochaetes (microdrile worms), and gastropod mollusks (sea slugs). Xenophyophores have also been found. They tend to gather small particulates and thus form beds, which alters sediment deposition and creates a habitat for smaller animals. Many seamounts also have hydrothermal vent communities, for example Suiyo and Kamaʻehuakanaloa seamounts. This is helped by geochemical exchange between the seamounts and the ocean water. Seamounts may thus be vital stopping points for some migratory animals, specifically whales. Some recent research indicates whales may use such features as navigational aids throughout their migration. For a long time it has been surmised that many pelagic animals visit seamounts as well, to gather food, but proof of this aggregating effect has been lacking. The first demonstration of this conjecture was published in 2008. Fishing The effect that seamounts have on fish populations has not gone unnoticed by the commercial fishing industry. Seamounts were first extensively fished in the second half of the 20th century, due to poor management practices and increased fishing pressure seriously depleting stock numbers on the typical fishing ground, the continental shelf. Seamounts have been the site of targeted fishing since that time. Nearly 80 species of fish and shellfish are commercially harvested from seamounts, including spiny lobster (Palinuridae), mackerel (Scombridae and others), red king crab (Paralithodes camtschaticus), red snapper (Lutjanus campechanus), tuna (Scombridae), Orange roughy (Hoplostethus atlanticus), and perch (Percidae). Conservation The ecological conservation of seamounts is hurt by the simple lack of information available. Seamounts are very poorly studied, with only 350 of the estimated 100,000 seamounts in the world having received sampling, and fewer than 100 in depth. Much of this lack of information can be attributed to a lack of technology, and to the daunting task of reaching these underwater structures; the technology to fully explore them has only been around the last few decades. Before consistent conservation efforts can begin, the seamounts of the world must first be mapped, a task that is still in progress. Overfishing is a serious threat to seamount ecological welfare. There are several well-documented cases of fishery exploitation, for example the orange roughy (Hoplostethus atlanticus) off the coasts of Australia and New Zealand and the pelagic armorhead (Pseudopentaceros richardsoni) near Japan and Russia. The reason for this is that the fishes that are targeted over seamounts are typically long-lived, slow-growing, and slow-maturing. The problem is confounded by the dangers of trawling, which damages seamount surface communities, and the fact that many seamounts are located in international waters, making proper monitoring difficult. Bottom trawling in particular is extremely devastating to seamount ecology, and is responsible for as much as 95% of ecological damage to seamounts. Corals from seamounts are also vulnerable, as they are highly valued for making jewellery and decorative objects. Significant harvests have been produced from seamounts, often leaving coral beds depleted. 
Individual nations are beginning to note the effect of fishing on seamounts, and the European Commission has agreed to fund the OASIS project, a detailed study of the effects of fishing on seamount communities in the North Atlantic. Another project working towards conservation is CenSeam, a Census of Marine Life project formed in 2005. CenSeam is intended to provide the framework needed to prioritise, integrate, expand and facilitate seamount research efforts in order to significantly reduce the unknown and build towards a global understanding of seamount ecosystems, and the roles they have in the biogeography, biodiversity, productivity and evolution of marine organisms. Possibly the best ecologically studied seamount in the world is Davidson Seamount, with six major expeditions recording over 60,000 species observations. The contrast between the seamount and the surrounding area was well-marked. One of the primary ecological havens on the seamount is its deep sea coral garden, and many of the specimens noted were over a century old. Following the expansion of knowledge on the seamount there was extensive support to make it a marine sanctuary, a motion that was granted in 2008 as part of the Monterey Bay National Marine Sanctuary. Much of what is known about seamounts ecologically is based on observations from Davidson. Another such seamount is Bowie Seamount, which has also been declared a marine protected area by Canada for its ecological richness. Exploration The study of seamounts has been hindered for a long time by the lack of technology. Although seamounts have been sampled as far back as the 19th century, their depth and position meant that the technology to explore and sample seamounts in sufficient detail did not exist until the last few decades. Even with the right technology available, only a scant 1% of the total number have been explored, and sampling and information remains biased towards the top . New species are observed or collected and valuable information is obtained on almost every submersible dive at seamounts. Before seamounts and their oceanographic impact can be fully understood, they must be mapped, a daunting task due to their sheer number. The most detailed seamount mappings are provided by multibeam echosounding (sonar), however after more than 5000 publicly held cruises, the amount of the sea floor that has been mapped remains minuscule. Satellite altimetry is a broader alternative, albeit not as detailed, with 13,000 catalogued seamounts; however this is still only a fraction of the total 100,000. The reason for this is that uncertainties in the technology limit recognition to features or larger. In the future, technological advances could allow for a larger and more detailed catalogue. Observations from CryoSat-2 combined with data from other satellites has shown thousands of previously uncharted seamounts, with more to come as data is interpreted. Deep-sea mining Seamounts are a possible future source of economically important metals. Even though the ocean makes up 70% of Earth's surface area, technological challenges have severely limited the extent of deep sea mining. But with the constantly decreasing supply on land, some mining specialists see oceanic mining as the destined future, and seamounts stand out as candidates. Seamounts are abundant, and all have metal resource potential because of various enrichment processes during the seamount's life. 
An example of epithermal gold mineralization on the seafloor is Conical Seamount, located about 8 km south of Lihir Island in Papua New Guinea. Conical Seamount has a basal diameter of about 2.8 km and rises about 600 m above the seafloor to a water depth of 1050 m. Grab samples from its summit contain the highest gold concentrations yet reported from the modern seafloor (max. 230 g/t Au, avg. 26 g/t, n=40). Iron-manganese, hydrothermal iron oxide, sulfide, sulfate, sulfur, hydrothermal manganese oxide, and phosphorite (the latter especially in parts of Micronesia) are all mineral resources that are deposited upon or within seamounts. However, only the first two have any potential of being targeted by mining in the next few decades. Dangers Some seamounts have not been mapped and thus pose a navigational danger. For instance, Muirfield Seamount is named after the ship that hit it in 1973. More recently, the submarine USS San Francisco ran into an uncharted seamount in 2005 at a speed of , sustaining serious damage and killing one seaman. One major seamount risk is that often, in the late stages of their life, extrusions begin to seep into the seamount. This activity leads to inflation, over-extension of the volcano's flanks, and ultimately flank collapse, leading to submarine landslides with the potential to start major tsunamis, which can be among the largest natural disasters in the world. In an illustration of the potent power of flank collapses, a summit collapse on the northern edge of Vlinder Seamount resulted in a pronounced headwall scarp and a field of debris up to away. A catastrophic collapse at Detroit Seamount flattened its whole structure extensively. Lastly, in 2004, scientists found marine fossils up the flank of Kohala mountain in Hawaii. Subsidence analysis found that at the time of their deposition, this would have been up the flank of the volcano, far too high for a normal wave to reach. The date corresponded with a massive flank collapse at the nearby Mauna Loa, and it was theorized that it was a massive tsunami, generated by the landslide, that deposited the fossils. See also Asphalt volcano Bathymetry Evolution of Hawaiian volcanoes Hotspot (geology) List of submarine volcanoes Marine protected area Mud volcano Oceanic trench Submarine eruption Submarine volcano Topographic prominence Volcanic island References Bibliography Geology Keating, B.H., Fryer, P., Batiza, R., Boehlert, G.W. (Eds.), 1987: Seamounts, islands and atolls. Geophys. Monogr. 43:319–334. Menard, H.W. (1964). Marine Geology of the Pacific. International Series in the Earth Sciences. McGraw-Hill, New York, 271 pp. Ecology Pitcher, T.J., Morato, T., Hart, P.J.B., Clark, M.R., Haggan, N. and Santos, R.S. (eds) (2007). "Seamounts: Ecology, Fisheries and Conservation". Fish and Aquatic Resources Series 12, Blackwell, Oxford, UK. 527pp. External links Geography and geology Earthref Seamount Catalogue. A database of seamount maps and catalogue listings. Volcanic History of Seamounts in the Gulf of Alaska. The giant Ruatoria debris avalanche on the northern Hikurangi margin, New Zealand. Aftermath of a seamount carving into the far side of a subduction trench. Evolution of Hawaiian volcanoes. The life cycle of seamounts was originally observed off the Hawaiian arc. How Volcanoes Work: Lava and Water. An explanation of the different types of lava-water interactions. Ecology A review of the effects of seamounts on biological processes. NOAA paper. 
Mountains in the Sea, a volume on the biological and geological effects of seamounts, available fully online. SeamountsOnline, seamount biology database. Vulnerability of deep sea corals to fishing on seamounts beyond areas of national jurisdiction, United Nations Environment Program. Physical oceanography Fisheries science
Seamount
[ "Physics" ]
5,232
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
167,660
https://en.wikipedia.org/wiki/Cell%20type
A cell type is a classification used to identify cells that share morphological or phenotypical features. A multicellular organism may contain cells of a number of widely differing and specialized cell types, such as muscle cells and skin cells, that differ both in appearance and function yet have identical genomic sequences. Cells may have the same genotype, but belong to different cell types due to the differential regulation of the genes they contain. Classification of a specific cell type is often done through the use of microscopy and molecular markers (such as those from the cluster of differentiation family that are commonly used for this purpose in immunology). Recent developments in single cell RNA sequencing have facilitated the classification of cell types based on shared gene expression patterns. This has led to the discovery of many new cell types in e.g. mouse cortex, hippocampus, dorsal root ganglion and spinal cord. Animals have evolved a greater diversity of cell types in a multicellular body (100–150 different cell types), compared with 10–20 in plants, fungi, and protists. The exact number of cell types is, however, undefined, and the Cell Ontology, as of 2021, lists over 2,300 different cell types. Multicellular organisms All higher multicellular organisms contain cells specialised for different functions. Most distinct cell types arise from a single totipotent cell that differentiates into hundreds of different cell types during the course of development. Differentiation of cells is driven by different environmental cues (such as cell–cell interaction) and intrinsic differences (such as those caused by the uneven distribution of molecules during division). Multicellular organisms are composed of cells that fall into two fundamental types: germ cells and somatic cells. During development, somatic cells will become more specialized and form the three primary germ layers: ectoderm, mesoderm, and endoderm. After formation of the three germ layers, cells will continue to specialize until they reach a terminally differentiated state that is much more resistant to changes in cell type than its progenitors. The simplest organisms considered to have well-defined cell types are some volvoceans, such as Volvox carteri, in which each organism is composed of distinct and interdependent cell populations, some somatic and some reproductive. Conceptual definition Even though the concept of cell type is widely used, specialists still discuss the exact definition of what constitutes a cell type. Humans A list of cell types in the human body may include several hundred distinct types depending on the source. A 2006 peer-reviewed article by Vickaryous and Hall listed 411 distinct human cell types. See also List of distinct cell types in the adult human body List of human cell types derived from the germ layers Stem cell Types of plant cells References Further reading External links Developmental biology
Cell type
[ "Biology" ]
566
[ "Behavior", "Developmental biology", "Reproduction" ]
167,664
https://en.wikipedia.org/wiki/Epsilon%20Eridani
Epsilon Eridani (Latinized from ε Eridani), proper name Ran, is a star in the southern constellation of Eridanus. At a declination of −9.46°, it is visible from most of Earth's surface. Located at a distance from the Sun, it has an apparent magnitude of 3.73, making it the third-closest individual star (or star system) visible to the naked eye. The star is estimated to be less than a billion years old. This relative youth gives Epsilon Eridani a higher level of magnetic activity than the Sun, with a stellar wind 30 times as strong. The star's rotation period is 11.2 days at the equator. Epsilon Eridani is smaller and less massive than the Sun, and has a lower level of elements heavier than helium. It is a main-sequence star of spectral class K2, with an effective temperature of about , giving it an orange hue. It is a candidate member of the Ursa Major moving group of stars, which share a similar motion through the Milky Way, implying these stars shared a common origin in an open cluster. Periodic changes in Epsilon Eridani's radial velocity have yielded evidence of a giant planet orbiting it, designated Epsilon Eridani b. The discovery of the planet was initially controversial, but most astronomers now regard the planet as confirmed. In 2015 the planet was given the proper name AEgir . The Epsilon Eridani planetary system also includes a debris disc consisting of a Kuiper belt analogue at 70 au from the star and warm dust between about 3 au and 20 au from the star. The gap in the debris disc between 20 and 70 au implies the likely existence of outer planets in the system. As one of the nearest Sun-like stars, Epsilon Eridani has been the target of several observations in the search for extraterrestrial intelligence. Epsilon Eridani appears in science fiction stories and has been suggested as a destination for interstellar travel. From Epsilon Eridani, the Sun would appear as a star in Serpens, with an apparent magnitude of 2.4. Nomenclature ε Eridani, Latinised to Epsilon Eridani, is the star's Bayer designation. Despite being a relatively bright star, it was not given a proper name by early astronomers. It has several other catalogue designations. Upon its discovery, the planet was designated Epsilon Eridani b, following the usual designation system for extrasolar planets. The planet and its host star were selected by the International Astronomical Union (IAU) as part of the NameExoWorlds competition for giving proper names to exoplanets and their host stars, for some systems that did not already have proper names. The process involved nominations by educational groups and public voting for the proposed names. In December 2015, the IAU announced the winning names were Ran for the star and AEgir for the planet. Those names had been submitted by the pupils of the 8th Grade at Mountainside Middle School in Colbert, Washington, United States. Both names derive from Norse mythology: Rán is the goddess of the sea and Ægir, her husband, is the god of the ocean. In 2016, the IAU organised a Working Group on Star Names (WGSN) to catalogue and standardise proper names for stars. In its first bulletin of July 2016, the WGSN explicitly recognised the names of exoplanets and their host stars that were produced by the competition. Epsilon Eridani is now listed as Ran in the IAU Catalog of Star Names. Professional astronomers have mostly continued to refer to the star as Epsilon Eridani. 
In Chinese, (), meaning Celestial Meadows, refers to an asterism consisting of ε Eridani, γ Eridani, δ Eridani, π Eridani, ζ Eridani, η Eridani, π Ceti, τ1 Eridani, τ2 Eridani, τ3 Eridani, τ4 Eridani, τ5 Eridani, τ6 Eridani, τ7 Eridani, τ8 Eridani and τ9 Eridani. Consequently, the Chinese name for ε Eridani itself is (, the Fourth [Star] of Celestial Meadows.) Observational history Cataloguing Epsilon Eridani has been known to astronomers since at least the 2nd century AD, when Claudius Ptolemy (a Greek astronomer from Alexandria, Egypt) included it in his catalogue of more than a thousand stars. The catalogue was published as part of his astronomical treatise the Almagest. The constellation Eridanus was named by Ptolemy – , and Epsilon Eridani was listed as its thirteenth star. Ptolemy called Epsilon Eridani (here is the number four). This refers to a group of four stars in Eridanus: γ, π, δ and ε (10th–13th in Ptolemy's list). ε is the most western of these, and thus the first of the four in the apparent daily motion of the sky from east to west. Modern scholars of Ptolemy's catalogue designate its entry as "P 784" (in order of appearance) and "Eri 13". Ptolemy described the star's magnitude as 3. Epsilon Eridani was included in several star catalogues of medieval Islamic astronomical treatises, which were based on Ptolemy's catalogue: in Al-Sufi's Book of Fixed Stars, published in 964, Al-Biruni's Mas'ud Canon, published in 1030, and Ulugh Beg's Zij-i Sultani, published in 1437. Al-Sufi's estimate of Epsilon Eridani's magnitude was 3. Al-Biruni quotes magnitudes from Ptolemy and Al-Sufi (for Epsilon Eridani he quotes the value 4 for both Ptolemy's and Al-Sufi's magnitudes; original values of both these magnitudes are 3). Its number in order of appearance is 786. Ulugh Beg carried out new measurements of Epsilon Eridani's coordinates in his observatory at Samarkand, and quotes magnitudes from Al-Sufi (3 for Epsilon Eridani). The modern designations of its entry in Ulugh Beg's catalogue are "U 781" and "Eri 13" (the latter is the same as Ptolemy's catalogue designation). In 1598 Epsilon Eridani was included in Tycho Brahe's star catalogue, republished in 1627 by Johannes Kepler as part of his Rudolphine Tables. This catalogue was based on Tycho Brahe's observations of 1577–1597, including those on the island of Hven at his observatories of Uraniborg and Stjerneborg. The sequence number of Epsilon Eridani in the constellation Eridanus was 10, and it was designated ; the meaning is the same as Ptolemy's description. Brahe assigned it magnitude 3. Epsilon Eridani's Bayer designation was established in 1603 as part of the Uranometria, a star catalogue produced by German celestial cartographer Johann Bayer. His catalogue assigned letters from the Greek alphabet to groups of stars belonging to the same visual magnitude class in each constellation, beginning with alpha (α) for a star in the brightest class. Bayer made no attempt to arrange stars by relative brightness within each class. Thus, although Epsilon is the fifth letter in the Greek alphabet, the star is the tenth-brightest in Eridanus. In addition to the letter ε, Bayer had given it the number 13 (the same as Ptolemy's catalogue number, as were many of Bayer's numbers) and described it as . Bayer assigned Epsilon Eridani magnitude 3. In 1690 Epsilon Eridani was included in the star catalogue of Johannes Hevelius. 
Its sequence number in the constellation Eridanus was 14, and it was assigned magnitude 3 or 4 (sources differ). The star catalogue of English astronomer John Flamsteed, published in 1712, gave Epsilon Eridani the Flamsteed designation of 18 Eridani, because it was the eighteenth catalogued star in the constellation of Eridanus by order of increasing right ascension. In 1818 Epsilon Eridani was included in Friedrich Bessel's catalogue, based on James Bradley's observations from 1750–1762, at magnitude 4. It also appeared in Nicolas Louis de Lacaille's catalogue of 398 principal stars, whose 307-star version was published in 1755 and whose full version was published in 1757 in Paris. In its 1831 edition by Francis Baily, Epsilon Eridani has the number 50. Lacaille assigned it magnitude 3. In 1801 Epsilon Eridani was included in Joseph Jérôme Lefrançois de Lalande's catalogue of about 50,000 stars, based on his observations of 1791–1800, in which observations are arranged in time order. It contains three observations of Epsilon Eridani. In 1847, a new edition of Lalande's catalogue was published by Francis Baily, containing the majority of its observations, in which the stars were numbered in order of right ascension. Because every observation of each star was numbered and Epsilon Eridani was observed three times, it got three numbers: 6581, 6582 and 6583. (Today numbers from this catalogue are used with the prefix "Lalande", or "Lal".) Lalande assigned Epsilon Eridani magnitude 3. Also in 1801 it was included in the catalogue of Johann Bode, in which about 17,000 stars were grouped into 102 constellations and numbered (Epsilon Eridani got the number 159 in the constellation Eridanus). Bode's catalogue was based on observations of various astronomers, including Bode himself, but mostly on Lalande's and Lacaille's (for the southern sky). Bode assigned Epsilon Eridani magnitude 3. In 1814 Giuseppe Piazzi published the second edition of his star catalogue (its first edition was published in 1803), based on observations during 1792–1813, in which more than 7000 stars were grouped into 24 hours (0–23). Epsilon Eridani is number 89 in hour 3. Piazzi assigned it magnitude 4. In 1918 Epsilon Eridani appeared in the Henry Draper Catalogue with the designation HD 22049 and a preliminary spectral classification of K0. Detection of proximity Based on observations between 1800 and 1880, Epsilon Eridani was found to have a large proper motion across the celestial sphere, which was estimated at three arcseconds per year (angular velocity). This movement implied it was relatively close to the Sun, making it a star of interest for the purpose of stellar parallax measurements. This process involves recording the position of Epsilon Eridani as Earth moves around the Sun, which allows a star's distance to be estimated. From 1881 to 1883, American astronomer William L. Elkin used a heliometer at the Royal Observatory at the Cape of Good Hope, South Africa, to compare the position of Epsilon Eridani with two nearby stars. From these observations, a parallax was calculated. By 1917, observers had refined their parallax estimate to 0.317 arcseconds. The modern value of 0.3109 arcseconds is equivalent to a distance of about 10.5 light-years (3.2 parsecs). Circumstellar discoveries Based on apparent changes in the position of Epsilon Eridani between 1938 and 1972, Peter van de Kamp proposed that an unseen companion with an orbital period of 25 years was causing gravitational perturbations in its position. 
This claim was refuted in 1993 by Wulff-Dieter Heintz and the false detection was blamed on a systematic error in the photographic plates. Launched in 1983, the space telescope IRAS detected infrared emissions from stars near the Sun, including an excess infrared emission from Epsilon Eridani. The observations indicated a disk of fine-grained cosmic dust was orbiting the star; this debris disk has since been extensively studied. Evidence for a planetary system was discovered in 1998 by the observation of asymmetries in this dust ring. The clumping in the dust distribution could be explained by gravitational interactions with a planet orbiting just inside the dust ring. In 1987, the detection of an orbiting planetary object was announced by Bruce Campbell, Gordon Walker and Stephenson Yang. From 1980 to 2000, a team of astronomers led by Artie P. Hatzes made radial velocity observations of Epsilon Eridani, measuring the Doppler shift of the star along the line of sight. They found evidence of a planet orbiting the star with a period of about seven years. Although there is a high level of noise in the radial velocity data due to magnetic activity in its photosphere, any periodicity caused by this magnetic activity is expected to show a strong correlation with variations in emission lines of ionized calcium (the Ca II H and K lines). Because no such correlation was found, a planetary companion was deemed the most likely cause. This discovery was supported by astrometric measurements of Epsilon Eridani made between 2001 and 2003 with the Hubble Space Telescope, which showed evidence for gravitational perturbation of Epsilon Eridani by a planet. SETI and proposed exploration In 1960, physicists Philip Morrison and Giuseppe Cocconi proposed that extraterrestrial civilisations might be using radio signals for communication. Project Ozma, led by astronomer Frank Drake, used the Tatel Telescope to search for such signals from the nearby Sun-like stars Epsilon Eridani and Tau Ceti. The systems were observed at the emission frequency of neutral hydrogen, 1,420 MHz (21 cm). No signals of intelligent extraterrestrial origin were detected. Drake repeated the experiment in 2010, with the same negative result. Despite this lack of success, Epsilon Eridani made its way into science fiction literature and television shows for many years following news of Drake's initial experiment. In Habitable Planets for Man, a 1964 RAND Corporation study by space scientist Stephen H. Dole, the probability of a habitable planet being in orbit around Epsilon Eridani was estimated at 3.3%. Among the known nearby stars, it was listed with the 14 stars that were thought most likely to have a habitable planet. William I. McLaughlin proposed a new strategy in the search for extraterrestrial intelligence (SETI) in 1977. He suggested that widely observable events such as nova explosions might be used by intelligent extraterrestrials to synchronise the transmission and reception of their signals. This idea was tested by the National Radio Astronomy Observatory in 1988, which used outbursts of Nova Cygni 1975 as the timer. Fifteen days of observation showed no anomalous radio signals coming from Epsilon Eridani. Because of the proximity and Sun-like properties of Epsilon Eridani, in 1985 physicist and author Robert L. Forward considered the system a plausible target for interstellar travel. The following year, the British Interplanetary Society suggested Epsilon Eridani as one of the targets in its Project Daedalus study. 
The system has continued to be among the targets of such proposals, such as Project Icarus in 2011. Based on its nearby location, Epsilon Eridani was among the target stars for Project Phoenix, a 1995 microwave survey for signals from extraterrestrial intelligence. The project had checked about 800 stars by 2004 but had not yet detected any signals. Properties At a distance of about 10.5 light-years, Epsilon Eridani is the 13th-nearest known star (and ninth-nearest solitary star or stellar system) to the Sun as of 2014. Its proximity makes it one of the most studied stars of its spectral type. Epsilon Eridani is located in the northern part of the constellation Eridanus, about 3° east of the slightly brighter star Delta Eridani. With a declination of −9.46°, Epsilon Eridani can be viewed from much of Earth's surface, at suitable times of year. Only to the north of latitude 80° N is it permanently hidden below the horizon. The apparent magnitude of 3.73 can make it difficult to observe from an urban area with the unaided eye, because the night skies over cities are obscured by light pollution. Epsilon Eridani has an estimated mass of 0.82 solar masses and a radius of 0.738 solar radii. It shines with a luminosity of only 0.34 solar luminosities. The estimated effective temperature is 5,084 K. With a stellar classification of K2 V, it is the second-nearest K-type main-sequence star (after Alpha Centauri B). Since 1943 the spectrum of Epsilon Eridani has served as one of the stable anchor points by which other stars are classified. Its metallicity, the fraction of elements heavier than helium, is slightly lower than the Sun's. In Epsilon Eridani's chromosphere, a region of the outer atmosphere just above the light-emitting photosphere, the abundance of iron is estimated at 74% of the Sun's value. The proportion of lithium in the atmosphere is about one-fifth of that in the Sun. Epsilon Eridani's K-type classification indicates that the spectrum has relatively weak absorption lines from absorption by hydrogen (Balmer lines) but strong lines of neutral atoms and singly ionized calcium (Ca II). The luminosity class V (dwarf) is assigned to stars that are undergoing thermonuclear fusion of hydrogen in their core. For a K-type main-sequence star, this fusion is dominated by the proton–proton chain reaction, in which a series of reactions effectively combines four hydrogen nuclei to form a helium nucleus. The energy released by fusion is transported outward from the core through radiation, which results in no net motion of the surrounding plasma. Outside of this region, in the envelope, energy is carried to the photosphere by plasma convection, where it then radiates into space. Magnetic activity Epsilon Eridani has a higher level of magnetic activity than the Sun, and thus the outer parts of its atmosphere (the chromosphere and corona) are more dynamic. The average magnetic field strength of Epsilon Eridani across the entire surface is more than forty times greater than the magnetic-field strength in the Sun's photosphere. The magnetic properties can be modelled by assuming that regions with a magnetic flux of about 0.14 T randomly cover approximately 9% of the photosphere, whereas the remainder of the surface is free of magnetic fields. The overall magnetic activity of Epsilon Eridani shows two co-existing activity cycles of different lengths. 
Assuming that its radius does not change over these intervals, the long-term variation in activity level appears to produce a temperature variation of 15 K, which corresponds to a variation in visual magnitude (V) of 0.014. The magnetic field on the surface of Epsilon Eridani causes variations in the hydrodynamic behaviour of the photosphere. This results in greater jitter during measurements of its radial velocity. Radial-velocity variations measured over a 20-year period are much larger than the measurement uncertainty. This makes interpretation of periodicities in the radial velocity of Epsilon Eridani, such as those caused by an orbiting planet, more difficult. Epsilon Eridani is classified as a BY Draconis variable because it has regions of higher magnetic activity that move into and out of the line of sight as it rotates. Measurement of this rotational modulation suggests that its equatorial region rotates with an average period of 11.2 days, which is less than half of the rotation period of the Sun. Observations have shown that Epsilon Eridani varies as much as 0.050 in V magnitude due to starspots and other short-term magnetic activity. Photometry has also shown that the surface of Epsilon Eridani, like the Sun, is undergoing differential rotation, i.e. the rotation period at the equator differs from that at high latitudes. The measured periods range from 10.8 to 12.3 days. The axial tilt of Epsilon Eridani toward the line of sight from Earth is highly uncertain: estimates range from 24° to 72°. The high levels of chromospheric activity, strong magnetic field, and relatively fast rotation rate of Epsilon Eridani are characteristic of a young star. Most estimates of the age of Epsilon Eridani place it in the range from 200 million to 800 million years. The low abundance of heavy elements in the chromosphere of Epsilon Eridani would usually indicate an older star, because the interstellar medium (out of which stars form) is steadily enriched by heavier elements produced by older generations of stars. This anomaly might be caused by a diffusion process that has transported some of the heavier elements out of the photosphere and into a region below Epsilon Eridani's convection zone. Epsilon Eridani is more luminous in X-rays than the Sun at peak activity. The source for this strong X-ray emission is Epsilon Eridani's hot corona. Epsilon Eridani's corona appears larger and hotter than the Sun's, with its temperature measured from observation of the corona's ultraviolet and X-ray emission. It displays a cyclical variation in X-ray emission that is consistent with the magnetic activity cycle. The stellar wind emitted by Epsilon Eridani expands until it collides with the surrounding interstellar medium of diffuse gas and dust, resulting in a bubble of heated hydrogen gas (an astrosphere, the equivalent of the heliosphere that surrounds the Sun). The absorption spectrum from this gas has been measured with the Hubble Space Telescope, allowing the properties of the stellar wind to be estimated. The hot corona results in a mass loss rate in Epsilon Eridani's stellar wind that is 30 times higher than the Sun's. This stellar wind generates the astrosphere, which contains a bow shock where the wind meets the surrounding interstellar medium. At its estimated distance from Earth, this astrosphere spans 42 arcminutes, which is wider than the apparent size of the full Moon. 
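The figures quoted in this section can be cross-checked with a short calculation. The minimal Python sketch below verifies that the quoted radius and effective temperature reproduce the quoted luminosity through the Stefan–Boltzmann scaling, and converts an assumed astrosphere extent into an angular size at the star's distance; the astrosphere span of roughly 8,000 au used here is an illustrative assumption, as the value is not given above.

```python
# Rough consistency checks on figures quoted in this section.  Values taken
# from the text: R = 0.738 R_sun, T_eff = 5084 K, L = 0.34 L_sun and a
# parallax of 0.3109 arcsec.  The astrosphere span of ~8,000 au is an
# assumed illustrative value, not one given in the text.

T_SUN = 5772.0  # K, nominal solar effective temperature

def luminosity_ratio(radius_rsun, t_eff_k):
    """Stefan-Boltzmann scaling: L/L_sun = (R/R_sun)**2 * (T/T_sun)**4."""
    return radius_rsun ** 2 * (t_eff_k / T_SUN) ** 4

def angular_span_arcmin(span_au, parallax_arcsec):
    """Angular size of a structure `span_au` across, at the distance implied
    by the parallax; 1 au at 1 pc subtends 1 arcsecond."""
    distance_pc = 1.0 / parallax_arcsec
    return (span_au / distance_pc) / 60.0

print(f"L/L_sun from R and T_eff: {luminosity_ratio(0.738, 5084.0):.2f}")      # ~0.33 (0.34 quoted)
print(f"Assumed astrosphere span: {angular_span_arcmin(8000.0, 0.3109):.0f} arcmin")  # ~41 (42 quoted)
```

Both numbers land close to the values quoted in the text, which is the sense in which the quoted parameters are mutually consistent.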
Kinematics Epsilon Eridani has a high proper motion, moving −0.976 arcseconds per year in right ascension (the celestial equivalent of longitude) and 0.018 arcseconds per year in declination (celestial latitude), for a combined total of 0.962 arcseconds per year. The star has a positive radial velocity, directed away from the Sun. The space velocity components of Epsilon Eridani in the galactic co-ordinate system imply that it is travelling within the Milky Way at a mean galactocentric distance of 28.7 kly (8.79 kiloparsecs) from the core along an orbit that has an eccentricity of 0.09. The position and velocity of Epsilon Eridani indicate that it may be a member of the Ursa Major Moving Group, whose members share a common motion through space. This behaviour suggests that the moving group originated in an open cluster that has since diffused. The estimated age of this group lies within the range of the age estimates for Epsilon Eridani. During the past million years, three stars are believed to have passed close to Epsilon Eridani. The most recent and closest of these encounters was with Kapteyn's Star, which made its closest approach roughly 12,500 years ago. Two more distant encounters were with Sirius and Ross 614. None of these encounters are thought to have been close enough to affect the circumstellar disk orbiting Epsilon Eridani. Epsilon Eridani made its closest approach to the Sun about 105,000 years ago. Based upon a simulation of close encounters with nearby stars, the binary star system Luyten 726-8, which includes the variable star UV Ceti, will encounter Epsilon Eridani in approximately 31,500 years at a minimum distance of about 0.9 ly (0.29 parsecs). They will be less than 1 ly (0.3 parsecs) apart for about 4,600 years. If Epsilon Eridani has an Oort cloud, Luyten 726-8 could gravitationally perturb some of its comets with long orbital periods. Planetary system Debris disc An infrared excess around Epsilon Eridani was detected by IRAS, indicating the presence of circumstellar dust. Observations with the James Clerk Maxwell Telescope (JCMT) at a wavelength of 850 μm show an extended flux of radiation out to an angular radius of 35 arcseconds around Epsilon Eridani, resolving the debris disc for the first time. Higher resolution images have since been taken with the Atacama Large Millimeter Array, showing that the belt is located 70 au from the star with a width of just 11 au. The disc is inclined 33.7° from face-on, making it appear elliptical. Dust and possibly water ice from this belt migrate inward because of drag from the stellar wind and a process by which stellar radiation causes dust grains to slowly spiral toward Epsilon Eridani, known as the Poynting–Robertson effect. At the same time, these dust particles can be destroyed through mutual collisions. The time scale for all of the dust in the disk to be cleared away by these processes is less than Epsilon Eridani's estimated age. Hence, the current dust disk must have been created by collisions or other effects of larger parent bodies, and the disk represents a late stage in the planet-formation process. It would have required collisions between 11 Earth masses' worth of parent bodies to have maintained the disk in its current state over its estimated age. The disk contains an estimated mass of dust equal to a sixth of the mass of the Moon, with individual dust grains exceeding 3.5 μm in size at a temperature of about 55 K. 
This dust is being generated by the collision of comets, which are up to 10 to 30 km in diameter and have a combined mass of 5 to 9 times that of Earth. This is similar to the estimated 10 Earth masses in the primordial Kuiper belt. The disk around Epsilon Eridani contains only a small amount of carbon monoxide. This low level suggests a paucity of volatile-bearing comets and icy planetesimals compared to the Kuiper belt. The JCMT images show signs of clumpy structure in the belt that may be explained by gravitational perturbation from a planet, dubbed Epsilon Eridani c. The clumps in the dust are theorised to occur at orbits that have an integer resonance with the orbit of the suspected planet. For example, the region of the disk that completes two orbits for every three orbits of a planet is in a 3:2 orbital resonance. The planet proposed to cause these perturbations is predicted to have a semimajor axis of between 40 and 50 au. However, the brightest clumps have since been identified as background sources and the existence of the remaining clumps remains debated. Dust is also present closer to the star. Observations from NASA's Spitzer Space Telescope suggest that Epsilon Eridani actually has two asteroid belts and a cloud of exozodiacal dust. The latter is an analogue of the zodiacal dust that occupies the plane of the Solar System. One belt sits at approximately the same position as the asteroid belt in the Solar System and consists of silicate grains with a diameter of 3 μm and a combined mass of about 10¹⁸ kg. If the planet Epsilon Eridani b exists, then this belt is unlikely to have had a source outside the orbit of the planet, so the dust may have been created by fragmentation and cratering of larger bodies such as asteroids. The second, denser belt, most likely also populated by asteroids, lies between the first belt and the outer comet disk. The structure of the belts and the dust disk suggests that more than two planets in the Epsilon Eridani system are needed to maintain this configuration. In an alternative scenario, the exozodiacal dust may be generated in the outer belt. This dust is then transported inward past the orbit of Epsilon Eridani b. When collisions between the dust grains are taken into account, the dust will reproduce the observed infrared spectrum and brightness. Outside the radius of ice sublimation, located beyond 10 au from Epsilon Eridani where the temperatures fall below 100 K, the best fit to the observations occurs when a mix of ice and silicate dust is assumed. Inside this radius, the dust must consist of silicate grains that lack volatiles. The inner region around Epsilon Eridani, from a radius of 2.5 au inward, appears to be clear of dust down to the detection limit of the 6.5 m MMT telescope. Grains of dust in this region are efficiently removed by drag from the stellar wind, while the presence of a planetary system may also help keep this area clear of debris. Still, this does not preclude the possibility that an inner asteroid belt may be present with a combined mass no greater than the asteroid belt in the Solar System. Long-period planets As one of the nearest Sun-like stars, Epsilon Eridani has been the target of many attempts to search for planetary companions. Its chromospheric activity and variability mean that finding planets with the radial velocity method is difficult, because the stellar activity may create signals that mimic the presence of planets. 
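As a rough illustration of why activity-induced jitter complicates radial-velocity searches around this star, the minimal Python sketch below estimates the velocity semi-amplitude that a giant planet would induce on Epsilon Eridani, using the standard Keplerian relation. The planet mass, period and eccentricity plugged in are round illustrative numbers consistent with the ranges quoted below for Epsilon Eridani b, not measured values.

```python
import math

# Sketch: Keplerian radial-velocity semi-amplitude K induced on the host star.
# Inputs are illustrative round numbers consistent with the ranges quoted for
# Epsilon Eridani b (assumed here, not measured values).

G     = 6.674e-11   # m^3 kg^-1 s^-2
M_SUN = 1.989e30    # kg
M_JUP = 1.898e27    # kg
YEAR  = 3.156e7     # s

def rv_semi_amplitude(m_star_msun, m_p_sini_mjup, period_yr, ecc=0.0):
    """K = (2*pi*G/P)**(1/3) * m_p*sin(i) / (M_star + m_p)**(2/3) / sqrt(1 - e**2), in m/s."""
    m_star = m_star_msun * M_SUN
    m_p = m_p_sini_mjup * M_JUP
    p = period_yr * YEAR
    return ((2 * math.pi * G / p) ** (1 / 3)
            * m_p / (m_star + m_p) ** (2 / 3)
            / math.sqrt(1 - ecc ** 2))

# ~0.7 M_Jup on a 7.4-year orbit around a 0.82 M_sun star gives roughly 11-12 m/s,
# comparable to the activity-induced jitter, hence the long-running controversy.
print(f"K ~ {rv_semi_amplitude(0.82, 0.7, 7.4, ecc=0.07):.1f} m/s")
```

A signal of this size is only a few times larger than the star's intrinsic radial-velocity variability, which is why detections had to be corroborated with activity indicators and astrometry.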
Searches for exoplanets around Epsilon Eridani with direct imaging have been unsuccessful. Infrared observation has shown there are no bodies of three or more Jupiter masses in this system, out to at least a distance of 500 au from the host star. Planets with masses and temperatures similar to those of Jupiter should be detectable by Spitzer at distances beyond 80 au. One roughly Jupiter-sized long-period planet has been detected and characterized by both the radial velocity and astrometry methods. Planets more than 150% as massive as Jupiter can be ruled out at the inner edge of the debris disk at 30–35 au. Planet b (AEgir) Referred to as Epsilon Eridani b, this planet was announced in 2000, but the discovery remained controversial over roughly the next two decades. A comprehensive study in 2008 called the detection "tentative" and described the proposed planet as "long suspected but still unconfirmed". Many astronomers believed the evidence was sufficiently compelling to regard the discovery as confirmed. The discovery was questioned in 2013 because a search program at La Silla Observatory did not confirm its existence. Further studies since 2018 have gradually reaffirmed the planet's existence through a combination of radial velocity and astrometry. Published sources remain in disagreement as to the planet's basic parameters. Recent values for its orbital period range from 7.3 to 7.6 years, estimates of the size of its elliptical orbit (the semimajor axis) range from 3.38 au to 3.53 au, and approximations of its orbital eccentricity range from 0.055 to 0.26. Initially, the planet's mass was unknown, but a lower limit could be estimated based on the orbital displacement of Epsilon Eridani. Only the component of the displacement along the line of sight to Earth was known, which yields a value for the formula m sin i, where m is the mass of the planet and i is the orbital inclination. Estimates for the value of m sin i ranged from 0.60 Jupiter masses to 1.06 Jupiter masses, which sets the lower limit for the mass of the planet (because the sine function has a maximum value of 1). Taking m sin i in the middle of that range at 0.78, and estimating the inclination at 30° as was suggested by Hubble astrometry, yields a planetary mass of roughly 1.6 Jupiter masses, as illustrated in the sketch below. More recent astrometric studies have found lower masses, ranging from 0.63 to 0.78 Jupiter masses. Of all the measured parameters for this planet, the value for orbital eccentricity is the most uncertain. The eccentricity of 0.7 suggested by some older studies is inconsistent with the presence of the proposed asteroid belt at a distance of 3 au. If the eccentricity were this high, the planet would pass through the asteroid belt and clear it out within about ten thousand years. If the belt has existed for longer than this period, which appears likely, it imposes an upper limit on Epsilon Eridani b's eccentricity of about 0.10–0.15. If the dust disk is instead being generated from the outer debris disk, rather than from collisions in an asteroid belt, then no constraints on the planet's orbital eccentricity are needed to explain the dust distribution. Potential habitability Epsilon Eridani is a target for planet-finding programs because it has properties that allow an Earth-like planet to form. Although this system was not chosen as a primary candidate for the now-canceled Terrestrial Planet Finder, it was a target star for NASA's proposed Space Interferometry Mission to search for Earth-sized planets. 
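The inclination correction described above can be made explicit in a short sketch. It simply uses the mid-range m sin i of 0.78 Jupiter masses and the 30° inclination suggested by the Hubble astrometry, as quoted in the text.

```python
import math

# Converting the radial-velocity lower limit m*sin(i) into a true mass,
# as described above: m = (m sin i) / sin(i).  Inputs are the values quoted
# in the text (mid-range m sin i = 0.78 M_Jup, inclination i = 30 degrees).

def true_mass_mjup(m_sini_mjup, inclination_deg):
    """True planet mass in Jupiter masses, given m*sin(i) and the orbital inclination."""
    return m_sini_mjup / math.sin(math.radians(inclination_deg))

print(f"{true_mass_mjup(0.78, 30):.2f} M_Jup")  # sin(30 deg) = 0.5, so ~1.56 M_Jup
```

Because sin i appears in the denominator, a more face-on orbit (smaller inclination) would imply a correspondingly larger true mass, which is why the inclination estimate matters so much for this planet.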
The proximity, Sun-like properties and suspected planets of Epsilon Eridani have also made it the subject of multiple studies on whether an interstellar probe can be sent to Epsilon Eridani. The orbital radius at which the stellar flux from Epsilon Eridani matches the solar constant—where the emission matches the Sun's output at the orbital distance of the Earth—is 0.61 au. That is within the maximum habitable zone of a conjectured Earth-like planet orbiting Epsilon Eridani, which currently stretches from about 0.5 to 1.0 au. As Epsilon Eridani ages over a period of 20 billion years, the net luminosity will increase, causing this zone to slowly expand outward to about 0.6–1.4 au. The presence of a large planet with a highly elliptical orbit in proximity to Epsilon Eridani's habitable zone reduces the likelihood of a terrestrial planet having a stable orbit within the habitable zone. A young star such as Epsilon Eridani can produce large amounts of ultraviolet radiation that may be harmful to life, but on the other hand it is a cooler star than the Sun and so produces less ultraviolet radiation to start with. The orbital radius where the UV flux matches that on the early Earth lies at just under 0.5 au. Because that is actually slightly closer to the star than the habitable zone, this has led some researchers to conclude there is not enough energy from ultraviolet radiation reaching into the habitable zone for life to ever get started around the young Epsilon Eridani. See also List of multiplanetary systems Lists of planets List of nearest stars and brown dwarfs Notes References External links K-type main-sequence stars Eridani, Epsilon BY Draconis variables Planetary systems with one confirmed planet Circumstellar disks Ursa Major moving group Local Bubble Ran 50 Eridanus (constellation) Eridani, Epsilon BD-09 0697 Eridani, 18 0144 022049 016537 1084
Epsilon Eridani
[ "Astronomy" ]
7,239
[ "Eridanus (constellation)", "Constellations" ]
167,718
https://en.wikipedia.org/wiki/Porcelain
Porcelain, also called china, is a ceramic material made by heating raw materials, generally including kaolinite, in a kiln to high temperatures, typically in the range of about 1,200–1,400 °C. The greater strength and translucence of porcelain, relative to other types of pottery, arise mainly from vitrification and the formation of the mineral mullite within the body at these high temperatures. End applications include tableware, decorative ware such as figurines, and products in technology and industry such as electrical insulators and laboratory ware. The manufacturing process used for porcelain is similar to that used for earthenware and stoneware, the two other main types of pottery, although it can be more challenging to produce. It has usually been regarded as the most prestigious type of pottery due to its delicacy, strength, and high degree of whiteness. It is frequently both glazed and decorated. Though definitions vary, porcelain can be divided into three main categories: hard-paste, soft-paste, and bone china. The categories differ in the composition of the body and the firing conditions. Porcelain slowly evolved in China and was finally achieved (depending on the definition used) at some point about 2,000 to 1,200 years ago. It slowly spread to other East Asian countries, then to Europe, and eventually to the rest of the world. The European name, porcelain in English, comes from the old Italian porcellana (cowrie shell) because of its resemblance to the surface of the shell. Porcelain is also referred to as china or fine china in some English-speaking countries, as it was first seen in imports from China during the 17th century. Properties associated with porcelain include low permeability and elasticity; considerable strength, hardness, whiteness, translucency, and resonance; and a high resistance to corrosive chemicals and thermal shock. Porcelain has been described as being "completely vitrified, hard, impermeable (even before glazing), white or artificially coloured, translucent (except when of considerable thickness), and resonant". However, the term "porcelain" lacks a universal definition and has "been applied in an unsystematic fashion to substances of diverse kinds that have only certain surface-qualities in common". Traditionally, East Asia only classifies pottery into low-fired wares (earthenware) and high-fired wares (often translated as porcelain), the latter also including what Europeans call "stoneware", which is high-fired but not generally white or translucent. Terms such as "proto-porcelain", "porcellaneous", or "near-porcelain" may be used in cases where the ceramic body approaches whiteness and translucency. In 2021, the global market for porcelain tableware was estimated to be worth US$22.1 billion. Types Hard paste Hard-paste porcelain was invented in China, and it was also used in Japanese porcelain. Most of the finest-quality porcelain wares are made of this material. The earliest European porcelains were produced at the Meissen factory in the early 18th century; they were formed from a paste composed of kaolin and alabaster and fired at high temperatures in a wood-fired kiln, producing a porcelain of great hardness, translucency, and strength. Later, the composition of the Meissen hard paste was changed, and the alabaster was replaced by feldspar and quartz, allowing the pieces to be fired at lower temperatures. Kaolinite, feldspar, and quartz (or other forms of silica) continue to constitute the basic ingredients for most continental European hard-paste porcelains. 
Soft paste Soft-paste porcelains date back to early attempts by European potters to replicate Chinese porcelain by using mixtures of clay and frit. Soapstone and lime are known to have been included in these compositions. These wares were not yet actual porcelain wares, as they were neither hard nor vitrified by firing kaolin clay at high temperatures. As these early formulations suffered from high pyroplastic deformation, or slumping in the kiln at high temperatures, they were uneconomic to produce and of low quality. Formulations were later developed based on kaolin with quartz, feldspars, nepheline syenite, or other feldspathic rocks. These are technically superior and continue to be produced. Soft-paste porcelains are fired at lower temperatures than hard-paste porcelains; therefore, these wares are generally less hard than hard-paste porcelains. Bone china Although originally developed in England in 1748 to compete with imported porcelain, bone china is now made worldwide, including in China. The English had read the letters of Jesuit missionary François Xavier d'Entrecolles, which described Chinese porcelain manufacturing secrets in detail. One writer has speculated that a misunderstanding of the text could possibly have been responsible for the first attempts to use bone-ash as an ingredient in English porcelain, although this is not supported by modern researchers and historians. Traditionally, English bone china was made from two parts of bone ash, one part of kaolin, and one part of china stone, although the latter has been replaced by feldspars from non-UK sources. Materials Kaolin is the primary material from which porcelain is made, even though clay minerals might account for only a small proportion of the whole. The word paste is an old term for both unfired and fired materials. A more common terminology for the unfired material is "body"; for example, when buying materials a potter might order an amount of porcelain body from a vendor. The composition of porcelain is highly variable, but the clay mineral kaolinite is often a raw material. Other raw materials can include feldspar, ball clay, glass, bone ash, steatite, quartz, petuntse and alabaster. The clays used are often described as being long or short, depending on their plasticity. Long clays are cohesive (sticky) and have high plasticity; short clays are less cohesive and have lower plasticity. In soil mechanics, plasticity is determined by measuring the increase in content of water required to change a clay from a solid state bordering on the plastic, to a plastic state bordering on the liquid, though the term is also used less formally to describe the ease with which a clay may be worked. Clays used for porcelain are generally of lower plasticity than many other pottery clays. They wet very quickly, meaning that small changes in the content of water can produce large changes in workability. Thus, the range of water content within which these clays can be worked is very narrow and consequently must be carefully controlled. Production Forming Porcelain can be made using all the shaping techniques for pottery. Glazing Biscuit porcelain is unglazed porcelain treated as a finished product, mostly for figures and sculpture. Unlike their lower-fired counterparts, porcelain wares do not need glazing to render them impermeable to liquids and for the most part are glazed for decorative purposes and to make them resistant to dirt and staining. 
Many types of glaze, such as the iron-containing glaze used on the celadon wares of Longquan, were designed specifically for their striking effects on porcelain. Decoration Porcelain often receives underglaze decoration using pigments that include cobalt oxide and copper, or overglaze enamels, allowing a wider range of colours. Like many earlier wares, modern porcelains are often biscuit-fired at around , coated with glaze and then sent for a second glaze-firing at a temperature of about or greater. Another early method is "once-fired", where the glaze is applied to the unfired body and the two fired together in a single operation. Firing In this process, "green" (unfired) ceramic wares are heated to high temperatures in a kiln to permanently set their shapes, vitrify the body and the glaze. Porcelain is fired at a higher temperature than earthenware so that the body can vitrify and become non-porous. Many types of porcelain in the past have been fired twice or even three times, to allow decoration using less robust pigments in overglaze enamel. History Chinese porcelain Porcelain was invented in China over a centuries-long development period beginning with "proto-porcelain" wares dating from the Shang dynasty (1600–1046 BCE). By the time of the Eastern Han dynasty (25–220 CE) these early glazed ceramic wares had developed into porcelain, which Chinese defined as high-fired ware. By the late Sui dynasty (581–618 CE) and early Tang dynasty (618–907 CE), the now-standard requirements of whiteness and translucency had been achieved, in types such as Ding ware. The wares were already exported to the Islamic world, where they were highly prized. Eventually, porcelain and the expertise required to create it began to spread into other areas of East Asia. During the Song dynasty (960–1279 CE), artistry and production had reached new heights. The manufacture of porcelain became highly organised, and the dragon kilns excavated from this period could fire as many as 25,000 pieces at a time, and over 100,000 by the end of the period. While Xing ware is regarded as among the greatest of the Tang dynasty porcelain, Ding ware became the premier porcelain of the Song dynasty. By the Ming dynasty, production of the finest wares for the court was concentrated in a single city, and Jingdezhen porcelain, originally owned by the imperial government, remains the centre of Chinese porcelain production. By the time of the Ming dynasty (1368–1644 CE), porcelain wares were being exported to Asia and Europe. Some of the most well-known Chinese porcelain art styles arrived in Europe during this era, such as the coveted "blue-and-white" wares. The Ming dynasty controlled much of the porcelain trade, which was expanded to Asia, Africa and Europe via the Silk Road. In 1517, Portuguese merchants began direct trade by sea with the Ming dynasty, and in 1598, Dutch merchants followed. Some porcelains were more highly valued than others in imperial China. The most valued types can be identified by their association with the court, either as tribute offerings, or as products of kilns under imperial supervision. Since the Yuan dynasty, the largest and best centre of production has made Jingdezhen porcelain. During the Ming dynasty, Jingdezhen porcelain had become a source of imperial pride. The Yongle emperor erected a white porcelain brick-faced pagoda at Nanjing, and an exceptionally smoothly glazed type of white porcelain is peculiar to his reign. Jingdezhen porcelain's fame came to a peak during the Qing dynasty. 
Japanese porcelain Although the Japanese elite were keen importers of Chinese porcelain from early on, they were not able to make their own until the arrival of Korean potters who were taken captive during the Japanese invasions of Korea (1592–1598). They brought an improved type of kiln, and one of them spotted a source of porcelain clay near Arita, and before long several kilns had started in the region. At first their wares were similar to the cheaper and cruder Chinese porcelains with underglaze blue decoration that were already widely sold in Japan; this style was to continue for cheaper everyday wares until the 20th century. Exports to Europe began around 1660, through the Chinese and the Dutch East India Company, the only Europeans allowed a trading presence. Chinese exports had been seriously disrupted by civil wars as the Ming dynasty fell apart, and the Japanese exports increased rapidly to fill the gap. At first the wares used European shapes and mostly Chinese decoration, as the Chinese had done, but gradually original Japanese styles developed. Nabeshima ware was produced in kilns owned by the families of feudal lords, and was decorated in the Japanese tradition, much of it related to textile design. This was not initially exported, but used for gifts to other aristocratic families. Imari ware and Kakiemon are broad terms for styles of export porcelain with overglaze "enamelled" decoration begun in the early period, both with many sub-types. A great range of styles and manufacturing centres were in use by the start of the 19th century, and as Japan opened to trade in the second half, exports expanded hugely and quality generally declined. Much traditional porcelain continues to replicate older methods of production and styles, and there are several modern industrial manufacturers. By the early 1900s, Filipino artisans who had worked in Japanese porcelain centres for much of their lives introduced the craft to the native population in the Philippines, although oral literature from Cebu in the central Philippines notes that porcelain was already being produced locally during the time of Cebu's early rulers, before the arrival of colonizers in the 16th century. Korean porcelain Olive green glaze was introduced in the late Silla Dynasty. Ceramics from Silla are generally leaf-shaped, a very common shape in Korea. Korean celadon comes in a variety of colors, from turquoise to putty. Additionally, in the late 13th century, the inlay technique of creating pigmented patterns by filling carved recesses in the pottery with white and red clay was frequently used. The main difference from those in China is that many specimens have inlay decoration under the glaze. Most Korean ceramics from the Joseon Dynasty (1392–1910) are of excellent decorative quality; pieces often have a melon shape and are asymmetrical. European porcelain Imported Chinese porcelains were held in such great esteem in Europe that in English china became a commonly used synonym for the Italian-derived porcelain. The first mention of porcelain in Europe is in Il Milione by Marco Polo in the 13th century. Apart from copying Chinese porcelain in faience (tin-glazed earthenware), the soft-paste Medici porcelain in 16th-century Florence was the first real European attempt to reproduce it, with little success. Early in the 16th century, Portuguese traders returned home with samples of kaolin, which they discovered in China to be essential in the production of porcelain wares. 
However, the Chinese techniques and composition used to manufacture porcelain were not yet fully understood. Countless experiments to produce porcelain had unpredictable results and met with failure. In the German state of Saxony, the search concluded in 1708 when Ehrenfried Walther von Tschirnhaus produced a hard, white, translucent type of porcelain specimen with a combination of ingredients, including kaolin and alabaster, mined from a Saxon mine in Colditz. It was a closely guarded trade secret of the Saxon enterprise. In 1712, many of the elaborate Chinese porcelain manufacturing secrets were revealed throughout Europe by the French Jesuit father Francois Xavier d'Entrecolles and soon published in the Lettres édifiantes et curieuses de Chine par des missionnaires jésuites. The secrets, which d'Entrecolles read about and witnessed in China, were now known and began seeing use in Europe. Meissen Von Tschirnhaus along with Johann Friedrich Böttger were employed by Augustus II, King of Poland and Elector of Saxony, who sponsored their work in Dresden and in the town of Meissen. Tschirnhaus had a wide knowledge of science and had been involved in the European quest to perfect porcelain manufacture when, in 1705, Böttger was appointed to assist him in this task. Böttger had originally been trained as a pharmacist; after he turned to alchemical research, he claimed to have known the secret of transmuting dross into gold, which attracted the attention of Augustus. Imprisoned by Augustus as an incentive to hasten his research, Böttger was obliged to work with other alchemists in the futile search for transmutation and was eventually assigned to assist Tschirnhaus. One of the first results of the collaboration between the two was the development of a red stoneware that resembled that of Yixing. A workshop note records that the first specimen of hard, white and vitrified European porcelain was produced in 1708. At the time, the research was still being supervised by Tschirnhaus; however, he died in October of that year. It was left to Böttger to report to Augustus in March 1709 that he could make porcelain. For this reason, credit for the European discovery of porcelain is traditionally ascribed to him rather than Tschirnhaus. The Meissen factory was established in 1710 after the development of a kiln and a glaze suitable for use with Böttger's porcelain, which required firing at temperatures of up to to achieve translucence. Meissen porcelain was once-fired, or green-fired. It was noted for its great resistance to thermal shock; a visitor to the factory in Böttger's time reported having seen a white-hot teapot being removed from the kiln and dropped into cold water without damage. Although widely disbelieved this has been replicated in modern times. Russian porcelain In 1744, Elizabeth of Russia signed an agreement to establish the first porcelain manufactory; previously it had to be imported. The technology of making "white gold" was carefully hidden by its creators. Peter the Great had tried to reveal the "big porcelain secret", and sent an agent to the Meissen factory, and finally hired a porcelain master from abroad. This relied on the research of the Russian scientist Dmitry Ivanovich Vinogradov. His development of porcelain manufacturing technology was not based on secrets learned through third parties, but was the result of painstaking work and careful analysis. 
Thanks to this, by 1760 the Imperial Porcelain Factory in Saint Petersburg had become one of the major European factories producing tableware, and later porcelain figurines. Eventually other factories opened: Gardner porcelain, Dulyovo (1832), Kuznetsovsky porcelain, Popovsky porcelain, and Gzhel. During the twentieth century, under Soviet governments, ceramics continued to be a popular art form, supported by the state, with an increasingly propagandist role. One artist, who worked at the Baranovsky Porcelain Factory and at the Experimental Ceramic and Artistic Plant in Kyiv, was Oksana Zhnikrup, whose porcelain figures of the ballet and the circus were widely known. Soft paste porcelain The pastes produced by combining clay and powdered glass (frit) were called Frittenporzellan in Germany and frita in Spain. In France they were known as pâte tendre and in England as "soft-paste". They appear to have been given this name because they do not easily retain their shape in the wet state, or because they tend to slump in the kiln under high temperature, or because the body and the glaze can be easily scratched. France Experiments at Rouen produced the earliest soft-paste in France, but the first important French soft-paste porcelain was made at the Saint-Cloud factory before 1702. Soft-paste factories were established with the Chantilly manufactory in 1730 and at Mennecy in 1750. The Vincennes porcelain factory was established in 1740, moving to larger premises at Sèvres in 1756. Vincennes soft-paste was whiter and freer of imperfections than any of its French rivals, which put Vincennes/Sèvres porcelain in the leading position in France and throughout the whole of Europe in the second half of the 18th century. Italy Doccia porcelain of Florence was founded in 1735 and remains in production, unlike Capodimonte porcelain which was moved from Naples to Madrid by its royal owner, after producing from 1743 to 1759. After a gap of 15 years Naples porcelain was produced from 1771 to 1806, specializing in Neoclassical styles. All these were very successful, with large outputs of high-quality wares. In and around Venice, Francesco Vezzi was producing hard-paste from around 1720 to 1735; survivals of Vezzi porcelain are very rare, but less so than from the Hewelke factory, which only lasted from 1758 to 1763. The soft-paste Cozzi factory fared better, lasting from 1764 to 1812. The Le Nove factory produced from about 1752 to 1773, then was revived from 1781 to 1802. England The first soft-paste in England was demonstrated by Thomas Briand to the Royal Society in 1742 and is believed to have been based on the Saint-Cloud formula. In 1749, Thomas Frye took out a patent on a porcelain containing bone ash. This was the first bone china, subsequently perfected by Josiah Spode. William Cookworthy discovered deposits of kaolin in Cornwall, and his factory at Plymouth, established in 1768, used kaolin and china stone to make hard-paste porcelain with a body composition similar to that of the Chinese porcelains of the early 18th century. But the great success of English ceramics in the 18th century was based on soft-paste porcelain, and refined earthenwares such as creamware, which could compete with porcelain, and had devastated the faience industries of France and other continental countries by the end of the century. Most English porcelain from the late 18th century to the present is bone china. 
In the twenty-five years after Briand's demonstration, a number of factories were founded in England to make soft-paste tableware and figures: Chelsea (1743) Bow (1745) St James's (1748) Bristol porcelain (1748) Longton Hall (1750) Royal Crown Derby (1750 or 1757) Royal Worcester (1751) Lowestoft porcelain (1757) Wedgwood (1759) Spode (1767) Applications other than decorative and tableware Electric insulators Porcelain has been used for electrical insulators since at least 1878, with another source reporting earlier use of porcelain insulators on the telegraph line between Frankfurt and Berlin. It is widely used for insulators in electrical power transmission systems due to its high stability of electrical, mechanical and thermal properties even in harsh environments. A body for electrical porcelain typically contains varying proportions of ball clay, kaolin, feldspar, quartz, calcined alumina and calcined bauxite. A variety of secondary materials can also be used, such as binders which burn off during firing. UK manufacturers typically fired the porcelain to a maximum of 1200 °C in an oxidising atmosphere, whereas reduction firing is standard practice at Chinese manufacturers. In 2018, a porcelain bushing insulator manufactured by NGK in Handa, Aichi Prefecture, Japan, was certified as the world's largest ceramic structure by Guinness World Records. It is 11.3 m in height and 1.5 m in diameter. The global market for high-voltage insulators was estimated to be worth US$4.95 billion in 2015, of which porcelain accounts for just over 48%. Chemical porcelain A type of porcelain characterised by low thermal expansion, high mechanical strength and high chemical resistance, used for laboratory ware such as reaction vessels, combustion boats, evaporating dishes and Büchner funnels. Raw materials for the body include kaolin, quartz, feldspar, calcined alumina, and possibly also low percentages of other materials. A number of international standards specify the properties of the porcelain, such as ASTM C515. Tiles A porcelain tile has been defined as 'a ceramic mosaic tile or paver that is generally made by the dust-pressed method of a composition resulting in a tile that is dense, fine-grained, and smooth with sharply formed face, usually impervious and having colors of the porcelain type which are usually of a clear, luminous type or granular blend thereof.' Manufacturers are found across the world, with Italy being the global leader, producing over 380 million square metres in 2006. Historic examples of rooms decorated entirely in porcelain tiles can be found in several palaces including ones at Galleria Sabauda in Turin, Museo di Doccia in Sesto Fiorentino, Museo di Capodimonte in Naples, the Royal Palace of Madrid and the nearby Royal Palace of Aranjuez, and the Porcelain Tower of Nanjing. More recent examples include the Dakin Building in Brisbane, California and the Gulf Building in Houston, Texas, which when constructed in 1929 had a porcelain logo on its exterior. Sanitaryware Because of its durability, inability to rust and impermeability, glazed porcelain has been in use for personal hygiene since at least the third quarter of the 17th century. During this period, porcelain chamber pots were commonly found in higher-class European households, and the term "bourdaloue" was used as the name for the pot. Whilst modern sanitaryware, such as closets and washbasins, is made of ceramic materials, porcelain is no longer used and vitreous china is the dominant material. 
Bath tubs are not made of porcelain, but of enamel on a metal base, usually of cast iron. Porcelain enamel is a marketing term used in the US, and is not porcelain but vitreous enamel. Dental porcelain Dental porcelain is used for crowns, bridges and veneers. A formulation of dental porcelain is 70-85% feldspar, 12-25% quartz, 3-5% kaolin, up to 15% glass and around 1% colourants. Manufacturers The Americas Brazil Germer Porcelanas Finas Porcelana Schmidt United States Blue Ridge CoorsTek, Inc. Franciscan Lenox Lotus Ware Pickard China Asia China Ding ware Jingdezhen porcelain Iran Maghsoud Group of Factories, (1993–present) Zarin Iran Porcelain Industries, (1881–present) Japan Hirado ware Kakiemon Nabeshima ware Narumi Noritake Malaysia Royal Selangor South Korea Haengnam Chinaware Hankook Chinaware Sri Lanka Dankotuwa Porcelain Noritake Lanka Porcelain Royal Fernwood Porcelain Taiwan Franz Collection Turkey Yildiz Porselen (1890–1936, 1994–present) Kütahya Porselen (1970–present) Güral Porselen (1989–present) Porland Porselen (1976–present) Istanbul Porselen (1963 – early 1990s) Sümerbank Porselen (1957–1994) United Arab Emirates RAK Porcelain Vietnam Minh Long I porcelain (1970–present) Bát Tràng porcelain (1352–present) Europe Austria Vienna Porcelain Manufactory, 1718–1864 Vienna Porcelain Manufactory Augarten, 1923–present Croatia Inkerpor (1953–present) Czech Republic Haas & Czjzek, Horní Slavkov (1792–2011) Thun 1794, Klášterec nad Ohří (1794–present) Český porcelán a.s., Dubí, Eichwelder Porzellan und Ofenfabriken Bloch & Co. Böhmen (1864–present) Rudolf Kämpf, Nové Sedlo (Sokolov District) (1907–present) Denmark Aluminia Bing & Grøndahl Denmark porcelain P. Ipsens Enke Kastrup Vaerk Kronjyden Porcelænshaven Royal Copenhagen (1775–present) GreenGate Finland Arabia France Saint-Cloud porcelain (1693–1766) Chantilly porcelain (1730–1800) Vincennes porcelain (1740–1756) Mennecy-Villeroy porcelain (1745–1765) Sèvres porcelain (1756–present) Revol porcelain (1789–present) Limoges porcelain Haviland porcelain Germany Current porcelain manufacturers in Germany Hungary Hollóháza Porcelain Manufactory (1777–present) Herend Porcelain Manufacture (1826–present) Zsolnay Porcelain Manufacture (1853–present) Italy Richard-Ginori 1735 Manifattura di Doccia (1735–present) Capodimonte porcelain (1743–1759) Naples porcelain (1771–1806) Manifattura Italiana Porcellane Artistiche Fabris (1922–1972) Mangani SRL, Porcellane d'Arte (Florence) Lithuania Jiesia Netherlands (1883–1916) Loosdrechts Porselein Weesp Porselein Norway Egersund porcelain Figgjo (1941–present) Herrebøe porcelain Porsgrund Stavangerflint Poland AS Ćmielów Fabryka Fajansu i Porcelany Polskie Fabryki Porcelany "Ćmielów" i "Chodzież" S.A. Kristoff Porcelana Lubiana S.A. Portugal Vista Alegre Sociedade Porcelanas de Alcobaça Costa Verde (company), located in the district of Aveiro Russia Imperial Porcelain Factory, Saint Petersburg (1744–present) Verbilki Porcelain (1766–present), Verbilki near Taldom Gzhel ceramics (1802–present), Gzhel Dulevo Farfor (1832–present), Likino-Dulyovo Spain Buen Retiro Royal Porcelain Factory (1760–1812) Real Fábrica de Sargadelos (1808–present, intermittently) Porvasal Sweden Rörstrand Gustavsberg porcelain Switzerland Suisse Langenthal United Kingdom Aynsley China (1775–present) Belleek (1884–present) Bow porcelain factory (1747–1776) Caughley porcelain Chelsea porcelain factory (c. 
1745; merged with Derby in 1770) Coalport porcelain Davenport Goss crested china Liverpool porcelain Longton Hall porcelain Lowestoft Porcelain Factory Mintons Ltd (1793–1968; merged with Royal Doulton) Nantgarw Pottery New Hall porcelain Plymouth Porcelain Rockingham Pottery Royal Crown Derby (1750/57–present) Royal Doulton (1815–2009; acquired by Fiskars) Royal Worcester (1751–2008; acquired by Portmeirion Pottery) Spode (1767–2008; acquired by Portmeirion Pottery) Saint James's Factory (or "Girl-in-a-Swing", 1750s) Swansea porcelain Vauxhall porcelain Wedgwood, (factory 1759–present, porcelain 1812–1829, and modern. Acquired by Fiskars) See also Blue and white porcelain List of porcelain manufacturers Notes and references Notes References Sources Battie, David, ed., Sotheby's Concise Encyclopedia of Porcelain, 1990, Conran Octopus. Le Corbellier, Clare, Eighteenth-century Italian porcelain, 1985, Metropolitan Museum of Art, (fully available online as PDF) Smith, Lawrence, Harris, Victor and Clark, Timothy, Japanese Art: Masterpieces in the British Museum, 1990, British Museum Publications, Vainker, S.J., Chinese Pottery and Porcelain, 1991, British Museum Press, 9780714114705 Watson, William ed., The Great Japan Exhibition: Art of the Edo Period 1600–1868, 1981, Royal Academy of Arts/Weidenfeld & Nicolson Further reading Burton, William (1906). Porcelain, Its Nature, Art and Manufacture. London: Batsford. Combined Nomenclature of the European Communities – EC Commission in Luxembourg, 1987. Gleeson, Janet, The Arcanum: The Extraordinary True Story of the Invention of European Porcelain, 1998, Bantam Press. Valenstein, S. (1998). A Handbook of Chinese ceramics, Metropolitan Museum of Art, New York. . External links How porcelain is made How bisque porcelain is made ArtLex Art Dictionary – Porcelain Ceramic materials Chinese culture Chinese inventions Dielectrics Materials with minor glass phase Pottery Tableware
Porcelain
[ "Physics", "Engineering" ]
6,431
[ "Materials", "Ceramic materials", "Ceramic engineering", "Dielectrics", "Matter" ]
167,740
https://en.wikipedia.org/wiki/Insult
An insult is an expression, statement, or behavior that is often deliberately disrespectful, offensive, scornful, or derogatory towards an individual or a group. Insults can be intentional or unintentional, and they often aim to belittle, offend, or humiliate the target. While intentional insults can sometimes include factual information, they are typically presented in a pejorative manner, intended to provoke a negative emotional response or to cause harm. Insults can also be made unintentionally or playfully, and they may still have negative effects even when no offence was intended. The impact and meaning of an insult vary with the speaker's intent, the recipient's understanding, and the social setting and social norms, including cultural references and meanings. History In ancient Rome, political speeches and debates were known for their harshness and personal attacks. Historians suggest that insults and verbal attacks were common in the political discourse of the time. This practice reflected the highly confrontational nature of political engagement in ancient Rome. Many religious texts and beliefs have also contributed to views on insults and the implications of making insults in anger. Buddhism teaches that 'Right Speech' is a part of the Noble Eightfold Path. In Christianity, for example, the Sermon on the Mount delivered by Jesus includes teachings on the significance of anger. In it, Jesus emphasized the importance of managing one's emotions and refraining from judgment. In addition to political contexts, history also reveals unusual instances of insults. The Cadaver Synod was an event in which Pope Stephen VI held a posthumous trial of Pope Formosus in 897 AD. Stephen, who became pope after Formosus, had his predecessor's body dug up, dressed, and placed on a throne to stand trial. Unintentional Insults An example of an unintentional insult may be not tasting a dessert made by a host. Comments made carelessly can also become unintentional insults. Other examples include careless comments about facial features, personality traits, or personal taste (e.g. in music), underestimating someone's abilities or interests, questions that invoke stereotypes, jokes, or even walking away from someone; all of these may cause offence accidentally. Jocular exchange Lacan considered insults a primary form of social interaction, central to the imaginary order – "a situation that is symbolized in the 'Yah-boo, so are you' of the transitivist quarrel, the original form of aggressive communication". Erving Goffman points out that every "crack or remark set up the possibility of a counter-riposte, topper, or squelch, that is, a comeback". He cites the example of possible interchanges at a dance in a school gym. Backhanded compliments A backhanded (or left-handed) compliment, or asteism, is an insult that is disguised as, or accompanied by, a compliment, especially in situations where the belittling or condescension is intentional. Examples of backhanded compliments include, but are not limited to: "I did not expect you to ace that exam. Good for you.", which could impugn the target's success as a fluke. "That skirt makes you look far thinner.", insinuating hidden fat, with the implication that fat is something to be ashamed of. 
"I wish I could be as straightforward as you, but I always try to get along with everyone.", insinuating an overbearing attitude. "I like you. You have the boldness of a much younger person.", insinuating decline with age. Negging is a type of backhanded compliment used for emotional manipulation or as a seduction method. The term was coined and prescribed by pickup artists. Negging is often viewed as a straightforward insult rather than as a pick-up line, in spite of the fact that proponents of the technique traditionally stress it is not an insult. Personal attacks A personal attack is an insult which is directed at some attribute of the person. The Federal Communications Commission's personal attack rule, issued under the Communications Act of 1934, defined a personal attack as one made upon the honesty, character, integrity, or like personal qualities. Personal attacks are generally considered a fallacy when used in arguments since they do not attempt to rebut the opposing side's argument but instead attack the qualities of the person making it. Sexuality Verbal insults often take a phallic or pudendal form. This includes profanity, and may also include insults to one's sexuality. There are also insults pertaining to the extent of one's sexual activity. For example, according to James Bloodworth, "incel" "has gradually crept into the vocabulary of every internet troll, sometimes being used against men who blame and harass women for not wanting to sleep with them." Entertainment Insults in poetic form have been practiced throughout history, more often as entertainment than out of malice. Flyting is a contest consisting of the exchange of insults between two parties, often conducted in verse; it became public entertainment in Scotland in the 15th and 16th centuries. Senna is a form of Old Norse Eddic poetry consisting of an exchange of insults between participants. O du eselhafter Peierl (Oh, you asinine Peierl), composed by Wolfgang Amadeus Mozart, was meant in fun, as mocking, scatological humor directed at a friend of Mozart's. More modern versions include poetry slam, the dozens, the diss song and battle rap. In the 1980s Masters of the Universe franchise, the character of Skeletor became known for insulting those around him with comedic putdowns. Insult comedy is now a recognised comedy genre. Anatomies Various typologies of insults have been proposed over the years. Ethologist Desmond Morris, noting that "almost any action can operate as an Insult Signal if it is performed out of its appropriate context – at the wrong time or in the wrong place", classes such signals in ten "basic categories": Uninterest signals Boredom signals Impatience signals Superiority signals Deformed-compliment signals Mock-discomfort signals Rejection signals Mockery signals Symbolic insults Dirt signals Elizabethans took great interest in such analyses, distinguishing, for example, the "fleering frump ... when we give a mock with a scornful countenance as in some smiling sort looking aside or by drawing the lip awry, or shrinking up the nose". Shakespeare humorously set up an insult hierarchy of seven "degrees": the first, the Retort Courteous; the second, the Quip Modest; the third, the Reply Churlish; the fourth, the Reproof Valiant; the fifth, the Countercheck Quarrelsome; the sixth, the Lie with Circumstance; the seventh, the Lie Direct. Perceptions What qualifies as an insult is also determined both by the individual social situation and by changing social mores. 
Thus on one hand the insulting "obscene invitations of a man to a strange girl can be the spicy endearments of a husband to his wife". See also References Further reading Thomas Conley: Toward a rhetoric of insult. University of Chicago Press, 2010, . External links Abuse Harassment and bullying Emotions Pejorative terms
Insult
[ "Biology" ]
1,520
[ "Behavior", "Abuse", "Harassment and bullying", "Aggression", "Human behavior" ]
167,741
https://en.wikipedia.org/wiki/Hydrographic%20survey
Hydrographic survey is the science of measurement and description of features which affect maritime navigation, marine construction, dredging, offshore wind farms, offshore oil exploration and drilling, and related activities. Surveys may also be conducted to determine the route of subsea cables such as telecommunications cables, cables associated with wind farms, and HVDC power cables. Strong emphasis is placed on soundings, shorelines, tides, currents, seabed and submerged obstructions that relate to the previously mentioned activities. The term hydrography is sometimes used synonymously for maritime cartography, which in the final stages of the hydrographic process turns the raw data collected through hydrographic survey into information usable by the end user. Hydrographic data is collected under rules which vary depending on the acceptance authority. Traditionally conducted by ships with a sounding line or echo sounding, surveys are increasingly conducted with the aid of aircraft and sophisticated electronic sensor systems in shallow waters. Offshore survey is a specific discipline of hydrographic survey primarily concerned with the description of the condition of the seabed and of the subsea oilfield infrastructure that interacts with it. Organizations National and international offices Hydrographic offices evolved from naval heritage and are usually found within national naval structures, for example Spain's Instituto Hidrográfico de la Marina. Coordination of those organizations and standardization of their products, voluntarily undertaken with the goal of improving hydrography and safe navigation, is conducted by the International Hydrographic Organization (IHO). The IHO publishes Standards and Specifications followed by its Member States, as well as Memoranda of Understanding and Co-operative Agreements with hydrographic survey interests. The product of such hydrography is most often seen on nautical charts published by the national agencies and required by the International Maritime Organization (IMO), the Safety of Life at Sea (SOLAS) convention and national regulations to be carried on vessels for safety purposes. Increasingly those charts are provided and used in electronic form under IHO standards. Non-national agencies Governmental entities below the national level conduct or contract for hydrographic surveys for waters within their jurisdictions with both internal and contract assets. Such surveys are commonly conducted by national organizations, under their supervision, or under standards they have approved, particularly when the use is for the purposes of chart making and distribution or the dredging of state-controlled waters. In the United States, there is coordination with the National Hydrography Dataset in survey collection and publication. State environmental organizations publish hydrographic data relating to their mission. Private organizations Commercial entities also conduct large-scale hydrographic and geophysical surveying, particularly in the dredging, marine construction, oil exploration, and drilling industries. Industrial entities installing submarine communications or power cables require detailed surveys of cable routes prior to installation and increasingly use acoustic imagery equipment previously found only in military applications when conducting their surveys. Specialized companies exist that have both the equipment and expertise to contract with commercial and governmental entities to perform such surveys.
Companies, universities, and investment groups will often fund hydrographic surveys of public waterways prior to developing areas adjacent those waterways. Survey firms are also contracted to survey in support of design and engineering firms that are under contract for large public projects. Private surveys are also conducted before dredging operations and after these operations are completed. Companies with large private slips, docks, or other waterfront installations have their facilities and the open water near their facilities surveyed regularly, as do islands in areas subject to variable erosion such as in the Maldives. Methods Lead lines and sounding poles The history of hydrographic surveying dates almost as far back as that of sailing. For many centuries, a hydrographic survey required the use of lead lines – ropes or lines with depth markings attached to lead weights to make one end sink to the bottom when lowered over the side of a ship or boat – and sounding poles, which were poles with depth markings which could be thrust over the side until they touched bottom. In either case, the depths measured had to be read manually and recorded, as did the position of each measurement with regard to mapped reference points as determined by three-point sextant fixes. The process was labor-intensive and time-consuming and, although each individual depth measurement could be accurate, even a thorough survey as a practical matter could include only a limited number of sounding measurements relative to the area being surveyed, inevitably leaving gaps in coverage between soundings. Wire-drag surveying In 1904, wire-drag surveys were introduced into hydrography, and the United States Coast and Geodetic Survey′s Nicholas H. Heck played a prominent role in developing and perfecting the technique between 1906 and 1916. In the wire-drag method, a wire attached to two ships or boats and set at a certain depth by a system of weights and buoys was dragged between two points. If the wire encountered an obstruction, it would become taut and form a "V" shape. The location of the "V" revealed the position of submerged rocks, wrecks, and other obstructions, while the depth at which the wire was set showed the depth at which the obstruction was encountered. This method revolutionized hydrographic surveying, as it allowed a quicker, less laborious, and far more complete survey of an area than did the use of lead lines and sounding poles. From a navigational safety point of view, a wire-drag survey would not miss a hazard to navigation that projected above the drag wire depth. Prior to the advent of sidescan sonar, wire-drag surveying was the only method for searching large areas for obstructions and lost vessels and aircraft. Between 1906 and 1916, Heck expanded the capability of wire-drag systems from a relatively limited area to sweeps covering channels in width. The wire-drag technique was a major contribution to hydrographic surveying during much of the rest of the 20th century. So valuable was wire-drag surveying in the United States that for decades the U.S. Coast and Geodetic Survey, and later the National Oceanic and Atmospheric Administration, fielded a pair of sister ships of identical design specifically to work together on such surveys. 
USC&GS Marindin and USC&GS Ogden conducted wire-drag surveys together from 1919 to 1942, USC&GS Hilgard (ASV 82) and USC&GS Wainwright (ASV 83) took over from 1942 to 1967, and USC&GS Rude (ASV 90) (later NOAAS Rude (S 590)) and USC&GS Heck (ASV 91) (later NOAAS Heck (S 591)) worked together on wire-drag operations from 1967. The rise of new electronic technologies – sidescan sonar and multibeam swath systems – in the 1950s, 1960s and 1970s eventually made the wire-drag system obsolete. Sidescan sonar could create images of underwater obstructions with the same fidelity as aerial photography, while multibeam systems could generate depth data for 100 percent of the bottom in a surveyed area. These technologies allowed a single vessel to do what wire-drag surveying required two vessels to do, and wire-drag surveys finally came to an end in the early 1990s. Vessels were freed from working together on wire-drag surveys, and in the U.S. National Oceanic and Atmospheric Administration (NOAA), for example, Rude and Heck operated independently in their later years. Single-beam echosounders Single-beam echosounders and fathometers, which use sonar to measure the depth beneath a vessel, began to enter service in the 1930s. They greatly increased the speed of acquiring sounding data over what was possible with lead lines and sounding poles by allowing depths beneath the vessel to be gathered along a series of survey lines spaced at a specified distance. However, the method shared the weakness of earlier techniques in lacking depth information for the areas between the strips of sea bottom the vessel sounded. Multibeam echosounders A multibeam echosounder (MBES) is a type of sonar that is used to map the seabed. It emits acoustic waves in a fan shape beneath its transceiver. The time it takes for the sound waves to reflect off the seabed and return to the receiver is used to calculate the water depth. Unlike other sonars and echo sounders, an MBES uses beamforming to extract directional information from the returning soundwaves, producing a swath of depth soundings from a single ping. Explicit inclusion of phraseology like "For all MBES surveys for LINZ, high resolution, geo-referenced backscatter intensity is to be logged and rendered as a survey deliverable." in a set of contract survey requirements is a clear indication that the wider hydrographic community is embracing the benefits of MBES technology and, in particular, is accepting that an MBES which provides acoustic backscatter data is a valuable tool of the trade. The introduction of multispectral multibeam echosounders continues the trajectory of technological innovations providing the hydrographic surveying community with better tools for more rapidly acquiring better data for multiple uses. A multispectral multibeam echosounder is the culmination of many progressive advances in hydrography from the early days of acoustic soundings, when the primary concern about the strength of returning echoes from the bottom was whether or not they would be sufficiently large to be noted (detected). The operating frequencies of the early acoustic sounders were primarily determined by the available magnetostrictive and piezoelectric materials, whose physical dimensions could be modified by means of electrical current or voltage.
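The swath geometry described above for a multibeam echosounder — a two-way travel time and a steering angle for each beam — reduces to simple trigonometry once a sound speed is assumed. The following Python sketch is purely illustrative and is not any manufacturer's processing chain: the constant 1500 m/s sound speed, the straight-ray assumption, and all function names and sample numbers are assumptions for this example; real systems ray-trace through a measured sound velocity profile and apply motion, draft and tide corrections.

```python
import math

SOUND_SPEED_M_S = 1500.0  # assumed constant; real surveys use a measured sound velocity profile

def beam_solution(two_way_travel_time_s, beam_angle_deg):
    """Convert one beam's two-way travel time and steering angle (from vertical)
    into depth below the transducer and across-track distance.
    Assumes a straight ray path, constant sound speed, and no motion or tide correction."""
    slant_range = SOUND_SPEED_M_S * two_way_travel_time_s / 2.0  # one-way distance to the seabed
    angle = math.radians(beam_angle_deg)
    return slant_range * math.cos(angle), slant_range * math.sin(angle)  # (depth, across-track)

# One ping of a fan-shaped swath over a roughly flat, ~100 m deep seabed (illustrative numbers).
ping = [(-60, 0.267), (-30, 0.154), (0, 0.133), (30, 0.154), (60, 0.267)]  # (angle deg, travel time s)
for angle_deg, twtt in ping:
    depth, across = beam_solution(twtt, angle_deg)
    print(f"beam {angle_deg:+4d} deg: depth {depth:6.1f} m, across-track {across:+7.1f} m")
```

Run over one ping, the sketch shows how a single transmit cycle yields a whole across-track profile, which is the essential advantage of a swath system over a single vertical beam.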
Eventually it became apparent that, while the operating frequency of the early single vertical beam acoustic sounders had little or no bearing on the measured depths when the bottom was hard (composed primarily of sand, pebbles, cobbles, boulders, or rock), there was a noticeable frequency dependency of the measured depths when the bottom was soft (composed primarily of silt, mud or flocculent suspensions). It was observed that higher frequency single vertical beam echosounders could provide detectable echo amplitudes from high porosity sediments, even if those sediments appeared to be acoustically transparent at lower frequencies. In the late 1960s, single-beam hydrographic surveys were conducted using widely spaced track lines, and the shallow (peak) soundings in the bottom data were retained in preference to deeper soundings in the sounding record. During that same time period, early side scan sonar was introduced into the operational practices of shallow water hydrographic surveying. The frequencies of the early side scan sonars were a matter of engineering design expediency, and the most important aspect of the side scanning echoes was not the value of their amplitudes but rather that the amplitudes were spatially variable. In fact, important information was deduced about the shape of the bottom and manmade items on the bottom based on the regions where there were absences of detectable echo amplitudes (shadows). In 1979, in hopes of a technological solution to the problems of surveying in "floating mud", the Director of the National Ocean Survey (NOS) established an NOS study team to conduct investigations to determine the functional specifications for a replacement shallow water depth sounder. The outcome of the study was a class of vertical-beam depth sounders which is still widely used. These sounders ping simultaneously at two acoustic frequencies, separated by more than two octaves, making depth and echo-amplitude measurements that are concurrent, both spatially and temporally, albeit at a single vertical grazing angle. The first MBES generation was dedicated to mapping the seafloor in deep water. Those pioneering MBES made little or no explicit use of the amplitudes, as their objective was to obtain accurate measurements of the bathymetry (representing both the peaks and deeps). Furthermore, their technical characteristics did not make it easy to observe spatial variations in the echo amplitudes. Subsequent to the early MBES bathymetric surveys, and at the time when single frequency side scan sonar had begun to produce high quality images of the seabed capable of providing a degree of discrimination between different types of sediments, the potential of the echo amplitudes from an MBES was recognized. With Marty Klein's introduction of dual frequency (nominally 100 kHz and 500 kHz) side scan sonar, it was apparent that spatially and temporally coincident backscatter from any given seabed at those two widely separated acoustic frequencies would likely provide two separate and unique images of that seascape. Admittedly, the along-track insonification and receiving beam patterns were different, and due to the absence of bathymetric data, the precise backscatter grazing angles were unknown. However, the overlapping sets of side scanning across-track grazing angles at the two frequencies were always the same.
Following the grounding of the off Cape Cod, Massachusetts, in 1992, the emphasis for shallow water surveying migrated toward full bottom coverage surveys by employing MBES with increasing operating frequencies to further improve the spatial resolution of the soundings. Given that side scan sonar, with its across-track fan-shaped swath of insonification, had successfully exploited the cross-track variation in echo amplitudes, to achieve high quality images of the seabed, it seemed a natural progression that the fan-shaped across-track pattern of insonification associated with the new monotone higher frequency shallow water MBES, might also be exploited for seabed imagery. Images acquired under the initial attempts at MBES bottom imaging were less than stellar, but fortunately improvements were forthcoming. Side scan sonar parses the continual echo returns from a receive beam that is perfectly aligned with the insonification beam using time-after-transmit, a technique that is independent of water depth and the cross-track beam opening angle of the sonar receive transducer. The initial attempt at multibeam imagery employed multiple receive beams, which only partially overlapped the MBES fan-shaped insonification beam, to segment the continual echo returns into intervals that were dependent on water depth and receiver cross-track beam opening angle. Consequently, the segmented intervals were non-uniform in both their length of time and time-after-transmit. The backscatter from each ping in each of the beam-parsed segments was reduced to a single value and assigned to the same geographical coordinates as those assigned to that beam's measured sounding. In subsequent modifications to MBES bottom imaging, the echo sequence in each of the beam-parsed intervals was designated as a snippet. On each ping, each snippet from each beam was additionally parsed according to time-after-transmit. Each of the echo amplitude measurements made within a snippet from a particular beam was assigned a geographical position based on linear interpolation between positions assigned to the soundings measured, on that ping, in the two adjacent cross-track beams. The snippet modification to MBES imagery significantly improved the quality of the imagery by increasing the number of echo amplitude measurements available to be rendered as a pixel in the image and also by having a more uniform spatial distribution of the pixels in the image which represented an actual measured echo amplitude. The introduction of multispectral multibeam echosounders continued the progressive advances in hydrography. In particular, multispectral multibeam echosounders not only provide "multiple look" depth measurements of a seabed, they also provide multispectral backscatter data that are spatially and temporally coincident with those depth measurements. A multispectral multibeam echosounder directly computes a position of origin for each of the backscatter amplitudes in the output data set. Those positions are based on the backscatter measurements themselves and not by interpolation from some other derived data set. Consequently, multispectral multibeam imagery is more acute compared to previous multibeam imagery. The inherent precision of the bathymetric data from a multispectral multibeam echosounder is also a benefit to those users that may be attempting to employ the acoustic backscatter angular response function to discriminate between different sediment types. 
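As a concrete reading of the snippet geo-referencing described in the preceding passage, the sketch below spreads one beam's snippet samples between the sounding positions of the two adjacent beams on the same ping by linear interpolation. It is a deliberately simplified, hypothetical one-dimensional rendering of the idea — the function and variable names are invented, and actual implementations interpolate in time-after-transmit and in full geographic coordinates.

```python
def georeference_snippet(pos_adjacent_port, pos_adjacent_stbd, snippet_amplitudes):
    """Assign each amplitude sample in one beam's snippet an across-track position
    by linear interpolation between the soundings of the two adjacent beams (1-D toy model)."""
    n = len(snippet_amplitudes)
    pixels = []
    for i, amplitude in enumerate(snippet_amplitudes):
        fraction = (i + 0.5) / n  # sample's fractional place within the snippet interval
        position = pos_adjacent_port + fraction * (pos_adjacent_stbd - pos_adjacent_port)
        pixels.append((position, amplitude))  # one image pixel: (where, how strong)
    return pixels

# Five snippet samples between adjacent-beam soundings at 42.0 m and 45.0 m across-track.
print(georeference_snippet(42.0, 45.0, [180, 195, 210, 190, 175]))
```

The point of the exercise is the one made in the text: every amplitude sample, not just one value per beam, ends up with its own position, which is what improved the density and uniformity of multibeam imagery pixels.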
Multispectral multibeam echosounders reinforce the fact that spatially and temporally coincident backscatter from any given seabed at widely separated acoustic frequencies provides separate and unique images of the seascape. Crowdsourcing Crowdsourcing is also entering hydrographic surveying, with projects such as OpenSeaMap, TeamSurv and ARGUS. Here, volunteer vessels record position, depth, and time data using their standard navigation instruments, and the data is then post-processed to account for speed of sound, tidal, and other corrections. With this approach there is no need for a specific survey vessel or for professionally qualified surveyors to be on board, as the expertise is in the data processing that occurs once the data is uploaded to the server after the voyage. Apart from obvious cost savings, this also gives a continuous survey of an area, but the drawbacks are the time needed to recruit observers and to obtain a high enough density and quality of data. Although sometimes accurate to 0.1–0.2 m, this approach cannot substitute for a rigorous systematic survey where one is required. Nevertheless, the results are often adequate for many requirements where high resolution, high accuracy surveys are not required, are unaffordable, or simply have not been done yet. General Bathymetric Chart of the Oceans Modern integrated hydrographic surveying In suitable shallow-water areas lidar (light detection and ranging) may be used. Equipment can be installed on inflatable craft such as Zodiacs, small craft, autonomous underwater vehicles (AUVs), unmanned underwater vehicles (UUVs), remotely operated vehicles (ROVs) or large ships, and can include sidescan, single-beam and multibeam equipment. At one time different data collection methods and standards were used in collecting hydrographic data for maritime safety and for scientific or engineering bathymetric charts, but increasingly, with the aid of improved collection techniques and computer processing, the data is collected under one standard and extracted for specific uses. After data is collected, it has to undergo post-processing. A massive amount of data is collected during the typical hydrographic survey, often several soundings per square foot. Depending on the final use intended for the data (for example, navigation charts, digital terrain models, volume calculation for dredging, topography, or bathymetry), this data must be thinned out. It must also be corrected for errors (i.e., bad soundings) and for the effects of tides, heave, water level, salinity and thermoclines (water temperature differences), as the velocity of sound in water varies with temperature and salinity and affects accuracy. Usually the surveyor has additional data collection equipment on site to measure and record the data required for correcting the soundings. The final output of charts can be created with a combination of specialty charting software or a computer-aided design (CAD) package, usually AutoCAD. Although the accuracy of crowd-sourced surveying can rarely reach the standards of traditional methods, the algorithms used rely on a high data density to produce final results that are more accurate than single measurements. A comparison of crowd-sourced surveys with multibeam surveys indicates an accuracy of crowd-sourced surveys of around plus or minus 0.1 to 0.2 meter (about 4 to 8 inches).
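The post-processing steps named above for crowd-sourced and conventional data — correcting raw soundings for sound speed and tide, then relying on data density rather than any single measurement — can be illustrated with a toy reduction routine. This is an assumption-laden sketch, not the actual OpenSeaMap or TeamSurv algorithm: the correction model, the grid-cell averaging, and every name and sample value below are invented for illustration.

```python
from collections import defaultdict
from statistics import mean

def reduce_soundings(observations, true_sound_speed=1500.0, cell_size_deg=0.001):
    """Reduce crowd-sourced soundings to a crude gridded depth model (toy version).
    observations: dicts with lat, lon, raw_depth (m), tide (m above chart datum),
    and the sound speed the vessel's echosounder was configured to assume."""
    cells = defaultdict(list)
    for obs in observations:
        # Rescale the raw reading if the instrument assumed the wrong sound speed,
        # then subtract the tide to refer the depth to chart datum.
        depth = obs["raw_depth"] * (true_sound_speed / obs["instrument_sound_speed"]) - obs["tide"]
        key = (round(obs["lat"] / cell_size_deg), round(obs["lon"] / cell_size_deg))
        cells[key].append(depth)
    # Averaging many noisy soundings per cell is what lets dense crowd data approach
    # usable accuracy even though each individual measurement is rough.
    return {key: round(mean(depths), 2) for key, depths in cells.items()}

sample = [
    {"lat": 50.12341, "lon": -4.56781, "raw_depth": 12.4, "tide": 1.8, "instrument_sound_speed": 1500.0},
    {"lat": 50.12344, "lon": -4.56779, "raw_depth": 12.9, "tide": 1.8, "instrument_sound_speed": 1450.0},
]
print(reduce_soundings(sample))
```

Both sample observations fall into the same grid cell, so the output is a single averaged depth for that cell, mirroring the text's point that accuracy comes from density and processing rather than from any one volunteer sounding.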
See also References External links International Hydrographic Organization IHO – Download / OHI – Téléchargement NGA – Products and Services Available to the Public United Kingdom Hydrographic Office Indian Naval Hydrographic Department Australian Hydrographic Service (AHS) Armada Esapñola – Instituto Hidrográfico de la Marina NOAA, Office of Coast Survey, Survey Data NOAA Marine Operations (Survey Fleet) Hydro International (Professional journal for hydrography with technical and industry news articles.) NOAA maintains a massive database of survey results, charts, and data on the NOAA site. NOAA's Hydrographic Website NOS Data Explorer portal Hydrography Surveying Field surveys
Hydrographic survey
[ "Engineering", "Environmental_science" ]
4,048
[ "Hydrography", "Surveying", "Hydrology", "Civil engineering" ]
167,748
https://en.wikipedia.org/wiki/Sex%20position
A sex position is a positioning of the bodies that people use to engage in sexual intercourse or other sexual activities. Sexual acts are generally described by the positions the participants adopt in order to perform those acts. Though sexual intercourse generally involves penetration of the body of one person by another, sex positions commonly involve non-penetrative sexual activities as well. Three broad and overlapping categories of sexual activity are commonly practiced: vaginal sex, anal sex, and oral sex (mouth-on-genital or mouth-on-anus). Sex acts may also be part of a fourth category, manual sex, which is stimulating the genitals or anus using the fingers or hands. Some acts may include stimulation by a device (sex toy), such as a dildo or vibrator. There are numerous sex positions that participants may adopt in any of these types of sex acts, and some authors have argued that the number of sex positions is essentially limitless. History Sex manuals typically present a guide to sex positions. They have a long history. In the Greco-Roman era, a sex manual was written by Philaenis of Samos, possibly a hetaira (courtesan) of the Hellenistic period (3rd–1st century BC). The Kama Sutra of Vatsyayana, believed to have been written in the 1st to 6th centuries, has a notorious reputation as a sex manual. Different sex positions result in differences in the depth and angle of sexual penetration. Alfred Kinsey categorized six primary positions. The earliest known European medieval text dedicated to sexual positions is the Speculum al foderi (The Mirror of Coitus), a 15th-century Catalan text discovered in the 1970s. Exclusively penetrative These positions involve the insertion of one or more phallic objects (such as a penis, strap-on dildo, plug, or other nonporous object) into a vagina, anus or mouth. Penetrating partner on top with front entry The most commonly used sex position is the missionary position. In this position, the participants face each other. The receiving partner lies on their back with legs apart, while the penetrating partner lies on top. This position and the following variations may be used for vaginal or anal intercourse. The penetrating partner stands in front of the receiving partner, whose legs dangle over the edge of a bed or some other platform such as a table. With the receiving partner's legs lifted towards the ceiling and resting against the penetrating partner, this is sometimes called the butterfly position. This can also be done as a kneeling position. The receiving partner lies on their back. The penetrating partner stands and lifts the receiving partner's pelvis for penetration. A variant is for the receiving partner to rest their legs on the penetrating partner's shoulders. The receiving partner lies on their back, legs pulled up straight and knees near to the head. The penetrating partner holds the receiving partner's legs and penetrates from above. Similarly to the previous position, but the receiving partner's legs need not be straight and the penetrating partner wraps their arms around the receiving partner to push the legs as close as possible to the chest. Called the stopperage in Burton's translation of The Perfumed Garden. In the coital alignment technique, a woman is vaginally penetrated by a man, and the penetrating partner moves upward along the woman's body until the penis points downward, the dorsal side of the penis now rubbing against the clitoris.
The receiving partner crosses their feet behind their head (or at least puts their feet next to their ears), while lying on their back. The penetrating partner then holds the receiving partner tightly around each instep or ankle and lies on the receiving partner full-length. A variation is to have the receiving partner cross their ankles on their stomach, knees to shoulders, and then have the penetrating partner lie on the receiving partner's crossed ankles with their full weight. Called the Viennese oyster by The Joy of Sex. Penetrating from behind Most of these positions can be used for either vaginal or anal penetration. Variants include: The receiving partner is on all fours with their torso horizontal and the penetrating partner inserts either their penis or sex toy into either the vagina or anus from behind. The receiving partner's torso is angled downwards and the penetrating partner raises their own hips above those of the receiving partner for maximum penetration. The penetrating partner places their feet on each side of the receiving partner while keeping their knees bent and effectively raising up as high as possible while maintaining penetration. The penetrating partner's hands usually have to be placed on the receiving partner's back to keep from falling forward. The receiving partner kneels upright while the penetrating partner gently pulls the receiving partner's arms backwards at the wrists towards them. In the spoons position both partners lie on their side, facing the same direction. Variants of this technique include the following: The receiving partner lies on their side. The penetrating partner kneels and penetrates from behind. Alternatively, the penetrating partner can stand if the receiving partner is on a raised surface. The receiving partner lies facing down in prone position, possibly with their legs spread. The penetrating partner lies on top of them. The placement of a pillow beneath the receiving partner's hips can help increase stimulation in this position. The receiving partner lies face down, knees together. The penetrating partner lies on top with spread legs. The receiving partner lies on their side with their uppermost leg forward. The penetrating partner kneels astride the receiver's lowermost leg. Receiving partner on top Most of these positions can be used for either vaginal or anal penetration. When the receiving partner is a woman, these positions are sometimes called the woman on top, or cowgirl positions. A feature of these positions is that the penetrating partner lies on their back with the receiving partner on top: The receiving partner can kneel while straddling the penetrating partner, with the participants facing each other. Alternatively, the receiving partner can face away from the penetrating partner. This position is sometimes called the reverse cowgirl position. The receiving partner can arch back with hands on the ground. The receiving partner can squat (instead of kneel) facing the penetrating partner. The receiving partner can bring forward their knees against the ground. The penetrating partner lies with their upper back on a low table, couch, chair or edge of bed, keeping their feet flat on the floor and back parallel to floor. The receiving partner straddles them, also keeping their feet on the floor. Receiving partner can assume any of various positions. The lateral coital position was recommended by Masters and Johnson, and was preferred by three quarters of their heterosexual study participants after having tried it. 
The position involves the male on his back, with the female rolled slightly to the side so that her pelvis is atop his, but her weight is beside his. This position can also be used for anal penetration, and is not limited to heterosexual partners. Sitting and kneeling Most of these positions can be used for either vaginal or anal penetration. The penetrating partner sits on an area surface, legs outstretched. The receiving partner sits on top and wraps their legs around the penetrating partner. Called pounding on the spot in the Burton translation of The Perfumed Garden. If the penetrating partner sits cross-legged, it is called the lotus position or lotus flower. The position can be combined with fondling of erogenous zones. The penetrating partner sits in a chair. The receiving partner straddles penetrating partner and sits, facing the penetrating partner, feet on floor. This is sometimes called a lap dance, which is somewhat erroneous as a lap dance typically does not involve penetration. The receiving partner may also sit in reverse, with their back to the penetrating partner. The penetrating partner sits on a couch or in a chair that has armrests. The receiving partner sits in the penetrating partner's lap, perpendicular to penetrating partner, with their back against the armrest. The penetrating partner kneels while the receiving partner lies on their back, ankles on each side of penetrating partner's shoulders. Standing Most of these positions can be used for either vaginal or anal penetration. In the basic standing position, both partners stand facing each other. The following variations are possible: In the basic standing position, both partners stand facing each other and engage in vaginal sex. In order to match heights, the shorter partner can, for instance, stand on a stair or wear high heels. It may be easier to maintain solid thrusts if the woman has her back to a wall. With such a support, the Kama Sutra calls this position the Suspended Congress. This position is most often used in upright places, such as a wall in a bedroom or a shower. The penetrating partner stands, and the receiving partner wraps their arms around his neck, and their legs around his waist, thereby exposing either the vagina or anus to the man's penis. This position is made easier with the use of a solid object behind the receiver, as above. To assume this position, it can be easier to start with the receiving partner laying on their back on the edge of a bed; the penetrating partner puts his elbows under their knees, enters them, and then lifts them as he rises to a standing position. In Japan, this is colloquially called the Ekiben position, after a specific bento lunch box sold at train stations. Alternatively, the receiving partner can face away from the penetrating partner which allows for anal sex. This position is varied by having the receiving partner assume different semi-standing positions. For instance, they may bend at the waist, resting their hands or elbows on a table. Anal sex positions These positions involve anal penetration: Doggy style penetration maximizes the depth of penetration, but can pose the risk of pushing against the sigmoid colon. If the receiving partner is male, this increases the chances of stimulating the prostate. The penetrating partner controls the thrusting rhythm. A variation is the leapfrog position, in which the receiving partner angles their torso downward. 
The receiving partner may also lie flat and face down, with the penetrating partner straddling their thighs. In the missionary positions, to achieve optimal alignment, the receiving partner's legs should be in the air with the knees drawn towards their chest. Some sort of support (such as a pillow) under the receiving partner's hips can also be useful. The penetrating partner positions themselves between the receiving partner's legs. The penetrating partner controls the thrusting rhythm. This position is often cited as good for beginners, because it allows them to relax more fully than is usual in the doggy style position. The spoons position allows the receiving partner to control initial penetration and the depth, speed and force of subsequent thrusting. The receiving partner on top positions allow the receiving partner more control over the depth, rhythm and speed of penetration. More specifically, the receiving partner can slowly push their anus down on the penetrating partner, allowing time for their muscles to relax. Less common positions These positions are more innovative, and perhaps not as widely known or practiced as the ones listed above. The receiving partner lies on their back with knees up and legs apart. The penetrating partner lies on their side perpendicular to the receiver, with the penetrating partner's hips under the arch formed by receiver's legs. This position is sometimes called the T-square. The receiving partner's legs are together turning to one side while looking up towards the penetrator, who has spread legs and is kneeling straight behind the other's hips. The penetrator's hands are on the other's hips. This position can be called the modified T-square. The Seventh Posture of Burton's translation of The Perfumed Garden is an unusual position not described in other classical sex manuals. The receiving partner lies on their side. The penetrating partner faces the receiver, straddling the receiver's lower leg, and lifts the receiver's upper leg on either side of the body onto the crook of penetrating partner's elbow or onto the shoulder. While some references describe this position as being "for acrobats and not to be taken seriously", others have found it very comfortable, especially during pregnancy. The piledriver is a difficult position sometimes seen in porn videos. It is described in many ways by different sources. In a heterosexual context, the woman lies on her back, then raises her hips as high as possible, so that her partner, standing, can enter her vaginally or anally. The position places considerable strain on the woman's neck, so firm cushions should be used to support her. The receiver lies face down legs spread on the edge of the bed and parallel to the floor, while the penetrator stands behind, holding both legs. The rusty bike pump is similar to a piledriver where penetration is achieved from above at a downward angle with the receiving partner bottom side up. Others The receiving partner is on the bottom. The penetrating partner lies on top perpendicularly to them. The penetrating partner lies on their back, legs spread. The receiving partner is on their back on top of the penetrator, legs spread, facing the opposite direction. The penetrator and the receiver lie on their backs, heads pointed away from one another. Each places one leg on the other's shoulder (as a brace) and the other leg out somewhat to the side. The receiving partner lies on their back with the penetrating partner lying perpendicular. 
The receiving partner bends the knee closest to the penetrating partner's head enough so that there is room for the penetrating partner's waist to fit beneath it, while the penetrating partner's legs straddle the receiving partner's other leg. The in-and-out thrusting action will move more along a side-to-side rather than top-to-bottom axis. This position allows for breast stimulation during sex, for partners to maintain eye contact if they wish, and for a good view of both partners as they reach orgasm. The penetrating partner sits on edge of a bed or chair with feet spread wide on floor. The receiving partner lies on their back on the floor and drapes their legs and thighs over the legs of the penetrating partner. The penetrating partner holds the knees of the receiving partner and controls thrusts. Using furniture or special apparatus Most sex acts are typically performed on a bed or other simple platform. As the range of supports available increases, so does the range of positions that are possible. Ordinary furniture can be used for this purpose. Also, various forms of erotic furniture and other apparatus such as fisting slings and trapezes have been used to facilitate even more exotic sexual positions. Positions to promote or prevent conception Pregnancy is a potential result of any form of sexual activity where sperm comes in contact with the vagina; this is typically during vaginal sex, but pregnancy can result from anal sex, digital sex (fingering), oral sex, or by another body part, if sperm is transferred from one area to the vagina between a fertile female and a fertile male. Men and women are typically fertile during puberty. Though certain sexual positions are believed to produce more favorable results than others, none of these are effective means of contraception. Positions during pregnancy The goal is to prevent excessive pressure on the belly and to restrict penetration as required by the particular partners. Some of the positions below are popular positions for sex during pregnancy. Woman on top: takes the pressure off of the woman's abdomen and allows her to control the depth and frequency of thrusting. Woman on back: like the missionary, but with less pressure on abdomen or uterus. The woman lies on her back and raises her knees up towards her chest. The partner kneels between her legs and enters from the front. A pillow is placed under her bottom for added comfort. Sideways: also keeps pressure off of her abdomen while supporting her uterus at the same time. Spooning: very popular positions to use during the late stages of pregnancy; allowing only shallow penetration and relieves the pressure on the abdomen. Sitting: she mounts the sitting partner, relieving her abdomen of pressure. From behind: allowing her to support abdomen and breasts. Non-exclusively penetrative Oral sex positions Oral sex is genital stimulation by the mouth. It may be penetrative or non-penetrative, and may take place before, during, as, or following intercourse. It may also be performed simultaneously (for example, when one partner performs cunnilingus, while the other partner performs fellatio), or only one partner may perform upon the other; this creates a multitude of variations. Fellatio Fellatio is oral sex performed on the penis. Possible positions include: Sitting The receiver lies on their back while the partner kneels between the receiver's legs. The receiver lies on their back while the partner lies off to the side of their legs. 
The receiver sits in a chair, the partner kneels in front of them between their legs. Standing The receiver stands while the partner either kneels in front of them or sits (in a chair or on the edge of a bed, etc.) and bends forward. The receiver stands while the partner, also standing, bends forward at the waist. The receiver stands or crouches at the edge of the bed, facing the bed. The active partner lies on the bed with their head hanging over the edge of the bed backward. The receiver inserts their penis into the partner's mouth, usually to achieve deep throat penetration. Lying While the active partner lies on their back, the receiver assumes the missionary position but adjusted forward. The active partner (with breasts) lies on their back, and the receiver inserts their penis between the breasts, and into the mouth. Cunnilingus Cunnilingus is oral sex performed on the vulva. Possible positions include: The receiver lies on her back as in the missionary position. The active partner lies on their front between their legs. The active partner sits. The receiver stands facing away and bends at the hips. The active partner sits. The receiver stands or squats facing towards partner and may arch their back, to create further stimulation. The active partner lies on their back while the receiver kneels with their legs at their sides and their vulva on their mouth. In other words, the receiver sits on the partner's face. The receiver rests on all fours as in the doggy style position. The partner lies on their back with their head under their vulva. Their feet may commonly extend off the bed and rest on the floor. The receiver stands, possibly bracing themself against a wall. The active partner kneels in front of them. The receiver sits on the bed with their legs open, the active partner kneels in front of them. The receiver is upside-down (standing on hands, held by partner, or using support, such as bondage or furniture), with the active partner standing or kneeling (depending on elevation) in front or behind. Such a position may be difficult to achieve, or maintain for extended time periods, but the rush of blood to the brain can alter stimulation's effect. The receiver stands on hands, resting each leg on either side of the active partner's head, with the active partner standing or kneeling facing them. Depending on which way up the receiver is facing, different stimulation and levels of comfort may be available. Sixty-nine Simultaneous oral sex between two people is called 69. They can lie side-by-side, lie one on top of the other, or stand with one partner holding the other upside down. Anilingus Anilingus is oral sex performed on the anus. Positions for anilingus are often variants on those for genital-oral sex. Possible positions include: The passive partner is on all fours in the doggy position with the active partner behind. The passive partner is on their back in the missionary position with their legs up. The passive partner on top in the 69 position. The rusty trombone, in which a male stands while the active partner performs both anilingus from behind, generally from a kneeling position, and also manually stimulates the standing partner's penis, thus somewhat resembling someone playing the trombone. Other positions Fingering of the vulva, vagina or anus. Fisting: inserting the entire hand into the vagina or rectum. 
Non-penetrative Non-penetrative sex or frottage generally refers to a sexual activity that excludes penetration, and often includes rubbing one's genitals on one's sexual partner. This may include the partner's genitals or buttocks, and can involve different sex positions. As part of foreplay or to avoid penetrative sex, people engage in a variety of non-penetrative sexual behavior, which may or may not lead to orgasm. Dry humping: frottage while clothed. This act is common, although not essential, in the dance style known as "grinding". Handjob: manual stimulation of a partner's penis. Fingering: manual stimulation of a partner's vulva. Footjob: using the feet to stimulate the genitals. Mammary intercourse: using the breasts together to stimulate the penis through the cleavage. Axillary intercourse: with the penis in the armpit. Commonly known as "bagpiping". Orgasm control: By self or by a partner managing the physical stimulation and sensation connected with the emotional and physiologic excitement levels. Through the practice of masturbation, an individual can learn to develop control of their own body's orgasmic response and timing. In partnered stimulation, either partner can control their own orgasmic response and timing. With mutual agreement, either partner can similarly learn to control or enhance their partner's orgasmic response and timing. Partnered stimulation orgasm techniques referred to as expanded orgasm, extended orgasm or orgasm control can be learned and practiced for either partner to refine their control of the orgasmic response of the other. Partners mutually choose which is in control or in response to the other. The slang term humping may refer to masturbation—thrusting one's genitals against the surface of non-sexual objects, clothed or unclothed; or it may refer to penetrative sex. Genital-genital rubbing Genital-genital rubbing (often termed GG rubbing by primatologists to describe the ubiquitous behavior among female bonobos) is the sexual act of mutually rubbing genitals; it is commonly grouped with frottage, as well as other terms, such as non-penetrative sex or outercourse: Intercrural sex or interfemoral sex: the penis is placed between the partner's thighs, perhaps rubbing the vulva, scrotum or perineum. Frot: two males mutually rubbing penises together. Tribadism: two females mutually rubbing vulvae together. Docking: inserting the glans penis into the foreskin of another penis. Group sex People may participate in group sex. While group sex does not imply that all participants must be in sexual contact with all others simultaneously, some positions are only possible with three or more people. As with the positions listed above, more group sex positions become practical if erotic furniture is used. Threesomes When three people have sex with each other, it is called a threesome. Possible ways of having all partners in sexual contact with each include some of the following: One person performs oral sex on one partner while they engage in receptive anal or vaginal intercourse with the other partner. Sometimes called a spit roast. The 369 position is where two people engage in oral sex in the 69 position while a third person positions himself to penetrate one of the others; usually a man engaging in sex doggie-style with the woman on top in the 69 position. A man has vaginal or anal sex with one partner, while himself being anally penetrated by another (possibly with a strap-on dildo). 
Two participants engage in cowgirl position, a third straddles man's face allowing him to go down on them. Generally called a double cowgirl. Three partners lie or stand in parallel, with one between the other two. Sometimes called a sandwich. This term may specifically refer to the double penetration of a woman, with one penis in her anus, and the other in her vagina or of a male, with two penises in his anus. Two participants have vaginal/anal sex with each other, and one/both perform oral sex on a third. Three people perform oral/vaginal/anal sex on one another simultaneously, commonly called a daisy chain. The slang term lucky Pierre is sometimes used in reference to the person playing the middle role in a threesome, being anally penetrated while engaging in penetrative anal or vaginal sex. Foursomes A 469 is a four-person sexual position where two individuals engage in 69 oral sex while a third and a fourth person both position themselves on each end to penetrate the two engaged in simultaneous oral sex; similar to a 369, with the addition of a fourth person. With many participants These positions can be expanded to accommodate any number of participants: A group of males masturbating is called a circle jerk. Sexual intercourse involving multiple women in which one man is the central focus is known as reverse gangbang. A group of males masturbating and ejaculating on one person's face is known as bukkake. A group of men, women, or both, each performing oral sex upon each other, in a circular arrangement, is a daisy chain. When one woman or man is given the serial or parallel attention of many, often involving a queue (pulling a train), it is often termed a gang bang. Multiple penetration A person may be sexually penetrated multiple times simultaneously. Penetration may involve use of fingers, toes, sex toys, or penises. Scenes of multiple penetration are common in pornography. If one person is penetrated by two objects, it is generically called double penetration (DP). Double penetration of the vagina, anus, or mouth can involve: Simultaneous penetration of the anus by two penises or other objects. This is commonly called double anal penetration (DAP). Simultaneous penetration of the vagina by two penises or other objects. This is commonly called double vaginal penetration (DVP) or double stuffing. Simultaneous penetration of the vagina and anus. The shocker accomplishes this using several fingers of one hand. Simultaneous penetration of the mouth and either the vagina or anus. If the penetrating objects are penises, this is sometimes called the spit roast, the Chinese finger trap, or the Eiffel tower. Cultural differences and preferences Sexual practices vary between cultures. Latin American couples that recorded their sexual activities do not practice the missionary position as much as couples from United States reported. The duration of sexual intercourse seems to be similar amongst European and Latin American couples. See also Bondage positions and methods References Further reading Historical Kama Sutra The Perfumed Garden Modern (235 pages) (272 pages) (101 pages—design criteria for assistive furniture, with sections on accommodation of disabled persons.) (96 pages) (376 pages) External links Sex positions Sexology Sexual intercourse
Sex position
[ "Biology" ]
5,571
[ "Behavior", "Sexual acts", "Sexology", "Behavioural sciences", "Sexuality", "Mating" ]
167,777
https://en.wikipedia.org/wiki/Topic%20map
A topic map is a standard for the representation and interchange of knowledge, with an emphasis on the findability of information. Topic maps were originally developed in the late 1990s as a way to represent back-of-the-book index structures so that multiple indexes from different sources could be merged. However, the developers quickly realized that with a little additional generalization, they could create a meta-model with potentially far wider application. The ISO/IEC standard is formally known as ISO/IEC 13250:2003. A topic map represents information using topics, representing any concept, from people, countries, and organizations to software modules, individual files, and events, associations, representing hypergraph relationships between topics, and occurrences, representing information resources relevant to a particular topic. Topic maps are similar to concept maps and mind maps in many respects, though only topic maps are ISO standards. Topic maps are a form of semantic web technology similar to RDF. Ontology and merging Topics, associations, and occurrences can all be typed, where the types must be defined by the one or more creators of the topic map(s). The definitions of allowed types is known as the ontology of the topic map. Topic maps explicitly support the concept of merging of identity between multiple topics or topic maps. Furthermore, because ontologies are topic maps themselves, they can also be merged thus allowing for the automated integration of information from diverse sources into a coherent new topic map. Features such as subject identifiers (URIs given to topics) and PSIs (published subject indicators) are used to control merging between differing taxonomies. Scoping on names provides a way to organise the various names given to a particular topic by different sources. Current standard The work standardizing topic maps (ISO/IEC 13250) took place under the umbrella of the ISO/IEC JTC 1/SC 34/WG 3 committee (ISO/IEC Joint Technical Committee 1, Subcommittee 34, Working Group 3 – Document description and processing languages – Information Association). However, WG3 was disbanded and maintenance of ISO/IEC 13250 was assigned to WG8. The topic maps (ISO/IEC 13250) reference model and data model standards are defined independent of any specific serialization or syntax. TMRM Topic Maps – Reference Model TMDM Topic Maps – Data Model Data format The specification is summarized in the abstract as follows: "This specification provides a model and grammar for representing the structure of information resources used to define topics, and the associations (relationships) between topics. Names, resources, and relationships are said to be characteristics of abstract subjects, which are called topics. Topics have their characteristics within scopes: i.e. the limited contexts within which the names and resources are regarded as their name, resource, and relationship characteristics. One or more interrelated documents employing this grammar is called a topic map." XML serialization formats In 2000, Topic Maps was defined in an XML syntax XTM. This is now commonly known as "XTM 1.0" and is still in fairly common use. The ISO standards committee published an updated XML syntax in 2006, XTM 2.0 which is increasingly in use today. Note that XTM 1.0 predates and therefore is not compatible with the more recent versions of the (ISO/IEC 13250) standard. 
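To make the topic–association–occurrence model and identity-based merging described above concrete, here is a minimal illustrative sketch in Python. It is not TMAPI and not a serialization such as XTM; the class and method names, the example URIs, and the merge rule (merge whenever two topics share a subject identifier) are simplifications invented for this example — a real engine would also handle scope, typing, different kinds of occurrences, and published subject indicators.

```python
class Topic:
    def __init__(self, name, subject_identifiers=()):
        self.names = {name}
        self.subject_identifiers = set(subject_identifiers)  # URIs that establish identity
        self.occurrences = []  # (occurrence type, information resource) pairs

class TopicMap:
    def __init__(self):
        self.topics = []
        self.associations = []  # (association type, {role: topic}); n-ary, unlike an RDF triple

    def add_topic(self, topic):
        """Add a topic, merging it into an existing one if they share a subject identifier."""
        for existing in self.topics:
            if existing.subject_identifiers & topic.subject_identifiers:
                existing.names |= topic.names
                existing.subject_identifiers |= topic.subject_identifiers
                existing.occurrences += topic.occurrences
                return existing
        self.topics.append(topic)
        return topic

    def add_association(self, assoc_type, roles):
        self.associations.append((assoc_type, roles))

tm = TopicMap()
puccini = tm.add_topic(Topic("Puccini", {"http://example.org/id/puccini"}))
tosca = tm.add_topic(Topic("Tosca", {"http://example.org/id/tosca"}))
tm.add_association("composed-by", {"work": tosca, "composer": puccini})
# A topic imported from another source with the same subject identifier merges automatically.
merged = tm.add_topic(Topic("Giacomo Puccini", {"http://example.org/id/puccini"}))
print(merged is puccini, sorted(merged.names))
```

The final two lines show the merging behaviour the text emphasizes: a topic arriving from a second source collapses into the existing topic because the two share a subject identifier, and the merged topic keeps both names.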
Other formats Other proposed or standardized serialization formats include: CXTM – Canonical XML Topic Maps format (canonicalization of topic maps) CTM – a Compact Topic Maps Notation (not based on XML) GTM – a Graphical Topic Maps Notation The above formats have all been proposed or defined as part of ISO/IEC 13250. As described below, there are also other serialization formats, such as LTM and AsTMa=, that have not been put forward as standards. Linear topic map notation (LTM) serves as a kind of shorthand for writing topic maps in plain text editors. This is useful for writing short personal topic maps or exchanging partial topic maps by email. The format can be converted to XTM. Another format, AsTMa, serves a similar purpose; it is much more compact for writing topic maps manually and can likewise be converted to XTM. Alternatively, it can be used directly with the Perl module TM (which also supports LTM). The data formats of XTM and LTM are similar to the W3C standards for RDF/XML or the older N3 notation. Related standards Topic Maps API A de facto API standard called Common Topic Maps Application Programming Interface (TMAPI) was published in April 2004 and is supported by many Topic Maps implementations and vendors: TMAPI – Common Topic Maps Application Programming Interface TMAPI 2.0 – Topic Maps Application Programming Interface (v2.0) Query standard In normal use it is often desirable to have a way to arbitrarily query the data within a particular Topic Maps store. Many implementations provide a syntax by which this can be achieved (somewhat like 'SQL for Topic Maps'), but the syntax varies considerably between implementations. With this in mind, work has gone into defining a standardized syntax for querying topic maps: ISO 18048: TMQL – Topic Maps Query Language Constraint standards It can also be desirable to define a set of constraints that can be used to guarantee or check the semantic validity of topic maps data for a particular domain, somewhat like database constraints for topic maps. Constraints can be used to define things like 'every document needs an author' or 'all managers must be human'. There are often implementation-specific ways of achieving these goals, but work has gone into defining a standardized constraint language as follows: ISO 19756: TMCL – Topic Maps Constraint Language TMCL is functionally similar to RDF Schema with Web Ontology Language (OWL). Earlier standards The Topic Maps concept predates the current standard. The related HyTime standard was proposed as far back as 1992, and earlier revisions of ISO 13250 also exist. More information about such standards can be found at the ISO Topic Maps site. RDF relationship Some work has been undertaken to provide interoperability between the W3C's RDF/OWL/SPARQL family of semantic web standards and the ISO's family of Topic Maps standards, though the two have slightly different goals. The semantic expressive power of Topic Maps is, in many ways, equivalent to that of RDF, but the major differences are that Topic Maps (i) provide a higher level of semantic abstraction (providing a template of topics, associations and occurrences, while RDF only provides a template of two arguments linked by one relationship) and hence (ii) allow n-ary relationships (hypergraphs) between any number of nodes, while RDF is limited to triplets.
See also Knowledge graph Semantic interoperability Topincs a commercial proprietary topic maps editor Unified Modeling Language (UML) References Further reading Lutz Maicher and Jack Park: Charting the Topic Maps Research and Applications Landscape, Springer, Jack Park and Sam Hunting: XML Topic Maps: Creating and Using Topic Maps for the Web, Addison-Wesley, (in bibMap) External links Information portal about Topic Maps An Introduction to Topic Maps at Microsoft Docs Topic Maps Lab Knowledge representation languages Technical communication ISO standards IEC standards Diagrams Semantic relations
Topic map
[ "Technology" ]
1,508
[ "Computer standards", "IEC standards" ]
167,803
https://en.wikipedia.org/wiki/Bartholin%27s%20gland
The Bartholin's glands (named after Caspar Bartholin the Younger; also called Bartholin glands or greater vestibular glands) are two pea-sized compound alveolar glands located slightly posterior and to the left and right of the opening of the vagina. They secrete mucus to lubricate the vagina. They are homologous to bulbourethral glands in males. However, while Bartholin's glands are located in the superficial perineal pouch in females, bulbourethral glands are located in the deep perineal pouch in males. Their duct length is 1.5 to 2.0 cm and they open into navicular fossa. The ducts are paired and they open on the surface of the vulva. Structure The embryological origin of the Bartholin's glands is derived from the urogenital sinus; therefore, the innervation and blood supply are via the pudendal nerve and external pudendal artery, respectively. The superficial inguinal lymph nodes and pelvic nodes provide lymphatic drainage. These glands are pea-sized (0.5–1.0 cm) and are lined with columnar epithelium. The duct length is 1.5–2 cm and is lined with squamous epithelium. These are located just beneath the fascia and their ducts drain into the vestibular mucosa. These mucoid alkaline secreting glands are arranged as lobules consisting of alveoli lined by cuboidal or columnar epithelium. Their efferent ducts are composed of transitional epithelium, which merges into squamous epithelium as it enters the distal vagina. The more proximal portions of the ductal system are lined by transitional epithelium and may be lined by columnar epithelium before arborization into glandular secretory elements. These glands lie on the perineal membrane and beneath the bulbospongiosus muscle at the tail end of the vestibular bulb deep to the posterior labia majora. The intimate relation between the enormously vascular tissue of the vestibular bulb and the Bartholin's glands is responsible for the risk of hemorrhage associated with the removal of this latter structure. The openings of the Bartholin's glands are located on the posterior margin of the introitus bilaterally in a groove between the hymen and the labium minus at the 4:00 and 8:00 o'clock positions. The glands duct opening is seen on the posterolateral aspect of the vestibule 3 to 4 mm outside the hymen or hymenal caruncles lateral to the hymenal ring. History Bartholin's glands were first described in 1677 by the 17th-century Danish anatomist Caspar Bartholin the Younger (1655–1738). Earlier he jointly discovered the glands in cows with Joseph Guichard Duverney (1648-1730), a French anatomist. Some sources mistakenly ascribe their discovery to his grandfather, theologian and anatomist Caspar Bartholin the Elder (1585–1629). Function Bartholin's glands secrete mucus to provide vaginal lubrication during sexual arousal. The fluid may slightly moisten the labial opening of the vagina, serving to make contact with this sensitive area more comfortable. Fluid from the Bartholin's glands is combined with other vaginal secretions as a "lubrication fluid" in the amount of about 6 grams per day, and contains high potassium and low sodium concentrations relative to blood plasma, with a slightly acidic pH of 4.7. Clinical pathology It is possible for the Bartholin's glands to become blocked and inflamed resulting in pain. This is known as bartholinitis or a Bartholin's cyst. A Bartholin's cyst in turn can become infected and form an abscess. Adenocarcinoma of the gland is rare and benign tumors and hyperplasia are even more rare. 
Bartholin gland carcinoma is a rare malignancy that occurs in 1% of vulvar cancers. This may be due to the presence of three different types of epithelial tissue. Inflammation of the Skene's glands and Bartholin glands may appear similar to cystocele. Other animals The major vestibular glands are found in many mammals such as cats, cows, and some sheep. See also List of distinct cell types in the adult human body List of related male and female reproductive organs Mesonephric duct Skene's gland References Glands Exocrine system Human female reproductive system Mammal female reproductive system Anatomy named for one who described it Sex organs
Bartholin's gland
[ "Biology" ]
1,005
[ "Exocrine system", "Organ systems" ]
167,823
https://en.wikipedia.org/wiki/Isometric%20projection
Isometric projection is a method for visually representing three-dimensional objects in two dimensions in technical and engineering drawings. It is an axonometric projection in which the three coordinate axes appear equally foreshortened and the angle between any two of them is 120 degrees. Overview The term "isometric" comes from the Greek for "equal measure", reflecting that the scale along each axis of the projection is the same (unlike some other forms of graphical projection). An isometric view of an object can be obtained by choosing the viewing direction such that the angles between the projections of the x, y, and z axes are all the same, or 120°. For example, with a cube, this is done by first looking straight towards one face. Next, the cube is rotated ±45° about the vertical axis, followed by a rotation of approximately 35.264° (precisely arcsin(1/√3) or arctan(1/√2), which is related to the Magic angle) about the horizontal axis. Note that with the cube (see image) the perimeter of the resulting 2D drawing is a perfect regular hexagon: all the black lines have equal length and all the cube's faces are the same area. Isometric graph paper can be placed under a normal piece of drawing paper to help achieve the effect without calculation. In a similar way, an isometric view can be obtained in a 3D scene. Starting with the camera aligned parallel to the floor and aligned to the coordinate axes, it is first rotated horizontally (around the vertical axis) by ±45°, then 35.264° around the horizontal axis. Another way isometric projection can be visualized is by considering a view within a cubical room starting in an upper corner and looking towards the opposite, lower corner. The x-axis extends diagonally down and right, the y-axis extends diagonally down and left, and the z-axis is straight up. Depth is also shown by height on the image. Lines drawn along the axes are at 120° to one another. In all these cases, as with all axonometric and orthographic projections, such a camera would need an object-space telecentric lens, in order that projected lengths not change with distance from the camera. The term "isometric" is often mistakenly used to refer to axonometric projections generally. There are, however, actually three types of axonometric projections: isometric, dimetric and trimetric. Rotation angles From the two angles needed for an isometric projection, the value of the second may seem counterintuitive and deserves some further explanation. Let's first imagine a cube with sides of length 2, and its center at the axis origin, which means all its faces intersect the axes at a distance of 1 from the origin. We can calculate the length of the line from its center to the middle of any edge as √2 using Pythagoras' theorem. By rotating the cube by 45° on the x-axis, the point (1, 1, 1) will therefore become (1, 0, √2) as depicted in the diagram. The second rotation aims to bring the same point onto the positive z-axis and so needs to perform a rotation of value equal to the arctangent of 1/√2, which is approximately 35.264°. Mathematics There are eight different orientations to obtain an isometric view, depending on which octant the viewer looks into. The isometric transform from a point a in 3D space to a point b in 2D space looking into the first octant can be written mathematically with rotation matrices as shown below, where α = arcsin(tan 30°) ≈ 35.264° and β = 45°. As explained above, this is a rotation around the vertical (here y) axis by β, followed by a rotation around the horizontal (here x) axis by α.
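Written out with standard rotation matrices, one self-consistent choice of signs is the following (sign conventions vary between sources, and the other choices correspond to the remaining octants):

\[
\begin{bmatrix} c_x \\ c_y \\ c_z \end{bmatrix}
=
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{bmatrix}
\begin{bmatrix} \cos\beta & 0 & -\sin\beta \\ 0 & 1 & 0 \\ \sin\beta & 0 & \cos\beta \end{bmatrix}
\begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix}
\]

With α = arcsin(tan 30°) and β = 45°, each of the unit vectors along the x, y and z axes ends up with the same depth component c_z = 1/√3, so all three are foreshortened by the same factor √(2/3) ≈ 0.816 once the depth coordinate is discarded in the orthographic step that follows; with this particular choice of signs the vertical y-axis projects straight up the page while the x- and z-axes project down-right and down-left, 120° apart, matching the drawing described in the Overview.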
This is then followed by an orthographic projection onto the xy-plane, which simply discards the third (depth) coordinate. The other 7 possibilities are obtained by either rotating to the opposite sides or not, and then inverting the view direction or not. History and limitations First formalized by Professor William Farish (1759–1837), the concept of isometry had existed in a rough empirical form for centuries. From the middle of the 19th century, isometry became an "invaluable tool for engineers, and soon thereafter axonometry and isometry were incorporated in the curriculum of architectural training courses in Europe and the U.S." According to Jan Krikke (2000), however, "axonometry originated in China. Its function in Chinese art was similar to linear perspective in European art. Axonometry, and the pictorial grammar that goes with it, has taken on a new significance with the advent of visual computing". As with all types of parallel projection, objects drawn with isometric projection do not appear larger or smaller as they extend closer to or away from the viewer. While advantageous for architectural drawings where measurements need to be taken directly, the result is a perceived distortion, as unlike perspective projection, it is not how human vision or photography normally works. It also can easily result in situations where depth and altitude are difficult to gauge, as is shown in the illustration to the right or above. This can appear to create paradoxical or impossible shapes, such as the Penrose stairs. Usage in video games and pixel art Isometric video game graphics are graphics employed in video games and pixel art that utilize a parallel projection, but which angle the viewpoint to reveal facets of the environment that would otherwise not be visible from a top-down perspective or side view, thereby producing a three-dimensional effect. Despite the name, isometric computer graphics are not necessarily truly isometric—i.e., the x, y, and z axes are not necessarily oriented 120° to each other. Instead, a variety of angles are used, with dimetric projection and a 2:1 pixel ratio being the most common. The terms "3/4 perspective", "3/4 view", "2.5D", and "pseudo 3D" are also sometimes used, although these terms can bear slightly different meanings in other contexts. Once common, isometric projection became less so with the advent of more powerful 3D graphics systems, and as video games began to focus more on action and individual characters. However, video games utilizing isometric projection—especially computer role-playing games—have seen a resurgence in recent years within the indie gaming scene. See also Graphical projection References External links Isometric Projection Graphical projections de:Perspektive#Isometrische Axonometrie, nach DIN 5 it:Assonometria#Assonometria isometrica
Isometric projection
[ "Mathematics" ]
1,347
[ "Functions and mappings", "Graphical projections", "Mathematical objects", "Mathematical relations" ]
167,879
https://en.wikipedia.org/wiki/Nocturnal%20emission
A wet dream, sex dream, or sleep orgasm, is a spontaneous occurrence of sexual arousal during sleep that includes ejaculation (nocturnal emission) and orgasm for a male, and vaginal lubrication and/or orgasm for a female. Context Nocturnal emissions can happen after stressful dreams in REM sleep which activate the sympathetic nervous system, hence leading to ejaculation. They can also happen after sex dreams. Nocturnal emissions can start as early as age nine, and are most common during adolescence and early young adult years, but they may happen any time after puberty. It is possible for men to wake up during a wet dream, or simply to sleep through it, but for women, some researchers have added the requirement that she should awaken during the orgasm, and perceive that the orgasm happened, before it counts as a wet dream. Vaginal lubrication alone does not mean that the woman has had an orgasm. Composition Due to the difficulty in collecting ejaculate produced during nocturnal emissions, relatively few studies have examined its composition. In the largest study, which included nocturnal emission samples from 10 men with idiopathic anejaculation, the semen concentration was equivalent to samples obtained from the same men by penile vibratory stimulation, although the proportions of sperm which were motile, and which were of normal morphology, were higher in the nocturnal emission specimens. Frequency In a detailed study, men and women reported that roughly 8% of their everyday dreams contain some form of sexual-related activity. 4% of sex dreams among both men and women resulted in orgasms. In males The frequency of nocturnal emissions is highly variable. Some reported that it is due to being sexually inactive for a period of 5–26 weeks, with no engagement in either intercourse or masturbation. Some males have experienced large numbers of nocturnal emissions as teenagers, while others have never experienced any. In the U.S., 83% of men experience nocturnal emissions at some time in their life. For males who have experienced nocturnal emissions, the mean frequency ranges from 0.36 times per week (about once every three weeks) for single 15-year-old males to 0.18 times per week for 40-year-old single males. For married males, the mean ranges from 0.23 times per week (about once per month) for 19-year-old married males to 0.15 times per week (about once every two months) for 50-year-old married males. In Indonesia surveys have shown that 93% of men experience nocturnal emissions by the age of 24. Some males have the emissions only at a specific age, while others have them throughout their lives following puberty. The frequency with which one has nocturnal emissions has not been conclusively linked to frequency of masturbation. Alfred Kinsey found: One factor that can affect the number of nocturnal emissions males have is whether they take testosterone-based drugs. In a 1998 study by Finkelstein et al, the number of boys reporting nocturnal emissions drastically increased as their testosterone doses were increased, from 17% of subjects with no treatment to 90% of subjects at a high dose. Thirteen percent of males experience their first ejaculation as a result of a nocturnal emission. Kinsey found that males experiencing their first ejaculation through a nocturnal emission were older than those experiencing their first ejaculation by means of masturbation. 
The study indicates that such a first ejaculation resulting from a nocturnal emission was delayed a year or more from what would have been developmentally possible for such males through physical stimulation. In females In 1953, sex researcher Alfred Kinsey found that nearly 40% of the women he interviewed have had one or more nocturnal orgasms or wet dreams. Those who reported experiencing these said that they usually had them several times a year and that they first occurred as early as thirteen, and usually by the age of 21. Kinsey defined female nocturnal orgasm as sexual arousal during sleep that awakens one to perceive the experience of orgasm. Research published by Barbara L. Wells in the 1986 Journal of Sex Research indicates that as many as 85% of women have experienced nocturnal orgasm by the age of 21. This research was based on women waking up with or during orgasm. Studies have found that males typically have more frequent spontaneous nocturnal sexual experiences than females. However, female wet dreams may also be more difficult to identify with certainty than male wet dreams because ejaculation is usually associated with male orgasm while vaginal lubrication may not indicate orgasm. Cultural views Numerous cultural and religious views have been advanced related to nocturnal emissions. Below is a limited summary of some perspectives. Jewish and Samaritan Some examples of passages under the Mosaic law of the Hebrew Bible teach that under the law of Moses, a man who had a nocturnal emission incurred ritual defilement (as with any other instance of ejaculation): The first of these is part of a passage stating similar regulations about sexual intercourse and menstruation. Leviticus 12 makes similar regulations about childbirth. A third passage relates more specifically to priests, requiring any "of the offspring of Aaron who has ... a discharge", among other causes of ritual defilement, to abstain from eating holy offerings until after a ritual immersion in a mikveh and until the subsequent nocturnal emission. In Judaism, the Tikkun HaKlali, also known as "The General Remedy", is a set of ten Psalms designed in 1805 by Rebbe Nachman, whose recital is intended to serve as repentance for nocturnal emissions. Patristic Christian Saint Augustine held that male nocturnal emissions, unlike masturbation, did not pollute the conscience of a man, because they were not voluntary carnal acts, and were therefore not to be considered a sin. A similar view was expressed by Aquinas, who wrote in the Summa Theologica II-II-154-5: Islamic A wet dream (, ihtilam) is not a sin in Islam. Moreover, whereas a person fasting (in Ramadan or otherwise) would normally be considered to have broken their fast by ejaculating on purpose (during either masturbation or intercourse), nocturnal emission is not such a cause. However, they are still required to bathe prior to undergoing some rituals in the religion. Muslim scholars consider ejaculation something that makes one temporarily ritually impure, a condition known as junub, meaning that a Muslim who has had an orgasm or ejaculated must have a ghusl (consisting of ablution followed by bathing the entire body so that not a single hair remains dry on the whole body—may also require one to rub the body according to Maliki school of thought, dalk in Arabic—while showering) before they can read any verse of the Quran or perform the formal prayers. Informal supplications and prayers (du'a) do not require such a bath. 
Indian traditions The Hindu text suggests those who had nocturnal emissions to bathe and chant mantras praying to return their virility. Vinaya suggests masturbation is a sin, but a nocturnal emission is not. During the third Buddhist council, it was suggested that having wet dreams as an Arhat does not count as a sin. Medieval Europe In European folklore, nocturnal emissions were believed to be caused by a succubus copulating with the individual at night, an event associated with sleep paralysis and possibly night terrors. East Asia Traditional East Asian medicine considered it problematic, because it was considered to be an act of evil spirits that tries to rob the life of a person. The literature suggests a "cure" for nocturnal emissions, which prescribes fried leek seeds three times a day. See also Spermarche Nocturnal clitoral tumescence Nocturnal enuresis Nocturnal penile tumescence Sleep sex Somnophilia References Bibliography External links Orgasm Ejaculation Sleep physiology Sleep Men's health Succubi
Nocturnal emission
[ "Biology" ]
1,628
[ "Behavior", "Sleep physiology", "Sleep" ]
167,906
https://en.wikipedia.org/wiki/Cultivar
A cultivar is a kind of cultivated plant that people have selected for desired traits and which retains those traits when propagated. Methods used to propagate cultivars include division, root and stem cuttings, offsets, grafting, tissue culture, or carefully controlled seed production. Most cultivars arise from deliberate human manipulation, but some originate from wild plants that have distinctive characteristics. Cultivar names are chosen according to rules of the International Code of Nomenclature for Cultivated Plants (ICNCP), and not all cultivated plants qualify as cultivars. Horticulturists generally believe the word cultivar was coined as a term meaning "cultivated variety". Popular ornamental plants like roses, camellias, daffodils, rhododendrons, and azaleas are commonly cultivars produced by breeding and selection or as sports, for floral colour or size, plant form, or other desirable characteristics. Similarly, the world's agricultural food crops are almost exclusively cultivars that have been selected for characters such as improved yield, flavour, and resistance to disease, and very few wild plants are now used as food sources. Trees used in forestry are also special selections grown for their enhanced quality and yield of timber. Cultivars form a major part of Liberty Hyde Bailey's broader group, the cultigen, which is defined as a plant whose origin or selection is primarily due to intentional human activity. A cultivar is not the same as a botanical variety, which is a taxonomic rank below subspecies, and there are differences in the rules for creating and using the names of botanical varieties and cultivars. In recent times, the naming of cultivars has been complicated by the use of statutory patents for plants and recognition of plant breeders' rights. The International Union for the Protection of New Varieties of Plants (UPOV – ) offers legal protection of plant cultivars to persons or organisations that introduce new cultivars to commerce. UPOV requires that a cultivar be "distinct", "uniform", and "stable". To be "distinct", it must have characters that easily distinguish it from any other known cultivar. To be "uniform" and "stable", the cultivar must retain these characters in repeated propagation. The naming of cultivars is an important aspect of cultivated plant taxonomy, and the correct naming of a cultivar is prescribed by the Rules and Recommendations of the International Code of Nomenclature for Cultivated Plants (ICNCP, commonly denominated the Cultivated Plant Code). A cultivar is given a cultivar name, which consists of the scientific Latin botanical name followed by a cultivar epithet. The cultivar epithet is usually in a vernacular language. Etymology The word cultivar originated from the need to distinguish between wild plants and those with characteristics that arose in cultivation, presently denominated cultigens. This distinction dates to the Greek philosopher Theophrastus (370–285 BC), the "Father of Botany", who was keenly aware of this difference. Botanical historian Alan Morton noted that Theophrastus in his Historia Plantarum (Enquiry into Plants) "had an inkling of the limits of culturally induced (phenotypic) changes and of the importance of genetic constitution" (Historia Plantarum, Book 3, 2, 2 and Causa Plantarum, Book 1, 9, 3). 
The International Code of Nomenclature for algae, fungi, and plants uses as its starting point for modern botanical nomenclature the Latin names in Linnaeus' (1707–1778) Species Plantarum (tenth edition) and Genera Plantarum (fifth edition). In Species Plantarum, Linnaeus enumerated all plants known to him, either directly or from his extensive reading. He recognised the rank of varietas (botanical "variety", a rank below that of species and subspecies) and he indicated these varieties with letters of the Greek alphabet, such as α, β, and λ, before the varietal name, rather than using the abbreviation "var." as is the present convention. Most of the varieties that Linnaeus enumerated were of "garden" origin rather than being wild plants. In time the need to distinguish between wild plants and those with variations that had been cultivated increased. In the nineteenth century many "garden-derived" plants were given horticultural names, sometimes in Latin and sometimes in a vernacular language. From circa the 1900s, cultivated plants in Europe were recognised in the Scandinavian, Germanic, and Slavic literature as stamm or sorte, but these words could not be used internationally because, by international agreement, any new denominations had to be in Latin. In the twentieth century an improved international nomenclature was proposed for cultivated plants. Liberty Hyde Bailey of Cornell University in New York, United States created the word cultivar in 1923 when he wrote that: In that essay, Bailey used only the rank of species for the cultigen, but it was obvious to him that many domesticated plants were more like botanical varieties than species, and that realization appears to have motivated the suggestion of the new category of cultivar. Bailey created the word cultivar. It is generally assumed to be a blend of cultivated and variety but Bailey never explicitly stated the etymology and it has been suggested that the word is actually a blend of cultigen and variety. The neologism cultivar was promoted as "euphonious" and "free from ambiguity". The first Cultivated Plant Code of 1953 subsequently commended its use, and by 1960 it had achieved common international acceptance. Cultigens The words cultigen and cultivar may be confused with each other. A cultigen is any plant that is deliberately selected for or altered in cultivation, as opposed to an indigen; the Cultivated Plant Code states that cultigens are "maintained as recognisable entities solely by continued propagation". Cultigens can have names at any of many taxonomic ranks, including those of grex, species, cultivar group, variety, form, and cultivar; and they may be plants that have been altered in cultivation, including by genetic modification, but have not been formally denominated. A cultigen or a component of a cultigen can be accepted as a cultivar if it is recognisable and has stable characters. Therefore, all cultivars are cultigens, because they are cultivated, but not all cultigens are cultivars, because some cultigens have not been formally distinguished and named as cultivars. Formal definition The Cultivated Plant Code notes that the word cultivar is used in two different senses: first, as a "classification category" the cultivar is defined in Article 2 of the International Code of Nomenclature for Cultivated Plants (2009, 8th edition) as follows: The basic category of cultivated plants whose nomenclature is governed by this Code is the cultivar. 
There are two other classification categories for cultigens, the grex and the group. The Code then defines a cultivar as a "taxonomic unit within the classification category of cultivar". This is the sense of cultivar that is most generally understood and which is used as a general definition. Different kinds Which plants are chosen to be named as cultivars is simply a matter of convenience as the category was created to serve the practical needs of horticulture, agriculture, and forestry. Members of a particular cultivar are not necessarily genetically identical. The Cultivated Plant Code emphasizes that different cultivated plants may be accepted as different cultivars, even if they have the same genome, while cultivated plants with different genomes may be regarded as the same cultivar. The production of cultivars generally entails considerable human involvement although in a few cases it may be as little as simply selecting variation from plants growing in the wild (whether by collecting growing tissue to propagate from or by gathering seed). Cultivars generally occur as ornamentals and food crops: Malus 'Granny Smith' and Malus 'Red Delicious' are cultivars of apples propagated by cuttings or grafting, Lactuca 'Red Sails' and Lactuca 'Great Lakes' are lettuce cultivars propagated by seeds. Named cultivars of Hosta and Hemerocallis plants are cultivars produced by micropropagation or division. Clones Cultivars that are produced asexually are genetically identical and known as clones; this includes plants propagated by division, layering, cuttings, grafts, and budding. The propagating material may be taken from a particular part of the plant, such as a lateral branch, or from a particular phase of the life cycle, such as a juvenile leaf, or from aberrant growth as occurs with witch's broom. Plants whose distinctive characters are derived from the presence of an intracellular organism may also form a cultivar provided the characters are reproduced reliably from generation to generation. Plants of the same chimera (which have mutant tissues close to normal tissue) or graft-chimeras (which have vegetative tissue from different kinds of plants and which originate by grafting) may also constitute a cultivar. Seed-produced Some cultivars "come true from seed", retaining their distinguishing characteristics when grown from seed. Such plants are termed a "variety", "selection", or "strain" but these are ambiguous and confusing words that are best avoided. In general, asexually propagated cultivars grown from seeds produce highly variable seedling plants, and should not be labelled with, or sold under, the parent cultivar's name. Seed-raised cultivars may be produced by uncontrolled pollination when characteristics that are distinct, uniform and stable are passed from parents to progeny. Some are produced as "lines" that are produced by repeated self-fertilization or inbreeding or "multilines" that are made up of several closely related lines. Sometimes they are F1 hybrids which are the result of a deliberate repeatable single cross between two pure lines. A few F2 hybrid seed cultivars also exist, such as Achillea 'Summer Berries'. Some cultivars are agamospermous plants, which retain their genetic composition and characteristics under reproduction. Occasionally cultivars are raised from seed of a specially selected provenance – for example the seed may be taken from plants that are resistant to a particular disease. 
Genetically modified Genetically modified plants with characteristics resulting from the deliberate implantation of genetic material from a different germplasm may form a cultivar. However, the International Code of Nomenclature for Cultivated Plants notes, "In practice such an assemblage is often marketed from one or more lines or multilines that have been genetically modified. These lines or multilines often remain in a constant state of development which makes the naming of such an assemblage as a cultivar a futile exercise." However, retired transgenic varieties such as the fish tomato, which are no longer being developed, do not run into this obstacle and can be given a cultivar name. Cultivars may be selected because of a change in the ploidy level of a plant which may produce more desirable characteristics. Cultivar names Every unique cultivar has a unique name within its denomination class (which is almost always the genus). Names of cultivars are regulated by the International Code of Nomenclature for Cultivated Plants, and may be registered with an International Cultivar Registration Authority (ICRA). There are sometimes separate registration authorities for different plant types such as roses and camellias. In addition, cultivars may be associated with commercial marketing names referred to in the Cultivated Plant Code as "trade designations" (see below). Presenting in text A cultivar name consists of a botanical name (of a genus, species, infraspecific taxon, interspecific hybrid or intergeneric hybrid) followed by a cultivar epithet. The cultivar epithet is enclosed by single quotes; it should not be italicized if the botanical name is italicized; and each of the words within the epithet is capitalized (with some permitted exceptions such as conjunctions). It is permissible to place a cultivar epithet after a common name provided the common name is botanically unambiguous. Cultivar epithets published before 1 January 1959 were often given a Latin form and can be readily confused with the specific epithets in botanical names; after that date, newly coined cultivar epithets must be in a modern vernacular language to distinguish them from botanical epithets. For example, the full cultivar name of the King Edward potato is Solanum tuberosum 'King Edward'. 'King Edward' is the cultivar epithet, which, according to the Rules of the Cultivated Plant Code, is bounded by single quotation marks. For patented or trademarked plant product lines developed from a given cultivar, the commercial product name is typically indicated by the symbols "TM" or "®", or is presented in capital letters with no quotation marks, following the cultivar name, as in the following example, where "Bloomerang" is the commercial name and 'Penda' is the cultivar epithet: Syringa 'Penda' BLOOMERANG. Examples of correct text presentation: Cryptomeria japonica 'Elegans' Chamaecyparis lawsoniana 'Aureomarginata' (pre-1959 name, Latin in form) Chamaecyparis lawsoniana 'Golden Wonder' (post-1959 name, English language) Pinus densiflora 'Akebono' (post-1959 name, Japanese language) Apple 'Sundown' Some incorrect text presentation examples: Cryptomeria japonica "Elegans" (double quotes are unacceptable) Berberis thunbergii cv. 'Crimson Pygmy' (this once-common usage is now unacceptable, as it is no longer correct to use "cv." in this context; Berberis thunbergii 'Crimson Pygmy' is correct) Rosa cv. 
'Peace' (this is now incorrect for two reasons: firstly, the use of "cv."; secondly, "Peace" is a trade designation or "selling name" for the cultivar R. 'Madame A. Meilland' and should therefore be printed in a different typeface from the rest of the name, without quote marks, for example: Rosa Peace) Although "cv." has not been permitted by the International Code of Nomenclature for Cultivated Plants since the 1995 edition, it is still widely used and recommended by other authorities. Group names Where several very similar cultivars exist they can be associated into a Group (formerly Cultivar-group). As Group names are used with cultivar names it is necessary to understand their way of presentation. Group names are presented in normal type and the first letter of each word capitalised as for cultivars, but they are not placed in single quotes. When used in a name, the first letter of the word "Group" is itself capitalized. Presenting in text Brassica oleracea Capitata Group (the group of cultivars including all typical cabbages) Brassica oleracea Botrytis Group (the group of cultivars including all typical cauliflowers) Hydrangea macrophylla Groupe Hortensia (in French) = Hydrangea macrophylla Hortensia Group (in English) Where cited with a cultivar name the group should be enclosed in parentheses, as follows: Hydrangea macrophylla (Hortensia Group) 'Ayesha' Legal protection of cultivars and their names Since the 1990s there has been an increasing use of legal protection for newly produced cultivars. Plant breeders expect legal protection for the cultivars they produce. According to proponents of such protections, if other growers can immediately propagate and sell these cultivars as soon as they come on the market, the breeder's benefit is largely lost. Legal protection for cultivars is obtained through the use of Plant breeders' rights and plant Patents but the specific legislation and procedures needed to take advantage of this protection vary from country to country. Controversial use of legal protection for cultivars The use of legal protection for cultivars can be controversial, particularly for food crops that are staples in developing countries, or for plants selected from the wild and propagated for sale without any additional breeding work; some people consider this practice unethical. Trade designations and selling names The formal scientific name of a cultivar, like Solanum tuberosum 'King Edward', is a way of uniquely designating a particular kind of plant. This scientific name is in the public domain and cannot be legally protected. Plant retailers wish to maximize their share of the market and one way of doing this is to replace the Latin scientific names on plant labels in retail outlets with appealing marketing names that are easy to use, pronounce, and remember. Marketing names lie outside the scope of the Cultivated Plant Code which refers to them as "trade designations". If a retailer or wholesaler has the sole legal rights to a marketing name then that may offer a sales advantage. Plants protected by plant breeders' rights (PBR) may have a "true" cultivar name – the recognized scientific name in the public domain – and a "commercial synonym" – an additional marketing name that is legally protected. An example would be Rosa = 'Poulmax', in which Rosa is the genus, is the trade designation, and 'Poulmax' is scientific cultivar name. 
Because a name that is attractive in one language may have less appeal in another country, a plant may be given different selling names from country to country. Quoting the original cultivar name allows the correct identification of cultivars around the world. The main body coordinating plant breeders' rights is the International Union for the Protection of New Varieties of Plants (, UPOV) and this organization maintains a database of new cultivars protected by PBR in all countries. International Cultivar Registration Authorities An International Cultivar Registration Authority (ICRA) is a voluntary, non-statutory organization appointed by the Commission for Nomenclature and Cultivar Registration of the International Society of Horticultural Science. ICRAs are generally formed by societies and institutions specializing in particular plant genera such as Dahlia or Rhododendron and are currently located in Europe, North America, China, India, Singapore, Australia, New Zealand, South Africa and Puerto Rico. Each ICRA produces an annual report and its reappointment is considered every four years. The main task is to maintain a register of the names within the group of interest and where possible this is published and placed in the public domain. One major aim is to prevent the duplication of cultivar and Group epithets within a genus, as well as ensuring that names are in accord with the latest edition of the Cultivated Plant Code. In this way, over the last 50 years or so, ICRAs have contributed to the stability of cultivated plant nomenclature. In recent times many ICRAs have also recorded trade designations and trademarks used in labelling plant material, to avoid confusion with established names. New names and other relevant data are collected by and submitted to the ICRA and in most cases there is no cost. The ICRA then checks each new epithet to ensure that it has not been used before and that it conforms with the Cultivated Plant Code. Each ICRA also ensures that new names are formally established (i.e. published in hard copy, with a description in a dated publication). They record details about the plant, such as parentage, the names of those concerned with its development and introduction, and a basic description highlighting its distinctive characters. ICRAs are not responsible for assessing the distinctiveness of the plant in question. Most ICRAs can be contacted electronically and many maintain web sites for an up-to-date listing. See also Lists of cultivars Plant variety (law) Plant Landrace Hybrid fruit Notes References Bibliography External links Sale point of the Latest Edition (October 2009) of The International Code of Nomenclature for Cultivated Plants International Cultivar Registration Authorities The Language of Horticulture Opinion piece by Tony Lord (from The Plantsman magazine) Hortivar – The Food and Agriculture Organization of the United Nations Horticulture Cultivars Performance Database Plant taxonomy Plant breeding Forest management
Cultivar
[ "Chemistry", "Biology" ]
4,167
[ "Plant breeding", "Plant taxonomy", "Plants", "Molecular biology" ]
167,921
https://en.wikipedia.org/wiki/Tarmacadam
Tarmacadam is a road surfacing material made by combining tar and macadam (crushed stone and sand), patented by Welsh inventor Edgar Purnell Hooley in 1902. It is a more durable and dust-free enhancement of simple compacted stone macadam surfaces invented by Scottish engineer John Loudon McAdam in the early 19th century. The terms "tarmacadam" and tarmac are also used for a variety of other materials, including tar-grouted macadam, bituminous surface treatments and modern asphalt concrete. Origins Macadam roads pioneered by Scottish engineer John Loudon McAdam in the 1820s are prone to rutting and generating dust. Methods to stabilise macadam surfaces with tar date back to at least 1834 when John Henry Cassell, operating from Cassell's Patent Lava Stone Works in Millwall, England, patented "lava stone." This method involved spreading tar on the subgrade, placing a typical macadam layer, and finally sealing the macadam with a mixture of tar and sand. Tar-grouted macadam was in use well before 1900 and involved scarifying the surface of an existing macadam pavement, spreading tar and re-compacting. Although the use of tar in road construction was known in the 19th century, it was little used and was not introduced on a large scale until the motorcar arrived on the scene in the early 20th century. Ironically, although John Loudon McAdam himself had been a supplier of coke for Britain's first coal-tar factory, he never in his own lifetime advocated for the use of tar as a binding agent for his road designs, preferring free-draining materials (see the page Macadam). In 1901, Edgar Purnell Hooley was walking in Denby, Derbyshire, when he noticed a smooth stretch of road close to an ironworks. He was informed that a barrel of tar had fallen onto the road and someone had poured waste slag from the nearby furnaces to cover up the mess. Hooley noticed this unintentional resurfacing had solidified the road, and there was no rutting and no dust. Hooley's 1902 patent for tarmac involved mechanically mixing tar and aggregate before lay-down and then compacting the mixture with a steamroller. The tar was modified by adding small amounts of Portland cement, resin and pitch. Nottingham's Radcliffe Road became the first tarmac road in the world. In 1903 Hooley formed Tar Macadam Syndicate Ltd and registered tarmac as a trademark. Later developments As petroleum production increased, the by-product bitumen became available in greater quantities and largely supplanted coal tar. The macadam construction process quickly became obsolete because of the onerous and impractical manual labour required. The somewhat similar tar and chip method, also known as bituminous surface treatment (BST) or chipseal, remains popular. While tarmac pavement in the original sense is not common in some countries today, many people use the word to refer to generic paved areas at airports, especially the apron near airport terminals, although these areas are often made of concrete. Similarly, in the UK the word tarmac is much more commonly used by the public when referring to asphalt concrete. See also History of road transport References External links Asphalt Brands that became generic English inventions Pavements
Tarmacadam
[ "Physics", "Chemistry" ]
692
[ "Amorphous solids", "Asphalt", "Unsolved problems in physics", "Chemical mixtures" ]
168,041
https://en.wikipedia.org/wiki/Anatomical%20terms%20of%20location
Standard anatomical terms of location are used to describe unambiguously the anatomy of animals, including humans. The terms, typically derived from Latin or Greek roots, describe something in its standard anatomical position. This position provides a definition of what is at the front ("anterior"), behind ("posterior") and so on. As part of defining and describing terms, the body is described through the use of anatomical planes and anatomical axes. The meaning of terms that are used can change depending on whether an organism is bipedal or quadrupedal. Additionally, for some animals such as invertebrates, some terms may not have any meaning at all; for example, an animal that is radially symmetrical will have no anterior surface, but can still have a description that a part is close to the middle ("proximal") or further from the middle ("distal"). International organisations have determined vocabularies that are often used as standards for subdisciplines of anatomy. For example, Terminologia Anatomica for humans and Nomina Anatomica Veterinaria for animals. These allow parties that use anatomical terms, such as anatomists, veterinarians, and medical doctors, to have a standard set of terms to communicate clearly the position of a structure. Introduction Standard anatomical and zoological terms of location have been developed, usually based on Latin and Greek words, to enable all biological and medical scientists, veterinarians, doctors and anatomists to precisely delineate and communicate information about animal bodies and their organs, even though the meaning of some of the terms often is context-sensitive. Much of this information has been standardised in internationally agreed vocabularies for humans (Terminologia Anatomica) and animals (Nomina Anatomica Veterinaria). Different terms are used for groups of creatures with different body layouts, such as bipeds (creatures that stand on two feet, such as humans) and quadrupeds. The reasoning is that the neuraxis is different between the two groups, and so is what is considered the standard anatomical position, such as how humans tend to be standing upright and with their arms reaching forward. Thus, the "top" of a human is the head, whereas the "top" of a dog would be the back, and the "top" of a flounder may be on either the left or right side. Unique terms are also used to describe invertebrates as well, because of their wider variety of shapes and symmetry. Standard anatomical position Because animals can change orientation with respect to their environment, and because appendages like limbs and tentacles can change position with respect to the main body, terms to describe position need to refer to an animal when it is in its standard anatomical position. This means descriptions as if the organism is in its standard anatomical position, even when the organism in question has appendages in another position. This helps avoid confusion in terminology when referring to the same organism in different postures. In humans, this refers to the body in a standing position with arms at the side and palms facing forward, with thumbs out and to the sides. Combined terms Many anatomical terms can be combined, either to indicate a position in two axes simultaneously or to indicate the direction of a movement relative to the body. For example, "anterolateral" indicates a position that is both anterior and lateral to the body axis (such as the bulk of the pectoralis major muscle). 
In radiology, an X-ray image may be said to be "anteroposterior", indicating that the beam of X-rays, known as its projection, passes from the source through the patient's anterior body wall first, then through the body to exit through the posterior body wall and into the detector/film to produce a radiograph. The opposite is true for the term "posteroanterior," while side-to-side projections are known as either "lateromedial" (from the outside of the left or right side of the body toward the inside) or "mediolateral" (from the inside of that side of the body toward the outside). The same logic is applied to all planes of the body, and thus top-to-bottom or bottom-to-top X-ray projections are known as "superoinferior" and "inferosuperior," respectively. However, within the diagnostic imaging industry, for this particular example, the terms "cranial" (towards the head) and "caudal" (towards the tail, or downwards, away from the head) are known as interchangeable alternatives to the previous two projection terms. Combined terms were once generally hyphenated, but the modern tendency is to omit the hyphen. Planes Anatomical terms describe structures in relation to four main anatomical planes: The median plane, also called the midsagittal plane, which divides the body into left and right. This passes through the head, spinal cord, navel and, in many animals, the tail. The sagittal planes, also called the parasagittal planes, which are parallel to the median plane. The coronal plane, also called the frontal plane, which divides the body into front and back. The transverse plane, also called the axial plane or horizontal plane, which is perpendicular to the other two planes. In a human, this plane is parallel to the ground; in a quadruped, this divides the animal into anterior and posterior sections. Axes The axes of the body are lines drawn about which an organism is roughly symmetrical. To define an axis, distinct ends of an organism are chosen, and the axis is named according to those directions. An organism that is symmetrical on both sides has three main axes that intersect at right angles. An organism that is round or not symmetrical may have different axes. Example axes are: The anteroposterior axis The cephalocaudal axis The dorsoventral axis Examples of axes in specific animals are shown below. Modifiers Several terms are commonly seen and used as prefixes: Sub- is used to indicate something that is beneath, or something that is subordinate to or lesser than. For example, subcutaneous means beneath the skin. Hypo- is used to indicate something that is beneath. For example, the hypoglossal nerve supplies the muscles beneath the tongue. Infra- is used to indicate something that is within or below. For example, the infraorbital nerve runs within the orbit. Inter- is used to indicate something that is between. For example, the intercostal muscles run between the ribs. Super- or Supra- is used to indicate something that is above something else. For example, the supraorbital ridges are above the eyes. Other terms are used as suffixes, added to the end of words: -ad and -ab are used to indicate that something is towards (-ad) or away from (-ab) something else. For example, "distad" means "in the distal direction", and "distad of the femur" means "beyond the femur in the distal direction". Further examples may include cephalad (towards the cephalic end), craniad, and proximad. Main terms Superior and inferior Superior describes what is above something and inferior describes what is below it.
For example, in the anatomical position, the most superior part of the human body is the head and the most inferior is the feet. As a second example, in humans, the neck is superior to the chest but inferior to the head. Anterior and posterior Anterior () describes what is in front, and posterior () describes what is to the back of something. For example, for a dog the nose is anterior to the eyes and the tail is considered the most posterior part; for many fish the gill openings are posterior to the eyes but anterior to the tail. Medial and lateral These terms describe how close something is to the midline, or the medial plane. Lateral () describes something to the sides of an animal, as in "left lateral" and "right lateral". Medial () describes structures close to the midline, or closer to the midline than another structure. For example, in a human, the arms are lateral to the torso. The genitals are medial to the legs. Temporal has a similar meaning to lateral but is restricted to the head. The terms "left" and "right" are sometimes used, or their Latin alternatives (; ). However, it is preferred to use more precise terms where possible. Terms derived from lateral include: Contralateral (): on the side opposite to another structure. For example, the right arm and leg are controlled by the left, contralateral, side of the brain. Ipsilateral (): on the same side as another structure. For example, the left arm is ipsilateral to the left leg. Bilateral (): on both sides of the body. For example, bilateral orchiectomy means removal of testes on both sides of the body. Unilateral (): on one side of the body. For example, a stroke can result in unilateral weakness, meaning weakness on one side of the body. Varus () and valgus ( ) are terms used to describe a state in which a part further away is abnormally placed towards (varus) or away from (valgus) the midline. Proximal and distal The terms proximal () and distal () are used to describe parts of a feature that are close to or distant from the main mass of the body, respectively. Thus the upper arm in humans is proximal and the hand is distal. "Proximal and distal" are frequently used when describing appendages, such as fins, tentacles, and limbs. Although the direction indicated by "proximal" and "distal" is always respectively towards or away from the point of attachment, a given structure can be either proximal or distal in relation to another point of reference. Thus the elbow is distal to a wound on the upper arm, but proximal to a wound on the lower arm. The terms are also applied to internal anatomy, such as to the reproductive tract of snails. Unfortunately, different authors use the terms in opposite senses. Some consider "distal" as further from a point of origin near the centre of the body and others as further from where the organ reaches the body's surface; or other points of origin may be envisaged. This terminology is also employed in molecular biology and therefore by extension is also used in chemistry, specifically referring to the atomic loci of molecules from the overall moiety of a given compound. Central and peripheral Central and peripheral refer to the distance towards and away from the centre of something. That might be an organ, a region in the body, or an anatomical structure. For example, the central nervous system and the peripheral nervous systems. Central () describes something close to the centre. For example, the great vessels run centrally through the body; many smaller vessels branch from these. 
Peripheral (, originally from Ancient Greek) describes something further away from the centre of something. For example, the arm is peripheral to the body. Superficial and deep These terms refer to the distance of a structure from the surface. Deep () describes something further away from the surface of the organism. For example, the external oblique muscle of the abdomen is deep to the skin. "Deep" is one of the few anatomical terms of location derived from Old English rather than Latin – the anglicised Latin term would have been "profound" (). Superficial () describes something near the outer surface of the organism. For example, in skin, the epidermis is superficial to the subcutis. Dorsal and ventral These two terms, used in anatomy and embryology, describe something at the back (dorsal) or front/belly (ventral) of an organism. The dorsal () surface of an organism refers to the back, or upper side, of an organism. If talking about the skull, the dorsal side is the top. The ventral () surface refers to the front, or lower side, of an organism. For example, in a fish, the pectoral fins are dorsal to the anal fin, but ventral to the dorsal fin. The terms are used in other contexts; for example dorsal and ventral gun turrets on a bomber aircraft. Rostral, cranial, and caudal Specific terms exist to describe how close or far something is to the head or tail of an animal. To describe how close to the head of an animal something is, three distinct terms are used: Rostral () describes something situated toward the oral or nasal region, or in the case of the brain, toward the tip of the frontal lobe. Cranial () or cephalic () describes how close something is to the head of an organism. Caudal () describes how close something is to the trailing end of an organism. For example, in horses, the eyes are caudal to the nose and rostral to the back of the head. These terms are generally preferred in veterinary medicine and not used as often in human medicine. In humans, "cranial" and "cephalic" are used to refer to the skull, with "cranial" being used more commonly. The term "rostral" is rarely used in human anatomy, apart from embryology, and refers more to the front of the face than the superior aspect of the organism. Similarly, the term "caudal" is used more in embryology and only occasionally used in human anatomy. This is because the brain is situated at the superior part of the head whereas the nose is situated in the anterior part. Thus, the "rostrocaudal axis" refers to a C shape (see image). Other terms and special cases Anatomical landmarks The location of anatomical structures can also be described in relation to different anatomical landmarks. They are used in anatomy, surface anatomy, surgery, and radiology. Structures may be described as being at the level of a specific spinal vertebra, depending on the section of the vertebral column the structure is at. The position is often abbreviated. For example, structures at the level of the fourth cervical vertebra may be abbreviated as "C4", at the level of the fourth thoracic vertebra "T4", and at the level of the third lumbar vertebra "L3". Because the sacrum and coccyx are fused, they are not often used to provide the location. References may also take origin from superficial anatomy, made to landmarks that are on the skin or visible underneath. For example, structures may be described relative to the anterior superior iliac spine, the medial malleolus or the medial epicondyle. Anatomical lines are used to describe anatomical location. 
For example, the mid-clavicular line is used as part of the cardiac exam in medicine to feel the apex beat of the heart. Mouth and teeth Special terms are used to describe the mouth and teeth. Fields such as osteology, palaeontology and dentistry apply special terms of location to describe the mouth and teeth. This is because although teeth may be aligned with their main axes within the jaw, some different relationships require special terminology as well; for example, teeth also can be rotated, and in such contexts terms like "anterior" or "lateral" become ambiguous. For example, the terms "distal" and "proximal" are also redefined to mean the distance away or close to the dental arch, and "medial" and "lateral" are used to refer to the closeness to the midline of the dental arch. Terms used to describe structures include "buccal" () and "palatal" () referring to structures close to the cheek and hard palate respectively. Hands and feet Several anatomical terms are particular to the hands and feet. Additional terms may be used to avoid confusion when describing the surfaces of the hand and what is the "anterior" or "posterior" surface. The term "anterior", while anatomically correct, can be confusing when describing the palm of the hand; Similarly is "posterior", used to describe the back of the hand and arm. This confusion can arise because the forearm can pronate and supinate and flip the location of the hand. For improved clarity, the directional term palmar () is commonly used to describe the front of the hand, and dorsal is the back of the hand. For example, the top of a dog's paw is its dorsal surface; the underside, either the palmar (on the forelimb) or the plantar (on the hindlimb) surface. The palmar fascia is palmar to the tendons of muscles which flex the fingers, and the dorsal venous arch is so named because it is on the dorsal side of the foot. In humans, volar can also be used synonymously with palmar to refer to the underside of the palm, but plantar is used exclusively to describe the sole. These terms describe location as palmar and plantar; For example, volar pads are those on the underside of hands or fingers; the plantar surface describes the sole of the heel, foot or toes. Similarly, in the forearm, for clarity, the sides are named after the bones. Structures closer to the radius are radial, structures closer to the ulna are ulnar, and structures relating to both bones are referred to as radioulnar. Similarly, in the lower leg, structures near the tibia (shinbone) are tibial and structures near the fibula are fibular (or peroneal). Rotational direction Anteversion and retroversion are complementary terms describing an anatomical structure that is rotated forwards (towards the front of the body) or backwards (towards the back of the body), relative to some other position. They are particularly used to describe the curvature of the uterus. Anteversion () describes an anatomical structure being tilted further forward than normal, whether pathologically or incidentally. For example, a woman's uterus typically is anteverted, tilted slightly forward. A misaligned pelvis may be anteverted, that is to say tilted forward to some relevant degree. Retroversion () describes an anatomical structure tilted back away from something. An example is a retroverted uterus. Other directional terms Several other terms are also used to describe location. These terms are not used to form the fixed axes. Terms include: Axial (): around the central axis of the organism or the extremity. 
Two related terms, "abaxial" and "adaxial", refer to locations away from and toward the central axis of an organism, respectively Luminal (): on the—hollow—inside of an organ's lumen (body cavity or tubular structure); adluminal is towards, abluminal is away from the lumen. Opposite to outermost (the adventitia, serosa, or the cavity's wall). Parietal (): pertaining to the outer wall of a body cavity. For example, the parietal peritoneum is the lining on the inside of the abdominal cavity. Parietal can also refer specifically to the parietal bone of the skull or associated structures. Terminal () at the extremity of a usually projecting structure. For example, "...an antenna with a terminal sensory hair". Visceral and viscus (): associated with organs within the body's cavities and pertaining to the innermost layer. For example, the stomach is covered with a lining called the visceral peritoneum, as opposed to the parietal peritoneum. Viscus can also be used to mean "organ". For example, the stomach is a viscus within the abdominal cavity, and visceral pain refers to pain originating from internal organs. Aboral (opposite to oral) is used to denote a location along the gastrointestinal tract that is relatively closer to the anus. Specific animals and other organisms Different terms are used because of different body plans in animals, whether animals stand on one or two legs, and whether an animal is symmetrical or not, as discussed above. For example, as humans are approximately bilaterally symmetrical organisms, anatomical descriptions usually use the same terms as those for other vertebrates. However, humans stand upright on two legs, meaning their anterior/posterior and ventral/dorsal directions are the same, and the inferior/superior directions are necessary. Humans do not have a beak, so a term such as "rostral" used to refer to the beak in some animals is instead used to refer to part of the brain; humans do also not have a tail so a term such as "caudal" that refers to the tail end may also be used in humans and animals without tails to refer to the hind part of the body. In invertebrates, the large variety of body shapes presents a difficult problem when attempting to apply standard directional terms. Depending on the organism, some terms are taken by analogy from vertebrate anatomy, and appropriate novel terms are applied as needed. Some such borrowed terms are widely applicable in most invertebrates; for example proximal, meaning "near" refers to the part of an appendage nearest to where it joins the body, and distal, meaning "standing away from" is used for the part furthest from the point of attachment. In all cases, the usage of terms is dependent on the body plan of the organism. Asymmetrical and spherical organisms In organisms with a changeable shape, such as amoeboid organisms, most directional terms are meaningless, since the shape of the organism is not constant and no distinct axes are fixed. Similarly, in spherically symmetrical organisms, there is nothing to distinguish one line through the centre of the organism from any other. An indefinite number of triads of mutually perpendicular axes could be defined, but any such choice of axes would be useless, as nothing would distinguish a chosen triad from any others. In such organisms, only terms such as superficial and deep, or sometimes proximal and distal, are usefully descriptive. 
Elongated organisms In organisms that maintain a constant shape and have one dimension longer than the other, at least two directional terms can be used. The long or longitudinal axis is defined by points at the opposite ends of the organism. Similarly, a perpendicular transverse axis can be defined by points on opposite sides of the organism. There is typically no basis for the definition of a third axis. Usually such organisms are planktonic (free-swimming) protists, and are nearly always viewed on microscope slides, where they appear essentially two-dimensional. In some cases a third axis can be defined, particularly where a non-terminal cytostome or other unique structure is present. Some elongated protists have distinctive ends of the body. In such organisms, the end with a mouth (or equivalent structure, such as the cytostome in Paramecium or Stentor), or the end that usually points in the direction of the organism's locomotion (such as the end with the flagellum in Euglena), is normally designated as the anterior end. The opposite end then becomes the posterior end. Properly, this terminology would apply only to an organism that is always planktonic (not normally attached to a surface), although the term can also be applied to one that is sessile (normally attached to a surface). Organisms that are attached to a substrate, such as sponges, animal-like protists also have distinctive ends. The part of the organism attached to the substrate is usually referred to as the basal end (), whereas the end furthest from the attachment is referred to as the apical end (). Radially symmetrical organisms Radially symmetrical organisms include those in the group Radiata primarily jellyfish, sea anemones and corals and the comb jellies. Adult echinoderms, such as starfish, sea urchins, sea cucumbers and others are also included, since they are pentaradial, meaning they have five discrete rotational symmetry. Echinoderm larvae are not included, since they are bilaterally symmetrical. Radially symmetrical organisms always have one distinctive axis. Cnidarians (jellyfish, sea anemones and corals) have an incomplete digestive system, meaning that one end of the organism has a mouth, and the opposite end has no opening from the gut (coelenteron). For this reason, the end of the organism with the mouth is referred to as the oral end (), and the opposite surface is the aboral end (). Unlike vertebrates, cnidarians have only a single distinctive axis. "Lateral", "dorsal", and "ventral" have no meaning in such organisms, and all can be replaced by the generic term peripheral (). Medial can be used, but in the case of radiates indicates the central point, rather than a central axis as in vertebrates. Thus, there are multiple possible radial axes and medio-peripheral (half-) axes. However, some biradially symmetrical comb jellies do have distinct "tentacular" and "pharyngeal" axes and are thus anatomically equivalent to bilaterally symmetrical animals. Spiders Special terms are used for spiders. Two specialized terms are useful in describing views of arachnid legs and pedipalps. Prolateral refers to the surface of a leg that is closest to the anterior end of an arachnid's body. Retrolateral refers to the surface of a leg that is closest to the posterior end of an arachnid's body. Most spiders have eight eyes in four pairs. All the eyes are on the carapace of the prosoma, and their sizes, shapes and locations are characteristic of various spider families and other taxa. 
Usually, the eyes are arranged in two roughly parallel, horizontal and symmetrical rows of eyes. Eyes are labelled according to their position as anterior and posterior lateral eyes (ALE) and (PLE); and anterior and posterior median eyes (AME) and (PME). See also Chirality Geometric terms of location Handedness Laterality Proper right and proper left Reflection symmetry Sinistral and dextral References Citations General sources Animal anatomy Medical terminology Orientation (geometry) Position
Anatomical terms of location
[ "Physics", "Mathematics", "Biology" ]
5,479
[ "Geometric measurement", "Point (geometry)", "Physical quantities", "Position", "Topology", "Space", "Vector physical quantities", "Geometry", "Spacetime", "Orientation (geometry)", "Wikipedia categories named after physical quantities", "Anatomy" ]
168,183
https://en.wikipedia.org/wiki/Transhuman
Transhuman, or trans-human, is the concept of an intermediary form between human and posthuman. In other words, a transhuman is a being that resembles a human in most respects but who has powers and abilities beyond those of standard humans. These abilities might include improved intelligence, awareness, strength, and/or durability. Transhumans appear in science-fiction, sometimes as cyborgs or genetically-enhanced humans. History of hypotheses In his Divine Comedy, Dante Alighieri coined the word "trasumanar" meaning "to transcend human nature, to pass beyond human nature" in the first canto of Paradiso. The use of the term "transhuman" goes back to French philosopher Pierre Teilhard de Chardin, who wrote in his 1949 book The Future of Mankind. And in a 1951 unpublished revision of the same book: In 1957 book New Bottles for New Wine, English evolutionary biologist Julian Huxley wrote: One of the first professors of futurology, FM-2030, who taught "new concepts of the Human" at The New School of New York City in the 1960s, used "transhuman" as shorthand for "transitional human". Calling transhumans the "earliest manifestation of new evolutionary beings", FM argued that signs of transhumans included physical and mental augmentations including prostheses, reconstructive surgery, intensive use of telecommunications, a cosmopolitan outlook and a globetrotting lifestyle, androgyny, mediated reproduction (such as in vitro fertilisation), absence of religious beliefs, and a rejection of traditional family values. FM-2030 used the concept of transhuman as an evolutionary transition, outside the confines of academia, in his contributing final chapter to the 1972 anthology Woman, Year 2000. In the same year, American cryonics pioneer Robert Ettinger contributed to conceptualization of "transhumanity" in his book Man into Superman. In 1982, American Natasha Vita-More authored a statement titled Transhumanist Arts Statement and outlined what she perceived as an emerging transhuman culture. Jacques Attali, writing in 2006, envisaged transhumans as an altruistic vanguard of the later 21st century: Vanguard players (I shall call them transhumans) will run (they are already running) relational enterprises in which profit will be no more than a hindrance, not a final goal. Each of these transhumans will be altruistic, a citizen of the planet, at once nomadic and sedentary, his neighbor's equal in rights and obligations, hospitable and respectful of the world. Together, transhumans will give birth to planetary institutions and change the course of industrial enterprises. In March 2007, American physicist Gregory Cochran and paleoanthropologist John Hawks published a study, alongside other recent research on which it builds, which amounts to a radical reappraisal of traditional views, which tended to assume that humans have reached an evolutionary endpoint. Physical anthropologist Jeffrey McKee argued the new findings of accelerated evolution bear out predictions he made in a 2000 book The Riddled Chain. Based on computer models, he argued that evolution should speed up as a population grows because population growth creates more opportunities for new mutations; and the expanded population occupies new environmental niches, which would drive evolution in new directions. Whatever the implications of the recent findings, McKee concludes that they highlight a ubiquitous point about evolution: "every species is a transitional species". 
Transhumans in fiction Examples of transhuman entities in fiction exist within many popular video games. For example, the Bioshock media franchise depicts individuals receiving doses of a substance called ADAM, harvested from a fictional type of sea slugs, able to give the user fantastical powers through genetic engineering. Thus, previously standard humans can gain the ability to summon ice, wield lightning, turn invisible, and commit other seeming miracles due to their enhancement. A 2014 article from Ars Technica speculated that mutating clumps of mobile genetic elements known as "transposons" could possibly be used as a semi-parasitic tool to raise people to a higher status in terms of their abilities, making at least part of the game's scenario theoretically plausible. Similar commentary later occurred from gamers with the advent of CRISPR gene editing. Transhumans also have played a major role in the Star Trek media franchise. For example, in "Space Seed", the twenty-second episode of the first season of Star Trek: The Original Series that initially aired on February 16, 1967, a charismatic and physically intimidating genius called Khan Noonien Singh attempts to take control of the Enterprise operated by the show's protagonists. The selectively bred individual had advanced beyond simple human status and nearly succeeds. The starship's crew opt to exile the leader and his league of similar beings to a habitable but isolated alien planet instead of assigning a true punishment per se, a ruling which he accepts without protest. Played by Ricardo Montalbán, Khan returns in the 1982 film Star Trek II: The Wrath of Khan, which broadly serves as a sequel to the episode. References to "Space Seed" appear in episodes of Star Trek: Deep Space Nine, Star Trek: Enterprise, and the 2013 film Star Trek Into Darkness as well. References External links Space of Possible Modes of Being World Transhumanist Association Teilhard de Chardin and Transhumanism Transhuman documentary 1940s neologisms Transhumanism
Transhuman
[ "Technology", "Engineering", "Biology" ]
1,133
[ "Genetic engineering", "Transhumanism", "Ethics of science and technology" ]
168,185
https://en.wikipedia.org/wiki/Sleeper%20ship
A sleeper ship is a hypothetical type of crewed spacecraft, or starship in which most or all of the crew spend the journey in some form of hibernation or suspended animation. The only known technology that allows long-term suspended animation of humans is the freezing of early-stage human embryos through embryo cryopreservation, which is behind the concept of embryo space colonization. Description The most common role of sleeper ships in fiction is for interstellar or intergalactic travel, usually at sub-light speed. Travel times for such journeys could reach into the hundreds or thousands of years, making some form of life extension, such as suspended animation, necessary for the original crew to live to see their destination. Suspended animation is also required on ships that cannot be used as generation ships. Freezing the astronauts would probably involve whole-body vitrification and would, most likely, be frozen at 145 kelvins to reduce the risk of fracturing. Suspended animation can also be useful to reduce the consumption of life support system resources by crew members who are not needed during the trip, or by an author as a plot device, and for this reason, sleeper ships sometimes also appear in contexts involving travel within the solar system or any other system of planets orbiting one star. Examples in fiction There are numerous examples of sleeper ships in science fiction literature and films. Some of the best-known examples are: "Far Centaurus", published in Destination: Universe! by AE van Vogt The interplanetary sleeper ship USSC Discovery One is one of the main subjects of the whole Space Odyssey franchise. As it is told in 2001: A Space Odyssey, six crew members are on the spacecraft: Commander "Dave" Bowman and Frank Poole were de-hibernated before the others; Victor Kaminsky, Jack Kimball and Charles Hunter were killed by the sixth member: the psychopathic artificial intelligence HAL 9000. Pandorum The TV series Lost in Space used a sleeper ship, the Jupiter II, intended to hold the crew for the five-and-a-half-year journey to Alpha Centauri. Dr. Smith, a saboteur and accidental stowaway, is forced to revive the crew to save the ship from meteors. The film Lost in Space in which the suspended animation system fails, bringing the crew of Jupiter 2 out of sleep prematurely. Nostromo – The sleeper/cargo ship in the film Alien. Sulaco – The sleeper/war ship in the film Aliens. The Hunter-Gratzner in the film Pitch Black transports a number of its passengers in cryostasis. Planet of the Apes Avatar Stargate SG-1 – A ship is found with its crew in cryostasis. Stargate Universe – The crew of the Destiny enter stasis in order to sleep during a potentially millennia-long trip. Prometheus SS Botany Bay, a sleeper ship from the Star Trek: The Original Series episode "Space Seed" (it is also mentioned in Star Trek II: The Wrath of Khan and Star Trek Into Darkness), transporting a crew including Khan Noonien Singh. One (Star Trek: Voyager) illustrates a similar technique. Called a "sail-ship" by Cordwainer Smith in Think Blue, Count Two Cargo New Mayflower and Ark from Frederik Pohl's novel The World at the End of Time After Earth Homeworld Freelancer – Five sleeper ships built by the Alliance: the Bretonia, the Rheinland, the Hispania, the Kusari and the Liberty. 
Of these five, all but the Hispania go on to found different star systems, each named after them: the Hispania can be found in the game as an abandoned derelict, the descendants of its passengers having become the "Outcast" and "Corsair" pirate factions. The space station-starship hybrid Endurance from Interstellar is used by NASA astronauts to travel through a wormhole to another galaxy in order to find a new planet for humanity or bomb the planet with human embryos. Event Horizon – The vessel Lewis and Clark is dispatched to find what happened to the ship Event Horizon. The Crew is put in hibernation. In the official trailer of the video game Civilization: Beyond Earth, a sleeper ship was seen sending humans from Earth to an alien planet. Mass Effect Andromeda – Representatives of the human race, in addition to that of other Milky Way Galaxy races, travel to the Andromeda Galaxy aboard a series of arks, with the aim of colonizing a series of new homeworlds. The travel time is only 600 years due to the discovery of a form of faster-than-light travel that cuts down the time to traverse the 2.5 million light years between galaxies (by comparison, it is mentioned in the game that a cargo vessel, also sent in the direction of Andromeda, will not arrive for more than two million years.) Halo Hyperion, a novel by Dan Simmons. The Seed Ships are a type of slower than light travel that put the passengers into a fugue-like state for the length of the trip. Heaven's Vault Passengers – It features the sleeper ship Avalon. The suspended animation system, which keeps 5,000 passengers in hibernation on board the Avalon during its 120-year trip to the planet Homestead II, fails as a result of asteroid collisions causing a mechanical engineer to be awoken prematurely. Don't Look Up – The US president and some of her inner circle escape an extinction event where a large comet hits the Earth. See also Embryo space colonization Generation ship Interstellar ark Seedship Torpor Inducing Transfer Habitat For Human Stasis To Mars References Fiction about suspended animation Fictional technology Hypothetical spacecraft Interstellar travel Fictional spacecraft by type Sleep in fiction
Sleeper ship
[ "Astronomy", "Technology", "Biology" ]
1,166
[ "Exploratory engineering", "Astronomical hypotheses", "Behavior", "Sleep in fiction", "Hypothetical spacecraft", "Interstellar travel", "Sleep" ]
168,239
https://en.wikipedia.org/wiki/Lifting%20body
A lifting body is a fixed-wing aircraft or spacecraft configuration in which the body itself produces lift. In contrast to a flying wing, which is a wing with minimal or no conventional fuselage, a lifting body can be thought of as a fuselage with little or no conventional wing. Whereas a flying wing seeks to maximize cruise efficiency at subsonic speeds by eliminating non-lifting surfaces, lifting bodies generally minimize the drag and structure of a wing for subsonic, supersonic and hypersonic flight, or spacecraft re-entry. All of these flight regimes pose challenges for proper flight safety. Lifting bodies were a major area of research in the 1960s and 70s as a means to build a small and lightweight crewed spacecraft. The US built a number of lifting body rocket planes to test the concept, as well as several rocket-launched re-entry vehicles that were tested over the Pacific. Interest waned as the US Air Force lost interest in the crewed mission, and major development ended during the Space Shuttle design process when it became clear that the highly shaped fuselages made it difficult to fit fuel tankage. Advanced spaceplane concepts in the 1990s and 2000s did use lifting-body designs. Examples include the HL-20 Personnel Launch System (1990) and the Prometheus spaceplane (2010). The Dream Chaser lifting-body spaceplane, an extension of HL-20 technology, was proposed as one of three vehicles to potentially carry US crew to and from the International Space Station, but eventually was selected as a resupply vehicle instead. In 2015 the ESA Intermediate eXperimental Vehicle performed the first ever successful reentry of a lifting body spacecraft. History The lifting body had been imagined by 1917, in which year an aircraft with something like a delta wing plan form with a thick included fuselage was described in a patent by Roy Scroggs. However at low airspeeds the lifting body is inefficient and did not enter mainstream airplane design. Aerospace-related lifting body research arose from the idea of spacecraft re-entering the Earth's atmosphere and landing much like a regular airplane. Following atmospheric re-entry, the capsule spacecraft from the Mercury, Gemini, and Apollo series had very little control over where they landed. A steerable spacecraft with wings could significantly extend its landing envelope. However, the vehicle's wings would have to be designed to withstand the dynamic and thermal stresses of both re-entry and hypersonic flight. One proposal eliminated wings altogether: design the fuselage body to produce lift by itself. NASA's refinements of the lifting body concept began in 1962 with R. Dale Reed of NASA's Armstrong Flight Research Center. The first full-size model to come out of Reed's program was the NASA M2-F1, an unpowered craft made of wood. Initial tests were performed by towing the M2-F1 along a dry lakebed at Edwards Air Force Base California, behind a modified Pontiac Catalina. Later the craft was towed behind a C-47 and released. Since the M2-F1 was a glider, a small rocket motor was added in order to extend the landing envelope. The M2-F1 was soon nicknamed the "Flying Bathtub". In 1963, NASA began programs with heavier rocket-powered lifting-body vehicles to be air launched from under the starboard wing of a NB-52B, a derivative of the B-52 jet bomber. The first flights started in 1966. Of the Dryden lifting bodies, all but the unpowered NASA M2-F1 used an XLR11 rocket engine as was used on the Bell X-1. 
A follow-on design designated the Northrop HL-10 was developed at NASA Langley Research Center. Air flow separation caused the crash of the Northrop M2-F2 lifting body. The HL-10 attempted to solve part of this problem by angling the port and starboard vertical stabilizers outward and enlarging the center one. Starting 1965 the Russian lifting-body Mikoyan-Gurevich MiG-105 or EPOS (Russian acronym for Experimental Passenger Orbital Aircraft) was developed and several test flights made. Work ended in 1978 when the efforts shifted to the Buran program, while work on another small-scale spacecraft partly continued in the Bor program. The IXV is a European Space Agency lifting body experimental re-entry vehicle intended to validate European reusable launchers which could be evaluated in the frame of the FLPP program. The IXV made its first flight in February 2015, launched by a Vega rocket. Orbital Sciences proposed a commercial lifting-body spaceplane in 2010. The Prometheus is more fully described below. Aerospace applications Lifting bodies pose complex control, structural, and internal configuration issues. Lifting bodies were eventually rejected in favor of a delta wing design for the Space Shuttle. Data acquired in flight test using high-speed landing approaches at very steep descent angles and high sink rates was used for modeling Shuttle flight and landing profiles. In planning for atmospheric re-entry, the landing site is selected in advance. For reusable reentry vehicles, typically a primary site is preferred that is closest to the launch site in order to reduce costs and improve launch turnaround time. However, weather near the landing site is a major factor in flight safety. In some seasons, weather at landing sites can change quickly relative to the time necessary to initiate and execute re-entry and safe landing. Due to weather, it is possible the vehicle may have to execute a landing at an alternate site. Furthermore, most airports do not have runways of sufficient length to support the approach landing speed and roll distance required by spacecraft. Few airports exist in the world that can support or be modified to support this type of requirement. Therefore, alternate landing sites are very widely spaced across the U.S. and around the world. The Shuttle's delta wing design was driven by these issues. These requirements were further exacerbated by requirements that extended the Shuttle's flight landing envelope. Nonetheless, the lifting body concept has been implemented in a number of other aerospace programs, the previously mentioned NASA X-38, Lockheed Martin X-33, BAC's Multi Unit Space Transport And Recovery Device, Europe's EADS Phoenix, and the joint Russian-European Kliper spacecraft. Of the three basic design shapes usually analyzed for such programs (capsule, lifting body, aircraft) the lifting body may offer the best trade-off in terms of maneuverability and thermodynamics while meeting its customers' mission requirements. Current systems The Dream Chaser is a suborbital and orbital vertical-takeoff, horizontal-landing (VTHL) lifting-body spaceplane being developed by Sierra Nevada Corporation (SNC). The Dream Chaser design is planned to eventually carry up to seven people to and from low Earth orbit, and the spaceplane is currently planned to be used for delivering cargo to the International Space Station under the Commercial Resupply Services program. The vehicle will launch vertically on a Vulcan Centaur and land horizontally on conventional runways. 
Body lift Some aircraft with wings also employ bodies that generate lift. Some of the early 1930s high-wing monoplane designs of the Bellanca Aircraft Company, such as the Bellanca Aircruiser, had vaguely airfoil-shaped fuselages capable of generating some lift, with even the wing struts on some versions given widened fairings to give them some lift-generating capability. The Gee Bee R-1 Super Sportster racing plane of the 1930s, likewise, from more modern aerodynamic studies, has been shown to have had considerable ability to generate lift with its fuselage design, important for the R-1's intended racing role, while in highly banked pylon turns while racing. Vincent Burnelli developed several aircraft between the 1920s and 1950 that used fuselage lift. Like the earlier Bellanca monoplanes, the Short SC.7 Skyvan produces a substantial amount of lift from its fuselage shape, almost as much as the 35% each of the wings produces. Fighters like the F-15 Eagle also produce substantial lift from the wide fuselage between the wings. Because the F-15 Eagle's wide fuselage is so efficient at lift, an F-15 is able to land successfully with only one wing, albeit under nearly full power, with thrust contributing significantly to lift. In the summer of 1983, an Israeli F-15 staged a mock dogfight with Skyhawks for training purposes, near Nahal Tzin in the Negev desert. During the exercise, one of the Skyhawks miscalculated and collided forcefully with the F-15's wing root. The F-15's pilot was aware that the wing had been seriously damaged, but decided to try and land in a nearby airbase, not knowing the extent of his wing damage. It was only after he had landed, when he climbed out of the cockpit and looked backward, that the pilot realized what had happened: the wing had been completely torn off the plane, and he had landed the plane with only one wing attached. A few months later, the damaged F-15 had been given a new wing, and returned to operational duty in the squadron. The engineers at McDonnell Douglas had a hard time believing the story of the one-winged landing: as far as their planning models were concerned, this was an impossibility. In 2010, Orbital Sciences proposed the Prometheus "blended lifting-body" spaceplane vehicle, about one-quarter the size of the Space Shuttle, as a commercial option for carrying astronauts to low Earth orbit under the commercial crew program. The Vertical Takeoff, Horizontal Landing (VTHL) vehicle was to have been launched on a human-rated Atlas V rocket but would land on a runway. The initial design was to have carried a crew of 4, but it could carry up to 6, or a combination of crew and cargo. In addition to Orbital Sciences, the consortium behind the proposal included Northrop Grumman, which would have built the spaceplane, and the United Launch Alliance, which would have provided the launch vehicle. Failing to be selected for a CCDev phase 2 award by NASA, Orbital announced in April 2011 that they would likely wind down their efforts to develop a commercial crew vehicle. Design principles of lifting bodies are used also in the construction of hybrid airships. Armstrong Flight Research Center The US government developed a variety of proof-of-concept and flight-test vehicle lifting body designs from the early 1960s through the mid-1970s at Armstrong Flight Research Center. These included: M2-F1 M2-F2 M2-F3 HL-10 X-24A X-24B Pilots and flights Wood, Haise and Engle each made a single car-towed flight of the M2-F1. 
Popular culture Lifting bodies have appeared in some science fiction works, including the movie Marooned, and as John Crichton's spacecraft Farscape-1 in the TV series Farscape. The Discovery Channel TV series conjectured using lifting bodies to deliver a probe to a distant earth-like planet in the animated Alien Planet. Gerry Anderson's 1969 Doppelgänger used a VTOL lifting body lander / ascender to visit an Earth-like planet, only to crash in both attempts. His series UFO featured a lifting body craft visually similar to the M2-F2 for orbital operations ("The Man Who Came Back"). In the Buzz Aldrin's Race Into Space computer game, a modified X-24A becomes an alternative lunar capable spacecraft that the player can choose over the Gemini or Apollo capsule. The 1970s television program The Six Million Dollar Man used footage of a lifting body aircraft, culled from actual NASA exercises, in the show's title sequence. The scenes included an HL-10's separation from its carrier plane—a modified B-52—and an M2-F2 piloted by Bruce Peterson, crashing and tumbling violently along the Edwards dry lakebed runway. The cause of the crash was attributed to the onset of Dutch roll stemming from control instability as induced by flow separation. The episode "The Deadly Replay" (season 2 episode 8 aired 9/22/1974) features the HL-10 as a prop of the story. See also Martin X-23 PRIME BOR-4 Kliper Lockheed Star Clipper Lockheed Martin X-33 HL-20 Personnel Launch System Dream Chaser (spacecraft) Space Rider (spacecraft) Prometheus (spacecraft) Facetmobile Blended wing body Flying wing MUSTARD 1953 Horton "Wingless" http://aerospacelegacyfoundation.com/aviation-history-flying-wings/ Arup S-2 1932, Snyder "Arup" (blurs the boundary between "flying wing" and lifting body) Burnelli RB-1 References References Other sources McPhee, John (1973), The Deltoid Pumpkin Seed; . (Story of the Aereon, a combination aerodyne/aerostat, a.k.a. hybrid airship.) External links Lifting Bodies Fact Sheet (NASA) NASA Tech Paper 3101: Numerical Analysis and Simulation of an Assured Crew Return Vehicle Flow Field (The math of airflow over a lifting body) NASA Photo Collections from Dryden Flight Research Center HL-10 M2-F1 M2-F2 M2-F3 X-24A and X24B Short M2-F1 history Some history of lifting body flight Wingless Flight: The Lifting Body Story. NASA History Series SP-4220 1997 PDF Aircraft configurations Wing configurations
Lifting body
[ "Engineering" ]
2,746
[ "Aircraft configurations", "Aerospace engineering" ]
168,243
https://en.wikipedia.org/wiki/John%20C.%20Baez
John Carlos Baez (; born June 12, 1961) is an American mathematical physicist and a professor of mathematics at the University of California, Riverside (UCR) in Riverside, California. He has worked on spin foams in loop quantum gravity, applications of higher categories to physics, and applied category theory. Additionally, Baez is known on the World Wide Web as the author of the crackpot index. Education John C. Baez attended Princeton University where he graduated with an A.B. in mathematics in 1982; his senior thesis was titled "Recursivity in quantum mechanics", under the supervision of John P. Burgess. He earned his doctorate in 1986 from the Massachusetts Institute of Technology under the direction of Irving Segal. Career Baez was a post-doctoral researcher at Yale University. Since 1989, he has been a faculty member at UC Riverside. From 2010 to 2012, he took a leave of absence to work at the Centre for Quantum Technologies in Singapore and has since worked there in the summers. Research His research includes work on spin foams in loop quantum gravity. He also worked on applications of higher categories to physics, such as the cobordism hypothesis. He has also dedicated many efforts towards applied category theory, including network theory. Recognition Baez won the 2013 Levi L. Conant Prize for his expository paper with John Huerta, "The algebra of grand unified theories". He was named a Fellow of the American Mathematical Society, in the 2022 class of fellows, "for contributions to higher category theory and mathematical physics, and for popularization of these subjects". Forums Baez is the author of This Week's Finds in Mathematical Physics, an irregular column on the internet featuring mathematical exposition and criticism. He started This Week's Finds in 1993 for the Usenet community, and it now has a following in its new form, the blog Azimuth. This Week's Finds anticipated the concept of a personal weblog. Azimuth also covers other topics that include combating climate change and various other environmental issues. He is also co-founder of the n-Category Café (or n-Café), a group blog concerning higher category theory and its applications, as well as its philosophical repercussions. The founders of the blog are Baez, David Corfield and Urs Schreiber, and the list of blog authors has extended since. The n-Café community is associated with the nLab wiki and nForum forum, which now run independently of n-Café. It is hosted on The University of Texas at Austin's official website. Family Baez's uncle Albert Baez was a physicist and a co-inventor of the X-ray microscope; Albert interested him in physics as a child. Through Albert, he is cousins with singers Joan Baez and Mimi Fariña. John Baez is married to Lisa Raphals who is a professor of Chinese and comparative literature at UCR. 
Selected publications Papers Books References External links Baez's home page at UCR's official website (ucr.edu) Azimuth blog by Baez The n-Category Café Home page in nLab Essays "Should I be thinking about quantum gravity?", essay by Baez at The World Question Center 1961 births 20th-century American mathematicians 21st-century American mathematicians 21st-century American physicists American bloggers Massachusetts Institute of Technology School of Science alumni Princeton University alumni Yale University fellows Usenet people Category theorists University of California, Riverside faculty Living people Loop quantum gravity researchers American relativity theorists Higher category theory American academics of Mexican descent Science bloggers 21st-century science writers Mathematicians from California Academics from San Francisco Hispanic and Latino American scientists Fellows of the American Mathematical Society Hispanic and Latino American physicists Historical treatment of octonions
John C. Baez
[ "Mathematics" ]
771
[ "Higher category theory", "Mathematical structures", "Category theory", "Category theorists" ]
168,244
https://en.wikipedia.org/wiki/F-number
An f-number is a measure of the light-gathering ability of an optical system such as a camera lens. It is calculated by dividing the system's focal length by the diameter of the entrance pupil ("clear aperture"). The f-number is also known as the focal ratio, f-ratio, or f-stop, and it is key in determining the depth of field, diffraction, and exposure of a photograph. The f-number is dimensionless and is usually expressed using a lower-case hooked f with the format f/N, where N is the f-number. The f-number is also known as the inverse relative aperture, because it is the inverse of the relative aperture, defined as the aperture diameter divided by focal length. The relative aperture indicates how much light can pass through the lens at a given focal length. A lower f-number means a larger relative aperture and more light entering the system, while a higher f-number means a smaller relative aperture and less light entering the system. The f-number is related to the numerical aperture (NA) of the system, which measures the range of angles over which light can enter or exit the system. The numerical aperture takes into account the refractive index of the medium in which the system is working, while the f-number does not. Notation The f-number N is given by N = f/D, where f is the focal length and D is the diameter of the entrance pupil (effective aperture). It is customary to write f-numbers preceded by "f/", which forms a mathematical expression of the entrance pupil's diameter in terms of f and N. For example, if a lens's focal length were twice the diameter of its entrance pupil, the f-number would be 2. This would be expressed as f/2 in a lens system, and the aperture diameter would be equal to half the focal length. Camera lenses often include an adjustable diaphragm, which changes the size of the aperture stop and thus the entrance pupil size. This allows the user to vary the f-number as needed. The entrance pupil diameter is not necessarily equal to the aperture stop diameter, because of the magnifying effect of lens elements in front of the aperture. Ignoring differences in light transmission efficiency, a lens with a greater f-number projects darker images. The brightness of the projected image (illuminance) relative to the brightness of the scene in the lens's field of view (luminance) decreases with the square of the f-number. For a given focal length, an f/2 lens has an entrance pupil twice the diameter of an f/4 lens. Since the area is proportional to the square of the pupil diameter, the amount of light admitted by the f/2 lens is four times that of the f/4 lens. To obtain the same photographic exposure, the exposure time must be reduced by a factor of four. A lens of twice the focal length at the same f-number has an entrance pupil of twice the diameter, and therefore four times the area, so it collects four times as much light from each object in the lens's field of view. But compared to the shorter lens, the longer lens projects an image of each object twice as high and twice as wide, covering four times the area, and so both lenses produce the same illuminance at the focal plane when imaging a scene of a given luminance. Stops, f-stop conventions, and exposure The word stop is sometimes confusing due to its multiple meanings. A stop can be a physical object: an opaque part of an optical system that blocks certain rays.
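As a rough illustration of the relationships just described — N = f/D, and image brightness falling with the square of the f-number — the following Python sketch can be used. The function names and the sample focal lengths and pupil diameters are purely illustrative, not values taken from the article:

```python
def f_number(focal_length_mm, pupil_diameter_mm):
    """f-number N = focal length / entrance-pupil diameter."""
    return focal_length_mm / pupil_diameter_mm

def light_ratio(n_a, n_b):
    """How much more light a lens at f-number n_a admits than one at n_b,
    for the same focal length: admitted light is proportional to 1/N**2."""
    return (n_b / n_a) ** 2

# Illustrative numbers: a lens whose entrance pupil is half its focal length is f/2.
print(f_number(100.0, 50.0))   # 2.0  -> written f/2
print(f_number(100.0, 25.0))   # 4.0  -> written f/4
print(light_ratio(2.0, 4.0))   # 4.0  -> f/2 admits four times the light of f/4
```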
The aperture stop is the aperture setting that limits the brightness of the image by restricting the input pupil size, while a field stop is a stop intended to cut out light that would be outside the desired field of view and might cause flare or other problems if not stopped. In photography, stops are also a unit used to quantify ratios of light or exposure, with each added stop meaning a factor of two, and each subtracted stop meaning a factor of one-half. The one-stop unit is also known as the EV (exposure value) unit. On a camera, the aperture setting is traditionally adjusted in discrete steps, known as f-stops. Each "stop" is marked with its corresponding f-number, and represents a halving of the light intensity from the previous stop. This corresponds to a decrease of the pupil and aperture diameters by a factor of 1/ or about 0.7071, and hence a halving of the area of the pupil. Most modern lenses use a standard f-stop scale, which is an approximately geometric sequence of numbers that corresponds to the sequence of the powers of the square root of 2: , , , , , , , , , , , , , , , etc. Each element in the sequence is one stop lower than the element to its left, and one stop higher than the element to its right. The values of the ratios are rounded off to these particular conventional numbers, to make them easier to remember and write down. The sequence above is obtained by approximating the following exact geometric sequence: In the same way as one f-stop corresponds to a factor of two in light intensity, shutter speeds are arranged so that each setting differs in duration by a factor of approximately two from its neighbour. Opening up a lens by one stop allows twice as much light to fall on the film in a given period of time. Therefore, to have the same exposure at this larger aperture as at the previous aperture, the shutter would be opened for half as long (i.e., twice the speed). The film will respond equally to these equal amounts of light, since it has the property of reciprocity. This is less true for extremely long or short exposures, where there is reciprocity failure. Aperture, shutter speed, and film sensitivity are linked: for constant scene brightness, doubling the aperture area (one stop), halving the shutter speed (doubling the time open), or using a film twice as sensitive, has the same effect on the exposed image. For all practical purposes extreme accuracy is not required (mechanical shutter speeds were notoriously inaccurate as wear and lubrication varied, with no effect on exposure). It is not significant that aperture areas and shutter speeds do not vary by a factor of precisely two. Photographers sometimes express other exposure ratios in terms of 'stops'. Ignoring the f-number markings, the f-stops make a logarithmic scale of exposure intensity. Given this interpretation, one can then think of taking a half-step along this scale, to make an exposure difference of a "half stop". Fractional stops Most twentieth-century cameras had a continuously variable aperture, using an iris diaphragm, with each full stop marked. Click-stopped aperture came into common use in the 1960s; the aperture scale usually had a click stop at every whole and half stop. On modern cameras, especially when aperture is set on the camera body, f-number is often divided more finely than steps of one stop. Steps of one-third stop ( EV) are the most common, since this matches the ISO system of film speeds. Half-stop steps are used on some cameras. 
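The full-stop scale described above is simply successive powers of the square root of two, rounded to conventional markings. A minimal Python sketch that generates the exact sequence (the two-decimal rounding is for display only; the conventional markings 5.6, 11, 22 and 45 are coarser roundings of the exact values):

```python
from math import sqrt

def full_stop_scale(count=13):
    """Exact full-stop f-numbers: N_k = sqrt(2)**k for k = 0, 1, 2, ..."""
    return [sqrt(2) ** k for k in range(count)]

print([round(n, 2) for n in full_stop_scale()])
# [1.0, 1.41, 2.0, 2.83, 4.0, 5.66, 8.0, 11.31, 16.0, 22.63, 32.0, 45.25, 64.0]
# Conventional markings: 1, 1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22, 32, 45, 64.
```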
Usually the full stops are marked, and the intermediate positions click but are not marked. As an example, the aperture that is one-third stop smaller than is , two-thirds smaller is , and one whole stop smaller is . The next few f-stops in this sequence are: To calculate the steps in a full stop (1 EV) one could use The steps in a half stop ( EV) series would be The steps in a third stop ( EV) series would be As in the earlier DIN and ASA film-speed standards, the ISO speed is defined only in one-third stop increments, and shutter speeds of digital cameras are commonly on the same scale in reciprocal seconds. A portion of the ISO range is the sequence while shutter speeds in reciprocal seconds have a few conventional differences in their numbers (, , and second instead of , , and ). In practice the maximum aperture of a lens is often not an integral power of (i.e., to the power of a whole number), in which case it is usually a half or third stop above or below an integral power of . Modern electronically controlled interchangeable lenses, such as those used for SLR cameras, have f-stops specified internally in -stop increments, so the cameras' -stop settings are approximated by the nearest -stop setting in the lens. Standard full-stop f-number scale Including aperture value AV: Conventional and calculated f-numbers, full-stop series: Typical one-half-stop f-number scale Typical one-third-stop f-number scale Sometimes the same number is included on several scales; for example, an aperture of may be used in either a half-stop or a one-third-stop system; sometimes and and other differences are used for the one-third stop scale. Typical one-quarter-stop f-number scale H-stop An H-stop (for hole, by convention written with capital letter H) is an f-number equivalent for effective exposure based on the area covered by the holes in the diffusion discs or sieve aperture found in Rodenstock Imagon lenses. T-stop A T-stop (for transmission stops, by convention written with capital letter T) is an f-number adjusted to account for light transmission efficiency (transmittance). A lens with a T-stop of projects an image of the same brightness as an ideal lens with 100% transmittance and an f-number of . A particular lens's T-stop, , is given by dividing the f-number by the square root of the transmittance of that lens: For example, an lens with transmittance of 75% has a T-stop of 2.3: Since real lenses have transmittances of less than 100%, a lens's T-stop number is always greater than its f-number. With 8% loss per air-glass surface on lenses without coating, multicoating of lenses is the key in lens design to decrease transmittance losses of lenses. Some reviews of lenses do measure the T-stop or transmission rate in their benchmarks. T-stops are sometimes used instead of f-numbers to more accurately determine exposure, particularly when using external light meters. Lens transmittances of 60%–95% are typical. T-stops are often used in cinematography, where many images are seen in rapid succession and even small changes in exposure will be noticeable. Cinema camera lenses are typically calibrated in T-stops instead of f-numbers. In still photography, without the need for rigorous consistency of all lenses and cameras used, slight differences in exposure are less important; however, T-stops are still used in some kinds of special-purpose lenses such as Smooth Trans Focus lenses by Minolta and Sony. 
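Two of the relations in this part can be made concrete with a short sketch: the one-third-stop series (each step multiplies the f-number by 2**(1/6), since a full stop multiplies it by the square root of two) and the T-stop rule of dividing the f-number by the square root of the transmittance. One assumption is involved: the damaged example sentence above gives a transmittance of 75% and a T-stop of 2.3 but omits the f-number; working backwards, that value appears to be 2.0, and the code below treats it as such.

```python
def third_stop_scale(count=10, start=1.0):
    """One-third-stop f-numbers: each step multiplies N by 2**(1/6)."""
    return [start * 2 ** (k / 6) for k in range(count)]

def t_stop(n, transmittance):
    """T-stop = f-number / sqrt(transmittance)."""
    return n / transmittance ** 0.5

print([round(n, 1) for n in third_stop_scale()])
# [1.0, 1.1, 1.3, 1.4, 1.6, 1.8, 2.0, 2.2, 2.5, 2.8]
# (the conventional markings label the third value 1.2 rather than 1.3)

print(round(t_stop(2.0, 0.75), 1))   # 2.3 -- an assumed f/2 lens with 75% transmittance
```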
ASA/ISO numbers Photographic film's and electronic camera sensor's sensitivity to light is often specified using ASA/ISO numbers. Both systems have a linear number where a doubling of sensitivity is represented by a doubling of the number, and a logarithmic number. In the ISO system, a 3° increase in the logarithmic number corresponds to a doubling of sensitivity. Doubling or halving the sensitivity is equal to a difference of one T-stop in terms of light transmittance. Gain Most electronic cameras allow to amplify the signal coming from the pickup element. This amplification is usually called gain and is measured in decibels. Every of gain is equivalent to one T-stop in terms of light transmittance. Many camcorders have a unified control over the lens f-number and gain. In this case, starting from zero gain and fully open iris, one can either increase f-number by reducing the iris size while gain remains zero, or one can increase gain while iris remains fully open. Sunny 16 rule An example of the use of f-numbers in photography is the sunny 16 rule: an approximately correct exposure will be obtained on a sunny day by using an aperture of and the shutter speed closest to the reciprocal of the ISO speed of the film; for example, using ISO 200 film, an aperture of and a shutter speed of second. The f-number may then be adjusted downwards for situations with lower light. Selecting a lower f-number is "opening up" the lens. Selecting a higher f-number is "closing" or "stopping down" the lens. Effects on image sharpness Depth of field increases with f-number, as illustrated in the image here. This means that photographs taken with a low f-number (large aperture) will tend to have subjects at one distance in focus, with the rest of the image (nearer and farther elements) out of focus. This is frequently used for nature photography and portraiture because background blur (the aesthetic quality known as 'bokeh') can be aesthetically pleasing and puts the viewer's focus on the main subject in the foreground. The depth of field of an image produced at a given f-number is dependent on other parameters as well, including the focal length, the subject distance, and the format of the film or sensor used to capture the image. Depth of field can be described as depending on just angle of view, subject distance, and entrance pupil diameter (as in von Rohr's method). As a result, smaller formats will have a deeper field than larger formats at the same f-number for the same distance of focus and same angle of view since a smaller format requires a shorter focal length (wider angle lens) to produce the same angle of view, and depth of field increases with shorter focal lengths. Therefore, reduced–depth-of-field effects will require smaller f-numbers (and thus potentially more difficult or complex optics) when using small-format cameras than when using larger-format cameras. Beyond focus, image sharpness is related to f-number through two different optical effects: aberration, due to imperfect lens design, and diffraction which is due to the wave nature of light. The blur-optimal f-stop varies with the lens design. For modern standard lenses having 6 or 7 elements, the sharpest image is often obtained around –, while for older standard lenses having only 4 elements (Tessar formula) stopping to will give the sharpest image. The larger number of elements in modern lenses allow the designer to compensate for aberrations, allowing the lens to give better pictures at lower f-numbers. 
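A small sketch of the sunny 16 rule described above. The rule's name implies the f/16 aperture even though the specific figures in the example sentence were lost; the 1/ISO shutter speed follows the text, and the helper for trading aperture against shutter time uses the one-stop-equals-factor-of-two relationship from earlier in the article. Function names are illustrative:

```python
def sunny_16(iso):
    """Sunny 16 rule: f/16 and a shutter time of roughly 1/ISO seconds on a sunny day."""
    return 16.0, 1.0 / iso

def open_up(n, shutter_s, stops):
    """Keep the same exposure while opening the aperture by `stops` stops:
    divide N by sqrt(2) per stop and halve the shutter time per stop."""
    return n / 2 ** (stops / 2), shutter_s / 2 ** stops

n, t = sunny_16(200)
print(n, t)              # 16.0 0.005      -> f/16 at 1/200 s for ISO 200
print(open_up(n, t, 2))  # (8.0, 0.00125)  -> f/8 at 1/800 s, same exposure
```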
At small apertures, depth of field and aberrations are improved, but diffraction creates more spreading of the light, causing blur. Light falloff is also sensitive to f-stop. Many wide-angle lenses will show a significant light falloff (vignetting) at the edges for large apertures. Photojournalists have a saying, " and be there", meaning that being on the scene is more important than worrying about technical details. Practically, (in 35 mm and larger formats) allows adequate depth of field and sufficient lens speed for a decent base exposure in most daylight situations. Human eye Computing the f-number of the human eye involves computing the physical aperture and focal length of the eye. Typically, the pupil can dilate to be as large as 6–7 mm in darkness, which translates into the maximal physical aperture. Some individuals' pupils can dilate to over 9 mm wide. The f-number of the human eye varies from about in a very brightly lit place to about in the dark. Computing the focal length requires that the light-refracting properties of the liquids in the eye be taken into account. Treating the eye as an ordinary air-filled camera and lens results in an incorrect focal length and f-number. Focal ratio in telescopes In astronomy, the f-number is commonly referred to as the focal ratio (or f-ratio) notated as . It is still defined as the focal length of an objective divided by its diameter or by the diameter of an aperture stop in the system: Even though the principles of focal ratio are always the same, the application to which the principle is put can differ. In photography the focal ratio varies the focal-plane illuminance (or optical power per unit area in the image) and is used to control variables such as depth of field. When using an optical telescope in astronomy, there is no depth of field issue, and the brightness of stellar point sources in terms of total optical power (not divided by area) is a function of absolute aperture area only, independent of focal length. The focal length controls the field of view of the instrument and the scale of the image that is presented at the focal plane to an eyepiece, film plate, or CCD. For example, the SOAR 4-meter telescope has a small field of view (about ) which is useful for stellar studies. The LSST 8.4 m telescope, which will cover the entire sky every three days, has a very large field of view. Its short 10.3 m focal length () is made possible by an error correction system which includes secondary and tertiary mirrors, a three element refractive system and active mounting and optics. Camera equation (G#) The camera equation, or G#, is the ratio of the radiance reaching the camera sensor to the irradiance on the focal plane of the camera lens: where is the transmission coefficient of the lens, and the units are in inverse steradians (sr−1). Working f-number The f-number accurately describes the light-gathering ability of a lens only for objects an infinite distance away. This limitation is typically ignored in photography, where f-number is often used regardless of the distance to the object. In optical design, an alternative is often needed for systems where the object is not far from the lens. In these cases the working f-number is used. The working f-number is given by: where is the uncorrected f-number, is the image-space numerical aperture of the lens, is the absolute value of the lens's magnification for an object a particular distance away, and is the pupil magnification. 
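The equation for the working f-number did not survive here, only the list of quantities it involves. A commonly quoted form consistent with those quantities is N_w = (1 + |m|/P)·N, with the pupil magnification P usually taken as 1; this should be read as an assumption rather than as the article's own formula. A minimal sketch with illustrative numbers, anticipating the simplification discussed just below:

```python
def working_f_number(n, magnification, pupil_magnification=1.0):
    """Assumed form: N_w = (1 + |m| / P) * N.
    P (pupil magnification) defaults to 1, which is exact for symmetric lenses."""
    return (1 + abs(magnification) / pupil_magnification) * n

# Illustrative: a 1:1 macro shot set to f/11 behaves like roughly f/22,
# i.e. about two stops darker than the marked aperture.
print(working_f_number(11.0, 1.0))   # 22.0
```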
Since the pupil magnification is seldom known, it is often assumed to be 1, which is the correct value for all symmetric lenses. In photography this means that as one focuses closer, the lens's effective aperture becomes smaller, making the exposure darker. The working f-number is often described in photography as the f-number corrected for lens extensions by a bellows factor. This is of particular importance in macro photography. History The system of f-numbers for specifying relative apertures evolved in the late nineteenth century, in competition with several other systems of aperture notation. Origins of relative aperture In 1867, Sutton and Dawson defined "apertal ratio" as essentially the reciprocal of the modern f-number. In the following quote, an "apertal ratio" of one-twenty-fourth is calculated as the ratio of the 1/4 inch stop diameter to the 6 inch focal length, corresponding to an f/24 stop: In every lens there is, corresponding to a given apertal ratio (that is, the ratio of the diameter of the stop to the focal length), a certain distance of a near object from it, between which and infinity all objects are in equally good focus. For instance, in a single view lens of 6-inch focus, with a 1/4 in. stop (apertal ratio one-twenty-fourth), all objects situated at distances lying between 20 feet from the lens and an infinite distance from it (a fixed star, for instance) are in equally good focus. Twenty feet is therefore called the 'focal range' of the lens when this stop is used. The focal range is consequently the distance of the nearest object, which will be in good focus when the ground glass is adjusted for an extremely distant object. In the same lens, the focal range will depend upon the size of the diaphragm used, while in different lenses having the same apertal ratio the focal ranges will be greater as the focal length of the lens is increased. The terms 'apertal ratio' and 'focal range' have not come into general use, but it is very desirable that they should, in order to prevent ambiguity and circumlocution when treating of the properties of photographic lenses. In 1874, John Henry Dallmeyer called the ratio the "intensity ratio" of a lens: The rapidity of a lens depends upon the relation or ratio of the aperture to the equivalent focus. To ascertain this, divide the equivalent focus by the diameter of the actual working aperture of the lens in question; and note down the quotient as the denominator with 1, or unity, for the numerator. Thus to find the ratio of a lens of 2 inches diameter and 6 inches focus, divide the focus by the aperture, or 6 divided by 2 equals 3; i.e., 1/3 is the intensity ratio. Although he did not yet have access to Ernst Abbe's theory of stops and pupils, which was made widely available by Siegfried Czapski in 1893, Dallmeyer knew that his working aperture was not the same as the physical diameter of the aperture stop: It must be observed, however, that in order to find the real intensity ratio, the diameter of the actual working aperture must be ascertained. This is easily accomplished in the case of single lenses, or for double combination lenses used with the full opening, these merely requiring the application of a pair of compasses or rule; but when double or triple-combination lenses are used, with stops inserted between the combinations, it is somewhat more troublesome; for it is obvious that in this case the diameter of the stop employed is not the measure of the actual pencil of light transmitted by the front combination.
To ascertain this, focus for a distant object, remove the focusing screen and replace it by the collodion slide, having previously inserted a piece of cardboard in place of the prepared plate. Make a small round hole in the centre of the cardboard with a piercer, and now remove to a darkened room; apply a candle close to the hole, and observe the illuminated patch visible upon the front combination; the diameter of this circle, carefully measured, is the actual working aperture of the lens in question for the particular stop employed. This point is further emphasized by Czapski in 1893. According to an English review of his book, in 1894, "The necessity of clearly distinguishing between effective aperture and diameter of physical stop is strongly insisted upon." J. H. Dallmeyer's son, Thomas Rudolphus Dallmeyer, inventor of the telephoto lens, followed the intensity ratio terminology in 1899. Aperture numbering systems At the same time, there were a number of aperture numbering systems designed with the goal of making exposure times vary in direct or inverse proportion with the aperture, rather than with the square of the f-number or inverse square of the apertal ratio or intensity ratio. But these systems all involved some arbitrary constant, as opposed to the simple ratio of focal length and diameter. For example, the Uniform System (U.S.) of apertures was adopted as a standard by the Photographic Society of Great Britain in the 1880s. Bothamley in 1891 said "The stops of all the best makers are now arranged according to this system." U.S. 16 is the same aperture as f/16, but apertures that are larger or smaller by a full stop use doubling or halving of the U.S. number; for example, f/11 is U.S. 8 and f/8 is U.S. 4. The exposure time required is directly proportional to the U.S. number. Eastman Kodak used U.S. stops on many of their cameras at least in the 1920s. By 1895, Hodges contradicts Bothamley, saying that the f-number system has taken over: "This is called the system, and the diaphragms of all modern lenses of good construction are so marked." Piper in 1901 discusses five different systems of aperture marking: the old and new Zeiss systems based on actual intensity (proportional to reciprocal square of the f-number); and the U.S., C.I., and Dallmeyer systems based on exposure (proportional to square of the f-number). He calls the f-number the "ratio number", "aperture ratio number", and "ratio aperture". He calls expressions like f/8 the "fractional diameter" of the aperture, even though it is literally equal to the "absolute diameter" which he distinguishes as a different term. He also sometimes uses expressions like "an aperture of f 8" without the division indicated by the slash. Beck and Andrews in 1902 talk about the Royal Photographic Society standard of f/4, f/5.6, f/8, f/11.3, etc. The R.P.S. had changed their name and moved off of the U.S. system some time between 1895 and 1902.
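Because the Uniform System ties the marked number directly to exposure time while U.S. 16 coincides with f/16, converting between the two scales is a one-line calculation. The short Python sketch below simply applies the doubling-per-stop relation described above; the function name is an assumption made here, not a historical notation.

def us_number(f_number):
    # Exposure time is proportional to the U.S. number, and U.S. 16 is
    # defined to coincide with f/16, so U.S. = N**2 / 16.
    return f_number ** 2 / 16

for n in (4, 5.6, 8, 11.3, 16, 22.6):
    print(f"f/{n}: U.S. {us_number(n):.0f}")   # prints 1, 2, 4, 8, 16, 32

Working in the other direction, a dial marked in U.S. numbers can be converted back to f-numbers by taking N = 4 * sqrt(U.S.).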
The 1961 ASA standard PH2.12-1961 American Standard General-Purpose Photographic Exposure Meters (Photoelectric Type) specifies that "The symbol for relative apertures shall be ƒ/ or ƒ: followed by the effective ƒ-number." They show the hooked italic 'ƒ' not only in the symbol, but also in the term f-number, which today is more commonly set in an ordinary non-italic face. See also Circle of confusion Group f/64 Photographic lens design Pinhole camera Preferred number References External links Large format photography—how to select the f-stop Optical quantities Science of photography Dimensionless numbers of physics Logarithmic scales of measurement
F-number
[ "Physics", "Mathematics" ]
5,387
[ "Optical quantities", "Logarithmic scales of measurement", "Quantity", "Physical quantities" ]
168,258
https://en.wikipedia.org/wiki/Beta%20Tauri
Beta Tauri is the second-brightest star in the constellation of Taurus. It has the official name Elnath; Beta Tauri is the current Bayer designation, which is Latinised from β Tauri and abbreviated Beta Tau or β Tau. The original designation of Gamma Aurigae is now rarely used. It is a chemically peculiar B7 giant star, 134 light years away from the Sun with an apparent magnitude of 1.65. Nomenclature This star has two Bayer designations: β Tauri (Latinised to Beta Tauri) and γ Aurigae (Latinised to Gamma Aurigae). Ptolemy considered the star to be shared by Auriga, and Johann Bayer assigned it a designation in both constellations. When the modern constellation boundaries were fixed in 1930, the designation γ Aurigae largely dropped from use. The traditional name Elnath, variously El Nath or Alnath, comes from the Arabic word النطح an-naţħ, meaning "the butting" (i.e. the bull's horns). As in many other Arabic star names, the article ال is transliterated literally as el, yet overwhelmingly in Arabic pronunciation it is assimilated to the n, meaning it is omitted. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Elnath for this star. In Chinese, the asterism meaning Five Chariots consists of β Tauri, ι Aurigae, Capella, β Aurigae and θ Aurigae. Consequently, the Chinese name for β Tauri itself is the Fifth Star of Five Chariots. Physical properties The absolute magnitude of Beta Tauri is −1.34, similar to another star in Taurus, Maia in the Pleiades star cluster. Like Maia, β Tauri is a B-class giant with a luminosity 700 times that of the Sun. It has evolved to become a giant star, larger and cooler than when it was on the main sequence. However, being approximately 130 light-years distant compared to Maia's estimated 360 light-years, β Tauri ranks as the second-brightest star in the constellation. It is a mercury-manganese star, a type of non-magnetic chemically peculiar star with unusually large signatures of some heavy elements in its spectrum. Relative to the Sun, β Tauri is notable for a high abundance of manganese, but little calcium and magnesium. However, the lack of strong mercury signatures, together with notably high levels of silicon and chromium, have led some authors to give other classifications, including as a "SrCrEu star" or even an Ap star. Its limb-darkened angular diameter has been measured interferometrically; combined with the star's distance, this corresponds to a linear radius a few times that of the Sun. At the southern edge of the narrow plane of the Milky Way Galaxy, a few degrees west of the galactic anticenter, β Tauri appears as a foreground object south of many nebulae and star clusters such as M36, M37, and M38. It is 5.39 degrees north of the ecliptic, still close enough to be occultable by the Moon. Such occultations occur when the Moon's ascending node is near the March equinox, as in 2007. Most are visible only in the Southern Hemisphere, because the star is at the northern edge of the lunar occultation zone; only rarely are they visible as far north as southern California. Companions A faint star is, angularly from our viewpoint, close enough for astronomers to consider, and guides to mention, the pair as a double star. This visual companion, BD+28°795B, has a position angle of 239 degrees and is separated from the main star by 33.4 arcseconds (″).
Six angularly closer, even fainter stars have been found in a search for brown dwarf and planetary companions; all are considered background objects. A very close companion at a small angular separation was reported from lunar occultation measurements, but it has not been confirmed by other observers. Radial velocity measurements indicate that Beta Tauri is a single-lined spectroscopic binary, but there is no published information about the companion or orbit. References External links Jim Kaler's Stars: Elnath B-type giants Mercury-manganese stars Taurus (constellation) Tauri, Beta 1791 BD+28 0795 Tauri, 112 035497 025428 Elnath Auriga
Beta Tauri
[ "Astronomy" ]
958
[ "Auriga", "Taurus (constellation)", "Constellations" ]
168,313
https://en.wikipedia.org/wiki/Pictogram
A pictogram (also pictogramme, pictograph, or simply picto) is a graphical symbol that conveys meaning through its visual resemblance to a physical object. Pictograms are used in systems of writing and visual communication. A pictography is a writing system which uses pictograms. Some pictograms, such as hazard pictograms, may be elements of formal languages. In the field of prehistoric art, the term "pictograph" has a different definition, and specifically refers to art painted on rock surfaces. Pictographs are contrasted with petroglyphs, which are carved or incised. Historical Early written symbols were based on pictograms (pictures which resemble what they signify) and ideograms (symbols which represent ideas). Ancient Sumerian, Egyptian, and Chinese civilizations began to adapt such symbols to represent concepts, developing them into logographic writing systems. Pictograms are still in use as the main medium of written communication in some non-literate cultures in Africa, the Americas, and Oceania. Pictograms are often used as simple, pictorial, representational symbols by most contemporary cultures. Pictograms can be considered an art form, or can be considered a written language and are designated as such in Pre-Columbian art, Native American art, Ancient Mesopotamia and Painting in the Americas before Colonization. One example of many is the Rock art of the Chumash people, part of the Native American history of California. In 2011, UNESCO's World Heritage List added "Petroglyph Complexes of the Mongolian Altai, Mongolia" to celebrate the importance of the pictograms engraved in rocks. Some scientists in the field of neuropsychiatry and neuropsychology, such as Mario Christian Meyer, are studying the symbolic meaning of indigenous pictographs and petroglyphs, aiming to create new ways of communication between native people and modern scientists to safeguard and valorize their cultural diversity. Modern uses An early modern example of the extensive use of pictograms may be seen in the map in the London suburban timetables of the London and North Eastern Railway, 1936–1947, designed by George Dow, in which a variety of pictograms was used to indicate facilities available at or near each station. Pictograms remain in common use today, serving as pictorial, representational signs, instructions, or statistical diagrams. Because of their graphical nature and fairly realistic style, they are widely used to indicate public toilets, or places such as airports and train stations. Because they are a concise way to communicate a concept to people who speak many different languages, pictograms have also been used extensively at the Olympics since the 1964 summer games in Tokyo featured designs by Masaru Katsumi. Later Olympic pictograms have been redesigned for each set of games. Pictographic writing as a modernist poetic technique is credited to Ezra Pound, though French surrealists credit the Pacific Northwest American Indians of Alaska who introduced writing, via totem poles, to North America. Contemporary artist Xu Bing created Book from the Ground, a universal language made up of pictograms collected from around the world. A Book from the Ground chat program has been exhibited in museums and galleries internationally. In mathematics In statistics, pictograms are charts in which icons represent numbers to make it more interesting and easier to understand. A key is often included to indicate what each icon represents. 
All icons must be of the same size, but a fraction of an icon can be used to show the respective fraction of that amount. For example, a table of the number of letters handled on each day of the week can be graphed with a key of one icon = 10 letters; as the values are rounded to the nearest 5 letters, a day such as Tuesday with 15 letters is shown as one full icon followed by the left half of an icon. Standardization Pictograms can often transcend languages in that they can communicate to speakers of a number of tongues and language families equally effectively, even if the languages and cultures are completely different. This is why road signs and similar pictographic material are often applied as global standards expected to be understood by nearly all. A standard set of pictograms was defined in the international standard ISO 7001: Public Information Symbols. Other common sets of pictographs are the laundry symbols used on clothing tags and the chemical hazard symbols as standardized by the GHS system. Pictograms have been popularized in use on the Internet and in software, better known as "icons" displayed on a computer screen in order to help users navigate a computer system or mobile device. See also Bouba/kiki effect Crop art Emoticon Emoji Icon (computing) Ideasthesia Ideogram List of Stone Age art List of symbols Pecked curvilinear nucleated Petroform Petroglyph Rebus Road sign Rock art Rock art of the Chumash people Sound symbolism Stick figure, in art Symbol Traffic sign Warning sign Yakima Indian Painted Rocks Notes References Reed, Ishmael (2003). From Totems to Hip-Hop: A Multicultural Anthology of Poetry Across the Americas, 1900–2002, Ishmael Reed, ed. External links Pictogram & Communication: About 1,500 practical pictograms based on Design principles of pictorial symbols for communication support (JIS T 0103:2005) CAPL: The Culturally Authentic Pictorial Lexicon, photographic illustrations of objects for multiple languages Pictogram Encyclopedia, a collection of more than 500 categorized pictograms Pictopen - Modern Pictographic Writing NounProject - Free Pictograms under open licences Modern Pictograms - Explore word and pictogram relationships Wolfram|Alpha - Number to pictogram translator Infographics Rock art Pre-Columbian art Indigenous art History of communication Proto-writing Statistical charts and diagrams
Pictogram
[ "Mathematics" ]
1,211
[ "Symbols", "Pictograms" ]
168,315
https://en.wikipedia.org/wiki/Laundry%20symbol
A laundry symbol, also called a care symbol, is a pictogram indicating the manufacturer's suggestions as to methods of washing, drying, dry-cleaning and ironing clothing. Such symbols are written on labels, known as care labels or care tags, attached to clothing to indicate how a particular item should best be cleaned. While there are internationally recognized standards for the care labels and pictograms, their exact use and form differ by region. In some standards, pictograms coexist with or are complemented by written instructions. Standards GINETEX, the France-based European association for textile care labelling, was formed in 1963 in part to define international standards for the care and labelling of textiles. By the early 1970s, GINETEX was working with ISO to develop international standards for textile labelling, eventually leading to the ISO 3758 standard, Textiles – Care labelling code using symbols. ISO 3758 was supplemented in 1993, revised in 2005 and again in 2012 and 2023, with reviews of the standard held on a five-year cycle. In March 1970, the Canadian Government Specifications Board published 86-GP-1, Standard for Care Labelling of Textiles, which promoted a symbol-based textile care labelling system in which symbols were colored: green indicated "no precautions are necessary", yellow indicated "some caution is necessary", and red indicated "prohibited". Publication 86-GP-1 was revised several times over the following three decades; the most noteworthy change was in 1979, when temperatures changed from Fahrenheit to Celsius, and any additional instructions were to be added in text, in both English and French. In 2003, the system was withdrawn in favor of a black-and-white symbol-based system harmonized with North American and international standards. The inclusion of care symbols on garments made or sold in Canada has always been voluntary; only fabric content labels are mandatory (since 1972). In 1996, in the United States, ASTM International published a system of pictorial care instructions as D5489 Standard Guide for Care Symbols for Care Instructions on Textile Products, with revisions in 1998, 2001, 2007, 2014, and 2018. The American Cleaning Institute has developed and published its own guide to fabric care symbols. Additional textile care labelling systems have been developed for Australia, China, and Japan. Worldwide, all of these systems tend to use similar pictograms or labelling to convey laundry care instructions. The pictograms are not encoded in Unicode standards, because these symbols are not in the public domain across various countries, and are copyrighted. Pictograms General The care label describes the allowable treatment of the garment without damaging the textile. Whether this treatment is necessary or sufficient is not stated. A milder than specified treatment is always acceptable. The symbols are protected and their use is required to comply with the license conditions; incorrect labelling is prohibited. A bar below each symbol calls for a gentler treatment than usual and a double bar for a very gentle treatment. Washing A stylized washtub is shown, and the number in the tub means the maximum wash temperature (degrees Celsius). A bar under the tub signifies a gentler treatment in the washing machine. A double bar signifies very gentle handling. A hand in the tub signifies that only (gentle) hand washing (not above 40 °C) is allowed. A cross through the washtub means that the textile may not be washed under normal household conditions.
In the North American standard, dots are used to indicate the proper temperature range. In the European standard, the level of wash agitation recommended is indicated by bars below the wash tub symbol. Absence of a bar indicates maximum agitation (cotton wash), a single bar indicates medium agitation (synthetics cycle) and a double bar indicates very minimal agitation (silk/wool cycle). The bar symbols also indicate the level of spin recommended, with more bars indicating a lower preferred spin speed. Bleaching An empty triangle (formerly lettered Cl) allows bleaching with chlorine or non-chlorine bleach. Two oblique lines in the triangle prohibit chlorine bleaching. A crossed triangle prohibits any bleaching. Drying A circle in the square symbolizes a clothes dryer. One dot requires drying at reduced temperature, and two dots allow normal temperature. The crossed symbol means that the clothing does not tolerate machine drying. In the US and Japan, there are other icons for natural/line drying. Tumble drying Natural drying Ironing The iron with up to three dots allows for ironing. Each number of dots is assigned a maximum soleplate temperature: one dot prescribes 110 °C, two dots 150 °C, and three dots 200 °C. An iron with a cross prohibits ironing. Professional cleaning A circle identifies the possibilities of professional cleaning. A bar under the symbol means clean gently, and two bars mean very gentle cleaning. Dry cleaning The letters P and F in a circle are for the different solvents used in professional dry cleaning. Wet cleaning The letter W in a circle is for professional wet cleaning. References External links GINETEX: The International Association for Textile Care Labelling-Care Symbols ISO 3758:2012 — Textiles — Care labelling code using symbols The revised Canadian standard Swedish care symbols United States care symbols US, Japanese, and UK woven washing label symbols Consumer symbols Symbol Pictograms
Laundry symbol
[ "Mathematics" ]
1,080
[ "Symbols", "Pictograms" ]
168,322
https://en.wikipedia.org/wiki/Vitalism
Vitalism is a belief that starts from the premise that "living organisms are fundamentally different from non-living entities because they contain some non-physical element or are governed by different principles than are inanimate things." Where vitalism explicitly invokes a vital principle, that element is often referred to as the "vital spark", "energy", "élan vital" (coined by vitalist Henri Bergson), "vital force", or "vis vitalis", which some equate with the soul. In the 18th and 19th centuries, vitalism was discussed among biologists, between those who felt that the known mechanics of physics would eventually explain the difference between life and non-life and vitalists who argued that the processes of life could not be reduced to a mechanistic process. Vitalist biologists such as Johannes Reinke proposed testable hypotheses meant to show inadequacies with mechanistic explanations, but their experiments failed to provide support for vitalism. Biologists now consider vitalism in this sense to have been refuted by empirical evidence, and hence regard it either as a superseded scientific theory, or as a pseudoscience since the mid-20th century. Vitalism has a long history in medical philosophies: many traditional healing practices posited that disease results from some imbalance in vital forces. History Ancient times The notion that bodily functions are due to a vitalistic principle existing in all living creatures has roots going back at least to ancient Egypt. In Greek philosophy, the Milesian school proposed natural explanations deduced from materialism and mechanism. However, by the time of Lucretius, this account was supplemented, (for example, by the unpredictable clinamen of Epicurus), and in Stoic physics, the pneuma assumed the role of logos. Galen believed the lungs draw pneuma from the air, which the blood communicates throughout the body. Medieval In Europe, medieval physics was influenced by the idea of pneuma, helping to shape later aether theories. Early modern Vitalists included English anatomist Francis Glisson (1597–1677) and the Italian doctor Marcello Malpighi (1628–1694). Caspar Friedrich Wolff (1733–1794) is considered to be the father of epigenesis in embryology, that is, he marks the point when embryonic development began to be described in terms of the proliferation of cells rather than the incarnation of a preformed soul. However, this degree of empirical observation was not matched by a mechanistic philosophy: in his Theoria Generationis (1759), he tried to explain the emergence of the organism by the actions of a vis essentialis (an organizing, formative force). Carl Reichenbach (1788–1869) later developed the theory of Odic force, a form of life-energy that permeates living things. In the 17th century, modern science responded to Newton's action at a distance and the mechanism of Cartesian dualism with vitalist theories: that whereas the chemical transformations undergone by non-living substances are reversible, so-called "organic" matter is permanently altered by chemical transformations (such as cooking). As worded by Charles Birch and John B. Cobb, "the claims of the vitalists came to the fore again" in the 18th century: "Georg Ernst Stahl's followers were active as were others, such as the physician genius Francis Xavier Bichat of the Hotel Dieu." 
However, "Bichat moved from the tendency typical of the French vitalistic tradition to progressively free himself from metaphysics in order to combine with hypotheses and theories which accorded to the scientific criteria of physics and chemistry." John Hunter recognised "a 'living principle' in addition to mechanics." Johann Friedrich Blumenbach was influential in establishing epigenesis in the life sciences in 1781 with his publication of Über den Bildungstrieb und das Zeugungsgeschäfte. Blumenbach cut up freshwater Hydra and established that the removed parts would regenerate. He inferred the presence of a "formative drive" (Bildungstrieb) in living matter. But he pointed out that this name, 19th century Jöns Jakob Berzelius, one of the early 19th century founders of modern chemistry, argued that a regulative force must exist within living matter to maintain its functions. Berzelius contended that compounds could be distinguished by whether they required any organisms in their synthesis (organic compounds) or whether they did not (inorganic compounds). Vitalist chemists predicted that organic materials could not be synthesized from inorganic components, but Friedrich Wöhler synthesised urea from inorganic components in 1828. However, contemporary accounts do not support the common belief that vitalism died when Wöhler made urea. This Wöhler Myth, as historian Peter Ramberg called it, originated from a popular history of chemistry published in 1931, which, "ignoring all pretense of historical accuracy, turned Wöhler into a crusader who made attempt after attempt to synthesize a natural product that would refute vitalism and lift the veil of ignorance, until 'one afternoon the miracle happened. Between 1833 and 1844, Johannes Peter Müller wrote a book on physiology called Handbuch der Physiologie, which became the leading textbook in the field for much of the nineteenth century. The book showed Müller's commitments to vitalism; he questioned why organic matter differs from inorganic, then proceeded to chemical analyses of the blood and lymph. He describes in detail the circulatory, lymphatic, respiratory, digestive, endocrine, nervous, and sensory systems in a wide variety of animals but explains that the presence of a soul makes each organism an indivisible whole. He claimed that the behaviour of light and sound waves showed that living organisms possessed a life-energy for which physical laws could never fully account. Louis Pasteur (1822–1895) after his famous rebuttal of spontaneous generation, performed several experiments that he felt supported vitalism. According to Bechtel, Pasteur "fitted fermentation into a more general programme describing special reactions that only occur in living organisms. These are irreducibly vital phenomena." Rejecting the claims of Berzelius, Liebig, Traube and others that fermentation resulted from chemical agents or catalysts within cells, Pasteur concluded that fermentation was a "vital action". 20th century Hans Driesch (1867–1941) interpreted his experiments as showing that life is not run by physicochemical laws. His main argument was that when one cuts up an embryo after its first division or two, each part grows into a complete adult. Driesch's reputation as an experimental biologist deteriorated as a result of his vitalistic theories, which scientists have seen since his time as pseudoscience. Vitalism is a superseded scientific hypothesis, and the term is sometimes used as a pejorative epithet. 
Ernst Mayr (1904–2005) likewise rejected vitalism as having no place in modern biology. Other vitalists included Johannes Reinke and Oscar Hertwig. Reinke used the word neovitalism to describe his work, claiming that it would eventually be verified through experimentation, and that it was an improvement over the other vitalistic theories. The work of Reinke influenced Carl Jung. John Scott Haldane adopted an anti-mechanist approach to biology and an idealist philosophy early on in his career. Haldane saw his work as a vindication of his belief that teleology was an essential concept in biology. His views became widely known with his first book Mechanism, life and personality in 1913. Haldane borrowed arguments from the vitalists to use against mechanism; however, he was not a vitalist. Haldane treated the organism as fundamental to biology: "we perceive the organism as a self-regulating entity", "every effort to analyze it into components that can be reduced to a mechanical explanation violates this central experience". The work of Haldane was an influence on organicism. Haldane stated that a purely mechanist interpretation could not account for the characteristics of life. Haldane wrote a number of books in which he attempted to show the invalidity of both vitalism and mechanist approaches to science. By 1931, biologists had "almost unanimously abandoned vitalism as an acknowledged belief." Emergentism Contemporary science and engineering sometimes describe emergent processes, in which the properties of a system cannot be fully described in terms of the properties of the constituents. This may be because the properties of the constituents are not fully understood, or because the interactions between the individual constituents are important for the behavior of the system. Whether emergence should be grouped with traditional vitalist concepts is a matter of semantic controversy; the relationship between emergentism and earlier vitalism has been discussed by Emmeche et al. (1997). Mesmerism A popular vitalist theory of the 18th century was "animal magnetism", in the theories of Franz Mesmer (1734–1815). However, the use of the (conventional) English term animal magnetism to translate Mesmer's magnétisme animal can be misleading for three reasons: Mesmer chose his term to clearly distinguish his variant of magnetic force from those referred to, at that time, as mineral magnetism, cosmic magnetism and planetary magnetism. Mesmer felt that this particular force/power only resided in the bodies of humans and animals. Mesmer chose the word "animal," for its root meaning (from Latin animus="breath") specifically to identify his force as a quality that belonged to all creatures with breath; viz., the animate beings: humans and animals. Mesmer's ideas became so influential that King Louis XVI of France appointed two commissions to investigate mesmerism; one was led by Joseph-Ignace Guillotin, the other, led by Benjamin Franklin, included Bailly and Lavoisier. The commissioners learned about Mesmeric theory, and saw its patients fall into fits and trances. In Franklin's garden, a patient was led to each of five trees, one of which had been "mesmerized"; he hugged each in turn to receive the "vital fluid," but fainted at the foot of a 'wrong' one. At Lavoisier's house, four normal cups of water were held before a "sensitive" woman; the fourth produced convulsions, but she calmly swallowed the mesmerized contents of a fifth, believing it to be plain water.
The commissioners concluded that "the fluid without imagination is powerless, whereas imagination without the fluid can produce the effects of the fluid." Medical philosophies Vitalism has a long history in medical philosophies: many traditional healing practices posited that disease results from some imbalance in vital forces. One example of a similar notion in Africa is the Yoruba concept of ase. In the European tradition founded by Hippocrates, these vital forces were associated with the four temperaments and humours. Multiple Asian traditions posited an imbalance or blocking of qi or prana. Amongst unterritorialized traditions such as religions and arts, forms of vitalism continue to exist as philosophical positions or as memorial tenets. Complementary and alternative medicine therapies include energy therapies, associated with vitalism, especially biofield therapies such as therapeutic touch, Reiki, external qi, chakra healing and SHEN therapy. In these therapies, the "subtle energy" field of a patient is manipulated by a practitioner. The subtle energy is held to exist beyond the electromagnetic energy produced by the heart and brain. Beverly Rubik describes the biofield as a "complex, dynamic, extremely weak EM field within and around the human body...." The founder of homeopathy, Samuel Hahnemann, promoted an immaterial, vitalistic view of disease: "...they are solely spirit-like (dynamic) derangements of the spirit-like power (the vital principle) that animates the human body." The view of disease as a dynamic disturbance of the immaterial and dynamic vital force is taught in many homeopathic colleges and constitutes a fundamental principle for many contemporary practising homeopaths. Criticism Vitalism has sometimes been criticized as begging the question by inventing a name. Molière had famously parodied this fallacy in Le Malade imaginaire, where a quack "answers" the question of "Why does opium cause sleep?" with "Because of its dormitive virtue (i.e., soporific power)." Thomas Henry Huxley compared vitalism to stating that water is the way it is because of its "aquosity". His grandson Julian Huxley in 1926 compared "vital force" or élan vital to explaining a railroad locomotive's operation by its élan locomotif ("locomotive force"). Another criticism is that vitalists have failed to rule out mechanistic explanations. This is rather obvious in retrospect for organic chemistry and developmental biology, but the criticism goes back at least a century. In 1912, Jacques Loeb published The Mechanistic Conception of Life, in which he described experiments on how a sea urchin could have a pin for its father, as Bertrand Russell put it (Religion and Science). He offered this challenge: "... we must either succeed in producing living matter artificially, or we must find the reasons why this is impossible." (pp. 5–6) Loeb addressed vitalism more explicitly: "It is, therefore, unwarranted to continue the statement that in addition to the acceleration of oxidations the beginning of individual life is determined by the entrance of a metaphysical "life principle" into the egg; and that death is determined, aside from the cessation of oxidations, by the departure of this "principle" from the body. In the case of the evaporation of water we are satisfied with the explanation given by the kinetic theory of gases and do not demand that to repeat a well-known jest of Huxley the disappearance of the "aquosity" be also taken into consideration." (pp. 
14–15) Bechtel states that vitalism "is often viewed as unfalsifiable, and therefore a pernicious metaphysical doctrine." For many scientists, "vitalist" theories were unsatisfactory "holding positions" on the pathway to mechanistic understanding. In 1967, Francis Crick, the co-discoverer of the structure of DNA, stated "And so to those of you who may be vitalists I would make this prophecy: what everyone believed yesterday, and you believe today, only cranks will believe tomorrow." While many vitalistic theories have in fact been falsified, notably Mesmerism, the pseudoscientific retention of untested and untestable theories continues to this day. Alan Sokal published an analysis of the wide acceptance among professional nurses of "scientific theories" of spiritual healing. (Pseudoscience and Postmodernism: Antagonists or Fellow-Travelers?). Use of a technique called therapeutic touch was especially reviewed by Sokal, who concluded, "nearly all the pseudoscientific systems to be examined in this essay are based philosophically on vitalism" and added that "Mainstream science has rejected vitalism since at least the 1930s, for a plethora of good reasons that have only become stronger with time." Joseph C. Keating, Jr. discusses vitalism's past and present roles in chiropractic and calls vitalism "a form of bio-theology." He further explains that: "Vitalism is that rejected tradition in biology which proposes that life is sustained and explained by an unmeasurable, intelligent force or energy. The supposed effects of vitalism are the manifestations of life itself, which in turn are the basis for inferring the concept in the first place. This circular reasoning offers pseudo-explanation, and may deceive us into believing we have explained some aspect of biology when in fact we have only labeled our ignorance. 'Explaining an unknown (life) with an unknowable (Innate),' suggests chiropractor Joseph Donahue, 'is absurd'." Keating views vitalism as incompatible with scientific thinking: "Chiropractors are not unique in recognizing a tendency and capacity for self-repair and auto-regulation of human physiology. But we surely stick out like a sore thumb among professions which claim to be scientifically based by our unrelenting commitment to vitalism. So long as we propound the 'One cause, one cure' rhetoric of Innate, we should expect to be met by ridicule from the wider health science community. Chiropractors can't have it both ways. Our theories cannot be both dogmatically held vitalistic constructs and be scientific at the same time. The purposiveness, consciousness and rigidity of the Palmers' Innate should be rejected." Keating also mentions Skinner's viewpoint: "Vitalism has many faces and has sprung up in many areas of scientific inquiry. Psychologist B.F. Skinner, for example, pointed out the irrationality of attributing behavior to mental states and traits. Such 'mental way stations,' he argued, amount to excess theoretical baggage which fails to advance cause-and-effect explanations by substituting an unfathomable psychology of 'mind'." According to Williams, "[t]oday, vitalism is one of the ideas that form the basis for many pseudoscientific health systems that claim that illnesses are caused by a disturbance or imbalance of the body's vital force." "Vitalists claim to be scientific, but in fact they reject the scientific method with its basic postulates of cause and effect and of provability. They often regard subjective experience to be more valid than objective material reality." 
Victor Stenger states that the term "bioenergetics" "is applied in biochemistry to refer to the readily measurable exchanges of energy within organisms, and between organisms and the environment, which occur by normal physical and chemical processes. This is not, however, what the new vitalists have in mind. They imagine the bioenergetic field as a holistic living force that goes beyond reductionist physics and chemistry." Such a field is sometimes explained as electromagnetic, though some advocates also make confused appeals to quantum physics. Joanne Stefanatos states that "The principles of energy medicine originate in quantum physics." Stenger offers several explanations as to why this line of reasoning may be misplaced. He explains that energy exists in discrete packets called quanta. Energy fields are composed of their component parts and so only exist when quanta are present. Therefore, energy fields are not holistic, but are rather a system of discrete parts that must obey the laws of physics. This also means that energy fields are not instantaneous. These facts of quantum physics place limitations on the infinite, continuous field that is used by some theorists to describe so-called "human energy fields". Stenger continues, explaining that the effects of EM forces have been measured by physicists as accurately as one part in a billion and there is yet to be any evidence that living organisms emit a unique field. Vitalistic thinking has been identified in the naive biological theories of children: "Recent experimental results show that a majority of preschoolers tend to choose vitalistic explanations as most plausible. Vitalism, together with other forms of intermediate causality, constitute unique causal devices for naive biology as a core domain of thought." See also Egregore Energy (esotericism) Etheric body Georges Canguilhem Henri Bergson Holism in science Homeopathy Hylozoism Irreducible complexity Lebensphilosophie Mind–body dualism Montpellier vitalism Morphic resonance Odic force Orenda Orgone Orthogenesis Qi Ratiovitalism Royal Commission on Animal Magnetism Spirit (animating force) Vis medicatrix naturae Vital materialism Vitality Notes References Sources External links Vitalism at the Skeptic's Dictionary For vital force and vitalism in the Spanish context, see Nicolás Fernández-Medina's Life Embodied: The Promise of Vital Force in Spanish Modernity (McGill-Queen's UP, 2018). History of biology Obsolete scientific theories Pseudoscience
Vitalism
[ "Biology" ]
4,216
[ "Non-Darwinian evolution", "Vitalism", "Biology theories" ]
168,340
https://en.wikipedia.org/wiki/Sewage%20sludge
Sewage sludge is the residual, semi-solid material that is produced as a by-product during sewage treatment of industrial or municipal wastewater. The term "septage" also refers to sludge from simple wastewater treatment but is connected to simple on-site sanitation systems, such as septic tanks. After treatment, and dependent upon the quality of sludge produced (for example with regards to heavy metal content), sewage sludge is most commonly either disposed of in landfills, dumped in the ocean or applied to land for its fertilizing properties, as pioneered by the product Milorganite. The term "Biosolids" is often used as an alternative to the term sewage sludge in the United States, particularly in conjunction with reuse of sewage sludge as fertilizer after sewage sludge treatment. Biosolids can be defined as organic wastewater solids that can be reused after stabilization processes such as anaerobic digestion and composting. Opponents of sewage sludge reuse reject this term as a public relations term. Treatment process Sewage sludge treatment covers the processes used to stabilize and manage the solids removed during wastewater treatment. Sewage sludge is produced from the treatment of wastewater in sewage treatment plants and consists of two basic forms — raw primary sludge and secondary sludge, also known as activated sludge in the case of the activated sludge process. Sewage sludge is usually treated by one or several of the following treatment steps: lime stabilization, thickening, dewatering, drying, anaerobic digestion or composting. Some treatment processes, such as composting and alkaline stabilization, that involve significant amendments may affect contaminant strength and concentration: depending on the process and the contaminant in question, treatment may decrease or in some cases increase the bioavailability and/or solubility of contaminants. Regarding sludge stabilization processes, anaerobic and aerobic digestion seem to be the most commonly used methods in EU-27. When fresh sewage or wastewater enters a primary settling tank, approximately 50% of the suspended solid matter will settle out in an hour and a half. This collection of solids is known as raw sludge or primary solids and is said to be "fresh" before anaerobic processes become active. The sludge will become putrescent in a short time once anaerobic bacteria take over, and must be removed from the sedimentation tank before this happens. This is accomplished in one of two ways. Most commonly, the fresh sludge is continuously extracted from the bottom of a hopper-shaped tank by mechanical scrapers and passed to separate sludge-digestion tanks. In some treatment plants an Imhoff tank is used: sludge settles through a slot into the lower story or digestion chamber, where it is decomposed by anaerobic bacteria, resulting in liquefaction and reduced volume of the sludge. The secondary treatment process also generates a sludge largely composed of bacteria and protozoa with entrained fine solids, and this is removed by settlement in secondary settlement tanks. Both sludge streams are typically combined and are processed by an anaerobic or aerobic treatment process at either elevated or ambient temperatures. After digesting for an extended period, the result is called "digested" sludge and may be disposed of by drying and then landfilling. Following treatment, sewage sludge is either landfilled, dumped in the ocean, incinerated, applied on agricultural land or, in some cases, retailed or given away for free to the general public.
According to a review article published in 2012, sludge reuse (including direct agricultural application and composting) was the predominant choice for sludge management in EU-15 (53% of produced sludge), followed by incineration (21% of produced sludge). On the other hand, the most common disposal method in EU-12 countries was landfilling. Quantities produced The amount of sewage sludge produced is proportional to the amount and concentration of wastewater treated, and it also depends on the type of wastewater treatment process used. It can be expressed as kg dry solids per cubic metre of wastewater treated. The total sludge production from a wastewater treatment process is the sum of sludge from primary settling tanks (if they are part of the process configuration) plus excess sludge from the biological treatment step. For example, primary sedimentation produces about 110–170 kg/ML of so-called primary sludge, with a value of 150 kg/ML regarded as being typical for municipal wastewater in the U.S. or Europe. The sludge production is expressed as kg of dry solids produced per ML of wastewater treated; one megalitre (ML) is 1,000 cubic metres. Of the biological treatment processes, the activated sludge process produces about 70–100 kg/ML of waste activated sludge, and a trickling filter process produces slightly less sludge from the biological part of the process: 60–100 kg/ML. This means that the total sludge production of an activated sludge process that uses primary sedimentation tanks is in the range of 180–270 kg/ML, being the sum of primary sludge and waste activated sludge. United States municipal wastewater treatment plants in 1997 produced about 7.7 million dry tons of sewage sludge, and about 6.8 million dry tons in 1998 according to EPA estimates. As of 2004, about 60% of all sewage sludge was applied to land as a soil amendment and fertilizer for growing crops. In a review article published in 2012, it was reported that a total of about 10.1 million tonnes of dry solids per year was produced in EU-27 countries. As of 2023, the EU produced 2 to 3 million tons of sludge each year. Worldwide, it is estimated that as much as 75 million megagrams (metric tons) of dry sewage sludge is produced per year. Production of sewage sludge can be reduced by conversion from flush toilets to dry toilets such as urine-diverting dry toilets and composting toilets. Disposal Landfill Sewage sludge deposition in landfills can circulate human-virulent species of Cryptosporidium and Giardia pathogens. Sonication and quicklime stabilization are most effective in inactivation of these pathogens; microwave energy disintegration and top-soil stabilization were less effective. A Texas county has launched a first-of-its-kind criminal investigation into waste management giant Synagro over PFAS-contaminated sewage sludge it is selling to Texas farmers as a cheap alternative to fertilizer. As of 2023, 11% of sludge produced in the EU was disposed of in landfills. The EU is attempting to phase out the disposal of sludge in landfills. Ocean dumping It used to be common practice to dump sewage sludge into the ocean; however, this practice has stopped in many nations due to environmental concerns as well as to domestic and international laws and treaties. Ronald Reagan signed the law that prohibited ocean dumping as a means of disposal of sewage sludge in the US in 1988. Incineration Sludge can also be incinerated in sludge incineration plants, which comes with its own set of environmental concerns (air pollution, disposal of the ash).
Pyrolysis of the sludge to create syngas and potentially biochar is possible, as is combustion of biofuel produced from drying sewage sludge or incineration in a waste-to-energy facility for direct production of electricity and steam for district heating or industrial uses. Thermal processes can greatly reduce the volume of the sludge, as well as achieve remediation of all or some of the biological concerns. Direct waste-to-energy incineration and complete combustion systems (such as the Gate 5 Energy System) will require multi-step cleaning of the exhaust gas, to ensure no hazardous substances are released. In addition, the ash produced by incineration or incomplete combustion processes (such as fluidized-bed dryers) may be difficult to use without subsequent treatment due to high heavy metal content; solutions to this include leaching of the ashes to remove heavy metals. Alternatively, in the case of ash produced in a complete-combustion process, or with biochar produced from a pyrolytic process, the heavy metals may be fixed in place and the ash material readily usable as a LEED-preferred additive to concrete or asphalt. Examples of other ways to use dried sewage sludge as an energy resource include the Gate 5 Energy System, a process to power a steam turbine using heat from burning milled and dried sewage sludge, or combining dried sewage sludge with coal in coal-fired power stations. In both cases this allows for production of electricity with lower carbon-dioxide emissions than conventional coal-fired power stations. As of 2023, 27% of sludge produced in the EU was incinerated. Use Land application Biosolids is a term widely used to denote the byproduct of domestic and commercial sewage and wastewater treatment that is to be used in agriculture. National regulations that dictate the practice of land application of treated sewage sludge differ widely; in the US, for example, there are widespread disputes about this practice. Depending on their level of treatment and resultant pollutant content, biosolids can be used in regulated applications for non-food agriculture, food agriculture, or distribution for unlimited use. Treated biosolids can be produced in cake, granular, pellet, or liquid form and are spread over land before being incorporated into the soil or injected directly into the soil by specialist contractors. Such use was pioneered by the production of Milorganite in 1926. Use of sewage sludge has been shown to increase levels of soil-available phosphorus and soil salinity. A 20-year field study of air, land, and water in Arizona concluded that use of biosolids is sustainable and improves the soil and crops. Other studies report that plants take up large quantities of heavy metals and toxic pollutants that are retained by produce, which is then consumed by humans. A PhD thesis studying the addition of sludge to neutralize soil acidity concluded that the practice was not recommended if large amounts are used, because the sludge produces acids when it oxidizes. Studies have indicated that pharmaceuticals and personal care products, which often adsorb to sludge during wastewater treatment, can persist in agricultural soils following biosolid application. Some of these chemicals, including the potential endocrine disruptor triclosan, can also travel through the soil column and leach into agricultural tile drainage at detectable levels.
Other studies, however, have shown that these chemicals remain adsorbed to surface soil particles, making them more susceptible to surface erosion than infiltration. These studies are also mixed in their findings regarding the persistence of chemicals such as triclosan, triclocarban, and other pharmaceuticals. The impact of this persistence in soils is unknown, but the link to human and land animal health is likely tied to the capacity for plants to absorb and accumulate these chemicals in their consumed tissues. Studies of this kind are in early stages, but evidence of root uptake and translocation to leaves did occur for both triclosan and triclocarban in soybeans. This effect was not present in corn when tested in a different study. A cautionary approach to land application of biosolids has been advocated by some for regions where soils have lower capacities for toxics absorption or due to the presence of unknowns in sewage biosolids. In 2007 the Northeast Regional Multi-State Research Committee (NEC 1001) issued conservative guidelines tailored to the soils and conditions typical of the northeastern US. Use of sewage sludge is prohibited for produce to be labeled USDA-certified organic. In 2014 the United States grocery chain Whole Foods banned produce grown in sewage sludge. Treated sewage sludge has been used in the UK, Europe and China agriculturally for more than 80 years, though there is increasing pressure in some countries to stop the practice of land application due to farm land contamination and negative public opinion. In the 1990s, there was pressure in some European countries to ban the use of sewage sludge as a fertilizer. Switzerland, Sweden, Austria, and others introduced a ban. Still, the dominant method for disposal of sewage sludge in the EU is via application to agricultural lands. As of 2023, 40% of sludge produced in the EU was used on agricultural land. Since the 1960s there has been cooperative activity with industry to reduce the inputs of persistent substances from factories. This has been very successful and, for example, the content of cadmium in sewage sludge in major European cities is now only 1% of what it was in 1970. Transformation into products Sewage sludge is an agglomeration of concentrated wastes, and therefore it contains many potentially extractable and useable components. These can include using sludge to produce energy, create carbon-based components, extract phosphorus and nitrogen, or make bricks or other construction materials. Recycling of phosphate is regarded as especially important because the phosphate industry predicts that at the current rate of extraction the economic reserves will be exhausted in 100 or at most 250 years. Phosphate can be recovered with minimal capital expenditure as technology currently exists, but municipalities have little political will to attempt nutrient extraction, instead opting for a "take all the other stuff" mentality. One potential drawback of extracting products from sludge — as opposed to land application — is that only some of the sludge is used and the rest still needs disposal. It can also be very expensive to develop and use appropriate technologies for extracting resources. Contaminants The specific content of sewage sludge is affected by what enters the sewage stream, and how the sewage is treated and processed. As wastewater treatment policies are passed or amended to allow or regulate potential contaminants into the sewage stream, the content of the sewage sludge reflects those changes. 
For example, the EU's Urban Waste Water Treatment Directive shapes the types of contaminants that enter the EU's sewage treatment stream. Pathogens Bacteria in treated sludge products can actually regrow under certain environmental conditions. Pathogens could easily remain undetected in untreated sewage sludge. Pathogens are not a significant health issue if sewage sludge is properly treated and site-specific management practices are followed. Heavy metals One of the main concerns in the treated sludge is the concentrated metals content (lead, arsenic, cadmium, thallium, etc.); certain metals are regulated while others are not. Leaching methods can be used to reduce the metal content and meet the regulatory limit. In 2009, the EPA released the Targeted National Sewage Sludge Study, which reports on the level of metals, chemicals, hormones, and other materials present in a statistical sample of sewage sludges. Some highlights include: Lead, arsenic, chromium, and cadmium are estimated by the EPA to be present in detectable quantities in 100% of national sewage sludges in the US, while thallium is only estimated to be present in 94.1% of sludges. Silver is present to the degree of 20 mg/kg of sludge, on average, while some sludges have up to 200 milligrams of silver per kilogram of sludge; one outlier demonstrated a silver lode of 800–900 mg per kg of sludge. Barium is present at the rate of 500 mg/kg, while manganese is present at the rate of 1 g/kg sludge. Micro-pollutants Micro-pollutants are compounds which are normally found at concentrations up to microgram per liter and milligram per kilogram in the aquatic and terrestrial environment, respectively, and they are considered to be potential threats to environmental ecosystems. They can become concentrated in sewage sludge. Each of these disposal options comes with myriad potential—and in some cases proven—human health and environment impacts. Several organic micro-pollutants such as endocrine disrupting compounds, pharmaceuticals and per-fluorinated compounds have been detected in sewage sludge samples around the world at concentrations ranging up to some hundreds mg/kg of dried sludge. Sterols and other hormones have also been detected. Other hazardous substances Sewage treatment plants receive various forms of hazardous waste from hospitals, nursing homes, industry and households. Low levels of constituents such as PCBs, dioxin, and brominated flame retardants, may remain in treated sludge. There are potentially thousands of other components of sludge that remain untested/undetected disposed of from modern society that also end up in sludge (pharmaceuticals, nano particles, etc.) which have been proven to be hazardous to both human and ecological health. In 2013, in South Carolina PCBs were discovered in very high levels in wastewater sludge. The problem was not discovered until thousands of acres of farm land in South Carolina were discovered to be contaminated by this hazardous material. SCDHEC issued emergency regulatory order banning all PCB laden sewage sludge from being land applied on farm fields or deposited into landfills in South Carolina. Also in 2013, after DHEC request, the city of Charlotte decided to stop land applying sewage sludge in South Carolina while authorities investigated the source of PCB contamination. In February 2014, the city of Charlotte admitted PCBs have entered their sewage treatment centers as well. 
Contaminants of concern in sewage sludge are plasticizers, PDBEs, PFASs ("forever chemicals"), and others generated by human activities, including personal care products and medicines. Synthetic fibers from fabrics persist in treated sewage sludge as well as in biosolids-treated soils and may thus serve as an indicator of past biosolids application. Pollutant ceiling concentration The term "pollutant" is defined as part of the EPA 503 rule. The components of sludge have pollutant limits defined by the EPA. "A Pollutant is an organic substance, an inorganic substance, a combination of organic and inorganic substances, or a pathogenic organism that, after discharge and upon exposure, ingestion, inhalation, or assimilation into an organism either directly from the environment or indirectly by ingestion through the food chain, could, on the basis of information available to the Administrator of EPA, cause death, disease, behavioral abnormalities, cancer, genetic mutations, physiological malfunctions (including malfunction in reproduction), or physical deformations in either organisms or offspring of the organisms." The maximum component pollutant limits by the US EPA are: Health risks In 2011, the EPA commissioned a study at the United States National Research Council (NRC) to determine the health risks of sludge. In this document the NRC pointed out that many of the dangers of sludge are unknown and unassessed. The NRC published "Biosolids Applied to Land: Advancing Standards and Practices" in July 2002. The NRC concluded that while there is no documented scientific evidence that sewage sludge regulations have failed to protect public health, there is persistent uncertainty on possible adverse health effects. The NRC noted that further research is needed and made about 60 recommendations for addressing public health concerns, scientific uncertainties, and data gaps in the science underlying the sewage sludge standards. The EPA responded with a commitment to conduct research addressing the NRC recommendations. Residents living near Class B sludge processing sites may experience asthma or pulmonary distress due to bioaerosols released from sludge fields. A 2004 survey of 48 individuals near affected sites found that most reported irritation symptoms, about half reported an infection within a month of the application, and about a fourth were affected by Staphylococcus aureus, including two deaths. The number of reported S. aureus infections was 25 times as high as in hospitalized patients, a high-risk group. The authors point out that regulations call for protective gear when handling Class B biosolids and that similar protections could be considered for residents in nearby areas given the wind conditions. In 2007, a health survey of persons living in close proximity to Class B sludged land was conducted. A sample of 437 people exposed to Class B sludge (living within of sludged land) - and using a control group of 176 people not exposed to sludge (not living within of sludged land) reported the following: Although correlation does not imply causation, such extensive correlations may lead reasonable people to conclude that precaution is necessary in dealing with sludge and sludged farmlands. Harrison and Oakes suggest that, in particular, "until investigations are carried out that answer these questions (...about the safety of Class B sludge...), land application of Class B sludges should be viewed as a practice that subjects neighbors and workers to substantial risk of disease." 
They further suggest that even Class A treated sludge may have chemical contaminants (including heavy metals, such as lead) or endotoxins present, and a precautionary approach may be justified on this basis, though the vast majority of incidents reported by Lewis, et al. have been correlated with exposure to Class B untreated sludge and not Class A treated sludge. A 2005 report by the state of North Carolina concluded that "a surveillance program of humans living near application sites should be developed to determine if there are adverse health effects in humans and animals as a result of biosolids application." Studies of the potential uses of sewage sludge around homes, such as covering lead-contaminated soil in Baltimore, have created debates over whether participants should have been informed about potential risks, when there remains uncertainty about those risks. The chain of sewage sludge to biosolids to fertilizers has resulted in PFASs ("forever chemicals") contamination of farm produce in Maine in 2021 and beef raised in Michigan in 2022. The EPA PFAS Strategic Roadmap initiative, running from 2021 to 2024, will consider the full lifecycle of PFAS including health risks of PFAS in wastewater sludge. Regulation and guidelines European Union The EC encourages the use of sewage sludge in agriculture because it conserves organic matter and completes nutrient cycles. European countries that joined the EU after 2004 favor landfills as a means of disposal for sewage sludge. In 2006, the predicted growth rate was 10 million tons of sewage sludge per year. This increase in the amount of sewage sludge accumulating in the EU can be attributed to the increase in the number of households connected to the sewage system. The EU has directives in place to encourage the use of sewage sludge in agriculture in a way that does not harm soil, humans, or the environment. One guideline the EU has put into place is that sewage sludge should not be added to fruit and vegetable crops that are in season. In Austria, in order to dispose of sewage sludge in a landfill, it must first be treated in a way that reduces its biological reactivity. Sweden no longer allows sewage sludge to be disposed of in landfills. In the EU, regulations regarding sewage sludge disposal differ between member states, because landfill disposal is governed by national regulations rather than EU-wide rules. Sewage Sludge Directive The EU's Sewage Sludge Directive (86/278/EEC) sets out regulations to pursue the dual purpose of promoting the use of sewage sludge as an agricultural fertilizer, while ensuring environmental protections and human health. These rules include sludge treatment requirements, as well as limits on the time and place of sewage sludge applications, depending on the type of food crop. This is intended to protect human health while maintaining the ecological health of the soil and water. The directive explicitly regulates the allowable levels of seven heavy metals (cadmium, copper, nickel, lead, zinc, mercury, and chromium) in soil and sludge, and regulates any application of sewage sludge that would cause levels of these heavy metals in soil to exceed those limits. EU member states are tasked with implementing and enforcing the Directive within their borders, as well as monitoring and reporting on sludge production, treatment, characteristics, and use.
Member states are allowed to set more stringent limits for heavy metals than set out in the Sewage Sludge Directive, and can set limits for other pollutants. As of 2021, more than half of the EU member states had stricter limits for mercury and cadmium than required under the Directive. Member states are also allowed to limit or promote the use of sewage sludge for agriculture as they choose, meaning that some countries prohibit the use of sludge in agriculture, while some use up to 50% of the sludge they generate in agriculture. Spain, France, Italy, and the United Kingdom (while it was still part of the EU) have particularly promoted the use of sludge in agriculture. Each of Austria's federal states has its own regulations for the use of sewage sludge in agriculture, including different limits for heavy metals. For example, Tyrol has banned the use of sludge on agricultural lands, while in Salzburg it is only allowed under certain conditions. Since the Directive's passage, there has been the substantial decrease in heavy metal residues in agricultural soils over time (well below the limits set), though it is not possible to determine what proportion of the decrease is due to the Directive itself, as opposed to other national and EU legislation. The Sewage Sludge Directive has been evaluated several times under EU proposals to build a circular economy through the reduction and reuse of wastes. In 2014, a European Commission evaluation of the Sewage Sludge Directive suggested it was appropriate for its goals, and did not need revision. In 2023, as part of the European Green Deal and Circular Economy Action Plan, the EU re-evaluated the Sewage Sludge Directive, and found that it should be maintained – as the use of sewage sludge as fertilizer aligns with circular economy goals and potentially reduces the EU carbon emissions – but that the potential pollutants and contaminants regulated under the Directive should be reviewed and potentially revised. This evaluation noted that, as of 2023, the original Directive had not been seriously updated since its original passage in 1986, even though in the intervening decades there had been many developments in both environmental policy, expectations, and research, as well as member states' national policies around sewage sludge. The evaluation particularly emphasized concerns about methane emissions, microplastic contamination, and antibiotic resistances. The Sewage Sludge Directive has not yet set limits for other contaminants, such as organic pollutants, pathogens, microplastics, pharmaceutical residues, and personal care product residues. With the identification of these new contaminants in sludge since the Sewage Sludge Directive originally passed, several researchers have suggested that the EU should consider revising the Directive to address their potential risks to health and environment. United States After the 1991 Congressional ban on ocean dumping, the U.S. Environmental Protection Agency (EPA) instituted a policy of digested sludge reuse on agricultural land. The US EPA promulgated regulations – 40 CFR Part 503 – that continued to allow the use of biosolids on land as fertilizers and soil amendments which had been previously allowed under Part 257. The EPA promoted biosolids recycling throughout the 1990s. 
The EPA's Part 503 regulations were developed with input from university, EPA, and USDA researchers from around the country and involved an extensive review of the scientific literature and the largest risk assessment the agency had conducted to that time. The Part 503 regulations became effective in 1993. According to the EPA, biosolids that meet treatment and pollutant content criteria of Part 503.13 "can be safely recycled and applied as fertilizer to sustainably improve and maintain productive soils and stimulate plant growth." However, they can not be disposed of in a sludge only landfill under Part 503.23 because of high chromium levels and boundary restrictions. Under the Obama Administration, the Biosolids Center of Excellence (headquartered in EPA Region 7) was created to monitor and enforce compliance with biosolids regulation. The Center receives and reviews annual reports from the major producers of biosolids. Eight U.S. states oversee their own biosolids programs: Arizona, Michigan, Ohio, Oklahoma, South Dakota, Texas, Utah, and Wisconsin; other states' programs are overseen by the EPA. Classes of sewage sludge in the United States In the United States, two classes of sewage sludge are defined by the amount of pathogens (i.e. bacteria, viruses) remaining in the sludge, and therefore the types of uses allowed by law. Both classes of sludge may still contain radioactive or pharmaceutical wastes. Class A sludge must be treated so that specific pathogens (like Salmonella) are no longer detected. This class of sludge can be used for all land applications, including where the public may come into contact with it (i.e. agricultural land, home use, for public sale). Biosolids that meet Class A pathogen reduction requirements or equivalent treatment by a "Process to Further Reduce Pathogens" (PFRP) have the least restrictions on use. PFRPs include pasteurization, heat drying, thermophilic composting (aerobic digestion, most common method), and beta or gamma ray irradiation. Class B sludge also requires treatment to reduce pathogens, but pathogens are still detectable in the sludge (such as some parasitic worm eggs). This class of sludge has much stricter restrictions on its use. Biosolids that meet the Class B pathogen treatment and pollutant criteria, in accordance with the EPA "Standards for the use or disposal of sewage sludge" (40 CFR Part 503), can be land applied with formal site restrictions and strict record keeping. Evaluation of the U.S. sewage sludge program The EPA Office of the Inspector General (OIG) completed two assessments in 2000 and 2002 of the EPA sewage sludge program. The follow-up report in 2002 documented that "the EPA cannot assure the public that current land application practices are protective of human health and the environment." The report also documented that there had been an almost 100% reduction in EPA enforcement resources since the earlier assessment. This is probably the greatest issue with the practice: under both the federal program operated by the EPA and those of the several states, there is limited inspection and oversight by agencies charged with regulating these practices. To some degree, this lack of oversight is a function of the perceived (by the regulatory agencies) benign nature of the practice. However, a greater underlying issue is funding. Few states and the US EPA have the discretionary funds necessary to establish and implement a full enforcement program for biosolids. 
As detailed in the 1995 Plain English Guide to the Part 503 Risk Assessment, the EPA's most comprehensive risk assessment was completed for biosolids. Court cases in the United States In 2009, James Rosendall of Grand Rapids, MI was sentenced by United States District Judge Avern Cohn to 11 months in prison followed by three years of supervised release for conspiring to commit bribery. Rosendall was the former president of Synagro of Michigan, a subsidiary of Synagro Technologies. His duties included obtaining the approval of the City of Detroit to process and dispose of the city's wastewater. In 2011, Travis County Commissioners declared that Synagro's solid waste disposal activities would be an inappropriate and prohibited land use under the county's already established ordinances. A battle between the home rule of local government and state/commerce rights has been waged between Kern County, California, and the city of Los Angeles. Kern County voters passed the "Keep Kern Clean" ballot initiative, an ordinance banning sludge from being applied in Kern County. Los Angeles sued and, after protracted litigation, won the case in 2016. In 2012, two families won a $225,000 tort lawsuit against a sludge company that contaminated their properties. In the 2013 Pennsylvania case Gilbert vs. Synagro, a judge barred a nuisance, negligence and trespass lawsuit under Pennsylvania's Right to Farm Act. History of sewage sludge disposal in New York City Since 1884, when sewage was first treated, the amount of sludge has increased along with population and more advanced treatment technology (secondary treatment in addition to primary treatment). In the case of New York City, at first the sludge was discharged directly along the banks of rivers surrounding the city, then later piped further into the rivers, and then further still out into the harbor. In 1924, to relieve a dismal condition in New York Harbor, New York City began dumping sludge at sea at a location in the New York Bight called the 12-Mile Site. This was deemed a successful public health measure and not until the late 1960s was there any examination of its consequences to marine life or to humans. There was accumulation of sludge particles on the seafloor and consequent changes in the numbers and types of benthic organisms. In 1970 a large area around the site was closed to shellfishing. From then until 1986, the practice of dumping at the 12-Mile Site came under increasing pressure stemming from a series of untoward environmental crises in the New York Bight that were attributed partly to sludge dumping. In 1986, sludge dumping was moved still further seaward to a site over the deep ocean called the 106-Mile Site. Then, again in response to political pressure arising from events unrelated to ocean dumping, the practice ended entirely in 1992. Since 1992, New York City sludge has been applied to land (outside of New York state). The wider question is whether or not changes on the sea floor caused by the portion of sludge that settles are severe enough to justify the added operational cost and human health concerns of applying sludge to land. See also Milorganite References Further reading "Biosolids Applied to Land: Advancing Standards and Practices", National Research Council, July 2002 Biogas substrates Sewerage Sanitation
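Where land application is governed by numeric pollutant ceilings of the kind set under 40 CFR Part 503, a laboratory report for a sludge batch is typically screened against those limits metal by metal. The short Python sketch below illustrates such a screen in the most general terms; the limit and measurement figures are placeholders chosen for illustration only, not the regulatory Table 1 values, which must be taken from the rule itself.

```python
# Illustrative compliance screen for dry-weight metal concentrations (mg/kg).
# The ceiling values below are placeholders, NOT the actual 40 CFR Part 503 limits.
CEILING_MG_PER_KG = {
    "arsenic": 75, "cadmium": 85, "lead": 840, "mercury": 57, "nickel": 420,
}

def exceedances(measured_mg_per_kg, limits=CEILING_MG_PER_KG):
    """Return the metals whose measured concentration is above the ceiling."""
    return {metal: conc for metal, conc in measured_mg_per_kg.items()
            if metal in limits and conc > limits[metal]}

# hypothetical laboratory results for one sludge batch
batch = {"arsenic": 10, "cadmium": 3, "lead": 120, "mercury": 1, "nickel": 30}
print(exceedances(batch))   # an empty dict means no ceiling is exceeded
```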
Sewage sludge
[ "Chemistry", "Engineering", "Environmental_science" ]
6,983
[ "Sewerage", "Environmental engineering", "Water pollution" ]
168,345
https://en.wikipedia.org/wiki/Imhoff%20tank
The Imhoff tank, named for German engineer Karl Imhoff (1876–1965), is a chamber suitable for the reception and processing of sewage. It may be used for the clarification of sewage by simple settling and sedimentation, along with anaerobic digestion of the settled sludge. It consists of an upper chamber in which sedimentation takes place, from which settled solids slide down on the inclined bottom slopes towards a lower chamber in which the sludge accumulates. The two chambers are otherwise unconnected, with the more liquid sewage flowing only through the upper sedimentation chamber and only a slow flow of sludge in the lower digestion chamber. The lower chamber requires separate biogas vents and pipes for the removal of digested sludge, typically after 6–9 months of digestion. The Imhoff tank is in effect a two-story septic tank and retains the septic tank's simplicity while eliminating many of its drawbacks, which largely result from the mixing of fresh sewage and septic sludge in the same chamber. Typically, well-designed and operated Imhoff tanks are expected to remove suspended solids with an efficiency between 50-70%. Effluent coming out from Imhoff tanks can be either discharged in the environment, sent to a centralized wastewater treatment facility, or sent to constructed wetlands for disinfection and nutrient removal. As a result of anaerobic digestion of settled sludge, methane, carbon dioxide, hydrogen and hydrogen sulphide are typically formed. While in the past this gas mix used to be exploited for energy production given the relatively high methane content, nowadays gas from Imhoff tanks is typically vented out in the environment. This wastes the energy potential recovery of the technology and increases its carbon footprint, given the high content of methane which has a global warming potential about 25 times larger than the one of carbon dioxide. Imhoff tanks are being superseded in sewage treatment by plain sedimentation tanks using mechanical methods for continuously collecting the sludge, which is moved to separate digestion tanks. This arrangement permits both improved sedimentation results and better temperature control in the digestion process, leading to a more rapid and complete digestion of the sludge. A test for settleable solids in water, wastewater and stormwater uses an Imhoff cone, with or without stopcock. The volume of solids is measured after a specified time period at the bottom of a one-liter cone using graduated markings. See also Anaerobic digester types List of waste water treatment technologies References External links Imhoff Tank implications Imhoff OM guide, Texas Anaerobic digester types Sewerage infrastructure
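A rough sense of the figures above, the 50–70% suspended-solids removal and the roughly 25-fold global-warming potential of vented methane, can be conveyed with a small back-of-the-envelope calculation. In the Python sketch below the flow, influent solids concentration, and methane release are assumed values chosen only for illustration.

```python
# Back-of-the-envelope mass balance for a small Imhoff tank (illustrative figures).
influent_tss_mg_per_l = 300      # assumed raw-sewage suspended solids
flow_m3_per_day = 50             # assumed flow for a small community
removal_efficiency = 0.60        # mid-range of the 50-70% cited above

# mg/L x m3 gives grams per day; divide by 1000 to convert to kilograms
tss_removed_kg_per_day = influent_tss_mg_per_l * flow_m3_per_day * removal_efficiency / 1000
effluent_tss_mg_per_l = influent_tss_mg_per_l * (1 - removal_efficiency)

# CO2-equivalent of vented methane, using the ~25x warming potential cited above
methane_vented_kg_per_day = 0.5  # assumed; depends on sludge load and temperature
co2_equivalent_kg_per_day = methane_vented_kg_per_day * 25

print(effluent_tss_mg_per_l, tss_removed_kg_per_day, co2_equivalent_kg_per_day)
# e.g. 120 mg/L effluent, 9 kg of solids removed and 12.5 kg CO2-eq vented per day
```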
Imhoff tank
[ "Chemistry" ]
533
[ "Water treatment", "Sewerage infrastructure" ]
168,369
https://en.wikipedia.org/wiki/Membrane%20protein
Membrane proteins are common proteins that are part of, or interact with, biological membranes. Membrane proteins fall into several broad categories depending on their location. Integral membrane proteins are a permanent part of a cell membrane and can either penetrate the membrane (transmembrane) or associate with one or the other side of a membrane (integral monotopic). Peripheral membrane proteins are transiently associated with the cell membrane. Membrane proteins are common, and medically important—about a third of all human proteins are membrane proteins, and these are targets for more than half of all drugs. Nonetheless, compared to other classes of proteins, determining membrane protein structures remains a challenge in large part due to the difficulty in establishing experimental conditions that can preserve the correct (native) conformation of the protein in isolation from its native environment. Function Membrane proteins perform a variety of functions vital to the survival of organisms: Membrane receptor proteins relay signals between the cell's internal and external environments. Transport proteins move molecules and ions across the membrane. They can be categorized according to the Transporter Classification database. Membrane enzymes may have many activities, such as oxidoreductase, transferase or hydrolase. Cell adhesion molecules allow cells to identify each other and interact. For example, proteins involved in immune response The localization of proteins in membranes can be predicted reliably using hydrophobicity analyses of protein sequences, i.e. the localization of hydrophobic amino acid sequences. Integral membrane proteins Integral membrane proteins are permanently attached to the membrane. Such proteins can be separated from the biological membranes only using detergents, nonpolar solvents, or sometimes denaturing agents. They can be classified according to their relationship with the bilayer: Integral polytopic proteins are transmembrane proteins that span across the membrane more than once. These proteins may have different transmembrane topology. These proteins have one of two structural architectures: Helix bundle proteins, which are present in all types of biological membranes; Beta barrel proteins, which are found only in outer membranes of Gram-negative bacteria, and outer membranes of mitochondria and chloroplasts. Bitopic proteins are transmembrane proteins that span across the membrane only once. Transmembrane helices from these proteins have significantly different amino acid distributions to transmembrane helices from polytopic proteins. Integral monotopic proteins are integral membrane proteins that are attached to only one side of the membrane and do not span the whole way across. Peripheral membrane proteins Peripheral membrane proteins are temporarily attached either to the lipid bilayer or to integral proteins by a combination of hydrophobic, electrostatic, and other non-covalent interactions. Peripheral proteins dissociate following treatment with a polar reagent, such as a solution with an elevated pH or high salt concentrations. Integral and peripheral proteins may be post-translationally modified, with added fatty acid, diacylglycerol or prenyl chains, or GPI (glycosylphosphatidylinositol), which may be anchored in the lipid bilayer. Polypeptide toxins Polypeptide toxins and many antibacterial peptides, such as colicins or hemolysins, and certain proteins involved in apoptosis, are sometimes considered a separate category. 
These proteins are water-soluble but can undergo significant conformational changes, form oligomeric complexes and associate irreversibly or reversibly with the lipid bilayer. In genomes Membrane proteins, like soluble globular proteins, fibrous proteins, and disordered proteins, are common. It is estimated that 20–30% of all genes in most genomes encode for membrane proteins. For instance, about 1000 of the ~4200 proteins of E. coli are thought to be membrane proteins, 600 of which have been experimentally verified to be membrane resident. In humans, current thinking suggests that fully 30% of the genome encodes membrane proteins. In disease Membrane proteins are the targets of over 50% of all modern medicinal drugs. Among the human diseases in which membrane proteins have been implicated are heart disease, Alzheimer's and cystic fibrosis. Purification of membrane proteins Although membrane proteins play an important role in all organisms, their purification has historically, and continues to be, a huge challenge for protein scientists. In 2008, 150 unique structures of membrane proteins were available, and by 2019 only 50 human membrane proteins had had their structures elucidated. In contrast, approximately 25% of all proteins are membrane proteins. Their hydrophobic surfaces make structural and especially functional characterization difficult. Detergents can be used to render membrane proteins water-soluble, but these can also alter protein structure and function. Making membrane proteins water-soluble can also be achieved through engineering the protein sequence, replacing selected hydrophobic amino acids with hydrophilic ones, taking great care to maintain secondary structure while revising overall charge. Affinity chromatography is one of the best solutions for purification of membrane proteins. The polyhistidine-tag is a commonly used tag for membrane protein purification, and the alternative rho1D4 tag has also been successfully used. See also References Further reading External links Organizations Membrane Protein Structural Dynamics Consortium Experts for Membrane Protein Research and Purification Membrane protein databases TCDB - Transporter Classification database, a comprehensive classification of transmembrane transporter proteins Orientations of Proteins in Membranes (OPM) database - 3D structures of integral and peripheral membrane proteins arranged in the lipid bilayer Protein Data Bank of Transmembrane Proteins - 3D models of transmembrane proteins approximately arranged in the lipid bilayer. TransportDB - Genomics-oriented database of transporters from TIGR Membrane PDB - Database of 3D structures of integral membrane proteins and hydrophobic peptides with an emphasis on crystallization conditions Mpstruc database - A curated list of selected transmembrane proteins from the Protein Data Bank MemProtMD - a database of membrane protein structures simulated by coarse-grained molecular dynamics Membranome database provides information about bitopic proteins from several model organisms
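The hydrophobicity analysis mentioned under Function is, at its simplest, a sliding-window average of per-residue hydropathy values such as the Kyte–Doolittle scale. The Python sketch below illustrates the idea; the 19-residue window and the 1.6 cutoff are common rules of thumb rather than values prescribed by any particular prediction tool, and the toy sequence is invented for the example.

```python
# Kyte-Doolittle hydropathy values as commonly tabulated (more positive = more hydrophobic)
KD = {
    'I': 4.5, 'V': 4.2, 'L': 3.8, 'F': 2.8, 'C': 2.5, 'M': 1.9, 'A': 1.8,
    'G': -0.4, 'T': -0.7, 'S': -0.8, 'W': -0.9, 'Y': -1.3, 'P': -1.6,
    'H': -3.2, 'E': -3.5, 'Q': -3.5, 'D': -3.5, 'N': -3.5, 'K': -3.9, 'R': -4.5,
}

def hydropathy_windows(seq, window=19):
    """Mean hydropathy of every window of the given length along the sequence."""
    return [sum(KD[aa] for aa in seq[i:i + window]) / window
            for i in range(len(seq) - window + 1)]

def candidate_tm_starts(seq, window=19, threshold=1.6):
    """Start positions of windows hydrophobic enough to suggest a transmembrane helix."""
    return [i for i, score in enumerate(hydropathy_windows(seq, window))
            if score > threshold]

# toy sequence: a hydrophobic stretch flanked by polar residues
toy = "MKKSTDNSAE" + "LLVALAIVFLLGVLLLVAA" + "KKDDSENQTR"
print(candidate_tm_starts(toy))   # flags windows centred on the hydrophobic stretch
```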
Membrane protein
[ "Biology" ]
1,249
[ "Protein classification", "Membrane proteins" ]
168,387
https://en.wikipedia.org/wiki/Business%20intelligence
Business intelligence (BI) consists of strategies, methodologies, and technologies used by enterprises for data analysis and management of business information. Common functions of BI technologies include reporting, online analytical processing, analytics, dashboard development, data mining, process mining, complex event processing, business performance management, benchmarking, text mining, predictive analytics, and prescriptive analytics. BI tools can handle large amounts of structured and sometimes unstructured data to help organizations identify, develop, and otherwise create new strategic business opportunities. They aim to allow for the easy interpretation of these big data. Identifying new opportunities and implementing an effective strategy based on insights is assumed to potentially provide businesses with a competitive market advantage and long-term stability, and help them take strategic decisions. Business intelligence can be used by enterprises to support a wide range of business decisions ranging from operational to strategic. Basic operating decisions include product positioning or pricing. Strategic business decisions involve priorities, goals, and directions at the broadest level. In all cases, BI is believed to be most effective when it combines data derived from the market in which a company operates (external data) with data from company sources internal to the business such as financial and operations data (internal data). When combined, external and internal data can provide a complete picture which, in effect, creates an "intelligence" that cannot be derived from any singular set of data. Among their many uses, business intelligence tools empower organizations to gain insight into new markets, to assess demand and suitability of products and services for different market segments, and to gauge the impact of marketing efforts (Chugh, R. & Grandhi, S. (2013). "Why Business Intelligence? Significance of Business Intelligence tools and integrating BI governance with corporate governance", International Journal of E-Entrepreneurship and Innovation, vol. 4, no. 2, pp. 1–14). BI applications use data gathered from a data warehouse (DW) or from a data mart, and the concepts of BI and DW combine as "BI/DW" or as "BIDW". A data warehouse contains a copy of analytical data that facilitates decision support. History The earliest known use of the term business intelligence is in Richard Millar Devens' Cyclopædia of Commercial and Business Anecdotes (1865). Devens used the term to describe how the banker Sir Henry Furnese gained profit by receiving and acting upon information about his environment, prior to his competitors: The ability to collect and react accordingly based on the information retrieved, Devens says, is central to business intelligence. When Hans Peter Luhn, a researcher at IBM, used the term business intelligence in an article published in 1958, he employed the Webster's Dictionary definition of intelligence: "the ability to apprehend the interrelationships of presented facts in such a way as to guide action towards a desired goal." In 1989, Howard Dresner (later a Gartner analyst) proposed business intelligence as an umbrella term to describe "concepts and methods to improve business decision making by using fact-based support systems."
It was not until the late 1990s that this usage was widespread. Definition According to Solomon Negash and Paul Gray, business intelligence (BI) can be defined as systems that combine: Data gathering Data storage Knowledge management with analysis to evaluate complex corporate and competitive information for presentation to planners and decision makers, with the objective of improving the timeliness and the quality of the input to the decision process." According to Forrester Research, business intelligence is "a set of methodologies, processes, architectures, and technologies that transform raw data into meaningful and useful information used to enable more effective strategic, tactical, and operational insights and decision-making." Under this definition, business intelligence encompasses information management (data integration, data quality, data warehousing, master-data management, text- and content-analytics, et al.). Therefore, Forrester refers to data preparation and data usage as two separate but closely linked segments of the business-intelligence architectural stack. Some elements of business intelligence are: Multidimensional aggregation and allocation Denormalization, tagging, and standardization Realtime reporting with analytical alert A method of interfacing with unstructured data sources Group consolidation, budgeting, and rolling forecasts Statistical inference and probabilistic simulation Key performance indicators optimization Version control and process management Open item management Forrester distinguishes this from the business-intelligence market'', which is "just the top layers of the BI architectural stack, such as reporting, analytics, and dashboards." Compared with competitive intelligence Though the term business intelligence is sometimes a synonym for competitive intelligence (because they both support decision making), BI uses technologies, processes, and applications to analyze mostly internal, structured data and business processes while competitive intelligence gathers, analyzes, and disseminates information with a topical focus on company competitors. If understood broadly, competitive intelligence can be considered as a subset of business intelligence. Compared with business analytics Business intelligence and business analytics are sometimes used interchangeably, but there are alternate definitions. Thomas Davenport, professor of information technology and management at Babson College argues that business intelligence should be divided into querying, reporting, Online analytical processing (OLAP), an "alerts" tool, and business analytics. In this definition, business analytics is the subset of BI focusing on statistics, prediction, and optimization, rather than the reporting functionality. Unstructured data Business operations can generate a very large amount of data in the form of e-mails, memos, notes from call-centers, news, user groups, chats, reports, web-pages, presentations, image-files, video-files, and marketing material. According to Merrill Lynch, more than 85% of all business information exists in these forms; a company might only use such a document a single time. Because of the way it is produced and stored, this information is either unstructured or semi-structured. The management of semi-structured data is an unsolved problem in the information technology industry. According to projections from Gartner (2003), white-collar workers spend 30–40% of their time searching, finding, and assessing unstructured data. 
BI uses both structured and unstructured data. The former is easy to search, and the latter contains a large quantity of the information needed for analysis and decision-making. Because of the difficulty of properly searching, finding, and assessing unstructured or semi-structured data, organizations may not draw upon these vast reservoirs of information, which could influence a particular decision, task, or project. This can ultimately lead to poorly informed decision-making. Therefore, when designing a business intelligence/DW-solution, the specific problems associated with semi-structured and unstructured data must be accommodated for as well as those for the structured data. Limitations of semi-structured and unstructured data There are several challenges to developing BI with semi-structured data. According to Inmon & Nesavich, some of those are: Physically accessing unstructured textual data – unstructured data is stored in a huge variety of formats. Terminology – Among researchers and analysts, there is a need to develop standardized terminology. Volume of data – As stated earlier, up to 85% of all data exists as semi-structured data. Couple that with the need for word-to-word and semantic analysis. Searchability of unstructured textual data – A simple search on some data, e.g. apple, results in links where there is a reference to that precise search term. (Inmon & Nesavich, 2008) gives an example: "a search is made on the term felony. In a simple search, the term felony is used, and everywhere there is a reference to felony, a hit to an unstructured document is made. But a simple search is crude. It does not find references to crime, arson, murder, embezzlement, vehicular homicide, and such, even though these crimes are types of felonies". Metadata To solve problems with searchability and assessment of data, it is necessary to know something about the content. This can be done by adding context through the use of metadata. Many systems already capture some metadata (e.g. filename, author, size, etc.), but more useful would be metadata about the actual content – e.g. summaries, topics, people, or companies mentioned. Two technologies designed for generating metadata about content are automatic categorization and information extraction. Generative AI Generative business intelligence is the application of generative AI techniques, such as large language models, in business intelligence. This combination facilitates data analysis and enables users to interact with data more intuitively, generating actionable insights through natural language queries. Microsoft Copilot was for example integrated into the business analytics tool Power BI. Applications Business intelligence can be applied to the following business purposes: Performance metrics and benchmarking inform business leaders of progress towards business goals. (Business process management). Analytics quantify processes for a business to arrive at optimal decisions, and to perform business knowledge discovery. Analytics may variously involve data mining, process mining, statistical analysis, predictive analytics, predictive modeling, business process modeling, data lineage, complex event processing, and prescriptive analytics. 
For example within banking industry, academic research has explored potential for BI based analytics in credit evaluation, customer churn management for managerial adoption Reporting, dashboards and data visualization, executive information system, and/or OLAP BI can facilitate collaboration both inside and outside the business by enabling data sharing and electronic data interchange Knowledge management is concerned with the creation, distribution, use, and management of business intelligence, and of business knowledge in general. Knowledge management leads to learning management and regulatory compliance. Roles Some common technical roles for business intelligence developers are: Business analyst Data analyst Data engineer Data scientist Database administrator Financial analyst Risk In a 2013 report, Gartner categorized business intelligence vendors as either an independent "pure-play" vendor or a consolidated "mega-vendor". In 2019, the BI market was shaken within Europe for the new legislation of GDPR (General Data Protection Regulation) which puts the responsibility of data collection and storage onto the data user with strict laws in place to make sure the data is compliant. Growth within Europe has steadily increased since May 2019 when GDPR was brought. The legislation refocused companies to look at their own data from a compliance perspective but also revealed future opportunities using personalization and external BI providers to increase market share. See also Agile Business Intelligence Analytic applications Arcplan Artificial intelligence marketing Business activity monitoring Business Intelligence 2.0 Business Intelligence Competency Center Business intelligence software Business process discovery Business process management Customer dynamics Decision engineering Embedded analytics Enterprise planning systems Integrated business planning Management information system Mobile business intelligence Operational intelligence Process mining Real-time business intelligence Sales intelligence Test and learn References Bibliography . External links Financial data analysis Data management Financial technology Information management
Business intelligence
[ "Technology" ]
2,301
[ "Data management", "Information systems", "Data", "Information management" ]
168,389
https://en.wikipedia.org/wiki/Arithmetic%20progression
An arithmetic progression or arithmetic sequence is a sequence of numbers such that the difference from any succeeding term to its preceding term remains constant throughout the sequence. The constant difference is called the common difference of that arithmetic progression. For instance, the sequence 5, 7, 9, 11, 13, 15, . . . is an arithmetic progression with a common difference of 2. If the initial term of an arithmetic progression is $a_1$ and the common difference of successive members is $d$, then the $n$-th term of the sequence ($a_n$) is given by $a_n = a_1 + (n - 1)d$. A finite portion of an arithmetic progression is called a finite arithmetic progression and sometimes just called an arithmetic progression. The sum of a finite arithmetic progression is called an arithmetic series. History According to an anecdote of uncertain reliability, in primary school Carl Friedrich Gauss reinvented the formula $\tfrac{n(n+1)}{2}$ for summing the integers from 1 through $n$, for the case $n = 100$, by grouping the numbers from both ends of the sequence into pairs summing to 101 and multiplying by the number of pairs. Regardless of the truth of this story, Gauss was not the first to discover this formula. Similar rules were known in antiquity to Archimedes, Hypsicles and Diophantus; in China to Zhang Qiujian; in India to Aryabhata, Brahmagupta and Bhaskara II; and in medieval Europe to Alcuin, Dicuil, Fibonacci, Sacrobosco, and anonymous commentators of Talmud known as Tosafists. Some find it likely that its origin goes back to the Pythagoreans in the 5th century BC. Sum Computation of the sum 2 + 5 + 8 + 11 + 14. When the sequence is reversed and added to itself term by term, the resulting sequence has a single repeated value in it, equal to the sum of the first and last numbers (2 + 14 = 16). Thus 16 × 5 = 80 is twice the sum. The sum of the members of a finite arithmetic progression is called an arithmetic series. For example, consider the sum: $2 + 5 + 8 + 11 + 14 = 40$. This sum can be found quickly by taking the number n of terms being added (here 5), multiplying by the sum of the first and last number in the progression (here 2 + 14 = 16), and dividing by 2: $S_n = \frac{n(a_1 + a_n)}{2}$. In the case above, this gives the equation: $2 + 5 + 8 + 11 + 14 = \frac{5(2 + 14)}{2} = \frac{5 \times 16}{2} = 40$. This formula works for any arithmetic progression of real numbers beginning with $a_1$ and ending with $a_n$. For example, $3 + 7 + 11 = \frac{3(3 + 11)}{2} = 21$. Derivation To derive the above formula, begin by expressing the arithmetic series in two different ways: $S_n = a_1 + (a_1 + d) + (a_1 + 2d) + \cdots + (a_1 + (n-1)d)$. Rewriting the terms in reverse order: $S_n = (a_1 + (n-1)d) + (a_1 + (n-2)d) + \cdots + (a_1 + d) + a_1$. Adding the corresponding terms of both sides of the two equations and halving both sides: $S_n = \frac{n}{2}(a_1 + a_n)$. This formula can be simplified as: $S_n = \frac{n}{2}\left[2a_1 + (n-1)d\right]$. Furthermore, the mean value of the series can be calculated via: $\bar{a} = \frac{S_n}{n} = \frac{a_1 + a_n}{2}$. The formula is essentially the same as the formula for the mean of a discrete uniform distribution, interpreting the arithmetic progression as a set of equally probable outcomes. Product The product of the members of a finite arithmetic progression with an initial element $a_1$, common difference $d$, and $n$ elements in total is determined in a closed expression $a_1 a_2 \cdots a_n = d^n \, \frac{\Gamma\!\left(\frac{a_1}{d} + n\right)}{\Gamma\!\left(\frac{a_1}{d}\right)}$, where $\Gamma$ denotes the Gamma function. The formula is not valid when $a_1/d$ is negative or zero. This is a generalization of the facts that the product of the progression $1 \times 2 \times \cdots \times n$ is given by the factorial $n!$ and that the product $m \times (m+1) \times \cdots \times n$ for positive integers $m$ and $n$ is given by $\frac{n!}{(m-1)!}$. Derivation $a_1 a_2 \cdots a_n = \prod_{k=0}^{n-1}(a_1 + kd) = d^n \prod_{k=0}^{n-1}\left(\frac{a_1}{d} + k\right) = d^n \left(\frac{a_1}{d}\right)^{\overline{n}}$, where $x^{\overline{n}}$ denotes the rising factorial. By the recurrence formula $\Gamma(z+1) = z\,\Gamma(z)$, valid for a complex number $z$ with positive real part, $\Gamma(z+2) = (z+1)\,\Gamma(z+1) = (z+1)\,z\,\Gamma(z)$, $\Gamma(z+3) = (z+2)\,\Gamma(z+2) = (z+2)(z+1)\,z\,\Gamma(z)$, so that $\Gamma(z+m) = \Gamma(z)\prod_{k=0}^{m-1}(z+k)$ for a positive integer $m$ and a positive complex number $z$. 
Thus, if $x = \frac{a_1}{d}$, then $x^{\overline{n}} = \frac{\Gamma(x+n)}{\Gamma(x)}$, and, finally, $a_1 a_2 \cdots a_n = d^n \left(\frac{a_1}{d}\right)^{\overline{n}} = d^n \, \frac{\Gamma\!\left(\frac{a_1}{d} + n\right)}{\Gamma\!\left(\frac{a_1}{d}\right)}$. Examples Example 1 Taking the example of the progression with $a_1 = 3$ and $d = 5$, that is 3, 8, 13, 18, 23, 28, . . ., the product of the terms of the arithmetic progression given by $a_n = 3 + 5(n-1)$ up to the 50th term is $P_{50} = 5^{50} \cdot \frac{\Gamma\!\left(\tfrac{3}{5} + 50\right)}{\Gamma\!\left(\tfrac{3}{5}\right)} \approx 3.78 \times 10^{98}$. Example 2 The product of the first 10 odd numbers is given by $1 \cdot 3 \cdot 5 \cdots 19 = 654{,}729{,}075$. Standard deviation The standard deviation of any arithmetic progression is $\sigma = |d|\,\sqrt{\frac{(n-1)(n+1)}{12}}$, where $n$ is the number of terms in the progression and $d$ is the common difference between terms. The formula is essentially the same as the formula for the standard deviation of a discrete uniform distribution, interpreting the arithmetic progression as a set of equally probable outcomes. Intersections The intersection of any two doubly infinite arithmetic progressions is either empty or another arithmetic progression, which can be found using the Chinese remainder theorem. If each pair of progressions in a family of doubly infinite arithmetic progressions have a non-empty intersection, then there exists a number common to all of them; that is, infinite arithmetic progressions form a Helly family. However, the intersection of infinitely many infinite arithmetic progressions might be a single number rather than itself being an infinite progression. Amount of arithmetic subsets of length k of the set {1,...,n} Let denote the number of arithmetic subsets of length one can make from the set and let be defined as: Then: As an example, if one expects arithmetic subsets and, counting directly, one sees that there are 9; these are See also Geometric progression Harmonic progression Triangular number Arithmetico-geometric sequence Inequality of arithmetic and geometric means Primes in arithmetic progression Linear difference equation Generalized arithmetic progression, a set of integers constructed as an arithmetic progression is, but allowing several possible differences Heronian triangles with sides in arithmetic progression Problems involving arithmetic progressions Utonality Polynomials calculating sums of powers of arithmetic progressions References External links Arithmetic series Articles containing proofs Sequences and series
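The closed forms for the sum, the product, and the standard deviation are easy to check numerically. The Python sketch below compares them against brute-force evaluation; the helper names are illustrative only.

```python
import math

def ap_terms(a1, d, n):
    """First n terms of the arithmetic progression a1, a1 + d, a1 + 2d, ..."""
    return [a1 + k * d for k in range(n)]

def ap_sum(a1, d, n):
    """Closed-form sum: n/2 * (2*a1 + (n - 1)*d)."""
    return n * (2 * a1 + (n - 1) * d) / 2

def ap_product(a1, d, n):
    """Closed-form product d**n * Gamma(a1/d + n) / Gamma(a1/d), valid for a1/d > 0."""
    return d ** n * math.gamma(a1 / d + n) / math.gamma(a1 / d)

def ap_std(d, n):
    """Closed-form standard deviation |d| * sqrt((n - 1)*(n + 1) / 12)."""
    return abs(d) * math.sqrt((n - 1) * (n + 1) / 12)

terms = ap_terms(2, 3, 5)                       # [2, 5, 8, 11, 14]
assert ap_sum(2, 3, 5) == sum(terms) == 40
# product of the first 10 odd numbers: a1 = 1, d = 2, n = 10
assert round(ap_product(1, 2, 10)) == math.prod(ap_terms(1, 2, 10)) == 654_729_075
mean = sum(terms) / len(terms)
brute_std = math.sqrt(sum((t - mean) ** 2 for t in terms) / len(terms))
assert abs(ap_std(3, 5) - brute_std) < 1e-9
```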
Arithmetic progression
[ "Mathematics" ]
1,062
[ "Sequences and series", "Mathematical analysis", "Mathematical structures", "Mathematical objects", "Articles containing proofs" ]
168,393
https://en.wikipedia.org/wiki/Polystyrene
Polystyrene (PS) is a synthetic polymer made from monomers of the aromatic hydrocarbon styrene. Polystyrene can be solid or foamed. General-purpose polystyrene is clear, hard, and brittle. It is an inexpensive resin per unit weight. It is a poor barrier to air and water vapor and has a relatively low melting point. Polystyrene is one of the most widely used plastics, with the scale of its production being several million tonnes per year. Polystyrene is naturally transparent, but can be colored with colorants. Uses include protective packaging (such as packing peanuts and optical disc jewel cases), containers, lids, bottles, trays, tumblers, disposable cutlery, in the making of models, and as an alternative material for phonograph records. As a thermoplastic polymer, polystyrene is in a solid (glassy) state at room temperature but flows if heated above about 100 °C, its glass transition temperature. It becomes rigid again when cooled. This temperature behaviour is exploited for extrusion (as in Styrofoam) and also for molding and vacuum forming, since it can be cast into molds with fine detail. The temperatures behavior can be controlled by photocrosslinking. Under ASTM standards, polystyrene is regarded as not biodegradable. It is accumulating as a form of litter in the outside environment, particularly along shores and waterways, especially in its foam form, and in the Pacific Ocean. History Polystyrene was discovered in 1839 by Eduard Simon, an apothecary from Berlin. From storax, the resin of the Oriental sweetgum tree Liquidambar orientalis, he distilled an oily substance, that he named styrol, now called styrene. Several days later, Simon found that it had thickened into a jelly, now known to have been a polymer, that he dubbed styrol oxide ("Styroloxyd") because he presumed that it had resulted from oxidation (styrene oxide is a distinct compound). By 1845 Jamaican-born chemist John Buddle Blyth and German chemist August Wilhelm von Hofmann showed that the same transformation of styrol took place in the absence of oxygen. They called the product "meta styrol"; analysis showed that it was chemically identical to Simon's Styroloxyd. In 1866 Marcellin Berthelot correctly identified the formation of meta styrol/Styroloxyd from styrol as a polymerisation process. About 80 years later it was realized that heating of styrol starts a chain reaction that produces macromolecules, following the thesis of German organic chemist Hermann Staudinger (1881–1965). This eventually led to the substance receiving its present name, polystyrene. The company I. G. Farben began manufacturing polystyrene in Ludwigshafen, about 1931, hoping it would be a suitable replacement for die-cast zinc in many applications. Success was achieved when they developed a reactor vessel that extruded polystyrene through a heated tube and cutter, producing polystyrene in pellet form. Ray McIntire (1918–1996), a chemical engineer of Dow Chemical, rediscovered a process first patented in early 1930s by Swedish inventor Carl Munters. According to the Science History Institute, "Dow bought the rights to Munters's method and began producing a lightweight, water-resistant, and buoyant material that seemed perfectly suited for building docks and watercraft and for insulating homes, offices, and chicken sheds." In 1944, Styrofoam was patented. Before 1949, chemical engineer Fritz Stastny (1908–1985) developed pre-expanded PS beads by incorporating aliphatic hydrocarbons, such as pentane. 
These beads are the raw material for molding parts or extruding sheets. BASF and Stastny applied for a patent that was issued in 1949. The molding process was demonstrated at the Kunststoff Messe 1952 in Düsseldorf. Products were named Styropor. The crystal structure of isotactic polystyrene was reported by Giulio Natta. In 1954, the Koppers Company in Pittsburgh, Pennsylvania, developed expanded polystyrene (EPS) foam under the trade name Dylite. In 1960, Dart Container, the largest manufacturer of foam cups, shipped their first order. Structure and production In chemical terms, polystyrene is a long chain hydrocarbon wherein alternating carbon centers are attached to phenyl groups (a derivative of benzene). Polystyrene's chemical formula is (C8H8)n; it contains the chemical elements carbon and hydrogen. The material's properties are determined by short-range van der Waals attractions between polymer chains. Since the molecules consist of thousands of atoms, the cumulative attractive force between the molecules is large. When heated (or deformed at a rapid rate, due to a combination of viscoelastic and thermal insulation properties), the chains can adopt new conformations and slide past each other. This intermolecular weakness (versus the high intramolecular strength due to the hydrocarbon backbone) confers flexibility and elasticity. The ability of the system to be readily deformed above its glass transition temperature allows polystyrene (and thermoplastic polymers in general) to be readily softened and molded upon heating. Extruded polystyrene is about as strong as unalloyed aluminium but much more flexible and much less dense (1.05 g/cm3 for polystyrene vs. 2.70 g/cm3 for aluminium). Production Polystyrene is an addition polymer that results when styrene monomers polymerize (interconnect). In the polymerization, the carbon-carbon π bond of the vinyl group is broken and a new carbon-carbon σ bond is formed, attaching the carbon of another styrene monomer to the chain. Since only one kind of monomer is used in its preparation, it is a homopolymer. The newly formed σ bond is stronger than the π bond that was broken, thus it is difficult to depolymerize polystyrene. A few thousand monomers typically make up a chain of polystyrene, giving a molar mass of 100,000–400,000 g/mol. Each carbon of the backbone has tetrahedral geometry, and those carbons that have a phenyl group (benzene ring) attached are stereogenic. If the backbone were to be laid as a flat elongated zig-zag chain, each phenyl group would be tilted forward or backward compared to the plane of the chain. The relative stereochemical relationship of consecutive phenyl groups determines the tacticity, which affects various physical properties of the material. Tacticity In polystyrene, tacticity describes the extent to which the phenyl group is uniformly aligned (arranged at one side) in the polymer chain. Tacticity has a strong effect on the properties of the plastic. Standard polystyrene is atactic. The diastereomer where all of the phenyl groups are on the same side is called isotactic polystyrene, which is not produced commercially. Atactic polystyrene The only commercially important form of polystyrene is atactic, in which the phenyl groups are randomly distributed on both sides of the polymer chain. This random positioning prevents the chains from aligning with sufficient regularity to achieve any crystallinity. The plastic has a glass transition temperature Tg of ≈90 °C.
Polymerization is initiated with free radicals. Syndiotactic polystyrene Ziegler–Natta polymerization can produce an ordered syndiotactic polystyrene with the phenyl groups positioned on alternating sides of the hydrocarbon backbone. This form is highly crystalline with a Tm (melting point) of . Syndiotactic polystyrene resin is currently produced under the trade name XAREC by Idemitsu corporation, who use a metallocene catalyst for the polymerisation reaction. Degradation Polystyrene is relatively chemically inert. While it is waterproof and resistant to breakdown by many acids and bases, it is easily attacked by many organic solvents (e.g. it dissolves quickly when exposed to acetone), chlorinated solvents, and aromatic hydrocarbon solvents. Because of its resilience and inertness, it is used for fabricating many objects of commerce. Like other organic compounds, polystyrene burns to give carbon dioxide and water vapor, in addition to other thermal degradation by-products. Polystyrene, being an aromatic hydrocarbon, typically combusts incompletely as indicated by the sooty flame. The process of depolymerizing polystyrene into its monomer, styrene, is called pyrolysis. This involves using high heat and pressure to break down the chemical bonds between each styrene compound. Pyrolysis usually goes up to 430 °C. The high energy cost of doing this has made commercial recycling of polystyrene back into styrene monomer difficult. Organisms Polystyrene is generally considered to be non-biodegradable. However, certain organisms are able to degrade it, albeit very slowly. In 2015, researchers discovered that mealworms, the larvae form of the darkling beetle Tenebrio molitor, could digest and subsist healthily on a diet of EPS. About 100 mealworms could consume between 34 and 39 milligrams of this white foam in a day. The droppings of mealworm were found to be safe for use as soil for crops. In 2016, it was also reported that superworms (Zophobas morio) may eat expanded polystyrene (EPS). A group of high school students in Ateneo de Manila University found that compared to Tenebrio molitor larvae, Zophobas morio larvae may consume greater amounts of EPS over longer periods of time. In 2022 scientists identified several bacterial genera, including Pseudomonas, Rhodococcus and Corynebacterium, in the gut of superworms that contain encoded enzymes associated with the degradation of polystyrene and the breakdown product styrene. The bacterium Pseudomonas putida is capable of converting styrene oil into the biodegradable plastic PHA. This may someday be of use in the effective disposing of polystyrene foam. It is worthy to note the polystyrene must undergo pyrolysis to turn into styrene oil. Forms produced Polystyrene is commonly injection molded, vacuum formed, or extruded, while expanded polystyrene is either extruded or molded in a special process. Polystyrene copolymers are also produced; these contain one or more other monomers in addition to styrene. In recent years the expanded polystyrene composites with cellulose and starch have also been produced. Polystyrene is used in some polymer-bonded explosives (PBX). Sheet or molded polystyrene Polystyrene (PS) is used for producing disposable plastic cutlery and dinnerware, CD "jewel" cases, smoke detector housings, license plate frames, plastic model assembly kits, and many other objects where a rigid, economical plastic is desired. Production methods include thermoforming (vacuum forming) and injection molding. 
Polystyrene Petri dishes and other laboratory containers such as test tubes and microplates play an important role in biomedical research and science. For these uses, articles are almost always made by injection molding, and often sterilized post-molding, either by irradiation or by treatment with ethylene oxide. Post-mold surface modification, usually with oxygen-rich plasmas, is often done to introduce polar groups. Much of modern biomedical research relies on the use of such products; they, therefore, play a critical role in pharmaceutical research. Thin sheets of polystyrene are used in polystyrene film capacitors as it forms a very stable dielectric, but has largely fallen out of use in favor of polyester. Foams Polystyrene foams are 95–98% air. Polystyrene foams are good thermal insulators and are therefore often used as building insulation materials, such as in insulating concrete forms and structural insulated panel building systems. Grey polystyrene foam, incorporating graphite, has superior insulation properties. Carl Munters and John Gudbrand Tandberg of Sweden received a US patent for polystyrene foam as an insulation product in 1935 (USA patent number 2,023,204). PS foams also exhibit good damping properties, therefore it is used widely in packaging. The trademark Styrofoam by Dow Chemical Company is informally used (mainly US & Canada) for all foamed polystyrene products, although strictly it should only be used for "extruded closed-cell" polystyrene foams made by Dow Chemicals. Foams are also used for non-weight-bearing architectural structures (such as ornamental pillars). Expanded polystyrene (EPS) Expanded polystyrene (EPS) is a rigid and tough, closed-cell foam with a normal density range of 11 to 32 kg/m3. It is usually white and made of pre-expanded polystyrene beads. The manufacturing process for EPS conventionally begins with the creation of small polystyrene beads. Styrene monomers (and potentially other additives) are suspended in water, where they undergo free-radical addition polymerization. The polystyrene beads formed by this mechanism may have an average diameter of around 200 μm. The beads are then permeated with a "blowing agent", a material that enables the beads to be expanded. Pentane is commonly used as the blowing agent. The beads are added to a continuously agitated reactor with the blowing agent, among other additives, and the blowing agent seeps into pores within each bead. The beads are then expanded using steam. EPS is used for food containers, molded sheets for building insulation, and packing material either as solid blocks formed to accommodate the item being protected or as loose-fill "peanuts" cushioning fragile items inside boxes. EPS also has been widely used in automotive and road safety applications such as motorcycle helmets and road barriers on automobile race tracks. A significant portion of all EPS products are manufactured through injection molding. Mold tools tend to be manufactured from steels (which can be hardened and plated), and aluminum alloys. The molds are controlled through a split via a channel system of gates and runners. EPS is colloquially called "styrofoam" in the Anglosphere, an genericization of Dow Chemical's brand of extruded polystyrene. 
EPS in building construction Sheets of EPS are commonly packaged as rigid panels (a common size in Europe is 100 cm x 50 cm; in practice, depending on the intended type of connection and gluing technique, this is usually 99.5 cm x 49.5 cm or 98 cm x 48 cm; 120 cm x 60 cm is less common; different sizes are used in the United States). Common thicknesses are from 10 mm to 500 mm. Many customizations, additives, and thin additional external layers on one or both sides are often added to help with various properties. An example of this is lamination with cement board to form a structural insulated panel. Thermal conductivity is measured according to EN 12667. Typical values range from 0.032 to 0.038 W/(m⋅K) depending on the density of the EPS board. The value of 0.038 W/(m⋅K) was obtained at 15 kg/m3 while the value of 0.032 W/(m⋅K) was obtained at 40 kg/m3 according to the datasheet of K-710 from StyroChem Finland. Adding fillers (graphites, aluminum, or carbons) has recently allowed the thermal conductivity of EPS to reach around 0.030–0.034 W/(m⋅K) (as low as 0.029 W/(m⋅K)); such filled EPS has a grey/black color which distinguishes it from standard EPS. Several EPS producers offer a variety of these increased-thermal-resistance grades of EPS in the UK and EU. Water vapor diffusion resistance (μ) of EPS is around 30–70. ICC-ES (International Code Council Evaluation Service) requires that EPS boards used in building construction meet ASTM C578 requirements. One of these requirements is that the limiting oxygen index of EPS as measured by ASTM D2863 be greater than 24 volume %. Typical EPS has an oxygen index of around 18 volume %; thus, a flame retardant is added to styrene or polystyrene during the formation of EPS. Boards containing a flame retardant, when tested in a tunnel using test method UL 723 or ASTM E84, will have a flame spread index of less than 25 and a smoke-developed index of less than 450. ICC-ES requires the use of a 15-minute thermal barrier when EPS boards are used inside of a building. According to the EPS-IA ICF organization, the typical density of EPS used for insulated concrete forms (expanded polystyrene concrete) is . This is either Type II or Type IX EPS according to ASTM C578. EPS blocks or boards used in building construction are commonly cut using hot wires. Extruded polystyrene (XPS) Extruded polystyrene foam (XPS) consists of closed cells. It offers a smoother surface, higher stiffness, and reduced thermal conductivity. The density range is about 28–34 kg/m3. Extruded polystyrene material is also used in crafts and model building, in particular architectural models. Because of the extrusion manufacturing process, XPS does not require facers to maintain its thermal or physical property performance. Thus, it makes a more uniform substitute for corrugated cardboard. Thermal conductivity varies between 0.029 and 0.039 W/(m·K) depending on bearing strength/density and the average value is ≈0.035 W/(m·K). Water vapor diffusion resistance (μ) of XPS is around 80–250. Commonly extruded polystyrene foam materials include Styrofoam (also known as Blue Board), produced by DuPont, and Depron, a thin insulation sheet also used for model building. Water absorption of polystyrene foams Although they are closed-cell foams, both expanded and extruded polystyrene are not entirely waterproof or vapor proof. 
In expanded polystyrene there are interstitial gaps between the expanded closed-cell pellets that form an open network of channels between the bonded pellets, and this network of gaps can become filled with liquid water. If the water freezes into ice, it expands and can cause polystyrene pellets to break off from the foam. Extruded polystyrene is also permeable by water molecules and can not be considered a vapor barrier. Water-logging commonly occurs over a long period in polystyrene foams that are constantly exposed to high humidity or are continuously immersed in water, such as in hot tub covers, in floating docks, as supplemental flotation under boat seats, and for below-grade exterior building insulation constantly exposed to groundwater. Typically an exterior vapor barrier such as impermeable plastic sheeting or a sprayed-on coating is necessary to prevent saturation. Oriented polystyrene Oriented polystyrene (OPS) is produced by stretching extruded PS film, improving visibility through the material by reducing haziness and increasing stiffness. This is often used in packaging where the manufacturer would like the consumer to see the enclosed product. Some benefits to OPS are that it is less expensive to produce than other clear plastics such as polypropylene (PP), polyethylene terephthalate (PET), and high-impact polystyrene (HIPS), and it is less hazy than HIPS or PP. The main disadvantage of OPS is that it is brittle, and will crack or tear easily. Co-polymers Ordinary (homopolymeric) polystyrene has an excellent property profile with regard to transparency, surface quality and stiffness. Its range of applications is further extended by copolymerization and other modifications (blends e.g. with PC and syndiotactic polystyrene). Several copolymers are used based on styrene: The brittleness of homopolymeric polystyrene is overcome by elastomer-modified styrene-butadiene copolymers. Copolymers of styrene and acrylonitrile (SAN) are more resistant to thermal stress, heat and chemicals than homopolymers and are also transparent. Copolymers called ABS have similar properties and can be used at low temperatures, but they are opaque. Styrene-butadiene copolymers Styrene-butadiene copolymers can be produced with a low butadiene content. Styrene-butadiene copolymers include PS-I and SBC (see below); both copolymers are impact resistant. PS-I is prepared by graft copolymerization, SBC by anionic block copolymerization, which makes it transparent given an appropriate block size. If the styrene-butadiene copolymer has a high butadiene content, styrene-butadiene rubber (SBR) is formed. The impact strength of styrene-butadiene copolymers is based on phase separation: polystyrene and polybutadiene are not soluble in each other (see Flory–Huggins solution theory). Copolymerization creates a boundary layer without complete mixing. The butadiene fractions (the "rubber phase") assemble to form particles embedded in a polystyrene matrix. A decisive factor for the improved impact strength of styrene-butadiene copolymers is their higher absorption capacity for deformation work. Without applied force, the rubber phase initially behaves like a filler. Under tensile stress, crazes (microcracks) are formed, which spread to the rubber particles. The energy of the propagating crack is then transferred to the rubber particles along its path. A large number of cracks gives the originally rigid material a laminated structure. The formation of each lamella contributes to the consumption of energy and thus to an increase in elongation at break. 
Polystyrene homopolymers deform under an applied force until they break. Styrene-butadiene copolymers do not break at this point, but begin to flow, strain-harden, and only break at much higher elongation. With a high proportion of polybutadiene, the effect of the two phases is reversed. Styrene-butadiene rubber behaves like an elastomer but can be processed like a thermoplastic. Impact-resistant polystyrene (PS-I) PS-I (impact resistant polystyrene) consists of a continuous polystyrene matrix and a rubber phase dispersed therein. It is produced by polymerization of styrene in the presence of polybutadiene dissolved in styrene. Polymerization takes place simultaneously in two ways: Graft copolymerization: The growing polystyrene chain reacts with a double bond of the polybutadiene. As a result, several polystyrene chains are attached to one polybutadiene. In the schematic below, S represents the styrene repeat unit and B the butadiene repeat unit. However, the middle block often does not consist of a pure butadiene homopolymer as depicted, but of a styrene-butadiene copolymer: SSSSSSSSSSSSSSSSSSSBBSBBSBSBBBBSBSSBBBSBSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS By using a statistical copolymer at this position, the polymer becomes less susceptible to cross-linking and flows better in the melt. For the production of SBS, styrene is first homopolymerized via anionic polymerization. Typically, an organometallic compound such as butyllithium is used as a catalyst. Butadiene is then added and polymerized, followed by styrene once more. The catalyst remains active during the whole process (which requires the chemicals used to be of high purity). The molecular weight distribution of the polymers is very narrow (polydispersity around 1.05), so the individual chains have very similar lengths. The length of the individual blocks can be adjusted by the ratio of catalyst to monomer. The size of the rubber sections, in turn, depends on the block length. The production of small structures (smaller than the wavelength of light) ensures transparency. In contrast to PS-I, however, the block copolymer does not form any particles but has a lamellar structure. Styrene-butadiene rubber Styrene-butadiene rubber (SBR) is produced like PS-I by graft copolymerization, but with a lower styrene content. Styrene-butadiene rubber thus consists of a rubber matrix with a polystyrene phase dispersed therein. Unlike PS-I and SBC, it is not a thermoplastic, but an elastomer. Within the rubber phase, the polystyrene phase is assembled into domains. This causes physical cross-linking on a microscopic level. When the material is heated above the glass transition point, the domains disintegrate, the cross-linking is temporarily suspended and the material can be processed like a thermoplastic. Acrylonitrile butadiene styrene Acrylonitrile butadiene styrene (ABS) is a material that is stronger than pure polystyrene. Others SMA is a copolymer with maleic anhydride. Styrene can be copolymerized with other monomers; for example, divinylbenzene can be used for cross-linking the polystyrene chains to give the polymer used in solid phase peptide synthesis. Styrene-acrylonitrile resin (SAN) has a greater thermal resistance than pure polystyrene. Environmental issues Production Polystyrene foams are produced using blowing agents that form bubbles and expand the foam. 
In expanded polystyrene, these are usually hydrocarbons such as pentane, which may pose a flammability hazard in manufacturing or storage of newly manufactured material, but have relatively mild environmental impact. Extruded polystyrene is usually made with hydrofluorocarbons (HFC-134a), which have global warming potentials of approximately 1000–1300 times that of carbon dioxide. Packaging, particularly expanded polystyrene, is a contributor of microplastics from both land and maritime activities. Environmental degradation Polystyrene is not biodegradeable but it is susceptible to photo-oxidation. For this reason commercial products contain light stabilizers. Litter Animals do not recognize polystyrene foam as an artificial material and may even mistake it for food. Polystyrene foam blows in the wind and floats on water due to its low specific gravity. It can have serious effects on the health of birds and marine animals that swallow significant quantities. Juvenile rainbow trout exposed to polystyrene fragments show toxic effects in the form of substantial histomorphometrical changes. Reducing Restricting the use of foamed polystyrene takeout food packaging is a priority of many solid waste environmental organisations. Efforts have been made to find alternatives to polystyrene, especially foam in restaurant settings. The original impetus was to eliminate chlorofluorocarbons (CFC), which was a former component of foam. United States In 1987, Berkeley, California, banned CFC food containers. The following year, Suffolk County, New York, became the first U.S. jurisdiction to ban polystyrene in general. However, legal challenges by the Society of the Plastics Industry kept the ban from going into effect until at last it was delayed when the Republican and Conservative parties gained the majority of the county legislature. In the meantime, Berkeley became the first city to ban all foam food containers. As of 2006, about one hundred localities in the United States, including Portland, Oregon, and San Francisco had some sort of ban on polystyrene foam in restaurants. For instance, in 2007 Oakland, California, required restaurants to switch to disposable food containers that would biodegrade if added to food compost. In 2013, San Jose became reportedly the largest city in the country to ban polystyrene foam food containers. Some communities have implemented wide polystyrene bans, such as Freeport, Maine, which did so in 1990. In 1988, the first U.S. ban of general polystyrene foam was enacted in Berkeley, California. On 1 July 2015, New York City became the largest city in the United States to attempt to prohibit the sale, possession, and distribution of single-use polystyrene foam (the initial decision was overturned on appeal). In San Francisco, supervisors approved the toughest ban on "Styrofoam" (EPS) in the US which went into effect 1 January 2017. The city's Department of the Environment can make exceptions for certain uses like shipping medicines at prescribed temperatures. The U.S. Green Restaurant Association does not allow polystyrene foam to be used as part of its certification standard. Several green leaders, including the Dutch Ministry of the Environment, advise people to reduce their environmental harm by using reusable coffee cups. In March 2019, Maryland banned polystyrene foam food containers and became the first state in the country to pass a food container foam ban through the state legislature. 
Maine was the first state to officially get a foam food container ban onto the books. In May 2019, Maryland Governor Hogan allowed the foam ban (House Bill 109) to become law without a signature making Maryland the second state to have a food container foam ban on the books, but is the first one to take effect on 1 July 2020. In September 2020, the New Jersey state legislature voted to ban disposable foam food containers and cups made of polystyrene foam. Outside the United States China banned expanded polystyrene takeout/takeaway containers and tableware around 1999. However, compliance has been a problem and, in 2013, the Chinese plastics industry was lobbying for the ban's repeal. India and Taiwan also banned polystyrene-foam food-service ware before 2007. The government of Zimbabwe, through its Environmental Management Agency (EMA), banned polystyrene containers (popularly called 'kaylite' in the country), under Statutory Instrument 84 of 2012 (Plastic Packaging and Plastic Bottles) (Amendment) Regulations, 2012 (No 1.) The city of Vancouver, Canada, has announced its Zero Waste 2040 plan in 2018. The city will introduce bylaw amendments to prohibit business license holders from serving prepared food in polystyrene foam cups and take-out containers, beginning 1 June 2019. In 2019, the European Union voted to ban expanded polystyrene food packaging and cups, with the law officially going into effect in 2021. Fiji passed the Environmental Management Bill in December 2020. Imports of polystyrene products were banned in January 2021. Recycling In general, polystyrene is not accepted in curbside collection recycling programs and is not separated and recycled where it is accepted. In Germany, polystyrene is collected as a consequence of the packaging law (Verpackungsverordnung) that requires manufacturers to take responsibility for recycling or disposing of any packaging material they sell. Most polystyrene products are currently not recycled due to the lack of incentive to invest in the compactors and logistical systems required. Due to the low density of polystyrene foam, it is not economical to collect. However, if the waste material goes through an initial compaction process, the material changes density from typically 30 kg/m3 to 330 kg/m3 and becomes a recyclable commodity of high value for producers of recycled plastic pellets. Expanded polystyrene scrap can be easily added to products such as EPS insulation sheets and other EPS materials for construction applications; many manufacturers cannot obtain sufficient scrap because of collection issues. When it is not used to make more EPS, foam scrap can be turned into products such as clothes hangers, park benches, flower pots, toys, rulers, stapler bodies, seedling containers, picture frames, and architectural molding from recycled PS. As of 2016, around 100 tonnes of EPS are recycled every month in the UK. Recycled EPS is also used in many metal casting operations. Rastra is made from EPS that is combined with cement to be used as an insulating amendment in the making of concrete foundations and walls. American manufacturers have produced insulating concrete forms made with approximately 80% recycled EPS since 1993. Upcycling A March 2022 joint study by scientists Sewon Oh and Erin Stache at Cornell University in Ithaca, New York found a new processing method of upcycling polystyrene to benzoic acid. The process involved irradiation of polystyrene with iron chloride and acetone under white light and oxygen for 20 hours. 
The scientists also demonstrated a similar scalable commercial process of upcycling polystyrene into valuable small-molecules (like benzoic acid) taking just a few hours. Incineration If polystyrene is properly incinerated at high temperatures (up to 1000 °C) and with plenty of air (14 m3/kg), the chemicals generated are water, carbon dioxide, and possibly small amounts of residual halogen-compounds from flame-retardants. If only incomplete incineration is done, there will also be leftover carbon soot and a complex mixture of volatile compounds. According to the American Chemistry Council, when polystyrene is incinerated in modern facilities, the final volume is 1% of the starting volume; most of the polystyrene is converted into carbon dioxide, water vapor, and heat. Because of the amount of heat released, it is sometimes used as a power source for steam or electricity generation. When polystyrene was burned at temperatures of 800–900 °C (the typical range of a modern incinerator), the products of combustion consisted of "a complex mixture of polycyclic aromatic hydrocarbons (PAHs) from alkyl benzenes to benzoperylene. Over 90 different compounds were identified in combustion effluents from polystyrene." The American National Bureau of Standards Center for Fire Research found 57 chemical by-products released during the combustion of expanded polystyrene (EPS) foam. Safety Health The American Chemistry Council, formerly known as the Chemical Manufacturers' Association, writes: From 1999 to 2002, a comprehensive review of the potential health risks associated with exposure to styrene was conducted by a 12-member international expert panel selected by the Harvard Center for Risk Assessment. The scientists had expertise in toxicology, epidemiology, medicine, risk analysis, pharmacokinetics, and exposure assessment. The Harvard study reported that styrene is naturally present in trace quantities in foods such as strawberries, beef, and spices, and is naturally produced in the processing of foods such as wine and cheese. The study also reviewed all the published data on the quantity of styrene contributing to the diet due to migration of food packaging and disposable food contact articles, and concluded that risk to the general public from exposure to styrene from foods or food-contact applications (such as polystyrene packaging and foodservice containers) was at levels too low to produce adverse effects. Polystyrene is commonly used in containers for food and drinks. The styrene monomer (from which polystyrene is made) is a cancer suspect agent. Styrene is "generally found in such low levels in consumer products that risks aren't substantial". Polystyrene which is used for food contact may not contain more than 1% (0.5% for fatty foods) of styrene by weight. Styrene oligomers in polystyrene containers used for food packaging have been found to migrate into the food. Another Japanese study conducted on wild-type and AhR-null mice found that the styrene trimer, which the authors detected in cooked polystyrene container-packed instant foods, may increase thyroid hormone levels. Whether polystyrene can be microwaved with food is controversial. Some containers may be safely used in a microwave, but only if labeled as such. Some sources suggest that foods containing carotene (vitamin A) or cooking oils must be avoided. Because of the pervasive use of polystyrene, these serious health related issues remain topical. Fire hazards Like other organic compounds, polystyrene is flammable. 
Polystyrene is classified according to DIN4102 as a "B3" product, meaning highly flammable or "Easily Ignited". As a consequence, although it is an efficient insulator at low temperatures, its use is prohibited in any exposed installations in building construction if the material is not flame-retardant. It must be concealed behind drywall, sheet metal, or concrete. Foamed polystyrene plastic materials have been accidentally ignited and caused huge fires and losses of life, for example at the Düsseldorf International Airport and in the Channel Tunnel (where polystyrene was inside a railway carriage that caught fire). See also Styrofoam Foam food container Bioplastic Geofoam Structural insulated panel Polystyrene sulfonate Shrinky Dinks Insulating concrete form Foamcore References Sources Bibliography External links Polystyrene Composition – The University of Southern Mississippi SPI resin identification code – Society of the Plastics Industry Polystyrene: Local Ordinances – Californians Against Waste Take a Closer Look at Today's Polystyrene Packaging (brochure by the industry group American Chemistry Council, arguing that the material is "safe, affordable and environmentally responsible") Insulators Building insulation materials Organic polymers Packaging materials Food packaging Thermoplastics Commodity chemicals Vinyl polymers
Polystyrene
[ "Chemistry" ]
8,076
[ "Organic compounds", "Organic polymers", "Commodity chemicals", "Products of chemical industry" ]
168,394
https://en.wikipedia.org/wiki/Styrene
Styrene is an organic compound with the chemical formula C6H5CH=CH2. Its structure consists of a vinyl group as substituent on benzene. Styrene is a colorless, oily liquid, although aged samples can appear yellowish. The compound evaporates easily and has a sweet smell, although high concentrations have a less pleasant odor. Styrene is the precursor to polystyrene and several copolymers, and is typically made from benzene for this purpose. Approximately 25 million tonnes of styrene were produced in 2010, increasing to around 35 million tonnes by 2018. Natural occurrence Styrene is named after storax balsam (often commercially sold as styrax), the resin of Liquidambar trees of the Altingiaceae plant family. Styrene occurs naturally in small quantities in some plants and foods (cinnamon, coffee beans, balsam trees and peanuts) and is also found in coal tar. History In 1839, the German apothecary Eduard Simon isolated a volatile liquid from the resin (called storax or styrax (Latin)) of the American sweetgum tree (Liquidambar styraciflua). He called the liquid "styrol" (now called styrene). He also noticed that when styrol was exposed to air, light, or heat, it gradually transformed into a hard, rubber-like substance, which he called "styrol oxide". By 1845, the German chemist August Wilhelm von Hofmann and his student John Buddle Blyth had determined styrene's empirical formula: C8H8. They had also determined that Simon's "styrol oxide" – which they renamed "metastyrol" – had the same empirical formula as styrene. Furthermore, they could obtain styrene by dry-distilling "metastyrol". In 1865, the German chemist Emil Erlenmeyer found that styrene could form a dimer, and in 1866 the French chemist Marcelin Berthelot stated that "metastyrol" was a polymer of styrene (i.e. polystyrene). Meanwhile, other chemists had been investigating another component of storax, namely, cinnamic acid. They had found that cinnamic acid could be decarboxylated to form "cinnamene" (or "cinnamol"), which appeared to be styrene. In 1845, French chemist Emil Kopp suggested that the two compounds were identical, and in 1866, Erlenmeyer suggested that both "cinnamol" and styrene might be vinylbenzene. However, the styrene that was obtained from cinnamic acid seemed different from the styrene that was obtained by distilling storax resin: the latter was optically active. Eventually, in 1876, the Dutch chemist van 't Hoff resolved the ambiguity: the optical activity of the styrene that was obtained by distilling storax resin was due to a contaminant. Industrial production From ethylbenzene The vast majority of styrene is produced from ethylbenzene, and almost all ethylbenzene produced worldwide is intended for styrene production. As such, the two production processes are often highly integrated. Ethylbenzene is produced via a Friedel–Crafts reaction between benzene and ethene; originally this used aluminum chloride as a catalyst, but in modern production this has been replaced by zeolites. By dehydrogenation Around 80% of styrene is produced by the dehydrogenation of ethylbenzene. This is achieved using superheated steam (up to 600 °C) over an iron(III) oxide catalyst. The reaction is highly endothermic and reversible, with a typical yield of 88–94%. The crude ethylbenzene/styrene product is then purified by distillation. As the difference in boiling points between the two compounds is only 9 °C at ambient pressure this necessitates the use of a series of distillation columns. 
This is energy intensive and is further complicated by the tendency of styrene to undergo thermally induced polymerisation into polystyrene, requiring the continuous addition of polymerization inhibitor to the system. Via ethylbenzene hydroperoxide Styrene is also co-produced commercially in a process known as POSM (Lyondell Chemical Company) or SM/PO (Shell) for styrene monomer / propylene oxide. In this process, ethylbenzene is treated with oxygen to form the ethylbenzene hydroperoxide. This hydroperoxide is then used to oxidize propylene to propylene oxide, which is also recovered as a co-product. The remaining 1-phenylethanol is dehydrated to give styrene: Other industrial routes Pyrolysis gasoline extraction Extraction from pyrolysis gasoline is performed on a limited scale. From toluene and methanol Styrene can be produced from toluene and methanol, which are cheaper raw materials than those in the conventional process. This process has suffered from low selectivity associated with the competing decomposition of methanol. Exelus Inc. claims to have developed this process with commercially viable selectivities, at 400–425 °C and atmospheric pressure, by forcing these components through a proprietary zeolitic catalyst. It is reported that an approximately 9:1 mixture of styrene and ethylbenzene is obtained, with a total styrene yield of over 60%. From benzene and ethane Another route to styrene involves the reaction of benzene and ethane. This process is being developed by Snamprogetti and Dow. Ethane, along with ethylbenzene, is fed to a dehydrogenation reactor with a catalyst capable of simultaneously producing styrene and ethylene. The dehydrogenation effluent is cooled and separated and the ethylene stream is recycled to the alkylation unit. The process attempts to overcome previous shortcomings in earlier attempts to develop production of styrene from ethane and benzene, such as inefficient recovery of aromatics, production of high levels of heavies and tars, and inefficient separation of hydrogen and ethane. Development of the process is ongoing. Laboratory synthesis A laboratory synthesis of styrene entails the decarboxylation of cinnamic acid: Styrene was first prepared by this method. Polymerization The presence of the vinyl group allows styrene to polymerize. Commercially significant products include polystyrene, acrylonitrile butadiene styrene (ABS), styrene-butadiene (SBR) rubber, styrene-butadiene latex, SIS (styrene-isoprene-styrene), S-EB-S (styrene-ethylene/butylene-styrene), styrene-divinylbenzene (S-DVB), styrene-acrylonitrile resin (SAN), and unsaturated polyesters used in resins and thermosetting compounds. These materials are used in rubber, plastic, insulation, fiberglass, pipes, automobile and boat parts, food containers, and carpet backing. Hazards Autopolymerisation As a liquid or a gas, pure styrene will polymerise spontaneously to polystyrene, without the need of external initiators. This is known as autopolymerisation. At 100 °C it will autopolymerise at a rate of ~2% per hour, and more rapidly than this at higher temperatures. As the autopolymerisation reaction is exothermic it can be self-accelerating, with a real risk of a thermal runaway, potentially leading to an explosion. Examples include the 2019 explosion of the tanker Stolt Groenland, explosions at the Phillips Petroleum Company in 1999 and 2000 and overheating styrene tanks leading to the 2020 Visakhapatnam gas leak, which killed several people. 
The autopolymerisation reaction can only be kept in check by the continuous addition of polymerisation inhibitors. Health effects Styrene is regarded as a "known carcinogen", especially in case of eye contact, but also in case of skin contact, of ingestion and of inhalation, according to several sources. Styrene is largely metabolized into styrene oxide in humans, resulting from oxidation by cytochrome P450. Styrene oxide is considered toxic, mutagenic, and possibly carcinogenic. Styrene oxide is subsequently hydrolyzed in vivo to styrene glycol by the enzyme epoxide hydrolase. The US Environmental Protection Agency (EPA) has described styrene to be "a suspected toxin to the gastrointestinal tract, kidney, and respiratory system, among others". On 10 June 2011, the US National Toxicology Program has described styrene as "reasonably anticipated to be a human carcinogen". However, a STATS author describes a review that was done on scientific literature and concluded that "The available epidemiologic evidence does not support a causal relationship between styrene exposure and any type of human cancer". Despite this claim, work has been done by Danish researchers to investigate the relationship between occupational exposure to styrene and cancer. They concluded, "The findings have to be interpreted with caution, due to the company based exposure assessment, but the possible association between exposures in the reinforced plastics industry, mainly styrene, and degenerative disorders of the nervous system and pancreatic cancer, deserves attention". In 2012, the Danish EPA concluded that the styrene data do not support a cancer concern for styrene. The US EPA does not have a cancer classification for styrene, but it has been the subject of their Integrated Risk Information System (IRIS) program. The National Toxicology Program of the US Department of Health and Human Services has determined that styrene is "reasonably anticipated to be a human carcinogen". Various regulatory bodies refer to styrene, in various contexts, as a possible or potential human carcinogen. The International Agency for Research on Cancer considers styrene to be "probably carcinogenic to humans". The neurotoxic properties of styrene have also been studied and reported effects include effects on vision (although unable to reproduce in a subsequent study) and on hearing functions. Studies on rats have yielded contradictory results, but epidemiologic studies have observed a synergistic interaction with noise in causing hearing difficulties. References External links American Industrial Hygiene Association, The Ear Poisons, The Synergist, November 2018. CDC – Styrene – NIOSH Workplace Safety and Health Topic Safety and Health Topics | Styrene (OSHA) Nordic Expert Group, Occupational Exposure to Chemicals and Hearing Impairment, 2010. Hazardous air pollutants Monomers Vinylbenzenes C2-Benzenes Commodity chemicals Phenyl compounds Chemical hazards IARC Group 2A carcinogens Sweet-smelling chemicals
Styrene
[ "Chemistry", "Materials_science" ]
2,356
[ "Products of chemical industry", "Chemical hazards", "Polymer chemistry", "Monomers", "Commodity chemicals" ]
168,435
https://en.wikipedia.org/wiki/Light%20meter
A light meter (or illuminometer) is a device used to measure the amount of light. In photography, an exposure meter is a light meter coupled to either a digital or analog calculator which displays the correct shutter speed and f-number for optimum exposure, given a certain lighting situation and film speed. Similarly, exposure meters are also used in the fields of cinematography and scenic design, in order to determine the optimum light level for a scene. Light meters also are used in the general field of architectural lighting design to verify proper installation and performance of a building lighting system, and in assessing the light levels for growing plants. If a light meter is giving its indications in luxes, it is called a "luxmeter". Evolution Actinometers The earliest exposure meters were called actinometers (not to be confused with the scientific instrument with the same name), described as early as 1840 but developed in the late 1800s after commercial photographic plates became available with consistent sensitivity. These photographic actinometers used light-sensitive paper; the photographer would measure the time required for the paper to darken to a control value, providing an input to a mechanical calculation of shutter speed and aperture for a given plate number. They were popular between approximately 1890 and 1920. Extinction types The next exposure meters, developed at about the same time but not displacing actinometers in popularity until the 1920s and 1930s, are known as extinction meters, evaluating the correct exposure settings by variable attenuation. One type of extinction meter contained a numbered or lettered row of neutral density filters of increasing density. The photographer would position the meter in front of their subject and note the filter with the greatest density that still allowed incident light to pass through. In another example, sold as Heyde's Aktino-Photometer starting from the early 1900s, the photographer views the scene through an eyepiece and turns the meter to vary the effective density until the scene can no longer be seen. The letter or number corresponding to the filter strength causing the "extinction" of the scene was used as an index into a chart of appropriate aperture and shutter speed combinations for a given film speed. Extinction meters tended to provide inconsistent results because they depended on subjective interpretation and the light sensitivity of the human eye, which can vary from person to person. Photoelectric types Starting in 1932, electronic light meters removed the human element and relied on technologies incorporating (in chronological order) selenium, CdS (1960s), and silicon (semiconductor, 1970s) photodetectors. Most modern light meters use silicon sensors. They indicate the exposure either with a needle galvanometer or on an LCD screen. Selenium light meters use sensors that are photovoltaic: they generate a voltage proportional to light exposure. Selenium sensors generate enough voltage for direct connection to a meter; they need no battery to operate and this made them very convenient in completely mechanical cameras. Selenium sensors however cannot measure low light accurately (ordinary lightbulbs can take them close to their limits) and are altogether unable to measure very low light, such as candlelight, moonlight, starlight etc. CdS light meters use a photoresistor sensor whose electrical resistance decreases proportionately to the intensity of light exposure. 
These require a battery to operate, but are significantly more sensitive to low light, able to detect lighting levels approximately of the lower sensitivity limit of selenium cells. However, CdS sensors fell out of favor due to their slower response and extended sensitivity to red and infrared wavelengths. Semiconductor sensors are also photovoltaic, but the voltage generated is much weaker than selenium cells and semiconductor-based light meters need an amplification circuit and therefore require a power source such as batteries to operate. These are usually named after the materials and filtration used to ensure the spectral response is similar to the human eye or photographic film, such as 'Silicon Blue Cell' (SBC) or ''. Many modern consumer still and video cameras include a built-in meter that measures a scene-wide light level and are able to make an approximate measure of appropriate exposure based on that. Photographers working with controlled lighting and cinematographers use handheld light meters to precisely measure the light falling on various parts of their subjects and use suitable lighting to produce the desired exposure levels. Reflected and incident measurements Exposure meters generally are sorted into reflected-light or incident-light types, depending on the method used to measure the scene. Reflected-light meters measure the light reflected by the scene to be photographed. All in-camera meters are reflected-light meters. Reflected-light meters are calibrated to show the appropriate exposure for "average" scenes. An unusual scene with a preponderance of light colors or specular highlights would have a higher reflectance; a reflected-light meter taking a reading would incorrectly compensate for the difference in reflectance and lead to underexposure. Badly underexposed sunset photos are common exactly because of this effect: the brightness of the setting sun fools the camera's light meter and, unless the in-camera logic or the photographer take care to compensate, the picture will be grossly underexposed and dull. This pitfall (but not in the setting-sun case) is avoided by incident-light meters which measure the amount of light falling on the subject using a diffuser with a flat or (more commonly) hemispherical field of view placed on top of the light sensor. Because the incident-light reading is independent of the subject's reflectance, it is less likely to lead to incorrect exposures for subjects with unusual average reflectance. Taking an incident-light reading requires placing the meter at the subject's position and pointing it in the general direction of the camera, something not always achievable in practice, e.g., in landscape photography where the subject distance approaches infinity. Another way to avoid under- or over-exposure for subjects with unusual reflectance is to use a spot meter: a specialized reflected-light meter that measures light in a very tight cone, typically with a one degree circular angle of view. An experienced photographer can take multiple readings over the shadows, midrange, and highlights of the scene to determine optimal exposure, using systems like the Zone System. Many modern cameras include sophisticated multi-segment metering systems that measure the luminance of different parts of the scene to determine the optimal exposure. 
When using a film whose spectral sensitivity is not a good match to that of the light meter, for example orthochromatic black-and-white or infrared film, the meter may require special filters and re-calibration to match the sensitivity of the film. There are other types of specialized photographic light meters. Flash meters are used in flash photography to verify correct exposure. Color meters are used where high fidelity in color reproduction is required. Densitometers are used in photographic reproduction. Exposure meter calibration In most cases, an incident-light meter will cause a medium tone to be recorded as a medium tone, and a reflected-light meter will cause whatever is metered to be recorded as a medium tone. What constitutes a "medium tone" depends on meter calibration and several other factors, including film processing or digital image conversion. Meter calibration establishes the relationship between subject lighting and recommended camera settings. The calibration of photographic light meters is covered by ISO 2720:1974. Exposure equations For reflected-light meters, camera settings are related to ISO speed and subject luminance by the reflected-light exposure equation N²/t = LS/K, where N is the relative aperture (f-number), t is the exposure time ("shutter speed") in seconds, L is the average scene luminance, S is the ISO arithmetic speed, and K is the reflected-light meter calibration constant. For incident-light meters, camera settings are related to ISO speed and subject illuminance by the incident-light exposure equation N²/t = ES/C, where E is the illuminance and C is the incident-light meter calibration constant. Calibration constants Determination of calibration constants has been largely subjective; ISO 2720:1974 states that the constants K and C shall be chosen by statistical analysis of the results of a large number of tests carried out to determine the acceptability to a large number of observers, of a number of photographs, for which the exposure was known, obtained under various conditions of subject matter and over a range of luminances. In practice, the variation of the calibration constants among manufacturers is considerably less than this statement might imply, and values have changed little since the early 1970s. ISO 2720:1974 recommends a range for K of 10.6 to 13.4 with luminance in cd/m2. Two values for K are in common use: 12.5 (Canon, Nikon, and Sekonic) and 14 (Minolta, Kenko, and Pentax); the difference between the two values is approximately 1/6 EV. The earliest calibration standards were developed for use with wide-angle averaging reflected-light meters (Jones and Condit 1941). Although wide-angle average metering has largely given way to other metering sensitivity patterns (e.g., spot, center-weighted, and multi-segment), the values for K determined for wide-angle averaging meters have remained. The incident-light calibration constant C depends on the type of light receptor. Two receptor types are common: flat (cosine-responding) and hemispherical (cardioid-responding). With a flat receptor, ISO 2720:1974 recommends a range for C of 240 to 400 with illuminance in lux; a value of 250 is commonly used. A flat receptor typically is used for measurement of lighting ratios, for measurement of illuminance, and occasionally, for determining exposure for a flat subject. For determining practical photographic exposure, a hemispherical receptor has proven more effective. 
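The exposure equations above lend themselves to a quick numerical check. The following is a minimal sketch in Python, not part of the standard or any manufacturer's firmware, that solves the reflected-light and incident-light equations for the exposure time; the scene luminance, ISO, and aperture values in the example are illustrative assumptions only.

```python
# Sketch: solving the exposure equations N^2 / t = L*S/K and N^2 / t = E*S/C for t.
# All numeric example values below are assumptions for illustration.

def reflected_light_exposure_time(f_number, iso_speed, luminance_cd_m2, k=12.5):
    """Exposure time t in seconds from the reflected-light equation N^2/t = L*S/K."""
    return (f_number ** 2) * k / (luminance_cd_m2 * iso_speed)

def incident_light_exposure_time(f_number, iso_speed, illuminance_lux, c=250):
    """Exposure time t in seconds from the incident-light equation N^2/t = E*S/C."""
    return (f_number ** 2) * c / (illuminance_lux * iso_speed)

if __name__ == "__main__":
    # A hypothetical bright sunlit scene of about 4000 cd/m2, ISO 100, f/16, K = 12.5.
    t = reflected_light_exposure_time(f_number=16, iso_speed=100,
                                      luminance_cd_m2=4000, k=12.5)
    print(f"Reflected-light reading suggests about 1/{round(1 / t)} s at f/16")
```

With these assumed inputs the equation returns roughly 1/125 s at f/16 for ISO 100, consistent with the familiar "sunny 16" rule of thumb, which serves as a plausibility check on the calibration constants.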
Don Norwood, inventor of the incident-light exposure meter with a hemispherical receptor, thought that a sphere was a reasonable representation of a photographic subject. According to his patent (Norwood 1938), the objective was to provide an exposure meter "which is substantially uniformly responsive to light incident upon the photographic subject from practically all directions which would result in the reflection of light to the camera or other photographic register", and the meter provided for "measurement of the effective illumination obtaining at the position of the subject." With a hemispherical receptor, ISO 2720:1974 recommends a range for C of 320 to 540 with illuminance in lux; in practice, values typically are between 320 (Minolta) and 340 (Sekonic). The relative responses of flat and hemispherical receptors depend upon the number and type of light sources; when each receptor is pointed at a small light source, a hemispherical receptor with C = 330 will indicate an exposure approximately 0.40 step greater than that indicated by a flat receptor with C = 250. With a slightly revised definition of illuminance, measurements with a hemispherical receptor indicate "effective scene illuminance." Calibrated reflectance It is commonly stated that reflected-light meters are calibrated to an 18% reflectance, but the calibration has nothing to do with reflectance, as should be evident from the exposure formulas. However, some notion of reflectance is implied by a comparison of incident- and reflected-light meter calibration. Combining the reflected-light and incident-light exposure equations and rearranging gives L/E = K/C. Reflectance is defined as the ratio of the reflected flux density (the luminous exitance M) to the incident flux density (the illuminance E), R = M/E. A uniform perfect diffuser (one following Lambert's cosine law) of luminance L emits a flux density of πL; reflectance then is R = πL/E = πK/C. Illuminance is measured with a flat receptor. It is straightforward to compare an incident-light measurement using a flat receptor with a reflected-light measurement of a uniformly illuminated flat surface of constant reflectance. Using values of 12.5 for K and 250 for C gives a reflectance of approximately 15.7%. With a K of 14, the reflectance would be 17.6%, close to that of a standard 18% neutral test card. In theory, an incident-light measurement should agree with a reflected-light measurement of a test card of suitable reflectance that is perpendicular to the direction to the meter. However, a test card seldom is a uniform diffuser, so incident- and reflected-light measurements might differ slightly. In a typical scene, many elements are not flat and are at various orientations to the camera, so that for practical photography, a hemispherical receptor usually has proven more effective for determining exposure. Using values of 12.5 for K and 330 for C gives a reflectance of approximately 11.9%. With a slightly revised definition of reflectance, this result can be taken as indicating that the average scene reflectance is approximately 12%. A typical scene includes shaded areas as well as areas that receive direct illumination, and a wide-angle averaging reflected-light meter responds to these differences in illumination as well as differing reflectances of various scene elements. Average scene reflectance then would be the same ratio πL/E, where the "effective scene illuminance" E is that measured by a meter with a hemispherical receptor. ISO 2720:1974 calls for reflected-light calibration to be measured by aiming the receptor at a transilluminated diffuse surface, and for incident-light calibration to be measured by aiming the receptor at a point source in a darkened room. 
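The reflectance figures quoted above follow directly from the relation R = πK/C, and a few lines of code reproduce them. The snippet below is only an illustrative check of that arithmetic, not part of any standard or meter firmware.

```python
import math

def calibrated_reflectance(k, c):
    """Implied "calibrated reflectance" R = pi * K / C for a pair of constants."""
    return math.pi * k / c

# Common constant pairs discussed in the text.
for k, c, receptor in [(12.5, 250, "flat"),
                       (14,   250, "flat"),
                       (12.5, 330, "hemispherical")]:
    r = calibrated_reflectance(k, c)
    print(f"K = {k}, C = {c} ({receptor} receptor): R = {r:.1%}")
# Prints roughly 15.7%, 17.6% and 11.9%, matching the values given above.
```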
For a perfectly diffusing test card and perfectly diffusing flat receptor, the comparison between a reflected-light measurement and an incident-light measurement is valid for any position of the light source. However, the response of a hemispherical receptor to an off-axis light source is approximately that of a cardioid rather than a cosine, so the 12% "reflectance" determined for an incident-light meter with a hemispherical receptor is valid only when the light source is on the receptor axis. Cameras with internal meters Calibration of cameras with internal meters is covered by ISO 2721:1982; nonetheless, many manufacturers specify (though seldom state) exposure calibration in terms of K, and many calibration instruments (e.g., Kyoritsu-Arrowin multi-function camera testers) use the specified K to set the test parameters. Exposure determination with a neutral test card If a scene differs considerably from a statistically average scene, a wide-angle averaging reflected-light measurement may not indicate the correct exposure. To simulate an average scene, a substitute measurement sometimes is made of a neutral test card, or gray card. At best, a flat card is an approximation to a three-dimensional scene, and measurement of a test card may lead to underexposure unless adjustment is made. The instructions for a Kodak neutral test card recommend that the indicated exposure be increased by 1/2 step for a frontlighted scene in sunlight. The instructions also recommend that the test card be held vertically and faced in a direction midway between the Sun and the camera; similar directions are also given in the Kodak Professional Photoguide. The combination of exposure increase and the card orientation gives recommended exposures that are reasonably close to those given by an incident-light meter with a hemispherical receptor when metering with an off-axis light source. In practice, additional complications may arise. Many neutral test cards are far from perfectly diffuse reflectors, and specular reflections can cause increased reflected-light meter readings that, if followed, would result in underexposure. It is possible that the neutral test card instructions include a correction for specular reflections. Use in illumination Light meters or light detectors are also used in illumination. Their purpose is to measure the illumination level in the interior and to switch off or reduce the output level of luminaires. This can greatly reduce the energy burden of the building by significantly increasing the efficiency of its lighting system. It is therefore recommended to use light meters in lighting systems, especially in rooms where one cannot expect users to pay attention to manually switching off the lights. Examples include hallways, stairs, and big halls. There are, however, significant obstacles to overcome in order to achieve a successful implementation of light meters in lighting systems, of which user acceptance is by far the most formidable. Unexpected or too frequent switching and too bright or too dark rooms are very annoying and disturbing for users of the rooms. Therefore, different switching algorithms have been developed: a difference algorithm, in which lights are switched on at a lower light level than the level at which they are switched off, thus ensuring that the difference between the light levels of the 'on' and 'off' states is not too big; and time-delay algorithms, in which a certain amount of time must pass since the last switch, or a certain amount of time at a sufficient illumination level must pass. 
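The difference (hysteresis) and time-delay strategies just described can be combined in a single control loop. The sketch below is a hypothetical illustration of that logic only; the lux thresholds, hold time, and function names are invented for the example and are not taken from any standard or product.

```python
import time

ON_BELOW_LUX = 300      # switch on when it gets darker than this (assumed value)
OFF_ABOVE_LUX = 500     # switch off only when it is brighter than this (assumed value)
MIN_HOLD_SECONDS = 120  # time-delay rule: do not switch again sooner than this

def control_step(lux, lights_on, last_switch_time, now=None):
    """One iteration of a hysteresis plus time-delay lighting controller.

    Returns the (possibly updated) on/off state and the time of the last switch."""
    now = time.monotonic() if now is None else now
    if now - last_switch_time < MIN_HOLD_SECONDS:
        return lights_on, last_switch_time   # too soon since the last switch: hold state
    if not lights_on and lux < ON_BELOW_LUX:
        return True, now                     # dark enough: switch on
    if lights_on and lux > OFF_ABOVE_LUX:
        return False, now                    # bright enough: switch off
    return lights_on, last_switch_time       # reading between thresholds: keep current state
```

Because the on and off thresholds differ, a reading hovering around a single value cannot make the lights flicker, and the minimum hold time suppresses reactions to brief changes such as a passing cloud.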
Other uses In scientific research and development uses, a light meter consists of a radiometer (the electronics/readout), a photo-diode or sensor (which generates an output when exposed to electromagnetic radiation/light), a filter (used to modify the incoming light so only the desired portion of incoming radiation reaches the sensor) and a cosine correcting input optic (which assures the sensor can see the light coming in from all directions accurately). When the word light meter or photometer is used in place of radiometer or optometer, it is often assumed the system was configured to see only visible light. Visible light sensors are often called illuminance or photometric sensors because they have been filtered to be sensitive only to 400-700 nanometers (nm), mimicking the human eye's sensitivity to light. How accurately the meter measures often depends on how well the filtration matches the human eye's response. The sensor will send a signal to the meter that is proportional to the amount of light that reaches the sensor after being collected by the optics and passing through the filter. The meter then converts the incoming signal (typically current or voltage) from the sensor into a reading of calibrated units such as foot-candles (fc) or lux (lm/m^2). Calibration in fc or lux is the second most important feature of a light meter. It not only converts the signal from V or mA, but it also provides accuracy and unit to unit repeatability. National Institute of Standards and Technology (NIST) traceability and ISO/IEC 17025 accreditation are two well-known terms that verify the system includes a valid calibration. The meter/radiometer/photometer portion may have many features including: Zero: subtracts ambient/background light levels, or stabilizes the meter to the working environment. Hold: freezes the value on the display. Range: for systems that are not linear and auto-ranging, this function allows the user to select the portion of the meter electronics that best handles the signal level in use. Units: for illuminance the units are typically only lux and foot-candles, but many light meters can also be used for UV, VIS and IR applications, so the readout could change to W/cm^2, candela, watts, etc. Integrate: sums up the values into a dose or exposure level, i.e. lux*sec or J/cm^2. Along with having a variety of features, a light meter may also be usable for a variety of applications. These may include the measurement of other bands of light such as UVA, UVB, UVC and near IR. For example, UVA and UVB light meters are used for phototherapy or treatment of skin conditions, germicidal radiometers are used for measuring the UVC level from lamps used for disinfection and sterilization, luminance meters are used to measure the brightness of a sign, display or exit sign, PAR quantum sensors are used to measure how much of a given light source's emission will help plants grow, and UV-curing radiometers test how much of the light's emission is effective for hardening a glue, plastic, or protective coating. Some light meters also have the ability to provide a readout in many different units. Lux and foot-candles are the common units for visible light, but so are candelas, lumens, and candela per square meter. In the realm of disinfection, UVC is typically measured in watts per square centimeter, or watts for a given individual lamp assembly, whereas systems used in the context of the curing of coatings often provide readouts in joules per square centimeter. 
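Two of the housekeeping tasks mentioned above, converting between photometric units and integrating a signal into a dose, amount to simple arithmetic. The brief sketch below is illustrative only; the sample readings are made up and the functions are not from any instrument's software.

```python
LUX_PER_FOOTCANDLE = 10.7639  # 1 fc = 1 lumen per square foot, about 10.764 lux

def lux_to_footcandles(lux):
    """Convert an illuminance reading from lux to foot-candles."""
    return lux / LUX_PER_FOOTCANDLE

def integrate_dose(readings_w_per_cm2, interval_s):
    """Sum equally spaced irradiance readings (W/cm^2) into a dose in J/cm^2."""
    return sum(readings_w_per_cm2) * interval_s

print(f"750 lux is about {lux_to_footcandles(750):.1f} fc")
# Five one-second readings from a hypothetical UV-curing lamp:
print(f"Dose: {integrate_dose([0.02, 0.021, 0.02, 0.019, 0.02], 1.0):.3f} J/cm^2")
```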
Regular measurements of UVC light intensity thus can serve to provide assurance of proper disinfection of water and food-preparation surfaces, or reliable coating hardness in painted products. Although a light meter can take the form of a very simple handheld tool with one-button operation, there are also many advanced light-measurement systems available for use in numerous different applications. These can be incorporated into automated systems that can, for example, wipe lamps clean when a certain reduction in output is detected, or that can trigger an alarm when lamp-failure occurs. See also Selenium meter Photometer | Photodetector Colorimetry | Photometry | Radiometry Light value Photomultiplier tubes for detecting light at very low levels. PIN diode solid state electronic devices for detecting incident light. References Bibliography Ctein. 1997. Post Exposure: Advanced Techniques for the Photographic Printer. Boston: Focal Press. . Eastman Kodak Company. Instructions for Kodak Neutral Test Card, 453-1-78-ABX. Rochester: Eastman Kodak Company. Eastman Kodak Company. 1992. Kodak Professional Photoguide. Kodak publication no. R-28. Rochester: Eastman Kodak Company. ISO 2720:1974. General Purpose Photographic Exposure Meters (Photoelectric Type) — Guide to Product Specification. International Organization for Standardization. ISO 2721:2013. Photography — Film-based cameras — Automatic controls of exposure. International Organization for Standardization. Jones, Loyd A., and H. R. Condit. 1941. The Brightness Scale of Exterior Scenes and the Correct Computation of Photographic Exposure. Journal of the Optical Society of America. 31:651–678. . External links The Problem with Lux Meters An article suggesting that Lux meter may read incorrectly when measuring light not from a tungsten source (i.e. fluorescent, metal halide, sodium, LED and other types). Exposure Metering: Relating Subject Lighting to Film Exposure (PDF) A discussion of meter calibration and its practical effects. Estimating Luminance and Illuminance (PDF) A Kodak guide to using a camera's exposure meter. Basic Light Measurement Principles An article from International Light Technologies on basic principles Photography equipment Electromagnetic radiation meters Lighting
Light meter
[ "Physics", "Technology", "Engineering" ]
4,623
[ "Measuring instruments", "Spectrum (physical sciences)", "Electromagnetic spectrum", "Electromagnetic radiation meters" ]
168,479
https://en.wikipedia.org/wiki/Enhanced%20CD
Enhanced CD is a certification mark of the Recording Industry Association of America for various technologies that combine audio and computer data for use in both compact disc and CD-ROM players. Formats that fall under the enhanced CD category include mixed mode CD (Yellow Book CD-ROM/Red Book CD-DA), CD-i, CD-i Ready, and CD-Extra/CD-Plus (Blue Book, also called simply Enhanced Music CD or E-CD). See also DualDisc CDVU+ Super Audio CD Mixed Mode CD References 120 mm discs Audio storage Certification marks Video storage Optical computer storage media
Enhanced CD
[ "Mathematics" ]
123
[ "Symbols", "Certification marks" ]
168,506
https://en.wikipedia.org/wiki/Esophagus
The esophagus (American English), oesophagus (British English), or œsophagus (archaic spelling) (see spelling differences; plural: (o)esophagi or (o)esophaguses), colloquially known also as the food pipe, food tube, or gullet, is an organ in vertebrates through which food passes, aided by peristaltic contractions, from the pharynx to the stomach. The esophagus is a fibromuscular tube, about 25 cm (10 in) long in adults, that travels behind the trachea and heart, passes through the diaphragm, and empties into the uppermost region of the stomach. During swallowing, the epiglottis tilts backwards to prevent food from going down the larynx and lungs. The word oesophagus is from Ancient Greek οἰσοφάγος (oisophágos), from οἴσω (oísō), future form of φέρω (phérō, "I carry") + ἔφαγον (éphagon, "I ate"). The wall of the esophagus from the lumen outwards consists of mucosa, submucosa (connective tissue), layers of muscle fibers between layers of fibrous tissue, and an outer layer of connective tissue. The mucosa is a stratified squamous epithelium of around three layers of squamous cells, which contrasts to the single layer of columnar cells of the stomach. The transition between these two types of epithelium is visible as a zig-zag line. Most of the muscle is smooth muscle although striated muscle predominates in its upper third. It has two muscular rings or sphincters in its wall, one at the top and one at the bottom. The lower sphincter helps to prevent reflux of acidic stomach content. The esophagus has a rich blood supply and venous drainage. Its smooth muscle is innervated by involuntary nerves (sympathetic nerves via the sympathetic trunk and parasympathetic nerves via the vagus nerve) and in addition by voluntary nerves (lower motor neurons), which are carried in the vagus nerve to innervate its striated muscle. The esophagus passes through the thoracic cavity and the diaphragm into the stomach. The esophagus may be affected by gastric reflux, cancer, prominent dilated blood vessels called varices that can bleed heavily, tears, constrictions, and disorders of motility. Diseases may cause difficulty swallowing (dysphagia), painful swallowing (odynophagia), chest pain, or cause no symptoms at all. Clinical investigations include X-rays when swallowing barium sulfate, endoscopy, and CT scans. Surgically, the esophagus is difficult to access in part due to its position between critical organs and directly between the sternum and spinal column. Structure The esophagus is one of the upper parts of the digestive system. There are taste buds on its upper part. It begins at the back of the mouth, passing downward through the rear part of the mediastinum, through the diaphragm, and into the stomach. In humans, the esophagus generally starts around the level of the sixth cervical vertebra behind the cricoid cartilage of the trachea, enters the diaphragm at about the level of the tenth thoracic vertebra, and ends at the cardia of the stomach, at the level of the eleventh thoracic vertebra. The esophagus is usually about 25 cm (10 in) in length. Many blood vessels serve the esophagus, with blood supply varying along its course. The upper parts of the esophagus and the upper esophageal sphincter receive blood from the inferior thyroid artery, the parts of the esophagus in the thorax from the bronchial arteries and branches directly from the thoracic aorta, and the lower parts of the esophagus and the lower esophageal sphincter receive blood from the left gastric artery and the left inferior phrenic artery.
The venous drainage also differs along the course of the esophagus. The upper and middle parts of the esophagus drain into the azygos and hemiazygos veins, and blood from the lower part drains into the left gastric vein. All these veins drain into the superior vena cava, with the exception of the left gastric vein, which is a branch of the portal vein. Lymphatically, the upper third of the esophagus drains into the deep cervical lymph nodes, the middle into the superior and posterior mediastinal lymph nodes, and the lower esophagus into the gastric and celiac lymph nodes. This is similar to the lymphatic drainage of the abdominal structures that arise from the foregut, which all drain into the celiac nodes. Position The upper esophagus lies at the back of the mediastinum behind the trachea, adjoining along the tracheoesophageal stripe, and in front of the erector spinae muscles and the vertebral column. The lower esophagus lies behind the heart and curves in front of the thoracic aorta. From the bifurcation of the trachea downwards, the esophagus passes behind the right pulmonary artery, left main bronchus, and left atrium. At this point, it passes through the diaphragm. The thoracic duct, which drains the majority of the body's lymph, passes behind the esophagus, curving from lying behind the esophagus on the right in the lower part of the esophagus, to lying behind the esophagus on the left in the upper esophagus. The esophagus also lies in front of parts of the hemiazygos veins and the intercostal veins on the right side. The vagus nerve divides and covers the esophagus in a plexus. Constrictions The esophagus has four points of constriction. When a corrosive substance, or a solid object is swallowed, it is most likely to lodge and damage one of these four points. These constrictions arise from particular structures that compress the esophagus. These constrictions are: At the start of the esophagus, where the laryngopharynx joins the esophagus, behind the cricoid cartilage Where it is crossed on the front by the aortic arch in the superior mediastinum Where the esophagus is compressed by the left main bronchus in the posterior mediastinum The esophageal hiatus, where it passes through the diaphragm in the posterior mediastinum Sphincters The esophagus is surrounded at the top and bottom by two muscular rings, known respectively as the upper esophageal sphincter and the lower esophageal sphincter. These sphincters act to close the esophagus when food is not being swallowed. The upper esophageal sphincter is an anatomical sphincter, which is formed by the lower portion of the inferior pharyngeal constrictor, also known as the cricopharyngeal sphincter due to its relation with cricoid cartilage of the larynx anteriorly. However, the lower esophageal sphincter is not an anatomical but rather a functional sphincter, meaning that it acts as a sphincter but does not have a distinct thickening like other sphincters. The upper esophageal sphincter surrounds the upper part of the esophagus. It consists of skeletal muscle but is not under voluntary control. Opening of the upper esophageal sphincter is triggered by the swallowing reflex. The primary muscle of the upper esophageal sphincter is the cricopharyngeal part of the inferior pharyngeal constrictor. The lower esophageal sphincter, or gastroesophageal sphincter, surrounds the lower part of the esophagus at the junction between the esophagus and the stomach. 
It is also called the cardiac sphincter or cardioesophageal sphincter, named from the adjacent part of the stomach, the cardia. Dysfunction of the gastroesophageal sphincter causes gastroesophageal reflux, which causes heartburn, and, if it happens often enough, can lead to gastroesophageal reflux disease, with damage of the esophageal mucosa. Nerve supply The esophagus is innervated by the vagus nerve and the cervical and thoracic sympathetic trunk. The vagus nerve has a parasympathetic function, supplying the muscles of the esophagus and stimulating glandular contraction. Two sets of nerve fibers travel in the vagus nerve to supply the muscles. The upper striated muscle, and upper esophageal sphincter, are supplied by neurons with bodies in the nucleus ambiguus, whereas fibers that supply the smooth muscle and lower esophageal sphincter have bodies situated in the dorsal motor nucleus. The vagus nerve plays the primary role in initiating peristalsis. The sympathetic trunk has a sympathetic function. It may enhance the function of the vagus nerve, increasing peristalsis and glandular activity, and causing sphincter contraction. In addition, sympathetic activation may relax the muscle wall and cause blood vessel constriction. Sensation along the esophagus is supplied by both nerves, with gross sensation being passed in the vagus nerve and pain passed up the sympathetic trunk. Gastroesophageal junction The gastroesophageal junction (also known as the esophagogastric junction) is the junction between the esophagus and the stomach, at the lower end of the esophagus. The pink color of the esophageal mucosa contrasts to the deeper red of the gastric mucosa, and the mucosal transition can be seen as an irregular zig-zag line, which is often called the z-line. Histological examination reveals abrupt transition between the stratified squamous epithelium of the esophagus and the simple columnar epithelium of the stomach. Normally, the cardia of the stomach is immediately distal to the z-line and the z-line coincides with the upper limit of the gastric folds of the cardia; however, when the anatomy of the mucosa is distorted in Barrett's esophagus the true gastroesophageal junction can be identified by the upper limit of the gastric folds rather than the mucosal transition. The functional location of the lower oesophageal sphincter is generally situated about below the z-line. Microanatomy The human esophagus has a mucous membrane consisting of a tough stratified squamous epithelium without keratin, a smooth lamina propria, and a muscularis mucosae. The epithelium of the esophagus has a relatively rapid turnover and serves a protective function against the abrasive effects of food. In many animals, the epithelium contains a layer of keratin, representing a coarser diet. There are two types of glands, with mucus-secreting esophageal glands being found in the submucosa and esophageal cardiac glands, similar to cardiac glands of the stomach, located in the lamina propria and most frequent in the terminal part of the organ. The mucus from the glands gives a good protection to the lining. The submucosa also contains the submucosal plexus, a network of nerve cells that is part of the enteric nervous system. The muscular layer of the esophagus has two types of muscle. The upper third of the esophagus contains striated muscle, the lower third contains smooth muscle, and the middle third contains a mixture of both. 
Muscle is arranged in two layers: one in which the muscle fibers run longitudinal to the esophagus, and the other in which the fibers encircle the esophagus. These are separated by the myenteric plexus, a tangled network of nerve fibers involved in the secretion of mucus and in peristalsis of the smooth muscle of the esophagus. The outermost layer of the esophagus is the adventitia in most of its length, with the abdominal part being covered in serosa. This makes it distinct from many other structures in the gastrointestinal tract that only have a serosa. Development In early embryogenesis, the esophagus develops from the endodermal primitive gut tube. The ventral part of the embryo abuts the yolk sac. During the second week of embryological development, as the embryo grows, it begins to surround parts of the sac. The enveloped portions form the basis for the adult gastrointestinal tract. The sac is surrounded by a network of vitelline arteries. Over time, these arteries consolidate into the three main arteries that supply the developing gastrointestinal tract: the celiac artery, superior mesenteric artery, and inferior mesenteric artery. The areas supplied by these arteries are used to define the midgut, hindgut and foregut. The surrounded sac becomes the primitive gut. Sections of this gut begin to differentiate into the organs of the gastrointestinal tract, such as the esophagus, stomach, and intestines. The esophagus develops as part of the foregut tube. The innervation of the esophagus develops from the pharyngeal arches. Function Swallowing Food is ingested through the mouth and when swallowed passes first into the pharynx and then into the esophagus. The esophagus is thus one of the first components of the digestive system and the gastrointestinal tract. After food passes through the esophagus, it enters the stomach. When food is being swallowed, the epiglottis moves backward to cover the larynx, preventing food from entering the trachea. At the same time, the upper esophageal sphincter relaxes, allowing a bolus of food to enter. Peristaltic contractions of the esophageal muscle push the food down the esophagus. These rhythmic contractions occur both as a reflex response to food that is in the mouth, and also as a response to the sensation of food within the esophagus itself. Along with peristalsis, the lower esophageal sphincter relaxes. Reducing gastric reflux The stomach produces gastric acid, a strongly acidic mixture consisting of hydrochloric acid (HCl) and potassium and sodium salts to enable food digestion. Constriction of the upper and lower esophageal sphincters helps to prevent reflux (backflow) of gastric contents and acid into the esophagus, protecting the esophageal mucosa. The acute angle of His and the lower crura of the diaphragm also help this sphincteric action. Gene and protein expression About 20,000 protein-coding genes are expressed in human cells and nearly 70% of these genes are expressed in the normal esophagus. Some 250 of these genes are more specifically expressed in the esophagus with less than 50 genes being highly specific. The corresponding esophagus-specific proteins are mainly involved in squamous differentiation such as keratins KRT13, KRT4 and KRT6C. Other specific proteins that help lubricate the inner surface of esophagus are mucins such as MUC21 and MUC22. Many genes with elevated expression are also shared with skin and other organs that are composed of squamous epithelia. 
Clinical significance The main conditions affecting the esophagus are described here. For a more complete list, see esophageal disease. Inflammation Inflammation of the esophagus is known as esophagitis. Reflux of gastric acids from the stomach, infection, substances ingested (for example, corrosives), some medications (such as bisphosphonates), and food allergies can all lead to esophagitis. Esophageal candidiasis is an infection of the yeast Candida albicans that may occur when a person is immunocompromised. The causes of some forms of esophagitis, such as eosinophilic esophagitis, are not well characterized, but may include Th2-mediated atopies or genetic factors. There appear to be correlations between eosinophilic esophagitis, asthma (itself with an eosinophilic component), eczema, and allergic rhinitis, though it is not clear whether these conditions contribute to eosinophilic esophagitis or vice versa, or if they are symptoms of mutual underlying factors. Esophagitis can cause painful swallowing and is usually treated by managing the cause of the esophagitis, such as managing reflux or treating infection. Barrett's esophagus Prolonged esophagitis, particularly from gastric reflux, is one factor thought to play a role in the development of Barrett's esophagus. In this condition, there is metaplasia of the lining of the lower esophagus, which changes from stratified squamous epithelia to simple columnar epithelia. Barrett's esophagus is thought to be one of the main contributors to the development of esophageal cancer. Cancer There are two main types of cancer of the esophagus. Squamous cell carcinoma is a carcinoma that can occur in the squamous cells lining the esophagus. This type is much more common in China and Iran. The other main type is an adenocarcinoma that occurs in the glands or columnar tissue of the esophagus. This is most common in developed countries in those with Barrett's esophagus, and occurs in the cuboidal cells. In its early stages, esophageal cancer may not have any symptoms at all. When severe, esophageal cancer may eventually cause obstruction of the esophagus, making swallowing of any solid foods very difficult and causing weight loss. The progress of the cancer is staged using a system that measures how far into the esophageal wall the cancer has invaded, how many lymph nodes are affected, and whether there are any metastases in different parts of the body. Esophageal cancer is often managed with radiotherapy and chemotherapy, and may also be managed by partial surgical removal of the esophagus. Inserting a stent into the esophagus, or inserting a nasogastric tube, may also be used to ensure that a person is able to digest enough food and water. The prognosis for esophageal cancer remains poor, so palliative therapy may also be a focus of treatment. Varices Esophageal varices are swollen twisted branches of the azygous vein in the lower third of the esophagus. These blood vessels anastomose (join up) with those of the portal vein when portal hypertension develops. These blood vessels are engorged more than normal, and in the worst cases may partially obstruct the esophagus. These blood vessels develop as part of a collateral circulation that occurs to drain blood from the abdomen as a result of portal hypertension, usually as a result of liver diseases such as cirrhosis. This collateral circulation occurs because the lower part of the esophagus drains into the left gastric vein, which is a branch of the portal vein.
Because of the extensive venous plexus that exists between this vein and other veins, if portal hypertension occurs, the direction of blood drainage in this vein may reverse, with blood draining from the portal venous system through the plexus. Veins in the plexus may engorge and lead to varices. Esophageal varices often do not have symptoms until they rupture. A ruptured varix is considered a medical emergency because varices can bleed heavily. A bleeding varix may cause a person to vomit blood or suffer shock. To deal with a ruptured varix, a band may be placed around the bleeding blood vessel, or a small amount of a clotting agent may be injected near the bleed. A surgeon may also try to use a small inflatable balloon to apply pressure to stop the wound. IV fluids and blood products may be given in order to prevent hypovolemia from excess blood loss. Motility disorders Several disorders affect the motility of food as it travels down the esophagus. This can cause difficult swallowing, called dysphagia, or painful swallowing, called odynophagia. Achalasia refers to a failure of the lower esophageal sphincter to relax properly, and generally develops later in life. This leads to progressive enlargement of the esophagus, and possibly eventual megaesophagus. A nutcracker esophagus refers to swallowing that can be extremely painful. Diffuse esophageal spasm is a spasm of the esophagus that can be one cause of chest pain. Such referred pain to the wall of the upper chest is quite common in esophageal conditions. Sclerosis of the esophagus, such as with systemic sclerosis or in CREST syndrome, may cause hardening of the walls of the esophagus and interfere with peristalsis. Malformations Esophageal strictures are usually benign and typically develop after a person has had reflux for many years. Other strictures may include esophageal webs (which can also be congenital) and damage to the esophagus by radiotherapy, corrosive ingestion, or eosinophilic esophagitis. A Schatzki ring is fibrosis at the gastroesophageal junction. Strictures may also develop in chronic anemia and Plummer–Vinson syndrome. Two of the most common congenital malformations affecting the esophagus are an esophageal atresia, where the esophagus ends in a blind sac instead of connecting to the stomach, and an esophageal fistula – an abnormal connection between the esophagus and the trachea. Both of these conditions usually occur together. These are found in about 1 in 3500 births. Half of these cases may be part of a syndrome where other abnormalities are also present, particularly of the heart or limbs. The other cases occur singly. Imaging An X-ray of swallowed barium may be used to reveal the size and shape of the esophagus, and the presence of any masses. The esophagus may also be imaged using a flexible camera inserted into the esophagus, in a procedure called an endoscopy. If an endoscopy is used on the stomach, the camera will also have to pass through the esophagus. During an endoscopy, a biopsy may be taken. If cancer of the esophagus is being investigated, other methods, including a CT scan, may also be used. History The word esophagus (British English: oesophagus) comes from the Ancient Greek οἰσοφάγος (oisophágos), meaning gullet. It derives from two roots: οἴσω (oísō), to carry, and ἔφαγον (éphagon), to eat. The use of the word oesophagus has been documented in anatomical literature since at least the time of Hippocrates, who noted that "the oesophagus ... receives the greatest amount of what we consume."
Its existence in other animals and its relationship with the stomach was documented by the Roman naturalist Pliny the Elder (AD23–AD79), and the peristaltic contractions of the esophagus have been documented since at least the time of Galen. The first attempt at surgery on the esophagus focused in the neck, and was conducted in dogs by Theodore Billroth in 1871. In 1877 Czerny carried out surgery in people. By 1908, an operation had been performed by Voeckler to remove the esophagus, and in 1933 the first surgical removal of parts of the lower esophagus, (to control esophageal cancer), had been conducted. The Nissen fundoplication, in which the stomach is wrapped around the lower esophageal sphincter to stimulate its function and control reflux, was first conducted by Rudolph Nissen in 1955. Other animals Vertebrates In tetrapods, the pharynx is much shorter, and the esophagus correspondingly longer, than in fish. In the majority of vertebrates, the esophagus is simply a connecting tube, but in some birds, which regurgitate components to feed their young, it is extended towards the lower end to form a crop for storing food before it enters the true stomach. In ruminants, animals with four chambered stomachs, a groove called the sulcus reticuli is often found in the esophagus, allowing milk to drain directly into the hind stomach, the abomasum. In the horse the esophagus is about in length, and carries food to the stomach. A muscular ring, called the cardiac sphincter, connects the stomach to the esophagus. This sphincter is very well developed in horses. This and the oblique angle at which the esophagus connects to the stomach explains why horses cannot vomit. The esophagus is also the area of the digestive tract where horses may have the condition known as choke. The esophagus of snakes is remarkable for the distension it undergoes when swallowing prey. In most fish, the esophagus is extremely short, primarily due to the length of the pharynx (which is associated with the gills). However, some fish, including lampreys, chimaeras, and lungfish, have no true stomach, so that the esophagus effectively runs from the pharynx directly to the intestine, and is therefore somewhat longer. In many vertebrates, the esophagus is lined by stratified squamous epithelium without glands. In fish, the esophagus is often lined with columnar epithelium, and in amphibians, sharks and rays, the esophageal epithelium is ciliated, helping to wash food along, in addition to the action of muscular peristalsis. In addition, in the bat Plecotus auritus, fish and some amphibians, glands secreting pepsinogen or hydrochloric acid have been found. The muscle of the esophagus in many mammals is initially striated but then becomes smooth muscle in the caudal third or so. In canines and ruminants, however, it is entirely striated to allow regurgitation to feed young (canines) or regurgitation to chew cud (ruminants). It is entirely smooth muscle in amphibians, reptiles and birds. Contrary to popular belief, an adult human body would not be able to pass through the esophagus of a whale, which generally measures less than in diameter, although in larger baleen whales it may be up to when fully distended. Invertebrates A structure with the same name is often found in invertebrates, including molluscs and arthropods, connecting the oral cavity with the stomach. In terms of the digestive system of snails and slugs, the mouth opens into an esophagus, which connects to the stomach. 
Because of torsion, which is the rotation of the main body of the animal during larval development, the esophagus usually passes around the stomach, and opens into its back, furthest from the mouth. In species that have undergone de-torsion, however, the esophagus may open into the anterior of the stomach, which is the reverse of the usual gastropod arrangement. There is an extensive rostrum at the front of the esophagus in all carnivorous snails and slugs. In the freshwater snail species Tarebia granifera, the brood pouch is above the esophagus. In the cephalopods, the brain often surrounds the esophagus. See also References External links Digestive system Thorax (human anatomy) Organs (anatomy) Human head and neck Abdomen
Esophagus
[ "Biology" ]
6,191
[ "Digestive system", "Organ systems" ]
168,568
https://en.wikipedia.org/wiki/Film%20speed
Film speed is the measure of a photographic film's sensitivity to light, determined by sensitometry and measured on various numerical scales, the most recent being the ISO system introduced in 1974. A closely related system, also known as ISO, is used to describe the relationship between exposure and output image lightness in digital cameras. Prior to ISO, the most common systems were ASA in the United States and DIN in Europe. The term speed comes from the early days of photography. Photographic emulsions that were more sensitive to light needed less time to generate an acceptable image and thus a complete exposure could be finished faster, with the subjects having to hold still for a shorter length of time. Emulsions that were less sensitive were deemed "slower" as the time to complete an exposure was much longer and often usable only for still life photography. Exposure times for photographic emulsions shortened from hours to fractions of a second by the late 19th century. In both film and digital photography, choice of speed will almost always affect image quality. Higher sensitivities, which require shorter exposures, typically result in reduced image quality due to coarser film grain or increased digital image noise. Lower sensitivities, which require longer exposures, will retain more viable image data due to finer grain or less noise, and therefore more detail. Ultimately, sensitivity is limited by the quantum efficiency of the film or sensor. To determine the exposure time needed for a given film, a light meter is typically used. Film speed measurement systems Emulsion speed rating criteria Five criteria for the rating of emulsion speed have been used since the late 19th century, listed here by name and date, these criteria are: threshold (1880), inertia (1890), fixed density (1934), minimum useful gradient (1939) and fractional gradient (1939). Threshold The threshold criterion is the point on the characteristic curve corresponding to just perceptible density above fog. Inertia The inertia speed point of an emulsion is determined on the Hurter and Driffield characteristic curve by the intercept between the gradient of the straight line part of the curve and the line representing the base + fog (B+F) on the density axis. Fixed density The fixed density speed point is determined by defining a fixed minimum density as the basis the emulsion speed (e.g. 0.1 above B+F). Minimum useful gradient The minimum useful gradient criterion places the speed point where the gradient first reaches an agreed value (e.g. tan 𝜃 = 0.2). Fractional gradient The fractional gradient is defined as the speed point at which the slope of the characteristic curve first reaches a fixed fraction (e.g. 0.3) of the average gradient over a range (e.g. 1.5) of the characteristic curve. Historical systems Warnerke The first known practical sensitometer, which allowed measurements of the speed of photographic materials, was invented by the Polish engineer Leon Warnerke – pseudonym of (1837–1900) – in 1880, among the achievements for which he was awarded the Progress Medal of the Photographic Society of Great Britain in 1882. It was commercialized since 1881. The Warnerke Standard Sensitometer consisted of a frame holding an opaque screen with an array of typically 25 numbered, gradually pigmented squares brought into contact with the photographic plate during a timed test exposure under a phosphorescent tablet excited before by the light of a burning magnesium ribbon. 
The speed of the emulsion was then expressed in 'degrees' Warnerke (sometimes seen as Warn. or °W.) corresponding with the last number visible on the exposed plate after development and fixation. Each number represented an increase of 1/3 in speed; typical plate speeds were between 10° and 25° Warnerke at the time. His system saw some success but proved to be unreliable due to its spectral sensitivity to light, the fading intensity of the light emitted by the phosphorescent tablet after its excitation, as well as high built-in tolerances. The concept, however, was later built upon in 1900 by Henry Chapman Jones (1855–1932) in the development of his plate tester and modified speed system. Hurter & Driffield Another early practical system for measuring the sensitivity of an emulsion was that of Hurter and Driffield (H&D), originally described in 1890, by the Swiss-born Ferdinand Hurter (1844–1898) and British Vero Charles Driffield (1848–1915). In their system, speed numbers were inversely proportional to the exposure required. For example, an emulsion rated at 250 H&D would require ten times the exposure of an emulsion rated at 2500 H&D. The methods to determine the sensitivity were later modified in 1925 (in regard to the light source used) and in 1928 (regarding light source, developer and proportional factor); this later variant was sometimes called "H&D 10". The H&D system was officially accepted as a standard in the former Soviet Union from 1928 until September 1951, when it was superseded by GOST 2817–50. Scheiner The Scheinergrade (Sch.) system was devised by the German astronomer Julius Scheiner (1858–1913) in 1894 originally as a method of comparing the speeds of plates used for astronomical photography. Scheiner's system rated the speed of a plate by the least exposure to produce a visible darkening upon development. Speed was expressed in degrees Scheiner, originally ranging from 1° to 20° Sch., with each increment of a degree corresponding to a multiplicative factor of increased light sensitivity. This multiplicative factor was determined by the constraint that an increment of 19° Sch. (from 1° to 20° Sch.) corresponded to a hundredfold increase in sensitivity. Thus emulsions that differed by 1° Sch. on the Scheiner scale differed in sensitivity by a factor of 100^(1/19) ≈ 1.27. An increment of 3° Sch. came close to a doubling of sensitivity (100^(3/19) ≈ 2.07). The system was later extended to cover larger ranges and some of its practical shortcomings were addressed by the Austrian scientist Josef Maria Eder (1855–1944) and the Flemish-born botanist Walter Hecht (1896–1960) (who, in 1919/1920, jointly developed their Eder–Hecht neutral wedge sensitometer measuring emulsion speeds in Eder–Hecht grades). It remained difficult for manufacturers to reliably determine film speeds, often only by comparing with competing products, so that an increasing number of modified semi-Scheiner-based systems started to spread, which no longer followed Scheiner's original procedures and thereby defeated the idea of comparability. Scheiner's system was eventually abandoned in Germany, when the standardized DIN system was introduced in 1934. In various forms, it continued to be in widespread use in other countries for some time. DIN The DIN system, officially DIN standard 4512 by the Deutsches Institut für Normung (at the time known as the Deutscher Normenausschuß (DNA)), was published in January 1934.
It grew out of drafts for a standardized method of sensitometry put forward by the as proposed by the committee for sensitometry of the since 1930 and presented by (1868–1945) and Emanuel Goldberg (1881–1970) at the influential VIII. International Congress of Photography (German: ) held in Dresden from 3 to 8 August 1931. The DIN system was inspired by Scheiner's system, but the sensitivities were represented as the base 10 logarithm of the sensitivity multiplied by 10, similar to decibels. Thus an increase of 20° (and not 19° as in Scheiner's system) represented a hundredfold increase in sensitivity, and a difference of 3° was much closer to the base 10 logarithm of 2 (0.30103...): . As in the Scheiner system, speeds were expressed in 'degrees'. Originally the sensitivity was written as a fraction with 'tenths' (for example "18/10° DIN"), where the resultant value 1.8 represented the relative base 10 logarithm of the speed. 'Tenths' were later abandoned with DIN 4512:1957-11, and the example above would be written as "18° DIN". The degree symbol was finally dropped with DIN 4512:1961-10. This revision also saw significant changes in the definition of film speeds in order to accommodate then-recent changes in the American ASA PH2.5-1960 standard, so that film speeds of black-and-white negative film effectively would become doubled, that is, a film previously marked as "18° DIN" would now be labeled as "21 DIN" without emulsion changes. Originally only meant for black-and-white negative film, the system was later extended and regrouped into nine parts, including DIN 4512-1:1971-04 for black-and-white negative film, DIN 4512-4:1977-06 for color reversal film and DIN 4512-5:1977-10 for color negative film. On an international level the German DIN 4512 system has been effectively superseded in the 1980s by ISO 6:1974, ISO 2240:1982, and ISO 5800:1979 where the same sensitivity is written in linear and logarithmic form as "ISO 100/21°" (now again with degree symbol). These ISO standards were subsequently adopted by DIN as well. Finally, the latest DIN 4512 revisions were replaced by corresponding ISO standards, DIN 4512-1:1993-05 by DIN ISO 6:1996-02 in September 2000, DIN 4512-4:1985-08 by DIN ISO 2240:1998-06 and DIN 4512-5:1990-11 by DIN ISO 5800:1998-06 both in July 2002. BSI When BS 935:1941 was published during World War II, specifying exposure tables for negative materials, it employed the same fixed-density speed criterion used in the German DIN 4512:1934 system. The British Standard also used logarithmic speed numbers, following the example of Scheiner and DIN. When the American ASA Z38.2.1:1943 standard was published, it used a fractional gradient speed criterion and arithmetic speed numbers, for compatibility with Weston and GE. British standard BS 1380:1947 adopted the fractional gradient criterion of the American 1943 standard, and also included arithmetic speed numbers in addition to logarithmic numbers. The logarithmic speed number proposed in the later BS 1380:1957 standard was almost identical to the DIN 4512:1957 standard, except that the BS number was +9 degrees greater than the corresponding DIN number; in 1971, the BS and DIN standards changed this to +10 degrees. Following an increasing effort to produce international standards, the British, American, and German standards became identical in ISO 6:1974, which corresponded to BS 1380:Part1:1973. 
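The logarithmic degree scales used by DIN and BS described above express sensitivity as ten times its base 10 logarithm, which is easy to check numerically. The short sketch below is an illustrative calculation only, not part of any standard text; it simply confirms that an increase of 20 degrees corresponds to a hundredfold gain in sensitivity while 3 degrees comes close to a doubling.

```python
# Illustrative check of the logarithmic (DIN-style) speed scale described above:
# the degree value is 10 times the base-10 logarithm of the relative sensitivity.
import math

def degrees_from_sensitivity(relative_sensitivity):
    return 10 * math.log10(relative_sensitivity)

def sensitivity_ratio(delta_degrees):
    """Relative sensitivity gain for an increase of delta_degrees."""
    return 10 ** (delta_degrees / 10)

print(sensitivity_ratio(20))         # 100.0  -> +20 degrees is a hundredfold increase
print(sensitivity_ratio(3))          # ~1.995 -> +3 degrees is close to a doubling
print(degrees_from_sensitivity(2))   # ~3.01 degrees per doubling
```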
Weston Before the advent of the ASA system, the system of Weston film speed ratings was introduced by Edward Faraday Weston (1878–1971) and his father Dr. Edward Weston (1850–1936), a British-born electrical engineer, industrialist and founder of the US-based Weston Electrical Instrument Corporation, with the Weston model 617, one of the earliest photo-electric exposure meters, in August 1932. The meter and film rating system were invented by William Nelson Goodwin, Jr., who worked for them and later received a Howard N. Potts Medal for his contributions to engineering. The company tested and frequently published speed ratings for most films of the time. Weston film speed ratings could since be found on most Weston exposure meters and were sometimes referred to by film manufacturers and third parties in their exposure guidelines. Since manufacturers were sometimes creative about film speeds, the company went as far as to warn users about unauthorized uses of their film ratings in their "Weston film ratings" booklets. The Weston Cadet (model 852 introduced in 1949), Direct Reading (model 853 introduced 1954) and Master III (models 737 and S141.3 introduced in 1956) were the first in their line of exposure meters to switch and utilize the meanwhile established ASA scale instead. Other models used the original Weston scale up until ca. 1955. The company continued to publish Weston film ratings after 1955, but while their recommended values often differed slightly from the ASA film speeds found on film boxes, these newer Weston values were based on the ASA system and had to be converted for use with older Weston meters by subtracting 1/3 exposure stop as per Weston's recommendation. Vice versa, "old" Weston film speed ratings could be converted into "new" Westons and the ASA scale by adding the same amount, that is, a film rating of 100 Weston (up to 1955) corresponded with 125 ASA (as per ASA PH2.5-1954 and before). This conversion was not necessary on Weston meters manufactured and Weston film ratings published since 1956 due to their inherent use of the ASA system; however the changes of the ASA PH2.5-1960 revision may be taken into account when comparing with newer ASA or ISO values. General Electric Prior to the establishment of the ASA scale and similar to Weston film speed ratings another manufacturer of photo-electric exposure meters, General Electric, developed its own rating system of so-called General Electric film values (often abbreviated as G-E or GE) around 1937. Film speed values for use with their meters were published in regularly updated General Electric Film Values leaflets and in the General Electric Photo Data Book. General Electric switched to use the ASA scale in 1946. Meters manufactured since February 1946 are equipped with the ASA scale (labeled "Exposure Index") already. For some of the older meters with scales in "Film Speed" or "Film Value" (e.g. models DW-48, DW-49 as well as early DW-58 and GW-68 variants), replaceable hoods with ASA scales were available from the manufacturer. The company continued to publish recommended film values after that date, however, they were then aligned to the ASA scale. ASA Based on earlier research work by Loyd Ancile Jones (1884–1954) of Kodak and inspired by the systems of Weston film speed ratings and General Electric film values, the American Standards Association (now named ANSI) defined a new method to determine and specify film speeds of black-and-white negative films in 1943. 
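The one-third-stop offset between "old" Weston ratings and the ASA scale mentioned above amounts to multiplying or dividing by the cube root of two. The sketch below only illustrates that arithmetic; the values shown are examples rather than entries from either company's published tables, and the final rounding to a printed value such as 125 followed the standard one-third-stop speed series.

```python
# Illustrative check of the one-third-stop offset between pre-1956 Weston
# ratings and ASA speeds (example values, not a published conversion table).
THIRD_STOP = 2 ** (1 / 3)   # one full exposure stop is a factor of 2

def old_weston_to_asa(weston):
    """Add one third of a stop to an 'old' Weston rating."""
    return weston * THIRD_STOP

def asa_to_old_weston(asa):
    """Subtract one third of a stop from an ASA speed."""
    return asa / THIRD_STOP

print(round(old_weston_to_asa(100)))   # ~126, i.e. the standard ASA value 125
print(round(asa_to_old_weston(125)))   # ~99,  i.e. a Weston rating of about 100
```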
ASA Z38.2.1–1943 was revised in 1946 and 1947 before the standard grew into ASA PH2.5-1954. Originally, ASA values were frequently referred to as American standard speed numbers or ASA exposure-index numbers. (See also: Exposure Index (EI).) The ASA scale is a linear scale, that is, a film denoted as having a film speed of 200 ASA is twice as fast as a film with 100 ASA. The ASA standard underwent a major revision in 1960 with ASA PH2.5-1960, when the method to determine film speed was refined and previously applied safety factors against under-exposure were abandoned, effectively doubling the nominal speed of many black-and-white negative films. For example, an Ilford HP3 that had been rated at 200 ASA before 1960 was labeled 400 ASA afterwards without any change to the emulsion. Similar changes were applied to the DIN system with DIN 4512:1961-10 and the BS system with BS 1380:1963 in the following years. In addition to the established arithmetic speed scale, ASA PH2.5-1960 also introduced logarithmic ASA grades (100 ASA = 5° ASA), where a difference of 1° ASA represented a full exposure stop and therefore the doubling of a film speed. For some while, ASA grades were also printed on film boxes, and they saw life in the form of the APEX speed value Sv (without degree symbol) as well. ASA PH2.5-1960 was revised as ANSI PH2.5-1979, without the logarithmic speeds, and later replaced by NAPM IT2.5–1986 of the National Association of Photographic Manufacturers, which represented the US adoption of the international standard ISO 6. The latest issue of ANSI/NAPM IT2.5 was published in 1993. The standard for color negative film was introduced as ASA PH2.27-1965 and saw a string of revisions in 1971, 1976, 1979, and 1981, before it finally became ANSI IT2.27–1988 prior to its withdrawal. Color reversal film speeds were defined in ANSI PH2.21-1983, which was revised in 1989 before it became ANSI/NAPM IT2.21 in 1994, the US adoption of the ISO 2240 standard. On an international level, the ASA system was superseded by the ISO film speed system between 1982 and 1987, however, the arithmetic ASA speed scale continued to live on as the linear speed value of the ISO system. GOST (Cyrillic: ) was an arithmetic film speed scale defined in GOST 2817-45 and GOST 2817–50. It was used in the former Soviet Union since October 1951, replacing Hurter & Driffield (H&D, Cyrillic: ХиД) numbers, which had been used since 1928. GOST 2817-50 was similar to the ASA standard, having been based on a speed point at a density 0.2 above base plus fog, as opposed to the ASA's 0.1. GOST markings are only found on pre-1987 photographic equipment (film, cameras, lightmeters, etc.) of Soviet Union manufacture. On 1 January 1987, the GOST scale was realigned to the ISO scale with GOST 10691–84, This evolved into multiple parts including GOST 10691.6–88 and GOST 10691.5–88, which both became functional on 1 January 1991. Current system: ISO The ASA and DIN film speed standards have been combined into the ISO standards since 1974. The current International Standard for measuring the speed of color negative film is ISO 5800:2001 (first published in 1979, revised in November 1987) from the International Organization for Standardization (ISO). Related standards ISO 6:1993 (first published in 1974) and ISO 2240:2003 (first published in July 1982, revised in September 1994 and corrected in October 2003) define scales for speeds of black-and-white negative film and color reversal film, respectively. 
The determination of ISO speeds with digital still-cameras is described in ISO 12232:2019 (first published in August 1998, revised in April 2006, corrected in October 2006 and again revised in February 2019). The ISO system defines both an arithmetic and a logarithmic scale. The arithmetic ISO scale corresponds to the arithmetic ASA system, where a doubling of film sensitivity is represented by a doubling of the numerical film speed value. In the logarithmic ISO scale, which corresponds to the DIN scale, adding 3° to the numerical value constitutes a doubling of sensitivity. For example, a film rated ISO 200/24° is twice as sensitive as one rated ISO 100/21°. Commonly, the logarithmic speed is omitted; for example, "ISO 100" denotes "ISO 100/21°", while logarithmic ISO speeds are written as "ISO 21°" as per the standard. Conversion between current scales Conversion from arithmetic speed S to logarithmic speed S° is given by S° = 10 log(S) + 1 and rounding to the nearest integer; the log is base 10. Conversion from logarithmic speed to arithmetic speed is given by S = 10^((S° - 1)/10) and rounding to the nearest standard arithmetic speed in Table 1 below. Table notes: Speeds shown in bold under APEX, ISO, and ASA are values actually assigned in speed standards from the respective agencies; other values are calculated extensions to assigned speeds using the same progressions as for the assigned speeds. APEX Sv values 1 to 10 correspond with logarithmic ASA grades 1° to 10° found in ASA PH2.5-1960. ASA arithmetic speeds from 4 to 5 are taken from ANSI PH2.21-1979 (Table 1, p. 8). ASA arithmetic speeds from 6 to 3200 are taken from ANSI PH2.5-1979 (Table 1, p. 5) and ANSI PH2.27-1979. ISO arithmetic speeds from 4 to 3200 are taken from ISO 5800:1987 (Table "ISO speed scales", p. 4). ISO arithmetic speeds from 6 to 10000 are taken from ISO 12232:1998 (Table 1, p. 9). ISO 12232:1998 does not specify speeds greater than 10000. However, the upper limit for Snoise 10000 was given as 12500, suggesting that ISO may have envisioned a progression of 12500, 25000, 50000, and 100000, similar to that from 1250 to 10000. This was consistent with ASA PH2.12-1961. For digital cameras, Nikon, Canon, Sony, Pentax, and Fujifilm chose to express the greater speeds in an exact power-of-2 progression from the highest previously realized speed (6400) rather than rounding to an extension of the existing progression. Speed ratings greater than 10000 have finally been defined in ISO 12232:2019. Most of the modern 35 mm film SLRs support an automatic film speed range from ISO 25/15° to 5000/38° with DX-coded films, or ISO 6/9° to 6400/39° manually (without utilizing exposure compensation). The film speed range with support for TTL flash is smaller, typically ISO 12/12° to 3200/36° or less. The Booster accessory for the Canon Pellix QL (1965) and Canon FT QL (1966) supported film speeds from 25 to 12800 ASA. The film speed dial of the Canon A-1 (1978) supported a speed range from 6 to 12800 ASA (but already called ISO film speeds in the manual). On this camera exposure compensation and extreme film speeds were mutually exclusive. The Leica R8 (1996) and R9 (2002) officially supported film speeds of 8000/40°, 10000/41° and 12800/42° (in the case of the R8) or 12500/42° (in the case of the R9), and utilizing its ±3 EV exposure compensation the range could be extended from ISO 0.8/0° to ISO 100000/51° in half exposure steps.
Digital camera manufacturers' arithmetic speeds from 12800 to 409600 are from specifications by Nikon (12800, 25600, 51200, 102400 in 2009, 204800 in 2012, 409600 in 2014), Canon (12800, 25600, 51200, 102400 in 2009, 204800 in 2011, 4000000 in 2015), Sony (12800 in 2009, 25600 in 2010, 409600 in 2014), Pentax (12800, 25600, 51200 in 2010, 102400, 204800 in 2014), and Fujifilm (12800 in 2011). Historic ASA and DIN conversion As discussed in the ASA and DIN sections, the definition of the ASA and DIN scales changed several times in the 1950s up into the early 1960s, making it necessary to convert between the different scales. Since the ISO system combines the newer ASA and DIN definitions, this conversion is also necessary when comparing older ASA and DIN scales with the ISO scale. A 1952 photography book, for example, converted 21/10° DIN to ASA 80 instead of ASA 100. Some classic cameras' exposure guides show the old conversion, as it was valid at the time of production, for example the exposure guide of the classic camera Tessina (since 1957), where 21/10° DIN is related to ASA 80, 18° DIN to ASA 40, etc. Users of classic cameras may become confused if they are not aware of the historic background of changing standards. Determining film speed Film speed is found from a plot of optical density vs. log of exposure for the film, known as the D–log H curve or Hurter–Driffield curve. There typically are five regions in the curve: the base + fog, the toe, the linear region, the shoulder, and the overexposed region. For black-and-white negative film, the "speed point" m is the point on the curve where density exceeds the base + fog density by 0.1 when the negative is developed so that a point n where the log of exposure is 1.3 units greater than the exposure at point m has a density 0.8 greater than the density at point m. The exposure Hm, in lux-s, is that for point m when the specified contrast condition is satisfied. The ISO arithmetic speed is determined from S = 0.8/Hm. This value is then rounded to the nearest standard speed in Table 1 of ISO 6:1993. Determining speed for color negative film is similar in concept but more complex because it involves separate curves for blue, green, and red. The film is processed according to the film manufacturer's recommendations rather than to a specified contrast. ISO speed for color reversal film is determined from the middle rather than the threshold of the curve; it again involves separate curves for blue, green, and red, and the film is processed according to the film manufacturer's recommendations. Applying film speed Film speed is used in the exposure equations to find the appropriate exposure parameters. Four variables are available to the photographer to obtain the desired effect: lighting, film speed, f-number (aperture size), and shutter speed (exposure time). These variables are related by the exposure equation N²/t = LS/K, where N is the f-number, t the exposure time in seconds, L the scene luminance, S the arithmetic film speed, and K a reflected-light meter calibration constant. The equation may be expressed as ratios, or, by taking the logarithm (base 2) of both sides, by addition, using the APEX system, in which every increment of 1 is a doubling of exposure; this increment is commonly known as a "stop". The effective f-number is proportional to the ratio between the lens focal length and aperture diameter, the diameter itself being proportional to the square root of the aperture area. Thus, a lens set to f/1.4 allows twice as much light to strike the focal plane as a lens set to f/2.
Therefore, each f-number factor of the square root of two (approximately 1.4) is also a stop, so lenses are typically marked in that progression: f/1.4, f/2, f/2.8, f/4, f/5.6, f/8, f/11, f/16, f/22, f/32, etc. The ISO arithmetic speed has a useful property for photographers without the equipment for taking a metered light reading. Correct exposure will usually be achieved for a frontlighted scene in bright sun if the aperture of the lens is set to f/16 and the shutter speed is the reciprocal of the ISO film speed (e.g. 1/100 second for 100 ISO film). This is known as the sunny 16 rule. Exposure index Exposure index, or EI, refers to the speed rating assigned to a particular film and shooting situation, at variance with the film's actual speed. It is used to compensate for equipment calibration inaccuracies or process variables, or to achieve certain effects. The exposure index may simply be called the speed setting, as compared to the speed rating. For example, a photographer may rate an ISO 400 film at EI 800 and then use push processing to obtain printable negatives in low-light conditions. The film has been exposed at EI 800. Another example occurs where a camera's shutter is miscalibrated and consistently overexposes or underexposes the film; similarly, a light meter may be inaccurate. One may adjust the EI setting accordingly in order to compensate for these defects and consistently produce correctly exposed negatives. Reciprocity Upon exposure, the amount of light energy that reaches the film determines the effect upon the emulsion. If the brightness of the light is multiplied by a factor and the exposure of the film decreased by the same factor by varying the camera's shutter speed and aperture, so that the energy received is the same, the film will be developed to the same density. This rule is called reciprocity. The systems for determining the sensitivity for an emulsion are possible because reciprocity holds over a wide range of customary conditions. In practice, reciprocity works reasonably well for normal photographic films for the range of exposures between 1/1000 second and 1/2 second. However, this relationship breaks down outside these limits, a phenomenon known as reciprocity failure. Film sensitivity and grain The size of silver halide grains in the emulsion affects film sensitivity, which is related to granularity because larger grains give film greater sensitivity to light. Fine-grain film, such as film designed for portraiture or copying original camera negatives, is relatively insensitive, or "slow", because it requires brighter light or a longer exposure than a "fast" film. Fast films, used for photographing in low light or capturing high-speed motion, produce comparatively grainy images. Kodak has defined a "Print Grain Index" (PGI) to characterize film grain (color negative films only), based on perceptual just-noticeable difference of graininess in prints. They also define "granularity", a measurement of grain using an RMS measurement of density fluctuations in uniformly exposed film, measured with a microdensitometer with a 48 micrometre aperture. Granularity varies with exposure: underexposed film looks grainier than overexposed film. Marketing anomalies Some high-speed black-and-white films, such as Ilford Delta 3200, P3200 T-Max, and T-MAX P3200, are marketed with film speeds in excess of their true ISO speed as determined using the ISO testing method.
According to the respective data sheets, the Ilford product is actually an ISO 1000 film, while the Kodak film's speed is nominally 800 to 1000 ISO. The manufacturers do not indicate that the 3200 number is an ISO rating on their packaging. Kodak and Fuji also marketed E6 films designed for pushing (hence the "P" prefix), such as Ektachrome P800/1600 and Fujichrome P1600, both with a base speed of ISO 400. The DX codes on the film cartridges indicate the marketed film speed (i.e. 3200), not the ISO speed, in order to automate shooting and development. Digital camera ISO speed and exposure index In digital camera systems, an arbitrary relationship between exposure and sensor data values can be achieved by setting the signal gain of the sensor. The relationship between the sensor data values and the lightness of the finished image is also arbitrary, depending on the parameters chosen for the interpretation of the sensor data into an image color space such as sRGB. For digital photo cameras ("digital still cameras"), an exposure index (EI) rating—commonly called ISO setting—is specified by the manufacturer such that the sRGB image files produced by the camera will have a lightness similar to what would be obtained with film of the same EI rating at the same exposure. The usual design is that the camera's parameters for interpreting the sensor data values into sRGB values are fixed, and a number of different EI choices are accommodated by varying the sensor's signal gain in the analog realm, prior to conversion to digital. Some camera designs provide at least some EI choices by adjusting the sensor's signal gain in the digital realm ("expanded ISO"). A few camera designs also provide EI adjustment through a choice of lightness parameters for the interpretation of sensor data values into sRGB; this variation allows different tradeoffs between the range of highlights that can be captured and the amount of noise introduced into the shadow areas of the photo. Digital cameras have far surpassed film in terms of sensitivity to light, with ISO equivalent speeds of up to 4,560,000, a number that is unfathomable in the realm of conventional film photography. Faster microprocessors, as well as advances in software noise reduction techniques allow this type of processing to be executed the moment the photo is captured, allowing photographers to store images that have a higher level of refinement and would have been prohibitively time-consuming to process with earlier generations of digital camera hardware. The ISO (International Organization of Standards) 12232:2019 standard The ISO standard ISO 12232:2006 gave digital still camera manufacturers a choice of five different techniques for determining the exposure index rating at each sensitivity setting provided by a particular camera model. Three of the techniques in ISO 12232:2006 were carried over from the 1998 version of the standard, while two new techniques allowing for measurement of JPEG output files were introduced from CIPA DC-004. Depending on the technique selected, the exposure index rating could depend on the sensor sensitivity, the sensor noise, and the appearance of the resulting image. The standard specified the measurement of light sensitivity of the entire digital camera system and not of individual components such as digital sensors, although Kodak has reported using a variation to characterize the sensitivity of two of their sensors in 2001. 
The Recommended Exposure Index (REI) technique, new in the 2006 version of the standard, allows the manufacturer to specify a camera model's EI choices arbitrarily. The choices are based solely on the manufacturer's opinion of what EI values produce well-exposed sRGB images at the various sensor sensitivity settings. This is the only technique available under the standard for output formats that are not in the sRGB color space. This is also the only technique available under the standard when multi-zone metering (also called pattern metering) is used. The Standard Output Sensitivity (SOS) technique, also new in the 2006 version of the standard, effectively specifies that the average level in the sRGB image must be 18% gray plus or minus 1/3 stop when the exposure is controlled by an automatic exposure control system calibrated per ISO 2721 and set to the EI with no exposure compensation. Because the output level is measured in the sRGB output from the camera, it is only applicable to sRGB images such as JPEG, and not to output files in raw image format. It is not applicable when multi-zone metering is used. The CIPA DC-004 standard requires that Japanese manufacturers of digital still cameras use either the REI or SOS techniques, and DC-008 updates the Exif specification to differentiate between these values. Consequently, the three EI techniques carried over from ISO 12232:1998 are not widely used in recent camera models (approximately 2007 and later). As those earlier techniques did not allow for measurement from images produced with lossy compression, they cannot be used at all on cameras that produce images only in JPEG format. The saturation-based (SAT or Ssat) technique is closely related to the SOS technique, with the sRGB output level being measured at 100% white rather than 18% gray. The SOS value is effectively 0.704 times the saturation-based value. Because the output level is measured in the sRGB output from the camera, it is only applicable to sRGB images (typically TIFF), and not to output files in raw image format. It is not applicable when multi-zone metering is used. The two noise-based techniques have rarely been used for consumer digital still cameras. These techniques specify the highest EI that can be used while still providing either an "excellent" picture or a "usable" picture depending on the technique chosen. An update to this standard has been published as ISO 12232:2019, defining a wider range of ISO speeds. Measurements and calculations ISO speed ratings of a digital camera are based on the properties of the sensor and the image processing done in the camera, and are expressed in terms of the luminous exposure H (in lux seconds) arriving at the sensor. For a typical camera lens with an effective focal length f that is much smaller than the distance between the camera and the photographed scene, H is given by H = qLt/N², where L is the luminance of the scene (in candela per m²), t is the exposure time (in seconds), N is the aperture f-number, and q is a factor depending on the transmittance T of the lens, the vignetting factor v(θ), and the angle θ relative to the axis of the lens. A typical value is q = 0.65, based on θ = 10°, T = 0.9, and v = 0.98. Saturation-based speed The saturation-based speed is defined as Ssat = 78/Hsat, where Hsat is the maximum possible exposure that does not lead to a clipped or bloomed camera output. 
Typically, the lower limit of the saturation speed is determined by the sensor itself, but with the gain of the amplifier between the sensor and the analog-to-digital converter, the saturation speed can be increased. The factor 78 is chosen such that exposure settings based on a standard light meter and an 18-percent reflective surface will result in an image with a grey level of 18%/√2 ≈ 12.7% of saturation. The factor √2 indicates that there is half a stop of headroom to deal with specular reflections that would appear brighter than a 100% reflecting diffuse white surface. Noise-based speed The noise-based speed is defined as the exposure that will lead to a given signal-to-noise ratio on individual pixels. Two ratios are used: the 40:1 ("excellent image quality") and the 10:1 ("acceptable image quality") ratios. These ratios have been subjectively determined based on a resolution of 70 pixels per cm (178 DPI) when viewed at 25 cm (9.8 inch) distance. The noise is defined as the standard deviation of a weighted average of the luminance and color of individual pixels. The noise-based speed is mostly determined by the properties of the sensor and somewhat affected by the noise in the electronic gain and AD converter. Standard output sensitivity (SOS) In addition to the above speed ratings, the standard also defines the standard output sensitivity (SOS), which relates the exposure to the digital pixel values in the output image. It is defined as Ssos = 10/Hsos, where Hsos is the exposure that will lead to values of 118 in 8-bit pixels, which is 18 percent of the saturation value in images encoded as sRGB or with gamma = 2.2. Discussion The standard specifies how speed ratings should be reported by the camera. If the noise-based speed (40:1) is higher than the saturation-based speed, the noise-based speed should be reported, rounded downwards to a standard value (e.g. 200, 250, 320, or 400). The rationale is that exposure according to the lower saturation-based speed would not result in a visibly better image. In addition, an exposure latitude can be specified, ranging from the saturation-based speed to the 10:1 noise-based speed. If the noise-based speed (40:1) is lower than the saturation-based speed, or undefined because of high noise, the saturation-based speed is specified, rounded upwards to a standard value, because using the noise-based speed would lead to overexposed images. The camera may also report the SOS-based speed (explicitly as being an SOS speed), rounded to the nearest standard speed rating. For example, a camera sensor may have measured 40:1 noise-based, 10:1 noise-based, and saturation-based speeds such that, according to the standard, the camera should report its sensitivity as ISO 100 (daylight), ISO speed latitude 50–1600, and ISO 100 (SOS, daylight). The SOS rating could be user controlled. For a different camera with a noisier sensor, the measured speeds might be such that the camera should report ISO 200 (daylight), as well as a user-adjustable SOS value. In all cases, the camera should indicate the white balance setting for which the speed rating applies, such as daylight or tungsten (incandescent light). Despite these detailed standard definitions, cameras typically do not clearly indicate whether the user "ISO" setting refers to the noise-based speed, saturation-based speed, or the specified output sensitivity, or even some made-up number for marketing purposes. 
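To make the reporting rules above concrete, the following Python sketch applies them to measured values. It is illustrative only: the function names, the excerpt of standard speed values, and the example inputs are assumptions, not part of ISO 12232; the two formulas (H = qLt/N² and Ssat = 78/Hsat) are the ones given above.

def exposure(L, t, N, q=0.65):
    # Luminous exposure at the sensor, H = q*L*t/N**2, in lux-seconds.
    return q * L * t / (N ** 2)

def saturation_speed(h_sat):
    # Saturation-based speed, Ssat = 78 / Hsat (Hsat in lux-seconds).
    return 78.0 / h_sat

# Excerpt of standard speed values in one-third-stop steps.
STANDARD_SPEEDS = [50, 64, 80, 100, 125, 160, 200, 250, 320, 400,
                   500, 640, 800, 1000, 1250, 1600]

def round_down(speed):
    return max(s for s in STANDARD_SPEEDS if s <= speed)

def round_up(speed):
    return min(s for s in STANDARD_SPEEDS if s >= speed)

def reported_speed(s_sat, s_noise40=None):
    # Report the 40:1 noise-based speed (rounded down to a standard value)
    # when it exceeds the saturation-based speed; otherwise report the
    # saturation-based speed, rounded up, as the Discussion above describes.
    if s_noise40 is not None and s_noise40 > s_sat:
        return round_down(s_noise40)
    return round_up(s_sat)

With purely illustrative inputs, reported_speed(49, 107) returns 100, while reported_speed(200) (no usable noise-based speed) returns 200.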
Because the 1998 version of ISO 12232 did not permit measurement of camera output that had lossy compression, it was not possible to correctly apply any of those measurements to cameras that did not produce sRGB files in an uncompressed format such as TIFF. Following the publication of CIPA DC-004 in 2006, Japanese manufacturers of digital still cameras are required to specify whether a sensitivity rating is REI or SOS. A greater SOS setting for a given sensor comes with some loss of image quality, just like with analog film. However, this loss is visible as image noise rather than grain. APS- and 35 mm-sized digital image sensors, both CMOS and CCD based, do not produce significant noise until about ISO 1600. See also Frame rate Lens speed Preferred number References Further reading ISO 6:1974, ISO 6:1993 (1993-02). Photography — Black-and-white pictorial still camera negative film/process systems — Determination of ISO speed. Geneva: International Organization for Standardization. ISO 2240:1982 (1982-07), ISO 2240:1994 (1994-09), ISO 2240:2003 (2003–10). Photography — Colour reversal camera films — Determination of ISO speed. Geneva: International Organization for Standardization. ISO 2720:1974. General Purpose Photographic Exposure Meters (Photoelectric Type) — Guide to Product Specification. Geneva: International Organization for Standardization. ISO 5800:1979, ISO 5800:1987 (1987-11), ISO 5800:1987/Cor 1:2001 (2001-06). Photography — Colour negative films for still photography — Determination of ISO speed. Geneva: International Organization for Standardization. ISO 12232:1998 (1998-08), ISO 12232:2006 (2006-04-15), ISO 12232:2006 (2006-10-01), ISO 12232:2019 (2019-02-01). Photography — Digital still cameras — Determination of exposure index, ISO speed ratings, standard output sensitivity, and recommended exposure index. Geneva: International Organization for Standardization. ASA Z38.2.1-1943, ASA Z38.2.1-1946, ASA Z38.2.1-1947 (1947-07-15). American Standard Method for Determining Photographic Speed and Speed Number. New York: American Standards Association. Superseded by ASA PH2.5-1954. ASA PH2.5-1954, ASA PH2.5-1960. American Standard Method for Determining Speed of photographic Negative Materials (Monochrome, Continuous Tone). New York: United States of America Standards Institute (USASI). Superseded by ANSI PH2.5-1972. ANSI PH2.5-1972, ANSI PH2.5-1979 (1979-01-01), ANSI PH2.5-1979(R1986). Speed of photographic negative materials (monochrome, continuous tone, method for determining). New York: American National Standards Institute. Superseded by NAPM IT2.5-1986. NAPM IT2.5-1986, ANSI/ISO 6-1993 ANSI/NAPM IT2.5-1993 (1993-01-01). Photography — Black-and-White Pictorial Still Camera Negative Film/Process Systems — Determination of ISO Speed (same as ANSI/ISO 6-1993). National Association of Photographic Manufacturers. This represents the US adoption of ISO 6. ASA PH2.12-1957, ASA PH2.12-1961. American Standard, General-Purpose Photographic Exposure Meters (photoelectric type). New York: American Standards Association. Superseded by ANSI PH3.49-1971. ANSI PH2.21-1983 (1983-09-23), ANSI PH2.21-1983(R1989). Photography (Sensitometry) Color reversal camera films – Determination of ISO speed. New York: American Standards Association. Superseded by ANSI/ISO 2240-1994 ANSI/NAPM IT2.21-1994. ANSI/ISO 2240-1994 ANSI/NAPM IT2.21-1994. Photography – Colour reversal camera films – determination of ISO speed. New York: American National Standards Institute. 
This represents the US adoption of ISO 2240. ASA PH2.27-1965 (1965-07-06), ASA PH2.27-1971, ASA PH2.27-1976, ANSI PH2.27-1979, ANSI PH2.27-1981, ANSI PH2.27-1988 (1988-08-04). Photography – Colour negative films for still photography – Determination of ISO speed (withdrawn). New York: American Standards Association. Superseded by ANSI IT2.27-1988. ANSI IT2.27-1988 (1994-08/09?). Photography Color negative films for still photography – Determination of ISO speed. New York: American National Standards Institute. Withdrawn. This represented the US adoption of ISO 5800. ANSI PH3.49-1971, ANSI PH3.49-1971(R1987). American National Standard for general-purpose photographic exposure meters (photoelectric type). New York: American National Standards Institute. After several revisions, this standard was withdrawn in favor of ANSI/ISO 2720:1974. ANSI/ISO 2720:1974, ANSI/ISO 2720:1974(R1994) ANSI/NAPM IT3.302-1994. General Purpose Photographic Exposure Meters (Photoelectric Type) — Guide to Product Specification. New York: American National Standards Institute. This represents the US adoption of ISO 2720. BSI BS 1380:1947, BSI BS 1380:1963. Speed and exposure index. British Standards Institution. Superseded by BSI BS 1380-1:1973 (1973-12), BSI BS 1380-2:1984 (1984-09), BSI BS 1380-3:1980 (1980-04) and others. BSI BS 1380-1:1973 (1973-12-31). Speed of sensitized photographic materials: Negative monochrome material for still and cine photography. British Standards Institution. Replaced by BSI BS ISO 6:1993, superseded by BSI BS ISO 2240:1994. BSI BS 1380-2:1984 ISO 2240:1982 (1984-09-28). Speed of sensitized photographic materials. Method for determining the speed of colour reversal film for still and amateur cine photography. British Standards Institution. Superseded by BSI BS ISO 2240:1994. BSI BS 1380-3:1980 ISO 5800:1979 (1980-04-30). Speed of sensitized photographic materials. Colour negative film for still photography. British Standards Institution. Superseded by BSI BS ISO 5800:1987. BSI BS ISO 6:1993 (1995-03-15). Photography. Black-and-white pictorial still camera negative film/process systems. Determination of ISO speed. British Standards Institution. This represents the British adoption of ISO 6:1993. BSI BS ISO 2240:1994 (1993-03-15), BSI BS ISO 2240:2003 (2004-02-11). Photography. Colour reversal camera films. Determination of ISO speed. British Standards Institution. This represents the British adoption of ISO 2240:2003. BSI BS ISO 5800:1987 (1995-03-15). Photography. Colour negative films for still photography. Determination of ISO speed. British Standards Institution. This represents the British adoption of ISO 5800:1987. DIN 4512:1934-01, DIN 4512:1957-11 (Blatt 1), DIN 4512:1961-10 (Blatt 1). Photographische Sensitometrie, Bestimmung der optischen Dichte. Berlin: Deutscher Normenausschuß (DNA). Superseded by DIN 4512-1:1971-04, DIN 4512-4:1977-06, DIN 4512-5:1977-10 and others. DIN 4512-1:1971-04, DIN 4512-1:1993-05. Photographic sensitometry; systems of black and white negative films and their process for pictorial photography; determination of speed. Berlin: Deutsches Institut für Normung (before 1975: Deutscher Normenausschuß (DNA)). Superseded by DIN ISO 6:1996-02. DIN 4512-4:1977-06, DIN 4512-4:1985-08. Photographic sensitometry; determination of the speed of colour reversal films. Berlin: Deutsches Institut für Normung. Superseded by DIN ISO 2240:1998-06. DIN 4512-5:1977-10, DIN 4512-5:1990-11. Photographic sensitometry; determination of the speed of colour negative films. 
Berlin: Deutsches Institut für Normung. Superseded by DIN ISO 5800:1998-06. DIN ISO 6:1996-02. Photography – Black-and-white pictorial still camera negative film/process systems – Determination of ISO speed (ISO 6:1993). Berlin: Deutsches Institut für Normung. This represents the German adoption of ISO 6:1993. DIN ISO 2240:1998-06, DIN ISO 2240:2005-10. Photography – Colour reversal camera films – Determination of ISO speed (ISO 2240:2003). Berlin: Deutsches Institut für Normung. This represents the German adoption of ISO 2240:2003. DIN ISO 5800:1998-06, DIN ISO 5800:2003-11. Photography – Colour negative films for still photography – Determination of ISO speed (ISO 5800:1987 + Corr. 1:2001). Berlin: Deutsches Institut für Normung. This represents the German adoption of ISO 5800:2001. Leslie B. Stroebel, John Compton, Ira Current, Richard B. Zakia. Basic Photographic Materials and Processes, second edition. Boston: Focal Press, 2000. . External links What is the meaning of ISO for digital cameras? Digital Photography FAQ Signal-dependent noise modeling, estimation, and removal for digital imaging sensors "Handbook of Photography" by Henney and Dudley (1939) Spreadsheet Comparing Film Speed Systems Science of photography Physical quantities
Film speed
[ "Physics", "Mathematics" ]
10,443
[ "Physical phenomena", "Quantity", "Physical quantities", "Physical properties" ]
168,609
https://en.wikipedia.org/wiki/Cycle%20%28graph%20theory%29
In graph theory, a cycle in a graph is a non-empty trail in which only the first and last vertices are equal. A directed cycle in a directed graph is a non-empty directed trail in which only the first and last vertices are equal. A graph without cycles is called an acyclic graph. A directed graph without directed cycles is called a directed acyclic graph. A connected graph without cycles is called a tree. Definitions Circuit and cycle A circuit is a non-empty trail in which the first and last vertices are equal (closed trail). Let G = (V, E, ϕ) be a graph. A circuit is a non-empty trail (e1, e2, ..., en) with a vertex sequence (v1, v2, ..., vn, v1). A cycle or simple circuit is a circuit in which only the first and last vertices are equal. n is called the length of the circuit (respectively, the length of the cycle). Directed circuit and directed cycle A directed circuit is a non-empty directed trail in which the first and last vertices are equal (closed directed trail). Let G = (V, E, ϕ) be a directed graph. A directed circuit is a non-empty directed trail (e1, e2, ..., en) with a vertex sequence (v1, v2, ..., vn, v1). A directed cycle or simple directed circuit is a directed circuit in which only the first and last vertices are equal. n is called the length of the directed circuit (respectively, the length of the directed cycle). Chordless cycle A chordless cycle in a graph, also called a hole or an induced cycle, is a cycle such that no two vertices of the cycle are connected by an edge that does not itself belong to the cycle. An antihole is the complement of a graph hole. Chordless cycles may be used to characterize perfect graphs: by the strong perfect graph theorem, a graph is perfect if and only if none of its holes or antiholes have an odd number of vertices that is greater than three. A chordal graph, a special type of perfect graph, has no holes of any size greater than three. The girth of a graph is the length of its shortest cycle; this cycle is necessarily chordless. Cages are defined as the smallest regular graphs with given combinations of degree and girth. A peripheral cycle is a cycle in a graph with the property that every two edges not on the cycle can be connected by a path whose interior vertices avoid the cycle. In a graph that is not formed by adding one edge to a cycle, a peripheral cycle must be an induced cycle. Cycle space The term cycle may also refer to an element of the cycle space of a graph. There are many cycle spaces, one for each coefficient field or ring. The most common is the binary cycle space (usually called simply the cycle space), which consists of the edge sets that have even degree at every vertex; it forms a vector space over the two-element field. By Veblen's theorem, every element of the cycle space may be formed as an edge-disjoint union of simple cycles. A cycle basis of the graph is a set of simple cycles that forms a basis of the cycle space. Using ideas from algebraic topology, the binary cycle space generalizes to vector spaces or modules over other rings such as the integers, rational or real numbers, etc. Cycle detection The existence of a cycle in directed and undirected graphs can be determined by whether a depth-first search (DFS) finds an edge that points to an ancestor of the current vertex (i.e., it contains a back edge). All the back edges which DFS skips over are part of cycles. In an undirected graph, the edge to the parent of a node should not be counted as a back edge, but finding any other already visited vertex will indicate a back edge. 
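A minimal Python sketch of this back-edge test for a simple undirected graph follows; it is illustrative only (the adjacency-list representation and the function name are assumptions, not part of the article), and it excludes the edge back to the parent exactly as described above.

def has_cycle_undirected(adj):
    # adj: dict mapping each vertex to an iterable of its neighbours.
    visited = set()

    def dfs(v, parent):
        visited.add(v)
        for w in adj[v]:
            if w == parent:
                continue        # skip the edge we arrived by
            if w in visited:
                return True     # back edge found: a cycle exists
            if dfs(w, v):
                return True
        return False

    # The graph may be disconnected, so start a search from every vertex.
    return any(v not in visited and dfs(v, None) for v in adj)

For example, has_cycle_undirected({1: [2], 2: [1, 3], 3: [2]}) returns False, while adding the edge between 1 and 3 makes it return True.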
In the case of undirected graphs, only O(n) time is required to find a cycle in an n-vertex graph, since at most n − 1 edges can be tree edges. Many topological sorting algorithms will detect cycles too, since cycles are obstacles for a topological order to exist. Also, if a directed graph has been divided into strongly connected components, cycles only exist within the components and not between them, since cycles are strongly connected. For directed graphs, distributed message-based algorithms can be used. These algorithms rely on the idea that a message sent by a vertex in a cycle will come back to itself. Distributed cycle detection algorithms are useful for processing large-scale graphs using a distributed graph processing system on a computer cluster (or supercomputer). Applications of cycle detection include the use of wait-for graphs to detect deadlocks in concurrent systems. Algorithm The aforementioned use of depth-first search to find a cycle can be described as follows:
For every vertex v: visited(v) = finished(v) = false
For every vertex v:
  DFS(v)
where
DFS(v) =
  if finished(v):
    return
  if visited(v):
    "Cycle found"
    return
  visited(v) = true
  for every neighbour w:
    DFS(w)
  finished(v) = true
For undirected graphs, "neighbour" means all vertices connected to v, except for the one that recursively called DFS(v). This omission prevents the algorithm from finding a trivial cycle of the form v→w→v; these exist in every undirected graph with at least one edge. A variant using breadth-first search instead will find a cycle of the smallest possible length. Covering graphs by cycles In his 1736 paper on the Seven Bridges of Königsberg, widely considered to be the birth of graph theory, Leonhard Euler proved that, for a finite undirected graph to have a closed walk that visits each edge exactly once (making it a closed trail), it is necessary and sufficient that it be connected except for isolated vertices (that is, all edges are contained in one component) and have even degree at each vertex. The corresponding characterization for the existence of a closed walk visiting each edge exactly once in a directed graph is that the graph be strongly connected and have equal numbers of incoming and outgoing edges at each vertex. In either case, the resulting closed trail is known as an Eulerian trail. If a finite undirected graph has even degree at each of its vertices, regardless of whether it is connected, then it is possible to find a set of simple cycles that together cover each edge exactly once: this is Veblen's theorem. When a connected graph does not meet the conditions of Euler's theorem, a closed walk of minimum length covering each edge at least once can nevertheless be found in polynomial time by solving the route inspection problem. The problem of finding a single simple cycle that covers each vertex exactly once, rather than covering the edges, is much harder. Such a cycle is known as a Hamiltonian cycle, and determining whether it exists is NP-complete. Much research has been published concerning classes of graphs that can be guaranteed to contain Hamiltonian cycles; one example is Ore's theorem that a Hamiltonian cycle can always be found in a graph for which every non-adjacent pair of vertices have degrees summing to at least the total number of vertices in the graph. The cycle double cover conjecture states that, for every bridgeless graph, there exists a multiset of simple cycles that covers each edge of the graph exactly twice. 
Proving that this is true (or finding a counterexample) remains an open problem. Graph classes defined by cycles Several important classes of graphs can be defined by or characterized by their cycles. These include: Bipartite graph, a graph without odd cycles (cycles with an odd number of vertices) Cactus graph, a graph in which every nontrivial biconnected component is a cycle Cycle graph, a graph that consists of a single cycle Chordal graph, a graph in which every induced cycle is a triangle Directed acyclic graph, a directed graph with no directed cycles Forest, a cycle-free graph Line perfect graph, a graph in which every odd cycle is a triangle Perfect graph, a graph with no induced cycles or their complements of odd length greater than three Pseudoforest, a graph in which each connected component has at most one cycle Strangulated graph, a graph in which every peripheral cycle is a triangle Strongly connected graph, a directed graph in which every edge is part of a cycle Triangle-free graph, a graph without three-vertex cycles Even-cycle-free graph, a graph without even cycles Even-hole-free graph, a graph without even cycles of length greater than or equal to 6 See also Cycle space Cycle basis Cycle detection in a sequence of iterated function values Minimum mean weight cycle References Graph theory objects
Cycle (graph theory)
[ "Mathematics" ]
1,751
[ "Mathematical relations", "Graph theory objects", "Graph theory" ]
168,632
https://en.wikipedia.org/wiki/Siemens
Siemens AG is a German multinational technology conglomerate. It is focused on industrial automation, distributed energy resources, rail transport and health technology. Siemens is the largest industrial manufacturing company in Europe, and holds the position of global market leader in industrial automation and industrial software. The origins of the conglomerate can be traced back to 1847 to the Telegraphen Bau-Anstalt von Siemens & Halske established in Berlin by Werner von Siemens and Johann Georg Halske. In 1966, the present-day corporation emerged from the merger of three companies: Siemens & Halske, Siemens-Schuckert, and Siemens-Reiniger-Werke. Today headquartered in Munich and Berlin, Siemens and its subsidiaries employ approximately 320,000 people worldwide and reported a global revenue of around €78 billion in 2023. The company is a component of the DAX and Euro Stoxx 50 stock market indices. As of December 2023, Siemens is the second largest German company by market capitalization. As of 2023, the principal divisions of Siemens are Digital Industries, Smart Infrastructure, Mobility, and Financial Services, with Siemens Mobility operating as an independent entity. Major business divisions that were once part of Siemens before being spun off include semiconductor manufacturer Infineon Technologies (1999), Siemens Mobile (2005), Gigaset Communications (2008), the photonics business Osram (2013), Siemens Healthineers (2017), and Siemens Energy (2020). History 1847 to 1901 Siemens & Halske was founded by Werner von Siemens and Johann Georg Halske on 1 October 1847. The telegraph on which the company was based used a needle to point to the sequence of letters, instead of using Morse code. The company, then called Telegraphen-Bauanstalt von Siemens & Halske, opened its first workshop on 12 October. In 1848, the company built the first long-distance telegraph line in Europe: 500 km (300 miles) from Berlin to Frankfurt am Main. In 1850, the founder's younger brother, Carl Wilhelm Siemens, later Sir William Siemens, started to represent the company in London. The London agency became a branch office in 1858. In the 1850s, the company was involved in building long-distance telegraph networks in Russia. In 1855, a company branch headed by another brother, Carl Heinrich von Siemens, opened in St Petersburg, Russia. In 1867, Siemens completed the monumental Indo-European telegraph line stretching over 11,000 km (6800 miles) from London to Calcutta. In 1867, Werner von Siemens described a dynamo without permanent magnets. A similar system was also independently invented by Ányos Jedlik and Charles Wheatstone, but Siemens became the first company to build such devices. In 1881, a Siemens AC alternator driven by a watermill was used to power the world's first electric street lighting in the town of Godalming, United Kingdom. The company continued to grow and diversified into electric trains and light bulbs. In 1885, Siemens sold one of its generators to George Westinghouse, thereby enabling Westinghouse to begin experimenting with AC networks in Pittsburgh, Pennsylvania. In 1887, Siemens opened its first office in Japan. In 1890, the founder retired and left the running of the company to his brother Carl and sons Arnold and Wilhelm. In 1892, Siemens was contracted to construct the Hobart electric tramway in Tasmania, Australia, as it expanded its markets. The system opened in 1893 and became the first complete electric tram network in the Southern Hemisphere. 
1901 to 1933 Siemens & Halske (S & H) was incorporated in 1897 and then merged parts of its activities with Schuckert & Co., Nuremberg, in 1903 to become Siemens-Schuckert. In 1907, Siemens (Siemens & Halske and Siemens-Schuckert) had 34,324 employees and was the seventh-largest company in the German Empire by number of employees. (see List of German companies by employees in 1907) In 1919, S & H and two other companies jointly formed the Osram lightbulb company. During the 1920s and 1930s, S & H started to manufacture radios, television sets, and electron microscopes. In 1932, Reiniger, Gebbert & Schall (Erlangen), Phönix AG (Rudolstadt) and Siemens-Reiniger-Veifa mbH (Berlin) merged to form the Siemens-Reiniger-Werke AG (SRW), the third of the so-called parent companies that merged in 1966 to form the present-day Siemens AG. In the 1920s, Siemens constructed the Ardnacrusha hydroelectric power station on the River Shannon in the then Irish Free State, and it was a world first for its design. The company is remembered for its desire to raise the wages of its underpaid workers, only to be overruled by the Cumann na nGaedheal government. 1933 to 1945 Siemens (at the time: Siemens-Schuckert) exploited the forced labour of deported people in extermination camps. The company owned a plant in Auschwitz concentration camp. Siemens exploited the forced labour of women deported to the Ravensbrück concentration camp; a Siemens factory was located in front of the camp. During the final years of World War II, numerous plants and factories in Berlin and other major cities were destroyed by Allied air raids. To prevent further losses, manufacturing was therefore moved to alternative places and regions not affected by the air war. The goal was to secure continued production of important war-related and everyday goods. According to records, Siemens was operating almost 400 alternative or relocated manufacturing plants at the end of 1944 and in early 1945. In 1972, Siemens sued German satirist F.C. Delius for his satirical history of the company, Unsere Siemens-Welt, and it was determined that much of the book contained false claims, although the trial itself publicized Siemens's history in Nazi Germany. The company supplied electrical parts to Nazi concentration camps and death camps. The factories had poor working conditions, where malnutrition and death were common. Scholarship has also shown that the camp factories were created, run, and supplied by the SS, in conjunction with company officials, sometimes high-level officials. 1945 to 2001 In the 1950s, and from their new base in Bavaria, S&H started to manufacture computers, semiconductor devices, washing machines, and pacemakers. In 1966, Siemens & Halske (S&H, founded in 1847), Siemens-Schuckertwerke (SSW, founded in 1903) and Siemens-Reiniger-Werke (SRW, founded in 1932) merged to form Siemens AG. In 1969, Siemens formed Kraftwerk Union with AEG by pooling their nuclear power businesses. The company's first digital telephone exchange was produced in 1980, and in 1988, Siemens and GEC acquired the UK defence and technology company Plessey. Plessey's holdings were split, and Siemens took over the avionics, radar and traffic control businesses as Siemens Plessey. In 1977, Advanced Micro Devices (AMD) entered into a joint venture with Siemens, which wanted to enhance its technology expertise and enter the American market. Siemens purchased 20% of AMD's stock, giving the company an infusion of cash to increase its product lines. 
The two companies also jointly established Advanced Micro Computers (AMC), located in Silicon Valley and in Germany, allowing AMD to enter the microcomputer development and manufacturing field, in particular based on AMD's second-source Zilog Z8000 microprocessors. When the two companies' vision for Advanced Micro Computers diverged, AMD bought out Siemens's stake in the American division in 1979. AMD closed Advanced Micro Computers in late 1981 after switching focus to manufacturing second-source Intel x86 microprocessors. In 1985, Siemens bought Allis-Chalmers' interest in the partnership company Siemens-Allis (formed 1978), which supplied electrical control equipment. It was incorporated into Siemens's Energy and Automation division. In 1987, Siemens reintegrated Kraftwerk Union, the unit overseeing its nuclear power business. In 1987, Siemens acquired Kongsberg Offshore from the Norwegian Government, selling it on to FMC Technologies in 1993. In 1989, Siemens bought the solar photovoltaic business, including three solar module manufacturing plants, from industry pioneer ARCO Solar, owned by oil firm ARCO. In 1991, Siemens acquired Nixdorf Computer and renamed it Siemens Nixdorf Informationssysteme, in order to produce personal computers. In October 1991, Siemens acquired the Industrial Systems Division of Texas Instruments, based in Johnson City, Tennessee. This division was organized as Siemens Industrial Automation, and was later absorbed by Siemens Energy and Automation, Inc. In 1992, Siemens bought out IBM's half of ROLM (Siemens had bought into ROLM five years earlier), thus creating SiemensROLM Communications; ROLM was eventually dropped from the name later in the 1990s. In 1993–1994, Siemens C651 electric trains for Singapore's Mass Rapid Transit (MRT) system were built in Austria. In 1997, Siemens agreed to sell the defence arm of Siemens Plessey to British Aerospace (BAe) and a German aerospace company, DaimlerChrysler Aerospace. BAe and DASA acquired the British and German divisions of the operation respectively. In October 1997, Siemens Financial Services (SFS) was founded to act as a competence center for financing issues and as a manager of financial risks within Siemens. In 1998, Siemens acquired Westinghouse Power Generation from the CBS Corporation for more than $1.5 billion, moving Siemens from third to second in the world power generation market. In 1999, Siemens's semiconductor operations were spun off into a new company called Infineon Technologies. Its Electromechanical Components operations were converted into a legally independent company, Siemens Electromechanical Components GmbH & Co. KG, which was sold later that year to Tyco International Ltd for approximately $1.1 billion. In the same year, Siemens Nixdorf Informationssysteme AG became part of Fujitsu Siemens Computers, with its retail banking technology group becoming Wincor Nixdorf. In 2000, Shared Medical Systems Corporation was acquired by Siemens's Medical Engineering Group, eventually becoming part of Siemens Medical Solutions. Also in 2000, Atecs-Mannesmann was acquired by Siemens; the sale was finalised in April 2001 with 50% of the shares acquired. Following the acquisition, Mannesmann VDO AG merged into Siemens Automotive to form Siemens VDO Automotive AG, Atecs Mannesmann Dematic Systems merged into Siemens Production and Logistics to form Siemens Dematic AG, and Mannesmann Demag Delaval merged into the Power Generation division of Siemens AG. 
Other parts of the company were acquired by Robert Bosch GmbH at the same time. Also, Moore Products Co. of Spring House, PA, USA, was acquired by Siemens Energy & Automation, Inc. 2001 to 2005 In 2001, Chemtech Group of Brazil was incorporated into the Siemens Group; it provides industrial process optimisation, consultancy and other engineering services. Also in 2001, Siemens formed the joint venture Framatome with Areva SA of France by merging much of the companies' nuclear businesses. In 2002, Siemens sold some of its business activities to Kohlberg Kravis Roberts & Co. L.P. (KKR), with its metering business included in the sale package. In 2002, Siemens abandoned the solar photovoltaic industry by selling its participation in a joint-venture company, established in 2001 with Shell and E.ON, to Shell. In 2003, Siemens acquired the flow division of Danfoss and incorporated it into the Automation and Drives division. Also in 2003, Siemens acquired IndX software (realtime data organisation and presentation). The same year, in an unrelated development, Siemens reopened its office in Kabul. Also in 2003, Siemens agreed to buy Alstom Industrial Turbines, a manufacturer of small and medium industrial gas turbines, for €1.1 billion. On 11 February 2003, Siemens planned to shorten phones' shelf life by bringing out annual Xelibri lines, with new devices launched as spring-summer and autumn-winter collections. On 6 March 2003, the company opened an office in San Jose. On 7 March 2003, the company announced that it planned to gain 10 per cent of the mainland China market for handsets. On 18 March 2003, the company unveiled the latest in its series of Xelibri fashion phones. In 2004, the wind energy company Bonus Energy in Brande, Denmark, was acquired, forming the Siemens Wind Power division. Also in 2004, Siemens invested in Dasan Networks (South Korea, broadband network equipment), acquiring ~40% of the shares; Nokia Siemens divested the shares in 2008. The same year, Siemens acquired Photo-Scan (UK, CCTV systems), US Filter Corporation (water and waste water treatment technologies/solutions, acquired from Veolia), Huntsville Electronics Corporation (automobile electronics, acquired from Chrysler), and Chantry Networks (WLAN equipment). In 2005, Siemens sold the Siemens mobile manufacturing business to BenQ, forming the BenQ-Siemens division. Also in 2005, Siemens acquired Flender Holding GmbH (Bocholt, Germany, gears/industrial drives), Bewator AB (building security systems), Wheelabrator Air Pollution Control, Inc. (industrial and power station dust control systems), AN Windenergie GmbH (wind energy), Power Technologies Inc. (Schenectady, USA, energy industry software and training), CTI Molecular Imaging (positron emission tomography and molecular imaging systems), Myrio (IPTV systems), Shaw Power Technologies International Ltd (UK/USA, electrical engineering consulting, acquired from Shaw Group), and Transmitton (Ashby de la Zouch, UK, rail and other industry control and asset management). 2005 and continuing: worldwide bribery scandal Beginning in 2005, Siemens became embroiled in a multi-national bribery scandal. Among the various incidents was the Siemens Greek bribery scandal, where the company was accused of deals with Greek government officials during the 2004 Summer Olympics. 
This case, along with others, triggered legal investigations in Germany, initiated by prosecutors in Italy, Liechtenstein, and Switzerland, and later followed by an American investigation in 2006 due to the company's activities while listed on US stock exchanges. Investigations found that Siemens had a pattern of bribing officials to secure contracts, with the company spending approximately $1.3 billion on bribes across several countries, and maintaining separate accounting records to conceal this. Following the investigations, Siemens settled in December 2008, paying a combined total of approximately $1.6 billion to the US and Germany in what was, at the time, the largest bribery fine in history. In addition, the company was required to invest $1 billion in developing and maintaining new internal compliance procedures. Siemens admitted to violating the accounting provisions of the Foreign Corrupt Practices Act, while its Bangladesh and Venezuela subsidiaries pleaded guilty to paying bribes. Despite initial expectations of a fine as high as $5 billion, the final amount was significantly less, in part due to Siemens's cooperation with the investigators, the upcoming change in the US administration, and Siemens's role as a US military contractor. The payments included $450 million in fines and penalties and a forfeiture of $350 million in profits in the US. Siemens also revamped its compliance systems, appointing Peter Y. Solmssen, a US lawyer, as an independent director in charge of compliance and accepting oversight from Theo Waigel, a former German finance minister. Siemens implemented new anti-corruption policies, including a comprehensive anti-corruption handbook, online tools for due diligence and compliance, a confidential communications channel for employees, and a corporate disciplinary committee. This process involved hiring approximately 500 full-time compliance personnel worldwide. Siemens's bribery culture was not new; it was highlighted as far back as 1914 when both Siemens and Vickers were involved in a scandal over bribes paid to Japanese naval authorities. The company resorted to bribery as it sought to expand its business in the developing world after World War II. Up until 1999, bribes were a tax-deductible business expense in Germany, with no penalties for bribing foreign officials. However, with the implementation of the 1999 OECD Anti-Bribery Convention, Siemens started using off-shore accounts to hide its bribery. During the investigation, key player Reinhard Siekaczek, a mid-level executive in the telecommunications unit, provided critical evidence. He disclosed that he had managed an annual global bribery budget of $40 to $50 million and provided information about the company's 2,700 worldwide contractors, who were typically used to channel money to government officials. Notable instances of bribery included substantial payments in Argentina, Israel, Venezuela, China, Nigeria, and Russia to secure large contracts. The investigation resulted in multiple prosecutions and settlements with various governments, as well as legal action against Siemens employees and those who received bribes. Noteworthy cases include the conviction of two former executives in 2007 for bribing Italian energy company Enel, a settlement with the Greek government in 2012 for 330 million euros over the Greek bribery scandal, and a guilty plea in 2014 from former Siemens executive Andres Truppel for channeling nearly $100 million in bribes to Argentine government officials. 
Siemens also faced repercussions from the World Bank due to fraudulent practices by its Russian affiliate. In 2009, Siemens agreed not to bid on World Bank projects for two years and to establish a $100 million fund at the World Bank to support anti-corruption activities over 15 years, known as the "Siemens Integrity Initiative." Other substantial fines include a payment of ₦7 billion to the Nigerian government in 2010, and a US$42.7 million penalty in Israel in 2014 to avoid charges of securities fraud. 2006 to 2011 In 2006, Siemens purchased Bayer Diagnostics, which was incorporated into the Medical Solutions Diagnostics division on 1 January 2007. Also in 2006, Siemens acquired Controlotron (New York, ultrasonic flow meters), Diagnostic Products Corp., Kadon Electro Mechanical Services Ltd. (now TurboCare Canada Ltd.), Kühnle, Kopp, & Kausch AG, Opto Control, and VistaScape Security Systems. In January 2007, Siemens was fined €396 million by the European Commission for price fixing in EU electricity markets through a cartel involving 11 companies, including ABB, Alstom, Fuji Electric, Hitachi Japan, AE Power Systems, Mitsubishi Electric Corp, Schneider, Areva, Toshiba and VA Tech. According to the commission, "between 1988 and 2004, the companies rigged bids for procurement contracts, fixed prices, allocated projects to each other, shared markets and exchanged commercially important and confidential information." Siemens was given the highest fine of €396 million, more than half of the total, for its alleged leadership role in the activity. In March 2007, a Siemens board member was temporarily arrested and accused of illegally financing AUB, a business-friendly labour association which competes against the trade union IG Metall. He was released on bail. Offices of AUB and Siemens were searched. Siemens denied any wrongdoing. In April 2007, the Fixed Networks, Mobile Networks and Carrier Services divisions of Siemens merged with Nokia's Network Business Group in a 50/50 joint venture, creating a fixed and mobile network company called Nokia Siemens Networks. Nokia delayed the merger due to bribery investigations against Siemens. In October 2007, a court in Munich found that the company had bribed public officials in Libya, Russia, and Nigeria in return for the awarding of contracts; four former Nigerian Ministers of Communications were among those named as recipients of the payments. The company admitted to having paid the bribes and agreed to pay a fine of 201 million euros. In December 2007, the Nigerian government cancelled a contract with Siemens due to the bribery findings. Also in 2007, Siemens acquired Vai Ingdesi Automation (Argentina, industrial automation), UGS Corp., Dade Behring, Sidelco (Quebec, Canada), S/D Engineers Inc., and Gesellschaft für Systemforschung und Dienstleistungen im Gesundheitswesen mbH (GSD) (Germany). In July 2008, Siemens AG formed a joint venture of the Enterprise Communications business with the Gores Group, renamed Unify in 2013. The Gores Group held a majority 51% stake, with Siemens AG holding a minority 49% interest. In August 2008, Siemens Project Ventures invested $15 million in the Arava Power Company. In a press release published that month, Peter Löscher, president and CEO of Siemens AG, said: "This investment is another consequential step in further strengthening our green and sustainable technologies". Siemens now holds a 40% stake in the company. 
In January 2009, Siemens sold its 34% stake in Framatome, citing limited managerial influence. In March 2009, it formed an alliance with Rosatom of Russia to engage in nuclear-power activities. In April 2009, Fujitsu Siemens Computers became Fujitsu Technology Solutions as a result of Fujitsu buying out Siemens's share of the company. In June 2009, news broke that Nokia Siemens had supplied the Iranian telecom company with telecommunications equipment that included the ability to intercept and monitor telecommunications, a facility known as "lawful intercept". The equipment was believed to have been used in the suppression of the 2009 Iranian election protests, leading to criticism of the company, including by the European Parliament. Nokia Siemens later divested its call monitoring business, and reduced its activities in Iran. In October 2009, Siemens signed a $418 million contract to buy Solel Solar Systems, an Israeli company in the solar thermal power business. In December 2010, Siemens agreed to sell its IT Solutions and Services subsidiary for €850 million to Atos. As part of the deal, Siemens agreed to take a 15% stake in the enlarged Atos, to be held for a minimum of five years. In addition, Siemens concluded a seven-year outsourcing contract worth around €5.5 billion, under which Atos will provide managed services and systems integration to Siemens. At the same time, Germany's Wegmann Group acquired Siemens's 49-percent stake in armored vehicle manufacturer Krauss-Maffei Wegmann GmbH, establishing Wegmann as the sole shareholder of KMW, pending approval by government authorities. 2011 to present In March 2011, it was decided to list Osram on the stock market in the autumn, but CEO Peter Löscher said Siemens intended to retain a long-term interest in the company, which was already independent from the technological and managerial viewpoints. In September 2011, Siemens, which had been responsible for constructing all 17 of Germany's existing nuclear power plants, announced that it would exit the nuclear sector following the Fukushima disaster and the subsequent changes to German energy policy. Chief executive Peter Löscher supported the German government's planned Energiewende, its transition to renewable energy technologies, calling it a "project of the century" and saying Berlin's target of reaching 35% renewable energy sources by 2020 was feasible. In November 2012, Siemens acquired the Rail division of Invensys for £1.7 billion. In the same month, Siemens acquired a privately held company, LMS International NV. In August 2013, Nokia acquired 100% of Nokia Siemens Networks, buying out Siemens AG's stake and ending Siemens's role in telecommunications. In August 2013, Siemens won a $966.8 million order for power plant components from oil firm Saudi Aramco, the largest order it had ever received from the Saudi company. In 2014, Siemens announced plans to build a $264 million facility for making offshore wind turbines in Paull, England, as Britain's wind power rapidly expands. Siemens chose the Hull area on the east coast of England because it is close to other large offshore projects planned in coming years. The new plant is expected to begin producing turbine rotor blades in 2016. The plant and the associated service center, in Green Port Hull nearby, will employ about 1,000 workers. 
The facilities will serve the UK market, where the electricity that major power producers generate from wind grew by about 38 percent in 2013, representing about 6 percent of total electricity, according to government figures. There are also plans to increase Britain's wind-generating capacity at least threefold by 2020, to 14 gigawatts. In May 2014, Rolls-Royce agreed to sell its gas turbine and compressor energy business to Siemens for £1 billion. In June 2014, Siemens and Mitsubishi Heavy Industries announced their formation of joint ventures to bid for Alstom's troubled energy and transportation businesses (in locomotives, steam turbines, and aircraft engines). A rival bid by General Electric (GE) was criticized by French government sources, who considered Alstom's operations a "vital national interest" at a moment when the French unemployment level stood above 10% and some voters were turning towards the far-right. In 2015, Siemens acquired U.S. oilfield equipment maker Dresser-Rand Group Inc for $7.6 billion. In November 2016, Siemens acquired EDA company Mentor Graphics for $4.5 billion. In November 2017, the U.S. Department of Justice charged three Chinese employees of Guangzhou Bo Yu Information Technology Company Limited with hacking into corporate entities, including Siemens AG. In December 2017, Siemens acquired the medical technology company Fast Track Diagnostics for an undisclosed amount. In August 2018, Siemens acquired rapid application development company Mendix for €0.6 billion in cash. In May 2018, Siemens acquired J2 Innovations for an undisclosed amount. In May 2018, Siemens acquired Enlighted, Inc. for an undisclosed amount. In September 2019, Siemens and Orascom Construction signed an agreement with the Iraqi government to rebuild two power plants, which was believed to set up the company for future deals in the country. In 2019–2020, Siemens was identified as a key engineering company supporting the controversial Adani Carmichael coal mine in Queensland (Australia). In January 2020, Siemens signed an agreement to acquire 99% of the equity share capital of Indian switchgear manufacturer C&S Electric for €267 million (₹2,100 crore). The takeover was approved by the Competition Commission of India in August 2020. In April 2020, Siemens acquired a 77% majority stake in Indian building solution provider iMetrex Technologies for an undisclosed sum. In April 2020, Siemens Energy was created as an independent company out of the energy division of Siemens. In August 2020, Siemens Healthineers AG announced plans to acquire U.S. cancer device and software company Varian Medical Systems in an all-stock deal valued at $16.4 billion. In February 2021, Roland Busch replaced Joe Kaeser as CEO. In October 2021, Siemens acquired the building IoT software and hardware company Wattsense for an undisclosed sum. In May 2022, Siemens made the decision to cease its operations in Russia after 170 years and disassociate itself from any involvement with the Russian government due to the ongoing war of aggression against Ukraine. This decision affected the approximately 3,000 employees working for the company in the country. The announcement came with a financial statement in which Siemens disclosed a second-quarter loss of approximately US$625 million as a direct consequence of the imposed sanctions on Russia. In July 2022, Siemens acquired ZONA Technology, an aerospace simulation firm. 
In October 2022, Siemens announced a strategic partnership with Swedish electric commercial vehicle manufacturer Volta Trucks to deliver and scale eMobility charging infrastructure to simplify the transition to fleet electrification. In October 2022, Siemens became a target of the Boycott, Divestment and Sanctions movement due to its award of a contract for the EuroAsia Interconnector, which is planned to connect the electricity grids of Greece and Cyprus with both Israel and its illegal settlements in the West Bank. In June 2023, Siemens announced a global investment plan of €2 billion to expand its manufacturing capacity, including specific commitments of €200 million for a new high-tech plant in Singapore and €140 million to enlarge a facility in Chengdu, China. The strategy aims to foster diversification across Asia, enhance growth in the Chinese market, and decrease dependency on a single country by utilizing Singapore as a primary export hub to Southeast Asia. Simultaneously, Siemens will allocate €1 billion for the development of new facilities and factories in Germany, including €500 million for the expansion and modernization of a factory in Erlangen, expected to enhance production capacity by 60% by 2029. This coincides with the German government's concerns about the economic and security risks associated with investing in China. Additional German investments will finance a new semiconductor factory in Forchheim and a training center for Siemens Healthineers in Erlangen. In August 2023, it was announced that Siemens had signed an agreement to acquire the Veldhoven-headquartered eBus, eTruck and passenger-vehicle fast-charging technology company Heliox. In March 2024, Siemens announced the creation of a new £100m digital engineering facility in Wiltshire, UK, aimed at replacing its existing rail infrastructure factory in Chippenham with a new research and development centre, expected to open by 2026. The move was endorsed by Chancellor Jeremy Hunt as "a big boost" for UK manufacturing. In March 2024, it was announced that Siemens had agreed to acquire ebm-papst's industrial drive technology (IDT) division for an undisclosed amount. Operations As of 2023, the principal divisions of Siemens are Digital Industries, Smart Infrastructure, Siemens Mobility, Siemens Healthineers and Siemens Financial Services, with Siemens Healthineers and Siemens Mobility operating as independent entities. Siemens also operates a number of "Portfolio Companies" with market-specific offerings. In 2020, the energy business was spun off into the separate Siemens Energy AG, with Siemens retaining a stake of 17.1% as of December 2023. Other business units of the company include Siemens Technology (T) for research and development, Siemens Real Estate (SRE) for corporate real estate management, Siemens Advanta for consulting services (including the management consulting division Siemens Advanta Consulting), next47 as a venture capital fund, and Siemens Global Business Services (GBS) as a shared services unit. Digital Industries The Digital Industries division focuses on the automation needs of discrete and process industries. This includes factory automation infrastructure, numerical control systems, engines, drives, inverters, integrated automation systems for machine tools and production machines, and machine-to-machine communication products. The division also develops industrial control systems, various types of sensors, and radio-frequency identification systems. 
In industrial automation and industrial software, Siemens is the global market leader. In addition to hardware, Digital Industries supplies software for product lifecycle management (PLM), simulation and testing of mechatronic systems, and the MindSphere cloud-based IoT operating system that connects physical infrastructure to the digital world. The software portfolio is supplemented by the Mendix platform for low-code application development and digital marketplaces like Supplyframe and Pixeom. Key customer markets span automotive, machine building, pharmaceuticals, chemicals, food and beverage, electronics, and semiconductors. In 2023, CEO Roland Busch announced the aim of raising the software business's share of sales to 20% in the long term. In June 2023, Siemens launched a new open digital platform called "Siemens Xcelerator", which houses a curated portfolio of IoT-enabled hardware, software, and digital services from both Siemens and third parties. Siemens also announced a partnership with Nvidia, aiming to leverage Nvidia's Omniverse platform and its 3D design capabilities. Xcelerator is part of a broader industry trend towards digital environments ("metaverses"), and is delivered through a software as a service (SaaS) subscription model, targeting accessibility for a range of businesses including small and medium-sized enterprises. Smart Infrastructure Siemens Smart Infrastructure offerings are categorized into buildings, electrification, and electrical products. Its buildings portfolio includes building automation systems; heating, ventilation, and air conditioning (HVAC) controls; fire safety and security systems; and energy performance services. The electrification portfolio is dedicated to grid resilience and efficiency, encompassing grid simulation, operation control software, power-system automation and protection, and medium to low voltage switchgear. Moreover, it includes charging infrastructure for electric vehicles. In the realm of electrical products, the division offers low-voltage switching, measuring and control equipment, distribution systems, and medium voltage switchgear. In the renewable energy industry, the company provides a portfolio of products and services to help build and operate microgrids of any size. It provides generation and distribution of electrical energy as well as monitoring and control of microgrids. By using primarily renewable energy, microgrids reduce carbon-dioxide emissions, which is often required by government regulations. It supplied a sustainable storage product and microgrids to Enel Produzione SPA for the island of Ventotene in Italy. Siemens Mobility Siemens Mobility is a division involved in passenger and freight transportation. This includes providing rolling stock, which covers a range of vehicles for urban, regional, and long-distance travel. The division also offers rail infrastructure products and services such as rail automation, digital station solutions, railway communication systems, and yard and depot solutions. In 2019, the European Commission blocked a merger between Alstom and Siemens Mobility, citing anti-trust regulations. The plan would have seen the creation of a "European champion" to compete with China's CRRC. Siemens Healthineers Siemens Healthineers AG is a publicly listed company that was spun off from Siemens in 2017. As of 2022, Siemens retains a 75% majority stake in Siemens Healthineers. 
As a global provider of healthcare solutions and services, its range of offerings includes the manufacture and sale of diagnostic and therapeutic products, clinical consulting, and a variety of training services. Its operations are divided into four main sectors: imaging, diagnostics, Varian Medical Systems, and advanced therapies. Imaging includes magnetic resonance, computed tomography, X-ray, molecular imaging, and ultrasound devices. The diagnostics segment offers in-vitro diagnostic products for laboratory and point-of-care settings. Varian, an American company acquired by Siemens Healthineers in 2021, covers technologies related to cancer care, and advanced therapies focus on image-guided minimally invasive procedures. Siemens Financial Services Siemens Financial Services (SFS) is a division that delivers a range of financing solutions. These services target both Siemens's customers and external companies, including debt and equity investments. It provides leasing, lending, working capital, structured financing, and equipment and project financing solutions. SFS is also involved in providing financial advisory services and risk management expertise to Siemens's industrial businesses, helping assess risk profiles of projects and business models. Former operations Siemens is known for actively refining its core business through strategic divestitures, pursuing a strategy referred to as "Corporate Clarity" that focuses on selling non-core aspects of the business. Major business divisions that were once part of Siemens before being spun off include: Deutsche Grammophon/Polydor Records (1987) Infineon Technologies (1999) Siemens Mobile (2005) Gigaset Communications (2008) Osram (2013) Siemens Energy (2020) Joint ventures Siemens's current joint ventures include: Siemens Traction Equipment Ltd. (STEZ), Zhuzhou China, is a joint venture between Siemens, Zhuzhou CSR Times Electric Co., Ltd. (TEC) and CSR Zhuzhou Electric Locomotive Co., Ltd. (ZELC), which produces AC drive electric locomotives and AC locomotive traction components. OMNETRIC Group, A Siemens & Accenture company formed in 2014. Former joint ventures in which Siemens no longer holds any equity include: Fujitsu Siemens Computers (sold to Fujitsu in 2009) Nokia Siemens Networks (sold to Nokia in 2013) BSH Hausgeräte (sold to Bosch in 2014) Primetals Technologies (sold to Mitsubishi Heavy Industries in 2019). Silcar was a joint venture between Siemens Ltd and Thiess Services Pty Ltd until 2013. Silcar is a 3,000 person Australian organisation providing productivity and reliability for large scale and technically complex plant assets. Services include asset management, design, construction, operations and maintenance. Silcar operates across a range of industries and essential services including power generation, electrical distribution, manufacturing, mining and telecommunications. In July 2013, Thiess took full control. Corporate affairs Siemens is incorporated in Germany and has its corporate headquarters at the Wittelsbacherplatz in central Munich. Business trends For the fiscal year 2023, Siemens reported a revenue of €77.7 billion, an increase of 8% over the previous fiscal cycle. In December 2023, Siemens's shares traded at over US$93 per share, and its market capitalization was valued at US$147 billion. According to an Ernst & Young study published in December 2023, Siemens and SAP were the only German companies of the top 100 most valuable companies by market capitalization worldwide. 
The key trends of Siemens are (as at the financial year ending September 30): * In 2020, Siemens Energy became an independent company. Locations As of 2011, Siemens has operations in around 190 countries and approximately 285 production and manufacturing facilities. Research and development In 2023, Siemens invested a total of €6.1 billion in research and development. As of 30 September 2022, Siemens had approximately 46,900 employees engaged in research and development and held approximately 43,600 patents worldwide. Leadership Chairmen of the Siemens-Schuckertwerke Managing Board (1903 to 1966) Alfred Berliner (1903 to 1912) Carl Friedrich von Siemens (1912 to 1919) (1919 to 1920) (1920 to 1939) (1939 to 1945) (1945 to 1949) (1949 to 1951) Friedrich Bauer (1951 to 1962) Bernhard Plettner (1962 to 1966) Chairmen of the Siemens & Halske / Siemens-Schuckertwerke Supervisory Board (1918 to 1966) Wilhelm von Siemens (1918 to 1919) Carl Friedrich von Siemens (1919 to 1941) Hermann von Siemens (1941 to 1946) Friedrich Carl Siemens (1946 to 1948) Hermann von Siemens (1948 to 1956) Ernst von Siemens (1956 to 1966) Chairmen of Siemens AG's managing board (1966 to present) , , Bernhard Plettner (presidency of the managing board) (1966 to 1967) Erwin Hachmann, Bernhard Plettner, Gerd Tacke (presidency of the managing board) (1967 to 1968) Gerd Tacke (1968 to 1971) Bernhard Plettner (1971 to 1981) Karlheinz Kaske (1981 to 1992) Heinrich von Pierer (1992 to 2005) Klaus Kleinfeld (2005 to 2007) Peter Löscher (2007 to 2013) Joe Kaeser (2013 to 2021) Roland Busch (2021 to present) Chairmen of the Siemens AG Supervisory Board (1966 to present) Ernst von Siemens (1966 to 1971) Peter von Siemens (1971 to 1981) Bernhard Plettner (1981 to 1988) Heribald Närger (1988 to 1993) Hermann Franz (1993 to 1998) Karl-Hermann Baumann (1998 to 2005) Heinrich von Pierer (2005 to 2007) (2007 to 2018) Jim Hagemann Snabe (2018 to present) Managing Board (present day) Roland Busch (CEO Siemens AG) Klaus Helmrich Cedrik Neike (CEO Digital Industries) Matthias Rebellius (CEO Smart Infrastructure) Ralf P. Thomas (CFO) Judith Wiese Shareholders The company has issued 881,000,000 shares of common stock. The largest single shareholder continues to be the founding shareholder, the Siemens family, with a stake of 6.9%, while 62% is held by institutional asset managers, the largest being two divisions of the world's largest asset manager BlackRock. Moreover, 83.97% of the shares are considered public float, however including such strategic investors as the State of Qatar (DIC Company Ltd.) with 3.04%, the Government Pension Fund of Norway with 2.5% and Siemens AG itself with 3.04%; and 19% are held by private investors, 13% by investors that are considered unidentifiable. In terms of nationality, 26% are owned by German investors, 21% by US investors, followed by the UK (11%), France (8%), Switzerland (8%) and a number of others (26%). References Further reading Bundesarchiv Berlin, NS 19, No. 968, Communication on the creation of the barracks for the Siemens & Halske, the planned production and the planned expansion for 2,500 prisoners "after direct discussions with this company": Economic and Administrative Main Office of the SS (WVHA), Oswald Pohl, secretly, to Reichsführer SS (RFSS), Heinrich Himmler, dated 20 October 1942. Margarete Buber (1993). 303f: As prisoners of Stalin and Hitler, Frankfurt am Main; Berlin. 
Wilfried Feldenkirchen: 1918–1945 Siemens, Munich 1995, Ulrike fire, Claus Füllberg-Stolberg, Sylvia Kempe: work at Ravensbrück concentration camp, in: Women in concentration camps. Bergen-Belsen. Ravensbrück, Bremen, 1994, pp. 55–69 Feldenkirchen, Wilfried (2000). Siemens: From Workshop to Global Player, Munich. Feldenkirchen, Wilfried, and Eberhard Posner (2005). The Siemens Entrepreneurs: Continuity and Change, 1847–2005. Ten Portraits, Munich. Greider, William (1997). One World, Ready or Not. Penguin Press. . Sigrid Jacobeit: working at Siemens in Ravensbrück, in: Dietrich Eichholz (eds) War and economy. Studies on German economic history 1939–1945, Berlin 1999. Ursula Krause-Schmitt: The path to the Siemens stock led past the crematorium, in: Information. German Resistance Study Group, Frankfurt / Main, 18 Jg, No. 37/38, Nov. 1993, pp. 38–46 MSS in the estate include Wanda Kiedrzy'nska, in: National Library of Poland, Warsaw, Manuscript Division, Sygn. akc 12013/1 and archive the memorial I/6-7-139 RA. * Woman Ravensbruck concentration camp. An overall presentation, State Justice Administration in Ludwigsburg, IV ART 409-Z 39/59, April 1972, pp. 129ff. Karl-Heinz Roth: "Forced labor in the Siemens Group (1938-1945): Facts, controversies, problems". In: Hermann Kaienburg (ed.): concentration camps and the German Economy 1939–1945 (Social studies, H. 34), Opladen 1996, pp. 149–168 Karl-Heinz Roth: forced labor in the Siemens Group, with a summary table, page 157 See also Ursula Krause-Schmitt: "The road to Siemens stock led to the crematorium past over," pp. 36f, where, according to the catalogs of the International Tracing Service Arolsen and Martin Weinmann (eds.). The Nazi camp system, Frankfurt / Main 1990 and Feldkirchen: Siemens 1918–1945, pp. 198–214, and in particular the associated annotations 91–187. Carola Sachse: "Jewish forced labor and non-Jewish women and men at Siemens from 1940 to 1945", in: International Scientific Correspondence, No. 1/1991, pp. 12–24 Shaping the Future: The Siemens Entrepreneurs 1847–2018. Ed. Siemens Historical Institute, Hamburg 2018, . Weiher, Siegfried von /Herbert Goetzeler (1984). The Siemens Company, Its Historical Role in the Progress of Electrical Engineering 1847–1980, 2nd ed. Berlin and Munich. 
External links Siemens Historical Institute 1847 establishments in Prussia Auschwitz concentration camp Companies in the Euro Stoxx 50 Companies in the Dow Jones Global Titans 50 Companies involved in the Holocaust Companies listed on the Frankfurt Stock Exchange Companies in the DAX index Conglomerate companies established in 1847 Conglomerate companies of Germany Consumer electronics brands Electrical engineering companies of Germany Electrical wiring and construction supplies manufacturers Electric transformer manufacturers Electronics companies of Germany German brands Guitar amplification tubes Instrument-making corporations Locomotive manufacturers of Germany Home appliance manufacturers of Germany Manufacturers of industrial automation Manufacturing companies established in 1847 Mobile phone manufacturers Networking companies Nuclear technology companies of Germany Price fixing convictions Rolling stock manufacturers of Germany Telecommunications equipment vendors Werner von Siemens Wind turbine manufacturers Diesel engine manufacturers Marine engine manufacturers Electrical generation engine manufacturers Gas engine manufacturers Pump manufacturers Electric motor manufacturers Gas turbine manufacturers Steam turbine manufacturers Industrial machine manufacturers Radio manufacturers Companies formerly listed on the New York Stock Exchange
Siemens
[ "Engineering" ]
9,187
[ "Industrial machine manufacturers", "Radio electronics", "Radio manufacturers", "Industrial machinery" ]
168,651
https://en.wikipedia.org/wiki/High-performance%20liquid%20chromatography
High-performance liquid chromatography (HPLC), formerly referred to as high-pressure liquid chromatography, is a technique in analytical chemistry used to separate, identify, and quantify specific components in mixtures. The mixtures, which have been dissolved into liquid solutions, can originate from food, chemicals, pharmaceuticals, biological, environmental and agricultural sources, among others. HPLC relies on high-pressure pumps that deliver a mixture of solvents, called the mobile phase, which flows through the system, collects the sample mixture on the way, and delivers it into a cylinder, called the column, that is filled with solid particles of adsorbent material, called the stationary phase. Each component in the sample interacts differently with the adsorbent material, causing different migration rates for each component. These different rates lead to separation as the species flow out of the column into a detector, such as a UV detector. The output of the detector is a graph, called a chromatogram. Chromatograms are graphical representations of the signal intensity versus time or volume, showing peaks, which represent components of the sample. Each component appears at its respective time, called its retention time, with a peak area proportional to its amount. HPLC is widely used for manufacturing (e.g., during the production process of pharmaceutical and biological products), legal (e.g., detecting performance-enhancing drugs in urine), research (e.g., separating the components of a complex biological sample, or of similar synthetic chemicals from each other), and medical (e.g., detecting vitamin D levels in blood serum) purposes. Chromatography can be described as a mass transfer process involving adsorption and/or partition. As mentioned, HPLC relies on pumps to pass a pressurized liquid and a sample mixture through a column filled with adsorbent, leading to the separation of the sample components. The active component of the column, the adsorbent, is typically a granular material made of solid particles (e.g., silica, polymers, etc.), 1.5–50 μm in size, on which various reagents can be bonded. The components of the sample mixture are separated from each other due to their different degrees of interaction with the adsorbent particles. The pressurized liquid is typically a mixture of solvents (e.g., water, buffers, acetonitrile and/or methanol) and is referred to as the "mobile phase". Its composition and temperature play a major role in the separation process by influencing the interactions taking place between sample components and adsorbent. These interactions are physical in nature, such as hydrophobic (dispersive), dipole–dipole and ionic, and most often a combination of these.
Operation
The liquid chromatograph is a complex instrument built on sophisticated and delicate technology. To operate the system properly, the user needs at least a basic understanding of how the device acquires and processes data, in order to avoid incorrect data and distorted results. HPLC is distinguished from traditional ("low pressure") liquid chromatography because operational pressures are significantly higher (around 50–1400 bar), while ordinary liquid chromatography typically relies on the force of gravity to pass the mobile phase through the packed column. Due to the small sample amount separated in analytical HPLC, typical column dimensions are 2.1–4.6 mm diameter, and 30–250 mm length. Also, HPLC columns are made with smaller adsorbent particles (1.5–50 μm in average particle size).
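As an illustration of how such a chromatogram trace can be processed (a minimal sketch, not taken from any particular instrument's software; the peak positions, heights and thresholds below are hypothetical, and NumPy/SciPy are assumed to be available), retention times and peak areas might be extracted as follows:

```python
# Illustrative sketch: a chromatogram as two arrays (time in minutes, detector
# signal in mAU); report each peak's retention time and integrated area.
import numpy as np
from scipy.signal import find_peaks, peak_widths
from scipy.integrate import trapezoid

# Hypothetical trace: two Gaussian peaks on a flat baseline.
time = np.linspace(0, 10, 2000)                      # minutes
signal = (120 * np.exp(-0.5 * ((time - 3.2) / 0.08) ** 2)
          + 45 * np.exp(-0.5 * ((time - 5.7) / 0.10) ** 2))

# Locate peaks above a simple height threshold.
peak_idx, _ = find_peaks(signal, height=10)

# Estimate each peak's extent near its base (width at 1% of peak height).
widths, heights, left_ips, right_ips = peak_widths(signal, peak_idx, rel_height=0.99)

for i, p in enumerate(peak_idx):
    lo, hi = int(left_ips[i]), int(right_ips[i]) + 1
    retention_time = time[p]                          # minutes
    area = trapezoid(signal[lo:hi], time[lo:hi])      # mAU * min
    print(f"peak {i + 1}: tR = {retention_time:.2f} min, area = {area:.1f} mAU*min")
```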
These smaller adsorbent particles give HPLC superior resolving power (the ability to distinguish between compounds) when separating mixtures, which makes it a popular chromatographic technique. The schematic of an HPLC instrument typically includes solvent reservoirs, one or more pumps, a solvent degasser, a sampler, a column, and a detector. The solvents are prepared in advance according to the needs of the separation; they pass through the degasser to remove dissolved gases, are mixed to form the mobile phase, and then flow through the sampler, which introduces the sample mixture into the mobile-phase stream that carries it into the column. The pumps deliver the desired flow and composition of the mobile phase through the stationary phase inside the column, then directly into a flow-cell inside the detector. The detector generates a signal proportional to the amount of sample component emerging from the column, hence allowing for quantitative analysis of the sample components. The detector also marks the time of emergence, the retention time, which serves for initial identification of the component. More advanced detectors also provide information specific to the analyte's characteristics, such as its UV-VIS spectrum or mass spectrum, which can give insight into its structural features. Commonly used detectors include UV/Vis, photodiode array (PDA, also called diode array) and mass spectrometry detectors. A digital microprocessor and user software control the HPLC instrument and provide data analysis. Some models of mechanical pumps in an HPLC instrument can mix multiple solvents together at ratios that change over time, generating a composition gradient in the mobile phase. Most HPLC instruments also have a column oven that allows for adjusting the temperature at which the separation is performed. The sample mixture to be separated and analyzed is introduced, in a discrete small volume (typically microliters), into the stream of mobile phase percolating through the column. The components of the sample move through the column, each at a different velocity, which is a function of its specific physical interactions with the adsorbent, the stationary phase. The velocity of each component depends on its chemical nature, on the nature of the stationary phase (inside the column) and on the composition of the mobile phase. The time at which a specific analyte elutes (emerges from the column) is called its retention time. The retention time, measured under particular conditions, is an identifying characteristic of a given analyte. Many different types of columns are available, filled with adsorbents varying in particle size, porosity, and surface chemistry. The use of smaller particle size packing materials requires the use of higher operational pressure ("backpressure") and typically improves chromatographic resolution (the degree of peak separation between consecutive analytes emerging from the column). Sorbent particles may be ionic, hydrophobic or polar in nature. The most common mode of liquid chromatography is reversed phase, whereby the mobile phases used include any miscible combination of water or buffers with various organic solvents (the most common are acetonitrile and methanol). Some HPLC techniques use water-free mobile phases (see normal-phase chromatography below). The aqueous component of the mobile phase may contain acids (such as formic, phosphoric or trifluoroacetic acid) or salts to assist in the separation of the sample components.
The composition of the mobile phase may be kept constant ("isocratic elution mode") or varied ("gradient elution mode") during the chromatographic analysis. Isocratic elution is typically effective in the separation of simple mixtures. Gradient elution is required for complex mixtures, with varying interactions with the stationary and mobile phases. This is why, in gradient elution, the composition of the mobile phase is typically varied from low to high eluting strength. The eluting strength of the mobile phase is reflected by analyte retention times, as a higher eluting strength speeds up the elution (resulting in shorter retention times). For example, a typical gradient profile in reversed phase chromatography might start at 5% acetonitrile (in water or aqueous buffer) and progress linearly to 95% acetonitrile over 5–25 minutes. Periods of constant mobile phase composition (plateaus) may also be part of a gradient profile. For example, the mobile phase composition may be kept constant at 5% acetonitrile for 1–3 min, followed by a linear change up to 95% acetonitrile. The chosen composition of the mobile phase depends on the intensity of interactions between various sample components ("analytes") and the stationary phase (e.g., hydrophobic interactions in reversed-phase HPLC). Depending on their affinity for the stationary and mobile phases, analytes partition between the two during the separation process taking place in the column. This partitioning process is similar to that which occurs during a liquid–liquid extraction but is continuous, not step-wise. In the example using a water/acetonitrile gradient, the more hydrophobic components will elute (come off the column) later; then, once the mobile phase gets richer in acetonitrile (i.e., as the mobile phase becomes a stronger eluting solution), their elution speeds up. The choice of mobile phase components, additives (such as salts or acids) and gradient conditions depends on the nature of the column and sample components. Often a series of trial runs is performed with the sample in order to find the HPLC method which gives adequate separation.
History and development
Prior to HPLC, scientists used benchtop column liquid chromatographic techniques. Liquid chromatographic systems were largely inefficient due to the flow rate of solvents being dependent on gravity. Separations took many hours, and sometimes days to complete. Gas chromatography (GC) at the time was more powerful than liquid chromatography (LC); however, it was clear that gas-phase separation and analysis of very polar, high-molecular-weight biopolymers was impossible. GC was ineffective for many life science and health applications for biomolecules, because they are mostly non-volatile and thermally unstable at the high temperatures of GC. As a result, alternative methods were hypothesized which would soon result in the development of HPLC. Following on the seminal work of Martin and Synge in 1941, it was predicted by Calvin Giddings, Josef Huber, and others in the 1960s that LC could be operated in the high-efficiency mode by reducing the packing-particle diameter substantially below the typical LC (and GC) level of 150 μm and using pressure to increase the mobile phase velocity. These predictions were subjected to extensive experimentation and refinement from the 1960s through the 1970s and continue to be refined today. Early developmental research began to improve LC particles, for example the historic Zipax, a superficially porous particle.
The 1970s brought about many developments in hardware and instrumentation. Researchers began using pumps and injectors to make a rudimentary design of an HPLC system. Gas amplifier pumps were ideal because they operated at constant pressure and did not require leak-free seals or check valves for steady flow and good quantitation. Hardware milestones were made at Dupont IPD (Industrial Polymers Division) such as a low-dwell-volume gradient device being utilized as well as replacing the septum injector with a loop injection valve. While instrumentation developments were important, the history of HPLC is primarily about the history and evolution of particle technology. After the introduction of porous layer particles, there has been a steady trend to reduced particle size to improve efficiency. However, by decreasing particle size, new problems arose. The practical disadvantages stem from the excessive pressure drop needed to force mobile fluid through the column and the difficulty of preparing a uniform packing of extremely fine materials. Every time particle size is reduced significantly, another round of instrument development usually must occur to handle the pressure. Types Partition chromatography Partition chromatography was one of the first kinds of chromatography that chemists developed, and is barely used these days. The partition coefficient principle has been applied in paper chromatography, thin layer chromatography, gas phase and liquid–liquid separation applications. The 1952 Nobel Prize in chemistry was earned by Archer John Porter Martin and Richard Laurence Millington Synge for their development of the technique, which was used for their separation of amino acids. Partition chromatography uses a retained solvent, on the surface or within the grains or fibers of an "inert" solid supporting matrix as with paper chromatography; or takes advantage of some coulombic and/or hydrogen donor interaction with the stationary phase. Analyte molecules partition between a liquid stationary phase and the eluent. Just as in hydrophilic interaction chromatography (HILIC; a sub-technique within HPLC), this method separates analytes based on differences in their polarity. HILIC most often uses a bonded polar stationary phase and a mobile phase made primarily of acetonitrile with water as the strong component. Partition HPLC has been used historically on unbonded silica or alumina supports. Each works effectively for separating analytes by relative polar differences. HILIC bonded phases have the advantage of separating acidic, basic and neutral solutes in a single chromatographic run. The polar analytes diffuse into a stationary water layer associated with the polar stationary phase and are thus retained. The stronger the interactions between the polar analyte and the polar stationary phase (relative to the mobile phase) the longer the elution time. The interaction strength depends on the functional groups part of the analyte molecular structure, with more polarized groups (e.g., hydroxyl-) and groups capable of hydrogen bonding inducing more retention. Coulombic (electrostatic) interactions can also increase retention. Use of more polar solvents in the mobile phase will decrease the retention time of the analytes, whereas more hydrophobic solvents tend to increase retention times. Normal–phase chromatography Normal–phase chromatography was one of the first kinds of HPLC that chemists developed, but has decreased in use over the last decades. 
Also known as normal-phase HPLC (NP-HPLC), this method separates analytes based on their affinity for a polar stationary surface such as silica; hence it is based on analyte ability to engage in polar interactions (such as hydrogen-bonding or dipole-dipole type of interactions) with the sorbent surface. NP-HPLC uses a non-polar, non-aqueous mobile phase (e.g., chloroform), and works effectively for separating analytes readily soluble in non-polar solvents. The analyte associates with and is retained by the polar stationary phase. Adsorption strengths increase with increased analyte polarity. The interaction strength depends not only on the functional groups present in the structure of the analyte molecule, but also on steric factors. The effect of steric hindrance on interaction strength allows this method to resolve (separate) structural isomers. The use of more polar solvents in the mobile phase will decrease the retention time of analytes, whereas more hydrophobic solvents tend to induce slower elution (increased retention times). Very polar solvents such as traces of water in the mobile phase tend to adsorb to the solid surface of the stationary phase forming a stationary bound (water) layer which is considered to play an active role in retention. This behavior is somewhat peculiar to normal phase chromatography because it is governed almost exclusively by an adsorptive mechanism (i.e., analytes interact with a solid surface rather than with the solvated layer of a ligand attached to the sorbent surface; see also reversed-phase HPLC below). Adsorption chromatography is still somewhat used for structural isomer separations in both column and thin-layer chromatography formats on activated (dried) silica or alumina supports. Partition- and NP-HPLC fell out of favor in the 1970s with the development of reversed-phase HPLC because of poor reproducibility of retention times due to the presence of a water or protic organic solvent layer on the surface of the silica or alumina chromatographic media. This layer changes with any changes in the composition of the mobile phase (e.g., moisture level) causing drifting retention times. Recently, partition chromatography has become popular again with the development of Hilic bonded phases which demonstrate improved reproducibility, and due to a better understanding of the range of usefulness of the technique. Displacement chromatography The use of displacement chromatography is rather limited, and is mostly used for preparative chromatography. The basic principle is based on a molecule with a high affinity for the chromatography matrix (the displacer) which is used to compete effectively for binding sites, and thus displace all molecules with lesser affinities. There are distinct differences between displacement and elution chromatography. In elution mode, substances typically emerge from a column in narrow, Gaussian peaks. Wide separation of peaks, preferably to baseline, is desired in order to achieve maximum purification. The speed at which any component of a mixture travels down the column in elution mode depends on many factors. But for two substances to travel at different speeds, and thereby be resolved, there must be substantial differences in some interaction between the biomolecules and the chromatography matrix. Operating parameters are adjusted to maximize the effect of this difference. In many cases, baseline separation of the peaks can be achieved only with gradient elution and low column loadings. 
Thus, two drawbacks to elution mode chromatography, especially at the preparative scale, are operational complexity, due to gradient solvent pumping, and low throughput, due to low column loadings. Displacement chromatography has advantages over elution chromatography in that components are resolved into consecutive zones of pure substances rather than "peaks". Because the process takes advantage of the nonlinearity of the isotherms, a larger column feed can be separated on a given column with the purified components recovered at significantly higher concentration.
Reversed-phase liquid chromatography (RP-LC)
Reversed phase HPLC (RP-HPLC) is the most widespread mode of chromatography. It has a non-polar stationary phase and an aqueous, moderately polar mobile phase. In reversed-phase methods, the more hydrophobic a substance is, the more strongly it is retained in the system. For the retention of organic materials, the stationary phases packed inside the columns consist mainly of porous granules of silica gel in various shapes, mainly spherical, in various diameters (1.5, 2, 3, 5, 7, 10 μm), with varying pore diameters (60, 100, 150, 300 Å), on whose surface are chemically bound various hydrocarbon ligands such as C3, C4, C8, C18. There are also polymeric hydrophobic particles that serve as stationary phases when solutions at extreme pH are needed, or hybrid silica, polymerized with organic substances. The longer the hydrocarbon ligand on the stationary phase, the longer the sample components can be retained. Most of the current methods for separation of biomedical materials use C-18 type columns, sometimes called by trade names such as ODS (octadecylsilane) or RP-18 (Reversed Phase 18). The most common RP stationary phases are based on a silica support, which is surface-modified by bonding RMe2SiCl, where R is a straight chain alkyl group such as C18H37 or C8H17. With such stationary phases, retention time is longer for lipophilic molecules, whereas polar molecules elute more readily (emerge early in the analysis). A chromatographer can increase retention times by adding more water to the mobile phase, thereby making the interactions of the hydrophobic analyte with the hydrophobic stationary phase relatively stronger. Similarly, an investigator can decrease retention time by adding more organic solvent to the mobile phase. RP-HPLC is so commonly used by biologists and life-science researchers that it is often incorrectly referred to as just "HPLC" without further specification. The pharmaceutical industry also regularly employs RP-HPLC to qualify drugs before their release. RP-HPLC operates on the principle of hydrophobic interactions, which originate from the high symmetry in the dipolar water structure and play the most important role in all processes in life science. RP-HPLC allows the measurement of these interactive forces. The binding of the analyte to the stationary phase is proportional to the contact surface area around the non-polar segment of the analyte molecule upon association with the ligand on the stationary phase. This solvophobic effect is dominated by the force of water for "cavity-reduction" around the analyte and the C18-chain versus the complex of both. The energy released in this process is proportional to the surface tension of the eluent (water: 7.3 × 10−6 J/cm2, methanol: 2.2 × 10−6 J/cm2) and to the hydrophobic surface of the analyte and the ligand respectively.
The retention can be decreased by adding a less polar solvent (methanol, acetonitrile) into the mobile phase to reduce the surface tension of water. Gradient elution uses this effect by automatically reducing the polarity and the surface tension of the aqueous mobile phase during the course of the analysis. Structural properties of the analyte molecule can play an important role in its retention characteristics. In theory, an analyte with a larger hydrophobic surface area (C–H, C–C, and generally non-polar atomic bonds, such as S-S and others) is retained longer, as it does not interact with the water structure. On the other hand, analytes with higher polar surface area (as a result of the presence of polar groups, such as -OH, -NH2, COO− or -NH3+ in their structure) are less retained, as they are better integrated into water. The interactions with the stationary phase can also be affected by steric or exclusion effects, whereby a very large molecule may have only restricted access to the pores of the stationary phase, where the interactions with surface ligands (alkyl chains) take place. Such surface hindrance typically results in less retention. Retention time increases with the hydrophobic (non-polar) surface area of the molecule. For example, branched chain compounds can elute more rapidly than their corresponding linear isomers because their overall surface area is lower. Similarly, organic compounds with single C–C bonds frequently elute later than those with a C=C or even a triple bond, as the double or triple bond makes the molecule more compact than a single C–C bond. Another important factor is the mobile phase pH, since it can change the hydrophobic character of the ionizable analyte. For this reason most methods use a buffering agent, such as sodium phosphate, to control the pH. Buffers serve multiple purposes: they control the pH, which affects the ionization state of the ionizable analytes; they affect the charge on the ionizable silica surface of the stationary phase between the bonded-phase ligands; and in some cases they even act as ion-pairing agents to neutralize analyte charge. Ammonium formate is commonly added in mass spectrometry to improve detection of certain analytes by the formation of analyte-ammonium adducts. A volatile organic acid such as acetic acid, or most commonly formic acid, is often added to the mobile phase if mass spectrometry is used to analyze the column effluents. Trifluoroacetic acid (TFA) as an additive to the mobile phase is widely used for complex mixtures of biomedical samples, mostly peptides and proteins, mainly with UV-based detectors. It is rarely used in mass spectrometry methods, due to the residues it can leave in the detector and solvent delivery system, which interfere with the analysis and detection. However, TFA can be highly effective in improving retention of analytes such as carboxylic acids, in applications utilizing other detectors such as UV-VIS, as it is a fairly strong organic acid. The effects of acids and buffers vary by application but generally improve chromatographic resolution when dealing with ionizable components. Reversed phase columns are quite difficult to damage compared to normal silica columns, thanks to the shielding effect of the bonded hydrophobic ligands; however, most reversed phase columns consist of alkyl-derivatized silica particles, and are prone to hydrolysis of the silica at extreme pH conditions in the mobile phase.
Most types of RP columns should not be used with aqueous bases, as these will hydrolyze the underlying silica particle and dissolve it. Selected brands of RP columns, based on hybrid or reinforced silica particles, can be used at extreme pH conditions. The use of extreme acidic conditions is also not recommended, as they might hydrolyze the silica as well as corrode the inside walls of the metallic parts of the HPLC equipment. As a rule, in most cases RP-HPLC columns should be flushed with clean solvent after use to remove residual acids or buffers, and stored in an appropriate composition of solvent. Some biomedical applications require a non-metallic environment for optimal separation. For such sensitive cases, one test for the metal content of a column is to inject a sample which is a mixture of 2,2'- and 4,4'-bipyridine. Because the 2,2'-bipy can chelate the metal, the shape of the peak for the 2,2'-bipy will be distorted (tailed) when metal ions are present on the surface of the silica.
Size-exclusion chromatography
Size-exclusion chromatography (SEC) separates polymer molecules and biomolecules based on differences in their molecular size (actually by a particle's Stokes radius). The separation process is based on the ability of sample molecules to permeate through the pores of gel spheres, packed inside the column, and is dependent on the relative size of analyte molecules and the respective pore size of the packing material. The process also relies on the absence of any interactions with the packing material surface. Two types of SEC are usually distinguished: gel permeation chromatography (GPC), the separation of synthetic polymers (aqueous or organic soluble), which is a powerful technique for polymer characterization using primarily organic solvents; and gel filtration chromatography (GFC), the separation of water-soluble biopolymers, which uses primarily aqueous solvents (typically for water-soluble biopolymers such as proteins). The separation principle in SEC is based on the full or partial penetration of the high-molecular-weight substances of the sample into the porous stationary-phase particles during their transport through the column. The mobile-phase eluent is selected in such a way that it totally prevents interactions with the stationary phase's surface. Under these conditions, the smaller the molecule, the deeper it can penetrate into the pore space, and its movement through the column takes longer. On the other hand, the bigger the molecular size, the higher the probability that the molecule will not fully penetrate the pores of the stationary phase, and will even travel around them, and thus will be eluted earlier. The molecules are separated in order of decreasing molecular weight, with the largest molecules eluting from the column first and smaller molecules eluting later. Molecules larger than the pore size do not enter the pores at all and elute together as the first peak in the chromatogram; this is called the total exclusion volume, which defines the exclusion limit for a particular column. Small molecules will permeate fully through the pores of the stationary phase particles and will be eluted last, marking the end of the chromatogram, and may appear as a total penetration marker. In the biomedical sciences SEC is generally considered a low-resolution chromatography and thus is often reserved for the final, "polishing" step of a purification. It is also useful for determining the tertiary structure and quaternary structure of purified proteins.
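One common practical use of SEC, noted below, is molecular weight estimation; in practice this is usually done against a calibration curve built from standards of known molar mass. The following is a minimal sketch only (the standards, column and all numerical values are hypothetical), assuming that log molar mass is approximately linear in elution volume between the exclusion and total-penetration limits:

```python
# Minimal sketch of an SEC/GPC calibration: fit log10(M) vs elution volume for
# hypothetical narrow standards, then estimate an unknown's molar mass.
import numpy as np

# Hypothetical standards: elution volume (mL) and molar mass (g/mol).
elution_volume = np.array([12.1, 13.4, 14.8, 16.2, 17.5])
molar_mass = np.array([1.0e6, 2.5e5, 6.0e4, 1.5e4, 4.0e3])

# Fit log10(M) = a * Ve + b.
a, b = np.polyfit(elution_volume, np.log10(molar_mass), 1)

def estimate_molar_mass(ve_ml: float) -> float:
    """Estimate the molar mass (g/mol) of an unknown from its elution volume."""
    return 10 ** (a * ve_ml + b)

print(f"Unknown eluting at 15.0 mL: M ~ {estimate_molar_mass(15.0):,.0f} g/mol")
```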
SEC is used primarily for the analysis of large molecules such as proteins or polymers. SEC also works in a preparative way by trapping the smaller molecules in the pores of the particles. The larger molecules simply pass by the pores, as they are too large to enter them. Larger molecules therefore flow through the column more quickly than smaller molecules: that is, the smaller the molecule, the longer the retention time. This technique is widely used for the molecular weight determination of polysaccharides. SEC is the official technique (suggested by the European pharmacopeia) for the molecular weight comparison of different commercially available low-molecular-weight heparins.
Ion-exchange chromatography
Ion-exchange chromatography (IEC) or ion chromatography (IC) is an analytical technique for the separation and determination of ionic solutes in aqueous samples of environmental and industrial origin, such as the metal industry, industrial waste water, biological systems, pharmaceutical samples, food, etc. Retention is based on the attraction between solute ions and charged sites bound to the stationary phase. Solute ions with the same charge as the ions on the column are repelled and elute without retention, while solute ions charged oppositely to the charged sites of the column are retained on it. Solute ions that are retained on the column can be eluted from it by changing the mobile phase composition, such as increasing its salt concentration or pH, or by increasing the column temperature. Types of ion exchangers include polystyrene resins, cellulose and dextran ion exchangers (gels), and controlled-pore glass or porous silica gel. Polystyrene resins allow cross-linking, which increases the stability of the chains. Higher cross-linking reduces swelling, which increases the equilibration time and ultimately improves selectivity. Cellulose and dextran ion exchangers possess larger pore sizes and low charge densities, making them suitable for protein separation. In general, ion exchangers favor the binding of ions of higher charge and smaller radius.
This chromatographic process relies on the capability of the bonded active substances to form stable, specific, and reversible complexes thanks to their biological recognition of certain specific sample components. The formation of these complexes involves the participation of common molecular forces such as the Van der Waals interaction, electrostatic interaction, dipole-dipole interaction, hydrophobic interaction, and the hydrogen bond. An efficient, biospecific bond is formed by a simultaneous and concerted action of several of these forces in the complementary binding sites. Aqueous normal-phase chromatography Aqueous normal-phase chromatography (ANP) is also called hydrophilic interaction liquid chromatography (HILIC). This is a chromatographic technique which encompasses the mobile phase region between reversed-phase chromatography (RP) and organic normal phase chromatography (ONP). HILIC is used to achieve unique selectivity for hydrophilic compounds, showing normal phase elution order, using "reversed-phase solvents", i.e., relatively polar mostly non-aqueous solvents in the mobile phase. Many biological molecules, especially those found in biological fluids, are small polar compounds that do not retain well by reversed phase-HPLC. This has made hydrophilic interaction LC (HILIC) an attractive alternative and useful approach for analysis of polar molecules. Additionally, because HILIC is routinely used with traditional aqueous mixtures with polar organic solvents such as ACN and methanol, it can be easily coupled to MS. Isocratic and gradient elution A separation in which the mobile phase composition remains constant throughout the procedure is termed isocratic (meaning constant composition). The word was coined by Csaba Horvath who was one of the pioneers of HPLC. The mobile phase composition does not have to remain constant. A separation in which the mobile phase composition is changed during the separation process is described as a gradient elution. For example, a gradient can start at 10% methanol in water, and end at 90% methanol in water after 20 minutes. The two components of the mobile phase are typically termed "A" and "B"; A is the "weak" solvent which allows the solute to elute only slowly, while B is the "strong" solvent which rapidly elutes the solutes from the column. In reversed-phase chromatography, solvent A is often water or an aqueous buffer, while B is an organic solvent miscible with water, such as acetonitrile, methanol, THF, or isopropanol. In isocratic elution, peak width increases with retention time linearly according to the equation for N, the number of theoretical plates. This can be a major disadvantage when analyzing a sample that contains analytes with a wide range of retention factors. Using a weaker mobile phase, the runtime is lengthened and results in slowly eluting peaks to be broad, leading to reduced sensitivity. A stronger mobile phase would improve issues of runtime and broadening of later peaks but results in diminished peak separation, especially for quickly eluting analytes which may have insufficient time to fully resolve. This issue is addressed through the changing mobile phase composition of gradient elution. By starting from a weaker mobile phase and strengthening it during the runtime, gradient elution decreases the retention of the later-eluting components so that they elute faster, giving narrower (and taller) peaks for most components, while also allowing for the adequate separation of earlier-eluting components. 
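A simple way to picture a gradient program such as the 5%-to-95% acetonitrile example given earlier is as a table of (time, %B) breakpoints with linear interpolation between them. The sketch below is illustrative only; the hold times and percentages are hypothetical rather than a recommended method:

```python
# Minimal sketch of a binary gradient program: hold at 5% B (the "strong" solvent,
# e.g. acetonitrile) for 2 min, ramp linearly to 95% B by 20 min, then hold.
import numpy as np

# (time in minutes, %B) breakpoints; %A is implicitly 100 - %B.
gradient_table = [(0.0, 5.0), (2.0, 5.0), (20.0, 95.0), (25.0, 95.0)]

def percent_b(t_min: float) -> float:
    """Mobile-phase %B at time t, by linear interpolation between breakpoints."""
    times, percents = zip(*gradient_table)
    return float(np.interp(t_min, times, percents))

for t in (0, 2, 5, 11, 20, 25):
    print(f"t = {t:>2} min: {percent_b(t):5.1f}% B / {100 - percent_b(t):5.1f}% A")
```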
Gradient elution also improves the peak shape for tailed peaks, as the increasing concentration of the organic eluent pushes the tailing part of a peak forward. It also increases the peak height (the peak looks "sharper"), which is important in trace analysis. The gradient program may include sudden "step" increases in the percentage of the organic component, or different slopes at different times – all according to the desire for optimum separation in minimum time. In isocratic elution, the retention order does not change if the column dimensions (length and inner diameter) change – that is, the peaks elute in the same order. In gradient elution, however, the elution order may change as the dimensions or flow rate change, if the gradient is not scaled up or down according to the change. The driving force in reversed phase chromatography originates in the high order of the water structure. The role of the organic component of the mobile phase is to reduce this high order and thus reduce the retarding strength of the aqueous component.
Parameters
Theoretical
The theory of high performance liquid chromatography (HPLC) is, at its core, the same as general chromatography theory. This theory has been used as the basis for system-suitability tests, as can be seen in the USP Pharmacopeia, which are a set of quantitative criteria that test the suitability of the HPLC system for the required analysis at any step of it. The relation between an analyte's retention time and the void time is represented as a normalized, unit-less factor known as the retention factor, or retention parameter, which is the experimental measurement of the capacity ratio: k' = (tR − t0)/t0. Here tR is the retention time of the specific component and t0 is the time it takes for a non-retained substance to elute through the system without any retention, and is thus called the void time. The ratio between the retention factors, k', of every two adjacent peaks in the chromatogram is used in the evaluation of the degree of separation between them, and is called the selectivity factor, α. The plate count N as a criterion for system efficiency was developed for isocratic conditions, i.e., a constant mobile phase composition throughout the run. In gradient conditions, where the mobile phase changes with time during the chromatographic run, it is more appropriate to use the parameter peak capacity Pc as a measure of system efficiency. The peak capacity in chromatography is defined as the number of peaks that can be separated within a retention window for a specific pre-defined resolution factor, usually ~1. It can also be envisioned as the runtime measured in units of the average peak width. In the corresponding expression, Pc = 1 + tg/w(ave), tg is the gradient time and w(ave) is the average peak width at the base. The parameters are largely derived from two sets of chromatographic theory: plate theory (as part of partition chromatography), and the rate theory of chromatography / Van Deemter equation. Of course, they can be put into practice through analysis of HPLC chromatograms, although rate theory is considered the more accurate theory. They are analogous to the calculation of the retention factor for a paper chromatography separation, but describe how well HPLC separates a mixture into two or more components that are detected as peaks (bands) on a chromatogram. The HPLC parameters are the efficiency factor (N), the retention factor (kappa prime), and the separation factor (alpha).
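Using the definitions above (k' = (tR − t0)/t0, α as the ratio of adjacent retention factors, N from the baseline peak width, and Pc = 1 + tg/w for a gradient), these figures of merit can be computed directly from measured peak data, together with the resolution between two neighboring peaks discussed in the following paragraphs. The peak values below are hypothetical and serve only as a worked sketch:

```python
# Sketch of standard chromatographic figures of merit for two hypothetical
# neighboring peaks, using textbook formulas with baseline peak widths.
t0 = 1.0                    # void time, min
tr1, w1 = 4.0, 0.20         # retention time and baseline width of peak 1, min
tr2, w2 = 4.6, 0.22         # retention time and baseline width of peak 2, min

k1 = (tr1 - t0) / t0                      # retention factor of peak 1
k2 = (tr2 - t0) / t0                      # retention factor of peak 2
alpha = k2 / k1                           # selectivity factor
n_plates = 16 * (tr2 / w2) ** 2           # plate count from baseline width
resolution = 2 * (tr2 - tr1) / (w1 + w2)  # resolution between the two peaks

# Peak capacity for a hypothetical 20 min gradient with ~0.2 min average peak width.
peak_capacity = 1 + 20.0 / 0.2

print(f"k'1 = {k1:.2f}, k'2 = {k2:.2f}, alpha = {alpha:.2f}")
print(f"N = {n_plates:.0f}, Rs = {resolution:.2f}, Pc = {peak_capacity:.0f}")
```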
Together, these factors are variables in a resolution equation, which describes how well two components' peaks are separated from or overlap each other. These parameters are mostly only used for describing HPLC reversed phase and HPLC normal phase separations, since those separations tend to be more subtle than other HPLC modes (e.g., ion exchange and size exclusion). Void volume is the amount of space in a column that is occupied by solvent. It is the space within the column that is outside of the column's internal packing material. Void volume is measured on a chromatogram as the first component peak detected, which is usually the solvent that was present in the sample mixture; ideally the sample solvent flows through the column without interacting with the column, but is still detectable as distinct from the HPLC solvent. The void volume is used as a correction factor. The efficiency factor (N) practically measures how sharp the component peaks on the chromatogram are, based on the ratio of a component peak's retention time to the width of the peak at its widest point (at the baseline). Peaks that are tall, sharp, and relatively narrow indicate that the separation method efficiently removed a component from the mixture, i.e., high efficiency. Efficiency is very dependent upon the HPLC column and the HPLC method used. The efficiency factor is synonymous with plate number and the 'number of theoretical plates'. The retention factor (kappa prime) measures how long a component of the mixture is retained by the column, determined from the position (retention time) of its peak in the chromatogram relative to the void time. Each chromatogram peak will have its own retention factor (e.g., kappa1 for the retention factor of the first peak). This factor may be corrected for by the void volume of the column. The separation factor (alpha) is a relative comparison of how well two neighboring components of the mixture were separated (i.e., two neighboring bands on a chromatogram). This factor is defined in terms of a ratio of the retention factors of a pair of neighboring chromatogram peaks, and may also be corrected for by the void volume of the column. The greater the separation factor value is over 1.0, the better the separation, up to about 2.0, beyond which an HPLC method is probably not needed for separation. Resolution equations relate the three factors such that high efficiency and separation factors improve the resolution of component peaks in an HPLC separation.
Internal diameter
The internal diameter (ID) of an HPLC column is an important parameter. It can influence the detection response when reduced, due to the reduced lateral diffusion of the solute band. It can also affect the separation selectivity when flow rate and injection volumes are not scaled down or up proportionally to the smaller or larger diameter used, both in the isocratic and in the gradient modes. It determines the quantity of analyte that can be loaded onto the column. Larger diameter columns are usually seen in preparative applications, such as the purification of a drug product for later use. Low-ID columns have improved sensitivity and lower solvent consumption in the recent ultra-high performance liquid chromatography (UHPLC). Larger ID columns (over 10 mm) are used to purify usable amounts of material because of their large loading capacity. Analytical scale columns (4.6 mm) have been the most common type of columns, though narrower columns are rapidly gaining in popularity.
They are used in traditional quantitative analysis of samples and often use a UV-Vis absorbance detector. Narrow-bore columns (1–2 mm) are used for applications when more sensitivity is desired, either with special UV-Vis detectors, fluorescence detection or with other detection methods like liquid chromatography-mass spectrometry. Capillary columns (under 0.3 mm) are used almost exclusively with alternative detection means such as mass spectrometry. They are usually made from fused silica capillaries, rather than the stainless steel tubing that larger columns employ.
Particle size
Most traditional HPLC is performed with the stationary phase attached to the outside of small spherical silica particles (very small beads). These particles come in a variety of sizes, with 5 μm beads being the most common. Smaller particles generally provide more surface area and better separations, but the pressure required for optimum linear velocity increases by the inverse of the particle diameter squared. According to the equations for column velocity, efficiency and backpressure, reducing the particle diameter by half while keeping the size of the column the same will double the optimum column velocity and the efficiency, but will increase the backpressure fourfold. Small-particle HPLC can also reduce peak broadening. Larger particles are used in preparative HPLC (column diameters 5 cm up to >30 cm) and for non-HPLC applications such as solid-phase extraction.
Pore size
Many stationary phases are porous to provide greater surface area. Small pores provide greater surface area, while larger pore sizes have better kinetics, especially for larger analytes. For example, a protein which is only slightly smaller than a pore might enter the pore but does not easily leave once inside.
Pump pressure
Pumps vary in pressure capacity, but their performance is measured by their ability to yield a consistent and reproducible volumetric flow rate. Pressure may reach as high as 60 MPa (6000 lbf/in2), or about 600 atmospheres. Modern HPLC systems have been improved to work at much higher pressures, and therefore are able to use much smaller particle sizes in the columns (<2 μm). These "ultra high performance liquid chromatography" systems, or UHPLCs, which could also be known as ultra high pressure chromatography systems, can work at up to 120 MPa (17,405 lbf/in2), or about 1200 atmospheres. The term "UPLC" is a trademark of the Waters Corporation, but is sometimes used to refer to the more general technique of UHPLC.
Detectors
HPLC detectors fall into two main categories: universal or selective. Universal detectors typically measure a bulk property (e.g., refractive index) by measuring a difference of a physical property between the mobile phase and the mobile phase with solute, while selective detectors measure a solute property (e.g., UV-Vis absorbance) by simply responding to the physical or chemical property of the solute. HPLC most commonly uses a UV-Vis absorbance detector; however, a wide range of other chromatography detectors can be used. A universal detector that complements UV-Vis absorbance detection is the charged aerosol detector (CAD). Another commonly used type is the refractive index detector, which provides readings by measuring changes in the refractive index of the eluant as it moves through the flow cell. In certain cases, it is possible to use multiple detectors; for example, LC-MS normally combines UV-Vis detection with a mass spectrometer.
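The scaling relationships noted under Particle size and Pump pressure above can be illustrated with a short calculation. This is a rough sketch using common approximations (a plate height of roughly two particle diameters near the optimum, and a pressure drop scaling with the inverse square of particle diameter at fixed linear velocity); the reference pressure and column length are arbitrary assumptions:

```python
# Rough sketch: plate count and backpressure versus particle diameter for a fixed
# 150 mm column, using N ~ L / (2 * dp) and pressure ~ 1 / dp**2 (fixed velocity).
column_length_um = 150_000          # 150 mm column expressed in micrometres
reference_dp_um = 5.0               # reference particle diameter, um
reference_pressure_bar = 100.0      # assumed backpressure with 5 um particles

for dp_um in (5.0, 3.5, 2.5, 1.7):
    plates = column_length_um / (2.0 * dp_um)                        # N ~ L / (2 dp)
    pressure = reference_pressure_bar * (reference_dp_um / dp_um) ** 2
    print(f"dp = {dp_um:3.1f} um: N ~ {plates:6.0f}, backpressure ~ {pressure:5.0f} bar")
```

Consistent with the text, halving the particle diameter in this sketch doubles the plate count while quadrupling the backpressure.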
When used with an electrochemical detector (ECD), HPLC-ECD selectively detects neurotransmitters such as norepinephrine, dopamine, serotonin, glutamate, GABA, acetylcholine and others in neurochemical analysis research applications. HPLC-ECD detects neurotransmitters down to the femtomolar range. Other methods to detect neurotransmitters include liquid chromatography-mass spectrometry, ELISA, or radioimmunoassays.
Autosamplers
Large numbers of samples can be automatically injected onto an HPLC system by the use of HPLC autosamplers. In addition, HPLC autosamplers have an injection volume and technique which are exactly the same for each injection; consequently, they provide a high degree of injection volume precision. It is possible to enable sample stirring within the sampling chamber, thus promoting homogeneity.
Applications
Manufacturing
HPLC has many applications in both laboratory and clinical science. It is a common technique used in pharmaceutical development, as it is a dependable way to obtain and ensure product purity. While HPLC can produce extremely high quality (pure) products, it is not always the primary method used in the production of bulk drug materials. According to the European pharmacopoeia, HPLC is used in only 15.5% of syntheses. However, it plays a role in 44% of syntheses in the United States pharmacopoeia. This could possibly be due to differences in monetary and time constraints, as HPLC on a large scale can be an expensive technique. The increase in specificity, precision, and accuracy that occurs with HPLC unfortunately corresponds to an increase in cost.
Legal
This technique is also used for the detection of illicit drugs in various samples. The most common method of drug detection has been the immunoassay, which is much more convenient. However, convenience comes at the cost of specificity and coverage of a wide range of drugs, so HPLC is also used as an alternative method. As HPLC is a method of determining (and possibly increasing) purity, using HPLC alone to evaluate concentrations of drugs is somewhat insufficient. Therefore, HPLC in this context is often performed in conjunction with mass spectrometry. Using liquid chromatography-mass spectrometry (LC-MS) instead of gas chromatography-mass spectrometry (GC-MS) circumvents the necessity of derivatizing with acetylating or alkylation agents, which can be a burdensome extra step. LC-MS has been used to detect a variety of agents like doping agents, drug metabolites, glucuronide conjugates, amphetamines, opioids, cocaine, BZDs, ketamine, LSD, cannabis, and pesticides. Performing HPLC in conjunction with mass spectrometry reduces the absolute need for standardizing HPLC experimental runs.
Research
Similar assays can be performed for research purposes, detecting concentrations of potential clinical candidates like anti-fungal and asthma drugs. This technique is also useful in observing multiple species in collected samples, but requires the use of standard solutions when information about species identity is sought. It is used as a method to confirm results of synthesis reactions, as purity is essential in this type of research. However, mass spectrometry is still the more reliable way to identify species.
Medical and health sciences
Medical use of HPLC typically employs a mass spectrometer (MS) as the detector, so the technique is called LC-MS, or LC-MS/MS for tandem MS, where two types of MS are operated sequentially.
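Across the quantitative applications described above, the amount of an analyte is commonly determined from an external calibration curve relating peak area to concentration, peak area being proportional to amount as noted in the introduction. The following is a minimal sketch with hypothetical standards; the linear model and all numerical values are assumptions, not data from any of the cited applications:

```python
# Illustrative sketch of external-standard quantitation: fit peak area versus known
# concentration for a set of standards, then back-calculate an unknown.
import numpy as np

conc_standards = np.array([0.5, 1.0, 2.0, 5.0, 10.0])      # e.g. ug/mL
peak_areas = np.array([12.4, 24.1, 49.8, 123.0, 247.5])    # detector area units

slope, intercept = np.polyfit(conc_standards, peak_areas, 1)

def concentration_from_area(area: float) -> float:
    """Back-calculate concentration from a measured peak area."""
    return (area - intercept) / slope

unknown_area = 88.0
print(f"Estimated concentration: {concentration_from_area(unknown_area):.2f} ug/mL")
```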
When the HPLC instrument is connected to more than one detector, it is called a hyphenated LC system. Pharmaceutical applications are the major users of HPLC, LC-MS and LC-MS/MS. This includes drug development and pharmacology, which is the scientific study of the effects of drugs and chemicals on living organisms, personalized medicine, public health and diagnostics. While urine is the most common medium for analyzing drug concentrations, blood serum is the sample collected for most medical analyses with HPLC. One of the most important roles of LC-MS and LC-MS/MS in the clinical lab is the Newborn Screening (NBS) for metabolic disorders and follow-up diagnostics. The infants' samples come in the shape of dried blood spot (DBS), which is simple to prepare and transport, enabling safe and accessible diagnostics, both locally and globally. Other methods of detection of molecules that are useful for clinical studies have been tested against HPLC, namely immunoassays. In one example of this, competitive protein binding assays (CPBA) and HPLC were compared for sensitivity in detection of vitamin D. Useful for diagnosing vitamin D deficiencies in children, it was found that sensitivity and specificity of this CPBA reached only 40% and 60%, respectively, of the capacity of HPLC. While an expensive tool, the accuracy of HPLC is nearly unparalleled. See also History of chromatography Capillary electrochromatography Column chromatography Csaba Horváth Ion chromatography Micellar liquid chromatography References Further reading L. R. Snyder, J.J. Kirkland, and J. W. Dolan, Introduction to Modern Liquid Chromatography, John Wiley & Sons, New York, 2009. M.W. Dong, Modern HPLC for practicing scientists. Wiley, 2006. L. R. Snyder, J.J. Kirkland, and J. L. Glajch, Practical HPLC Method Development, John Wiley & Sons, New York, 1997. S. Ahuja and H. T. Rasmussen (ed), HPLC Method Development for Pharmaceuticals, Academic Press, 2007. S. Ahuja and M.W. Dong (ed), Handbook of Pharmaceutical Analysis by HPLC, Elsevier/Academic Press, 2005. Y. V. Kazakevich and R. LoBrutto (ed.), HPLC for Pharmaceutical Scientists, Wiley, 2007. U. D. Neue, HPLC Columns: Theory, Technology, and Practice, Wiley-VCH, New York, 1997. M. C. McMaster, HPLC, a practical user's guide, Wiley, 2007. External links HPLC Chromatography Principle, Application [Basic Note] – 2020. at Rxlalit.com Hungarian inventions Chromatography Scientific techniques
High-performance liquid chromatography
[ "Chemistry" ]
10,890
[ "Chromatography", "Separation processes" ]