Dataset schema: id (int64, 39 to 79M) · url (string, lengths 31 to 227) · text (string, lengths 6 to 334k) · source (string, lengths 1 to 150) · categories (list, lengths 1 to 6) · token_count (int64, 3 to 71.8k) · subcategories (list, lengths 0 to 30)
1,616,889
https://en.wikipedia.org/wiki/Alachlor
Alachlor is an herbicide from the chloroacetanilide family. It is an odorless, white solid. The greatest use of alachlor is for control of annual grasses and broadleaf weeds in crops. Use of alachlor is illegal in the European Union and no products containing alachlor are currently registered in the United States. Its mode of action is elongase inhibition, and inhibition of geranylgeranyl pyrophosphate (GGPP) cyclisation enzymes, part of the gibberellin pathway. It is marketed under the trade names Alanex, Bronco, Cannon, Crop Star, Intrro, Lariat, Lasso, Micro-Tech and Partner. Uses The largest use of alachlor is as a herbicide for control of annual grasses and broadleaf weeds in crops, primarily on corn, sorghum, and soybeans. Application details Alachlor mixes well with other herbicides. It is marketed in mixed formulations with atrazine, glyphosate, trifluralin and imazaquin. It is a selective, systemic herbicide, absorbed by germinating shoots and by roots. Its mode of action is elongase inhibition, and inhibition of geranylgeranyl pyrophosphate (GGPP) cyclisation enzymes, part of the gibberellin pathway. Stated more simply, it works by interfering with a plant's ability to produce protein and by interfering with root growth. It is most commonly available as microgranules containing 15% active ingredient (AI), or as an emulsifiable concentrate containing 480 g/litre of AI. Homologation in Europe requires a maximum dose of 2,400 g per hectare of AI, or 5 litres/hectare of emulsifiable concentrate or 17 kg/ha of microgranules. The products are applied either pre-drilling, soil-incorporated, or pre-emergence. Safety The United States Environmental Protection Agency (EPA) classifies the herbicide as toxicity class III - slightly toxic. The Maximum Contaminant Level Goal (MCLG) for alachlor is zero, to prevent long-term effects. The Maximum Contaminant Level (MCL) for drinking water is two parts per billion (2 ppb). The EPA cited the following long-term effects for exposures at levels above the MCL in drinking water exposed to runoff from herbicide used on row crops: slight skin and eye irritation; at lifetime exposure to levels above the MCL: potential damage to liver, kidney, spleen; lining of nose and eyelids; cancer. The major source of environmental release of alachlor is through its manufacture and use as a herbicide. Alachlor was detected in rural domestic well water by EPA's National Survey of Pesticides in Drinking Water Wells. EPA's Pesticides in Ground Water Database reports detections of alachlor in ground water at concentrations above the MCL in at least 15 U.S. states. Alachlor is a controlled substance under Australian law and is listed as a Schedule 7 (Dangerous Poison) substance. Access, use and storage are strictly controlled under state and territory law. Since 2006, use of alachlor as a herbicide has been banned in the European Union. In "a judgment that could lend weight to other health claims against pesticides," in January 2012 a French court found Monsanto, which manufactures Lasso, liable for chemical poisoning of a French farmer in 2004. In 2015 a French appeals court upheld the ruling and ordered the company to "fully compensate" the grower. Environmental fate Alachlor exhibits moderate sorption in soil, with sorption coefficients ranging from 43 to 209 mL/g. Photodegradation is a minor contributor to alachlor fate. Degradation in soil appears to be largely biologically mediated, and produces multiple metabolites. 
The half-life in aerobic soil ranges from about 6 to 15 days and is considerably shorter under anaerobic conditions. One possible explanation for the short anaerobic half-life is the observation that alachlor is rapidly transformed under anoxia into up to 14 degradation products in the presence of iron-bearing ferruginous smectites. The iron in such minerals can be used by certain soil bacteria as an electron acceptor when soils are flooded, thus the process of herbicide transformation by reduced clay is thought to be microbially mediated. Similar observations have been reported for the herbicides trifluralin and atrazine. See also Acetochlor Metolachlor Butachlor References Herbicides Organochlorides Endocrine disruptors Ethers Acetanilides Alkyl-substituted benzenes
Alachlor
[ "Chemistry", "Biology" ]
1,018
[ "Herbicides", "Endocrine disruptors", "Functional groups", "Organic compounds", "Ethers", "Biocides" ]
1,616,932
https://en.wikipedia.org/wiki/Amperometric%20titration
Amperometric titration refers to a class of titrations in which the equivalence point is determined through measurement of the electric current produced by the titration reaction. It is a form of quantitative analysis. Background Consider a solution containing the analyte, A, in the presence of some conductive buffer. If an electrolytic potential is applied to the solution through a working electrode, then the measured current depends (in part) on the concentration of the analyte. Measurement of this current can be used to determine the concentration of the analyte directly; this is a form of amperometry. However, the difficulty is that the measured current depends on several other variables, and it is not always possible to control all of them adequately. This limits the precision of direct amperometry. If the potential applied to the working electrode is sufficient to reduce the analyte, then the concentration of analyte close to the working electrode will decrease. More of the analyte will slowly diffuse into the volume of solution close to the working electrode, restoring the concentration. If the potential applied to the working electrode is great enough (an overpotential), then the concentration of analyte next to the working electrode will depend entirely on the rate of diffusion. In such a case, the current is said to be diffusion limited. As the analyte is reduced at the working electrode, the concentration of the analyte in the whole solution will very slowly decrease; this depends on the size of the working electrode compared to the volume of the solution. What happens if some other species which reacts with the analyte (the titrant) is added? (For instance, chromate ions can be added to precipitate lead ions.) After a small quantity of the titrant (chromate) is added, the concentration of the analyte (lead) decreases due to the reaction with chromate. The current from the reduction of lead ion at the working electrode will decrease. The addition is repeated, and the current decreases again. A plot of the current against volume of added titrant will be a straight line. After enough titrant has been added to react completely with the analyte, the excess titrant may itself be reduced at the working electrode. Since this is a different species with different diffusion characteristics (and a different half-reaction), the plot of current versus added titrant will have a different slope after the equivalence point. This change in slope marks the equivalence point, in the same way that, for instance, the sudden change in pH marks the equivalence point in an acid–base titration. The electrode potential may also be chosen such that the titrant is reduced, but the analyte is not. In this case, the presence of excess titrant is easily detected by the increase in current above the background (charging) current. Advantages The chief advantage over direct amperometry is that the magnitude of the measured current is of interest only as an indicator. Thus, factors that are of critical importance to quantitative amperometry, such as the surface area of the working electrode, completely disappear from amperometric titrations. The chief advantage over other types of titration is the selectivity offered by the electrode potential, as well as by the choice of titrant. For instance, lead ion is reduced at a potential of -0.60 V (relative to the saturated calomel electrode), while zinc ions are not; this allows the determination of lead in the presence of zinc. 
Clearly this advantage depends entirely on the other species present in the sample. See also Titration References Electroanalytical methods Titration
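To illustrate how the equivalence point is read off the titration curve described above, here is a minimal sketch. The volumes, currents, and the choice of which points belong to each linear segment are hypothetical; in practice the analyst picks the segments from the plot.

```python
import numpy as np

# Hypothetical readings: diffusion-limited current (µA) after each addition of
# titrant (mL).  The analyte is consumed up to roughly 5 mL; beyond that the
# excess titrant is itself reduced, so the current rises with a new slope.
volume  = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 6.0, 7.0, 8.0, 9.0])
current = np.array([10.0, 8.1, 6.0, 3.9, 2.1, 1.3, 2.3, 3.2, 4.1])

# Fit straight lines to the points clearly before and clearly after the break.
m1, b1 = np.polyfit(volume[:5], current[:5], 1)
m2, b2 = np.polyfit(volume[5:], current[5:], 1)

# The equivalence point is where the two extrapolated lines intersect.
v_eq = (b2 - b1) / (m1 - m2)
print(f"equivalence point ≈ {v_eq:.2f} mL")   # close to 5 mL for this made-up data
```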
Amperometric titration
[ "Chemistry" ]
745
[ "Electroanalytical methods", "Instrumental analysis", "Titration", "Electroanalytical chemistry" ]
1,616,977
https://en.wikipedia.org/wiki/Coefficient%20of%20haze
The coefficient of haze (also known as smoke shade) is a measurement of visibility interference in the atmosphere. One way to measure this is to draw about 1000 cubic feet of air sample through an air filter and obtain the radiation intensity transmitted through the filter. The coefficient is then calculated from the absorbance (optical density) formula OD = log10(I0/I), where I is the radiation (400 nm light) intensity transmitted through the sampled filter, and I0 is the radiation intensity transmitted through a clean (control) filter. References Further reading Visibility
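As a tiny worked sketch of the absorbance calculation just given (the intensity values are made up; how the resulting optical density is scaled into a reported coefficient-of-haze value depends on the sampling convention and is not shown here):

```python
import math

def optical_density(i_sample, i_clean):
    """Absorbance of the sampled filter relative to the clean control filter."""
    return math.log10(i_clean / i_sample)

# Hypothetical intensities: the sampled filter transmits 80% as much 400 nm
# light as the clean control filter does.
print(optical_density(0.80, 1.00))   # ≈ 0.097
```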
Coefficient of haze
[ "Physics", "Mathematics" ]
96
[ "Quantity", "Visibility", "Wikipedia categories named after physical quantities", "Physical quantities" ]
1,616,990
https://en.wikipedia.org/wiki/Coliform%20index
The coliform index is a rating of the purity of water based on a count of fecal bacteria. It is one of many tests done to assure sufficient water quality. Coliform bacteria are microorganisms that primarily originate in the intestines of warm-blooded animals. By testing for coliforms, especially the well known Escherichia coli (E. coli), which is a thermotolerant coliform, one can determine if the water has possibly been exposed to fecal contamination; that is, whether it has come in contact with human or animal feces. It is important to know this because many disease-causing organisms are transferred from human and animal feces to water, from where they can be ingested by people and infect them. Water that has been contaminated by feces usually contains pathogenic bacteria, which can cause disease. Some types of coliforms cause disease, but the coliform index is primarily used to judge if other types of pathogenic bacteria are likely to be present in the water. The coliform index is used because it is difficult to test for pathogenic bacteria directly. There are many different types of disease-causing bacteria, and they are usually present in low numbers which do not always show up in tests. Thermotolerant coliforms are present in higher numbers than individual types of pathogenic bacteria and they can be tested relatively easily. However, the coliform index is far from perfect. Thermotolerant coliforms can survive in water on their own, especially in tropical regions, so they do not always indicate fecal contamination. Furthermore, they do not give a good indication of how many pathogenic bacteria are present in the water, and they give no idea at all of whether there are pathogenic viruses or protozoa which also cause diseases and are rarely tested for. Therefore, it does not always give accurate or useful results regarding the purity of water. See also Bacteriological water analysis Indicator organism References Gleeson, C. and Gray, N (1997). The Coliform index and waterborne disease: Problems of microbial drinking water assessment E&FN SPON: London. Toxicology Escherichia coli Water quality indicators
Coliform index
[ "Chemistry", "Biology", "Environmental_science" ]
450
[ "Toxicology", "Water pollution", "Model organisms", "Water quality indicators", "Escherichia coli" ]
1,617,188
https://en.wikipedia.org/wiki/Laws%20of%20motion
In physics, a number of noted theories of the motion of objects have been developed. Among the best known are: Classical mechanics Newton's laws of motion Euler's laws of motion Cauchy's equations of motion Kepler's laws of planetary motion General relativity Special relativity Quantum mechanics Motion (physics)
Laws of motion
[ "Physics" ]
64
[ "Physical phenomena", "Motion (physics)", "Space", "Mechanics", "Spacetime" ]
1,617,227
https://en.wikipedia.org/wiki/Hardy%20tool
Hardy tools, also known as anvil tools or bottom tools, are metalworking tools used in anvils. Description A hardy has a square shank, which prevents it from rotating when placed in the anvil's hardy hole. The term "hardy", used alone, refers to a cutting chisel used in the square hole of the anvil. Other bottom tools are identified by function. Typical hardy tools include chisels and bending drifts. They are generally used with a matching top tool. Different hardy tools are used to form and cut metal. The swage is used to give metal a specific cross section, usually round, for final use as nails, bolts, rods or rivets. The fuller is used to stretch or help bend metal, and to make dents and shoulders. Many hardy shapes have corresponding hammer-like tools with head shapes to help form metal, called top tools; for example, a V-shaped swage is used with an inverted V-shaped hammer-like top tool to form iron into an angle shape. Gallery Notes References External links Metalworking hand tools Metallic objects Percussion instruments
Hardy tool
[ "Physics" ]
224
[ "Metallic objects", "Physical objects", "Matter" ]
1,617,558
https://en.wikipedia.org/wiki/Bridgman%27s%20thermodynamic%20equations
In thermodynamics, Bridgman's thermodynamic equations are a basic set of thermodynamic equations, derived using a method of generating multiple thermodynamic identities involving a number of thermodynamic quantities. The equations are named after the American physicist Percy Williams Bridgman. (See also the exact differential article for general differential relationships.) The extensive variables of the system are fundamental. Only the entropy S, the volume V and the four most common thermodynamic potentials will be considered. The four most common thermodynamic potentials are: the internal energy U, the enthalpy H, the Helmholtz free energy A, and the Gibbs free energy G. The first derivatives of the internal energy with respect to its (extensive) natural variables S and V yield the intensive parameters of the system: the pressure P and the temperature T. For a simple system in which the particle numbers are constant, the second derivatives of the thermodynamic potentials can all be expressed in terms of only three material properties: the heat capacity at constant pressure C_P, the coefficient of thermal expansion α, and the isothermal compressibility β_T. Bridgman's equations are a series of relationships between all of the above quantities. Introduction Many thermodynamic equations are expressed in terms of partial derivatives. For example, the expression for the heat capacity at constant pressure is C_P = (∂H/∂T)_P, which is the partial derivative of the enthalpy with respect to temperature while holding pressure constant. We may write this equation as C_P = (∂H)_P / (∂T)_P. This method of rewriting the partial derivative was described by Bridgman (and also Lewis & Randall), and allows the use of the following collection of expressions to express many thermodynamic equations. For example, from the equations below we have (∂H)_P = C_P and (∂T)_P = 1; dividing, we recover the proper expression for C_P. The following summary restates various partial terms in terms of the thermodynamic potentials, the state parameters S, T, P, V, and the three material properties above, which are easily measured experimentally. Bridgman's thermodynamic equations Note that Lewis and Randall use F and E for the Gibbs energy and internal energy, respectively, rather than G and U which are used in this article. See also Table of thermodynamic equations Exact differential References Thermodynamic equations Equations
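The full Bridgman table did not survive extraction here. As a partial illustration only, a few standard entries of the constant-pressure group are shown below (these are well-known textbook results, not a reconstruction of the article's complete table):

```latex
% requires amsmath; a small excerpt of Bridgman's table, constant-P group
\begin{align*}
(\partial T)_P &= 1 \\
(\partial V)_P &= \left(\frac{\partial V}{\partial T}\right)_P = V\alpha \\
(\partial S)_P &= \frac{C_P}{T} \\
(\partial H)_P &= C_P \\
(\partial G)_P &= -S
\end{align*}
```

Dividing any two entries with the same held variable reproduces an ordinary partial derivative, e.g. (∂H)_P / (∂T)_P = C_P and (∂S)_P / (∂T)_P = (∂S/∂T)_P = C_P / T.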
Bridgman's thermodynamic equations
[ "Physics", "Chemistry", "Mathematics" ]
554
[ "Thermodynamic equations", "Equations of physics", "Mathematical objects", "Equations", "Thermodynamics" ]
1,617,571
https://en.wikipedia.org/wiki/Safety%20engineer
Safety engineers focus on the development and maintenance of the integrated management system. They act as quality assurance and conformance specialists. Health and safety engineers are responsible for developing and maintaining safe work systems for employees and others. Scope of role The scope of a safety engineer is the development and maintenance of the integrated management system. Safety engineering professionals must have a thorough understanding of legislation, standards and systems. They need to have a fundamental knowledge of safety, contract law, tort, environmental law, policy, health, construction, computer science, engineering, labour hire, plant hire, communication and psychology. Professional safety studies include construction and engineering, architectural design of systems, fire protection, ergonomics, system and process safety, system safety, safety and health program management, accident investigation and analysis, product safety, construction safety, education and training methods, measurement of safety performance, human behavior, environmental safety and health, and safety, health and environmental laws, regulations and standards. Many safety engineers have backgrounds or advanced study in other disciplines, such as occupational health and safety, construction management and civil engineering, engineering, system engineering / industrial engineering, requirements engineering, reliability engineering, maintenance, human factors, operations, education, physical and social sciences and other fields. This extends their expertise beyond the basics of health and safety. Personality and role They must be personally pleasant, intelligent, and ruthless with themselves and their organisation. In particular, they have to be able to "sell" the failures that they discover to inspectors/auditors, as well as the attendant expense and time needed to correct them. Often the facts can be uncomfortable for the business. Safety engineers have to be ruthless about getting facts right from others, including their fellow managers and engineers. It is common for a safety engineer to consider registers, plant and equipment, and training and competency problems in the same day. Teamwork Safety engineers work in a team that includes other engineering disciplines, project management, estimators, environmentalists, asset owners, regulators, doctors, auditors and lawyers. Safety works well in a true risk matrix system, in which safety is managed under the ISO 31000 risk management standard and integrated into the safety, quality and environment management systems. However, the hierarchy of controls may be more suitable for smaller groups of fewer than 5 workers, as it is easier to digest. See also American Society of Safety Engineers Biomedical engineering Chemical engineering Fire protection engineering Hazard Identification Life-critical Redundancy (engineering) Reliability engineering Safety engineering Safety life cycle Security engineering Zonal Safety Analysis External links and sources Trevor Kletz (1998) Process Plants: A Handbook for Inherently Safer Design CRC American Society of Safety Engineers (official website) Board of Certified Safety Professionals (official website) The Safety and Reliability Society (official website) Safety engineering Engineering occupations
Safety engineer
[ "Engineering" ]
570
[ "Safety engineering", "Systems engineering" ]
1,617,661
https://en.wikipedia.org/wiki/Booth%27s%20multiplication%20algorithm
Booth's multiplication algorithm is a multiplication algorithm that multiplies two signed binary numbers in two's complement notation. The algorithm was invented by Andrew Donald Booth in 1950 while doing research on crystallography at Birkbeck College in Bloomsbury, London. Booth's algorithm is of interest in the study of computer architecture. The algorithm Booth's algorithm examines adjacent pairs of bits of the N-bit multiplier Y in signed two's complement representation, including an implicit bit below the least significant bit, y_(−1) = 0. For each bit y_i, for i running from 0 to N − 1, the bits y_i and y_(i−1) are considered. Where these two bits are equal, the product accumulator P is left unchanged. Where y_i = 0 and y_(i−1) = 1, the multiplicand times 2^i is added to P; and where y_i = 1 and y_(i−1) = 0, the multiplicand times 2^i is subtracted from P. The final value of P is the signed product. The representations of the multiplicand and product are not specified; typically, these are both also in two's complement representation, like the multiplier, but any number system that supports addition and subtraction will work as well. As stated here, the order of the steps is not determined. Typically, it proceeds from LSB to MSB, starting at i = 0; the multiplication by 2^i is then typically replaced by incremental shifting of the P accumulator to the right between steps; low bits can be shifted out, and subsequent additions and subtractions can then be done just on the highest N bits of P. There are many variations and optimizations on these details. The algorithm is often described as converting strings of 1s in the multiplier to a high-order +1 and a low-order −1 at the ends of the string. When a string runs through the MSB, there is no high-order +1, and the net effect is interpretation as a negative of the appropriate value. A typical implementation Booth's algorithm can be implemented by repeatedly adding (with ordinary unsigned binary addition) one of two predetermined values A and S to a product P, then performing a rightward arithmetic shift on P. Let m and r be the multiplicand and multiplier, respectively; and let x and y represent the number of bits in m and r. Determine the values of A and S, and the initial value of P. All of these numbers should have a length equal to (x + y + 1). A: Fill the most significant (leftmost) bits with the value of m. Fill the remaining (y + 1) bits with zeros. S: Fill the most significant bits with the value of (−m) in two's complement notation. Fill the remaining (y + 1) bits with zeros. P: Fill the most significant x bits with zeros. To the right of this, append the value of r. Fill the least significant (rightmost) bit with a zero. Determine the two least significant (rightmost) bits of P. If they are 01, find the value of P + A. Ignore any overflow. If they are 10, find the value of P + S. Ignore any overflow. If they are 00, do nothing. Use P directly in the next step. If they are 11, do nothing. Use P directly in the next step. Arithmetically shift the value obtained in the 2nd step by a single place to the right. Let P now equal this new value. Repeat steps 2 and 3 until they have been done y times. Drop the least significant (rightmost) bit from P. This is the product of m and r. Example Find 3 × (−4), with m = 3 and r = −4, and x = 4 and y = 4: m = 0011, -m = 1101, r = 1100 A = 0011 0000 0 S = 1101 0000 0 P = 0000 1100 0 Perform the loop four times: P = 0000 1100 0. The last two bits are 00. P = 0000 0110 0. Arithmetic right shift. 
P = 0000 0110 0. The last two bits are 00. P = 0000 0011 0. Arithmetic right shift. P = 0000 0011 0. The last two bits are 10. P = 1101 0011 0. P = P + S. P = 1110 1001 1. Arithmetic right shift. P = 1110 1001 1. The last two bits are 11. P = 1111 0100 1. Arithmetic right shift. The product is 1111 0100, which is −12. The above-mentioned technique is inadequate when the multiplicand is the most negative number that can be represented (e.g. if the multiplicand has 4 bits then this value is −8). This is because an overflow then occurs when computing −m, the negation of the multiplicand, which is needed in order to set S. One possible correction to this problem is to extend A, S, and P by one bit each, while they still represent the same number. That is, while −8 was previously represented in four bits by 1000, it is now represented in 5 bits by 1 1000. This then follows the implementation described above, with modifications in determining the bits of A and S; e.g., the value of m, originally assigned to the first x bits of A, will now be extended to x+1 bits and assigned to the first x+1 bits of A. Below, the improved technique is demonstrated by multiplying −8 by 2 using 4 bits for the multiplicand and the multiplier: A = 1 1000 0000 0 S = 0 1000 0000 0 P = 0 0000 0010 0 Perform the loop four times: P = 0 0000 0010 0. The last two bits are 00. P = 0 0000 0001 0. Right shift. P = 0 0000 0001 0. The last two bits are 10. P = 0 1000 0001 0. P = P + S. P = 0 0100 0000 1. Right shift. P = 0 0100 0000 1. The last two bits are 01. P = 1 1100 0000 1. P = P + A. P = 1 1110 0000 0. Right shift. P = 1 1110 0000 0. The last two bits are 00. P = 1 1111 0000 0. Right shift. The product is 11110000 (after discarding the first and the last bit), which is −16. How it works Consider a positive multiplier consisting of a block of 1s surrounded by 0s. For example, 00111110. The product is given by M × 00111110 = M × (2^5 + 2^4 + 2^3 + 2^2 + 2^1) = M × 62, where M is the multiplicand. The number of operations can be reduced to two by rewriting the same as M × (2^6 − 2^1) = M × 62. In fact, it can be shown that any sequence of 1s in a binary number can be broken into the difference of two binary numbers: a block of 1s running from bit position j up to bit position k satisfies 2^k + 2^(k−1) + ... + 2^j = 2^(k+1) − 2^j. Hence, the multiplication by the string of ones in the original number can actually be replaced by simpler operations: adding the multiplicand, shifting the partial product thus formed by the appropriate number of places, and then finally subtracting the multiplicand. It is making use of the fact that it is not necessary to do anything but shift while dealing with 0s in a binary multiplier, and is similar to using the mathematical property that 99 = 100 − 1 while multiplying by 99. This scheme can be extended to any number of blocks of 1s in a multiplier (including the case of a single 1 in a block). Thus, Booth's algorithm follows this old scheme by performing an addition when it encounters the first digit of a block of ones (0 1) and a subtraction when it encounters the end of the block (1 0). This works for a negative multiplier as well. When the ones in a multiplier are grouped into long blocks, Booth's algorithm performs fewer additions and subtractions than the normal multiplication algorithm. 
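A minimal Python sketch of the typical implementation described above (the function name and bit-field layout are my own; A, S and P are held as (x + y + 1)-bit integers, and the loop performs y add/shift steps):

```python
def booth_multiply(m, r, x, y):
    """Multiply signed integers m (x-bit multiplicand) and r (y-bit multiplier)
    with Booth's algorithm, following the A/S/P construction described above."""
    n = x + y + 1                      # width of A, S and P
    mask = (1 << n) - 1

    def field(value, bits):
        # two's-complement representation of `value` in `bits` bits
        return value & ((1 << bits) - 1)

    A = field(m, x) << (y + 1)         # m in the top x bits, zeros elsewhere
    S = field(-m, x) << (y + 1)        # -m in the top x bits, zeros elsewhere
    P = field(r, y) << 1               # zeros, then r, then a trailing 0 bit

    for _ in range(y):
        last_two = P & 0b11
        if last_two == 0b01:           # 01: add A, ignoring overflow
            P = (P + A) & mask
        elif last_two == 0b10:         # 10: add S (i.e. subtract m), ignoring overflow
            P = (P + S) & mask
        # 00 and 11: use P unchanged
        sign = P >> (n - 1)            # arithmetic right shift: keep the sign bit
        P = (P >> 1) | (sign << (n - 1))

    P >>= 1                            # drop the least significant bit
    if P >= 1 << (x + y - 1):          # reinterpret as a signed (x + y)-bit value
        P -= 1 << (x + y)
    return P

print(booth_multiply(3, -4, 4, 4))     # -12, matching the first worked example
print(booth_multiply(-8, 2, 5, 4))     # -16; passing x = 5 applies the one-bit extension
```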
See also Binary multiplier Non-adjacent form Redundant binary representation Wallace tree Dadda multiplier References Further reading External links Radix-4 Booth Encoding Radix-8 Booth Encoding in A Formal Theory of RTL and Computer Arithmetic Booth's Algorithm JavaScript Simulator Implementation in Python 1950 introductions 1950 in London 1950 in science Binary arithmetic Computer arithmetic algorithms Multiplication Birkbeck, University of London
Booth's multiplication algorithm
[ "Mathematics" ]
1,776
[ "Arithmetic", "Binary arithmetic" ]
1,617,684
https://en.wikipedia.org/wiki/Intraosseous%20infusion
Intraosseous infusion (IO) is the process of injecting medication, fluids, or blood products directly into the bone marrow; this provides a non-collapsible entry point into the systemic venous system. The intraosseous infusion technique is used to provide fluids and medication when intravenous access is not available or not feasible. Intraosseous infusions allow for the administered medications and fluids to go directly into the vascular system. The IO route of fluid and medication administration is an alternative to the preferred intravascular route when the latter cannot be established promptly in emergent situations. Intraosseous infusions are used when people have compromised intravenous access and need immediate delivery of life-saving fluids and medications. Background The use of the IV route to administer fluids has been around since the 1830s, and, in 1922, Cecil K. Drinker et al. saw that bone, specifically the sternum, could also be used as a route of administration for emergency purposes. To continue the expansion of knowledge regarding IO administration, a successful blood transfusion took place in 1940 using the sternum, and afterward, in 1941, Tocantins and O'Neill demonstrated successful vascular access using the bone marrow cavity of a long bone in rabbits. Because of Tocantins and O'Neill's success in their experiments with rabbits, human clinical trials were established using mainly the body of the sternum or the manubrium for access. Emanuel Papper and others then continued to advocate, research, and make advances on behalf of the IO administration. Once Papper showed that the bone marrow space could be used with comparable success to administer IV fluids and drugs, intraosseous infusion was popularized during World War II to prevent soldiers' deaths via hemorrhagic shock. While popular in the field during WWII, the use of IO was not seen as a standard for emergencies until the 1980s, and only so for children. With the rise of technology allowing the ease of technique of IO, and a lower risk of complications like bloodstream infections than when using peripheral access, the alternative of IO access has increased throughout the years for adults, as well. IO is now recommended in Advanced Cardiac and Pediatric Advanced Life Support treatment protocols, in cases where access via IV cannot be established on time. Indications Intraosseous access is indicated in emergency situations, such as when a person experiences some type of major trauma like shock, cardiac arrest, severe dehydration, or severe gastrointestinal hemorrhage. IO access can provide the quickest way to rapidly infuse needed medications and fluids in an emergency situation. In people who experience critical trauma and who do not have adequate blood pressure, the IO route doubles the success rate of the peripheral IV route. In addition to the emergency clinical scenario that can call for an IO route to be used, IO access is only indicated when access to peripheral veins is either not possible or delayed. When IV access is either not possible or delayed, other indications for utilizing the IO route include administering contrast if needed for radiology scans and drawing blood for laboratory testing and analysis. Situations that can result in decreased or delayed access to peripheral veins, and thus necessitate the use of an IO route to infuse medications and fluids include circumstances such as burns, fluid accumulation (edema), past IV drug use, obesity, and very low blood pressure. 
Contraindications Having adequate and timely peripheral venous access is a major contraindication to obtaining IO access. Other contraindications include: Fractures in the bone at the site of device insertion Burn damage to the tissues around the site of device insertion Cellulitis or other type of skin infection at the site of device insertion Osteogenesis imperfecta, also referred to as Brittle Bone Disease Osteoporosis Osteomyelitis Osteopetrosis Osteopenia Recent orthopedic surgery A recent failed attempt at device insertion in the same bone Procedure An IO infusion can be used in adult or pediatric populations when traditional methods of vascular access are difficult or would otherwise cause unwanted delay in the administration of medications. The IO site can be used for 24 hours and should be removed as soon as intravenous access has been gained. Prolonged use of an IO site, lasting longer than 24 hours, is associated with osteomyelitis (an infection in the bone). The needle is inserted through the bone's hard cortex and into the soft marrow interior, which allows immediate access to the vascular system. The IO needle is positioned at a 90-degree angle to the injection site and is advanced by manual traction, impact-driven force, or a power driver. Each IO device has different designated insertion locations. The most common site of insertion is the antero-medial aspect of the upper, proximal tibia, as this site lies just under the skin and is easily located. Other insertion sites include the anterior aspect of the femur, the superior iliac crest, the proximal humerus, the proximal tibia, the distal tibia and the sternum (manubrium). Although intravascular access is still the preferred method for medication delivery in the prehospital setting, IO access for adults has become more common. As of 2010, the American Heart Association no longer recommends using the endotracheal tube (ET) for resuscitation drugs, except as a last resort when IV or IO access cannot be gained. ET absorption of medications is poor, and optimal ET drug doses are unknown. IO administration is becoming more common in civilian and military pre-hospital emergency medical services (EMS) systems globally. Intraosseous access has roughly the same absorption rate as IV access, and allows for fluid resuscitation. For example, sodium bicarbonate can be administered IO during a cardiac arrest when IV access is unavailable. High flow rates are attainable with an IO infusion, up to 125 milliliters per minute. This high rate of flow is achieved using a pressure bag to administer the infusion directly into the bone. Large volume IO infusions are known to be painful. 1% lidocaine is used to ease the pain associated with large volume IO infusions in conscious people. Complications Like any medical procedure, intraosseous infusion has some potential complications. In a review by Tyler et al., an analysis across the included studies found the overall complication rate associated with IO infusions to be less than 1% (0.9%). Complications include: Bone fractures from the puncture devices Catheter misplacement, which can lead to extravasation Bone and tissue damage from the puncturing device needle breaking off in the bone Compartment syndrome Osteomyelitis Epiphyseal plate injury in pediatric populations Many of these potential complications can be prevented with simple measures like using good technique and keeping the period of IO infusion short by switching to IV as soon as it becomes feasible. 
Bone fracture complications can be decreased by using modern techniques and requiring more regular training in the methods of intraosseous marrow access for infusion. Extravasation can lead to the more serious complication of compartment syndrome. The risk of developing compartment syndrome can be reduced by medical personnel checking the infusion site regularly for any signs of swelling. Swelling could indicate misplacement of the catheter. Avoiding puncturing the same bone within 48 hours can also lessen the risk of developing this complication. The risk of osteomyelitis, while very low (<1%), can be further lessened by using sterile, hygienic practices and modern devices to make the puncture. Damage to the epiphyseal plate can be avoided by training medical personnel about the proper landmarks to be used for determining puncture sites. Devices Intraosseous devices allow quick and safe access to the vascular system for fluid and drug administration. After proper education and training, medical professionals can obtain vascular access via the IO route of administration by using one of the multiple devices that have been approved by the FDA for 24-hour use. There are several FDA approved IO devices, categorized by their mechanism of action: Power driver: EZ-IO by Arrow Teleflex. The EZ-IO device is a small device that works like a traditional drill and drill bit, consisting of a reusable, battery-powered driver and a disposable, hollow IO needle. A trigger allows the IO needle to enter the bone marrow space to a preset length without any pressure being applied. In the United States, the FDA has approved the use of the EZ-IO device in the proximal tibia and the head of the humerus. Spring-loaded: the Bone Injection Gun (BIG) and the Pyng Medical Corporation FAST 1. The First Access for Shock and Trauma (FAST 1) spring-loaded device is designed for use in the sternum of an adult. The FAST 1 device consists of multiple needles in a probe that penetrates the manubrium once manual pressure is applied. The Bone Injection Gun (BIG) is a small, plastic, disposable, spring-loaded device with a trigger that shoots the IO needle into the IO insertion site, which is most often the proximal tibia. Manual / hand powered: Hollow steel manually inserted needles have been around since the inception of IO administration, and use a removable trocar to aid in the insertion of the needle. Dense adult bone limits their use, but manual devices are commonly used in children because of their safety profile and ease of use, once training has taken place. The three most widely used are: Cardinal Health Jamshidi/Illinois needle Cook Critical Care threaded Sur-Fast needle Cook Critical Care Dieckman modified needle Each device is capable of achieving rapid vascular access, regardless of the mechanism of action, with insertion times comparable to the IV administration route. Special Populations Pediatrics A comparison of intravenous (IV), intramuscular (IM), and intraosseous (IO) routes of administration concluded that the intraosseous (IO) route is preferable to intramuscular (IM) administration and comparable to intravenous (IV) administration in delivering pediatric anaesthetic drugs. Intraosseous infusion (IO) is used in pediatric populations during anesthesia when other means of vascular access, such as central venous catheterization or venous cutdown, are difficult to use or cannot be used. When individuals are severely ill and need "rapid, efficient, and safe delivery of drugs", IO is used. 
Inserting the intraosseous needle into a conscious individual can be very painful. For children, anesthesia is not recommended before this procedure in non-emergency situations; instead, distracting and holding the child is preferred. Intraosseous infusion is used in instances such as "immediate indication/life-threatening emergency, cardiac/respiratory arrest, acute shock, hypothermia, obesity, edema, thermal injury, etc." For children, the preferred sites of IO are the distal tibia, proximal tibia, and distal femur. The distal end of the tibia is the preferred site because it is easy to access and the most reliable. Depending on the procedure, a variety of needles are used for IO. For example, "standard steel hypodermic, butterfly, spinal, trephine, sternal, and standard bone marrow needles are used." Needles that have a short shaft are preferred and safe. For infants up to 6 to 8 months old, 18-gauge needles are used, and for children more than 8 months old, 15- or 16-gauge needles are used. A study by Glaeser et al. concluded that IO access was obtained much faster and more successfully than peripheral and central intravenous access. Another study, by Fiorito et al., observed the safety of IO use during the transportation of critically ill pediatric individuals; they concluded that the use of IO was safe, with successful placement of the IO needle in 78% of cases and complications in only 12% of cases. As in adults, contraindications for IO infusion use in pediatrics include bone diseases such as osteogenesis imperfecta and osteopetrosis, and fractures. Others include cellulitis, burns, and infections at the access site. References External links Medical treatments Routes of administration Dosage forms Emergency medical procedures
Intraosseous infusion
[ "Chemistry" ]
2,544
[ "Pharmacology", "Routes of administration" ]
1,617,904
https://en.wikipedia.org/wiki/Chisanbop
Chisanbop or chisenbop (from Korean chi (ji) finger + sanpŏp (sanbeop) calculation 지산법/指算法), sometimes called Fingermath, is a finger counting method used to perform basic mathematical operations. According to The Complete Book of Chisanbop by Hang Young Pai, chisanbop was created in the 1940s in Korea by Sung Jin Pai and revised by his son Hang Young Pai, who brought the system to the United States in 1977. With the chisanbop method it is possible to represent all numbers from 0 to 99 with the hands, rather than the usual 0 to 10, and to perform the addition, subtraction, multiplication and division of numbers. The system has been described as being easier to use than a physical abacus for students with visual impairments. Basic concepts Each finger has a value of one, while the thumb has a value of five. Therefore each hand can represent the digits 0-9, rather than the usual 0-5. The two hands combine to represent two digits; the right hand is the ones place, and the left hand is the tens place. This way, any number from 0 to 99 can be shown, and it's possible to count up to 99 instead of just 10. The hands can be held above a table, with the fingers pressing down on the table; or the hands can simply be held up, fingers extended, as with the more common practice of 0-10 counting. Adoption in the United States Chisanbop can be used for teaching math, or simply for counting. The results for teaching math have been mixed. A school in Shawnee Mission, Kansas, ran a pilot program with students in 1979. It was found that although they could add large numbers quickly, they could not add them in their heads. The program was dropped. Grace Burton of the University of North Carolina said, "It doesn't teach the basic number facts, only to count faster. Adding and subtracting quickly are only a small part of mathematics." See also Finger binary bi-quinary coded decimal References Further reading External links Interactive demonstration of Chisenbop Instructable: How to count higher than 10 on your fingers, step 3: Chisenbop Abacus Finger-counting
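Since the finger values just encode each decimal digit as a number of thumbs (worth five) plus a number of fingers (worth one), a small sketch makes the scheme concrete (the function name and output format are illustrative only):

```python
def chisanbop(n):
    """Return the chisanbop finger pattern for an integer 0-99.

    Each hand shows one decimal digit: the thumb counts 5 and each of the four
    fingers counts 1, so a digit d is shown with (d // 5) thumbs pressed and
    (d % 5) fingers pressed.  The left hand holds the tens digit and the right
    hand holds the ones digit.
    """
    if not 0 <= n <= 99:
        raise ValueError("chisanbop represents only 0-99")

    def hand(digit):
        return {"thumb": digit // 5, "fingers": digit % 5}

    return {"left (tens)": hand(n // 10), "right (ones)": hand(n % 10)}

print(chisanbop(73))
# {'left (tens)': {'thumb': 1, 'fingers': 2}, 'right (ones)': {'thumb': 0, 'fingers': 3}}
```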
Chisanbop
[ "Mathematics" ]
478
[ "Numeral systems", "Finger-counting" ]
1,617,962
https://en.wikipedia.org/wiki/Brick%20nog
Brick nog (nogging or nogged, beam filling) is a construction technique in which bricks are used to fill the gaps in a wooden frame. Such walls may then be covered with tile, weatherboards, or rendering, or the brick may remain exposed on the interior or exterior of the building. The technique was developed in England from the late 1400s to early 1500s, developing out of methods such as wattle and daub and lath and plaster construction, with the bricks being laid in horizontal courses or a herringbone pattern. Brick used in this way is rarely mechanically fastened to the adjacent wood members, generally being held in place only by the mortar bonds and friction. It is an integral part of the building structure that can also serve as fireproofing, soundproofing, or the final exposed surface of the assembly. Generally, the term brick infill is used instead of nogging in half-timbered construction, and the word nog or noggin has also come to be used to describe timber bracing pieces between wall studs in timber frame construction. References Bricks
Brick nog
[ "Engineering" ]
222
[ "Architecture stubs", "Architecture" ]
1,617,963
https://en.wikipedia.org/wiki/Hora%C8%9Biu%20N%C4%83stase
Horațiu Năstase is a Romanian physicist and professor in the string theory group at the Instituto de Física Teórica of the São Paulo State University in São Paulo, Brazil. He was born in Bucharest, Romania, and finished high school at the Nicolae Bălcescu High School (now Saint Sava National College). He did his undergraduate studies in the Physics Department of the University of Bucharest, graduating in 1995. During his last year there he studied at the Niels Bohr Institute (NBI), Copenhagen University, with a scholarship which continued into the following year. In 1996 he joined the Physics Department of the State University of New York at Stony Brook, from which he received his PhD in May 2000, with a thesis written under the direction of Peter van Nieuwenhuizen. From 2000 to 2002 he was a postdoc at the Institute for Advanced Study in Princeton, after which he was an assistant research professor at Brown University until 2006. From 2007 to 2009 he was an assistant professor at the Global Edge Institute of the Tokyo Institute of Technology in Japan. Since 2010, Năstase has held a permanent position as assistant professor at IFT-UNESP in Brazil. Năstase attracted some media attention in 2005 by arguing that string theory could be tested by the Relativistic Heavy Ion Collider, through the AdS/CFT correspondence. He is also known for his 2002 work with David Berenstein and Juan Maldacena investigating the duality between strings on pp-wave spacetime and "BMN operators" in supersymmetric Yang–Mills theory. Publications References External links Instituto de Física Teórica Website Living people Year of birth missing (living people) Scientists from Bucharest Saint Sava National College alumni University of Bucharest alumni Romanian emigrants to the United States Romanian physicists String theorists Theoretical physicists Stony Brook University alumni Brown University faculty Academic staff of Tokyo Institute of Technology Academic staff of the São Paulo State University
Horațiu Năstase
[ "Physics" ]
393
[ "Theoretical physics", "Theoretical physicists" ]
1,617,971
https://en.wikipedia.org/wiki/Pteridophyte
A pteridophyte is a vascular plant (with xylem and phloem) that reproduces by means of spores. Because pteridophytes produce neither flowers nor seeds, they are sometimes referred to as "cryptogams", meaning that their means of reproduction is hidden. Seed plants evolved from ancestors that were pteridophytes in this broad sense. Ferns, horsetails (often treated as ferns), and lycophytes (clubmosses, spikemosses, and quillworts) are all pteridophytes. However, they do not form a monophyletic group because ferns (and horsetails) are more closely related to seed plants than to lycophytes. "Pteridophyta" is thus no longer a widely accepted taxon, but the term pteridophyte remains in common parlance, as do pteridology and pteridologist as a science and its practitioner, for example by the International Association of Pteridologists and the Pteridophyte Phylogeny Group. Description Pteridophytes (ferns and lycophytes) are free-sporing vascular plants that have a life cycle with alternating, free-living gametophyte and sporophyte phases that are independent at maturity. The body of the sporophyte is well differentiated into roots, stem and leaves. The root system is always adventitious. The stem is either underground or aerial. The leaves may be microphylls or megaphylls. Their other common characteristics include vascular plant apomorphies (e.g., vascular tissue) and land plant plesiomorphies (e.g., spore dispersal and the absence of seeds). Taxonomy Phylogeny Of the pteridophytes, ferns account for nearly 90% of the extant diversity. Smith et al. (2006), the first higher-level pteridophyte classification published in the molecular phylogenetic era, considered the ferns as monilophytes, as follows: Division Tracheophyta (tracheophytes) - vascular plants Subdivision Lycopodiophyta (lycophytes) - less than 1% of extant vascular plants Subdivision Euphyllophytina (euphyllophytes) Infradivision Moniliformopses (monilophytes) Infradivision Spermatophyta - seed plants, ~260,000 species where the monilophytes comprise about 9,000 species, including horsetails (Equisetaceae), whisk ferns (Psilotaceae), and all eusporangiate and all leptosporangiate ferns. Historically, both lycophytes and monilophytes were grouped together as pteridophytes (ferns and fern allies) on the basis of being spore-bearing ("seed-free"). In Smith's molecular phylogenetic study the ferns are characterised by lateral root origin in the endodermis, usually mesarch protoxylem in shoots, a pseudoendospore, a plasmodial tapetum, and sperm cells with 30 to 1,000 flagella. The term "moniliform", as in Moniliformopses and monilophytes, means "bead-shaped" and was introduced by Kenrick and Crane (1997) as a scientific replacement for "fern" (including Equisetaceae); it became established by Pryer et al. (2004). Christenhusz and Chase (2014), in their review of classification schemes, provide a critique of this usage, which they discouraged as irrational. In fact the alternative name Filicopsida was already in use. By comparison, "lycopod" or lycophyte (club moss) means wolf-plant. The term "fern ally" included under Pteridophyta generally refers to vascular spore-bearing plants that are not ferns, including lycopods, horsetails, whisk ferns and water ferns (Marsileaceae, Salviniaceae and Ceratopteris). This is not a natural grouping but rather a convenient term for non-fern groups, and is also discouraged, as is eusporangiate for non-leptosporangiate ferns. 
However, both Infradivision and Moniliformopses are also invalid names under the International Code of Botanical Nomenclature. Ferns, despite forming a monophyletic clade, are formally only considered as four classes (Psilotopsida; Equisetopsida; Marattiopsida; Polypodiopsida), 11 orders and 37 families, without assigning a higher taxonomic rank. Furthermore, within the Polypodiopsida, the largest grouping, a number of informal clades were recognised, including leptosporangiates, core leptosporangiates, polypods (Polypodiales), and eupolypods (including Eupolypods I and Eupolypods II). In 2014 Christenhusz and Chase, summarising the knowledge at that time, treated this group as two separate unrelated taxa in a consensus classification: Lycopodiophyta (lycopods) 1 subclass, 3 orders, each with one family, 5 genera, approx. 1,300 species Polypodiophyta (ferns) 4 subclasses, 11 orders, 21 families, approx. 212 genera, approx. 10,535 species Subclass Equisetidae Warm. Subclass Ophioglossidae Klinge Subclass Marattiidae Klinge Subclass Polypodiidae Cronquist, Takht. & Zimmerm. These subclasses correspond to Smith's four classes, with Ophioglossidae corresponding to Psilotopsida. The two major groups previously included in Pteridophyta are phylogenetically related as follows: Subdivision Pteridophytes consist of two separate but related classes, whose nomenclature has varied. The system put forward by the Pteridophyte Phylogeny Group in 2016, PPG I, is: Class Lycopodiopsida Bartl. – lycophytes: clubmosses, quillworts and spikemosses; 3 extant orders Order Lycopodiales DC. ex Bercht. & J.Presl – clubmosses; 1 extant family Order Isoetales Prantl – quillworts; 1 extant family Order Selaginellales Prantl – spikemosses; 1 extant family Class Polypodiopsida Cronquist, Takht. & W.Zimm. – ferns; 11 extant orders Subclass Equisetidae Warm. – horsetails; 1 extant order, family and genus (Equisetum) Order Equisetales DC. ex Bercht. & J.Presl – 1 extant family Subclass Ophioglossidae Klinge – 2 extant orders Order Psilotales Prantl – whisk ferns; 1 extant family Order Ophioglossales Link – grape ferns; 1 extant family Subclass Marattiidae Klinge – marattioid ferns; 1 extant order Order Marattiales Link – 1 extant family Subclass Polypodiidae Cronquist, Takht. & W.Zimm. – leptosporangiate ferns; 7 extant orders Order Osmundales Link – 1 extant family Order Hymenophyllales A.B.Frank – 1 extant family Order Gleicheniales Schimp. – 3 extant families Order Schizaeales Schimp. – 3 extant families Order Salviniales Link – 2 extant families Order Cyatheales A.B.Frank – 8 extant families Order Polypodiales Link – 26 extant families In addition to these living groups, several groups of pteridophytes are now extinct and known only from fossils. These groups include the Rhyniopsida, Zosterophyllopsida, Trimerophytopsida, the Lepidodendrales and the Progymnospermopsida. Modern studies of the land plants agree that seed plants emerged from within the pteridophytes, closer to ferns than to lycophytes. Therefore, pteridophytes do not form a clade but constitute a paraphyletic grade. Life cycle Just as with bryophytes and spermatophytes (seed plants), the life cycle of pteridophytes involves alternation of generations. This means that a diploid generation (the sporophyte, which produces spores) is followed by a haploid generation (the gametophyte or prothallus, which produces gametes). 
Pteridophytes differ from bryophytes in that the sporophyte is branched and generally much larger and more conspicuous, and from seed plants in that both generations are independent and free-living. The sexuality of pteridophyte gametophytes can be classified as follows: Dioicous: each individual gametophyte is either male (producing antheridia and hence sperm) or female (producing archegonia and hence egg cells). Monoicous: each individual gametophyte produces both antheridia and archegonia and can function both as a male and as a female. Protandrous: the antheridia mature before the archegonia (male first, then female). Protogynous: the archegonia mature before the antheridia (female first, then male). These terms are not the same as monoecious and dioecious, which refer to whether a seed plant's sporophyte bears both male and female gametophytes, i.e., produces both pollen and seeds, or just one of the sexes. See also Embryophyte Fern ally Plant sexuality References Bibliography External links British Pteridological Society Annual Review of Pteridological Research Cryptogams Plants Paraphyletic groups
Pteridophyte
[ "Biology" ]
2,092
[ "Plants", "Cryptogams", "Paraphyletic groups", "Phylogenetics", "Eukaryotes" ]
1,618,271
https://en.wikipedia.org/wiki/Title%20page
The title page of a book, thesis or other written work is the page at or near the front which displays its title, subtitle, author, publisher, and edition, often artistically decorated. (A half title, by contrast, displays only the title of a work.) The title page is one of the most important parts of the "front matter" or "preliminaries" of a book, as the data on it and its verso (together known as the "title leaf") are used to establish the "title proper and usually, though not necessarily, the statement of responsibility and the data relating to publication". This determines the way the book is cited in library catalogs and academic references. The title page often shows the title of the work, the person or body responsible for its intellectual content, and the imprint, which contains the name and address of the book's publisher and its date of publication. Particularly in paperback editions it may contain a shorter title than the cover or lack a descriptive subtitle. Further information about the publication of the book, including its copyright information, is frequently printed on the verso of the title page. Also often included there are the ISBN and a "printer's key", also known as the "number line", which indicates the print run to which the volume belongs. The first printed books, or incunabula, did not have title pages: the text simply begins on the first page, and the book is often identified by the initial words—the incipit—of the text proper. Other older books may have bibliographic information on the colophon at the end of the book. The Bulla Cruciatae contra Turcos (1463) is the earliest use of a title on the first page. Margaret M. Smith's The Title-Page, Its Early Development, 1460-1510 provides the genesis and development of the title page. Contamination of historic books In the 19th century, Paris green and similar arsenic pigments were often used on front and back covers, top, fore and bottom edges, title pages, book decorations, and in printed or manual colorations of illustrations of books. Since February 2024, several German libraries started to block public access to their stock of 19th century books to check for the degree of poisoning. See also Colophon Book design Half title Printer's key References Publications Bertram, Gitta, Nils Büttner, and Claus Zittel, eds. 2021. Gateways to the Book: Frontispieces and Title Pages in Early Modern Europe. Leiden: Brill. Fowler, Alastair. 2017. The Mind of the Book: Pictorial Title Pages. First edition. Oxford, United Kingdom: Oxford University Press. Gilmont, J.-F, Vanautgaerden, A., Deraedt, F. (2008). La page de titre à la Renaissance : treize études suivies de cinquante-quatre pages de titre commentées et d'un lexique des termes relatifs à la page de titre. Brepols. Morison, Stanley, Brooke Crutchley, and Kenneth Day. 1963. The Typographic Book, 1450-1935: A Study of Fine Typography through Five Centuries, Exhibited in Upwards of Three Hundred and Fifty Title and Text Pages Drawn from Presses Working in the European Tradition. Chicago: University of Chicago Press. Smith, Margaret M. (2000). The title-page : its early development, 1460-1510. Oak Knoll. External links Prints & People: A Social History of Printed Pictures, an exhibition catalog from The Metropolitan Museum of Art (fully available online as PDF), which contains material on title pages Glasgow University Library, Special Collections Department, Book of the Month Book design Typography
Title page
[ "Engineering" ]
795
[ "Book design", "Design" ]
1,618,377
https://en.wikipedia.org/wiki/Oceanic%20basin
In hydrology, an oceanic basin (or ocean basin) is anywhere on Earth that is covered by seawater. Geologically, most of the ocean basins are large geologic basins that are below sea level. Most commonly the ocean is divided into basins following the distribution of the continents: the North and South Atlantic (together approximately 75 million km2/ 29 million mi2), North and South Pacific (together approximately 155 million km2/ 59 million mi2), Indian Ocean (68 million km2/ 26 million mi2) and Arctic Ocean (14 million km2/ 5.4 million mi2). Also recognized is the Southern Ocean (20 million km2/ 7 million mi2). All ocean basins collectively cover 71% of the Earth's surface, and together they contain almost 97% of all water on the planet. They have an average depth of almost 4 km (about 2.5 miles). Definitions of boundaries Boundaries based on continents "Limits of Oceans and Seas", published by the International Hydrographic Organization in 1953, is the document that defined the ocean basins as they are largely known today. The main ocean basins are the ones named in the previous section. These main basins are divided into smaller parts. Some examples are: the Baltic Sea (with three subdivisions), the North Sea, the Greenland Sea, the Norwegian Sea, the Laptev Sea, the Gulf of Mexico, the South China Sea, and many more. The limits were set for the convenience of compiling sailing directions but had no geographical or physical basis, and to this day they have no political significance. For instance, the line between the North and South Atlantic is set at the equator. The Antarctic or Southern Ocean, which reaches from 60° south to Antarctica, had been omitted until 2000, but is now also recognized by the International Hydrographic Organization. Nevertheless, since all ocean basins are interconnected, many oceanographers prefer to refer to one single ocean basin instead of multiple ones. Older references (e.g., Littlehales 1930) consider the oceanic basins to be the complement to the continents, with erosion dominating the latter, and the sediments so derived ending up in the ocean basins. This view is supported by the fact that oceans lie lower than continents, so the former serve as sedimentary basins that collect sediment eroded from the continents, known as clastic sediments, as well as precipitation sediments. Ocean basins also serve as repositories for the skeletons of carbonate- and silica-secreting organisms such as coral reefs, diatoms, radiolarians, and foraminifera. More modern sources (e.g., Floyd 1991) regard the ocean basins more as basaltic plains than as sedimentary depositories, since most sedimentation occurs on the continental shelves and not in the geologically defined ocean basins. Definition based on surface connectivity The flow in the ocean is not uniform but varies with depth. Vertical circulation in the ocean is very slow compared to horizontal flow, and observing the deep ocean is difficult. Defining the ocean basins based on connectivity of the entire ocean (depth and width) is therefore not possible. Froyland et al. (2014) instead defined ocean basins based on surface connectivity. This is achieved by creating a Markov chain model of the surface ocean dynamics using short-term trajectory data from a global ocean model. These trajectories are of particles that move only on the surface of the ocean. The model outcome gives the probability that a particle at a certain grid point ends up somewhere else on the ocean's surface.
With the model outcome a matrix can be created from which the eigenvectors and eigenvalues are taken. These eigenvectors show regions of attraction, that is, regions where things on the surface of the ocean (plastic, biomass, water etc.) become trapped. One such region is, for example, the Atlantic garbage patch. With this approach the five main ocean basins are still the North and South Atlantic, North and South Pacific and the Arctic Ocean, but with different boundaries between the basins. These boundaries show the lines of very little surface connectivity between the different regions, which means that a particle on the ocean surface in a certain region is more likely to stay in the same region than to pass over to a different one. Formation of oceanic crusts and basins Earth's structure Depending on the chemical composition and the physical state, the Earth can be divided into three major components: the mantle, the core, and the crust. The crust is referred to as the outside layer of the Earth. It is made of solid rock, mostly basalt and granite. The crust that lies below sea level is known as the oceanic crust, while on land it is known as the continental crust. The former is thinner and is composed of relatively dense basalt, while the latter is less dense and mainly composed of granite. The lithosphere is composed of the crust (oceanic and continental) and the uppermost part of the mantle. The lithosphere is broken into sections called plates. Processes of tectonic plates Tectonic plates move very slowly (5 to 10 cm (2 to 4 inches) per year) relative to each other and interact along their boundaries. This movement is responsible for most of the Earth's seismic and volcanic activity. Depending on how the plates interact with each other, there are three types of boundaries. Convergent boundary: the plates collide, and eventually the denser one slides underneath the lighter one, a process known as subduction. This type of interaction can take place between two sections of oceanic crust, creating a so-called oceanic trench. It can also take place between oceanic and continental crust, forming a mountain range on the continent, like the Andes, and it can take place between two sections of continental crust, resulting in large mountain chains, like the Himalayas. Divergent boundary: the plates move apart from each other. If this occurs on land, a rift is formed, which eventually becomes a rift valley. The most active divergent boundaries lie under the sea. In the ocean, if magma (molten rock) ascends from the mantle and fills the gap created by two diverging plates, a mid-ocean ridge is formed. Transform boundary: also called a transform fault, this occurs when the movement between the plates is horizontal, so no crust is created or destroyed. It can happen both on land and in the sea, but most of the faults are in the oceanic crust. Size of trenches The Earth's deepest trench is the Mariana Trench, which extends for about 2,500 km (1,600 miles) across the seabed. It is near the Mariana Islands, a volcanic archipelago in the West Pacific. Its deepest point is 10,994 m (nearly 7 miles) below the surface of the sea. The Earth's longest trench runs alongside the coast of Peru and Chile, reaching a depth of 8,065 m (26,460 feet) and extending for approximately 5,900 km (3,700 miles). It occurs where the oceanic Nazca plate slides under the continental South American plate and is associated with the upthrust and volcanic activity of the Andes.
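As a concrete illustration of the surface-connectivity definition described above (a Markov chain built from surface trajectories, whose leading eigenvectors mark the basins), the following is a minimal sketch only; it assumes trajectory data are already available as pairs of start and end grid cells, and the function and variable names are illustrative rather than those used by Froyland et al.

import numpy as np

def surface_basins(start_cells, end_cells, n_cells, n_vectors=5):
    # Build a row-stochastic transition matrix P from particle trajectories:
    # P[i, j] estimates the probability that a surface particle starting in
    # grid cell i ends up in grid cell j after the chosen time interval.
    P = np.zeros((n_cells, n_cells))
    for i, j in zip(start_cells, end_cells):
        P[i, j] += 1.0
    row_sums = P.sum(axis=1, keepdims=True)
    P = np.divide(P, row_sums, out=np.zeros_like(P), where=row_sums > 0)

    # Leading left eigenvectors of P highlight regions of attraction
    # (garbage patches, basins): eigenvalues close to 1 correspond to sets
    # of cells that exchange very little mass with the rest of the surface.
    eigvals, eigvecs = np.linalg.eig(P.T)
    order = np.argsort(-eigvals.real)
    return eigvals.real[order[:n_vectors]], eigvecs[:, order[:n_vectors]].real

Grid cells where a given eigenvector takes large values of the same sign can then be grouped into one basin; the boundaries between such groups are the lines of weak surface connectivity mentioned above.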
History and age of oceanic crust The oldest oceanic crust is in the far western equatorial Pacific, east of the Mariana Islands. It is located far away from the oceanic spreading centers where oceanic crust is constantly created, and from the subduction zones where it is destroyed. The oldest crust is estimated to be only around 200 million years old, compared to the age of the Earth, which is 4.6 billion years. 200 million years ago nearly all the land mass formed one large continent called Pangea, which then started to split up. During the splitting of Pangea, some ocean basins shrank, such as the Pacific, while others were created, such as the Atlantic and Arctic basins. The Atlantic Basin began to form around 180 million years ago, when the continent Laurasia (North America and Eurasia) started to drift away from Africa and South America. The Pacific plate grew, and subduction led to a shrinking of its bordering plates. The Pacific plate continues to move northward. Around 130 million years ago the South Atlantic started to form, as South America and Africa started to separate. At around this time India and Madagascar rifted northwards, away from Australia and Antarctica, creating seafloor around Western Australia and East Antarctica. When Madagascar and India separated between 90 and 80 million years ago, the spreading ridges in the Indian Ocean were reorganized. The northernmost part of the Atlantic Ocean was also formed at this time, when Europe and Greenland separated. About 60 million years ago a new rift and oceanic ridge formed between Greenland and Europe, separating them and initiating the formation of oceanic crust in the Norwegian Sea and the Eurasian Basin in the eastern Arctic Ocean. Changes in ocean basins State of the current ocean basins The area occupied by the individual ocean basins has fluctuated in the past due to, among other factors, tectonic plate movements. Therefore, an oceanic basin can be actively changing in size and/or depth, or can be relatively inactive. The elements of an active and growing oceanic basin include an elevated mid-ocean ridge, flanking abyssal hills leading down to abyssal plains, and an oceanic trench. Changes in biodiversity, flooding and other climate variations are linked to sea level, and are reconstructed with different models and observations (e.g., age of oceanic crust). Sea level is affected not only by the volume of the ocean basins, but also by the volume of water in them. Factors that influence the volume of the ocean basins are: Plate tectonics and the volume of mid-ocean ridges: the depth of the seafloor increases with distance from a ridge, as the oceanic lithosphere cools and thickens. The volume of ocean basins can be modeled using reconstructions of plate tectonics and an age-depth relationship (see also Seafloor depth vs age; an illustrative fit is given below). Marine sedimentation: this influences the global mean depth and volume of the ocean, but it is difficult to determine and reconstruct. Passive margins and crustal extensions: to compensate for the extension of continents due to continental rifting, oceanic crust decreases and therefore so does the volume of the ocean basin. However, the increase in continental area leads to a stretching and thinning of the continental crust, much of which ends up below sea level, thus again leading to an increase in ocean basin volume. The Atlantic Ocean and the Arctic Ocean are good examples of active, growing oceanic basins, whereas the Mediterranean Sea is shrinking.
The Pacific Ocean is also an active, shrinking oceanic basin, even though it has both spreading ridges and oceanic trenches. Perhaps the best example of an inactive oceanic basin is the Gulf of Mexico, which formed in Jurassic times and has done nothing but collect sediments since then. The Aleutian Basin is another example of a relatively inactive oceanic basin. The Japan Basin in the Sea of Japan, which formed in the Miocene, is still tectonically active, although recent changes have been relatively mild. See also List of abyssal plains and oceanic basins List of oceanic landforms Trough (geology) Solid Earth Notes Further reading External links Global Solid Earth Topography Physical oceanography Marine geology Coastal and oceanic landforms Oceanographical terminology
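The age-depth relationship mentioned above can be made concrete with a commonly quoted half-space-cooling fit, often attributed to Parsons and Sclater (1977): for young oceanic lithosphere the seafloor depth is roughly d(t) ≈ 2500 m + 350·√t, where t is the crustal age in millions of years. On this fit, 25-million-year-old crust would lie near 2500 + 350·5 = 4250 m depth. The formula is given here only as an illustration of how basin volume can be tied to crustal age; it is not the specific model used by the sources cited above, and it breaks down for crust older than about 70 million years, where subsidence flattens out.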
Oceanic basin
[ "Physics" ]
2,250
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
1,618,529
https://en.wikipedia.org/wiki/Sand%20filter
Sand filters are used as a step in the water purification process. There are three main types: rapid (gravity) sand filters, upward flow sand filters and slow sand filters. All three methods are used extensively in the water industry throughout the world. The first two require the use of flocculant chemicals to work effectively, while slow sand filters can produce very high quality water without the need for chemical aids, removing pathogens (from 90% to >99%, depending on the strain) as well as taste and odour. Sand filters can, apart from being used in water treatment plants, be used for water purification in individual households, as they use materials which are available to most people. History The history of separation techniques reaches far back, as filter materials were already in use during ancient periods. Rushes and genista plants were used to fill sieving vessels that separated solid and liquid materials. The Egyptians also used porous clay vessels to filter drinking water, wine and other liquids. Sand bed filtration concept A sand bed filter is a kind of depth filter. Broadly, there are two types of filters for separating particulate solids from fluids: Surface filters, where particulates are captured on a permeable surface Depth filters, where particulates are captured within a porous body of material. In addition, there are passive and active devices for causing solid-liquid separation, such as settling tanks, self-cleaning screen filters, hydrocyclones and centrifuges. There are several kinds of depth filters, some employing fibrous material and others employing granular materials. Sand bed filters are an example of a granular loose media depth filter. They are usually used to separate small amounts (<10 parts per million or <10 g per cubic metre) of fine solids (<100 micrometres) from aqueous solutions. In addition, they are usually used to purify the fluid rather than to capture the solids as a valuable material. Therefore, they find most of their uses in liquid effluent (wastewater) treatment. Particulate solids capture mechanisms Sand bed filters work by providing the particulate solids with many opportunities to be captured on the surface of a sand grain. As fluid flows through the porous sand along a tortuous route, the particulates come close to sand grains. They can be captured by one of several mechanisms: Direct collision Van der Waals or London force attraction Surface charge attraction Diffusion In addition, particulate solids can be prevented from being captured by surface charge repulsion if the surface charge of the sand is of the same sign (positive or negative) as that of the particulate solid. Furthermore, it is possible for captured particulates to be dislodged, although they may be re-captured at a greater depth within the bed. Finally, a sand grain that is already contaminated with particulate solids may become more attractive to, or more repellent of, additional particulate solids. This can occur if, by adhering to the sand grain, the particulate loses its surface charge and becomes attractive to additional particulates, or, conversely, if the surface charge is retained and repels further particulates from the sand grain. In some applications it is necessary to pre-treat the effluent flowing into a sand bed to ensure that the particulate solids can be captured.
This can be achieved by one of several methods: Adjusting the surface charge on the particles and the sand by changing the pH Coagulation – adding small, highly charged cations (aluminium 3+ or calcium 2+ are usually used) Flocculation – adding small amounts of charged polymer chains which form a bridge either between the particulate solids (making them bigger) or between the particulate solids and the sand. Operating regimes They can be operated either with upward-flowing fluids or downward-flowing fluids, the latter being much more usual. For downward-flowing devices, the fluid can flow under pressure or by gravity alone. Pressure sand bed filters tend to be used in industrial applications and are often referred to as rapid sand bed filters. Gravity-fed units are used in water purification, especially of drinking water, and these filters (slow sand filters) have found wide use in developing countries. Overall, there are several categories of sand bed filter: rapid (gravity) sand filters rapid (pressure) sand bed filters upflow sand filters slow sand filters The sketch illustrates the general structure of a rapid pressure sand filter. The filter sand takes up most of the space in the chamber. It sits either on a nozzle floor or on top of a drainage system, which allows the filtered water to exit. The pre-treated raw water enters the filter chamber at the top, flows through the filter medium, and the effluent drains through the drainage system in the lower part. Large process plants also have a system to distribute the raw water evenly to the filter. In addition, a distribution system controlling the air flow is usually included. It allows a constant air and water distribution and prevents excessive water flows in specific areas. A typical grain-size distribution develops due to the frequent backwashing: grains with smaller diameters are dominant in the upper part of the sand layer, while coarse grains dominate in the lower parts. Two processes influencing the functionality of a filter are ripening and regeneration. At the beginning of a new filter run, the filter efficiency increases simultaneously with the number of captured particles in the medium. This process is called filter ripening. During filter ripening, the effluent might not meet quality criteria and must be reinjected at previous steps in the plant. Regeneration methods allow the reuse of the filter medium by removing the accumulated solids from the filter bed. During backwashing, water (and air) is pumped backwards through the filter system. Backwash water may be partially reinjected ahead of the filter process, and the sewage generated needs to be discarded. The backwashing time is determined either by the turbidity value behind the filter, which must not exceed a set threshold, or by the head loss across the filter medium, which must also not exceed a certain value. Rapid pressure sand bed filter design Smaller sand grains provide more surface area and therefore better decontamination of the inlet water, but they also require more pumping energy to drive the fluid through the bed. As a compromise, most rapid pressure sand bed filters use grains in the range 0.6 to 1.2 mm, although for specialist applications other sizes may be specified. Larger feed particles (>100 micrometres) will tend to block the pores of the bed and turn it into a surface filter that blinds rapidly.
Larger sand grains can be used to overcome this problem, but if significant amounts of large solids are in the feed, they need to be removed upstream of the sand bed filter by a process such as settling. The depth of the sand bed is recommended to be around 0.6–1.8 m (2–6 ft) regardless of the application. This is linked to the maximum throughput discussed below. Guidance on the design of rapid sand bed filters suggests that they should be operated with a maximum flow rate of 9 m3/m2/hr (220 US gal/ft2/hr). Using the required throughput and the maximum flow rate, the required area of the bed can be calculated (a worked example is sketched below). The final key design point is to be sure that the fluid is properly distributed across the bed and that there are no preferred fluid paths where the sand may be washed away and the filter compromised. Rapid pressure sand bed filters are typically operated with a feed pressure of 2 to 5 bar(a) (28 to 70 psi(a)). The pressure drop across a clean sand bed is usually very low. It builds as particulate solids are captured on the bed. Particulate solids are not captured uniformly with depth; more are captured higher up the bed, with the concentration decaying exponentially with depth. This filter type will capture particles down to very small sizes, and does not have a true cut-off size below which particles will always pass. The filter particle size-efficiency curve is U-shaped, with high rates of particle capture for the smallest and largest particles and a dip in between for mid-sized particles. The build-up of particulate solids causes an increase in the pressure lost across the bed for a given flow rate. For a gravity-fed bed, where the pressure available is constant, the flow rate will fall. When the pressure loss or flow is unacceptable and the filter is no longer working effectively, the bed is backwashed to remove the accumulated particles. For a pressurized rapid sand bed filter this occurs when the pressure drop is around 0.5 bar. The backwash fluid is pumped backwards through the bed until it is fluidized and has expanded by up to about 30% (the sand grains start to mix and, as they rub together, they drive off the particulate solids). The smaller particulate solids are washed away with the backwash fluid and are usually captured in a settling tank. The fluid flow required to fluidize the bed is typically 3 to 10 m3/m2/hr, but the backwash is only run for a few minutes. Small amounts of sand can be lost in the backwashing process and the bed may need to be topped up periodically. Slow sand filter design As the name indicates, the speed of filtration is lower in the slow sand filter; however, the biggest difference between slow and rapid sand filters is that the top layer of sand is biologically active, as microbial communities are introduced to the system. The recommended and usual depth of the filter is 0.9 to 1.5 meters. The microbial layer forms within 10–20 days from the start of operation. During the process of filtration, raw water percolates through the porous sand medium, which stops and traps organic material, bacteria, viruses and cysts such as Giardia and Cryptosporidium. The regeneration procedure for slow sand filters is called scraping and is used to mechanically remove the dried-out particles on the filter. However, this process can also be done under water, depending on the individual system. Another limiting factor for the water being treated is turbidity, which for slow sand filters should not exceed about 10 NTU (nephelometric turbidity units).
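Returning to the rapid-filter sizing rule quoted above, the bed-area calculation is simple enough to express directly; the following is a minimal sketch with illustrative numbers, not the design procedure of any particular standard.

def rapid_filter_bed_area(design_flow_m3_per_hr, max_flux_m3_per_m2_hr=9.0):
    # Required plan area of the sand bed so that the hydraulic loading
    # stays at or below the recommended maximum flux of 9 m3/m2/hr.
    return design_flow_m3_per_hr / max_flux_m3_per_m2_hr

# Example: a plant treating 450 m3/hr needs at least 450 / 9 = 50 m2 of bed,
# which might be split into two 25 m2 filters so that one can be backwashed
# while the other stays in service.
area = rapid_filter_bed_area(450.0)

In practice the calculated area is rounded up and divided among several filter cells, precisely so that backwashing one cell does not interrupt production.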
Slow sand filters are a good option for limited-budget operations, as the filtration does not use any chemicals and requires little or no mechanical assistance. However, because of continuously growing populations in communities, slow sand filters are being replaced by rapid sand filters, mostly because of the length of the running period. Characteristics of rapid and slow sand filters Upflow bed filter design The continuously backflushing or upflow sand filter is the newest operating regime. The clearest difference from the previous designs is that the water to be filtered is fed from the bottom and the filtered water is obtained at the top. This reverse flow allows the backwash process to be integrated into the filtration process, thus decreasing the amount of rinse water to be used and reducing cleaning time. The maximum loading is about 5.4 lps/m2 with a constant head loss of 0.6 m. Mixed bed filters Filters that have different filtering layers are called mixed bed filters or multimedia filters. Sand is a common filter material, but anthracite, granular activated carbon (GAC), garnet and ilmenite are also common filter materials. Anthracite is a harder material and has a lower volatile content compared to other coals. Ilmenite and garnet are heavy compared to sand. Garnet consists of several minerals, giving it a shifting red colour. Ilmenite is an oxide of iron and titanium. GAC can be used for adsorption and filtration at the same time. These materials can be used either alone or combined with other media. The filtering layers will always be arranged by density: heavier materials settle at the bottom, while the lighter ones end up on top. Different combinations give different filter classifications and also different porosities throughout the filter, which translate into different pressure drops. A very common arrangement for these filters is anthracite on top, then sand and garnet, with a supporting layer of gravel at the bottom. The depth of these filters is normally between 0.6 and 1 m: above 1 m the pressure drop rises sharply, while less than 0.6 m reduces the thickness of each filtering layer, thus reducing its efficiency. Normal operating flux and pressure drop are between 3-7 gpm/ft2 and 3-7 psi respectively. When the pressure drop increases above 10 psi, a backwash operation is needed, which consists of reversing the flow (water goes upwards) in order to remove the particles trapped in the filtering media; these exit from the top of the filter with the backwash water. A common backwash flux is around 3 times the normal filtering flux (it must be high enough to lift the filtering media so that the trapped particles are released). Monomedia is a single-layered filter, commonly consisting of sand, and is today being replaced by newer technology. Deep-bed monomedia is also a single-layered filter, which consists of either anthracite or GAC. The deep-bed monomedia filter is used when the water quality is consistent, and this gives a longer run time. Dual media (two-layered) filters often contain a sand layer at the bottom with an anthracite or GAC layer on top. Trimedia or mixed media is a filter with three layers. Trimedia filters often have garnet or ilmenite in the bottom layer, sand in the middle and anthracite at the top.
Passing flocculated water through a rapid gravity sand filter strains out the floc and the particles trapped within it, reducing numbers of bacteria and removing most of the solids. The medium of the filter is sand of varying grades. Where taste and odor may be a problem (organoleptic impacts), the sand filter may include a layer of activated carbon to remove such taste and odor. Sand filters become clogged with floc or bioclogged after a period in use. Slow sand filters are then scraped (see above), while rapid sand filters are backwashed or pressure washed to remove the floc. This backwash water is run into settling tanks so that the floc can settle out; it is then disposed of as waste material. The supernatant water is then run back into the treatment process or disposed of as a waste-water stream. In some countries, the sludge may be used as a soil conditioner. Inadequate filter maintenance has been the cause of occasional drinking water contamination. Sand filters are occasionally used in sewage treatment as a final polishing stage. In these filters the sand traps residual suspended material and bacteria and provides a physical matrix for bacterial decomposition of nitrogenous material, including ammonia and nitrates, into nitrogen gas. Sand filters are among the most useful treatment processes, as the filtering process (especially slow sand filtration) combines many of the purification functions in itself. Advantages and limitations One of the advantages of sand filters is that they are useful for different applications. Moreover, the different operating modes (rapid, slow and upflow) allow some flexibility to adapt the filtration method to the needs and requirements of the users. Sand filters achieve high efficiency in color and microorganism removal, and as they are very simple, operating costs are very low. What is more, their simplicity makes automation of the process easier, thus requiring less human intervention. The main limitations of this technology are related to clogging, that is, the obstruction of the filter media, which requires a significant amount of water for the backflush operation, and to the use of chemicals in the pretreatment. Furthermore, slow sand filters usually require larger land areas compared to rapid filters, especially if the raw water is highly contaminated. However, despite these limitations, they offer substantial capabilities, which is why they are used extensively in industry. Challenges in the application process In the process of water treatment, one should be aware of certain factors that might cause serious problems if not handled properly. The aforementioned processes, such as filter ripening and backwashing, influence not only the water quality but also the time needed for the full treatment. Backwashing also reduces the volume of the effluent. If a certain amount of water has to be delivered to, for example, a community, this water loss needs to be considered. In addition, backwashing waste needs to be treated or properly discarded. From the chemical perspective, varying raw water quality and changes in temperature affect the efficiency of the treatment process from the moment the water enters the plant. Considerable uncertainty is involved in the models used to design sand filters. This is due to mathematical assumptions that have to be made, such as all grains being spherical.
The assumption of a spherical shape affects the interpretation of grain size, since the diameter is defined differently for spherical and non-spherical grains. The packing of the grains within the bed is also dependent on the shape of the grains. This in turn affects the porosity and the hydraulic flow. Uses in industry Sand filters are used in various sectors and processes where far-reaching removal of suspended matter from water or wastewater is required. Sectors where sand filtration is implemented include drinking water production, swimming pools, car washes, groundwater treatment, sewage treatment plants (RWZI), slaughterhouses, the fruit and vegetable processing industry, drinks, the food industry, surface treatment of metals, … Cooling water production, drinking water preparation, pre-filtration in active carbon treatments and membrane systems, and the filtration of swimming pool water. Iron removal from groundwater using aeration and sand filtration. Final purification of wastewater, as a follow-up to metal precipitation and sedimentation, to remove residual traces of metal-based sludge. Final purification of wastewater produced in the production of iron, steel and non-ferrous alloys. Sand filtration can be preceded by processes like precipitation/sedimentation, coagulation/flocculation/sedimentation and flotation. Purification of wastewater containing sand-blasting grit and paint particles, at shipyards for example. Also used as final purification (or prior to active carbon filtration) to permit re-use. Used in greenhouse horticulture as a drain-water disinfectant (slow sand filter). Stormwater filtration, used for filtering pollutants from surface runoff. Sewage treatment, where the sand provides a physical matrix to decompose nitrogen-based compounds such as ammonia. See also American Water Works Association Water treatment Water purification Jewell water filter References Water filters Swimming pool equipment Filter
Sand filter
[ "Chemistry" ]
3,836
[ "Water treatment", "Water filters", "Filters" ]
1,618,669
https://en.wikipedia.org/wiki/Mercury-arc%20valve
A mercury-arc valve or mercury-vapor rectifier or (UK) mercury-arc rectifier is a type of electrical rectifier used for converting high-voltage or high-current alternating current (AC) into direct current (DC). It is a type of cold cathode gas-filled tube, but is unusual in that the cathode, instead of being solid, is made from a pool of liquid mercury and is therefore self-restoring. As a result mercury-arc valves, when used as intended, are far more robust and durable and can carry much higher currents than most other types of gas discharge tube. Some examples have been in continuous service, rectifying 50-ampere currents, for decades. Invented in 1902 by Peter Cooper Hewitt, mercury-arc rectifiers were used to provide power for industrial motors, electric railways, streetcars, and electric locomotives, as well as for radio transmitters and for high-voltage direct current (HVDC) power transmission. They were the primary method of high power rectification before the advent of semiconductor rectifiers, such as diodes, thyristors and gate turn-off thyristors (GTOs). These solid state rectifiers have almost completely replaced mercury-arc rectifiers thanks to their lower cost, maintenance, and environmental risk, and higher reliability. History In 1882 Jules Jamin and G. Maneuvrier observed the rectifying properties of a mercury arc. The mercury arc rectifier was invented by Peter Cooper Hewitt in 1902 and further developed throughout the 1920s and 1930s by researchers in both Europe and North America. Before its invention, the only way to convert AC current provided by utilities to DC was by using expensive, inefficient, and high-maintenance rotary converters or motor–generator sets. Mercury-arc rectifiers or "converters" were used for charging storage batteries, arc lighting systems, the DC traction motors for trolleybuses, trams, and subways, and electroplating equipment. The mercury rectifier was used well into the 1970s, when it was finally replaced by semiconductor rectifiers. Operating principles Operation of the rectifier relies on an electrical arc discharge between electrodes in a sealed envelope containing mercury vapor at very low pressure. A pool of liquid mercury acts as a self-renewing cathode that does not deteriorate with time. The mercury emits electrons freely, whereas the carbon anodes emit very few electrons even when heated, so the current of electrons can only pass through the tube in one direction, from cathode to anode, which allows the tube to rectify alternating current. When an arc is formed, electrons are emitted from the surface of the pool, causing ionization of mercury vapor along the path towards the anodes. The mercury ions are attracted towards the cathode, and the resulting ionic bombardment of the pool maintains the temperature of the emission spot, so long as a current of a few amperes continues. While the current is carried by electrons, the positive ions returning to the cathode allow the conduction path to be largely unaffected by the space charge effects which limit the performance of vacuum tubes. Consequently, the valve can carry high currents at low arc voltages (typically 20–30 V) and so is an efficient rectifier. Hot-cathode, gas discharge tubes such as the thyratron may also achieve similar levels of efficiency but heated cathode filaments are delicate and have a short operating life when used at high current. 
The temperature of the envelope must be carefully controlled, since the behaviour of the arc is determined largely by the vapor pressure of the mercury, which in turn is set by the coolest spot on the enclosure wall. A typical design controls the envelope temperature so as to maintain a mercury vapor pressure of about 7 millipascals. The mercury ions emit light at characteristic wavelengths, the relative intensities of which are determined by the pressure of the vapor. At the low pressure within a rectifier, the light appears pale blue-violet and contains much ultraviolet light. Construction The construction of a mercury arc valve takes one of two basic forms: the glass-bulb type and the steel-tank type. Steel-tank valves were used for higher current ratings, above approximately 500 A. Glass-bulb valves The earliest type of mercury vapor electric rectifier consists of an evacuated glass bulb with a pool of liquid mercury sitting in the bottom as the cathode. Over it curves the glass bulb, which condenses the mercury that is evaporated as the device operates. The glass envelope has one or more arms with graphite rods as anodes. Their number depends on the application, with one anode usually provided per phase. The shape of the anode arms ensures that any mercury that condenses on the glass walls drains back into the main pool quickly, to avoid providing a conductive path between the cathode and the respective anode. Glass envelope rectifiers can handle hundreds of kilowatts of direct-current power in a single unit. A six-phase rectifier rated 150 amperes has a glass envelope approximately 600 mm (24 inches) high by 300 mm (12 inches) outside diameter. These rectifiers will contain several kilograms of liquid mercury. The large size of the envelope is required due to the low thermal conductivity of glass. Mercury vapor in the upper part of the envelope must dissipate heat through the glass envelope in order to condense and return to the cathode pool. Some glass tubes were immersed in an oil bath to better control the temperature. The current-carrying capacity of a glass-bulb rectifier is limited partly by the fragility of the glass envelope (the size of which increases with rated power) and partly by the size of the wires fused into the glass envelope for connection of the anodes and cathode. Development of high-current rectifiers required leadwire materials and glass with very similar coefficients of thermal expansion in order to prevent leakage of air into the envelope. Current ratings of up to 500 A had been achieved by the mid-1930s, but most rectifiers for current ratings above this were realised using the more robust steel-tank design. Steel-tank valves For larger valves, a steel tank with ceramic insulators for the electrodes is used, with a vacuum pump system to counteract slight leakage of air into the tank around imperfect seals. Steel-tank valves, with water cooling for the tank, were developed with current ratings of several thousand amps. Like glass-bulb valves, steel-tank mercury arc valves were built with only a single anode per tank (a type also known as the excitron) or with multiple anodes per tank. Multiple-anode valves were usually used for multi-phase rectifier circuits (with 2, 3, 6 or 12 anodes per tank), but in HVDC applications, multiple anodes were often simply connected in parallel in order to increase the current rating. Starting (ignition) A conventional mercury-arc rectifier is started by a brief high-voltage arc within the rectifier, between the cathode pool and a starting electrode.
The starting electrode is brought into contact with the pool and allowed to pass current through an inductive circuit. The contact with the pool is then broken, resulting in a high emf and an arc discharge. The momentary contact between the starting electrode and the pool may be achieved by a number of methods, including: allowing an external electromagnet to pull the electrode into contact with the pool; the electromagnet can also serve as the starting inductance, arranging the electromagnet to tip the bulb of a small rectifier, just enough to allow mercury from the pool to reach the starting electrode, providing a narrow neck of mercury between two pools, and by passing a very high current at negligible voltage through the neck, displacing the mercury by magnetostriction, thus opening the circuit, Passing current into the mercury pool through a bimetallic strip, which warms up under the heating action of the current and bends in such a way as to break the contact with the pool. Excitation Since momentary interruptions or reductions of output current may cause the cathode spot to extinguish, many rectifiers incorporate an additional electrode to maintain an arc whenever the plant is in use. Typically, a two or three phase supply of a few amperes passes through small excitation anodes. A magnetically shunted transformer of a few hundred VA rating is commonly used to provide this supply. This excitation or keep-alive circuit was necessary for single-phase rectifiers such as the excitron and for mercury-arc rectifiers used in the high-voltage supply of radiotelegraphy transmitters, as current flow was regularly interrupted every time the Morse key was released. Grid control Both glass and metal envelope rectifiers may have control grids inserted between the anode and cathode. Installation of a control grid between the anode and the pool cathode allows control of the conduction of the valve, thereby giving control of the mean output voltage produced by the rectifier. Start of the current flow can be delayed past the point at which the arc would form in an uncontrolled valve. This allows the output voltage of a valve group to be adjusted by delaying the firing point, and allows controlled mercury-arc valves to form the active switching elements in an inverter converting direct current into alternating current. To maintain the valve in the non-conducting state, a negative bias of a few volts or tens of volts is applied to the grid. As a result, electrons emitted from the cathode are repelled away from the grid, back towards the cathode, and so are prevented from reaching the anode. With a small positive bias applied to the grid, electrons pass through the grid, towards the anode, and the process of establishing an arc discharge can commence. However, once the arc has been established, it cannot be stopped by grid action, because the positive mercury ions produced by ionisation are attracted to the negatively charged grid and effectively neutralise it. The only way of stopping conduction is to make the external circuit force the current to drop below a (low) critical current. Although grid-controlled mercury-arc valves bear a superficial resemblance to triode valves, mercury-arc valves cannot be used as amplifiers except at extremely low values of current, well below the critical current needed to maintain the arc. 
Anode grading electrodes Mercury-arc valves are prone to an effect called arc-back (or backfire), whereby the valve conducts in the reverse direction when the voltage across it is negative. Arc-backs can be damaging or destructive to the valve, as well as creating high short-circuit currents in the external circuit, and are more prevalent at higher voltages. One example of the problems caused by backfire occurred in 1960 subsequent to the electrification of the Glasgow North Suburban Railway where steam services had to be re-introduced after several mishaps. For many years this effect limited the practical operating voltage of mercury-arc valves to a few kilovolts. The solution was found to be to include grading electrodes between the anode and control grid, connected to an external resistor-capacitor divider circuit. Dr. Uno Lamm conducted pioneering work at ASEA in Sweden on this problem throughout the 1930s and 1940s, leading to the first truly practical mercury-arc valve for HVDC transmission, which was put into service on the 20 MW, 100 kV HVDC link from mainland Sweden to the island of Gotland in 1954. Uno Lamm's work on high voltage mercury-arc valves led him to be known as the "Father of HVDC" power transmission and inspired the IEEE to dedicate an award named after him, for outstanding contributions in the field of HVDC. Mercury arc valves with grading electrodes of this type were developed up to voltage ratings of 150 kV. However, the tall porcelain column required to house the grading electrodes was more difficult to cool than the steel tank at cathode potential, so the usable current rating was limited to about 200–300 A per anode. Therefore, Mercury arc valves for HVDC were often constructed with four or six anode columns in parallel. The anode columns were always air-cooled, with the cathode tanks either water-cooled or air-cooled. Circuits Single-phase mercury-arc rectifiers were rarely used because the current dropped and the arc could be extinguished when the AC voltage changed polarity. The direct current produced by a single-phase rectifier thus contained a varying component (ripple) at twice the power supply frequency, which was undesirable in many applications for DC. The solution was to use two-, three-, or even six-phase AC power supplies so that the rectified current would maintain a more constant voltage level. Polyphase rectifiers also balanced the load on the supply system, which is desirable for reasons of system performance and economy. Most applications of mercury-arc valves for rectifiers used full-wave rectification with separate pairs of anodes for each phase. In full-wave rectification both halves of the AC waveform are utilised. The cathode is connected to the + side of the DC load, the other side being connected to the center tap of the transformer secondary winding, which always remains at zero potential with respect to ground or earth. For each AC phase, a wire from each end of that phase winding is connected to a separate anode "arm" on the mercury-arc rectifier. When the voltage at each anode becomes positive, it will begin to conduct through the mercury vapor from the cathode. As the anodes of each AC phase are fed from opposite ends of the centre tapped transformer winding, one will always be positive with respect to the center tap and both halves of the AC Waveform will cause current to flow in one direction only through the load. This rectification of the whole AC waveform is thus called full-wave rectification. 
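As a rough illustration of why polyphase operation gives smoother and higher direct-current output (see the following paragraph on three-phase operation), the ideal mean output of an m-pulse rectifier fed from a supply with peak voltage Vpk is Vdc = Vpk · (m/π) · sin(π/m). This is a standard textbook result for idealised rectifiers rather than anything specific to mercury-arc valves; the small sketch below simply evaluates it.

import math

def ideal_mean_dc(v_peak, pulses):
    # Mean DC output of an idealised m-pulse rectifier (no arc drop, no
    # commutation overlap), relative to the peak of the incoming voltage.
    return v_peak * (pulses / math.pi) * math.sin(math.pi / pulses)

# For a 100 V peak supply: 2-pulse (single-phase full-wave) gives about 63.7 V,
# 6-pulse (three-phase full-wave) gives about 95.5 V, with far less ripple.
for m in (2, 3, 6, 12):
    print(m, round(ideal_mean_dc(100.0, m), 1))

In a real valve, the arc drop of roughly 20–30 V mentioned earlier is subtracted from this ideal figure.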
With three-phase alternating current and full-wave rectification, six anodes were used to provide a smoother direct current. Three phase operation can improve the efficiency of the transformer as well as providing smoother DC current by enabling two anodes to conduct simultaneously. During operation, the arc transfers to the anodes at the highest positive potential (with respect to the cathode). In HVDC applications, a full-wave three-phase bridge rectifier or Graetz-bridge circuit was usually used, each valve accommodated in a single tank. Applications As solid-state metal rectifiers became available for low-voltage rectification in the 1920s, mercury arc tubes became limited to higher voltage and especially high-power applications. Mercury-arc valves were widely used until the 1960s for the conversion of alternating current into direct current for large industrial uses. Applications included power supply for streetcars, electric railways, and variable-voltage power supplies for large radio transmitters. Mercury-arc stations were used to provide DC power to legacy Edison-style DC power grids in urban centers until the 1950s. In the 1960s, solid-state silicon devices, first diodes and then thyristors, replaced all lower-power and lower voltage rectifier applications of mercury arc tubes. Several electric locomotives, including the New Haven EP5 and the Virginian EL-C, carried ignitrons on board to rectify incoming AC to traction motor DC. One of the last major uses of mercury arc valves was in HVDC power transmission, where they were used in many projects until the early 1970s, including the HVDC Inter-Island link between the North and South Islands of New Zealand and the HVDC Kingsnorth link from Kingsnorth power station to London. However, starting about 1975, silicon devices have made mercury-arc rectifiers largely obsolete, even in HVDC applications. The largest ever mercury-arc rectifiers, built by English Electric, were rated at 150 kV, 1800 A and were used until 2004 at the Nelson River DC Transmission System high-voltage DC-power-transmission project. The valves for the Inter-Island and Kingsnorth projects used four anode columns in parallel, while those of the Nelson River project used six anode columns in parallel in order to obtain the necessary current rating. The Inter-Island link was the last HVDC transmission scheme in operation using mercury arc valves. It was formally decommissioned on 1 August 2012. The mercury arc valve converter stations of the New Zealand scheme were replaced by new thyristor converter stations. A similar mercury arc valve scheme, the HVDC Vancouver Island link was replaced by a three-phase AC link in 2014. Mercury arc valves remain in use in some South African mines and Kenya (at Mombasa Polytechnic - Electrical & Electronic department). Mercury arc valves were used extensively in DC power systems on London Underground, and two were still observed to be in operation in 2000 at the disused deep-level air-raid shelter at Belsize Park. After they were no longer needed as shelters, Belsize Park and several other deep shelters were used as secure storage, particularly for music and television archives. This led to the mercury-arc rectifier at the Goodge Street shelter featuring in an early episode of Doctor Who as an alien brain, cast for its "eerie glow". Auckland's Museum Of Transport And Technology (MOTAT) still employs a Mercury arc valve to provide power to the tram which carries visitors between its two sites. 
Others Special types of single-phase mercury-arc rectifiers are the Ignitron and the Excitron. The Excitron is similar to the other types of valve described above but depends critically on the existence of an excitation anode to maintain an arc discharge during the half-cycle when the valve is not conducting current. The Ignitron dispenses with excitation anodes by igniting the arc each time conduction is required to start. In this way, ignitrons also avoid the need for control grids. In 1919, the book "Cyclopedia of Telephony & Telegraphy Vol. 1" described an amplifier for telephone signals that used a magnetic field to modulate an arc in a mercury rectifier tube. This was never commercially important. Environmental hazard Mercury compounds are toxic and highly persistent, and present a danger to humans and the environment. The use of large quantities of mercury in fragile glass envelopes presents a hazard of potential release of mercury to the environment should the glass bulb be broken. Some HVDC converter stations have required extensive clean-up to eliminate traces of mercury emitted from the station over its service life. Steel tank rectifiers frequently required vacuum pumps, which continually emitted small amounts of mercury vapor. References Further reading ABB page on the history of high voltage direct current transmission The Tube Collector Virtual Museum. Description of mercury arc rectifiers and further links, including photographs 1903 illustrated article – A Great Electrical Discovery Testing The 50 Year Old Mercury Arc Rectifier – video with circuit diagrams, closeup views, explanation of operation Electric arcs Electric power systems components Electric power conversion Gas-filled tubes High-voltage direct current Mercury (element) Rectifiers
Mercury-arc valve
[ "Physics" ]
4,038
[ "Plasma phenomena", "Physical phenomena", "Electric arcs" ]
1,618,671
https://en.wikipedia.org/wiki/Quadrature%20%28geometry%29
In mathematics, particularly in geometry, quadrature (also called squaring) is a historical process of drawing a square with the same area as a given plane figure or computing the numerical value of that area. A classical example is the quadrature of the circle (or squaring the circle). Quadrature problems served as one of the main sources of problems in the development of calculus. They introduce important topics in mathematical analysis. History Antiquity Greek mathematicians understood the determination of the area of a figure as the process of geometrically constructing a square having the same area (squaring), thus the name quadrature for this process. The Greek geometers were not always successful (see squaring the circle), but they did carry out quadratures of some figures whose sides were not simply line segments, such as the lune of Hippocrates and the parabola. By a certain Greek tradition, these constructions had to be performed using only a compass and straightedge, though not all Greek mathematicians adhered to this dictum. For a quadrature of a rectangle with sides a and b, it is necessary to construct a square with side √(ab) (the geometric mean of a and b). For this purpose it is possible to use the following: if one draws the circle whose diameter is made by joining line segments of lengths a and b, then the height (BH in the diagram) of the line segment drawn perpendicular to the diameter, from the point where the segments join to the point where it crosses the circle, equals the geometric mean of a and b. A similar geometrical construction solves the problems of quadrature of a parallelogram and of a triangle. Problems of quadrature for curvilinear figures are much more difficult. The quadrature of the circle with compass and straightedge was proved in the 19th century to be impossible. Nevertheless, for some figures a quadrature can be performed. The quadratures of the surface of a sphere and of a parabola segment discovered by Archimedes became the highest achievement of analysis in antiquity. The area of the surface of a sphere is equal to four times the area of the circle formed by a great circle of this sphere. The area of a segment of a parabola determined by a straight line cutting it is 4/3 the area of a triangle inscribed in this segment. For the proofs of these results, Archimedes used the method of exhaustion attributed to Eudoxus. Medieval mathematics In medieval Europe, quadrature meant the calculation of area by any method. Most often the method of indivisibles was used; it was less rigorous than the geometric constructions of the Greeks, but it was simpler and more powerful. With its help, Galileo Galilei and Gilles de Roberval found the area of a cycloid arch, Grégoire de Saint-Vincent investigated the area under a hyperbola (Opus Geometricum, 1647), and Alphonse Antonio de Sarasa, de Saint-Vincent's pupil and commentator, noted the relation of this area to logarithms. Integral calculus John Wallis algebrised this method; he wrote in his Arithmetica Infinitorum (1656) some series which are equivalent to what is now called the definite integral, and he calculated their values. Isaac Barrow and James Gregory made further progress: quadratures for some algebraic curves and spirals. Christiaan Huygens successfully performed a quadrature of the surface area of some solids of revolution. The quadrature of the hyperbola by Gregoire de Saint-Vincent and A. A. de Sarasa provided a new function, the natural logarithm, of critical importance.
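The rectangle construction described above can be justified in one line with the standard similar-triangle argument; writing A and C for the ends of the diameter (the text already names B and H): if the diameter AC is split into segments AH = a and HC = b at the foot H of the perpendicular, and B is the point where the perpendicular meets the circle, then the angle at B is a right angle (the angle in a semicircle), so triangles ABH and BCH are similar; hence BH/AH = HC/BH, giving BH² = ab and BH = √(ab), the side of the required square.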
With the invention of integral calculus came a universal method for area calculation. In response, the term quadrature has become traditional, and instead the modern phrase finding the area is more commonly used for what is technically the computation of a univariate definite integral. See also Gaussian quadrature Hyperbolic angle Numerical integration Quadratrix Tanh-sinh quadrature Notes References Boyer, C. B. (1989) A History of Mathematics, 2nd ed. rev. by Uta C. Merzbach. New York: Wiley, (1991 pbk ed. ). Eves, Howard (1990) An Introduction to the History of Mathematics, Saunders, , Christiaan Huygens (1651) Theoremata de Quadratura Hyperboles, Ellipsis et Circuli Jean-Etienne Montucla (1873) History of the Quadrature of the Circle, J. Babin translator, William Alexander Myers editor, link from HathiTrust. Christoph Scriba (1983) "Gregory's Converging Double Sequence: a new look at the controversy between Huygens and Gregory over the 'analytical' quadrature of the circle", Historia Mathematica 10:274–85. Integral calculus History of mathematics History of geometry Mathematical terminology
Quadrature (geometry)
[ "Mathematics" ]
1,021
[ "History of geometry", "Calculus", "Geometry", "nan", "Integral calculus" ]
1,618,711
https://en.wikipedia.org/wiki/Epsilon%20Reticuli
Epsilon Reticuli, Latinized from ε Reticuli, is a double star approximately 60 light-years away in the southern constellation of Reticulum. The brighter member is visible to the naked eye with an apparent visual magnitude of 4.44. The primary component is an orange subgiant, while the secondary is a white dwarf. The two stars share a common motion through space and hence most likely form a binary star system. The brighter star should be easily visible without optical aid under dark skies in the southern hemisphere. In 2000, an extrasolar planet was confirmed to be orbiting the primary star in the system. Star system The primary component, Epsilon Reticuli A, is a subgiant star with a stellar classification of K2III–IV, indicating that the fusing of hydrogen in its core is coming to an end and it is in the process of expanding to a red giant. With an estimated mass of about 1.5 times the solar mass, it was probably an F0 star while in the main sequence. It has a radius of 3.18 times the solar radius, a luminosity of 6.2 the solar luminosity and an effective temperature of 4,961 K. As is typical of stars with giant planets, it has a high metallicity, with an iron abundance 82% larger than the Sun's. The secondary star, Epsilon Reticuli B, is known as a visual companion since 1930, and in 2006 was confirmed as a physical companion on the basis of its common proper motion. It was noted that its color indices are incompatible with a main sequence object, but are consistent with a white dwarf. This was confirmed in 2007 by spectroscopic observations, that showed the absorption spectrum typical of a hydrogen-rich white dwarf (spectral type DA). This star has a visual apparent magnitude of 12.5 and is located at a separation of 13 arcseconds, corresponding to a projected physical separation of 240 AU and an orbital period of more than 2,700 years. It is estimated that Epsilon Reticuli B has a mass of and a radius of . Originally, when it was in the main sequence, it probably had a spectral type of A5 and a mass of , and spent 1.3 billion years on this phase. From a measured effective temperature of 15,310 K, it has a cooling age (time spent as a white dwarf) of 200 million years, corresponding to a total age of 1.5 billion years. This age is inconsistent with the primary estimated age of 2.8 billion years, which suggests a smaller mass for the white dwarf or a larger mass for the primary. Planetary system On December 11, 2000, a team of astronomers announced the discovery of a planet Epsilon Reticuli b. With a minimum mass of 1.17 that of Jupiter, the planet moves around Epsilon Reticuli with an average separation of 1.16 AU. The eccentricity of the planet is extremely low (at 0.06), and it completes an orbit every 418 days (or 1.13 years). Stability analysis shows that the planet's Lagrangian points would be stable enough to host Earth-sized planets, though as yet no trojan planets have been detected in this system. References External links K-type subgiants White dwarfs Binary stars Planetary systems with one confirmed planet Reticulum Reticuli, Epsilon 1355 Durchmusterung objects Gliese and GJ objects 027442 019921
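As a rough consistency check of the figures quoted above (not a calculation taken from the discovery papers), Kepler's third law in solar units, P² = a³/M, can be applied to the binary: if the true semi-major axis is at least the projected separation of 240 AU, and the combined mass of the subgiant and the white dwarf is assumed to be roughly 2 solar masses, then P ≳ √(240³/2) ≈ 2,600 years. This is in line with the quoted orbital period of more than 2,700 years, since the true semi-major axis of a visual binary is generally somewhat larger than its projected separation.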
Epsilon Reticuli
[ "Astronomy" ]
713
[ "Reticulum", "Constellations" ]
1,618,791
https://en.wikipedia.org/wiki/Melting%20points%20of%20the%20elements%20%28data%20page%29
Melting point In the following table, the use row is the value recommended for use in other Wikipedia pages in order to maintain consistency across content. Notes All values at standard pressure (101.325 kPa) unless noted. Triple point temperature values (marked "tp") are not valid at standard pressure. References WEL As quoted at http://www.webelements.com/ from these sources: A.M. James and M.P. Lord in Macmillan's Chemical and Physical Data, Macmillan, London, UK, 1992 G.W.C. Kaye and T.H. Laby in Tables of physical and chemical constants, Longman, London, UK, 15th edition, 1993 Unit is K. CRC Unit is °C LNG As quoted from: J.A. Dean (ed), Lange's Handbook of Chemistry (15th Edition), McGraw-Hill, 1999; Section 3; Table 3.2 Physical Constants of Inorganic Compounds Unit is °C Hoffer et al. Lavrukhina et al. Holman et al. Not used in this table. Table See also Boiling points of the elements (data page) List of chemical elements Properties of chemical elements Chemical element data pages
Melting points of the elements (data page)
[ "Chemistry" ]
252
[ "Properties of chemical elements", "Chemical element data pages", "Chemical data pages" ]
1,618,792
https://en.wikipedia.org/wiki/Boiling%20points%20of%20the%20elements%20%28data%20page%29
This is a list of the various reported boiling points for the elements, with recommended values to be used elsewhere on Wikipedia. Boiling points, Master List format In the following table, the use row is the value recommended for use in other Wikipedia pages in order to maintain consistency across content. Periodic Table format Notes Unless noted, all values refer to the normal boiling point at standard pressure (101.325 kPa). References Zhang et al. WebEl As quoted at http://www.webelements.com/ from these sources: CRC Lange As quoted from: Otozai et al. Lavrukhina et al. See also Melting points of the elements (data page) Densities of the elements (data page) Properties of chemical elements Chemical element data pages
Boiling points of the elements (data page)
[ "Chemistry" ]
159
[ "Properties of chemical elements", "Chemical data pages", "Chemical element data pages" ]
1,618,887
https://en.wikipedia.org/wiki/Square%20yard
The square yard (Northern India: gaj, Pakistan: gaz) is an imperial unit and U.S. customary unit of area. It is in widespread use in most of the English-speaking world, particularly the United States, United Kingdom, Canada, Pakistan and India. It is defined as the area of a square with sides of one yard (three feet, thirty-six inches, 0.9144 metres) in length. Symbols There is no universally agreed symbol but the following are used: square yards, square yard, square yds, square yd sq yards, sq yard, sq yds, sq yd, sq.yd. yards/-2, yard/-2, yds/-2, yd/-2 yards^2, yard^2, yds^2, yd^2 yards², yard², yds², yd² Conversions One square yard is equivalent to: 1,296 square inches 9 square feet ≈0.00020661157 acres ≈0.000000322830579 square miles 836 127.36 square millimetres 8 361.2736 square centimetres 0.83612736 square metres 0.000083612736 hectares 0.00000083612736 square kilometres 1.00969 gaj See also 1 E-1 m² for a comparison with other areas Area (geometry) Conversion of units Cubic yard Metrication in Canada Orders of magnitude (area) Square (algebra), Square root References Units of area Imperial units Customary units of measurement in the United States
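Because the international yard is defined as exactly 0.9144 m, the metric conversions above are exact. A minimal illustrative sketch, not part of the article itself:

YARD_IN_METRES = 0.9144                                  # exact, by definition of the international yard
SQUARE_YARD_IN_SQUARE_METRES = YARD_IN_METRES ** 2       # 0.83612736, as in the list above

def square_yards_to_square_metres(square_yards):
    return square_yards * SQUARE_YARD_IN_SQUARE_METRES

print(SQUARE_YARD_IN_SQUARE_METRES)                      # 0.83612736
print(square_yards_to_square_metres(100))                # 83.612736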
Square yard
[ "Mathematics" ]
326
[ "Quantity", "Units of area", "Units of measurement" ]
1,618,989
https://en.wikipedia.org/wiki/Sphaleron
A sphaleron ( "slippery") is a static (time-independent) solution to the electroweak field equations of the Standard Model of particle physics, and is involved in certain hypothetical processes that violate baryon and lepton numbers. Such processes cannot be represented by perturbative methods such as Feynman diagrams, and are therefore called non-perturbative. Geometrically, a sphaleron is a saddle point of the electroweak potential (in infinite-dimensional field space). This saddle point rests at the top of a barrier between two different low-energy equilibria of a given system; the two equilibria are labeled with two different baryon numbers. One of the equilibria might consist of three baryons; the other, alternative, equilibrium for the same system might consist of three antileptons. In order to cross this barrier and change the baryon number, a system must either tunnel through the barrier (in which case the transition is an instanton-like process) or must for a reasonable period of time be brought up to a high enough energy that it can classically cross over the barrier (in which case the process is termed a "sphaleron" process and can be modeled with an eponymous sphaleron particle). In both the instanton and sphaleron cases, the process can only convert groups of three baryons into three antileptons (or three antibaryons into three leptons) and vice versa. This violates conservation of baryon number and lepton number, but the difference B − L is conserved. The minimum energy required to trigger the sphaleron process is believed to be around 10 TeV; however, sphalerons cannot be produced in existing LHC collisions, because although the LHC can create collisions of energy 10 TeV and greater, the generated energy cannot be concentrated in a manner that would create sphalerons. A sphaleron is similar to the midpoint of the instanton, so it is non-perturbative. This means that under normal conditions sphalerons are unobservably rare. However, they would have been more common at the higher temperatures of the early universe. Baryogenesis Since a sphaleron may convert baryons to antileptons and antibaryons to leptons and thus change the baryon number, if the density of sphalerons was at some stage high enough, they could wipe out any net excess of baryons or anti-baryons. This has two important implications in any theory of baryogenesis within the Standard Model: Any baryon net excess arising before the electroweak symmetry breaking would be wiped out due to abundant sphalerons caused by high temperatures existing in the early universe. While a baryon net excess can be created during the electroweak symmetry breaking, it can be preserved only if this phase transition was first-order. This is because in a second-order phase transition, sphalerons would wipe out any baryon asymmetry as it is created, while in a first-order phase transition, sphalerons would wipe out baryon asymmetry only in the unbroken phase. In absence of processes which violate B − L it is possible for an initial baryon asymmetry to be protected if it has a non-zero projection onto B − L. In this case the sphaleron processes would impose an equilibrium which distributes the initial B asymmetry between both B and L numbers. In some theories of baryogenesis, an imbalance of the number of leptons and antileptons is formed first by leptogenesis and sphaleron transitions then convert this to an imbalance in the numbers of baryons and antibaryons. 
Details For an SU(2) gauge theory, neglecting , we have the following equations for the gauge field and the Higgs field in the gauge where , , the symbols represent the generators of SU(2), is the electroweak coupling constant, and is the Higgs VEV absolute value. The functions and , which must be determined numerically, go from 0 to 1 in value as their argument, , goes from 0 to . For a sphaleron in the background of a non-broken phase, the Higgs field must obviously fall off eventually to zero as goes to infinity. Note that in the limit , the gauge sector approaches one of the pure-gauge transformation , which is the same as the pure gauge transformation to which the BPST instanton approaches as at , hence establishing the connection between the sphaleron and the instanton. Baryon number violation is caused by the "winding" of the fields from one equilibrium to another. Each time the weak gauge fields wind, the count for each of the quark families and each of the lepton families is raised (or lowered, depending on the winding direction) by one; as there are three quark families, baryon number can only change in multiples of three. The baryon number violation can alternatively be visualized in terms of a kind of Dirac sea: in the course of the winding, a baryon originally considered to be part of the vacuum is now considered a real baryon, or vice versa, and all the other baryons stacked inside the sea are accordingly shifted by one energy level. Energy release According to physicist Max Tegmark, the theoretical energy efficiency from conversion of baryons to antileptons would be orders of magnitude higher than the energy efficiency of existing power-generation technology such as nuclear fusion. Tegmark speculates that an extremely advanced civilization might use a "sphalerizer" to generate energy from ordinary baryonic matter. See also References and notes Notes Citations Electroweak theory Anomalies (physics)
Sphaleron
[ "Physics" ]
1,195
[ "Physical phenomena", "Fundamental interactions", "Electroweak theory" ]
1,618,993
https://en.wikipedia.org/wiki/Wilting
Wilting is the loss of rigidity of non-woody parts of plants. It occurs when the turgor pressure in non-lignified plant cells falls towards zero, as a result of diminished water in the cells. Wilting also serves to reduce water loss, as it makes the leaves expose less surface area. Wilting happens when the rate of water loss from the plant is greater than the rate of water absorption by the plant. The process of wilting modifies the leaf angle distribution of the plant (or canopy) towards more erectophile conditions. Lower water availability may result from: drought conditions, where the soil moisture drops below the conditions most favorable for plant functioning; low temperatures, at which the plant's vascular system cannot function; high salinity, which causes water to diffuse out of the plant cells and induces shrinkage; saturated soil conditions, where roots are unable to obtain sufficient oxygen for cellular respiration and so are unable to transport water into the plant; or bacteria or fungi that clog the plant's vascular system. Wilting diminishes the plant's ability to transpire, reproduce and grow. Permanent wilting leads to the death of the plant. Symptoms of wilting and blights resemble one another. The plants may recover during the night, when evaporation is reduced as the stomata close. In woody plants, reduced water availability leads to cavitation of the xylem. Wilting occurs in plants such as balsam and holy basil, among other plants. Wilting is an effect of the plant growth-inhibiting hormone abscisic acid. With cucurbits, wilting can be caused by the squash vine borer. References Plant physiology Plant pathogens and diseases
Wilting
[ "Biology" ]
354
[ "Plant physiology", "Plant pathogens and diseases", "Plants" ]
1,619,050
https://en.wikipedia.org/wiki/Game%20of%20the%20Amazons
The Game of the Amazons (in Spanish, El Juego de las Amazonas; often called Amazons for short) is a two-player abstract strategy game invented in 1988 by Walter Zamkauskas of Argentina. The game is played by moving pieces and blocking the opponents from squares, and the last player able to move is the winner. It is a member of the territorial game family, a distant relative of Go and chess. The Game of the Amazons is played on a 10x10 chessboard (or an international checkerboard). Some players prefer to use a monochromatic board. The two players are White and Black; each player has four amazons (not to be confused with the amazon fairy chess piece), which start on the board in the configuration shown at right. A supply of markers (checkers, poker chips, etc.) is also required. Rules White moves first, and the players alternate moves thereafter. Each move consists of two parts. First, one moves one of one's own amazons one or more empty squares in a straight line (orthogonally or diagonally), exactly as a queen moves in chess; it may not cross or enter a square occupied by an amazon of either color or an arrow. Second, after moving, the amazon shoots an arrow from its landing square to another square, using another queenlike move. This arrow may travel in any orthogonal or diagonal direction (even backwards along the same path the amazon just traveled, into or across the starting square if desired). An arrow, like an amazon, cannot cross or enter a square where another arrow has landed or an amazon of either color stands. The square where the arrow lands is marked to show that it can no longer be used. The last player to be able to make a move wins. Draws are impossible. Territory and scoring The strategy of the game is based on using arrows (as well as one's four amazons) to block the movement of the opponent's amazons and gradually wall off territory, trying to trap the opponents in smaller regions and gain larger areas for oneself. Each move reduces the available playing area, and eventually each amazon finds itself in a territory blocked off from all other amazons. The amazon can then move about its territory firing arrows until it no longer has any room to move. Since it would be tedious to actually play out all these moves, in practice the game usually ends when all of the amazons are in separate territories. The player with the largest amount of territory will be able to win, as the opponent will have to fill in their own territory more quickly. Scores are sometimes used for tie-breaking purposes in Amazons tournaments. When scoring, it is important to note that although the number of moves remaining to a player is usually equal to the number of empty squares in the territories occupied by that player's amazons, it is nonetheless possible to have defective territories in which there are fewer moves left than there are empty squares. The simplest such territory is three squares of the same colour, not in a straight line, with the amazon in the middle (for example, a1+b2+c1 with the amazon at b2). History El Juego de las Amazonas was first published in Spanish in the Argentine puzzle magazine El Acertijo in December 1992. An approved English translation written by Michael Keller appeared in the magazine World Game Review in January 1994. Other game publications also published the rules, and the game gathered a small but devoted following. The Internet spread the game more widely. 
Michael Keller wrote the first known computer version of the game in VAX Fortran in 1994, and an updated version with graphics in Visual Basic in 1995. There are Amazons tournaments at the Computer Olympiad, a series of computer-versus-computer competitions. El Juego de las Amazonas (The Game of the Amazons) is a trademark of Ediciones de Mente. Computational complexity Usually, in the endgame, the board is partitioned into separate "royal chambers", with queens inside each chamber. We define simple Amazons endgames to be endgames where each chamber has at most one queen. Determining who wins in a simple Amazons endgame is NP-hard. This is proven by a reduction from the problem of finding a Hamiltonian path in a cubic subgraph of the square grid graph. Generalized Amazons (that is, determining the winner of a game of Amazons played on an n × n grid, started from an arbitrary configuration) is PSPACE-complete. This can be proved in two ways. The first way is by reducing a generalized Hex position, which is known to be PSPACE-complete, into an Amazons position. The second way is by reducing a certain kind of generalized geography called GEOGRAPHY-BP3, which is PSPACE-complete, to an Amazons position. This Amazons position uses only one black queen and one white queen, thus showing that generalized Amazons is PSPACE-complete even if only one queen on each side is allowed. References Further reading Board games introduced in 1988 Abstract strategy games PSPACE-complete problems
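The movement rule from the Rules section above (a queen-like move that may not cross or enter a square occupied by an amazon or an arrow) is straightforward to express in code. A minimal illustrative sketch, not based on any of the programs mentioned above; the board representation and names are assumptions:

# Board: a dict mapping (row, col) -> "white", "black" or "arrow"; empty squares are simply absent.
DIRECTIONS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def queen_moves(board, start, size=10):
    # Yield every square reachable from start by a queen-like move on the given board.
    for dr, dc in DIRECTIONS:
        r, c = start
        while True:
            r, c = r + dr, c + dc
            if not (0 <= r < size and 0 <= c < size) or (r, c) in board:
                break               # ran off the board or hit an amazon or arrow
            yield (r, c)

# A complete move then consists of moving an amazon to one of its queen_moves squares and
# shooting an arrow to one of the queen_moves squares of the landing position (with the
# amazon's old square now vacated), marking that square as "arrow".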
Game of the Amazons
[ "Mathematics" ]
1,040
[ "PSPACE-complete problems", "Mathematical problems", "Computational problems" ]
1,619,127
https://en.wikipedia.org/wiki/Shot%20peening
Shot peening is a cold working process used to produce a compressive residual stress layer and modify the mechanical properties of metals and composites. It entails striking a surface with shot (round metallic, glass, or ceramic particles) with force sufficient to create plastic deformation. In machining, shot peening is used to strengthen and relieve stress in components like steel automobile crankshafts and connecting rods. In architecture it provides a muted finish to metal. Shot peening is similar mechanically to sandblasting, though its purpose is not to remove material, but rather it employs the mechanism of plasticity to achieve its goal, with each particle functioning as a ball-peen hammer. Details Peening a surface spreads it plastically, causing changes in the mechanical properties of the surface. Its main application is to avoid the propagation of microcracks in a surface. By putting a material under compressive stress, shot peening prevents such cracks from propagating. Shot peening is often called for in aircraft repairs to relieve tensile stresses built up in the grinding process and replace them with beneficial compressive stresses. Depending on the part geometry, part material, shot material, shot quality, shot intensity, and shot coverage, shot peening can increase fatigue life up to 1000%. Plastic deformation induces a residual compressive stress in a peened surface, along with tensile stress in the interior. Surface compressive stresses confer resistance to metal fatigue and to some forms of stress corrosion. The tensile stresses deep in the part are not as troublesome as tensile stresses on the surface because cracks are less likely to start in the interior. Intensity is a key parameter of the shot peening process. After some development of the process, an analog was needed to measure the effects of shot peening. John Almen noticed that shot peening made the side of the sheet metal that was exposed begin to bend and stretch. He created the Almen strip to measure the compressive stresses in the strip created by the shot peening operation. One can obtain what is referred to as the "intensity of the blast stream" by measuring the deformation on the Almen strip that is in the shot peening operation. As the strip reaches a 10% deformation, the Almen strip is then hit with the same intensity for twice the amount of time. If the strip deforms another 10%, then one obtains the intensity of the blast stream. Another operation to gauge the intensity of a shot peening process is the use of an Almen round, developed by R. Bosshard. Coverage, the percentage of the surface indented once or more, is subject to variation due to the angle of the shot blast stream relative to the workpiece surface. The stream is cone-shaped, thus, shot arrives at varying angles. Processing the surface with a series of overlapping passes improves coverage, although variation in "stripes" will still be present. Alignment of the axis of the shot stream with the axis of the Almen strip is important. A continuous compressively stressed surface of the workpiece has been shown to be produced at less than 50% coverage but falls as 100% is approached. Optimizing coverage level for the process being performed is important for producing the desired surface effect. SAE International's includes several standards for shot peening in aerospace and other industries. Process and equipment Popular methods for propelling shot media include air blast systems and centrifugal blast wheels. 
In air blast systems, media are introduced by various methods into the path of high pressure air and accelerated through a nozzle directed at the part to be peened. The centrifugal blast wheel consists of a high speed paddle wheel. Shot media are introduced at the center of the spinning wheel and propelled toward the part by the centrifugal force of the spinning paddles; adjusting the media entrance location effectively times the release of the media. Other methods include ultrasonic peening, wet peening, and laser peening (which does not use media). Media choices include spherical cast steel shot, ceramic bead, glass bead or conditioned (rounded) cut wire. Cut wire shot is preferred because it maintains its roundness as it is degraded, unlike cast shot, which tends to break up into sharp pieces that can damage the workpiece. Cut wire shot can last five times longer than cast shot. Because peening demands well-graded shot of consistent hardness, diameter, and shape, a mechanism for removing shot fragments throughout the process is desirable. Equipment is available that includes separators to clean and recondition shot and feeders to add new shot automatically to replace the damaged material. Wheel blast systems include satellite rotation models, rotary throughfeed components, and various manipulator designs. There are overhead monorail systems as well as reverse-belted models. Workpiece holding equipment includes rotating index tables, loading and unloading robots, and jigs that hold multiple workpieces. For larger workpieces, manipulators to reposition them to expose features to the shot blast stream are available. Cut wire shot Cut wire shot is a metal shot used for shot peening, where small particles are fired at a workpiece by a compressed air jet. It is a low-cost manufacturing process, as the basic feedstock is inexpensive. As-cut particles are an effective abrasive due to the sharp edges created in the cutting process; however, as-cut shot is not a desirable shot peening medium, as its sharp edges are not suitable to the process. Cut shot is manufactured from high quality wire in which each particle is cut to a length about equal to its diameter. If required, the particles are conditioned (rounded) to remove the sharp corners produced during the cutting process. Depending on application, various hardness ranges are available; the higher the hardness of the media, the lower its durability. Other cut-wire shot applications include tumbling and vibratory finishing. Coverage Factors affecting coverage density include: number of impacts (shot flow), exposure time, shot properties (size, chemistry), and workpiece properties. Coverage is monitored by visual examination to determine the percent coverage (0–100%). Coverage beyond 100% cannot be determined. The number of individual impacts is linearly proportional to shot flow, exposure area, and exposure time. Coverage is not linearly proportional because of the random nature of the process (chaos theory). When 100% coverage is achieved, locations on the surface have been impacted multiple times. At 150% coverage, 5 or more impacts occur at 52% of locations. At 200% coverage, 5 or more impacts occur at 84% of locations. Coverage is affected by shot geometry and the shot and workpiece chemistry. The size of the shot controls how many impacts there are per pound; smaller shot produces more impacts per pound and therefore requires less exposure time.
Soft shot impacting hard material will take more exposure time to reach acceptable coverage compared to hard shot impacting a soft material (since the harder shot can penetrate deeper, thus creating a larger impression). Coverage and intensity (measured by Almen strips) can have a profound effect on fatigue life. This can affect a variety of materials typically shot peened. Incomplete or excessive coverage and intensity can result in reduced fatigue life. Over-peening will cause excessive cold working on the surface of the workpiece, which can also cause fatigue cracks. Diligence is required when developing parameters for coverage and intensity, especially when using materials having different properties (i.e. softer metal to harder metal). Testing fatigue life over a range of parameters typically reveals a "sweet spot": fatigue life grows nearly exponentially to a peak (x = peening intensity or media stream energy, y = time-to-crack or fatigue strength) and then decays rapidly as more intensity or coverage is added. The "sweet spot" correlates directly with the kinetic energy transferred and the material properties of the shot media and workpiece. Applications Shot peening is used on gear parts, cams and camshafts, clutch springs, coil springs, connecting rods, crankshafts, gearwheels, leaf and suspension springs, rock drills, and turbine blades. It is also used in foundries for sand removal, decoring, descaling, and surface finishing of castings such as engine blocks and cylinder heads. Its descaling action can be used in the manufacturing of steel products such as strip, plates, sheets, wire, and bar stock. Shot peening is a crucial process in spring making. Types of springs include leaf springs, extension springs, and compression springs. The most widely used application is engine valve springs (compression springs), owing to their high cyclic fatigue loading. In an OEM valve spring application, the mechanical design combined with some shot peening ensures longevity. Automotive makers are shifting to higher-performance, more highly stressed valve spring designs as engines evolve. In aftermarket high performance valve spring applications, controlled, multi-step shot peening is required to withstand extreme surface stresses that sometimes exceed material specifications. The fatigue life of an extreme performance spring (NHRA, IHRA) can be as short as two passes on a 1/4 mile drag racing track before relaxation or failure occurs. Shot peening may be used for cosmetic effect. The surface roughness resulting from the overlapping dimples causes light to scatter upon reflection. Because peening typically produces larger surface features than sand-blasting, the resulting effect is more pronounced. Shot peening and abrasive blasting can apply materials to metal surfaces. When the shot or grit particles are blasted through a powder or liquid containing the desired surface coating, the impact plates or coats the workpiece surface. The process has been used to embed ceramic coatings, though the coverage is random rather than coherent. 3M developed a process where a metal surface was blasted with particles having a core of alumina and an outer layer of silica. The result was fusion of the silica to the surface. The process known as peen plating was developed by NASA. Fine powders of metals or non-metals are plated onto metal surfaces using glass bead shot as the blast medium. The process has evolved to applying solid lubricants such as molybdenum disulphide to surfaces.
Biocompatible ceramics have been applied this way to biomedical implants. Peen plating subjects the coating material to high heat in the collisions with the shot and the coating must also be available in powder form, limiting the range of materials that can be used. To overcome the problem of heat, a process called temperature moderated-collision mediated coating (TM-CMC) has allowed the use of polymers and antibiotic materials as peened coatings. The coating is presented as an aerosol directed to the surface at the same time as a stream of shot particles. The TM-CMC process is still in the R&D phase of development. Compressive residual stress A sub-surface compressive residual stress profile is measured using techniques such as x-ray diffraction and hardness profile testings. The X-axis is depth in mm or inches and the Y-axis is residual stress in ksi or MPa. The maximum residual stress profile can be affected by the factors of shot peening, including: part geometry, part material, shot material, shot quality, shot intensity, and shot coverage. For example, shot peening a hardened steel part with a process and then using the same process for another unhardened part could result in over-peening; causing a sharp decrease in surface residual stresses, but not affecting sub-surface stresses. This is critical because maximum stresses are typically at the surface of the material. Mitigation of these lower surface stresses can be accomplished by a multi-stage post process with varied shot diameters and other surface treatments that remove the low residual stress layer. The compressive residual stress in a metal alloy is produced by the transfer of kinetic energy (K.E.) from a moving mass (shot particle or ball peen) into the surface of a material with the capacity to plastically deform. The residual stress profile is also dependent on coverage density. The mechanics of the collisions involve properties of the shot hardness, shape, and structure; as well as the properties of the workpiece. Factors for process development and the control for K.E. transfer for shot peening are: shot velocity (wheel speed or air pressure/nozzle design), shot mass, shot chemistry, impact angle and work piece properties. Example: if you needed very high residual stresses you would likely want to use large diameter cut-wire shot, a high-intensity process, direct blast onto the workpiece, and a very hard workpiece material. See also Autofrettage, which produces compressive residual stresses in pressure vessels. Case hardening Differential hardening Steel abrasives Shot peening of steel belts High-frequency impact treatment after-treatment of weld transitions Suncorite Trimite References
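The non-linear growth of coverage discussed above is often described with a simple statistical model in which every pass indents the same fraction of whatever area is still uncovered, so that coverage after n passes is C(n) = 1 - (1 - c1)^n, where c1 is the single-pass coverage. The sketch below is illustrative only; the model and numbers are assumptions rather than values from the article.

def coverage_after_passes(single_pass_coverage, passes):
    # Uncovered area shrinks geometrically with each pass, so coverage approaches,
    # but never quite reaches, 100%; this is why "full" coverage is often specified as 98%.
    return 1.0 - (1.0 - single_pass_coverage) ** passes

for n in range(1, 9):
    print(n, round(100 * coverage_after_passes(0.40, n), 1))
# with 40% coverage per pass: 40.0, 64.0, 78.4, 87.0, 92.2, 95.3, 97.2, 98.3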
Shot peening
[ "Materials_science" ]
2,659
[ "Strengthening mechanisms of materials", "Shot peening" ]
1,619,142
https://en.wikipedia.org/wiki/Autocode
Autocode is the name of a family of "simplified coding systems", later called programming languages, devised in the 1950s and 1960s for a series of digital computers at the Universities of Manchester, Cambridge and London. Autocode was a generic term; the autocodes for different machines were not necessarily closely related in the way that, for example, the different versions of the single language Fortran are. Today the term is used to refer to the family of early languages descended from the Manchester Mark 1 autocoder systems, which were generally similar. In the 1960s, the term autocoders was used more generically to refer to any high-level programming language using a compiler. Examples of languages referred to as autocodes are COBOL and Fortran. Glennie's Autocode The first autocode and its compiler were developed by Alick Glennie in 1952 for the Mark 1 computer at the University of Manchester; it is considered by some to be the first compiled programming language. His main goal was increased comprehensibility in the programming of Mark 1 machines, which were known for their particularly abstruse machine code. Although the resulting language was much clearer than the machine code, it was still very machine dependent. Below is an example of a function in Glennie's Autocode which calculates the formula: . The example omits the scaling instructions needed to place integers into variables and assumes that the results of multiplication fit into the lower accumulator. c@VA t@IC x@½C y@RC z@NC INTEGERS +5 →c # Put 5 into c →t # Load argument from lower accumulator # to variable t +t TESTA Z # Put |t| into lower accumulator -t ENTRY Z SUBROUTINE 6 →z # Run square root subroutine on # lower accumulator value # and put the result into z +tt →y →x # Calculate t^3 and put it into x +tx →y →x +z+cx CLOSE WRITE 1 # Put z + (c * x) into # lower accumulator # and return The user's manual for Glennie's Autocode compiler mentioned that "the loss of efficiency is no more than 10%". The impact of Glennie's Autocode on other Manchester users' programming habits was negligible. It was not even mentioned in Brooker's 1958 paper called "The Autocode Programs developed for the Manchester University Computers". Mark 1 Autocode The second autocode for the Mark 1 was planned in 1954 and developed by R. A. Brooker in 1955; it was called the "Mark 1 Autocode". The language was nearly machine-independent and had floating-point arithmetic, unlike the first one. On the other hand, it allowed only one operation per line, offered few mnemonic names and had no way to define user subroutines. An example program that loads an array of 11 floating-point numbers from the input would look like this n1 = 1 1 vn1 = I reads input into v[n[1]] n1 = n1 + 1 j1,11 ≥ n1 jumps to 1 if n[1] ≤ 11 Brooker's Autocode removed two of the Mark 1 programmer's main difficulties: scaling and management of two-level storage. Unlike its predecessor, it was heavily used. Later Autocodes Brooker also developed an autocode for the Ferranti Mercury in the 1950s in conjunction with the University of Manchester. Mercury Autocode had a limited repertoire of variables a-z and a'-z' and, in some ways, resembled early versions of the later Dartmouth BASIC language. It pre-dated ALGOL, having no concept of stacks and hence no recursion or dynamically-allocated arrays. In order to overcome the relatively small store size available on Mercury, large programs were written as distinct "chapters", each of which constituted an overlay.
Some skill was required to minimise time-consuming transfers of control between chapters. This concept of overlays from drum under user control became common until virtual memory became available in later machines. Slightly different dialects of Mercury Autocode were implemented for the Ferranti Atlas (distinct from the later Atlas Autocode) and the ICT 1300 and 1900 range. The version for the EDSAC 2 was devised by David Hartley of University of Cambridge Mathematical Laboratory in 1961. Known as EDSAC 2 Autocode, it was a straight development from Mercury Autocode adapted for local circumstances, and was noted for its object code optimisation and source-language diagnostics which were advanced for the time. A version was developed for the successor Titan (the prototype Atlas 2 computer) as a temporary stop-gap while a more substantially advanced language known as CPL was being developed. CPL was never completed but did give rise to BCPL (developed by M. Richards), which in turn led to B and ultimately C. A contemporary but separate thread of development, Atlas Autocode was developed for the University of Manchester Atlas 1 machine. References Sources Knuth, Donald E.; Pardo, Luis Trabb (1976). "Early development of programming languages". Stanford University, Computer Science Department. Further reading The Autocodes: a User's Perspective (viii+64 pages) History of computing in the United Kingdom Procedural programming languages Programming languages created in 1952 Science and technology in Greater Manchester University of Manchester University of Cambridge Computer Laboratory
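Purely for comparison with modern notation (this is not part of the historical material above), the Mark 1 Autocode fragment quoted earlier, which reads eleven floating-point numbers into v[1] through v[11], corresponds to something like the following sketch:

v = {}                        # v[n1] plays the role of the indexed variable vn1
n1 = 1
while n1 <= 11:               # the autocode expresses this as a conditional jump back to label 1
    v[n1] = float(input())    # "vn1 = I": read one number from the input into v[n1]
    n1 = n1 + 1
print(v)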
Autocode
[ "Technology" ]
1,102
[ "History of computing", "History of computing in the United Kingdom" ]
1,619,396
https://en.wikipedia.org/wiki/Q-analog
In mathematics, a q-analog of a theorem, identity or expression is a generalization involving a new parameter q that returns the original theorem, identity or expression in the limit as . Typically, mathematicians are interested in q-analogs that arise naturally, rather than in arbitrarily contriving q-analogs of known results. The earliest q-analog studied in detail is the basic hypergeometric series, which was introduced in the 19th century. q-analogs are most frequently studied in the mathematical fields of combinatorics and special functions. In these settings, the limit is often formal, as is often discrete-valued (for example, it may represent a prime power). q-analogs find applications in a number of areas, including the study of fractals and multi-fractal measures, and expressions for the entropy of chaotic dynamical systems. The relationship to fractals and dynamical systems results from the fact that many fractal patterns have the symmetries of Fuchsian groups in general (see, for example Indra's pearls and the Apollonian gasket) and the modular group in particular. The connection passes through hyperbolic geometry and ergodic theory, where the elliptic integrals and modular forms play a prominent role; the q-series themselves are closely related to elliptic integrals. q-analogs also appear in the study of quantum groups and in q-deformed superalgebras. The connection here is similar, in that much of string theory is set in the language of Riemann surfaces, resulting in connections to elliptic curves, which in turn relate to q-series. "Classical" q-theory Classical q-theory begins with the q-analogs of the nonnegative integers. The equality suggests that we define the q-analog of n, also known as the q-bracket or q-number of n, to be By itself, the choice of this particular q-analog among the many possible options is unmotivated. However, it appears naturally in several contexts. For example, having decided to use [n]q as the q-analog of n, one may define the q-analog of the factorial, known as the q-factorial, by This q-analog appears naturally in several contexts. Notably, while n! counts the number of permutations of length n, [n]q! counts permutations while keeping track of the number of inversions. That is, if inv(w) denotes the number of inversions of the permutation w and Sn denotes the set of permutations of length n, we have In particular, one recovers the usual factorial by taking the limit as . The q-factorial also has a concise definition in terms of the q-Pochhammer symbol, a basic building-block of all q-theories: From the q-factorials, one can move on to define the q-binomial coefficients, also known as Gaussian coefficients, Gaussian polynomials, or Gaussian binomial coefficients: The q-exponential is defined as: q-trigonometric functions, along with a q-Fourier transform, have been defined in this context. Combinatorial q-analogs The Gaussian coefficients count subspaces of a finite vector space. Let q be the number of elements in a finite field. (The number q is then a power of a prime number, , so using the letter q is especially appropriate.) Then the number of k-dimensional subspaces of the n-dimensional vector space over the q-element field equals Letting q approach 1, we get the binomial coefficient or in other words, the number of k-element subsets of an n-element set. Thus, one can regard a finite vector space as a q-generalization of a set, and the subspaces as the q-generalization of the subsets of the set. 
As another example, the number of flags is as the order in which we build the flag matters, and after taking the limit we get . This has been a fruitful point of view in finding interesting new theorems. For example, there are q-analogs of Sperner's theorem and Ramsey theory. Cyclic sieving Let q = (e^(2πi/n))^d be the d-th power of a primitive n-th root of unity. Let C be a cyclic group of order n generated by an element c. Let X be the set of k-element subsets of the n-element set {1, 2, ..., n}. The group C has a canonical action on X given by sending c to the cyclic permutation (1, 2, ..., n). Then the number of fixed points of c^d on X is equal to . q → 1 Conversely, by letting q vary and seeing q-analogs as deformations, one can consider the combinatorial case of as a limit of q-analogs as (often one cannot simply let in the formulae, hence the need to take a limit). This can be formalized in the field with one element, which recovers combinatorics as linear algebra over the field with one element: for example, Weyl groups are simple algebraic groups over the field with one element. Applications in the physical sciences q-analogs are often found in exact solutions of many-body problems. In such cases, the limit usually corresponds to relatively simple dynamics, e.g., without nonlinear interactions, while gives insight into the complex nonlinear regime with feedbacks. An example from atomic physics is the model of molecular condensate creation from an ultracold fermionic atomic gas during a sweep of an external magnetic field through the Feshbach resonance. This process is described by a model with a q-deformed version of the SU(2) algebra of operators, and its solution is described by q-deformed exponential and binomial distributions. See also List of q-analogs Stirling number Young tableau References Andrews, G. E., Askey, R. A. & Roy, R. (1999), Special Functions, Cambridge University Press, Cambridge. Gasper, G. & Rahman, M. (2004), Basic Hypergeometric Series, Cambridge University Press. Ismail, M. E. H. (2005), Classical and Quantum Orthogonal Polynomials in One Variable, Cambridge University Press. Koekoek, R. & Swarttouw, R. F. (1998), The Askey-scheme of hypergeometric orthogonal polynomials and its q-analogue, 98-17, Delft University of Technology, Faculty of Information Technology and Systems, Department of Technical Mathematics and Informatics. External links q-analog from MathWorld q-bracket from MathWorld q-factorial from MathWorld q-binomial coefficient from MathWorld Combinatorics
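The q-bracket, q-factorial and Gaussian binomial coefficient discussed above have the standard closed forms [n]_q = 1 + q + ... + q^(n-1), [n]_q! = [1]_q [2]_q ... [n]_q, and [n choose k]_q = [n]_q! / ([k]_q! [n-k]_q!). A minimal computational sketch of these definitions, assuming the SymPy library (illustrative only):

import sympy as sp

q = sp.symbols('q')

def q_bracket(n):
    return sum(q**i for i in range(n))            # [n]_q = 1 + q + ... + q^(n-1)

def q_factorial(n):
    result = sp.Integer(1)
    for i in range(1, n + 1):
        result *= q_bracket(i)
    return result

def q_binomial(n, k):
    return sp.cancel(q_factorial(n) / (q_factorial(k) * q_factorial(n - k)))

poly = sp.expand(q_binomial(4, 2))
print(poly)                                       # q**4 + q**3 + 2*q**2 + q + 1
print(poly.subs(q, 1), sp.binomial(4, 2))         # both equal 6: the ordinary binomial is recovered at q = 1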
Q-analog
[ "Mathematics" ]
1,431
[ "Discrete mathematics", "Q-analogs", "Combinatorics" ]
1,619,428
https://en.wikipedia.org/wiki/Flow%20control%20%28data%29
In data communications, flow control is the process of managing the rate of data transmission between two nodes to prevent a fast sender from overwhelming a slow receiver. Flow control should be distinguished from congestion control, which is used for controlling the flow of data when congestion has actually occurred. Flow control mechanisms can be classified by whether or not the receiving node sends feedback to the sending node. Flow control is important because it is possible for a sending computer to transmit information at a faster rate than the destination computer can receive and process it. This can happen if the receiving computers have a heavy traffic load in comparison to the sending computer, or if the receiving computer has less processing power than the sending computer. Stop-and-wait Stop-and-wait flow control is the simplest form of flow control. In this method the message is broken into multiple frames, and the receiver indicates its readiness to receive a frame of data. The sender waits for a receipt acknowledgement (ACK) after every frame for a specified time (called a time out). The receiver sends the ACK to let the sender know that the frame of data was received correctly. The sender will then send the next frame only after the ACK. Operations Sender: Transmits a single frame at a time. Sender waits to receive ACK within time out. Receiver: Transmits acknowledgement (ACK) as it receives a frame. Go to step 1 when ACK is received, or time out is hit. If a frame or ACK is lost during transmission then the frame is re-transmitted. This re-transmission process is known as ARQ (automatic repeat request). The problem with Stop-and-wait is that only one frame can be transmitted at a time, and that often leads to inefficient transmission, because until the sender receives the ACK it cannot transmit any new packet. During this time both the sender and the channel are unutilised. Pros and cons of stop and wait Pros The only advantage of this method of flow control is its simplicity. Cons The sender needs to wait for the ACK after every frame it transmits. This is a source of inefficiency, and is particularly bad when the propagation delay is much longer than the transmission delay. Stop and wait can also create inefficiencies when sending longer transmissions. When longer transmissions are sent there is more likely chance for error in this protocol. If the messages are short the errors are more likely to be detected early. More inefficiency is created when single messages are broken into separate frames because it makes the transmission longer. Sliding window A method of flow control in which a receiver gives a transmitter permission to transmit data until a window is full. When the window is full, the transmitter must stop transmitting until the receiver advertises a larger window. Sliding-window flow control is best utilized when the buffer size is limited and pre-established. During a typical communication between a sender and a receiver the receiver allocates buffer space for n frames (n is the buffer size in frames). The sender can send and the receiver can accept n frames without having to wait for an acknowledgement. A sequence number is assigned to frames in order to help keep track of those frames which did receive an acknowledgement. The receiver acknowledges a frame by sending an acknowledgement that includes the sequence number of the next frame expected. 
This acknowledgement announces that the receiver is ready to receive n frames, beginning with the number specified. Both the sender and receiver maintain what is called a window. The size of the window is less than or equal to the buffer size. Sliding window flow control has far better performance than stop-and-wait flow control. For example, in a wireless environment if data rates are low and noise level is very high, waiting for an acknowledgement for every packet that is transferred is not very feasible. Therefore, transferring data as a bulk would yield a better performance in terms of higher throughput. Sliding window flow control is a point to point protocol assuming that no other entity tries to communicate until the current data transfer is complete. The window maintained by the sender indicates which frames it can send. The sender sends all the frames in the window and waits for an acknowledgement (as opposed to acknowledging after every frame). The sender then shifts the window to the corresponding sequence number, thus indicating that frames within the window starting from the current sequence number can be sent. Go back N An automatic repeat request (ARQ) algorithm, used for error correction, in which a negative acknowledgement (NACK) causes retransmission of the word in error as well as the next N–1 words. The value of N is usually chosen such that the time taken to transmit the N words is less than the round trip delay from transmitter to receiver and back again. Therefore, a buffer is not needed at the receiver. The normalized propagation delay (a) = , where Tp = length (L) over propagation velocity (V) and Tt = bitrate (r) over framerate (F). So that a =. To get the utilization you must define a window size (N). If N is greater than or equal to 2a + 1 then the utilization is 1 (full utilization) for the transmission channel. If it is less than 2a + 1 then the equation must be used to compute utilization. Selective repeat Selective repeat is a connection oriented protocol in which both transmitter and receiver have a window of sequence numbers. The protocol has a maximum number of messages that can be sent without acknowledgement. If this window becomes full, the protocol is blocked until an acknowledgement is received for the earliest outstanding message. At this point the transmitter is clear to send more messages. Comparison This section is geared towards the idea of comparing stop-and-wait, sliding window with the subsets of go back N and selective repeat. Stop-and-wait Error free: . With errors: . Selective repeat We define throughput T as the average number of blocks communicated per transmitted block. It is more convenient to calculate the average number of transmissions necessary to communicate a block, a quantity we denote by 0, and then to determine T from the equation . Transmit flow control Transmit flow control may occur: between data terminal equipment (DTE) and a switching center, via data circuit-terminating equipment (DCE), the opposite types interconnected straightforwardly, or between two devices of the same type (two DTEs, or two DCEs), interconnected by a crossover cable. The transmission rate may be controlled because of network or DTE requirements. Transmit flow control can occur independently in the two directions of data transfer, thus permitting the transfer rates in one direction to be different from the transfer rates in the other direction. Transmit flow control can be either stop-and-wait, or use a sliding window. 
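As a rough numerical complement to the comparison above, textbook treatments commonly express error-free link utilization as U = 1/(1 + 2a) for stop-and-wait and U = min(1, N/(1 + 2a)) for a sliding window of size N, where a is the normalized propagation delay (propagation time divided by frame transmission time). A minimal sketch under those standard formulas, illustrative rather than taken from the article:

def stop_and_wait_utilization(a):
    # One frame per round trip: transmit for 1 time unit, then idle for 2a units awaiting the ACK.
    return 1.0 / (1.0 + 2.0 * a)

def sliding_window_utilization(n, a):
    # With a window of at least 1 + 2a frames the sender never has to pause, so utilization saturates at 1.
    return min(1.0, n / (1.0 + 2.0 * a))

a = 5.0                                   # e.g. a long, fast link where propagation delay dominates
print(stop_and_wait_utilization(a))       # about 0.09: the channel sits idle most of the time
print(sliding_window_utilization(7, a))   # about 0.64
print(sliding_window_utilization(11, a))  # 1.0: a window of 2a + 1 frames keeps the pipe full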
Flow control can be performed either by control signal lines in a data communication interface (see serial port and RS-232), or by reserving in-band control characters to signal flow start and stop (such as the ASCII codes for XON/XOFF). Hardware flow control In common RS-232 there are pairs of control lines which are usually referred to as hardware flow control: RTS (request to send) and CTS (clear to send), used in RTS flow control DTR (data terminal ready) and DSR (data set ready), used in DTR flow control Hardware flow control is typically handled by the DTE or "master end", as it is first raising or asserting its line to command the other side: In the case of RTS control flow, DTE sets its RTS, which signals the opposite end (the slave end such as a DCE) to begin monitoring its data input line. When ready for data, the slave end will raise its complementary line, CTS in this example, which signals the master to start sending data, and for the master to begin monitoring the slave's data output line. If either end needs to stop the data, it lowers its respective "data readiness" line. For PC-to-modem and similar links, in the case of DTR flow control, DTR/DSR are raised for the entire modem session (say a dialup internet call where DTR is raised to signal the modem to dial, and DSR is raised by the modem when the connection is complete), and RTS/CTS are raised for each block of data. An example of hardware flow control is a half-duplex radio modem to computer interface. In this case, the controlling software in the modem and computer may be written to give priority to incoming radio signals such that outgoing data from the computer is paused by lowering CTS if the modem detects a reception. Polarity: RS-232 level signals are inverted by the driver ICs, so line polarity is TxD-, RxD-, CTS+, RTS+ (clear to send when HI, data 1 is a LO) for microprocessor pins the signals are TxD+, RxD+, CTS-, RTS- (clear to send when LO, data 1 is a HI) Software flow control Conversely, XON/XOFF is usually referred to as software flow control. Open-loop flow control The open-loop flow control mechanism is characterized by having no feedback between the receiver and the transmitter. This simple means of control is widely used. The allocation of resources must be a "prior reservation" or "hop-to-hop" type. Open-loop flow control has inherent problems with maximizing the utilization of network resources. Resource allocation is made at connection setup using a CAC (connection admission control) and this allocation is made using information that is already "old news" during the lifetime of the connection. Often there is an over-allocation of resources and reserved but unused capacities are wasted. Open-loop flow control is used by ATM in its CBR, VBR and UBR services (see traffic contract and congestion control). Open-loop flow control incorporates two controls; the controller and a regulator. The regulator is able to alter the input variable in response to the signal from the controller. An open-loop system has no feedback or feed forward mechanism, so the input and output signals are not directly related and there is increased traffic variability. There is also a lower arrival rate in such system and a higher loss rate. In an open control system, the controllers can operate the regulators at regular intervals, but there is no assurance that the output variable can be maintained at the desired level. While it may be cheaper to use this model, the open-loop model can be unstable. 
Closed-loop flow control The closed-loop flow control mechanism is characterized by the ability of the network to report pending network congestion back to the transmitter. This information is then used by the transmitter in various ways to adapt its activity to existing network conditions. Closed-loop flow control is used by ABR (see traffic contract and congestion control). Transmit flow control described above is a form of closed-loop flow control. This system incorporates all the basic control elements, such as the sensor, transmitter, controller and regulator. The sensor is used to capture a process variable. The process variable is sent to a transmitter, which relays the variable to the controller. The controller examines the information with respect to a desired value and initiates a corrective action if required. The controller then communicates to the regulator what action is needed to ensure that the output variable value matches the desired value. Therefore, there is a high degree of assurance that the output variable can be maintained at the desired level. The closed-loop control system can be a feedback or a feed-forward system: A feedback closed-loop system has a feedback mechanism that directly relates the input and output signals. The feedback mechanism monitors the output variable and determines if additional correction is required. The output variable value that is fed back is used to initiate that corrective action on a regulator. Most control loops in the industry are of the feedback type. In a feed-forward closed-loop system, the measured process variable is an input variable. The measured signal is then used in the same fashion as in a feedback system. The closed-loop model produces lower loss rates and queuing delays, and it results in congestion-responsive traffic. The closed-loop model is always stable, as the number of active flows is bounded. See also Software flow control Computer networking Traffic contract Congestion control Teletraffic engineering in broadband networks Teletraffic engineering Ethernet flow control Handshaking References Sliding window: last accessed 27 November 2012. External links RS-232 flow control and handshaking Network performance Logical link control Data transmission
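For a concrete example of the hardware (RTS/CTS) and software (XON/XOFF) transmit flow control options described earlier, a serial port can typically be opened with either mechanism enabled. The sketch below assumes the third-party pyserial library, and the port name is an assumption that will differ per system:

import serial   # third-party "pyserial" package

# Hardware (RTS/CTS) flow control: the driver pauses transmission while the peer deasserts CTS.
port = serial.Serial('/dev/ttyUSB0', baudrate=115200, rtscts=True, xonxoff=False)

# For software flow control instead, open with rtscts=False, xonxoff=True so that the
# in-band XON/XOFF characters start and stop the flow.
port.write(b'hello')
port.close()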
Flow control (data)
[ "Engineering" ]
2,623
[ "Computer networks engineering", "Flow control (data)" ]
1,619,553
https://en.wikipedia.org/wiki/Gliese%20777
Gliese 777, often abbreviated as Gl 777 or GJ 777, is a binary star approximately 52 light-years away in the constellation of Cygnus. The system is made up of two stars, and possibly a third. As of 2005, two extrasolar planets are known to orbit the primary star. Stellar components The primary star of the system (catalogued as Gliese 777 A) is a yellow subgiant, a Sun-like star that is ceasing to fuse hydrogen in its core. The star is much older than the Sun, about 6.7 billion years old. It is 4% less massive than the Sun. It is also rather metal-rich, having about 70% more "metals" (elements heavier than helium) than the Sun, which is typical for stars with extrasolar planets. The secondary star (Gliese 777 B) is a distant, dim red dwarf star orbiting the primary at a distance of 3,000 astronomical units (0.047 light years). One orbit takes at least tens of thousands of years to complete. The star itself may be a binary, the secondary being a very dim red dwarf. Not much information is available on the star system. Planetary system In 2002, the discovery of a long-period, wide-orbiting planet (Gliese 777 b) was announced by the Geneva extrasolar planet search team. The planet was estimated to orbit in a circular path with low orbital eccentricity, but that estimate was increased with later measurements (e = 0.36). Initially, therefore, the planet was believed to be a true "Jupiter-twin", but it was later redefined as being more like an "eccentric Jupiter", with a mass of at least 1.5 times that of Jupiter and about the same size. In 2021, the true mass of Gliese 777 Ab was measured via astrometry. In 2005, further observation of the star revealed another periodic variation, with a period of 17.1 days. The mass of this second planet (Gliese 777 c) was only about 18 times that of Earth, or about the same as Neptune's, indicating it was one of the smallest planets discovered at the time. It too was initially thought to be on a circular orbital path, which later measurements showed not to be the case. A METI message was sent to Gliese 777. It was transmitted from Eurasia's largest radar, the 70-meter Eupatoria Planetary Radar. The message was named Cosmic Call 1; it was sent on July 1, 1999, and it will arrive at Gliese 777 in April 2051. See also 47 Ursae Majoris 51 Pegasi Gliese 229 References External links Extrasolar Planet Interactions by Rory Barnes & Richard Greenberg, Lunar and Planetary Lab, University of Arizona Binary stars Cygnus (constellation) 190360 098767 7670 0777 M-type main-sequence stars G-type subgiants Planetary systems with two confirmed planets Durchmusterung objects TIC objects
Gliese 777
[ "Astronomy" ]
633
[ "Cygnus (constellation)", "Constellations" ]
1,619,576
https://en.wikipedia.org/wiki/Mountain%20jet
Mountain jets are a type of jet stream created by surface winds channeled through mountain passes, sometimes causing high wind speeds and drastic temperature changes. Central America jets The North Pacific east of about 120°W is strongly influenced by winds blowing through gaps in the Central American cordillera. Air flow in the region forms the Intra-Americas Low-Level Jet, a westward flow about 1 km above sea level. This flow, trade winds, and cold air flowing south from North America contribute to winds flowing through several mountain valleys. Along Central America are three main wind jets through breaks in the American Cordillera, on the Pacific Ocean side due to prevailing winds. Tehuano wind blows from the Gulf of Mexico through Chivela Pass in Mexico's Isthmus of Tehuantepec and out over the Gulf of Tehuantepec on the Pacific coast. Chivela Pass is a gap between the Sierra Madre del Sur and the Sierra Madre range to the south. Papagayo wind shrieks over the lakes of Nicaragua and pushes far out over the Gulf of Papagayo on the Pacific coast. The Cordillera Central Mountains rise to the south, gradually descending to Gatun Lake and the Isthmus of Panama. Panama winds slice through to the Pacific through the Gaillard Cut in Panama, which also holds the Panama Canal. Cause The air flow is due to surges of cold dense air originating from the North American continent. The meteorological mechanism that causes Tehuano and Papagayo winds is relatively simple. In the winter, cold high-pressure weather systems move southward from North America over the Gulf of Mexico. These high pressure systems create strong pressure gradients between the atmosphere over the Gulf of Mexico and the warmer, moister atmosphere over the Pacific Ocean. Just as a river flows from high elevations to lower elevations, the air in the high pressure system will flow "downhill" toward lower pressure, but the Cordillera mountains block the flow of air, channeling it through Chivela Pass in Mexico, the lake district of Nicaragua, and also Gaillard (Culebra) Cut in Panama. Many times, a Tehuano wind is followed by Papagayo and Panama winds a few days later as the high pressure system moves south. The arrival of these cold surges, and their associated anticyclonic circulation, strengthens the trade winds at low latitudes, and this effect can last for several days. The wind flow over Central America is actually composed of the confluence of two air streams; one from the north, associated with cold surges, and the other from the northeast, associated with trade winds north of South America. Local effects The winds blow at speeds of 80 km/h or more down the hillsides from Chivela Pass and over the waters of the Gulf of Tehuantepec, sometimes extending more than 500 miles (800 km) into the Pacific Ocean. The surface waters under the Gulf of Tehuantepec wind jet can cool by as much as 10 °C in a day. In addition to the cold water that is detectable from other satellite sensors, the ocean's response to these winds shows up in satellite estimates of chlorophyll from ocean color measurements. The cold water and high chlorophyll concentration are signatures of mixing and upwelling of cold, nutrient-rich deep water. Fish converge on this food source, which supports the highly successful fishing industry in the Gulf of Tehuantepec. External links Atmospheric dynamics Geography of Central America Mountains
Mountain jet
[ "Chemistry" ]
720
[ "Atmospheric dynamics", "Fluid dynamics" ]
60,408
https://en.wikipedia.org/wiki/HCL%20Notes
HCL Notes (formerly Lotus Notes then IBM Notes) is a proprietary collaborative software platform for Unix (AIX), IBM i, Windows, Linux, and macOS, sold by HCLTech. The client application is called Notes while the server component is branded HCL Domino. HCL Notes provides business collaboration functions, such as email, calendars, to-do lists, contact management, discussion forums, file sharing, websites, instant messaging, blogs, document libraries, user directories, and custom applications. It can also be used with other HCL Domino applications and databases. IBM Notes 9 Social Edition removed integration with the office software package IBM Lotus Symphony, which had been integrated with the Lotus Notes client in versions 8.x. Lotus Development Corporation originally developed "Lotus Notes" in 1989. IBM bought Lotus in 1995 and it became known as the Lotus Development division of IBM. On December 6, 2018, IBM announced that it was selling a number of software products to HCLSoftware for $1.8bn, including Notes and Domino. This acquisition was completed in July 2019.

Design
HCL Domino is a client-server cross-platform application runtime environment. Domino provides email, calendars, instant messaging (with additional HCLSoftware voice- and video-conferencing and web-collaboration), discussions/forums, blogs, and an inbuilt personnel/user directory. In addition to these standard applications, an organization may use the Domino Designer development environment and other tools to develop additional integrated applications such as request approval / workflow and document management. The Domino product consists of several components:
HCL Notes client application (since version 8, this is based on Eclipse)
HCL Notes client, either:
  a rich client
  a web client, HCL iNotes
  a mobile email client, HCL Notes Traveler
HCL Verse client, either:
  a web email client, Verse on Premises (VOP)
  a mobile email client, Verse Mobile (for iOS and Android)
HCL Domino server
HCL Domino Administration Client
HCL Domino Designer (Eclipse-based integrated development environment) for creating client-server applications that run within the Notes framework
Domino competes with products from other companies such as Microsoft, Google, Zimbra and others. Because of its application development abilities, HCL Domino is often compared to products like Microsoft SharePoint. The database in Domino can be replicated between servers and between server and client, giving clients offline capabilities. Domino, a business application as well as a messaging server, is compatible with both Notes and web browsers. Notes (and since IBM Domino 9, the HCAA) may be used to access any Domino application, such as discussion forums, document libraries, and numerous other applications. Notes resembles a web browser in that it may run any compatible application that the user has permission for. Domino provides applications that can be used to:
access, store and present information through a user interface
enforce security
replicate, that is, allow many different servers to contain the same information and have many users work with that data
The standard storage mechanism in Domino is a document-database format, the "Notes Storage Facility" (.nsf). The .nsf file will normally contain both an application design and its associated data. Domino can also access relational databases, either through an additional server called HCL Enterprise Integrator for Domino, through ODBC calls or through the use of XPages.
As Domino is an application runtime environment, email and calendars operate as applications within Notes, which HCL provides with the product. A Domino application developer can change or completely replace that application. HCL has released the base templates as open source as well. Programmers can develop applications for Domino in a variety of development languages including:
the Java programming language, either directly or through XPages
LotusScript, a language resembling Visual Basic
the JavaScript programming language, via the Domino AppDev Pack
The client supports a formula language as well as JavaScript. Software developers can build applications to run either within the Notes application runtime environment or through a web server for use in a web browser, although the interface would need to be developed separately unless XPages is used.

Use
Notes can be used for email, as a calendar, PIM, instant messaging, Web browsing, and other applications. Notes can access both local- and server-based applications and data. Notes can function as an IMAP and POP email client with non-Domino mail servers. The system can retrieve recipient addresses from any LDAP server, including Active Directory, and includes a web browser, although it can be configured by a Domino developer to launch a different web browser instead. Features include group calendars and schedules, SMTP/MIME-based email, NNTP-based news support, and automatic HTML conversion of all documents by the Domino HTTP task. Notes can be used with Sametime instant messaging to allow users to see other users online and chat with one or more of them at the same time. Beginning with Release 6.5, this function has been freely available. Presence awareness is available in email and other HCL Domino applications for users in organizations that use both Notes and Sametime. Since version 7, Notes has provided a Web services interface. Domino can be a Web server for HTML files; authentication of access to Domino databases or HTML files uses the Domino user directory and external systems such as Microsoft Active Directory. A design client, Domino Designer, allows the development of database applications consisting of forms (which allow users to create documents) and views (which display selected document fields in columns). In addition to its role as a groupware system (email, calendaring, shared documents and discussions), HCL Notes and Domino can also support "workflow"-type applications, particularly those which require approval processes and routing of data. Since Release 5, server clustering has had the ability to provide geographic redundancy for servers. Notes System Diagnostic (NSD) gathers information about the running of a Notes workstation or of a Domino server. On October 10, 2018, IBM released IBM Domino 10.0 and IBM Notes 10.0 as the latest release. In December 2019, HCL released HCL Domino v11 and HCL Notes v11.

Overview

Client/server
Notes and Domino are client/server database environments. The server software is called Domino and the client software is Notes. Domino software can run on Windows, Unix, AIX, and IBM mid-range systems and can scale to tens of thousands of users per server. Different versions of the Domino server are supported on the various levels of server operating systems. Usually the latest server operating system is only officially supported by a version of HCL Domino that is released at about the same time as that OS. Domino has security capabilities on a variety of levels.
The authorizations can be granular, from the field level in specific records all the way up to 10 different parameters that can be set up at a database level, with intermediate options in between. Users can also assign other users access to their personal calendar and email at the more generic levels of reader, editor, edit with delete, and manage my calendar. All of the security in Notes and Domino is independent of the server OS or Active Directory. Optionally, the Notes client can be configured to have the user use their Active Directory identity.

Data replication
The first release of Lotus Notes included a generalized replication facility. The generalized nature of this feature set it apart from predecessors like Usenet and continued to differentiate Lotus Notes. Domino servers and Notes clients identify NSF files by their Replica IDs, and keep replicated files synchronized by bi-directionally exchanging data, metadata, and application logic and design. There are options available to define what metadata replicates, or to specifically exclude certain metadata from replicating. Replication between two servers, or between a client and a server, can occur over a network or a point-to-point modem connection. Replication between servers may occur at intervals according to a defined schedule, in near-real-time when triggered by data changes in server clusters, or when triggered by an administrator or program. Creation of a local replica of an NSF file on the hard disk of an HCL Notes client enables the user to fully use Notes and Domino databases while working off-line. The client synchronizes any changes when client and server next connect. Local replicas are also sometimes maintained for use while connected to the network in order to reduce network latency. Replication between a Notes client and Domino server can run automatically according to a schedule, or manually in response to a user or programmatic request. Since Notes 6, local replicas maintain all security features programmed into the applications. Earlier releases of Notes did not always do so. Early releases also did not offer a way to encrypt NSF files, raising concerns that local replicas might expose too much confidential data on laptops or insecure home office computers, but more recent releases offer encryption, and encryption is the default setting for newly created local replicas.

Security
Lotus Notes was the first widely adopted software product to use public key cryptography for client–server and server–server authentication and for encryption of data. Until US laws regulating encryption were changed in 2000, IBM and Lotus were prohibited from exporting versions of Notes that supported symmetric encryption keys that were longer than 40 bits. In 1997, Lotus negotiated an agreement with the NSA that allowed export of a version that supported stronger keys with 64 bits, but 24 of the bits were encrypted with a special key and included in the message to provide a "workload reduction factor" for the NSA. This strengthened the protection for users of Notes outside the US against private-sector industrial espionage, but not against spying by the US government. This implementation was widely announced, but, with some justification, many people considered it to be a backdoor. Some governments objected to being put at a disadvantage to the NSA, and as a result Lotus continued to support the 40-bit version for export to those countries.
Notes and Domino also use a code-signature framework that controls the security context, runtime, and rights of custom code developed and introduced into the environment. Notes 5 introduced an execution control list (ECL) at the client level. The ECL allows or denies the execution of custom code based on the signature attached to it, preventing code from untrusted (and possibly malicious) sources from running. Notes and Domino 6 allowed client ECLs to be managed centrally by server administrators through the implementation of policies. Since release 4.5, the code signatures listed in properly configured ECLs prevent code from being executed by external sources, to avoid virus propagation through Notes/Domino environments. Administrators can centrally control whether each mailbox user can add exceptions to, and thus override, the ECL.

Database security
Access control lists (ACLs) control a user's or server's level of access to a database. Only a user with Manager access can create or modify the ACL. Default entries in the ACL can be set when the Manager creates the database. Roles, rather than user ID, can determine access level.

Programming
Notes and Domino together form a cross-platform, distributed, document-oriented NoSQL database and messaging framework and rapid application development environment that includes pre-built applications like email, calendar, etc. This sets it apart from its major commercial competitors, such as Microsoft Exchange or Novell GroupWise, which are purpose-built applications for mail and calendaring that offer APIs for extensibility. Domino databases are built using the Domino Designer client, available only for Microsoft Windows; standard user clients are available for Windows, Linux, and macOS. A key feature of Notes is that many replicas of the same database can exist at the same time on different servers and clients, across dissimilar platforms; the same storage architecture is used for both client and server replicas. Originally, replication in Notes happened at document (i.e., record) level. With the release of Notes 4 in 1996, replication was changed so that it now occurs at field level. A database is a Notes Storage Facility (.nsf) file, containing basic units of storage known as a "note". Every note has a UniqueID that is shared by all its replicas. Every replica also has a UniqueID that uniquely identifies it within any cluster of servers, a domain of servers, or even across domains belonging to many organizations that are all hosting replicas of the same database. Each note also stores its creation and modification dates, and one or more Items. There are several classes of notes, including design notes and document notes. Design notes are created and modified with the Domino Designer client, and represent programmable elements, such as the GUI layout of forms for displaying and editing data, or formulas and scripts for manipulating data. Document notes represent user data, and are created and modified with the Notes client, via a web browser, via mail routing and delivery, or via programmed code. Document notes can have parent-child relationships, but Notes should not be considered a hierarchical database in the classic sense of information management systems. Notes databases are also not relational, although there is a SQL driver that can be used with Notes, and it does have some features that can be used to develop applications that mimic relational features. Notes does not support atomic transactions, and its file locking is rudimentary.
Notes is a document-oriented database (document-based, schema-less, loosely structured) with support for rich content and powerful indexing facilities. This structure closely mimics paper-based work flows that Notes is typically used to automate. Items represent the content of a note. Every item has a name, a type, and may have some flags set. A note can have more than one item with the same name. Item types include Number, Number List, Text, Text List, Date-Time, Date-Time List, and Rich Text. Flags are used for managing attributes associated with the item, such as read or write security. Items in design notes represent the programmed elements of a database. For example, the layout of an entry form is stored in the rich text Body item within a form design note. This means that the design of the database can replicate to users' desktops just like the data itself, making it extremely easy to deploy updated applications. Items in document notes represent user-entered or computed data. An item named "Form" in a document note can be used to bind a document to a form design note, which directs the Notes client to merge the content of the document note items with the GUI information and code represented in the given form design note for display and editing purposes. However, other methods can be used to override this binding of a document to a form note. The resulting loose binding of documents to design information is one of the cornerstones of the power of Notes. Traditional database developers used to working with rigidly enforced schemas, on the other hand, may consider the power of this feature to be a double-edged sword. Notes application development uses several programming languages. Formula and LotusScript are the two original ones. LotusScript is similar to, and may even be considered a specialized implementation of, Visual Basic, but with the addition of many native classes that model the Notes environment, whereas Formula is similar to Lotus 1-2-3 formula language but is unique to Notes. Java was integrated into IBM Notes beginning with Release 4.5. With Release 5, Java support was greatly enhanced and expanded, and JavaScript was added. While LotusScript remains a primary tool in developing applications for the Lotus Notes client, Java and JavaScript are the primary tools for server-based processing, developing applications for browser access, and allowing browsers to emulate the functionality of the IBM Notes client. With XPages, the IBM Notes client can now natively process Java and JavaScript code, although applications development usually requires at least some code specific to only IBM Notes or only a browser. As of version 6, Lotus established an XML programming interface in addition to the options already available. The Domino XML Language (DXL) provides XML representations of all data and design resources in the Notes model, allowing any XML processing tool to create and modify IBM Notes and Domino data. Since Release 8.5, XPages were also integrated into IBM Notes. External to the Notes application, HCL provides toolkits in C, C++, and Java to connect to the Domino database and perform a wide variety of tasks. The C toolkit is the most mature, and the C++ toolkit is an objectized version of the C toolkit, lacking many functions the C toolkit provides. The Java toolkit is the least mature of the three and can be used for basic application needs. 
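As a concrete illustration of how the Java toolkit exposes this document model, the short sketch below opens a database, walks a view, and prints one item value from each document note. It is a minimal sketch only, not drawn from HCL documentation: it assumes that Notes.jar from a local Notes or Domino installation is on the classpath, and the database path (names.nsf), view name ("People") and item name ("FullName") are illustrative placeholders rather than part of any particular application.

import lotus.domino.*;

// Minimal sketch of the lotus.domino Java API: open a database, walk a view,
// and read one item from each document note. The database path, view name and
// item name are illustrative assumptions, not a prescribed setup.
public class ViewWalker {
    public static void main(String[] args) throws NotesException {
        NotesThread.sinitThread();                            // initialise the Notes runtime for this thread
        try {
            Session session = NotesFactory.createSession();          // local client session
            Database db = session.getDatabase("", "names.nsf");      // "" means the local machine
            View view = db.getView("People");                        // a view selects and sorts document notes
            Document doc = view.getFirstDocument();
            while (doc != null) {
                // Items are the named values stored on a document note
                System.out.println(doc.getItemValueString("FullName"));
                Document next = view.getNextDocument(doc);
                doc.recycle();                                // release the wrapped back-end object explicitly
                doc = next;
            }
        } finally {
            NotesThread.stermThread();                        // always pair with sinitThread()
        }
    }
}

The explicit recycle() calls and the sinitThread()/stermThread() pairing reflect the fact that these Java classes wrap native Notes structures rather than relying on garbage collection alone.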
Database
IBM Notes includes a database management system, but Notes files are different from relational or object databases because they are document-centric. Document-oriented databases such as Notes allow multiple values in items (fields), do not require a schema, come with built-in document-level access control, and store rich text data. IBM Domino 7 to 8.5.x supports the use of the IBM Db2 database as an alternative store for IBM Notes databases. This NSFDB2 feature, however, is now in maintenance mode with no further development planned. An IBM Notes database can be mapped to a relational database using tools like DECS, LEI, JDBCSql for Domino or NotesSQL.

Configuration
The HCL Domino server and the Notes client store their configuration in their own databases / application files (*.nsf). No relevant configuration settings are saved in the Windows Registry if the operating system is Windows. Some other configuration options (primarily the startup configuration) are stored in the notes.ini file (there are currently over 2,000 known options available).

Use as an email client
Notes is commonly deployed as an end-user email client in larger organizations. When an organization employs an HCL Domino server, it usually also deploys the supplied Notes client for accessing the Notes application for email and calendaring, but also for document management and workflow applications. As Notes is a runtime environment, and the email and calendaring functions in Notes are simply an application provided by HCL, administrators are free to develop alternate email and calendaring applications. It is also possible to alter, amend or extend the HCL-supplied email and calendaring application. The Domino server also supports POP3 and IMAP mail clients, and through an extension product (HCL mail support for Microsoft Outlook) supports native access for Microsoft Outlook clients. HCL also provides iNotes (in Notes 6.5 renamed to "Domino Web Access" but in version 8.0 reverted to iNotes) to allow the use of email and calendaring features through web browsers on Windows, Mac and Linux, such as Internet Explorer and Firefox. There are several spam filtering programs available (including IBM Lotus Protector), and a rules engine allowing user-defined mail processing to be performed by the server.

Comparison with other email clients
Notes was designed as a collaborative application platform where email was just one of numerous applications that ran in the Notes client software. The Notes client was also designed to run on multiple platforms including Windows, OS/2, classic Mac OS, SCO Open Desktop UNIX, and Linux. These two factors have resulted in the user interface containing some differences from applications that only run on Windows. Furthermore, these differences have often remained in the product to retain backward compatibility with earlier releases, instead of conforming to updated Windows UI standards. The following are some of these differences.
Properties dialog boxes for formatting text, hyperlinks and other rich-text information can remain open after a user makes changes to selected text. This provides flexibility to select new text and apply other formatting without closing the dialog box, selecting new text and opening a new format dialog box. Almost all other Windows applications require the user to close the dialog box, select new text, then open a new dialog box for formatting/changes.
Properties dialog boxes also automatically recognize the type of text selected and display appropriate selections (for instance, a hyperlink properties box).
Users can format tables as tabbed interfaces as part of form design (for applications) or within mail messages (or in rich-text fields in applications). This gives users the ability to apply tab-style organization to documents, similar to popular tab navigation in most web portals.
End-users can readily insert links to Notes applications, Notes views or other Notes documents into Notes documents.
Deleting a document (or email) will delete it from every folder in which it appears, since the folders simply contain links to the same back-end document. Some other email clients only delete the email from the current folder; if the email appears in other folders it is left alone, requiring the user to hunt through multiple folders in order to completely delete a message. In Notes, clicking on "Remove from Folder" will remove the document only from that folder, leaving all other instances intact.
The All Documents and Sent "views" differ from other collections of documents known as "folders" and exhibit different behaviors. Specifically, mail cannot be dragged out of them, and so removed from those views; the email can only be "copied" from them. This is because these are views, and their membership indexes are maintained according to characteristics of the documents contained in them, rather than based on user interaction as is the case for a folder. This technical difference can be baffling to users in environments where no training is given. The All Documents view contains all of the documents in a mailbox, no matter which folder they are in. The only way to remove something from All Documents is to delete it outright.
Lotus Notes 7 and older versions had more differences, which were removed from subsequent releases:
Users select a "New Memo" to send an email, rather than "New Mail" or "New Message". (Notes 8 calls the command "New Message")
To select multiple documents in a Notes view, one drags one's mouse next to the documents to select, rather than using Ctrl + single click. (Notes 8 uses keypress conventions.)
The searching function offers a "phrase search", rather than the more common "or search", and Notes requires users to spell out Boolean conditions in search strings. As a result, users must search for "delete AND folder" in order to find help text that contains the phrase "delete a folder". Searching for "delete folder" does not yield the desired result. (Notes 8 uses or-search conventions.)
Lotus Notes 8.0 (released in 2007) became the first version to employ a dedicated user-experience team, resulting in changes to the IBM Notes client experience and a new Notes user interface. This new interface runs in the open source Eclipse Framework, which is a project started by IBM, opening up more application development opportunities through the use of Eclipse plug-ins. The new interface provides many new user interface features and the ability to include user-selected applications/applets in small panes in the interface. Lotus Notes 8.0 also included a new email interface / design to match the new Lotus Notes 8.0 Eclipse-based interface. Eclipse is a Java framework and allows IBM to port Notes to other platforms rapidly. An issue with Eclipse, and therefore Notes 8.0, is application start-up and user-interaction speed.
Lotus Notes 8.5 sped up the application, and the increase in the general specification of PCs has made this less of an issue. IBM Notes 9 continued the evolution of the user interface to more closely align with modern application interfaces found in many commercial packaged or web-based software. Currently, the software still does not have an auto-correct option, or even the ability, to reverse accidental use of Caps Lock. Domino now runs on the Eclipse platform and offers many new development environments and tools such as XPages. For lower-spec PCs, a new version of the old interface is still provided, although, because it is the old interface, many of the new features are not available and the email user interface reverts to the Notes 7.x style. This new user experience builds on Notes 6.5 (released in 2003), which upgraded the email client, previously regarded by many as the product's Achilles heel. Features added at that time included:
drag and drop of folders
replication of unread marks between servers
follow-up flags
reply and forward indicators on emails
ability to edit an attachment and save the changes back to an email

Reception
Publications such as The Guardian in 2006 have criticized earlier versions of Lotus Notes for having an "unintuitive [user] interface" and cite widespread dissatisfaction with the usability of the client software. The Guardian indicated that Notes has not necessarily suffered as a result of this dissatisfaction due to the fact that "the people who choose [enterprise software] tend not to be the ones who use it." Earlier versions of Notes have also been criticized for violating an important usability best practice that suggests a consistent UI is often better than a custom alternative. Software written for a particular operating system should follow that particular OS's user interface style guide. Not following those style guides can confuse users. A notable example is the F5 keyboard shortcut, which is used to refresh window contents in Microsoft Windows. Pressing F5 in Lotus Notes before release 8.0 caused it to lock the screen. Since this was a major point of criticism, this was changed in release 8.0. Old versions did not support proportional scrollbars (which give the user an idea of how long the document is, relative to the portion being viewed). Proportional scroll bars were only introduced in Notes 8. Older versions of Notes also suffered from similar user interaction choices, many of which were also corrected in subsequent releases. One example that was corrected in Release 8.5: in earlier versions the out-of-office agent needed to be manually enabled when leaving and disabled when coming back, even if start and end dates had been set. As of Release 8.5, the out-of-office notification automatically shuts off without the need for a manual disable. Unlike some other e-mail client software programs, IBM Notes developers chose not to allow individual users to determine whether a return receipt is sent when they open an e-mail; rather, that option is configured at the server level. IBM developers believe "Allowing individual cancellation of return receipt violates the intent of a return receipt function within an organization". So, depending on system settings, users will have no choice in return receipts going back to spammers or other senders of unwanted e-mail. This has led tech sites to publish ways to get around this feature of Notes.
For IBM Notes 9.0 and IBM iNotes 9.0, the IBM Domino server's .INI file can now contain an entry to control return receipt in a manner that is more aligned with community expectations (IBM Notes 9 Product Documentation). When Notes crashes, some processes may continue running and prevent the application from being restarted until they are killed.

Related software

Related IBM Lotus products
Over the 30-year history of IBM Notes, Lotus Development Corporation and later IBM have developed many other software products that are based on, or integrated with, IBM Notes. The most prominent of these is the IBM Lotus Domino server software, which was originally known as the Lotus Notes Server and gained a separate name with the release of version 4.5. The server platform also became the foundation for products such as IBM Lotus Quickr for Domino, for document management, and IBM Sametime for instant messaging, audio and video communication, and web conferencing, and with Release 8.5, IBM Connections. In early releases of IBM Notes, there was considerable emphasis on client-side integration with the IBM Lotus SmartSuite environment. With Microsoft's increasing predominance in office productivity software, the desktop integration focus switched for a time to Microsoft Office. With the release of version 8.0 in 2007, based on the Eclipse framework, IBM again added integration with its own office-productivity suite, the OpenOffice.org-derived IBM Lotus Symphony. IBM Lotus Expeditor is a framework for developing Eclipse-based applications. Other IBM products and technologies have also been built to integrate with IBM Notes. For mobile-device synchronization, this previously included the client-side IBM Lotus Easysync Pro product (no longer in development) and IBM Notes Traveler, a newer no-charge server-side add-on for mail, calendar and contact sync. Recent additions to IBM's portfolio are two IBM Lotus Protector products for mail security and encryption, which have been built to integrate with IBM Notes.

Related software from other vendors
With a long market history and large installed base, Notes and Domino have spawned a large third-party software ecosystem. Such products can be divided into four broad and somewhat overlapping classes:
Notes and Domino applications are software programs written in the form of one or more Notes databases, and often supplied as NTF templates. This type of software typically is focused on providing business benefit from Notes' core collaboration, workflow and messaging capabilities. Examples include customer relationship management (CRM), human resources, and project tracking systems. Some applications of this sort may offer a browser interface in addition to Notes client access. The code within these programs typically uses the same languages available to an in-house Domino developer: Notes formula language, LotusScript, Java and JavaScript.
Notes and Domino add-ons, tools and extensions are generally executable programs written in C, C++ or another compiled language that are designed specifically to integrate with Notes and Domino. This class of software may include both client- and server-side executable components. In some cases, Notes databases may be used for configuration and reporting. Since the advent of the Eclipse-based Notes 8 Standard client, client-side add-ons may also include Eclipse plug-ins and XML-based widgets. The typical role for this type of software is to support or extend core Notes functionality.
Examples include spam and anti-virus products, server administration and monitoring tools, messaging and storage management products, policy-based tools, data synchronization tools and developer tools.
Notes and Domino-aware add-ins and agents are also executable programs, but they are designed to extend the reach of a general networked software product to Notes and Domino data. This class includes server and client backup software, anti-spam and anti-virus products, and e-discovery and archiving systems. It also includes add-ins to integrate Notes with third-party offerings such as the Cisco WebEx conferencing service or the Salesforce.com CRM platform.

History
Notes has a history spanning more than 30 years. Its chief inspiration was PLATO Notes, created by David R. Woolley at the University of Illinois in 1973. In today's terminology, PLATO Notes supported user-created discussion groups, and it was part of the foundation for an online community which thrived for more than 20 years on the PLATO system. Ray Ozzie worked with PLATO while attending the University of Illinois in the 1970s. When PC network technology began to emerge, Ozzie made a deal with Mitch Kapor, the founder of Lotus Development Corporation, that resulted in the formation of Iris Associates in 1984 to develop products that would combine the capabilities of PCs with the collaborative tools pioneered in PLATO. The agreement put control of product development under Ozzie and Iris, and sales and marketing under Lotus. In 1994, after the release and marketplace success of Notes R3, Lotus purchased Iris. In 1995 IBM purchased Lotus. In 2008, IBM released XPages technology, based on JavaServer Faces. This allows Domino applications to be better surfaced to browser clients, though the UX and business logic must be completely rewritten. Previously, Domino applications could be accessed through browsers, but required extensive web-specific modifications to get full functionality in browsers. XPages also gave the application new capabilities that are not possible with the classic Notes client. The IBM Domino 9 Social Edition included the Notes Browser Plugin, which would surface Notes applications through a minified version of the rich desktop client contained in a browser tab.

Branding
Prior to release 4.5, the Lotus Notes branding encompassed both the client and server applications. In 1996, Lotus released an HTTP server add-on for the Notes 4 server called "Domino". This add-on allowed Notes documents to be rendered as web pages in real time. Later that year, the Domino web server was integrated into release 4.5 of the core Notes server and the entire server program was re-branded, taking on the name "Domino". Only the client program officially retained the "Lotus Notes" name. In November 2012, IBM announced it would be dropping the Lotus brand and moving forward with the IBM brand only to identify products, including Notes and Domino. On October 9, 2018, IBM announced the availability of the latest version of the client and server software. In 2019, Domino and Notes became enterprise software products managed under HCLSoftware.

Release history

21st century
IBM donated parts of the IBM Notes and Domino code to OpenOffice.org on September 12, 2007 and since 2008 has been regularly donating code to OpenNTF.org.
Despite repeated predictions of the decline or impending demise of IBM Notes and Domino, such as Forbes magazine's 1998 "The decline and fall of Lotus", the installed base of Lotus Notes increased from an estimated 42 million seats in September 1998 to approximately 140 million cumulative licenses sold through 2008. Once IBM Workplace was discontinued in 2006, speculation about dropping Notes was rendered moot. Moreover, IBM introduced iNotes for iPhone two years later. IBM contributed some of the code it had developed for the integration of the OpenOffice.org suite into Notes 8 to the project. IBM also packaged its version of OpenOffice.org for free distribution as IBM Lotus Symphony. IBM Notes and Domino 9 Social Edition shipped on March 21, 2013. Changes included a significantly updated user interface, near-parity of IBM Notes and IBM iNotes functionality, the IBM Notes Browser Plugin, new XPages controls added to IBM Domino, a refreshed IBM Domino Designer user interface, added support for To Dos on Android mobile devices, and additional server functionality as detailed in the Announcement Letter. In late 2016, IBM announced that there would not be a Notes 9.0.2 release, but that 9.0.1 would be supported until at least 2021. In the same presentation IBM also stated that their internal users had been migrated away from Notes and onto the IBM Verse client. On October 25, 2017, IBM announced a plan to deliver a Domino V10 family update sometime in 2018, to be built in partnership with HCLTech. IBM's development and support teams responsible for these products moved to HCL; however, marketing and sales continued to be IBM-led. Product strategy is shared between IBM and HCL. As part of the announcement, IBM indicated that there is no formal end to product support planned. On October 9, 2018, IBM announced IBM Domino 10.0 and IBM Notes 10.0 in Frankfurt, Germany, and made them available to download on October 10, 2018.

See also
List of IBM products
IBM Collaboration Solutions (formerly Lotus) Software division
Comparison of email clients
IBM Lotus Domino Web Access
Comparison of feed aggregators
Lotus Multi-Byte Character Set (LMBCS)
NotesPeek
HCL Notes
[ "Technology" ]
7,308
[ "Email systems", "Telecommunications systems", "Computer systems" ]
60,426
https://en.wikipedia.org/wiki/Symbiogenesis
Symbiogenesis (endosymbiotic theory, or serial endosymbiotic theory) is the leading evolutionary theory of the origin of eukaryotic cells from prokaryotic organisms. The theory holds that mitochondria, plastids such as chloroplasts, and possibly other organelles of eukaryotic cells are descended from formerly free-living prokaryotes (more closely related to the Bacteria than to the Archaea) taken one inside the other in endosymbiosis. Mitochondria appear to be phylogenetically related to Rickettsiales bacteria, while chloroplasts are thought to be related to cyanobacteria. The idea that chloroplasts were originally independent organisms that merged into a symbiotic relationship with other one-celled organisms dates back to the 19th century, when it was espoused by researchers such as Andreas Schimper. The endosymbiotic theory was articulated in 1905 and 1910 by the Russian botanist Konstantin Mereschkowski, and advanced and substantiated with microbiological evidence by Lynn Margulis in 1967. Among the many lines of evidence supporting symbiogenesis are that mitochondria and plastids contain their own chromosomes and reproduce by splitting in two, parallel but separate from the sexual reproduction of the rest of the cell; that the chromosomes of some mitochondria and plastids are single circular DNA molecules similar to the circular chromosomes of bacteria; that the transport proteins called porins are found in the outer membranes of mitochondria and chloroplasts, and also bacterial cell membranes; and that cardiolipin is found only in the inner mitochondrial membrane and bacterial cell membranes. History The Russian botanist Konstantin Mereschkowski first outlined the theory of symbiogenesis (from Greek: σύν syn "together", βίος bios "life", and γένεσις genesis "origin, birth") in his 1905 work, The nature and origins of chromatophores in the plant kingdom, and then elaborated it in his 1910 The Theory of Two Plasms as the Basis of Symbiogenesis, a New Study of the Origins of Organisms. Mereschkowski proposed that complex life-forms had originated by two episodes of symbiogenesis, the incorporation of symbiotic bacteria to form successively nuclei and chloroplasts. Mereschkowski knew of the work of botanist Andreas Schimper. In 1883, Schimper had observed that the division of chloroplasts in green plants closely resembled that of free-living cyanobacteria. Schimper had tentatively proposed (in a footnote) that green plants had arisen from a symbiotic union of two organisms. In 1918 the French scientist Paul Jules Portier published Les Symbiotes, in which he claimed that the mitochondria originated from a symbiosis process. Ivan Wallin advocated the idea of an endosymbiotic origin of mitochondria in the 1920s. The Russian botanist Boris Kozo-Polyansky became the first to explain the theory in terms of Darwinian evolution. In his 1924 book A New Principle of Biology. Essay on the Theory of Symbiogenesis, he wrote, "The theory of symbiogenesis is a theory of selection relying on the phenomenon of symbiosis." These theories did not gain traction until more detailed electron-microscopic comparisons between cyanobacteria and chloroplasts were made, such as by Hans Ris in 1961 and 1962. These, combined with the discovery that plastids and mitochondria contain their own DNA, led to a resurrection of the idea of symbiogenesis in the 1960s. Lynn Margulis advanced and substantiated the theory with microbiological evidence in a 1967 paper, On the origin of mitosing cells. 
In her 1981 work Symbiosis in Cell Evolution she argued that eukaryotic cells originated as communities of interacting entities, including endosymbiotic spirochaetes that developed into eukaryotic flagella and cilia. This last idea has not received much acceptance, because flagella lack DNA and do not show ultrastructural similarities to bacteria or to archaea (see also: Evolution of flagella and Prokaryotic cytoskeleton). According to Margulis and Dorion Sagan, "Life did not take over the globe by combat, but by networking" (i.e., by cooperation). Christian de Duve proposed that the peroxisomes may have been the first endosymbionts, allowing cells to withstand growing amounts of free molecular oxygen in the Earth's atmosphere. However, it now appears that peroxisomes may be formed de novo, contradicting the idea that they have a symbiotic origin. The fundamental theory of symbiogenesis as the origin of mitochondria and chloroplasts is now widely accepted.

From endosymbionts to organelles
Biologists usually distinguish organelles from endosymbionts – whole organisms living inside other organisms – by their reduced genome sizes. As an endosymbiont evolves into an organelle, most of its genes are transferred to the host cell genome. The host cell and organelle therefore need to develop a transport mechanism that enables the return of the protein products needed by the organelle but now manufactured by the cell.

Free-living ancestors
Alphaproteobacteria were formerly thought to be the free-living organisms most closely related to mitochondria. Later research indicates that mitochondria are most closely related to Pelagibacterales bacteria, in particular those in the SAR11 clade. Nitrogen-fixing filamentous cyanobacteria are the free-living organisms most closely related to plastids. Both cyanobacteria and alphaproteobacteria maintain a large (>6 Mb) genome encoding thousands of proteins. Plastids and mitochondria exhibit a dramatic reduction in genome size when compared with their bacterial relatives. Chloroplast genomes in photosynthetic organisms are normally 120–200 kb, encoding 20–200 proteins, and mitochondrial genomes in humans are approximately 16 kb and encode 37 genes, 13 of which are proteins. However, using the example of the freshwater amoeboid Paulinella chromatophora, which contains chromatophores found to have evolved from cyanobacteria, Keeling and Archibald argue that this is not the only possible criterion; another is that the host cell has assumed control of the regulation of the former endosymbiont's division, thereby synchronizing it with the cell's own division. Nowack and her colleagues sequenced the chromatophore genome (1.02 Mb) and found that only 867 proteins were encoded by these photosynthetic cells. Comparisons with their closest free-living cyanobacteria of the genus Synechococcus (having a genome size of 3 Mb, with 3,300 genes) revealed that chromatophores had undergone a drastic genome shrinkage. Chromatophores contained genes responsible for photosynthesis but were deficient in genes that could carry out other biosynthetic functions; this observation suggests that these endosymbiotic cells are highly dependent on their hosts for their survival and growth mechanisms. Thus, these chromatophores were found to be non-functional for organelle-specific purposes when compared with mitochondria and plastids. This distinction could have promoted the early evolution of photosynthetic organelles.
The loss of genetic autonomy, that is, the loss of many genes from endosymbionts, occurred very early in evolutionary time. Taking into account the entire original endosymbiont genome, there are three main possible fates for genes over evolutionary time. The first is the loss of functionally redundant genes, in which genes that are already represented in the nucleus are eventually lost. The second is the transfer of genes to the nucleus, while the third is that genes remain in the organelle that was once an organism. The loss of autonomy and integration of the endosymbiont with its host can be primarily attributed to nuclear gene transfer. As organelle genomes have been greatly reduced over evolutionary time, nuclear genes have expanded and become more complex. As a result, many plastid and mitochondrial processes are driven by nuclear encoded gene products. In addition, many nuclear genes originating from endosymbionts have acquired novel functions unrelated to their organelles. Gene transfer mechanisms The mechanisms of gene transfer are not fully known; however, multiple hypotheses exist to explain this phenomenon. The possible mechanisms include the Complementary DNA (cDNA) hypothesis and the bulk flow hypothesis. The cDNA hypothesis involves the use of messenger RNA (mRNAs) to transport genes from organelles to the nucleus where they are converted to cDNA and incorporated into the genome. The cDNA hypothesis is based on studies of the genomes of flowering plants. Protein coding RNAs in mitochondria are spliced and edited using organelle-specific splice and editing sites. Nuclear copies of some mitochondrial genes, however, do not contain organelle-specific splice sites, suggesting a processed mRNA intermediate. The cDNA hypothesis has since been revised as edited mitochondrial cDNAs are unlikely to recombine with the nuclear genome and are more likely to recombine with their native mitochondrial genome. If the edited mitochondrial sequence recombines with the mitochondrial genome, mitochondrial splice sites would no longer exist in the mitochondrial genome. Any subsequent nuclear gene transfer would therefore also lack mitochondrial splice sites. The bulk flow hypothesis is the alternative to the cDNA hypothesis, stating that escaped DNA, rather than mRNA, is the mechanism of gene transfer. According to this hypothesis, disturbances to organelles, including autophagy (normal cell destruction), gametogenesis (the formation of gametes), and cell stress release DNA which is imported into the nucleus and incorporated into the nuclear DNA using non-homologous end joining (repair of double stranded breaks). For example, in the initial stages of endosymbiosis, due to a lack of major gene transfer, the host cell had little to no control over the endosymbiont. The endosymbiont underwent cell division independently of the host cell, resulting in many "copies" of the endosymbiont within the host cell. Some of the endosymbionts lysed (burst), and high levels of DNA were incorporated into the nucleus. A similar mechanism is thought to occur in tobacco plants, which show a high rate of gene transfer and whose cells contain multiple chloroplasts. In addition, the bulk flow hypothesis is also supported by the presence of non-random clusters of organelle genes, suggesting the simultaneous movement of multiple genes. Ford Doolittle proposed that (whatever the mechanism) gene transfer behaves like a ratchet, resulting in unidirectional transfer of genes from the organelle to the nuclear genome. 
When genetic material from an organelle is incorporated into the nuclear genome, either the organelle or nuclear copy of the gene may be lost from the population. If the organelle copy is lost and this is fixed, or lost through genetic drift, a gene is successfully transferred to the nucleus. If the nuclear copy is lost, horizontal gene transfer can occur again, and the cell can 'try again' to have successful transfer of genes to the nucleus. In this ratchet-like way, genes from an organelle would be expected to accumulate in the nuclear genome over evolutionary time. Endosymbiosis of protomitochondria Endosymbiotic theory for the origin of mitochondria suggests that the proto-eukaryote engulfed a protomitochondrion, and this endosymbiont became an organelle, a major step in eukaryogenesis, the creation of the eukaryotes. Mitochondria Mitochondria are organelles that synthesize the energy-carrying molecule ATP for the cell by metabolizing carbon-based macromolecules. The presence of DNA in mitochondria and proteins, derived from mtDNA, suggest that this organelle may have been a prokaryote prior to its integration into the proto-eukaryote. Mitochondria are regarded as organelles rather than endosymbionts because mitochondria and the host cells share some parts of their genome, undergo division simultaneously, and provide each other with means to produce energy. The endomembrane system and nuclear membrane were hypothesized to have derived from the protomitochondria. Nuclear membrane The presence of a nucleus is one major difference between eukaryotes and prokaryotes. Some conserved nuclear proteins between eukaryotes and prokaryotes suggest that these two types had a common ancestor. Another theory behind nucleation is that early nuclear membrane proteins caused the cell membrane to fold and form a sphere with pores like the nuclear envelope. As a way of forming a nuclear membrane, endosymbiosis could be expected to use less energy than if the cell was to develop a metabolic process to fold the cell membrane for the purpose. Digesting engulfed cells without energy-producing mitochondria would have been challenging for the host cell. On this view, membrane-bound bubbles or vesicles leaving the protomitochondria may have formed the nuclear envelope. The process of symbiogenesis by which the early eukaryotic cell integrated the proto-mitochondrion likely included protection of the archaeal host genome from the release of reactive oxygen species. These would have been formed during oxidative phosphorylation and ATP production by the proto-mitochondrion. The nuclear membrane may have evolved as an adaptive innovation for protecting against nuclear genome DNA damage caused by reactive oxygen species. Substantial transfer of genes from the ancestral proto-mitochondrial genome to the nuclear genome likely occurred during early eukaryotic evolution. The greater protection of the nuclear genome against reactive oxygen species afforded by the nuclear membrane may explain the adaptive benefit of this gene transfer. Endomembrane system Modern eukaryotic cells use the endomembrane system to transport products and wastes in, within, and out of cells. The membrane of nuclear envelope and endomembrane vesicles are composed of similar membrane proteins. These vesicles also share similar membrane proteins with the organelle they originated from or are traveling towards. This suggests that what formed the nuclear membrane also formed the endomembrane system. 
Prokaryotes do not have a complex internal membrane network like eukaryotes, but they could produce extracellular vesicles from their outer membrane. After the early prokaryote was consumed by a proto-eukaryote, the prokaryote would have continued to produce vesicles that accumulated within the cell. Interaction of internal components of vesicles may have led to the endoplasmic reticulum and the Golgi apparatus, both being parts of the endomembrane system. Cytoplasm The syntrophy hypothesis, proposed by López-García and Moreira around the year 2000, suggested that eukaryotes arose by combining the metabolic capabilities of an archaean, a fermenting deltaproteobacterium, and a methanotrophic alphaproteobacterium which became the mitochondrion. In 2020, the same team updated their syntrophy proposal to cover an Asgard archaean that produced hydrogen with deltaproteobacterium that oxidised sulphur. A third organism, an alphaproteobacterium able to respire both aerobically and anaerobically, and to oxidise sulphur, developed into the mitochondrion; it may possibly also have been able to photosynthesise. Date The question of when the transition from prokaryotic to eukaryotic form occurred and when the first crown group eukaryotes appeared on earth is unresolved. The oldest known body fossils that can be positively assigned to the Eukaryota are acanthomorphic acritarchs from the 1.631 Gya Deonar Formation of India. These fossils can still be identified as derived post-nuclear eukaryotes with a sophisticated, morphology-generating cytoskeleton sustained by mitochondria. This fossil evidence indicates that endosymbiotic acquisition of alphaproteobacteria must have occurred before 1.6 Gya. Molecular clocks have also been used to estimate the last eukaryotic common ancestor, however these methods have large inherent uncertainty and give a wide range of dates. Reasonable results include the estimate of c. 1.8 Gya. A 2.3 Gya estimate also seems reasonable, and has the added attraction of coinciding with one of the most pronounced biogeochemical perturbations in Earth history, the early Palaeoproterozoic Great Oxygenation Event. The marked increase in atmospheric oxygen concentrations at that time has been suggested as a contributing cause of eukaryogenesis, inducing the evolution of oxygen-detoxifying mitochondria. Alternatively, the Great Oxidation Event might be a consequence of eukaryogenesis, and its impact on the export and burial of organic carbon. Organellar genomes Plastomes and mitogenomes Some endosymbiont genes remain in the organelles. Plastids and mitochondria retain genes encoding rRNAs, tRNAs, proteins involved in redox reactions, and proteins required for transcription, translation, and replication. There are many hypotheses to explain why organelles retain a small portion of their genome; however no one hypothesis will apply to all organisms, and the topic is still quite controversial. The hydrophobicity hypothesis states that highly hydrophobic (water hating) proteins (such as the membrane bound proteins involved in redox reactions) are not easily transported through the cytosol and therefore these proteins must be encoded in their respective organelles. The code disparity hypothesis states that the limit on transfer is due to differing genetic codes and RNA editing between the organelle and the nucleus. The redox control hypothesis states that genes encoding redox reaction proteins are retained in order to effectively couple the need for repair and the synthesis of these proteins. 
For example, if one of the photosystems is lost from the plastid, the intermediate electron carriers may lose or gain too many electrons, signalling the need for repair of a photosystem. The time delay involved in signalling the nucleus and transporting a cytosolic protein to the organelle results in the production of damaging reactive oxygen species. The final hypothesis states that the assembly of membrane proteins, particularly those involved in redox reactions, requires coordinated synthesis and assembly of subunits; however, translation and protein transport coordination is more difficult to control in the cytoplasm. Non-photosynthetic plastid genomes The majority of the genes in the mitochondria and plastids are related to the expression (transcription, translation and replication) of genes encoding proteins involved in either photosynthesis (in plastids) or cellular respiration (in mitochondria). One might predict that the loss of photosynthesis or cellular respiration would allow for the complete loss of the plastid genome or the mitochondrial genome respectively. While there are numerous examples of mitochondrial descendants (mitosomes and hydrogenosomes) that have lost their entire organellar genome, non-photosynthetic plastids tend to retain a small genome. There are two main hypotheses to explain this occurrence: The essential tRNA hypothesis notes that there have been no documented functional plastid-to-nucleus gene transfers of genes encoding RNA products (tRNAs and rRNAs). As a result, plastids must make their own functional RNAs or import nuclear counterparts. The genes encoding tRNA-Glu and tRNA-fmet, however, appear to be indispensable. The plastid is responsible for haem biosynthesis, which requires plastid encoded tRNA-Glu (from the gene trnE) as a precursor molecule. Like other genes encoding RNAs, trnE cannot be transferred to the nucleus. In addition, it is unlikely trnE could be replaced by a cytosolic tRNA-Glu as trnE is highly conserved; single base changes in trnE have resulted in the loss of haem synthesis. The gene for tRNA-formylmethionine (tRNA-fmet) is also encoded in the plastid genome and is required for translation initiation in both plastids and mitochondria. A plastid is required to continue expressing the gene for tRNA-fmet so long as the mitochondrion is translating proteins. The limited window hypothesis offers a more general explanation for the retention of genes in non-photosynthetic plastids. According to this hypothesis, genes are transferred to the nucleus following the disturbance of organelles. Disturbance was common in the early stages of endosymbiosis, however, once the host cell gained control of organelle division, eukaryotes could evolve to have only one plastid per cell. Having only one plastid severely limits gene transfer as the lysis of the single plastid would likely result in cell death. Consistent with this hypothesis, organisms with multiple plastids show an 80-fold increase in plastid-to-nucleus gene transfer compared with organisms with single plastids. Evidence There are many lines of evidence that mitochondria and plastids including chloroplasts arose from bacteria. New mitochondria and plastids are formed only through binary fission, the form of cell division used by bacteria and archaea. If a cell's mitochondria or chloroplasts are removed, the cell does not have the means to create new ones. 
In some algae, such as Euglena, the plastids can be destroyed by certain chemicals or prolonged absence of light without otherwise affecting the cell: the plastids do not regenerate. Transport proteins called porins are found in the outer membranes of mitochondria and chloroplasts and are also found in bacterial cell membranes. A membrane lipid cardiolipin is exclusively found in the inner mitochondrial membrane and bacterial cell membranes. Some mitochondria and some plastids contain single circular DNA molecules that are similar to the DNA of bacteria both in size and structure. Genome comparisons suggest a close relationship between mitochondria and Alphaproteobacteria. Genome comparisons suggest a close relationship between plastids and cyanobacteria. Many genes in the genomes of mitochondria and chloroplasts have been lost or transferred to the nucleus of the host cell. Consequently, the chromosomes of many eukaryotes contain genes that originated from the genomes of mitochondria and plastids. Mitochondria and plastids contain their own ribosomes; these are more similar to those of bacteria (70S) than those of eukaryotes. Proteins created by mitochondria and chloroplasts use N-formylmethionine as the initiating amino acid, as do proteins created by bacteria but not proteins created by eukaryotic nuclear genes or archaea. Secondary endosymbiosis Primary endosymbiosis involves the engulfment of a cell by another free living organism. Secondary endosymbiosis occurs when the product of primary endosymbiosis is itself engulfed and retained by another free living eukaryote. Secondary endosymbiosis has occurred several times and has given rise to extremely diverse groups of algae and other eukaryotes. Some organisms can take opportunistic advantage of a similar process, where they engulf an alga and use the products of its photosynthesis, but once the prey item dies (or is lost) the host returns to a free living state. Obligate secondary endosymbionts become dependent on their organelles and are unable to survive in their absence. A secondary endosymbiosis event involving an ancestral red alga and a heterotrophic eukaryote resulted in the evolution and diversification of several other photosynthetic lineages including Cryptophyta, Haptophyta, Stramenopiles (or Heterokontophyta), and Alveolata. A possible secondary endosymbiosis has been observed in process in the heterotrophic protist Hatena. This organism behaves like a predator until it ingests a green alga, which loses its flagella and cytoskeleton but continues to live as a symbiont. Hatena meanwhile, now a host, switches to photosynthetic nutrition, gains the ability to move towards light, and loses its feeding apparatus. Despite the diversity of organisms containing plastids, the morphology, biochemistry, genomic organisation, and molecular phylogeny of plastid RNAs and proteins suggest a single origin of all extant plastids – although this theory was still being debated in 2008. Nitroplasts A unicellular marine alga, Braarudosphaera bigelowii (a coccolithophore, which is a eukaryote), has been found with a cyanobacterium as an endosymbiont. The cyanobacterium forms a nitrogen-fixing structure, dubbed the nitroplast. It divides evenly when the host cell undergoes mitosis, and many of its proteins derive from the host alga, implying that the endosymbiont has proceeded far along the path towards becoming an organelle. The cyanobacterium is named Candidatus Atelocyanobacterium thalassa, and is abbreviated UCYN-A. 
The alga is the first eukaryote known to have the ability to fix nitrogen. See also Angomonas deanei, a protozoan that harbours an obligate bacterial symbiont Hatena arenicola, a species that appears to be in the process of acquiring an endosymbiont Hydrogen hypothesis, the theory that mitochondria were acquired by hydrogen-dependent archaea, their endosymbionts being facultatively anaerobic bacteria Kleptoplasty, the sequestering of plastids from ingested algae Mixotricha paradoxa, which itself is a symbiont, contains numerous endosymbiotic bacteria Parakaryon myojinensis, a possible result of endosymbiosis independent of eukaryotes Parasite Eve, fiction about endosymbiosis Strigomonas culicis, another protozoan that harbours an obligate bacterial symbiont Viral eukaryogenesis, hypothesis that the cell nucleus originated from endosymbiosis References Further reading (General textbook) (Discusses theory of origin of eukaryotic cells by incorporating mitochondria and chloroplasts into anaerobic cells with emphasis on 'phage bacterial and putative viral mitochondrial/chloroplast interactions.) (Recounts evidence that chloroplast-encoded proteins affect transcription of nuclear genes, as opposed to the more well-documented cases of nuclear-encoded proteins that affect mitochondria or chloroplasts.) (Discusses theories on how mitochondria and chloroplast genes are transferred into the nucleus, and also what steps a gene needs to go through in order to complete this process.) External links Tree of Life Eukaryotes Biological hypotheses Endosymbiotic events Evolutionary biology Symbiosis Microbiology Eukaryote genetics
Symbiogenesis
[ "Chemistry", "Biology" ]
5,808
[ "Evolutionary biology", "Eukaryote genetics", "Symbiosis", "Behavior", "Endosymbiotic events", "Biological interactions", "Microbiology", "Microscopy", "Biological hypotheses", "Genetics by type of organism" ]
60,433
https://en.wikipedia.org/wiki/Nuclear%20bunker%20buster
A nuclear bunker buster, also known as an earth-penetrating weapon (EPW), is the nuclear equivalent of the conventional bunker buster. The non-nuclear component of the weapon is designed to penetrate soil, rock, or concrete to deliver a nuclear warhead to an underground target. These weapons would be used to destroy hardened, underground military bunkers or other below-ground facilities. An underground explosion releases a larger fraction of its energy into the ground, compared to a surface burst or air burst explosion at or above the surface, and so can destroy an underground target using a lower explosive yield. This in turn could lead to a reduced amount of radioactive fallout. However, it is unlikely that the explosion would be completely contained underground. As a result, significant amounts of rock and soil would be rendered radioactive and lofted as dust or vapor into the atmosphere, generating significant fallout. Base principle While conventional bunker busters use several methods to penetrate concrete structures, these are for the purpose of destroying the structure directly, and are generally limited in how much of a bunker (or system of bunkers) they can destroy by depth and their relatively low explosive force (compared to nuclear weapons). The primary difference between conventional and nuclear bunker busters is that, while the conventional version is meant for one target, the nuclear version can destroy an entire underground bunker system. The main principles in modern bunker design are largely centered around survivability in nuclear war. As a result of this both American and Soviet sites reached a state of "super hardening", involving defenses against the effects of a nuclear weapon such as spring- or counterweight-mounted (in the case of the R-36) control capsules and thick concrete walls ( for the Minuteman ICBM launch control capsule) heavily reinforced with rebar. These systems were designed to survive a near miss of 20 megatons. Liquid-fueled missiles such as those historically used by Russia are more fragile and easily damaged than solid-fueled missiles such as those used by the United States. The complex fuel storage facilities and equipment needed to fuel missiles for launch and de-fuel them for frequent maintenance add additional weaknesses and vulnerabilities. Therefore, a similar degree of silo "hardening" does not automatically equate to a similar level of missile "survivability". Major advancements in the accuracy and precision of nuclear and conventional weapons subsequent to the invention of the missile silo itself have also rendered many "hardening" technologies useless. With modern weapons capable of striking within feet (meters) of their intended targets, a modern "near miss" can be much more effective than a "hit" decades ago. A weapon need only cover the silo door with sufficient debris to prevent its immediate opening to render the missile inside useless for its intended mission of rapid strike or counter-strike deployment. A nuclear bunker buster negates most of the countermeasures involved in the protection of underground bunkers by penetrating the defenses prior to detonating. A relatively low yield may be able to produce seismic forces beyond those of an air burst or even ground burst of a weapon with twice its yield. Additionally, the weapon has the ability to impart more severe horizontal shock waves than many bunker systems are designed to combat by detonating at or near the bunker's depth, rather than above it. 
Geologic factors also play a major role in weapon effectiveness and facility survivability. Locating facilities in hard rock may appear to reduce the effectiveness of bunker-buster type weapons by decreasing penetration, but the hard rock also transmits shock forces to a far higher degree than softer soil types. The difficulties of drilling into and constructing facilities within hard rock also increase construction time and expense, as well as making it more likely construction will be discovered and new sites targeted by foreign militaries. Methods of operation Penetration by explosive force Concrete structure design has not changed significantly in the last 70 years. The majority of protected concrete structures in the U.S. military are derived from standards set forth in Fundamentals of Protective Design, published in 1946 (US Army Corps of Engineers). Various augmentations, such as glass, fibers, and rebar, have made concrete less vulnerable, but far from impenetrable. When explosive force is applied to concrete, three major fracture regions are usually formed: the initial crater, a crushed aggregate surrounding the crater, and "scabbing" on the surface opposite the crater. Scabbing, also known as spalling, is the violent separation of a mass of material from the opposite face of a plate or slab subjected to an impact or impulsive loading, without necessarily requiring that the barrier itself be penetrated. While soil is a less dense material, it also does not transmit shock waves as well as concrete. So while a penetrator may actually travel further through soil, its effect may be lessened due to its inability to transmit shock to the target. Hardened penetrator Further thinking on the subject envisions a hardened penetrator using kinetic energy to defeat the target's defenses and subsequently deliver a nuclear explosive to the buried target. The primary difficulty facing the designers of such a penetrator is the tremendous heat applied to the penetrator unit when striking the shielding (surface) at hundreds of meters per second. This has partially been solved by using metals such as tungsten (the metal with the highest melting point), and altering the shape of the projectile (such as an ogive). Altering the shape of the projectile to incorporate an ogive shape has yielded substantial improvement in penetration ability. Rocket sled testing at Eglin Air Force Base has demonstrated penetrations of in concrete when traveling at . The reason for this is liquefaction of the concrete in the target, which tends to flow over the projectile. Variation in the speed of the penetrator can either cause it to be vaporized on impact (in the case of traveling too fast), or to not penetrate far enough (in the case of traveling too slowly). An approximation for the penetration depth is obtained with an impact depth formula derived by Sir Isaac Newton. Combination penetrator-explosive munitions Another school of thought on nuclear bunker busters is using a light penetrator to travel 15 to 30 meters through shielding, and detonate a nuclear charge there. Such an explosion would generate powerful shock waves, which would be transmitted very effectively through the solid material comprising the shielding (see "scabbing" above). Policy and criticism of fallout The main criticisms of nuclear bunker busters regard fallout and nuclear proliferation. 
The purpose of an earth-penetrating nuclear bunker buster is to reduce the required yield needed to ensure the destruction of the target by coupling the explosion to the ground, yielding a shock wave similar to an earthquake. For example, the United States retired the B-53 warhead, with a yield of nine megatons, because the B-61 Mod 11 could attack similar targets with much lower yield (400 kilotons), due to the latter's superior ground penetration. By burying itself into the ground before detonation, a much higher proportion of the explosion energy is transferred to seismic shock when compared to the surface burst produced from the B-53's laydown delivery. Moreover, the globally dispersed fallout of an underground B-61 Mod 11 would likely be less than that of a surface burst B-53. Supporters note that this is one of the reasons nuclear bunker busters should be developed. Critics claim that developing new nuclear weapons sends a proliferating message to non-nuclear powers, undermining non-proliferation efforts. Critics also worry that the existence of lower-yield nuclear weapons for relatively limited tactical purposes will lower the threshold for their actual use, thus blurring the sharp line between conventional weapons intended for use and weapons of mass destruction intended only for hypothetical deterrence, and increasing the risk of escalation to higher-yield nuclear weapons. Local fallout from any nuclear detonation is increased with proximity to the ground. While a megaton-class yield surface burst will inevitably throw up many tons of (newly) radioactive debris, which falls back to the earth as fallout, critics contend that despite their relatively minuscule explosive yield, nuclear bunker busters create more local fallout per kiloton yield. Also, because of the subsurface detonation, radioactive debris may contaminate the local groundwater. The Union of Concerned Scientists advocacy group points out that at the Nevada Test Site, the depth required to contain fallout from an average-yield underground nuclear test was over 100 meters, depending upon the weapon's yield. They contend that it is improbable that penetrators could be made to burrow so deeply. With yields between 0.3 and 340 kilotons, they argue, it is unlikely the blast would be completely contained. Critics further state that the testing of new nuclear weapons would be prohibited by the proposed Comprehensive Test Ban Treaty. Although Congress refused to ratify the CTBT in 1999, and therefore this treaty has no legal force in the US, the US has adhered to the spirit of the treaty by maintaining a moratorium on nuclear testing since 1992. Proponents, however, contend that lower explosive yield devices and subsurface bursts would produce little to no climatic effects in the event of a nuclear war, in contrast to multi-megaton air and surface bursts (that is, if the nuclear winter hypothesis proves accurate). Lower fuzing heights, which would result from partially buried warheads, would limit or completely obstruct the range of the burning thermal rays of a nuclear detonation, therefore limiting the target, and its surroundings, to a fire hazard by reducing the range of thermal radiation with fuzing for subsurface bursts. Professors Altfeld and Cimbala have suggested that belief in the possibility of nuclear winter has actually made nuclear war more likely, contrary to the views of Carl Sagan and others, because it has inspired the development of more accurate, and lower explosive yield, nuclear weapons. 
Targets and the development of bunker busters As early as 1944, the Barnes Wallis Tallboy bomb and subsequent Grand Slam weapons were designed to penetrate deeply fortified structures through sheer explosive power. These were not designed to directly penetrate defences, though they could do this (for example, the Valentin submarine pens had ferrous concrete roofs thick which were penetrated by two Grand Slams on 27 March 1945), but rather to penetrate under the target and explode leaving a camouflet (cavern) which would undermine foundations of structures above, causing it to collapse, thus negating any possible hardening. The destruction of targets such as the V3 battery at Mimoyecques was the first operational use of the Tallboy. One bored through a hillside and exploded in the Saumur rail tunnel about below, completely blocking it, and showing that these weapons could destroy any hardened or deeply excavated installation. Modern targeting techniques allied with multiple strikes could perform a similar task. Development continued, with weapons such as the nuclear B61, and conventional thermobaric weapons and GBU-28. One of the more effective housings, the GBU-28 used its large mass () and casing (constructed from barrels of surplus 203 mm howitzers) to penetrate of concrete, and more than of earth. The B61 Mod 11, which first entered military service after the Cold war had ended, in January 1997, was specifically developed to allow for bunker penetration, and is speculated to have the ability to destroy hardened targets a few hundred feet beneath the earth. While penetrations of were sufficient for some shallow targets, both the Soviet Union and the United States were creating bunkers buried under huge volumes of soil or reinforced concrete in order to withstand the multi-megaton thermonuclear weapons developed in the 1950s and 1960s. Bunker penetration weapons were initially designed within this Cold War context. One likely Soviet Union/Russian target, Mount Yamantau, was regarded in the 1990s by Maryland Republican congressman, Roscoe Bartlett, as capable of surviving "half a dozen" repeated nuclear strikes of an unspecified yield, one after the other in a "direct hole". The Russian continuity of government facility at Kosvinsky Mountain, finished in early 1996, was designed to resist US earth-penetrating warheads and serves a similar role as the American Cheyenne Mountain Complex. The timing of the Kosvinsky completion date is regarded as one explanation for US interest in a new nuclear bunker buster and the declaration of the deployment of the B-61 Mod 11 in 1997. Kosvinsky is protected by about 300 meters (1000 feet) of granite. The weapon was revisited after the Cold War during the 2001 U.S. invasion of Afghanistan, and again during the 2003 invasion of Iraq. During the campaign in Tora Bora in particular, the United States believed that "vast underground complexes," deeply buried, were protecting opposing forces. Such complexes were not found. While a nuclear penetrator (the "Robust Nuclear Earth Penetrator", or "RNEP") was never built, the U.S. DOE was allotted budget to develop it, and tests were conducted by the U.S. Air Force Research Laboratory. The RNEP was to use the 1.2 megaton B83 physics package. The Bush administration removed its request for funding of the weapon in October 2005. Additionally, then U.S. Senator Pete Domenici announced funding for the nuclear bunker-buster has been dropped from the U.S. Department of Energy's 2006 budget at the department's request. 
While the project for the RNEP seems to be in fact canceled, Jane's Information Group speculated in 2005 that work might continue under another name. A more recent development (c. 2012) is the GBU-57 Massive Ordnance Penetrator, a 30,000 pound (14,000 kg) conventional gravity bomb. The USAF's B-2 Spirit bombers can each carry two such weapons. Notable US nuclear bunker busters Note that with the exception of strictly earth penetrating weapons, others were designed with air burst capability and some were depth charges as well. Mark 8 nuclear bomb (1952–1957): earth penetrating W8 for SSM-N-8 Regulus (cancelled): earth penetrating Mark 11 nuclear bomb (1956–1960): earth penetrating Mk 105 Hotpoint (1958–1965): laydown delivery B28 nuclear bomb (1958–1991): laydown delivery and ground burst Mark 39 nuclear bomb (1958–1962) laydown delivery and ground burst B43 nuclear bomb (1961–1990): laydown delivery and ground burst B53 nuclear bomb (1962–1997): laydown delivery B57 nuclear bomb (1963–1993): laydown delivery B61 nuclear bomb (1968–present): laydown delivery and ground burst Mod 11 (1997–present): earth penetrating, laydown delivery, and ground burst W61 for MGM-134 Midgetman (cancelled): earth penetrating B77 nuclear bomb (cancelled): laydown delivery B83 nuclear bomb (1983–present): laydown delivery and ground burst W86 for Pershing II (cancelled): earth penetrating Robust Nuclear Earth Penetrator (cancelled): earth penetrating See also Bunker buster (conventional, non-nuclear) Earthquake bomb Underground nuclear weapons testing Nuclear strategy Thermobaric weapon Nuclear weapon List of nuclear weapons Citations References . . . . External links Allbombs.html list of all US nuclear warheads at nuclearweaponarchive.org . . . / . Nuclear warfare Anti-fortification weapons Nuclear bombs Nuclear weapon design
Nuclear bunker buster
[ "Chemistry" ]
3,176
[ "Radioactivity", "Nuclear warfare" ]
60,455
https://en.wikipedia.org/wiki/Line-of-sight%20propagation
Line-of-sight propagation is a characteristic of electromagnetic radiation or acoustic wave propagation in which waves can only travel in a direct visual path from the source to the receiver without obstacles. Electromagnetic transmission includes light emissions traveling in a straight line. The rays or waves may be diffracted, refracted, reflected, or absorbed by the atmosphere and by material obstructions, and generally cannot travel over the horizon or behind obstacles. In contrast to line-of-sight propagation, at low frequency (below approximately 3 MHz), radio waves can travel as ground waves due to diffraction; these follow the contour of the Earth. This enables AM radio stations to transmit beyond the horizon. Additionally, frequencies in the shortwave bands, between approximately 1 and 30 MHz, can be refracted back to Earth by the ionosphere, called skywave or "skip" propagation, thus giving radio transmissions in this range a potentially global reach. However, at frequencies above 30 MHz (VHF and higher) and in lower levels of the atmosphere, neither of these effects is significant. Thus, any obstruction between the transmitting antenna (transmitter) and the receiving antenna (receiver) will block the signal, just like the light that the eye may sense. Therefore, since the ability to visually see a transmitting antenna (disregarding the limitations of the eye's resolution) roughly corresponds to the ability to receive a radio signal from it, the propagation characteristic at these frequencies is called "line-of-sight". The farthest possible point of propagation is referred to as the "radio horizon". In practice, the propagation characteristics of these radio waves vary substantially depending on the exact frequency and the strength of the transmitted signal (a function of both the transmitter and the antenna characteristics). Broadcast FM radio, at comparatively low frequencies of around 100 MHz, is less affected by the presence of buildings and forests. Impairments to line-of-sight propagation Low-powered microwave transmitters can be foiled by tree branches, or even heavy rain or snow. The presence of objects not in the direct line-of-sight can cause diffraction effects that disrupt radio transmissions. For the best propagation, a volume known as the first Fresnel zone should be free of obstructions. Reflected radiation from the surface of the surrounding ground or salt water can also either cancel out or enhance the direct signal. This effect can be reduced by raising either or both antennas further from the ground: the reduction in loss achieved is known as height gain. See also Non-line-of-sight propagation for more on impairments in propagation. It is important to take into account the curvature of the Earth for calculation of line-of-sight paths from maps, when a direct visual fix cannot be made. Designs for microwave formerly used  Earth radius to compute clearances along the path. Mobile telephones Although the frequencies used by mobile phones (cell phones) are in the line-of-sight range, they still function in cities. 
This is made possible by a combination of the following effects:
propagation over the rooftop landscape
diffraction into the "street canyon" below
multipath reflection along the street
diffraction through windows, and attenuated passage through walls, into the building
reflection, diffraction, and attenuated passage through internal walls, floors and ceilings within the building
The combination of all these effects makes the mobile phone propagation environment highly complex, with multipath effects and extensive Rayleigh fading. For mobile phone services, these problems are tackled using:
rooftop or hilltop positioning of base stations
many base stations (usually called "cell sites"). A phone can typically see at least three, and usually as many as six at any given time.
"sectorized" antennas at the base stations. Instead of one antenna with omnidirectional coverage, the station may use as few as 3 (rural areas with few customers) or as many as 32 separate antennas, each covering a portion of the circular coverage. This allows the base station to use a directional antenna that is pointing at the user, which improves the signal-to-noise ratio. If the user moves (perhaps by walking or driving) from one antenna sector to another, the base station automatically selects the proper antenna.
rapid handoff between base stations (roaming)
the radio link used by the phones is a digital link with extensive error correction and detection in the digital protocol
sufficient operation of mobile phone in tunnels when supported by split cable antennas
local repeaters inside complex vehicles or buildings
A Faraday cage is composed of a conductor that completely surrounds an area on all sides, top, and bottom. Electromagnetic radiation is blocked where the wavelength is longer than any gaps. For example, mobile telephone signals are blocked in windowless metal enclosures that approximate a Faraday cage, such as elevator cabins, and parts of trains, cars, and ships. The same problem can affect signals in buildings with extensive steel reinforcement. Radio horizon The radio horizon is the locus of points at which direct rays from an antenna are tangential to the surface of the Earth. If the Earth were a perfect sphere without an atmosphere, the radio horizon would be a circle. The radio horizon of the transmitting and receiving antennas can be added together to increase the effective communication range. Radio wave propagation is affected by atmospheric conditions, ionospheric absorption, and the presence of obstructions, for example mountains or trees. Simple formulas that include the effect of the atmosphere give the range as: The simple formulas give a best-case approximation of the maximum propagation distance, but are not sufficient to estimate the quality of service at any location. Earth bulge In telecommunications, Earth bulge refers to the effect of earth's curvature on radio propagation. It is a consequence of a circular segment of earth profile that blocks off long-distance communications. Since the vacuum line of sight passes at varying heights over the Earth, the propagating radio wave encounters slightly different propagation conditions over the path. Vacuum distance to horizon Assuming a perfect sphere with no terrain irregularity, the distance to the horizon from a high altitude transmitter (i.e., line of sight) can readily be calculated. Let R be the radius of the Earth and h be the altitude of a telecommunication station. 
The line of sight distance d of this station is given by the Pythagorean theorem; The altitude of the station h is much smaller than the radius of the Earth R. Therefore, can be neglected compared with . Thus: If the height h is given in metres, and distance d in kilometres, If the height h is given in feet, and the distance d in statute miles, In the case when two stations are involved, e.g. a transmitting station on the ground with height h and a receiving station in the air with height H, the line-of-sight distance can be calculated as follows: Atmospheric refraction The usual effect of the declining pressure of the atmosphere with height (vertical pressure variation) is to bend (refract) radio waves down towards the surface of the Earth. This results in an effective Earth radius, increased by a factor around . This k-factor can change from its average value depending on weather. Refracted distance to horizon The previous vacuum distance analysis does not consider the effect of atmosphere on the propagation path of RF signals. In fact, RF signals do not propagate in straight lines: because of the refractive effects of atmospheric layers, the propagation paths are somewhat curved. Thus, the maximum service range of the station is not equal to the line of sight vacuum distance. Usually, a factor k is used in the equation above, modified so that the Earth radius R is replaced by kR: k > 1 means a geometrically reduced bulge and a longer service range, while k < 1 means a shorter service range. Under normal weather conditions, k is usually chosen to be . That means that the maximum service range increases by 15%. for h in metres and d in kilometres; or for h in feet and d in miles. But in stormy weather, k may decrease to cause fading in transmission. (In extreme cases k can be less than 1.) That is equivalent to a hypothetical decrease in Earth radius and an increase of Earth bulge. For example, in normal weather conditions, the service range of a station at an altitude of 1500 m with respect to receivers at sea level can be found as, See also Anomalous propagation Dipole field strength in free space Knife-edge effect Multilateration Non-line-of-sight propagation Over-the-horizon radar Radial (radio) Rician fading, stochastic model of line-of-sight propagation Slant range References External links http://web.telia.com/~u85920178/data/pathlos.htm#bulges Article on the importance of Line Of Sight for UHF reception Attenuation Levels Through Roofs Approximating 2-Ray Model by using Binomial series by Matthew Bazajian Radio frequency propagation IEEE 802.11
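As a rough numerical illustration of the radio-horizon relationships sketched above, the following Python snippet uses the standard approximation d ≈ √(2kRh), which works out to about 3.57√h km in vacuum and about 4.12√h km with k = 4/3 (h in metres). The constant and function names are the editor's illustration, not part of the article:

import math

EARTH_RADIUS_KM = 6371.0

def radio_horizon_km(height_m, k=1.0):
    # d ~= sqrt(2 * k * R * h); R in km, h converted from metres to km
    return math.sqrt(2 * k * EARTH_RADIUS_KM * height_m / 1000.0)

# Station at 1500 m altitude, receivers at sea level:
print(round(radio_horizon_km(1500), 1))           # ~138 km (vacuum)
print(round(radio_horizon_km(1500, k=4/3), 1))    # ~160 km (normal refraction, k = 4/3)

With k = 4/3 the horizon distance grows by a factor of √(4/3) ≈ 1.15, matching the roughly 15% increase in service range mentioned above.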
Line-of-sight propagation
[ "Physics" ]
1,813
[ "Physical phenomena", "Spectrum (physical sciences)", "Radio frequency propagation", "Electromagnetic spectrum", "Waves" ]
60,476
https://en.wikipedia.org/wiki/Augmented%20Backus%E2%80%93Naur%20form
In computer science, augmented Backus–Naur form (ABNF) is a metalanguage based on Backus–Naur form (BNF) but consisting of its own syntax and derivation rules. The motive principle for ABNF is to describe a formal system of a language to be used as a bidirectional communications protocol. It is defined by Internet Standard 68 ("STD 68", type case sic), which is RFC 5234, and it often serves as the definition language for IETF communication protocols. RFC 5234 supersedes RFC 4234. RFC 7405 updates it, adding a syntax for specifying case-sensitive string literals. Overview An ABNF specification is a set of derivation rules, written as

rule = definition ; comment CR LF

where rule is a case-insensitive nonterminal, the definition consists of sequences of symbols that define the rule, the comment serves for documentation, and the rule ends with a carriage return and line feed. Rule names are case-insensitive: <rulename>, <Rulename>, <RULENAME>, and <rUlENamE> all refer to the same rule. Rule names consist of a letter followed by letters, numbers, and hyphens. Angle brackets (<, >) are not required around rule names (as they are in BNF). However, they may be used to delimit a rule name when used in prose to discern a rule name. Terminal values Terminals are specified by one or more numeric characters. Numeric characters may be specified as the percent sign %, followed by the base (b = binary, d = decimal, and x = hexadecimal), followed by the value, or concatenation of values (indicated by .). For example, a carriage return is specified by %d13 in decimal or %x0D in hexadecimal. A carriage return followed by a line feed may be specified with concatenation as %d13.10. Literal text is specified through the use of a string enclosed in quotation marks ("). These strings are case-insensitive, and the character set used is (US-)ASCII. Therefore, the string "abc" will match “abc”, “Abc”, “aBc”, “abC”, “ABc”, “AbC”, “aBC”, and “ABC”. RFC 7405 added a syntax for case-sensitive strings: %s"aBc" will only match "aBc". Prior to that, a case-sensitive string could only be specified by listing the individual characters: to match “aBc”, the definition would be %d97.66.99. A string can also be explicitly specified as case-insensitive with a %i prefix. Operators White space White space is used to separate elements of a definition; for space to be recognized as a delimiter, it must be explicitly included. The explicit reference for a single whitespace character is WSP, and LWSP (linear white space) is for zero or more whitespace characters with newlines permitted. The LWSP definition in RFC 5234 is controversial because at least one whitespace character is needed to form a delimiter between two fields. Definitions are left-aligned. When multiple lines are required (for readability), continuation lines are indented by whitespace. Comment ; comment A semicolon (;) starts a comment that continues to the end of the line. Concatenation Rule1 Rule2 A rule may be defined by listing a sequence of rule names. To match the string “aba”, the following rules could be used: Alternative Rule1 / Rule2 A rule may be defined by a list of alternative rules separated by a solidus (/). To accept the rule fu or the rule bar, the following rule could be constructed: Incremental alternatives Rule1 =/ Rule2 Additional alternatives may be added to a rule through the use of =/ between the rule name and the definition. The rule is therefore equivalent to Value range %c##-## A range of numeric values may be specified through the use of a hyphen (-). 
The rule is equivalent to Sequence group (Rule1 Rule2) Elements may be placed in parentheses to group rules in a definition. To match "a b d" or "a c d", the following rule could be constructed: To match “a b” or “c d”, the following rules could be constructed: Variable repetition n*nRule To indicate repetition of an element, the form <a>*<b>element is used. The optional <a> gives the minimal number of elements to be included (with the default of 0). The optional <b> gives the maximal number of elements to be included (with the default of infinity). Use *element for zero or more elements, *1element for zero or one element, 1*element for one or more elements, and 2*3element for two or three elements, cf. regular expressions e*, e?, e+ and e{2,3}. Specific repetition nRule To indicate an explicit number of elements, the form <a>element is used and is equivalent to <a>*<a>element. Use 2DIGIT to get two numeric digits, and 3DIGIT to get three numeric digits. (DIGIT is defined below under "Core rules". Also see zip-code in the example below.) Optional sequence [Rule] To indicate an optional element, the following constructions are equivalent: Operator precedence The following operators have the given precedence from tightest binding to loosest binding: Strings, names formation Comment Value range Repetition Grouping, optional Concatenation Alternative Use of the alternative operator with concatenation may be confusing, and it is recommended that grouping be used to make explicit concatenation groups. Core rules The core rules are defined in the ABNF standard. Note that in the core rules diagram the CHAR2 charset is inlined in char-val and CHAR3 is inlined in prose-val in the RFC spec. They are named here for clarity in the main syntax diagram. Example The (U.S.) postal address example given in the augmented Backus–Naur form (ABNF) page may be specified as follows: postal-address = name-part street zip-part name-part = *(personal-part SP) last-name [SP suffix] CRLF name-part =/ personal-part CRLF personal-part = first-name / (initial ".") first-name = *ALPHA initial = ALPHA last-name = *ALPHA suffix = ("Jr." / "Sr." / 1*("I" / "V" / "X")) street = [apt SP] house-num SP street-name CRLF apt = 1*4DIGIT house-num = 1*8(DIGIT / ALPHA) street-name = 1*VCHAR zip-part = town-name "," SP state 1*2SP zip-code CRLF town-name = 1*(ALPHA / SP) state = 2ALPHA zip-code = 5DIGIT ["-" 4DIGIT] Pitfalls RFC 5234 adds a warning in conjunction to the definition of LWSP as follows: References Formal languages Metalanguages
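As an informal check of the repetition and case-sensitivity rules described above, the following Python snippet translates a few constructs into the equivalent regular expressions noted in the text (the e*, e?, e+, e{2,3} correspondence). The snippet is an editor's illustration, not part of any RFC:

import re

# ABNF 2*3DIGIT  <->  regex [0-9]{2,3}
assert re.fullmatch(r"[0-9]{2,3}", "42")
assert re.fullmatch(r"[0-9]{2,3}", "123")
assert not re.fullmatch(r"[0-9]{2,3}", "7")

# ABNF zip-code = 5DIGIT ["-" 4DIGIT]  <->  regex [0-9]{5}(-[0-9]{4})?
zip_re = re.compile(r"[0-9]{5}(-[0-9]{4})?")
assert zip_re.fullmatch("12345") and zip_re.fullmatch("12345-6789")

# Quoted ABNF strings such as "abc" are case-insensitive...
assert re.fullmatch("abc", "aBc", re.IGNORECASE)
# ...whereas the RFC 7405 form %s"aBc" matches only the exact spelling.
assert re.fullmatch("aBc", "aBc") and not re.fullmatch("aBc", "ABC")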
Augmented Backus–Naur form
[ "Mathematics" ]
1,521
[ "Formal languages", "Mathematical logic" ]
60,478
https://en.wikipedia.org/wiki/Abort%20%28computing%29
In a computer or data transmission system, to abort means to terminate, usually in a controlled manner, a processing activity because it is impossible or undesirable for the activity to proceed or in conjunction with an error. Such an action may be accompanied by diagnostic information on the aborted process. In addition to being a verb, abort also has two noun senses. In the most general case, the event of aborting can be referred to as an abort. Sometimes the event of aborting can be given a special name, as in the case of an abort involving a Unix kernel where it is known as a kernel panic. Specifically in the context of data transmission, an abort is a function invoked by a sending station to cause the recipient to discard or ignore all bit sequences transmitted by the sender since the preceding flag sequence. In the C programming language, abort() is a standard library function that terminates the current application and returns an error code to the host environment. Types of aborts User-Initiated Aborts: Users can often abort tasks using keyboard shortcuts (like Ctrl + C in terminal applications) or commands to terminate processes. This is especially useful for stopping unresponsive programs or those taking longer than expected to execute. Programmatic Aborts: Developers can implement abort logic in their code. For instance, when a program encounters an error or invalid input, it may call functions like abort() in C or C++ to terminate execution. This approach helps prevent further errors or potential data corruption. System-Level Aborts: Operating systems might automatically abort processes under certain conditions, such as resource exhaustion or unresponsiveness. For example, a watchdog timer can terminate a process that remains idle beyond a specified time limit. Database Transactions: In database management, aborting (often termed ‘rolling back’) a transaction is crucial for maintaining data integrity. If a transaction cannot be completed successfully, aborting it returns the database to its previous state, which ensures that incomplete transactions don’t leave the data inconsistent. Aborts are typically logged, especially in critical systems, to facilitate troubleshooting and improve future runs. See also Abort, Retry, Fail? Abnormal end Crash Hang Reset Reboot References Computing terminology
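As a minimal sketch of user-initiated versus programmatic aborts, the following Python example is illustrative only; the function names and exit codes are the editor's choices (Python's os.abort() mirrors the C abort() mentioned above, while sys.exit() performs a controlled termination):

import sys

def parse_positive(text):
    value = int(text)
    if value <= 0:
        # Programmatic abort: stop in a controlled manner with a nonzero status.
        sys.exit("aborted: expected a positive integer, got %r" % text)
    return value

def main():
    try:
        total = 0
        for line in sys.stdin:              # potentially long-running work
            total += parse_positive(line.strip())
        print(total)
    except KeyboardInterrupt:
        # User-initiated abort (Ctrl+C): terminate with a diagnostic status.
        sys.exit(130)                       # 128 + SIGINT is a common convention

if __name__ == "__main__":
    main()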
Abort (computing)
[ "Technology" ]
474
[ "Computing stubs", "Computing terminology" ]
60,486
https://en.wikipedia.org/wiki/Automatic%20baud%20rate%20detection
Automatic baud rate detection (ABR, autobaud) refers to the process by which a receiving device (such as a modem) determines the speed, code level, start bit, and stop bits of incoming data by examining the first character, usually a preselected sign-on character (syncword) on a UART connection. ABR allows the receiving device to accept data from a variety of transmitting devices operating at different speeds without needing to establish data rates in advance. Process During the autobaud process, the baud rate of the received character stream is determined by examining the received pattern and its timing, and the length of a start bit. These types of baud rate detection mechanisms are supported by many hardware chips, including processors such as the STM32, MPC8280, MPC8360, and so on. When start bit length is used to determine the baud rate, the character value must be odd (its least significant bit set to 1), since a UART sends the LSB first; this particular bit-order scheme is referred to as little-endian. Often the symbols 'a' or 'A' (0x61 or 0x41) are used. For example, the MPC8270 SCC tries to detect the length of the UART start bit for autobaud. Many protocols begin each frame with a preamble of alternating 1 and 0 bits that can be used for automatic baud rate detection. For example, the TI PGA460 uses a 'U' (0x55) sync byte for automatic baud rate detection as well as frame synchronization, and so does the header of LIN (Local Interconnect Network). For example, the UART-based FlexWire protocol begins each frame with a 'U' (0x55) sync byte. FlexWire receivers use the sync byte to precisely set their UART bit-clock frequency without a high-precision oscillator. For example, the Ethernet preamble contains 56 bits of alternating 1 and 0 bits for synchronizing bit clocks. Support Most modems currently on the market support autobaud. Before receiving any input data, most modems use a default baud rate of 9600 for output. For example, the following modems have been verified for autobaud and a default output baud rate of 9600:
USRobotics USR5686G 56K Serial Controller Fax modem
Hayes V92 External modem
Microcom DeskPorte 28.8P
The baud rate of these modems is adjusted automatically by the autobaud process after input data is received. See also Autonegotiation Telecommunications References "17.2 Autobaud Operation on a UART in MPC8280 PowerQUICC™ II Family Reference Manual" http://www.nxp.com/files/netcomm/doc/ref_manual/MPC8280RM.pdf "Automatic Baud Rate Detection on the MSP430" https://web.archive.org/web/20161026080239/http://www.ti.com/lit/an/slaa215/slaa215.pdf "How to implement “auto baud rate detection” feature on Cortex-M3" https://stackoverflow.com/q/38979647 "mpc8270 SCC2 UART issue" https://community.nxp.com/message/906833 Data transmission Units of measurement Telecommunications techniques
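To make the 0x55-preamble technique described above concrete, here is a rough Python sketch. The edge timestamps, the list of candidate rates, and the median heuristic are the editor's assumptions, not how any particular chip implements autobaud:

STANDARD_RATES = (1200, 2400, 4800, 9600, 19200, 38400, 57600, 115200)

def estimate_baud(edge_times, rates=STANDARD_RATES):
    # A 'U' (0x55) frame toggles the line at every bit boundary, so each
    # interval between successive edges is roughly one bit period.
    intervals = sorted(b - a for a, b in zip(edge_times, edge_times[1:]))
    bit_period = intervals[len(intervals) // 2]   # median, robust to jitter
    raw_rate = 1.0 / bit_period
    return min(rates, key=lambda r: abs(r - raw_rate))

# Hypothetical edges about 104 microseconds apart -> nearest standard rate is 9600.
edges = [i * 104e-6 for i in range(10)]
print(estimate_baud(edges))   # 9600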
Automatic baud rate detection
[ "Mathematics" ]
733
[ "Quantity", "Units of measurement" ]
60,487
https://en.wikipedia.org/wiki/Abscissa%20and%20ordinate
In mathematics, the abscissa (plural abscissae or abscissas) and the ordinate are respectively the first and second coordinate of a point in a Cartesian coordinate system: the abscissa is the x-axis (horizontal) coordinate, and the ordinate is the y-axis (vertical) coordinate. Together they form an ordered pair which defines the location of a point in two-dimensional rectangular space. More technically, the abscissa of a point is the signed measure of its projection on the primary axis. Its absolute value is the distance between the projection and the origin of the axis, and its sign is given by the location of the projection relative to the origin (before: negative; after: positive). Similarly, the ordinate of a point is the signed measure of its projection on the secondary axis. In three dimensions, the third direction is sometimes referred to as the applicate. Etymology Though the word "abscissa" has been used at least since De Practica Geometrie (1220) by Fibonacci (Leonardo of Pisa), its use in its modern sense may be due to Venetian mathematician Stefano degli Angeli in his work Miscellaneum Hyperbolicum, et Parabolicum (1659). Historically, the term was used in the more general sense of a 'distance'. In his 1892 work ("Lectures on history of mathematics"), volume 2, German historian of mathematics Moritz Cantor writes: At the same time it was presumably by [Stefano degli Angeli] that a word was introduced into the mathematical vocabulary for which especially in analytic geometry the future proved to have much in store. […] We know of no earlier use of the word abscissa in Latin original texts. Maybe the word appears in translations of the Apollonian conics, where [in] Book I, Chapter 20 there is mention of ἀποτεμνομέναις, for which there would hardly be a more appropriate Latin word than . The use of the word ordinate is related to the Latin phrase linea ordinata applicata 'line applied parallel'. In parametric equations In a somewhat obsolete variant usage, the abscissa of a point may also refer to any number that describes the point's location along some path, e.g. the parameter of a parametric equation. Used in this way, the abscissa can be thought of as a coordinate-geometry analog to the independent variable in a mathematical model or experiment (with any ordinates filling a role analogous to dependent variables). See also Function (mathematics) Relation (mathematics) Line chart References Elementary mathematics Coordinate systems Dimension
Abscissa and ordinate
[ "Physics", "Mathematics" ]
606
[ "Geometric measurement", "Physical quantities", "Theory of relativity", "Elementary mathematics", "Coordinate systems", "Dimension" ]
60,491
https://en.wikipedia.org/wiki/Abstraction%20%28computer%20science%29
In software engineering and computer science, abstraction is the process of generalizing concrete details, such as attributes, away from the study of objects and systems to focus attention on details of greater importance. Abstraction is a fundamental concept in computer science and software engineering, especially within the object-oriented programming paradigm. Examples of this include: the usage of abstract data types to separate usage from working representations of data within programs; the concept of functions or subroutines which represent a specific way of implementing control flow; the process of reorganizing common behavior from groups of non-abstract classes into abstract classes using inheritance and sub-classes, as seen in object-oriented programming languages. Rationale Computing mostly operates independently of the concrete world. The hardware implements a model of computation that is interchangeable with others. The software is structured in architectures to enable humans to create the enormous systems by concentrating on a few issues at a time. These architectures are made of specific choices of abstractions. Greenspun's tenth rule is an aphorism on how such an architecture is both inevitable and complex. Language abstraction is a central form of abstraction in computing: new artificial languages are developed to express specific aspects of a system. Modeling languages help in planning. Computer languages can be processed with a computer. An example of this abstraction process is the generational development of programming language from the first-generation programming language (machine language) to the second-generation programming language (assembly language) and the third-generation programming language (high-level programming language). Each stage can be used as a stepping stone for the next stage. The language abstraction continues for example in scripting languages and domain-specific languages. Within a programming language, some features let the programmer create new abstractions. These include subroutines, modules, polymorphism, and software components. Some other abstractions such as software design patterns and architectural styles remain invisible to a translator and operate only in the design of a system. Some abstractions try to limit the range of concepts a programmer needs to be aware of, by completely hiding the abstractions they are built on. The software engineer and writer Joel Spolsky has criticized these efforts by claiming that all abstractions are leaky – that they can never completely hide the details below; however, this does not negate the usefulness of abstraction. Some abstractions are designed to inter-operate with other abstractions – for example, a programming language may contain a foreign function interface for making calls to the lower-level language. Abstraction features Programming languages Different programming languages provide different types of abstraction, depending on the intended applications for the language. For example: In object-oriented programming languages such as C++, Object Pascal, or Java, the concept of abstraction has become a declarative statement – using the syntax function(parameters) = 0; (in C++) or the reserved words (keywords) abstract and interface (in Java). After such a declaration, it is the responsibility of the programmer to implement a class to instantiate the object of the declaration. 
Functional programming languages commonly exhibit abstractions related to functions, such as lambda abstractions (making a term into a function of some variable) and higher-order functions (parameters are functions). Modern members of the Lisp programming language family such as Clojure, Scheme and Common Lisp support macro systems to allow syntactic abstraction. Other programming languages such as Scala also have macros, or very similar metaprogramming features (for example, Haskell has Template Haskell, OCaml has MetaOCaml). These can allow programs to omit boilerplate code, abstract away tedious function call sequences, implement new control flow structures, and implement domain-specific languages (DSLs), which allow domain-specific concepts to be expressed in concise and elegant ways. All of these, when used correctly, improve both the programmer's efficiency and the clarity of source code by making the intended purpose more explicit. A consequence of syntactic abstraction is also that any Lisp dialect, and almost any programming language, can in principle, be implemented in any modern Lisp with significantly reduced (but still non-trivial in most cases) effort when compared to "more traditional" programming languages such as Python, C or Java. Specification methods Analysts have developed various methods to formally specify software systems. Some known methods include: Abstract-model based method (VDM, Z); Algebraic techniques (Larch, CLEAR, OBJ, ACT ONE, CASL); Process-based techniques (LOTOS, SDL, Estelle); Trace-based techniques (SPECIAL, TAM); Knowledge-based techniques (Refine, Gist). Specification languages Specification languages generally rely on abstractions of one kind or another, since specifications are typically defined earlier in a project, (and at a more abstract level) than an eventual implementation. The Unified Modeling Language (UML) specification language, for example, allows the definition of abstract classes, which in a waterfall project, remain abstract during the architecture and specification phase of the project. Control abstraction Programming languages offer control abstraction as one of the main purposes of their use. Computer machines understand operations at the very low level such as moving some bits from one location of the memory to another location and producing the sum of two sequences of bits. Programming languages allow this to be done in the higher level. For example, consider this statement written in a Pascal-like fashion: a := (1 + 2) * 5 To a human, this seems a fairly simple and obvious calculation ("one plus two is three, times five is fifteen"). However, the low-level steps necessary to carry out this evaluation, and return the value "15", and then assign that value to the variable "a", are actually quite subtle and complex. The values need to be converted to binary representation (often a much more complicated task than one would think) and the calculations decomposed (by the compiler or interpreter) into assembly instructions (again, which are much less intuitive to the programmer: operations such as shifting a binary register left, or adding the binary complement of the contents of one register to another, are simply not how humans think about the abstract arithmetical operations of addition or multiplication). 
Finally, assigning the resulting value of "15" to the variable labeled "a", so that "a" can be used later, involves additional 'behind-the-scenes' steps of looking up a variable's label and the resultant location in physical or virtual memory, storing the binary representation of "15" to that memory location, etc. Without control abstraction, a programmer would need to specify all the register/binary-level steps each time they simply wanted to add or multiply a couple of numbers and assign the result to a variable. Such duplication of effort has two serious negative consequences:
it forces the programmer to constantly repeat fairly common tasks every time a similar operation is needed
it forces the programmer to program for the particular hardware and instruction set
Structured programming Structured programming involves the splitting of complex program tasks into smaller pieces with clear flow-control and interfaces between components, with a reduction of complexity and of the potential for side-effects. In a simple program, this may aim to ensure that loops have single or obvious exit points and (where possible) to have single exit points from functions and procedures. In a larger system, it may involve breaking down complex tasks into many different modules. Consider a system which handles payroll on ships and at shore offices:
The uppermost level may feature a menu of typical end-user operations.
Within that could be standalone executables or libraries for tasks such as signing on and off employees or printing checks.
Within each of those standalone components there could be many different source files, each containing the program code to handle a part of the problem, with only selected interfaces available to other parts of the program. A sign-on program could have source files for each data entry screen and the database interface (which may itself be a standalone third-party library or a statically linked set of library routines).
Either the database or the payroll application also has to initiate the process of exchanging data between ship and shore, and that data transfer task will often contain many other components.
These layers produce the effect of isolating the implementation details of one component and its assorted internal methods from the others. Object-oriented programming embraces and extends this concept. Data abstraction Data abstraction enforces a clear separation between the abstract properties of a data type and the concrete details of its implementation. The abstract properties are those that are visible to client code that makes use of the data type—the interface to the data type—while the concrete implementation is kept entirely private, and indeed can change, for example to incorporate efficiency improvements over time. The idea is that such changes are not supposed to have any impact on client code, since they involve no difference in the abstract behaviour. For example, one could define an abstract data type called lookup table which uniquely associates keys with values, and in which values may be retrieved by specifying their corresponding keys. Such a lookup table may be implemented in various ways: as a hash table, a binary search tree, or even a simple linear list of (key:value) pairs. As far as client code is concerned, the abstract properties of the type are the same in each case. Of course, this all relies on getting the details of the interface right in the first place, since any changes there can have major impacts on client code. 
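To illustrate the lookup-table example just given, here is a brief Python sketch (the class and method names are the editor's invention, not from the article): two interchangeable representations sit behind one abstract interface, and the client code cannot tell which one it is using.

from abc import ABC, abstractmethod

class LookupTable(ABC):
    """Abstract properties only: associate keys with values, retrieve by key."""
    @abstractmethod
    def put(self, key, value): ...
    @abstractmethod
    def get(self, key): ...

class HashLookupTable(LookupTable):
    def __init__(self):
        self._data = {}                      # hash-table representation
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data[key]

class ListLookupTable(LookupTable):
    def __init__(self):
        self._pairs = []                     # simple linear list of (key, value) pairs
    def put(self, key, value):
        self._pairs = [(k, v) for k, v in self._pairs if k != key]
        self._pairs.append((key, value))
    def get(self, key):
        for k, v in self._pairs:
            if k == key:
                return v
        raise KeyError(key)

def client(table):
    # Client code relies only on the abstract behaviour, not the representation.
    table.put("en", "hello")
    table.put("fr", "bonjour")
    return table.get("fr")

assert client(HashLookupTable()) == client(ListLookupTable()) == "bonjour"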
As one way to look at this: the interface forms a contract on agreed behaviour between the data type and client code; anything not spelled out in the contract is subject to change without notice. Manual data abstraction While much of data abstraction occurs through computer science and automation, there are times when this process is done manually and without programming intervention. One way this can be understood is through data abstraction within the process of conducting a systematic review of the literature. In this methodology, data is abstracted by one or several abstractors when conducting a meta-analysis, with errors reduced through dual data abstraction followed by independent checking, known as adjudication. Abstraction in object oriented programming In object-oriented programming theory, abstraction involves the facility to define objects that represent abstract "actors" that can perform work, report on and change their state, and "communicate" with other objects in the system. The term encapsulation refers to the hiding of state details, but extending the concept of data type from earlier programming languages to associate behavior most strongly with the data, and standardizing the way that different data types interact, is the beginning of abstraction. When abstraction proceeds into the operations defined, enabling objects of different types to be substituted, it is called polymorphism. When it proceeds in the opposite direction, inside the types or classes, structuring them to simplify a complex set of relationships, it is called delegation or inheritance. Various object-oriented programming languages offer similar facilities for abstraction, all to support a general strategy of polymorphism in object-oriented programming, which includes the substitution of one data type for another in the same or similar role. Although not as generally supported, a configuration or image or package may predetermine a great many of these bindings at compile time, link time, or load time. This would leave only a minimum of such bindings to change at run-time. Common Lisp Object System or Self, for example, feature less of a class-instance distinction and more use of delegation for polymorphism. Individual objects and functions are abstracted more flexibly to better fit with a shared functional heritage from Lisp. C++ exemplifies another extreme: it relies heavily on templates and overloading and other static bindings at compile-time, which in turn has certain flexibility problems. Although these examples offer alternate strategies for achieving the same abstraction, they do not fundamentally alter the need to support abstract nouns in code – all programming relies on an ability to abstract verbs as functions, nouns as data structures, and either as processes. Consider for example a sample Java fragment to represent some common farm "animals" to a level of abstraction suitable to model simple aspects of their hunger and feeding. 
It defines an Animal class to represent both the state of the animal and its functions:

public class Animal extends LivingThing {
    private Location loc;
    private double energyReserves;

    public boolean isHungry() {
        return energyReserves < 2.5;
    }
    public void eat(Food food) {
        // Consume food
        energyReserves += food.getCalories();
    }
    public void moveTo(Location location) {
        // Move to new location
        this.loc = location;
    }
}

With the above definition, one could create objects of type Animal and call their methods like this:

Animal thePig = new Animal();
Animal theCow = new Animal();
if (thePig.isHungry()) {
    thePig.eat(tableScraps);
}
if (theCow.isHungry()) {
    theCow.eat(grass);
}
theCow.moveTo(theBarn);

In the above example, the class Animal is an abstraction used in place of an actual animal, and LivingThing is a further abstraction (in this case a generalisation) of Animal. If one requires a more differentiated hierarchy of animals – to differentiate, say, those who provide milk from those who provide nothing except meat at the end of their lives – that is an intermediary level of abstraction, probably DairyAnimal (cows, goats) who would eat foods suitable to giving good milk, and MeatAnimal (pigs, steers) who would eat foods to give the best meat-quality. Such an abstraction could remove the need for the application coder to specify the type of food, so they could concentrate instead on the feeding schedule. The two classes could be related using inheritance or stand alone, and the programmer could define varying degrees of polymorphism between the two types. These facilities tend to vary drastically between languages, but in general each can achieve anything that is possible with any of the others. A great many operation overloads, data type by data type, can have the same effect at compile-time as any degree of inheritance or other means to achieve polymorphism. The class notation is simply a coder's convenience. Object-oriented design Decisions regarding what to abstract and what to keep under the control of the coder become the major concern of object-oriented design and domain analysis—actually determining the relevant relationships in the real world is the concern of object-oriented analysis or legacy analysis. In general, to determine appropriate abstraction, one must make many small decisions about scope (domain analysis), determine what other systems one must cooperate with (legacy analysis), then perform a detailed object-oriented analysis which is expressed within project time and budget constraints as an object-oriented design. In our simple example, the domain is the barnyard, the live pigs and cows and their eating habits are the legacy constraints, the detailed analysis is that coders must have the flexibility to feed the animals what is available and thus there is no reason to code the type of food into the class itself, and the design is a single simple Animal class of which pigs and cows are instances with the same functions. A decision to differentiate DairyAnimal would change the detailed analysis but the domain and legacy analysis would be unchanged—thus it is entirely under the control of the programmer, and it is called an abstraction in object-oriented programming as distinct from abstraction in domain or legacy analysis. Considerations When discussing formal semantics of programming languages, formal methods or abstract interpretation, abstraction refers to the act of considering a less detailed, but safe, definition of the observed program behaviors.
For instance, one may observe only the final result of program executions instead of considering all the intermediate steps of executions. Abstraction is defined to a concrete (more precise) model of execution. Abstraction may be exact or faithful with respect to a property if one can answer a question about the property equally well on the concrete or abstract model. For instance, if one wishes to know what the result of the evaluation of a mathematical expression involving only integers +, -, ×, is worth modulo n, then one needs only perform all operations modulo n (a familiar form of this abstraction is casting out nines). Abstractions, however, though not necessarily exact, should be sound. That is, it should be possible to get sound answers from them—even though the abstraction may simply yield a result of undecidability. For instance, students in a class may be abstracted by their minimal and maximal ages; if one asks whether a certain person belongs to that class, one may simply compare that person's age with the minimal and maximal ages; if his age lies outside the range, one may safely answer that the person does not belong to the class; if it does not, one may only answer "I don't know". The level of abstraction included in a programming language can influence its overall usability. The Cognitive dimensions framework includes the concept of abstraction gradient in a formalism. This framework allows the designer of a programming language to study the trade-offs between abstraction and other characteristics of the design, and how changes in abstraction influence the language usability. Abstractions can prove useful when dealing with computer programs, because non-trivial properties of computer programs are essentially undecidable (see Rice's theorem). As a consequence, automatic methods for deriving information on the behavior of computer programs either have to drop termination (on some occasions, they may fail, crash or never yield out a result), soundness (they may provide false information), or precision (they may answer "I don't know" to some questions). Abstraction is the core concept of abstract interpretation. Model checking generally takes place on abstract versions of the studied systems. Levels of abstraction Computer science commonly presents levels (or, less commonly, layers) of abstraction, wherein each level represents a different model of the same information and processes, but with varying amounts of detail. Each level uses a system of expression involving a unique set of objects and compositions that apply only to a particular domain. Each relatively abstract, "higher" level builds on a relatively concrete, "lower" level, which tends to provide an increasingly "granular" representation. For example, gates build on electronic circuits, binary on gates, machine language on binary, programming language on machine language, applications and operating systems on programming languages. Each level is embodied, but not determined, by the level beneath it, making it a language of description that is somewhat self-contained. Database systems Since many users of database systems lack in-depth familiarity with computer data-structures, database developers often hide complexity through the following levels: Physical level – The lowest level of abstraction describes how a system actually stores data. The physical level describes complex low-level data structures in detail. 
Logical level – The next higher level of abstraction describes what data the database stores, and what relationships exist among those data. The logical level thus describes an entire database in terms of a small number of relatively simple structures. Although implementation of the simple structures at the logical level may involve complex physical level structures, the user of the logical level does not need to be aware of this complexity. This is referred to as physical data independence. Database administrators, who must decide what information to keep in a database, use the logical level of abstraction. View level – The highest level of abstraction describes only part of the entire database. Even though the logical level uses simpler structures, complexity remains because of the variety of information stored in a large database. Many users of a database system do not need all this information; instead, they need to access only a part of the database. The view level of abstraction exists to simplify their interaction with the system. The system may provide many views for the same database. Layered architecture The ability to provide a design of different levels of abstraction can simplify the design considerably, enable different role players to work effectively at various levels of abstraction, and support the portability of software artifacts (ideally model-based). Systems design and business process design can both use this. Some design processes specifically generate designs that contain various levels of abstraction. Layered architecture partitions the concerns of the application into stacked groups (layers). It is a technique used in designing computer software, hardware, and communications in which system or network components are isolated in layers so that changes can be made in one layer without affecting the others. See also Abstraction principle (computer programming) Abstraction inversion for an anti-pattern of one danger in abstraction Abstract data type for an abstract description of a set of data Algorithm for an abstract description of a computational procedure Bracket abstraction for making a term into a function of a variable Data modeling for structuring data independent of the processes that use it Encapsulation for abstractions that hide implementation details Greenspun's Tenth Rule for an aphorism about an (the?) optimum point in the space of abstractions Higher-order function for abstraction where functions produce or consume other functions Lambda abstraction for making a term into a function of some variable List of abstractions (computer science) Refinement for the opposite of abstraction in computing Integer (computer science) Heuristic (computer science) References Further reading Abstraction/information hiding – CS211 course, Cornell University. External links SimArch example of layered architecture for distributed simulation systems. Data management Abstraction Software engineering Object-oriented programming Articles with example Java code Articles with example Pascal code
Abstraction (computer science)
[ "Technology", "Engineering" ]
4,408
[ "Systems engineering", "Computer engineering", "Software engineering", "Data management", "Information technology", "Data" ]
60,546
https://en.wikipedia.org/wiki/Unique%20factorization%20domain
In mathematics, a unique factorization domain (UFD) (also sometimes called a factorial ring following the terminology of Bourbaki) is a ring in which a statement analogous to the fundamental theorem of arithmetic holds. Specifically, a UFD is an integral domain (a nontrivial commutative ring in which the product of any two non-zero elements is non-zero) in which every non-zero non-unit element can be written as a product of irreducible elements, uniquely up to order and units. Important examples of UFDs are the integers and polynomial rings in one or more variables with coefficients coming from the integers or from a field. Unique factorization domains appear in the following chain of class inclusions: rngs ⊃ rings ⊃ commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ GCD domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ Euclidean domains ⊃ fields ⊃ algebraically closed fields. Definition Formally, a unique factorization domain is defined to be an integral domain R in which every non-zero element x of R which is not a unit can be written as a finite product of irreducible elements pi of R: x = p1 p2 ⋅⋅⋅ pn with n ≥ 1, and this representation is unique in the following sense: If q1, ..., qm are irreducible elements of R such that x = q1 q2 ⋅⋅⋅ qm with m ≥ 1, then m = n, and there exists a bijective map φ : {1, ..., n} → {1, ..., n} such that pi is associated to qφ(i) for i = 1, ..., n. Examples Most rings familiar from elementary mathematics are UFDs: All principal ideal domains, hence all Euclidean domains, are UFDs. In particular, the integers (also see Fundamental theorem of arithmetic), the Gaussian integers and the Eisenstein integers are UFDs. If R is a UFD, then so is R[X], the ring of polynomials with coefficients in R. Unless R is a field, R[X] is not a principal ideal domain. By induction, a polynomial ring in any number of variables over any UFD (and in particular over a field or over the integers) is a UFD. The formal power series ring K[[X1, ..., Xn]] over a field K (or more generally over a regular UFD such as a PID) is a UFD. On the other hand, the formal power series ring over a UFD need not be a UFD, even if the UFD is local. For example, if R is the localization of at the prime ideal then R is a local ring that is a UFD, but the formal power series ring R[[X]] over R is not a UFD. The Auslander–Buchsbaum theorem states that every regular local ring is a UFD. The ring Z[e^(2πi/n)] is a UFD for all integers 1 ≤ n ≤ 22, but not for n = 23. Mori showed that if the completion of a Zariski ring, such as a Noetherian local ring, is a UFD, then the ring is a UFD. The converse of this is not true: there are Noetherian local rings that are UFDs but whose completions are not. The question of when this happens is rather subtle: for example, for the localization of at the prime ideal , both the local ring and its completion are UFDs, but in the apparently similar example of the localization of at the prime ideal the local ring is a UFD but its completion is not. Let k be a field of any characteristic other than 2. Klein and Nagata showed that the ring k[X1, ..., Xn]/Q is a UFD whenever Q is a nonsingular quadratic form in the Xs and n is at least 5. When n = 4, the ring need not be a UFD. For example, k[X, Y, Z, W]/(XY − ZW) is not a UFD, because the element XY equals the element ZW so that XY and ZW are two different factorizations of the same element into irreducibles. The ring is a UFD, but the ring is not. On the other hand, the ring is not a UFD, but the ring is. Similarly the coordinate ring of the 2-dimensional real sphere is a UFD, but the coordinate ring of the complex sphere is not. Suppose that the variables Xi are given weights wi, and F(X1, ..., Xn) is a homogeneous polynomial of weight w.
Then if c is coprime to w and R is a UFD and either every finitely generated projective module over R is free or c is 1 mod w, the ring is a UFD. Non-examples The quadratic integer ring Z[√−5] of all complex numbers of the form a + b√−5, where a and b are integers, is not a UFD because 6 factors as both 2 × 3 and as (1 + √−5)(1 − √−5). These truly are different factorizations, because the only units in this ring are 1 and −1; thus, none of 2, 3, 1 + √−5, and 1 − √−5 are associate. It is not hard to show that all four factors are irreducible as well, though this may not be obvious. See also Algebraic integer. For a square-free positive integer d, the ring of integers of Q(√−d) will fail to be a UFD unless d is a Heegner number. The ring of formal power series over the complex numbers is a UFD, but the subring of those that converge everywhere, in other words the ring of entire functions in a single complex variable, is not a UFD, since there exist entire functions with an infinity of zeros, and thus an infinity of irreducible factors, while a UFD factorization must be finite, e.g. sin πz = πz (1 − z²/1²)(1 − z²/2²)(1 − z²/3²) ⋅⋅⋅. Properties Some concepts defined for integers can be generalized to UFDs: In UFDs, every irreducible element is prime. (In any integral domain, every prime element is irreducible, but the converse does not always hold. For instance, in Z[√−5] the element 2 is irreducible but not prime, since it divides the product (1 + √−5)(1 − √−5) = 6 without dividing either factor.) Note that this has a partial converse: a domain satisfying the ACCP is a UFD if and only if every irreducible element is prime. Any two elements of a UFD have a greatest common divisor and a least common multiple. Here, a greatest common divisor of a and b is an element d that divides both a and b, and such that every other common divisor of a and b divides d. All greatest common divisors of a and b are associated. Any UFD is integrally closed. In other words, if R is a UFD with quotient field K, and if an element k in K is a root of a monic polynomial with coefficients in R, then k is an element of R. Let S be a multiplicatively closed subset of a UFD A. Then the localization S−1A is a UFD. A partial converse to this also holds; see below. Equivalent conditions for a ring to be a UFD A Noetherian integral domain is a UFD if and only if every height 1 prime ideal is principal (a proof is given at the end). Also, a Dedekind domain is a UFD if and only if its ideal class group is trivial. In this case, it is in fact a principal ideal domain. In general, for an integral domain A, the following conditions are equivalent: (1) A is a UFD. (2) Every nonzero prime ideal of A contains a prime element. (3) A satisfies the ascending chain condition on principal ideals (ACCP), and the localization S−1A is a UFD, where S is a multiplicatively closed subset of A generated by prime elements. (4) (Nagata criterion) A satisfies ACCP and every irreducible is prime. (5) A is atomic and every irreducible is prime. (6) A is a GCD domain satisfying ACCP. (7) A is a Schreier domain, and atomic. (8) A is a pre-Schreier domain and atomic. (9) A has a divisor theory in which every divisor is principal. (10) A is a Krull domain in which every divisorial ideal is principal (in fact, this is the definition of UFD in Bourbaki). (11) A is a Krull domain and every prime ideal of height 1 is principal. In practice, (2) and (3) are the most useful conditions to check. For example, it follows immediately from (2) that a PID is a UFD, since every prime ideal is generated by a prime element in a PID. For another example, consider a Noetherian integral domain in which every height one prime ideal is principal.
Since every prime ideal has finite height, it contains a height one prime ideal (induction on height) that is principal. By (2), the ring is a UFD. See also Parafactorial local ring Noncommutative unique factorization domain Citations References Ring theory Algebraic number theory factorization
Unique factorization domain
[ "Mathematics" ]
1,818
[ "Ring theory", "Fields of abstract algebra", "Arithmetic", "Algebraic number theory", "Factorization", "Number theory" ]
60,548
https://en.wikipedia.org/wiki/Planetarium
A planetarium (: planetariums or planetaria) is a theatre built primarily for presenting educational and entertaining shows about astronomy and the night sky, or for training in celestial navigation. A dominant feature of most planetariums is the large dome-shaped projection screen onto which scenes of stars, planets, and other celestial objects can be made to appear and move realistically to simulate their motion. The projection can be created in various ways, such as a star ball, slide projector, video, fulldome projector systems, and lasers. Typical systems can be set to simulate the sky at any point in time, past or present, and often to depict the night sky as it would appear from any point of latitude on Earth. Planetaria range in size from the 37 meter dome in St. Petersburg, Russia (called "Planetarium No 1") to three-meter inflatable portable domes where attendees sit on the floor. The largest planetarium in the Western Hemisphere is the Jennifer Chalsty Planetarium at Liberty Science Center in New Jersey, its dome measuring 27 meters in diameter. The Birla Planetarium in Kolkata, India is the largest by seating capacity, having 630 seats. In North America, the Hayden Planetarium at the American Museum of Natural History in New York City has the greatest number of seats, at 423. The term planetarium is sometimes used generically to describe other devices which illustrate the Solar System, such as a computer simulation or an orrery. Planetarium software refers to a software application that renders a three-dimensional image of the sky onto a two-dimensional computer screen, or in a virtual reality headset for a 3D representation. The term planetarian is used to describe a member of the professional staff of a planetarium. History Early The ancient Greek polymath Archimedes is attributed with creating a primitive planetarium device that could predict the movements of the Sun and the Moon and the planets. The discovery of the Antikythera mechanism proved that such devices already existed during antiquity, though likely after Archimedes' lifetime. Campanus of Novara described a planetary equatorium in his Theorica Planetarum, and included instructions on how to build one. The Globe of Gottorf built around 1650 had constellations painted on the inside. These devices would today usually be referred to as orreries (named for the Earl of Orrery). In fact, many planetariums today have projection orreries, which project onto the dome the Solar System (including the Sun and planets up to Saturn) in their regular orbital paths. In 1229, following the conclusion of the Fifth Crusade, Holy Roman Emperor Frederick II of Hohenstaufen brought back a tent with scattered holes representing stars or planets. The device was operated internally with a spinnable table that rotated the tent. The small size of typical 18th century orreries limited their impact, and towards the end of that century a number of educators attempted to create a larger sized version. The efforts of Adam Walker (1730–1821) and his sons are noteworthy in their attempts to fuse theatrical illusions with education. Walker's Eidouranion was the heart of his public lectures or theatrical presentations. Walker's son describes this "Elaborate Machine" as "twenty feet high, and twenty-seven in diameter: it stands vertically before the spectators, and its globes are so large, that they are distinctly seen in the most distant parts of the Theatre. 
Every Planet and Satellite seems suspended in space, without any support; performing their annual and diurnal revolutions without any apparent cause". Other lecturers promoted their own devices: R E Lloyd advertised his Dioastrodoxon, or Grand Transparent Orrery, and by 1825 William Kitchener was offering his Ouranologia, which was in diameter. These devices most probably sacrificed astronomical accuracy for crowd-pleasing spectacle and sensational and awe-provoking imagery. The oldest still-working planetarium can be found in the Frisian city of Franeker. It was built by Eise Eisinga (1744–1828) in the living room of his house. It took Eisinga seven years to build his planetarium, which was completed in 1781. 20th century In 1905 Oskar von Miller (1855–1934) of the Deutsches Museum in Munich commissioned updated versions of a geared orrery and planetarium from M Sendtner, and later worked with Franz Meyer, chief engineer at the Carl Zeiss optical works in Jena, on the largest mechanical planetarium ever constructed, capable of displaying both heliocentric and geocentric motion. This was displayed at the Deutsches Museum in 1924, construction work having been interrupted by the war. The planets travelled along overhead rails, powered by electric motors: the orbit of Saturn was 11.25 m in diameter. 180 stars were projected onto the wall by electric bulbs. While this was being constructed, von Miller was also working at the Zeiss factory with German astronomer Max Wolf, director of the Landessternwarte Heidelberg-Königstuhl observatory of the University of Heidelberg, on a new and novel design, inspired by Wallace W. Atwood's work at the Chicago Academy of Sciences and by the ideas of Walther Bauersfeld and Rudolf Straubel at Zeiss. The result was a planetarium design which would generate all the necessary movements of the stars and planets inside the optical projector, and would be mounted centrally in a room, projecting images onto the white surface of a hemisphere. In August 1923, the first (Model I) Zeiss planetarium projected images of the night sky onto the white plaster lining of a 16 m hemispherical concrete dome, erected on the roof of the Zeiss works. The first official public showing was at the Deutsches Museum in Munich on October 21, 1923. Zeiss Planetarium became popular, and attracted a lot of attention. Next Zeiss planetariums were opened in Rome (1928, in Aula Ottagona, part of the Baths of Diocletian), Chicago (1930), Osaka (1937, in the Osaka City Electricity Science Museum). After World War II When Germany was divided into East and West Germany after the war, the Zeiss firm was also split. Part remained in its traditional headquarters at Jena, in East Germany, and part migrated to West Germany. The designer of the first planetariums for Zeiss, Walther Bauersfeld, also migrated to West Germany with the other members of the Zeiss management team. There he remained on the Zeiss West management team until his death in 1959. The West German firm resumed making large planetariums in 1954, and the East German firm started making small planetariums a few years later. Meanwhile, the lack of planetarium manufacturers had led to several attempts at construction of unique models, such as one built by the California Academy of Sciences in Golden Gate Park, San Francisco, which operated 1952–2003. The Korkosz brothers built a large projector for the Boston Museum of Science, which was unique in being the first (and for a very long time only) planetarium to project the planet Uranus. 
Most planetariums ignore Uranus as being at best marginally visible to the naked eye. A great boost to the popularity of the planetarium worldwide was provided by the Space Race of the 1950s and 60s when fears that the United States might miss out on the opportunities of the new frontier in space stimulated a massive program to install over 1,200 planetariums in U.S. high schools. Armand Spitz recognized that there was a viable market for small inexpensive planetaria. His first model, the Spitz A, was designed to project stars from a dodecahedron, thus reducing machining expenses in creating a globe. Planets were not mechanized, but could be shifted by hand. Several models followed with various upgraded capabilities, until the A3P, which projected well over a thousand stars, had motorized motions for latitude change, daily motion, and annual motion for Sun, Moon (including phases), and planets. This model was installed in hundreds of high schools, colleges, and even small museums from 1964 to the 1980s. Japan entered the planetarium manufacturing business in the 1960s, with Goto and Minolta both successfully marketing a number of different models. Goto was particularly successful when the Japanese Ministry of Education put one of their smallest models, the E-3 or E-5 (the numbers refer to the metric diameter of the dome) in every elementary school in Japan. Phillip Stern, as former lecturer at New York City's Hayden Planetarium, had the idea of creating a small planetarium which could be programmed. His Apollo model was introduced in 1967 with a plastic program board, recorded lecture, and film strip. Unable to pay for this himself, Stern became the head of the planetarium division of Viewlex, a mid-size audio-visual firm on Long Island. About thirty canned programs were created for various grade levels and the public, while operators could create their own or run the planetarium live. Purchasers of the Apollo were given their choice of two canned shows, and could purchase more. A few hundred were sold, but in the late 1970s Viewlex went bankrupt for reasons unrelated to the planetarium business. During the 1970s, the OmniMax movie system (now known as IMAX Dome) was conceived to operate on planetarium screens. More recently, some planetariums have re-branded themselves as dome theaters, with broader offerings including wide-screen or "wraparound" films, fulldome video, and laser shows that combine music with laser-drawn patterns. Learning Technologies Inc. in Massachusetts offered the first easily portable planetarium in 1977. Philip Sadler designed this patented system which projected stars, constellation figures from many mythologies, celestial coordinate systems, and much else, from removable cylinders (Viewlex and others followed with their own portable versions). When Germany reunified in 1989, the two Zeiss firms did likewise, and expanded their offerings to cover many different size domes. Computerized planetaria In 1983, Evans & Sutherland installed the first digital planetarium projector displaying computer graphics (Hansen planetarium, Salt Lake City, Utah)—the Digistar I projector used a vector graphics system to display starfields as well as line art. This gives the operator great flexibility in showing not only the modern night sky as visible from Earth, but as visible from points far distant in space and time. The newest generations of planetarium projectors, beginning with Digistar 3, offer fulldome video technology. This allows for the projection of any image. 
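To illustrate the kind of computation behind "fulldome" digital projection, the following is a minimal sketch of how a renderer might map a sky direction (altitude and azimuth) to pixel coordinates in a circular fisheye frame. The class and method names are invented for illustration, and the simple azimuthal-equidistant mapping shown here is a generic textbook approach, not the algorithm of Digistar or any other particular product; real systems add per-projector calibration, blending and distortion correction.

// Map a sky direction to a point in a square fulldome (fisheye) frame.
public final class DomeMapping {
    // altitudeDeg: 0..90 above the horizon (90 = zenith)
    // azimuthDeg:  0..360 compass bearing (0 = north, 90 = east)
    // frameSize:   width/height of the square output frame in pixels
    public static double[] skyToFisheye(double altitudeDeg, double azimuthDeg, int frameSize) {
        double zenithAngle = Math.toRadians(90.0 - altitudeDeg); // 0 at zenith, pi/2 at horizon
        double azimuth = Math.toRadians(azimuthDeg);
        // Equidistant projection: radius grows linearly with zenith angle.
        double r = (frameSize / 2.0) * (zenithAngle / (Math.PI / 2.0));
        double x = frameSize / 2.0 + r * Math.sin(azimuth);
        double y = frameSize / 2.0 - r * Math.cos(azimuth); // image y axis points down
        return new double[] { x, y };
    }

    public static void main(String[] args) {
        // A star 30 degrees above the eastern horizon in a 4096-pixel frame:
        double[] p = skyToFisheye(30.0, 90.0, 4096);
        System.out.printf("x = %.1f, y = %.1f%n", p[0], p[1]);
    }
}

The zenith maps to the centre of the frame and the horizon to its edge, which is why a single fisheye-lens projector placed near the centre of the dome, as described below, can cover the whole hemisphere from one image.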
Technology Domes Planetarium domes range in size from 3 to 35 m in diameter, accommodating from 1 to 500 people. They can be permanent or portable, depending on the application. Portable inflatable domes can be inflated in minutes. Such domes are often used for touring planetariums visiting, for example, schools and community centres. Temporary structures using glass-reinforced plastic (GRP) segments bolted together and mounted on a frame are possible. As they may take some hours to construct, they are more suitable for applications such as exhibition stands, where a dome will stay up for a period of at least several days. Negative-pressure inflated domes are suitable in some semi-permanent situations. They use a fan to extract air from behind the dome surface, allowing atmospheric pressure to push it into the correct shape. Smaller permanent domes are frequently constructed from glass reinforced plastic. This is inexpensive but, as the projection surface reflects sound as well as light, the acoustics inside this type of dome can detract from its utility. Such a solid dome also presents issues connected with heating and ventilation in a large-audience planetarium, as air cannot pass through it. Older planetarium domes were built using traditional construction materials and surfaced with plaster. This method is relatively expensive and suffers the same acoustic and ventilation issues as GRP. Most modern domes are built from thin aluminium sections with ribs providing a supporting structure behind. The use of aluminium makes it easy to perforate the dome with thousands of tiny holes. This reduces the reflectivity of sound back to the audience (providing better acoustic characteristics), lets a sound system project through the dome from behind (offering sound that seems to come from appropriate directions related to a show), and allows air circulation through the projection surface for climate control. The realism of the viewing experience in a planetarium depends significantly on the dynamic range of the image, i.e., the contrast between dark and light. This can be a challenge in any domed projection environment, because a bright image projected on one side of the dome will tend to reflect light across to the opposite side, "lifting" the black level there and so making the whole image look less realistic. Since traditional planetarium shows consisted mainly of small points of light (i.e., stars) on a black background, this was not a significant issue, but it became an issue as digital projection systems started to fill large portions of the dome with bright objects (e.g., large images of the sun in context). For this reason, modern planetarium domes are often not painted white but rather a mid grey colour, reducing reflection to perhaps 35-50%. This increases the perceived level of contrast. A major challenge in dome construction is to make seams as invisible as possible. Painting a dome after installation is a major task, and if done properly, the seams can be made almost to disappear. Traditionally, planetarium domes were mounted horizontally, matching the natural horizon of the real night sky. However, because that configuration requires highly inclined chairs for comfortable viewing "straight up", increasingly domes are being built tilted from the horizontal by between 5 and 30 degrees to provide greater comfort. Tilted domes tend to create a favoured "sweet spot" for optimum viewing, centrally about a third of the way up the dome from the lowest point. 
Tilted domes generally have seating arranged stadium-style in straight, tiered rows; horizontal domes usually have seats in circular rows, arranged in concentric (facing center) or epicentric (facing front) arrays. Planetaria occasionally include controls such as buttons or joysticks in the arm rests of seats to allow audience feedback that influences the show in real time. Often around the edge of the dome (the "cove") are: Silhouette models of geography or buildings like those in the area round the planetarium building. Lighting to simulate the effect of twilight or urban light pollution. Traditionally, planetariums needed many incandescent lamps around the cove of the dome to help audience entry and exit, to simulate sunrise and sunset, and to provide working light for dome cleaning. More recently, solid-state LED lighting has become available that significantly decreases power consumption and reduces the maintenance requirement as lamps no longer have to be changed on a regular basis. The world's largest mechanical planetarium is located in Monico, Wisconsin. The Kovac Planetarium. It is 22 feet in diameter and weighs two tons. The globe is made of wood and is driven with a variable speed motor controller. This is the largest mechanical planetarium in the world, larger than the Atwood Globe in Chicago (15 feet in diameter) and one third the size of the Hayden. Some new planetariums now feature a glass floor, which allows spectators to stand near the center of a sphere surrounded by projected images in all directions, giving the impression of floating in outer space. For example, a small planetarium at AHHAA in Tartu, Estonia features such an installation, with special projectors for images below the feet of the audience, as well as above their heads. Traditional electromechanical/optical projectors Traditional planetarium projection apparatus use a hollow ball with a light inside, and a pinhole for each star, hence the name "star ball". With some of the brightest stars (e.g. Sirius, Canopus, Vega), the hole must be so big to let enough light through that there must be a small lens in the hole to focus the light to a sharp point on the dome. In later and modern planetarium star balls, the individual bright stars often have individual projectors, shaped like small hand-held torches, with focusing lenses for individual bright stars. Contact breakers prevent the projectors from projecting below the "horizon". The star ball is usually mounted so it can rotate as a whole to simulate the Earth's daily rotation, and to change the simulated latitude on Earth. There is also usually a means of rotating to produce the effect of precession of the equinoxes. Often, one such ball is attached at its south ecliptic pole. In that case, the view cannot go so far south that any of the resulting blank area at the south is projected on the dome. Some star projectors have two balls at opposite ends of the projector like a dumbbell. In that case all stars can be shown and the view can go to either pole or anywhere between. But care must be taken that the projection fields of the two balls match where they meet or overlap. Smaller planetarium projectors include a set of fixed stars, Sun, Moon, and planets, and various nebulae. Larger projectors also include comets and a far greater selection of stars. Additional projectors can be added to show twilight around the outside of the screen (complete with city or country scenes) as well as the Milky Way. 
Others add coordinate lines and constellations, photographic slides, laser displays, and other images. Each planet is projected by a sharply focused spotlight that makes a spot of light on the dome. Planet projectors must have gearing to move their positioning and thereby simulate the planets' movements. These can be of these types:- Copernican. The axis represents the Sun. The rotating piece that represents each planet carries a light that must be arranged and guided to swivel so it always faces towards the rotating piece that represents the Earth. This presents mechanical problems including: The planet lights must be powered by wires, which have to bend about as the planets rotate, and repeatedly bending copper wire tends to cause wire breakage through metal fatigue. When a planet is at opposition to the Earth, its light is liable to be blocked by the mechanism's central axle. (If the planet mechanism is set 180° rotated from reality, the lights are carried by the Earth and shine towards each planet, and the blocking risk happens at conjunction with Earth.) Ptolemaic. Here the central axis represents the Earth. Each planet light is on a mount which rotates only about the central axis, and is aimed by a guide which is steered by a deferent and an epicycle (or whatever the planetarium maker calls them). Here Ptolemy's number values must be revised to remove the daily rotation, which in a planetarium is catered for otherwise. (In one planetarium, this needed Ptolemaic-type orbital constants for Uranus, which was unknown to Ptolemy.) Computer-controlled. Here all the planet lights are on mounts which rotate only about the central axis, and are aimed by a computer. Despite offering a good viewer experience, traditional star ball projectors suffer several inherent limitations. From a practical point of view, the low light levels require several minutes for the audience to "dark adapt" its eyesight. "Star ball" projection is limited in education terms by its inability to move beyond an Earth-bound view of the night sky. Finally, in most traditional projectors the various overlaid projection systems are incapable of proper occultation. This means that a planet image projected on top of a star field (for example) will still show the stars shining through the planet image, degrading the quality of the viewing experience. For related reasons, some planetariums show stars below the horizon projecting on the walls below the dome or on the floor, or (with a bright star or a planet) shining in the eyes of someone in the audience. However, the new breed of Optical-Mechanical projectors using fiber-optic technology to display the stars show a much more realistic view of the sky. Digital projectors An increasing number of planetariums are using digital technology to replace the entire system of interlinked projectors traditionally employed around a star ball to address some of their limitations. Digital planetarium manufacturers claim reduced maintenance costs and increased reliability from such systems compared with traditional "star balls" on the grounds that they employ few moving parts and do not generally require synchronisation of movement across the dome between several separate systems. Some planetariums mix both traditional opto-mechanical projection and digital technologies on the same dome. In a fully digital planetarium, the dome image is generated by a computer and then projected onto the dome using a variety of technologies including cathode-ray tube, LCD, DLP, or laser projectors. 
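As a rough illustration of the "aimed by a computer" case described above, the sketch below computes the direction in which a planet light would need to point. The class and method names are invented, and the circular, coplanar orbits are a deliberate simplification; an actual projector drive would use proper planetary ephemerides together with the geometry of the particular dome and mount.

// Simplified geocentric aiming angle for a computer-controlled planet projector.
public final class PlanetAiming {
    // Heliocentric position angle (radians) for an idealised circular orbit.
    static double heliocentricLongitude(double periodYears, double years) {
        return 2.0 * Math.PI * (years / periodYears);
    }

    // Apparent (geocentric) ecliptic longitude of a planet, in degrees, at time t in years.
    static double geocentricLongitudeDeg(double radiusAu, double periodYears, double years) {
        double lp = heliocentricLongitude(periodYears, years);
        double le = heliocentricLongitude(1.0, years);       // Earth: 1 AU, 1 year
        double dx = radiusAu * Math.cos(lp) - Math.cos(le);  // planet position minus Earth position
        double dy = radiusAu * Math.sin(lp) - Math.sin(le);
        return Math.toDegrees(Math.atan2(dy, dx));
    }

    public static void main(String[] args) {
        // Where should the "Mars" light point 0.3 years after the reference epoch?
        // Approximate values: Mars orbital radius 1.524 AU, period 1.881 years.
        double longitude = geocentricLongitudeDeg(1.524, 1.881, 0.3);
        System.out.printf("Aim Mars projector at ecliptic longitude %.1f degrees%n", longitude);
    }
}

The same geocentric direction, recomputed continuously, is what the motorised mount tracks; the older Copernican and Ptolemaic gear trains described above solve the equivalent problem mechanically.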
Sometimes a single projector mounted near the centre of the dome is employed with a fisheye lens to spread the light over the whole dome surface, while in other configurations several projectors around the horizon of the dome are arranged to blend together seamlessly. Digital projection systems all work by creating the image of the night sky as a large array of pixels. Generally speaking, the more pixels a system can display, the better the viewing experience. While the first generation of digital projectors were unable to generate enough pixels to match the image quality of the best traditional "star ball" projectors, high-end systems now offer a resolution that approaches the limit of human visual acuity. LCD projectors have fundamental limits on their ability to project true black as well as light, which has tended to limit their use in planetaria. LCOS and modified LCOS projectors have improved on LCD contrast ratios while also eliminating the "screen door" effect of small gaps between LCD pixels. "Dark chip" DLP projectors improve on the standard DLP design and can offer relatively inexpensive solution with bright images, but the black level requires physical baffling of the projectors. As the technology matures and reduces in price, laser projection looks promising for dome projection as it offers bright images, large dynamic range and a very wide color space. Show content Worldwide, most planetariums provide shows to the general public. Traditionally, shows for these audiences with themes such as "What's in the sky tonight?", or shows which pick up on topical issues such as a religious festival (often the Christmas star) linked to the night sky, have been popular. Live format is preferred by many venues as a live speaker or presenter can answer questions raised by the audience. Since the early 1990s, fully featured 3-D digital planetariums have added an extra degree of freedom to a presenter giving a show because they allow simulation of the view from any point in space, not only the Earth-bound view which we are most familiar with. This new virtual reality capability to travel through the universe provides important educational benefits because it vividly conveys that space has depth, helping audiences to leave behind the ancient misconception that the stars are stuck on the inside of a giant celestial sphere and instead to understand the true layout of the Solar System and beyond. For example, a planetarium can now 'fly' the audience towards one of the familiar constellations such as Orion, revealing that the stars which appear to make up a co-ordinated shape from an Earth-bound viewpoint are at vastly different distances from Earth and so not connected, except in human imagination and mythology. For especially visual or spatially aware people, this experience can be more educationally beneficial than other demonstrations. See also References External links IPS (International Planetarium Society) WPD (Worldwide Planetariums Database) Observation Glass engineering and science Theatres
Planetarium
[ "Materials_science", "Astronomy", "Engineering" ]
4,838
[ "Glass engineering and science", "Astronomy education", "Materials science", "Astronomy organizations", "Planetaria" ]
60,549
https://en.wikipedia.org/wiki/Orrery
An orrery is a mechanical model of the Solar System that illustrates or predicts the relative positions and motions of the planets and moons, usually according to the heliocentric model. It may also represent the relative sizes of these bodies; however, since accurate scaling is often not practical due to the actual large ratio differences, it may use a scaled-down approximation. The Greeks had working planetaria, but the first modern example was produced by John Rowley. He named it "orrery" for his patron Charles Boyle, 4th Earl of Orrery (in County Cork, Ireland). The plaque on it reads "Orrery invented by Graham 1700 improved by Rowley and presented by him to John [sic] Earl of Orrery after whom it was named at the suggestion of Richard Steele." Orreries are typically driven by a clockwork mechanism with a globe representing the Sun at the centre, and with a planet at the end of each of a series of arms. History Ancient The Antikythera mechanism, discovered in 1901 in a wreck off the Greek island of Antikythera in the Mediterranean Sea, exhibited the diurnal motions of the Sun, Moon, and the five planets known to the ancient Greeks. It has been dated between 205 to 87 BC. The mechanism is considered one of the first orreries. It was geocentric and used as a mechanical calculator to calculate astronomical positions. Cicero, the Roman philosopher and politician writing in the first century BC, has references describing planetary mechanical models. According to him, the Greek polymaths Thales and Posidonius both constructed a device modeling celestial motion. Early Modern In 1348, Giovanni Dondi built the first known clock driven mechanism of the system. It displays the ecliptic position of the Moon, Sun, Mercury, Venus, Mars, Jupiter and Saturn according to the complicated geocentric Ptolemaic planetary theories. The clock itself is lost, but Dondi left a complete description of its astronomic gear trains. As late as 1650, P. Schirleus built a geocentric planetarium with the Sun as a planet, and with Mercury and Venus revolving around the Sun as its moons. At the court of William IV, Landgrave of Hesse-Kassel two complicated astronomic clocks were built in 1561 and 1563–1568. These use four sides to show the ecliptical positions of the Sun, Mercury, Venus, Mars, Jupiter, Saturn, the Moon, Sun and Dragon (Nodes of the Moon) according to Ptolemy, a calendar, the sunrise and sunset, and an automated celestial sphere with an animated Sun symbol which, for the first time on a celestial globe, shows the real position of the Sun, including the equation of time. The clocks are now on display in Kassel at the Astronomisch-Physikalisches Kabinett and in Dresden at the Mathematisch-Physikalischer Salon. In De revolutionibus orbium coelestium, published in Nuremberg in 1543, Nicolaus Copernicus challenged the Western teaching of a geocentric universe in which the Sun revolved daily around the Earth. He observed that some Greek philosophers such as Aristarchus of Samos had proposed a heliocentric universe. This simplified the apparent epicyclic motions of the planets, making it feasible to represent the planets' paths as simple circles. This could be modeled by the use of gears. Tycho Brahe's improved instruments made precise observations of the skies (1576–1601), and from these Johannes Kepler (1621) deduced that planets orbited the Sun in ellipses. In 1687 Isaac Newton explained the cause of elliptic motion in his theory of gravitation. 
Modern There is an orrery built by clock makers George Graham and Thomas Tompion dated in the History of Science Museum, Oxford. Graham gave the first model, or its design, to the celebrated instrument maker John Rowley of London to make a copy for Prince Eugene of Savoy. Rowley was commissioned to make another copy for his patron Charles Boyle, 4th Earl of Orrery, from which the device took its name in English. This model was presented to Charles' son John, later the 5th Earl of Cork and 5th Earl of Orrery. Independently, Christiaan Huygens published in 1703 details of a heliocentric planetary machine which he had built while living in Paris between 1665 and 1681. He calculated the gear trains needed to represent a year of 365.242 days, and used that to produce the cycles of the principal planets. Joseph Wright's painting A Philosopher giving a Lecture on the Orrery, which hangs in the Derby Museum and Art Gallery, depicts a group listening to a lecture by a natural philosopher. The Sun in a brass orrery provides the only light in the room. The orrery depicted in the painting has rings, which give it an appearance similar to that of an armillary sphere. The demonstration was thereby able to depict eclipses. To put this in chronological context, in 1762 John Harrison's marine chronometer first enabled accurate measurement of longitude. In 1766, astronomer Johann Daniel Titius first demonstrated that the mean distance of each planet from the Sun could be represented by the progression (4 + n)/10, where n takes the values 0, 3, 6, 12, 24, ...: that is, 0.4, 0.7, 1.0, 1.6, 2.8, ... The numbers refer to astronomical units, the mean distance between Sun and Earth, which is 1.496 × 10⁸ km (93 × 10⁶ miles); a brief worked comparison with observed distances is given at the end of this section. The Derby Orrery does not show mean distance, but demonstrates the relative planetary movements. The Eisinga Planetarium was built from 1774 to 1781 by Eise Eisinga in his home in Franeker, in the Netherlands. It displays the planets across the width of a room's ceiling, and has been in operation almost continually since it was created. This orrery is a planetarium in both senses of the word: a complex machine showing planetary orbits, and a theatre for depicting the planets' movement. Eisinga's house was bought by the Dutch royal family, who gave him a pension. In 1764, Benjamin Martin devised a new type of planetary model, in which the planets were carried on brass arms leading from a series of concentric or coaxial tubes. With this construction it was difficult to make the planets revolve, and to get the moons to turn around the planets. Martin suggested that the conventional orrery should consist of three parts: the planetarium where the planets revolved around the Sun, the tellurion (also tellurian or tellurium) which showed the inclined axis of the Earth and how it revolved around the Sun, and the lunarium which showed the eccentric rotations of the Moon around the Earth. In one orrery, these three motions could be mounted on a common table, separately using the central spindle as a prime mover.
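As a quick worked comparison for the progression just described, the rule can be written

a_n = \frac{4 + n}{10}\ \text{AU}, \qquad n = 0,\ 3,\ 6,\ 12,\ 24,\ 48,\ 96,

giving 0.4, 0.7, 1.0, 1.6, 2.8, 5.2 and 10.0 AU. The comparison values that follow are approximate modern mean distances added here for illustration and are not part of the original account: roughly 0.39 (Mercury), 0.72 (Venus), 1.00 (Earth), 1.52 (Mars), 2.77 (Ceres), 5.20 (Jupiter) and 9.6 (Saturn) AU.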
An orrery is used to demonstrate the motion of the planets, while a mechanical device used to predict eclipses and transits is called an astrarium. An orrery should properly include the Sun, the Earth and the Moon (plus optionally other planets). A model that only includes the Earth, the Moon, and the Sun is called a tellurion or tellurium, and one which only includes the Earth and the Moon is a lunarium. A jovilabe is a model of Jupiter and its moons. A planetarium will show the orbital period of each planet and the rotation rate, as shown in the table above. A tellurion will show the Earth with the Moon revolving around the Sun. It will use the angle of inclination of the equator from the table above to show how it rotates around its own axis. It will show the Earth's Moon, rotating around the Earth. A lunarium is designed to show the complex motions of the Moon as it revolves around the Earth. Orreries are usually not built to scale. Human orreries, where humans move about as the planets, have also been constructed, but most are temporary. There is a permanent human orrery at Armagh Observatory in Northern Ireland, which has the six ancient planets, Ceres, and comets Halley and Encke. Uranus and beyond are also shown, but in a fairly limited way. Another is at Sky's the Limit Observatory and Nature Center in Twentynine Palms, California; it is a true to scale (20 billion to one), true to position (accurate to within four days) human orrery. The first four planets are relatively close to one another, but the next four require a certain amount of hiking in order to visit them. A census of all permanent human orreries has been initiated by the French group F-HOU with a new effort to study their impact for education in schools. A map of known human orreries is available. A normal mechanical clock could be used to produce an extremely simple orrery to demonstrate the principle, with the Sun in the centre, Earth on the minute hand and Jupiter on the hour hand; Earth would make 12 revolutions around the Sun for every 1 revolution of Jupiter. As Jupiter's actual year is 11.86 Earth years long, the model would lose accuracy rapidly. Projection Many planetariums have a projection orrery, which projects onto the dome of the planetarium a Sun with either dots or small images of the planets. These usually are limited to the planets from Mercury to Saturn, although some include Uranus. The light sources for the planets are projected onto mirrors which are geared to a motor which drives the images on the dome. Typically the Earth will circle the Sun in one minute, while the other planets will complete an orbit in time periods proportional to their actual motion. Thus Venus, which takes 224.7 days to orbit the Sun, will take 37 seconds to complete an orbit on an orrery, and Jupiter will take 11 minutes, 52 seconds. Some planetariums have taken advantage of this to use orreries to simulate planets and their moons. Thus Mercury orbits the Sun in 0.24 of an Earth year, while Phobos and Deimos orbit Mars in a similar 4:1 time ratio. Planetarium operators wishing to show this have placed a red cap on the Sun (to make it resemble Mars) and turned off all the planets but Mercury and Earth. Similar approximations can be used to show Pluto and its five moons. Notable examples Shoemaker John Fulton of Fenwick, Ayrshire, built three between 1823 and 1833. The last is in Glasgow's Kelvingrove Art Gallery and Museum. 
The Eisinga Planetarium built by a wool carder named Eise Eisinga in his own living room, in the small city of Franeker in Friesland, is in fact an orrery. It was constructed between 1774 and 1781. The base of the model faces down from the ceiling of the room, with most of the mechanical works in the space above the ceiling. It is driven by a pendulum clock, which has 9 weights or ponds. The planets move around the model in real time. An innovative concept is to have people play the role of the moving planets and other Solar System objects. Such a model, called a human orrery, has been laid out at the Armagh Observatory. In popular culture The construction system Meccano is a popular tool for constructing highly accurate orreries. Model 391, the first Meccano Orrery, was described in the June 1918 Meccano Manual. In Dune Messiah, the 1969 sequel to Dune, there is a description of a desktop orrery representing the two moons of the fictional planet Arrakis and its sun. In the backstory of the 1982 film The Dark Crystal, the UrSkek TekTih made a giant automatic orrery, with the help of his fellow UrSkek ShodYod, for Aughra, in the mountaintop observatory where she lives. In the 1999 version of Tarzan, the title character studies an orrery with planets on it. In the 2000 science fiction film Pitch Black, an orrery was used to demonstrate a pending eclipse of the planet. In the 2020 historical novel A Room Made of Leaves by Kate Grenville, a makeshift orrery is made from scraps found in the early colony of New South Wales by its first astronomer, William Dawes. See also Apparent retrograde motion Armillary sphere Astrarium Astrolabe Astronomical clock Celestial globe Clockwork universe Eidouranion Ephemeris Equatorium Eratosthenes John Fulton (instrument maker) List of astronomical instruments Orbit of the Moon Stability of the Solar System Tellurion Torquetum References Further reading External links JPL Solar System Simulator Long Now Foundation Orrery University of Pennsylvania Orrery Historical scientific instruments Astronomical instruments Solar System models Scale modeling 1704 in science Science education materials
Orrery
[ "Physics", "Astronomy" ]
2,693
[ "Scale modeling", "Space art", "Astronomical instruments", "Solar System models", "Solar System" ]
60,556
https://en.wikipedia.org/wiki/William%20Armstrong%2C%201st%20Baron%20Armstrong
William Armstrong, 1st Baron Armstrong, (26 November 1810 – 27 December 1900) was an English engineer and industrialist who founded the Armstrong Whitworth manufacturing concern on Tyneside. He was also an eminent scientist, inventor and philanthropist. In collaboration with the architect Richard Norman Shaw, he built Cragside in Northumberland, the first house in the world to be lit by hydroelectricity. He is regarded as the inventor of modern artillery. Armstrong was knighted in 1859 after giving his gun patents to the government. In 1887, in Queen Victoria's golden jubilee year, he was raised to the peerage as Baron Armstrong of Cragside. Early life Armstrong was born in Newcastle upon Tyne at 9 Pleasant Row, Shieldfield, Although the house in which he was born no longer exists, an inscribed granite tablet marks the site where it stood. At that time the area, next to the Pandon Dene, was rural. His father, also called William, was a corn merchant on the Newcastle quayside, who rose through the ranks of Newcastle society to become mayor of the town in 1850. An elder sister, Anne, born in 1802, was named after his mother, the daughter of Addison Potter. Armstrong was educated at the Royal Grammar School, Newcastle upon Tyne, until he was sixteen, when he was sent to Bishop Auckland Grammar School. While there, he often visited the nearby engineering works of William Ramshaw. During his visits he met his future wife, Ramshaw's daughter Margaret, six years his senior. Armstrong's father was set on his following a career in the law, and so he was articled to Armorer Donkin, a solicitor friend of his father's. He spent five years in London studying law and returned to Newcastle in 1833. In 1835 he became a partner in Donkin's business and the firm became Donkin, Stable and Armstrong. Armstrong married Margaret Ramshaw in 1835, and they built a house in Jesmond Dene, on the eastern edge of Newcastle. Armstrong worked for eleven years as a solicitor, but during his spare time he showed great interest in engineering, developing the "Armstrong Hydroelectric Machine" between 1840 and 1842. In 1837, he laid the foundations for the engineering and environmental consultancy which is today known as Wardell Armstrong. Change of career Armstrong was a very keen angler, and while fishing on the River Dee at Dentdale in the Pennines, he saw a waterwheel in action, supplying power to a marble quarry. It struck Armstrong that much of the available power was being wasted. When he returned to Newcastle, he designed a rotary engine powered by water, and this was built in the High Bridge works of his friend Henry Watson. Little interest was shown in the engine. Armstrong subsequently developed a piston engine instead of a rotary one and decided that it might be suitable for driving a hydraulic crane. In 1846 his work as an amateur scientist was recognized when he was elected a Fellow of the Royal Society. In 1845 a scheme was set in motion to provide piped water from distant reservoirs to the households of Newcastle. Armstrong was involved in this scheme and he proposed to Newcastle Corporation that the excess water pressure in the lower part of town could be used to power a quayside crane specially adapted by himself. He claimed that his hydraulic crane could unload ships faster and more cheaply than conventional cranes. The Corporation agreed to his suggestion, and the experiment proved so successful that three more hydraulic cranes were installed on the Quayside. 
The success of his hydraulic crane led Armstrong to consider setting up a business to manufacture cranes and other hydraulic equipment. He therefore resigned from his legal practice. Donkin, his legal colleague, supported him in his career move, providing financial backing for the new venture. In 1847 the firm of W. G. Armstrong & Company bought land alongside the river at Elswick, near Newcastle, and began to build a factory there. The new company received orders for hydraulic cranes from Edinburgh and Northern Railways and from Liverpool Docks, as well as for hydraulic machinery for dock gates in Grimsby. The company soon began to expand. In 1850 the company produced 45 cranes and two years later, 75. It averaged 100 cranes per year for the rest of the century. In 1850 over 300 men were employed at the works, but by 1863 this had risen to 3,800. The company soon branched out into bridge building, one of the first orders being for the Inverness Bridge, completed in 1855. Hydraulic accumulator Armstrong was responsible for developing the hydraulic accumulator. Where water pressure was not available on site for the use of hydraulic cranes, Armstrong often built high water towers to provide a supply of water at pressure – for instance, the Grimsby Dock Tower. However, when supplying cranes for use at New Holland on the Humber Estuary, he was unable to do this because the foundations consisted of sand. After much careful thought he produced the weighted accumulator, a cast-iron cylinder fitted with a plunger supporting a very heavy weight. The plunger would slowly be raised, drawing in water, until the downward force of the weight was sufficient to force the water below it into pipes at great pressure. The accumulator was a very significant, if unspectacular, invention, which found many applications in the following years. Armaments In 1854, during the Crimean War, Armstrong read about the difficulties the British Army experienced in manoeuvring its heavy field guns. He decided to design a lighter, more mobile field gun, with greater range and accuracy. He built a breech-loading gun with a strong, rifled barrel made from wrought iron wrapped around a steel inner lining, designed to fire a shell rather than a ball. In 1855 he had a five-pounder ready for inspection by a government committee. The gun proved successful in trials, but the committee thought a higher calibre gun was needed, so Armstrong built an 18-pounder on the same design. After trials, this gun was declared to be superior to all its rivals. Armstrong surrendered the patent for the gun to the British government, rather than profit from its design. As a result he was created a Knight Bachelor and in 1859 was presented to Queen Victoria. Armstrong became employed as Engineer of Rifled Ordnance to the War Department. In order to avoid a conflict of interests if his own company were to manufacture armaments, Armstrong created a separate company, called Elswick Ordnance Company, in which he had no financial involvement. The new company agreed to manufacture armaments for the British government and no other. Under his new position, Armstrong worked to bring the old Woolwich Arsenal up to date so that it could build guns designed at Elswick. However, just when it looked as if the new gun was about to become a great success, a great deal of opposition to the gun arose, both inside the army and from rival arms manufacturers, particularly Joseph Whitworth of Manchester. 
Stories were publicised that the new gun was too difficult to use, that it was too expensive, that it was dangerous to use, that it frequently needed repair and so on. All of this smacked of a concerted campaign against Armstrong. Armstrong was able to refute all of these claims in front of various government committees, but he found the constant criticism very wearying and depressing. In 1862 the government decided to stop ordering the new gun and return to muzzle loaders. Also, because of a drop in demand, future orders for guns would be supplied from Woolwich, leaving Elswick without new business. Compensation was eventually agreed with the government for the loss of business to the company, which went on legitimately to sell its products to foreign powers. Speculation that guns were sold to both sides in the American Civil War was unfounded. Warships In 1864 the two companies, W. G. Armstrong & Company and Elswick Ordnance Company merged to form Sir W. G. Armstrong & Company. Armstrong had resigned from his employment with the War Office, so there was no longer a conflict of interest. The company turned its attention to naval guns. In 1867 Armstrong reached an agreement with Charles Mitchell, a shipbuilder in Low Walker, whereby Mitchells would build warships and Elswick would provide the guns. The first ship, in 1868 was HMS Staunch, a gunboat. In 1876, because the 18th-century bridge at Newcastle restricted access by ships to the Elswick works, Armstrong's company paid for a new Swing Bridge to be built, so that warships could have their guns fitted at Elswick. In 1882 Armstrong's company merged with Mitchell's to form Sir William Armstrong, Mitchell and Co. Ltd. and in 1884 a shipyard opened at Elswick to specialise in warship production. The first vessels produced were the torpedo cruisers Panther and Leopard for the Austro-Hungarian Navy. The first battleship produced at Elswick was HMS Victoria, launched in 1887. The ship was originally to be named Renown, but the name was changed in honour of the Queen's Golden Jubilee. Armstrong drove the first and last rivets. The ship was ill-fated, as she was involved in a collision with HMS Camperdown just six years later in 1893 and sank with the loss of 358 men, including Vice-Admiral Sir George Tryon. An important customer of the Elswick yard was Japan, which took several cruisers, some of which defeated the Russian fleet at the Battle of Tsushima in 1905. It was claimed that every Japanese gun used in the battle had been provided by Elswick. Elswick was the only factory in the world that could build a battleship and arm it completely. The Elswick works continued to prosper, and by 1870 stretched for three-quarters of a mile along the riverside. The population of Elswick, which had been 3,539 in 1851, had increased to 27,800 by 1871. In 1894, Elswick built and installed the steam-driven pumping engines, hydraulic accumulators and hydraulic pumping engines to operate London's Tower Bridge. In 1897 the company merged with the company of Armstrong's old rival, Joseph Whitworth, and became Sir W. G. Armstrong, Whitworth & Co Ltd. Whitworth was by this time dead. Armstrong gathered many excellent engineers at Elswick. Notable among them were Andrew Noble and George Wightwick Rendel, whose design of gun-mountings and hydraulic control of gun-turrets were adopted worldwide. Rendel introduced the cruiser as a naval vessel. There was great rivalry and dislike between Noble and Rendel, which became open after Armstrong's death. 
Cragside From 1863 onwards, although Armstrong remained the head of his company, he became less involved in its day-to-day running. He appointed several very able men to senior positions and they continued his work. When he married, he acquired a house called Jesmond Dean (sic), which is now demolished, and not to be confused with the nearby Jesmond Dene House. Armstrong's house was to the west of Jesmond Dene, Newcastle, and thus not far from his birthplace, and he began to landscape and improve land that he bought within the Dene. In 1860 he paid local architect John Dobson to design a Banqueting Hall overlooking the Dene, which still survives, though it is now roofless. His house close to Newcastle was convenient for his practice as a solicitor and his work as an industrialist, but when he had more spare time he longed for a house in the country. He had often visited Rothbury as a child, when he was afflicted by a severe cough, and he had fond memories of the area. In 1863 he bought some land in a steep-sided, narrow valley where the Debdon Burn flows towards the River Coquet near Rothbury. He had the land cleared and supervised the building of a house perched on a ledge of rock, overlooking the burn. He also supervised a programme of planting trees and mosses so as to cover the rocky hillside with vegetation. His new house was called Cragside, and over the years Armstrong added to the Cragside estate. Eventually the estate had seven million trees planted, together with five artificial lakes and an extensive network of carriage drives. The lakes were used to generate hydro-electricity, and the house was the first in the world to be lit by hydro-electricity, using incandescent lamps provided by the inventor Joseph Swan. As Armstrong spent less and less time at the Elswick works, he spent more and more time at Cragside, and it became his main home. In 1869 he commissioned the celebrated architect Richard Norman Shaw to enlarge and improve the house, and this was done over a period of 15 years. In 1883 Armstrong gave Jesmond Dene, together with its banqueting hall, to the city of Newcastle. He retained his house next to the Dene. Armstrong entertained several eminent guests at Cragside, including the Shah of Persia, the King of Siam, the prime minister of China and the Prince and Princess of Wales. Later life In 1873 he served as High Sheriff of Northumberland. He was President of the North of England Institute of Mining and Mechanical Engineers from 1872 to 1875. He was elected as the president of the Institution of Civil Engineers in December 1881 and served in that capacity for the next year. He was conferred with Honorary Membership of the Institution of Engineers and Shipbuilders in Scotland in 1884. In 1886, he was persuaded to stand as a Liberal Unionist candidate for Newcastle, but was unsuccessful, coming third in the election. That same year he was presented with the Freedom of the City of Newcastle. In 1887 he was raised to the peerage as Baron Armstrong, of Cragside in the County of Northumberland. His last great project, begun in 1894, was the purchase and restoration of the huge Bamburgh Castle on the Northumberland coast, which remains in the hands of the Armstrong family. His wife, Margaret, died in September 1893, at their house in Jesmond. Armstrong died at Cragside on 27 December 1900, aged ninety. He was buried in Rothbury churchyard, alongside his wife. The couple had no children, and Armstrong's heir was his great-nephew William Watson-Armstrong. 
He was succeeded as chairman of the company by his one-time protégé, Andrew Noble. Such was Armstrong's fame as a gun-maker that he is thought to be a possible model for George Bernard Shaw's arms magnate in Major Barbara. The title character in Iain Pears' historical-mystery novel Stone's Fall also has similarities to Armstrong. His attitude to armaments There is no evidence that Armstrong agonised over his decision to go into armament production. He once said: "If I thought that war would be fomented, or the interests of humanity suffer, by what I have done, I would greatly regret it. I have no such apprehension." He also said: "It is our province, as engineers to make the forces of matter obedient to the will of man; those who use the means we supply must be responsible for their legitimate application." Views on renewable energy Armstrong advocated the use of renewable energy. Stating that coal "was used wastefully and extravagantly in all its applications", he predicted in 1863 that Britain would cease to produce coal within two centuries. As well as advocating the use of hydroelectricity, he also supported solar power, stating that the amount of solar energy received by a given area in the tropics would "exert the amazing power of 4,000 horses acting for nearly nine hours every day". The benefactor Armstrong donated the long wooded gorge of Jesmond Dene to the people of the city of Newcastle upon Tyne in 1883, as well as Armstrong Bridge and Armstrong Park nearby. He was involved in the foundation in 1871 of the College of Physical Science – a forerunner of the University of Newcastle, renamed Armstrong College in 1906. He was President of the Literary and Philosophical Society of Newcastle upon Tyne from 1860 until his death, as well as twice president of the Institution of Mechanical Engineers. Armstrong gave £11,500 towards the building of Newcastle's Hancock Natural History Museum, which was completed in 1882. This sum is equivalent to over £555,000 in 2010. Lord Armstrong's generosity extended beyond his death. In 1901 his heir, William Watson-Armstrong, gave £100,000 for the building of the new Royal Victoria Infirmary in Newcastle upon Tyne. Its original 1753 building at Forth Banks near the River Tyne was inadequate and impossible to expand. In 1903 the barony of Armstrong was revived in favour of William Watson-Armstrong. Honours In 1846 he was made a Fellow of the Royal Society (FRS). In 1850 he received the Telford Medal from the Institution of Civil Engineers. In 1859 William Armstrong was knighted as a Knight Bachelor (Kt). He was made a Companion of the Order of the Bath in the Civil Division. In 1874 he was elected an International Member of the American Philosophical Society. In 1878 he received the Albert Medal from the Royal Society of Arts. In 1886 he was awarded the Freedom of the City of Newcastle. In 1887 he was raised to the peerage as a hereditary peer, allowing him to sit in the House of Lords. He took the title Baron Armstrong, of Cragside in the County of Northumberland. In 1891 he received the Bessemer Gold Medal from the Iron and Steel Institute. Honorary Degrees Arms Publications References Further reading Heald, Henrietta (2012), William Armstrong: Magician of the North. Alnwick, Northumberland: McNidder & Grace. Smith, Ken (2005), Emperor of Industry: Lord Armstrong of Cragside. Newcastle: Tyne Bridge Publishing, 48 pp. Bastable, Marshall J. 
(2004), Arms and the State, Sir William Armstrong and the Remaking of British Naval Power. UK: Ashgate, 300 pp. . External links William Armstrong website 1810 births 1900 deaths People from Newcastle upon Tyne People from Rothbury People educated at the Royal Grammar School, Newcastle upon Tyne Barons Armstrong Companions of the Order of the Bath English industrialists 19th-century English engineers English inventors Fellows of the Royal Society History of Northumberland Hydraulic engineers Knights Bachelor People associated with Newcastle University Presidents of the Institution of Civil Engineers Weapon designers High sheriffs of Northumberland Mayors of Newcastle upon Tyne Liberal Unionist Party parliamentary candidates Bessemer Gold Medal 19th-century English lawyers Peers of the United Kingdom created by Queen Victoria 19th-century English businesspeople Members of the American Philosophical Society
William Armstrong, 1st Baron Armstrong
[ "Chemistry" ]
3,753
[ "Bessemer Gold Medal", "Chemical engineering awards" ]
60,600
https://en.wikipedia.org/wiki/Barcode
A barcode or bar code is a method of representing data in a visual, machine-readable form. Initially, barcodes represented data by varying the widths, spacings and sizes of parallel lines. These barcodes, now commonly referred to as linear or one-dimensional (1D), can be scanned by special optical scanners, called barcode readers, of which there are several types. Later, two-dimensional (2D) variants were developed, using rectangles, dots, hexagons and other patterns, called 2D barcodes or matrix codes, although they do not use bars as such. Both can be read using purpose-built 2D optical scanners, which exist in a few different forms. Matrix codes can also be read by a digital camera connected to a microcomputer running software that takes a photographic image of the barcode and analyzes the image to deconstruct and decode the code. A mobile device with a built-in camera, such as a smartphone, can function as the latter type of barcode reader using specialized application software and is suitable for both 1D and 2D codes. The barcode was invented by Norman Joseph Woodland and Bernard Silver and patented in the US in 1952. The invention was based on Morse code that was extended to thin and thick bars. However, it took over twenty years before this invention became commercially successful. The UK magazine Modern Railways (December 1962, pages 387–389) records how British Railways had already perfected a barcode-reading system capable of correctly reading rolling stock travelling at speed with no mistakes. An early use of one type of barcode in an industrial context was sponsored by the Association of American Railroads in the late 1960s. Developed by General Telephone and Electronics (GTE) and called KarTrak ACI (Automatic Car Identification), this scheme involved placing colored stripes in various combinations on steel plates which were affixed to the sides of railroad rolling stock. Two plates were used per car, one on each side, with the arrangement of the colored stripes encoding information such as ownership, type of equipment, and identification number. The plates were read by a trackside scanner located, for instance, at the entrance to a classification yard, while the car was moving past. The project was abandoned after about ten years because the system proved unreliable after long-term use. Barcodes became commercially successful when they were used to automate supermarket checkout systems, a task for which they have become almost universal. The Uniform Grocery Product Code Council had chosen, in 1973, the barcode design developed by George Laurer. Laurer's barcode, with vertical bars, printed better than the circular barcode developed by Woodland and Silver. Their use has spread to many other tasks that are generically referred to as automatic identification and data capture (AIDC). The first successful system using barcodes was in the UK supermarket group Sainsbury's in 1972 using shelf-mounted barcodes which were developed by Plessey. In June 1974, Marsh supermarket in Troy, Ohio used a scanner made by Photographic Sciences Corporation to scan the Universal Product Code (UPC) barcode on a pack of Wrigley's chewing gum. QR codes, a specific type of 2D barcode, rose in popularity in the second decade of the 2000s due to the growth in smartphone ownership. 
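To make the linear-symbology idea concrete, the following is a minimal illustrative sketch in Python of the standard UPC-A check-digit calculation; it is an editorial example rather than anything taken from the sources above, and the 11-digit prefix used below is invented. Digits in odd positions are weighted by 3, digits in even positions by 1, and the check digit brings the weighted sum up to a multiple of 10.

    def upca_check_digit(first_eleven):
        # first_eleven: the first 11 digits of a UPC-A number, as a string of characters '0'-'9'
        if len(first_eleven) != 11 or not first_eleven.isdigit():
            raise ValueError("expected 11 numeric digits")
        odd_sum = sum(int(d) for d in first_eleven[0::2])   # digits in positions 1, 3, 5, 7, 9, 11
        even_sum = sum(int(d) for d in first_eleven[1::2])  # digits in positions 2, 4, 6, 8, 10
        return (10 - (3 * odd_sum + even_sum) % 10) % 10

    print(upca_check_digit("03600029145"))  # prints 2 for this invented example prefix

A reader that decodes twelve digits recomputes the same sum and rejects the scan if the final digit does not match, which is how substitution errors are caught at the checkout.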
Other systems have made inroads in the AIDC market, but the simplicity, universality and low cost of barcodes have limited the role of these other systems, particularly before technologies such as radio-frequency identification (RFID) became widely available. History In 1948, Bernard Silver, a graduate student at Drexel Institute of Technology in Philadelphia, Pennsylvania, US overheard the president of the local food chain, Food Fair, asking one of the deans to research a system to automatically read product information during checkout. Silver told his friend Norman Joseph Woodland about the request, and they started working on a variety of systems. Their first working system used ultraviolet ink, but the ink faded too easily and was expensive. Convinced that the system was workable with further development, Woodland left Drexel, moved into his father's apartment in Florida, and continued working on the system. His next inspiration came from Morse code, and he formed his first barcode from sand on the beach. "I just extended the dots and dashes downwards and made narrow lines and wide lines out of them." To read them, he adapted technology from optical soundtracks in movies, using a 500-watt incandescent light bulb shining through the paper onto an RCA935 photomultiplier tube (from a movie projector) on the far side. He later decided that the system would work better if it were printed as a circle instead of a line, allowing it to be scanned in any direction. On 20 October 1949 Woodland and Silver filed a patent application for "Classifying Apparatus and Method", in which they described both the linear and bull's eye printing patterns, as well as the mechanical and electronic systems needed to read the code. The patent was issued on 7 October 1952 as US Patent 2,612,994. In 1951, Woodland moved to IBM and continually tried to interest IBM in developing the system. The company eventually commissioned a report on the idea, which concluded that it was both feasible and interesting, but that processing the resulting information would require equipment that was some time off in the future. IBM offered to buy the patent, but the offer was not accepted. Philco purchased the patent in 1962 and then sold it to RCA sometime later. Collins at Sylvania During his time as an undergraduate, David Jarrett Collins worked at the Pennsylvania Railroad and became aware of the need to automatically identify railroad cars. Immediately after receiving his master's degree from MIT in 1959, he started work at GTE Sylvania and began addressing the problem. He developed a system called KarTrak using blue, white and red reflective stripes attached to the side of the cars, encoding a four-digit company identifier and a six-digit car number. Light reflected off the colored stripes was read by photomultiplier vacuum tubes. The Boston and Maine Railroad tested the KarTrak system on their gravel cars in 1961. The tests continued until 1967, when the Association of American Railroads (AAR) selected it as a standard, automatic car identification, across the entire North American fleet. The installations began on 10 October 1967. However, the economic downturn and rash of bankruptcies in the industry in the early 1970s greatly slowed the rollout, and it was not until 1974 that 95% of the fleet was labeled. To add to its woes, the system was found to be easily fooled by dirt in certain applications, which greatly affected accuracy. 
The AAR abandoned the system in the late 1970s, and it was not until the mid-1980s that they introduced a similar system, this time based on radio tags. The railway project had failed, but a toll bridge in New Jersey requested a similar system so that it could quickly scan for cars that had purchased a monthly pass. Then the US Post Office requested a system to track trucks entering and leaving their facilities. These applications required special retroreflector labels. Finally, Kal Kan asked the Sylvania team for a simpler (and cheaper) version which they could put on cases of pet food for inventory control. Computer Identics Corporation In 1967, with the railway system maturing, Collins went to management looking for funding for a project to develop a black-and-white version of the code for other industries. They declined, saying that the railway project was large enough, and they saw no need to branch out so quickly. Collins then quit Sylvania and formed the Computer Identics Corporation. As its first innovations, Computer Identics moved from using incandescent light bulbs in its systems, replacing them with helium–neon lasers, and incorporated a mirror as well, making it capable of locating a barcode up to a meter (3 feet) in front of the scanner. This made the entire process much simpler and more reliable, and typically enabled these devices to deal with damaged labels, as well, by recognizing and reading the intact portions. Computer Identics Corporation installed one of its first two scanning systems in the spring of 1969 at a General Motors (Buick) factory in Flint, Michigan. The system was used to identify a dozen types of transmissions moving on an overhead conveyor from production to shipping. The other scanning system was installed at General Trading Company's distribution center in Carlstadt, New Jersey to direct shipments to the proper loading bay. Universal Product Code In 1966 the National Association of Food Chains (NAFC) held a meeting on the idea of automated checkout systems. RCA, which had purchased the rights to the original Woodland patent, attended the meeting and initiated an internal project to develop a system based on the bullseye code. The Kroger grocery chain volunteered to test it. In mid-1970 the NAFC established the Ad-Hoc Committee for U.S. Supermarkets on a Uniform Grocery-Product Code to set guidelines for barcode development. In addition, it created a symbol-selection subcommittee to help standardize the approach. In cooperation with the consulting firm McKinsey & Co., they developed a standardized 11-digit code for identifying products. The committee then sent out a contract tender to develop a barcode system to print and read the code. The request went to Singer, National Cash Register (NCR), Litton Industries, RCA, Pitney-Bowes, IBM and many others. A wide variety of barcode approaches was studied, including linear codes, RCA's bullseye concentric circle code, starburst patterns and others. In the spring of 1971 RCA demonstrated their bullseye code at another industry meeting. IBM executives at the meeting noticed the crowds at the RCA booth and immediately developed their own system. IBM marketing specialist Alec Jablonover remembered that the company still employed Woodland, and he established a new facility in Research Triangle Park to lead development. In July 1972 RCA began an 18-month test in a Kroger store in Cincinnati. 
Barcodes were printed on small pieces of adhesive paper, and attached by hand by store employees when they were adding price tags. The code proved to have a serious problem; the printers would sometimes smear ink, rendering the code unreadable in most orientations. However, a linear code, like the one being developed by Woodland at IBM, was printed in the direction of the stripes, so extra ink would simply make the code "taller" while remaining readable. So on 3 April 1973 the IBM UPC was selected as the NAFC standard. IBM had designed five versions of UPC symbology for future industry requirements: UPC A, B, C, D, and E. NCR installed a testbed system at Marsh's Supermarket in Troy, Ohio, near the factory that was producing the equipment. On 26 June 1974, a 10-pack of Wrigley's Juicy Fruit gum was scanned, registering the first commercial use of the UPC. In 1971 an IBM team was assembled for an intensive planning session, threshing out, 12 to 18 hours a day, how the technology would be deployed and operate cohesively across the system, and scheduling a roll-out plan. By 1973, the team were meeting with grocery manufacturers to introduce the symbol that would need to be printed on the packaging or labels of all of their products. There were no cost savings for a grocery to use it, unless at least 70% of the grocery's products had the barcode printed on the product by the manufacturer. IBM projected that 75% would be needed in 1975. Economic studies conducted for the grocery industry committee projected over $40 million in savings to the industry from scanning by the mid-1970s. Those numbers were not achieved in that time-frame and some predicted the demise of barcode scanning. The usefulness of the barcode required the adoption of expensive scanners by a critical mass of retailers while manufacturers simultaneously adopted barcode labels. Neither wanted to move first and results were not promising for the first couple of years, with Business Week proclaiming "The Supermarket Scanner That Failed" in a 1976 article. Sims Supermarkets were the first location in Australia to use barcodes, starting in 1979. Barcode system A barcode system is a network of hardware and software, consisting primarily of mobile computers, printers, handheld scanners, infrastructure, and supporting software. Barcode systems are used to automate data collection where hand recording is neither timely nor cost effective. Despite often being provided by the same company, barcoding systems are not radio-frequency identification (RFID) systems. Many companies use both technologies as part of larger resource management systems. A typical barcode system consists of some infrastructure, either wired or wireless, that connects some number of mobile computers, handheld scanners, and printers to one or many databases that store and analyze the data collected by the system. At some level there must be some software to manage the system. The software may be as simple as code that manages the connection between the hardware and the database or as complex as an ERP, MRP, or some other inventory management software. Hardware A wide range of hardware is manufactured for use in barcode systems by such manufacturers as Datalogic, Intermec, HHP (Hand Held Products), Microscan Systems, Unitech, Metrologic, PSC, and PANMOBIL, with the best known brand of handheld scanners and mobile computers being produced by Symbol, a division of Motorola. 
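As noted above, the software side of a barcode system can be as simple as code that routes scanner output into a database lookup. The sketch below is a hypothetical Python illustration only: it assumes a keyboard-wedge style scanner (which delivers each scanned code to the program as a line of typed input), and the product codes and stock figures are invented.

    import sys

    # A small dictionary stands in for the inventory database mentioned above;
    # all codes and quantities here are invented for illustration.
    inventory = {
        "036000291452": {"item": "example product A", "on_hand": 40},
        "012345678905": {"item": "example product B", "on_hand": 12},
    }

    def handle_scan(code, db):
        record = db.get(code)
        if record is None:
            return code + ": unknown code"
        record["on_hand"] -= 1  # e.g. decrement stock at the point of sale
        return "{}: {}, {} left".format(code, record["item"], record["on_hand"])

    # A keyboard-wedge scanner "types" the code followed by Enter, so reading
    # standard input line by line is enough to receive scans.
    if __name__ == "__main__":
        for line in sys.stdin:
            print(handle_scan(line.strip(), inventory))

In a production deployment the dictionary would be replaced by the ERP, MRP or inventory database described in this section, but the overall flow of scan, look up, update is the same.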
Software Some ERP, MRP, and other inventory management software have built in support for barcode reading. Alternatively, custom interfaces can be created using a language such as C++, C#, Java, Visual Basic.NET, and many others. In addition, software development kits are produced to aid the process. Industrial adoption In 1981 the United States Department of Defense adopted the use of Code 39 for marking all products sold to the United States military. This system, Logistics Applications of Automated Marking and Reading Symbols (LOGMARS), is still used by DoD and is widely viewed as the catalyst for widespread adoption of barcoding in industrial uses. Use Barcodes are widely used around the world in many contexts. In stores, UPC barcodes are pre-printed on most items other than fresh produce from a grocery store. This speeds up processing at check-outs and helps track items and also reduces instances of shoplifting involving price tag swapping, although shoplifters can now print their own barcodes. Barcodes that encode a book's ISBN are also widely pre-printed on books, journals and other printed materials. In addition, retail chain membership cards use barcodes to identify customers, allowing for customized marketing and greater understanding of individual consumer shopping patterns. At the point of sale, shoppers can get product discounts or special marketing offers through the address or e-mail address provided at registration. Barcodes are widely used in healthcare and hospital settings, ranging from patient identification (to access patient data, including medical history, drug allergies, etc.) to creating SOAP notes with barcodes to medication management. They are also used to facilitate the separation and indexing of documents that have been imaged in batch scanning applications, track the organization of species in biology, and integrate with in-motion checkweighers to identify the item being weighed in a conveyor line for data collection. They can also be used to keep track of objects and people; they are used to keep track of rental cars, airline luggage, nuclear waste, express mail, and parcels. Barcoded tickets (which may be printed by the customer on their home printer, or stored on their mobile device) allow the holder to enter sports arenas, cinemas, theatres, fairgrounds, and transportation, and are used to record the arrival and departure of vehicles from rental facilities etc. This can allow proprietors to identify duplicate or fraudulent tickets more easily. Barcodes are widely used in shop floor control applications software where employees can scan work orders and track the time spent on a job. Barcodes are also used in some kinds of non-contact 1D and 2D position sensors. A series of barcodes are used in some kinds of absolute 1D linear encoder. The barcodes are packed close enough together that the reader always has one or two barcodes in its field of view. As a kind of fiducial marker, the relative position of the barcode in the field of view of the reader gives incremental precise positioning, in some cases with sub-pixel resolution. The data decoded from the barcode gives the absolute coarse position. An "address carpet", used in digital paper, such as Howell's binary pattern and the Anoto dot pattern, is a 2D barcode designed so that a reader, even though only a tiny portion of the complete carpet is in the field of view of the reader, can find its absolute X, Y position and rotation in the carpet. Matrix codes can embed a hyperlink to a web page. 
A mobile device with a built-in camera might be used to read the pattern and browse the linked website, which can help a shopper find the best price for an item in the vicinity. Since 2005, airlines use an IATA-standard 2D barcode on boarding passes (Bar Coded Boarding Pass (BCBP)), and since 2008 2D barcodes sent to mobile phones enable electronic boarding passes. Some applications for barcodes have fallen out of use. In the 1970s and 1980s, software source code was occasionally encoded in a barcode and printed on paper (Cauzin Softstrip and Paperbyte are barcode symbologies specifically designed for this application), and the 1991 Barcode Battler computer game system used any standard barcode to generate combat statistics. Artists have used barcodes in art, such as Scott Blake's Barcode Jesus, as part of the post-modernism movement. Symbologies The mapping between messages and barcodes is called a symbology. The specification of a symbology includes the encoding of the message into bars and spaces, any required start and stop markers, the size of the quiet zone required to be before and after the barcode, and the computation of a checksum. Linear symbologies can be classified mainly by two properties: Continuous vs. discrete Characters in discrete symbologies are composed of n bars and n − 1 spaces. There is an additional space between characters, but it does not convey information, and may have any width as long as it is not confused with the end of the code. Characters in continuous symbologies are composed of n bars and n spaces, and usually abut, with one character ending with a space and the next beginning with a bar, or vice versa. A special end pattern that has bars on both ends is required to end the code. Two-width vs. many-width A two-width, also called a binary bar code, contains bars and spaces of two widths, "wide" and "narrow". The precise width of the wide bars and spaces is not critical; typically, it is permitted to be anywhere between 2 and 3 times the width of the narrow equivalents. Some other symbologies use bars of two different heights (POSTNET), or the presence or absence of bars (CPC Binary Barcode). These are normally also considered binary bar codes. Bars and spaces in many-width symbologies are all multiples of a basic width called the module; most such codes use four widths of 1, 2, 3 and 4 modules. Some symbologies use interleaving. The first character is encoded using black bars of varying width. The second character is then encoded by varying the width of the white spaces between these bars. Thus, characters are encoded in pairs over the same section of the barcode. Interleaved 2 of 5 is an example of this. Stacked symbologies repeat a given linear symbology vertically. The most common among the many 2D symbologies are matrix codes, which feature square or dot-shaped modules arranged on a grid pattern. 2D symbologies also come in circular and other patterns and may employ steganography, hiding modules within an image (for example, DataGlyphs). Linear symbologies are optimized for laser scanners, which sweep a light beam across the barcode in a straight line, reading a slice of the barcode light-dark patterns. Scanning at an angle makes the modules appear wider, but does not change the width ratios. Stacked symbologies are also optimized for laser scanning, with the laser making multiple passes across the barcode. In the 1990s development of charge-coupled device (CCD) imagers to read barcodes was pioneered by Welch Allyn. 
Imaging does not require moving parts, as a laser scanner does. In 2007, linear imaging had begun to supplant laser scanning as the preferred scan engine for its performance and durability. 2D symbologies cannot be read by a laser, as there is typically no sweep pattern that can encompass the entire symbol. They must be scanned by an image-based scanner employing a CCD or other digital camera sensor technology. Barcode readers The earliest, and still the cheapest, barcode scanners are built from a fixed light and a single photosensor that is manually moved across the barcode. Barcode scanners can be classified into three categories based on their connection to the computer. The older type is the RS-232 barcode scanner. This type requires special programming for transferring the input data to the application program. Keyboard interface scanners connect to a computer using a PS/2 or AT keyboard–compatible adaptor cable (a "keyboard wedge"). The barcode's data is sent to the computer as if it had been typed on the keyboard. Like the keyboard interface scanner, USB scanners do not need custom code for transferring input data to the application program. On PCs running Windows the human interface device emulates the data merging action of a hardware "keyboard wedge", and the scanner automatically behaves like an additional keyboard. Most modern smartphones are able to decode barcode using their built-in camera. Google's mobile Android operating system can use their own Google Lens application to scan QR codes, or third-party apps like Barcode Scanner to read both one-dimensional barcodes and QR codes. Google's Pixel devices can natively read QR codes inside the default Pixel Camera app. Nokia's Symbian operating system featured a barcode scanner, while mbarcode is a QR code reader for the Maemo operating system. In Apple iOS 11, the native camera app can decode QR codes and can link to URLs, join wireless networks, or perform other operations depending on the QR Code contents. Other paid and free apps are available with scanning capabilities for other symbologies or for earlier iOS versions. With BlackBerry devices, the App World application can natively scan barcodes and load any recognized Web URLs on the device's Web browser. Windows Phone 7.5 is able to scan barcodes through the Bing search app. However, these devices are not designed specifically for the capturing of barcodes. As a result, they do not decode nearly as quickly or accurately as a dedicated barcode scanner or portable data terminal. Quality control and verification It is common for producers and users of bar codes to have a quality management system which includes verification and validation of bar codes. Barcode verification examines scanability and the quality of the barcode in comparison to industry standards and specifications. Barcode verifiers are primarily used by businesses that print and use barcodes. Any trading partner in the supply chain can test barcode quality. It is important to verify a barcode to ensure that any reader in the supply chain can successfully interpret a barcode with a low error rate. Retailers levy large penalties for non-compliant barcodes. These chargebacks can reduce a manufacturer's revenue by 2% to 10%. A barcode verifier works the way a reader does, but instead of simply decoding a barcode, a verifier performs a series of tests. For linear barcodes these tests are: Edge contrast (EC) The difference between the space reflectance (Rs) and adjoining bar reflectance (Rb). 
EC=Rs-Rb Minimum bar reflectance (Rb) The smallest reflectance value in a bar. Minimum space reflectance (Rs) The smallest reflectance value in a space. Symbol contrast (SC) Symbol contrast is the difference in reflectance values of the lightest space (including the quiet zone) and the darkest bar of the symbol. The greater the difference, the higher the grade. The parameter is graded as either A, B, C, D, or F. SC=Rmax-Rmin Minimum edge contrast (ECmin) The difference between the space reflectance (Rs) and adjoining bar reflectance (Rb). EC=Rs-Rb Modulation (MOD) The parameter is graded either A, B, C, D, or F. This grade is based on the relationship between minimum edge contrast (ECmin) and symbol contrast (SC). MOD=ECmin/SC The greater the difference between minimum edge contrast and symbol contrast, the lower the grade. Scanners and verifiers perceive the narrower bars and spaces to have less intensity than wider bars and spaces; the comparison of the lesser intensity of narrow elements to the wide elements is called modulation. This condition is affected by aperture size. Inter-character gap In discrete barcodes, the space that disconnects the two contiguous characters. When present, inter-character gaps are considered spaces (elements) for purposes of edge determination and reflectance parameter grades. Defects Decode Extracting the information which has been encoded in a bar code symbol. Decodability Can be graded as A, B, C, D, or F. The Decodability grade indicates the amount of error in the width of the most deviant element in the symbol. The less deviation in the symbology, the higher the grade. Decodability is a measure of print accuracy using the symbology reference decode algorithm. 2D matrix symbols look at the parameters: Symbol contrast Modulation Decode Unused error correction Fixed (finder) pattern damage Grid non-uniformity Axial non-uniformity Depending on the parameter, each ANSI test is graded from 0.0 to 4.0 (F to A), or given a pass or fail mark. Each grade is determined by analyzing the scan reflectance profile (SRP), an analog graph of a single scan line across the entire symbol. The lowest of the 8 grades is the scan grade, and the overall ISO symbol grade is the average of the individual scan grades. For most applications a 2.5 (C) is the minimal acceptable symbol grade. Compared with a reader, a verifier measures a barcode's optical characteristics to international and industry standards. The measurement must be repeatable and consistent. Doing so requires constant conditions such as distance, illumination angle, sensor angle and verifier aperture. Based on the verification results, the production process can be adjusted to print higher quality barcodes that will scan down the supply chain. Bar code validation may include evaluations after use (and abuse) testing such as sunlight, abrasion, impact, moisture, etc. Barcode verifier standards Barcode verifier standards are defined by the International Organization for Standardization (ISO), in ISO/IEC 15426-1 (linear) or ISO/IEC 15426-2 (2D). The current international barcode quality specification is ISO/IEC 15416 (linear) and ISO/IEC 15415 (2D). The European Standard EN 1635 has been withdrawn and replaced by ISO/IEC 15416. The original U.S. barcode quality specification was ANSI X3.182. (UPCs used in the US – ANSI/UCC5). As of 2011 the ISO workgroup JTC1 SC31 was developing a Direct Part Marking (DPM) quality standard: ISO/IEC TR 29158. 
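To make the linear-barcode parameters above concrete, the sketch below computes symbol contrast, minimum edge contrast and modulation from a single scan reflectance profile, using only the formulas quoted in this section (SC = Rmax - Rmin, EC = Rs - Rb, MOD = ECmin/SC). It is an editorial Python illustration: the reflectance values are invented, and a real verifier applies the full ISO/IEC 15416 grading procedure over multiple scan lines before assigning the 4.0 to 0.0 (A to F) grades described above.

    # Alternating element reflectances along one scan line (space, bar, space, ...),
    # expressed as fractions of full reflectance. These values are invented.
    reflectances = [0.82, 0.12, 0.78, 0.15, 0.80, 0.10, 0.76, 0.14, 0.81]

    def linear_verification_parameters(profile):
        rmax, rmin = max(profile), min(profile)
        sc = rmax - rmin                              # symbol contrast: SC = Rmax - Rmin
        # edge contrast for each adjoining space/bar pair: EC = Rs - Rb
        edge_contrasts = [abs(a - b) for a, b in zip(profile, profile[1:])]
        ec_min = min(edge_contrasts)                  # minimum edge contrast (ECmin)
        return {"SC": sc, "ECmin": ec_min, "MOD": ec_min / sc}

    print(linear_verification_parameters(reflectances))

A low modulation value here would flag exactly the condition the text describes: narrow elements reflecting less distinctly than wide ones, which lowers the overall symbol grade.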
Benefits In point-of-sale management, barcode systems can provide detailed up-to-date information on the business, accelerating decisions and with more confidence. For example: Fast-selling items can be identified quickly and automatically reordered. Slow-selling items can be identified, preventing inventory build-up. The effects of merchandising changes can be monitored, allowing fast-moving, more profitable items to occupy the best space. Historical data can be used to predict seasonal fluctuations very accurately. Items may be repriced on the shelf to reflect both sale prices and price increases. This technology also enables the profiling of individual consumers, typically through a voluntary registration of discount cards. While pitched as a benefit to the consumer, this practice is considered to be potentially dangerous by privacy advocates. Besides sales and inventory tracking, barcodes are very useful in logistics and supply chain management. When a manufacturer packs a box for shipment, a unique identifying number (UID) can be assigned to the box. A database can link the UID to relevant information about the box; such as order number, items packed, quantity packed, destination, etc. The information can be transmitted through a communication system such as electronic data interchange (EDI) so the retailer has the information about a shipment before it arrives. Shipments that are sent to a distribution center (DC) are tracked before forwarding. When the shipment reaches its final destination, the UID gets scanned, so the store knows the shipment's source, contents, and cost. Barcode scanners are relatively low cost and extremely accurate compared to key-entry, with only about 1 substitution error in 15,000 to 36 trillion characters entered. The exact error rate depends on the type of barcode. Types of barcodes Linear barcodes A first generation, "one dimensional" barcode that is made up of lines and spaces of various widths or sizes that create specific patterns. 2D barcodes 2D barcodes consist of bars, but use both dimensions for encoding. Matrix (2D) codes A matrix code or simply a 2D code, is a two-dimensional way to represent information. It can represent more data per unit area. Apart from dots various other patterns can be used. Example images In popular culture In architecture, a building in Lingang New City by German architects Gerkan, Marg and Partners incorporates a barcode design, as does a shopping mall called Shtrikh-kod (Russian for barcode) in Narodnaya ulitsa ("People's Street") in the Nevskiy district of St. Petersburg, Russia. In media, in 2011, the National Film Board of Canada and ARTE France launched a web documentary entitled Barcode.tv, which allows users to view films about everyday objects by scanning the product's barcode with their iPhone camera. In professional wrestling, the WWE stable D-Generation X incorporated a barcode into their entrance video, as well as on a T-shirt. In video games, the protagonist of the Hitman video game series has a barcode tattoo on the back of his head; QR codes can also be scanned in a side mission in Watch Dogs. The 2018 videogame Judgment features QR Codes that protagonist Takayuki Yagami can photograph with his phone camera. These are mostly to unlock parts for Yagami's Drone. Interactive Textbooks were first published by Harcourt College Publishers to Expand Education Technology with Interactive Textbooks. 
Designed barcodes Some companies integrate custom designs into barcodes on their consumer products without impairing their readability. Opposition Some have regarded barcodes to be an intrusive surveillance technology. Some Christians, pioneered by a 1982 book The New Money System 666 by Mary Stewart Relfe, believe the codes hide the number 666, representing the "Number of the beast". Old Believers, a separation of the Russian Orthodox Church, believe barcodes are the stamp of the Antichrist. Television host Phil Donahue described barcodes as a "corporate plot against consumers". See also Automated identification and data capture (AIDC) Barcode printer Campus card European Article Numbering-Uniform Code Council Global Trade Item Number Identifier Inventory control system Object hyperlinking Semacode SPARQCode (QR code) List of GS1 country codes References Further reading Automating Management Information Systems: Barcode Engineering and Implementation – Harry E. Burke, Thomson Learning, Automating Management Information Systems: Principles of Barcode Applications – Harry E. Burke, Thomson Learning, The Bar Code Book – Roger C. Palmer, Helmers Publishing, , 386 pages The Bar Code Manual – Eugene F. Brighan, Thompson Learning, Handbook of Bar Coding Systems – Harry E. Burke, Van Nostrand Reinhold Company, , 219 pages Information Technology for Retail:Automatic Identification & Data Capture Systems – Girdhar Joshi, Oxford University Press, , 416 pages Lines of Communication – Craig K. Harmon, Helmers Publishing, , 425 pages Punched Cards to Bar Codes – Benjamin Nelson, Helmers Publishing, , 434 pages Revolution at the Checkout Counter: The Explosion of the Bar Code – Stephen A. Brown, Harvard University Press, Reading Between The Lines – Craig K. Harmon and Russ Adams, Helmers Publishing, , 297 pages The Black and White Solution: Bar Code and the IBM PC – Russ Adams and Joyce Lane, Helmers Publishing, , 169 pages Sourcebook of Automatic Identification and Data Collection – Russ Adams, Van Nostrand Reinhold, , 298 pages Inside Out: The Wonders of Modern Technology – Carol J. Amato, Smithmark Pub, , 1993 External links Free Online Barcode Generator. Encodings Automatic identification and data capture 1952 introductions American inventions Records management technology
Barcode
[ "Technology" ]
6,936
[ "Data", "Automatic identification and data capture" ]
60,605
https://en.wikipedia.org/wiki/Dust%20storm
A dust storm, also called a sandstorm, is a meteorological phenomenon common in arid and semi-arid regions. Dust storms arise when a gust front or other strong wind blows loose sand and dirt from a dry surface. Fine particles are transported by saltation and suspension, a process that moves soil from one place and deposits it in another. The arid regions of North Africa, the Middle East, Central Asia and China are the main terrestrial sources of airborne dust. It has been argued that poor management of Earth's drylands, such as neglecting the fallow system, is increasing the size and frequency of dust storms from desert margins and changing both the local and global climate, as well as impacting local economies. The term sandstorm is used most often in the context of desert dust storms, especially in the Sahara Desert, or places where sand is a more prevalent soil type than dirt or rock, when, in addition to fine particles obscuring visibility, a considerable amount of larger sand particles are blown closer to the surface. The term dust storm is more likely to be used when finer particles are blown long distances, especially when the dust storm affects urban areas. Causes As the force of wind passing over loosely held particles increases, particles of sand first start to vibrate, then to move across the surface in a process called saltation. As they repeatedly strike the ground, they loosen and break off smaller particles of dust which then begin to travel in suspension. At wind speeds above that which causes the smallest particles to suspend, there will be a population of dust grains moving by a range of mechanisms: suspension, saltation and creep. A study from 2008 finds that the initial saltation of sand particles induces a static electric field by friction. Saltating sand acquires a negative charge relative to the ground which in turn loosens more sand particles which then begin saltating. This process has been found to double the number of particles predicted by previous theories. Particles become loosely held mainly due to a prolonged drought or arid conditions, and high wind speeds. Gust fronts may be produced by the outflow of rain-cooled air from an intense thunderstorm. Or, the wind gusts may be produced by a dry cold front: that is, a cold front that is moving into a dry air mass and is producing no precipitation—the type of dust storm which was common during the Dust Bowl years in the U.S. Following the passage of a dry cold front, convective instability resulting from cooler air riding over heated ground can maintain the dust storm initiated at the front. In desert areas, dust and sand storms are most commonly caused by either thunderstorm outflows, or by strong pressure gradients which cause an increase in wind velocity over a wide area. The vertical extent of the dust or sand that is raised is largely determined by the stability of the atmosphere above the ground as well as by the weight of the particulates. In some cases, dust and sand may be confined to a relatively-shallow layer by a low-lying temperature inversion. In other instances, dust (but not sand) may be lifted to great heights. Dust storms are a major health hazard. Drought and wind contribute to the emergence of dust storms, as do poor farming and grazing practices by exposing the dust and sand to the wind. Wildfires can lead to dust storms as well. One poor farming practice which contributes to dust storms is dryland farming. 
Particularly poor dryland farming techniques include intensive tillage and the absence of established crops or cover crops when storms strike, at especially vulnerable times prior to revegetation. In a semi-arid climate, these practices increase susceptibility to dust storms. However, soil conservation practices may be implemented to control wind erosion. Physical and environmental effects A sandstorm can transport large volumes of sand unexpectedly. Dust storms can carry large amounts of dust, with the leading edge composed of a dense wall of thick dust. Dust and sand storms which come off the Sahara Desert are locally known as a simoom or simoon (sîmūm, sîmūn). The haboob (həbūb) is a sandstorm prevalent in the region of Sudan around Khartoum, with occurrences being most common in the summer. The Sahara desert is a key source of dust storms, particularly the Bodélé Depression and an area covering the confluence of Mauritania, Mali, and Algeria. Sahara dust is frequently emitted into the Mediterranean atmosphere and transported by the winds sometimes as far north as central Europe and Great Britain. Saharan dust storms have increased approximately 10-fold during the half-century since the 1950s, causing topsoil loss in Niger, Chad, northern Nigeria, and Burkina Faso. In Mauritania there were just two dust storms a year in the early 1960s, but by 2007 there were about 80 a year, according to English geographer Andrew Goudie, professor at the University of Oxford. Levels of Saharan dust coming off the west coast of Africa in June 2007 were five times those observed in June 2006, and were the highest observed since at least 1999, which may have cooled Atlantic waters enough to slightly reduce hurricane activity in late 2007. Dust storms have also been shown to increase the spread of disease across the globe. Bacteria and fungus spores in the ground are blown into the atmosphere by the storms with the minute particles and interact with urban air pollution. Short-term effects of exposure to desert dust include an immediate increase in symptoms and worsening of lung function in individuals with asthma. Increased mortality and morbidity from long-transported dust from both Saharan and Asian dust storms suggests that long-transported dust storm particles adversely affect the circulatory system. Dust pneumonia is the result of large amounts of dust being inhaled. Prolonged and unprotected exposure of the respiratory system in a dust storm can also cause silicosis, which, if left untreated, will lead to asphyxiation; silicosis is an incurable condition that may also lead to lung cancer. There is also the danger of keratoconjunctivitis sicca ("dry eyes") which, in severe cases without immediate and proper treatment, can lead to blindness. Economic impact Dust storms cause soil loss from the drylands, and worse, they preferentially remove organic matter and the nutrient-rich lightest particles, thereby reducing agricultural productivity. Also, the abrasive effect of the storm damages young crop plants. Dust storms also reduce visibility, affecting aircraft and road transportation. Dust can also have beneficial effects where it deposits: Central and South American rainforests get significant quantities of mineral nutrients from the Sahara; iron-poor ocean regions get iron; and dust in Hawaii increases plantain growth. 
In northern China as well as the mid-western U.S., ancient dust storm deposits known as loess are highly fertile soils, but they are also a significant source of contemporary dust storms when soil-securing vegetation is disturbed. The existence of some Iranian cities is challenged by dust storms. On Mars Dust storms are not limited to Earth and have also been known to form on Mars. These dust storms can extend over larger areas than those on Earth, sometimes encircling the planet, with very high wind speeds. However, given Mars' much lower atmospheric pressure (roughly 1% that of Earth's), the intensity of Mars storms could never reach the hurricane-force winds experienced on Earth. Martian dust storms are formed when solar heating warms the Martian atmosphere and causes the air to move, lifting dust off the ground. The chance for storms is increased when there are great temperature variations like those seen at the equator during the Martian summer. See also References External links 12-hour U.S. map of surface dust concentrations Mouse-over an hour block on the row for 'Surface Dust Concentrations' Dust in the Wind Photos of the April 14 1935 and September 2 1934 dust storms in the Texas Panhandle hosted by the Portal to Texas History. University of Arizona Dust Model Page Photos of a sandstorm in Riyadh in 2009 from the BBC Newsbeat website Dust storm in Phoenix Arizona via YouTube Weather hazards Road hazards Articles containing video clips Hazards of outdoor recreation
Dust storm
[ "Physics", "Technology" ]
1,650
[ "Weather", "Physical phenomena", "Road hazards", "Weather hazards" ]
60,633
https://en.wikipedia.org/wiki/Dmitri%20Mendeleev
Dmitri Ivanovich Mendeleev (8 February [O.S. 27 January] 1834 – 2 February [O.S. 20 January] 1907) was a Russian chemist known for formulating the periodic law and creating a version of the periodic table of elements. He used the periodic law not only to correct the then-accepted properties of some known elements, such as the valence and atomic weight of uranium, but also to predict the properties of three elements that were yet to be discovered (germanium, gallium and scandium). Early life Mendeleev was born in the village of Verkhnie Aremzyani, near Tobolsk in Siberia, to Ivan Pavlovich Mendeleev (1783–1847) and Maria Dmitrievna Mendeleeva (née Kornilieva) (1793–1850). Ivan worked as a school principal and a teacher of fine arts, politics and philosophy at the Tambov and Saratov gymnasiums. Ivan's father, Pavel Maximovich Sokolov, was a Russian Orthodox priest from the Tver region. As per the tradition of priests of that time, Pavel's children were given new family names while attending the theological seminary, with Ivan getting the family name Mendeleev after the name of a local landlord. Maria Kornilieva came from a well-known family of Tobolsk merchants, founders of the first Siberian printing house who traced their ancestry to Yakov Korniliev, a 17th-century posad man turned a wealthy merchant. In 1889, a local librarian published an article in the Tobolsk newspaper where he claimed that Yakov was a baptized Teleut, an ethnic minority known as "white Kalmyks" at the time. Since no sources were provided and no documented facts of Yakov's life were ever revealed, biographers generally dismiss it as a myth. In 1908, shortly after Mendeleev's death, one of his nieces published Family Chronicles. Memories about D. I. Mendeleev where she voiced "a family legend" about Maria's grandfather who married "a Kyrgyz or Tatar beauty whom he loved so much that when she died, he also died from grief". This, however, contradicts the documented family chronicles, and neither of those legends is supported by Mendeleev's autobiography, his daughter's or his wife's memoirs. Yet some Western scholars still refer to Mendeleev's supposed "Mongol", "Tatar", "Tartarian" or simply "Asian" ancestry as a fact. Mendeleev was raised as an Orthodox Christian, his mother encouraging him to "patiently search divine and scientific truth". His son Ivan would later recount that Mendeleev had departed from the Church and embraced a form of "romanticized deism". Mendeleev was the youngest of 17 siblings, of whom "only 14 stayed alive to be baptized" according to Mendeleev's brother Pavel, meaning the others died soon after their birth. The exact number of Mendeleev's siblings differs among sources and is still a matter of some historical dispute. Unfortunately for the family's financial well-being, his father became blind and lost his teaching position. His mother was forced to work and she restarted her family's abandoned glass factory. At the age of 13, after the passing of his father and the destruction of his mother's factory by fire, Mendeleev attended the Gymnasium in Tobolsk. In 1849, his mother took Mendeleev across Russia from Siberia to Moscow with the aim of getting Mendeleev enrolled at the Moscow University. The university in Moscow did not accept him. The mother and son continued to Saint Petersburg to the father's alma mater. The now poor Mendeleev family relocated to Saint Petersburg, where he entered the Main Pedagogical Institute in 1850. 
After graduation, he contracted tuberculosis, causing him to move to the Crimean Peninsula on the northern coast of the Black Sea in 1855. While there, he became a science master of the 1st Simferopol Gymnasium. In 1857, he returned to Saint Petersburg with fully restored health. Between 1859 and 1861, he worked on the capillarity of liquids and the workings of the spectroscope in Heidelberg. Later in 1861, he published a textbook named Organic Chemistry. This won him the Demidov Prize of the Petersburg Academy of Sciences. On 4 April 1862, he became engaged to Feozva Nikitichna Leshcheva, and they married on 27 April 1862 at Nikolaev Engineering Institute's church in Saint Petersburg (where he taught). Mendeleev became a professor at the Saint Petersburg Technological Institute and Saint Petersburg State University in 1864 and 1865, respectively. In 1865, he became a Doctor of Science for his dissertation "On the Combinations of Water with Alcohol". He achieved tenure in 1867 at St. Petersburg University and started to teach inorganic chemistry while succeeding Voskresenskii to this post; by 1871, he had transformed Saint Petersburg into an internationally recognized center for chemistry research. Periodic table In 1863, there were 56 known elements with a new element being discovered at a rate of approximately one per year. Other scientists had previously identified periodicity of elements. John Newlands described a Law of Octaves, noting their periodicity according to relative atomic weight in 1864, publishing it in 1865. His proposal identified the potential for new elements such as germanium. The concept was criticized, and his innovation was not recognized by the Society of Chemists until 1887. Another person to propose a periodic table was Lothar Meyer, who published a paper in 1864 describing 28 elements classified by their valence, but with no predictions of new elements. After becoming a teacher in 1867, Mendeleev wrote Principles of Chemistry, which became the definitive textbook of its time. It was published in two volumes between 1868 and 1870, and Mendeleev wrote it as he was preparing a textbook for his course. This is when he made his most important discovery. As he attempted to classify the elements according to their chemical properties, he noticed patterns that led him to postulate his periodic table; he claimed to have envisioned the complete arrangement of the elements in a dream. Unaware of the earlier work on periodic tables going on in the 1860s, he made his own table. By adding additional elements following this pattern, Mendeleev developed his extended version of the periodic table. On 6 March 1869, he made a formal presentation to the Russian Chemical Society, titled The Dependence between the Properties of the Atomic Weights of the Elements, which described elements according to both atomic weight (now called relative atomic mass) and valence. This presentation stated that: The elements, if arranged according to their atomic weight, exhibit an apparent periodicity of properties. Elements which are similar regarding their chemical properties either have similar atomic weights (e.g., Pt, Ir, Os) or have their atomic weights increasing regularly (e.g., K, Rb, Cs). The arrangement of the elements in groups of elements in the order of their atomic weights corresponds to their so-called valencies, as well as, to some extent, to their distinctive chemical properties; as is apparent among other series in that of Li, Be, B, C, N, O, and F. 
The elements which are the most widely diffused have small atomic weights. The magnitude of the atomic weight determines the character of the element, just as the magnitude of the molecule determines the character of a compound body. We must expect the discovery of many yet unknown elements – for example, two elements, analogous to aluminium and silicon, whose atomic weights would be between 65 and 75. The atomic weight of an element may sometimes be amended by a knowledge of those of its contiguous elements. Thus the atomic weight of tellurium must lie between 123 and 126, and cannot be 128. (Tellurium's atomic weight is 127.6, and Mendeleev was incorrect in his assumption that atomic weight must increase with position within a period.) Certain characteristic properties of elements can be foretold from their atomic weights. Mendeleev published his periodic table of all known elements and predicted several new elements to complete the table in a Russian-language journal. Only a few months later, Meyer published a virtually identical table in a German-language journal. Mendeleev has the distinction of accurately predicting the properties of what he called ekasilicon, ekaaluminium and ekaboron (germanium, gallium and scandium, respectively). Mendeleev also proposed changes in the properties of some known elements. Prior to his work, uranium was supposed to have valence 3 and atomic weight about 120. Mendeleev realized that these values did not fit in his periodic table, and doubled both to valence 6 and atomic weight 240 (close to the modern value of 238). For his three predicted elements, he used the prefixes eka, dvi, and tri (Sanskrit for one, two, and three) in their naming. Mendeleev questioned some of the then-accepted atomic weights (they could be measured only with a relatively low accuracy at that time), pointing out that they did not correspond to those suggested by his Periodic Law. He noted that tellurium has a higher atomic weight than iodine, but he placed them in the right order, incorrectly predicting that the accepted atomic weights at the time were at fault. He was puzzled about where to put the known lanthanides, and predicted the existence of another row in the table, which became the actinides, among the heaviest elements in atomic weight. Some people dismissed Mendeleev for predicting that there would be more elements, but he was proven to be correct when Ga (gallium) and Ge (germanium) were found in 1875 and 1886 respectively, fitting perfectly into the two missing spaces. By using Sanskrit prefixes to name "missing" elements, Mendeleev may have recorded his debt to the Sanskrit grammarians of ancient India, who had created theories of language based on their discovery of the two-dimensional patterns of speech sounds (exemplified by the Śivasūtras in Pāṇini's Sanskrit grammar). Mendeleev was a friend and colleague of the Sanskritist Otto von Böhtlingk, who was preparing the second edition of his book on Pāṇini at about this time, and Mendeleev wished to honor Pāṇini with his nomenclature. The original draft made by Mendeleev would be found years later and published under the name Tentative System of Elements. Dmitri Mendeleev is often referred to as the Father of the Periodic Table. He called his table, or matrix, "the Periodic System". Later life In 1876, he became obsessed with Anna Ivanova Popova and began courting her; in 1881 he proposed to her and threatened suicide if she refused. 
His divorce from Leshcheva was finalized one month after he had married Popova (on 2 April) in early 1882. Even after the divorce, Mendeleev was technically a bigamist; the Russian Orthodox Church required at least seven years before lawful remarriage. His divorce and the surrounding controversy contributed to his failure to be admitted to the Russian Academy of Sciences (despite his international fame by that time). His daughter from his second marriage, Lyubov, became the wife of the famous Russian poet Alexander Blok. His other children were son Vladimir (a sailor, he took part in the notable Eastern journey of Nicholas II) and daughter Olga, from his first marriage to Feozva, and son Ivan and twins from Anna. Though Mendeleev was widely honored by scientific organizations all over Europe, including (in 1882) the Davy Medal from the Royal Society of London (which later also awarded him the Copley Medal in 1905), he resigned from Saint Petersburg University on 17 August 1890. He was elected a Foreign Member of the Royal Society (ForMemRS) in 1892, and in 1893 he was appointed director of the Bureau of Weights and Measures, a post which he occupied until his death. Mendeleev also investigated the composition of petroleum, and helped to found the first oil refinery in Russia. He recognized the importance of petroleum as a feedstock for petrochemicals. He is credited with a remark that burning petroleum as a fuel "would be akin to firing up a kitchen stove with bank notes". In 1905, Mendeleev was elected a member of the Royal Swedish Academy of Sciences. The following year the Nobel Committee for Chemistry recommended to the Swedish Academy to award the Nobel Prize in Chemistry for 1906 to Mendeleev for his discovery of the periodic system. He was also elected an International Member of the American Philosophical Society. The Chemistry Section of the Swedish Academy supported this recommendation. The academy was then supposed to approve the committee's choice, as it has done in almost every case. Unexpectedly, at the full meeting of the academy, a dissenting member of the Nobel Committee, Peter Klason, proposed the candidacy of Henri Moissan whom he favored. Svante Arrhenius, although not a member of the Nobel Committee for Chemistry, had a great deal of influence in the academy and also pressed for the rejection of Mendeleev, arguing that the periodic system was too old to acknowledge its discovery in 1906. According to the contemporaries, Arrhenius was motivated by the grudge he held against Mendeleev for his critique of Arrhenius's dissociation theory. After heated arguments, the majority of the academy chose Moissan by a margin of one vote. The attempts to nominate Mendeleev in 1907 were again frustrated by the absolute opposition of Arrhenius. In 1907, Mendeleev died at the age of 72 in Saint Petersburg from influenza, just 6 days short of his 73rd birthday. His last words were to his physician: "Doctor, you have science, I have faith," which is possibly a Jules Verne quote. Other achievements Mendeleev made other important contributions to science. 
The Russian chemist and science historian Lev Chugaev characterized him as "a chemist of genius, first-class physicist, a fruitful researcher in the fields of hydrodynamics, meteorology, geology, certain branches of chemical technology (explosives, petroleum, and fuels, for example) and other disciplines adjacent to chemistry and physics, a thorough expert of chemical industry and industry in general, and an original thinker in the field of economy [...]" Mendeleev was one of the founders, in 1868, of the Russian Chemical Society. He worked on the theory and practice of protectionist trade and on agriculture. In an attempt at a chemical conception of the aether, he put forward a hypothesis that there existed two inert chemical elements of lesser atomic weight than hydrogen. Of these two proposed elements, he thought the lighter to be an all-penetrating, all-pervasive gas, and the slightly heavier one to be a proposed element, coronium. Mendeleev devoted much study and made important contributions to the determination of the nature of such indefinite compounds as solutions. In another department of physical chemistry, he investigated the expansion of liquids with heat, and devised a formula similar to Gay-Lussac's law of the uniformity of the expansion of gases, while in 1861 he anticipated Thomas Andrews' conception of the critical temperature of gases by defining the absolute boiling-point of a substance as the temperature at which cohesion and heat of vaporization become equal to zero and the liquid changes to vapor, irrespective of the pressure and volume. Mendeleev is given credit for the introduction of the metric system to the Russian Empire. He invented pyrocollodion, a kind of smokeless powder based on nitrocellulose. This work had been commissioned by the Russian Navy, which however did not adopt its use. In 1892 Mendeleev organized its manufacture. Mendeleev studied the origin of petroleum; he concluded that hydrocarbons are abiogenic and form deep within the earth – see Abiogenic petroleum origin. He wrote: "The capital fact to note is that petroleum was born in the depths of the earth, and it is only there that we must seek its origin." Activities beyond chemistry Beginning in the 1870s, he published widely beyond chemistry, looking at aspects of Russian industry, and technical issues in agricultural productivity. He explored demographic issues, sponsored studies of the Arctic Sea, tried to measure the efficacy of chemical fertilizers, and promoted the merchant navy. He was especially active in improving the Russian petroleum industry, making detailed comparisons with the more advanced industry in Pennsylvania. Although not well-grounded in economics, he had observed industry throughout his European travels, and in 1891 he helped convince the Ministry of Finance to impose temporary tariffs with the aim of fostering Russian infant industries. In 1890 he resigned his professorship at St. Petersburg University following a dispute with officials at the Ministry of Education over the treatment of university students. In 1892 he was appointed director of Russia's Central Bureau of Weights and Measures, and led the way to standardize fundamental prototypes and measurement procedures. He set up an inspection system, and introduced the metric system to Russia. He debated against the scientific claims of spiritualism, arguing that metaphysical idealism was no more than ignorant superstition. 
He bemoaned the widespread acceptance of spiritualism in Russian culture, and its negative effects on the study of science. Vodka myth A very popular Russian story credits Mendeleev with setting the 40% standard strength of vodka. For example, Russian Standard vodka advertises: "In 1894, Dmitri Mendeleev, the greatest scientist in all Russia, received the decree to set the Imperial quality standard for Russian vodka and the 'Russian Standard' was born." Others cite "the highest quality of Russian vodka approved by the royal government commission headed by Mendeleev in 1894". In fact, the 40% standard was already introduced by the Russian government in 1843, when Mendeleev was nine years old. It is true that Mendeleev in 1892 became head of the Archive of Weights and Measures in Saint Petersburg, and evolved it into a government bureau the following year, but that institution was charged with standardising Russian trade weights and measuring instruments, not setting any production quality standards. Also, Mendeleev's 1865 doctoral dissertation was entitled "A Discourse on the combination of alcohol and water", but it only discussed medical-strength alcohol concentrations over 70%, and he never wrote anything about vodka. Commemoration A number of places and objects are associated with the name and achievements of the scientist. In Saint Petersburg his name was given to D. I. Mendeleev Institute for Metrology, the National Metrology Institute, dealing with establishing and supporting national and worldwide standards for precise measurements. Next to it there is a monument to him that consists of his sitting statue and a depiction of his periodic table on the wall of the establishment. In the Twelve Collegia building, now the centre of Saint Petersburg State University and in Mendeleev's time the Main Pedagogical Institute, there is Dmitry Mendeleev's Memorial Museum Apartment with his archives. The street in front of these is named after him as Mendeleevskaya liniya (Mendeleev Line). In Moscow, there is the D. Mendeleyev University of Chemical Technology of Russia. Mendelevium, which is a synthetic chemical element with the symbol Md (formerly Mv) and the atomic number 101, was named after Mendeleev. It is a metallic radioactive transuranic element in the actinide series, usually synthesized by bombarding einsteinium with alpha particles. The mineral mendeleevite-Ce was named in Mendeleev's honor in 2010. The related species mendeleevite-Nd was described in 2015. The large lunar impact crater Mendeleev, located on the far side of the Moon, also bears the name of the scientist. The Russian Academy of Sciences has occasionally awarded a Mendeleev Golden Medal since 1965. On 8 February 2016, Google celebrated Dmitri Mendeleev's 182nd birthday with a doodle. Works Менделеев Д. И. Периодический закон (DjVu). Т. 1. // Собрание сочинений в 3 томах — М.: Издательство Академии наук СССР — via Runivers Менделеев Д. И. Растворы (DjVu). Т. 2. // Собрание сочинений в 3 томах — М.: Издательство Академии наук СССР — via Runivers Менделеев Д. И. Периодический закон. Дополнительные материалы (DjVu). Т. 3. // Собрание сочинений в 3 томах — М.: Издательство Академии наук СССР — via Runivers Менделеев Д. И. Ещё о расширении жидкостей (Ответ профессору Авенариусу). — СПб.: Тип. В. Демакова, 1884. — 18 с. Менделеев Д. И. Об опытах над упругостью газов. Сообщение Д. И. Менделеева в Императорском Русском техническом обществе — 21 янв. 1881 г. — СПб., 1881. — 22 с. Менделеев Д. И. 
Дополнения к познанию России. Посмертное издание. СПб.: А. С. Суворин, 1907. — 109 с. + I л. портрет. Менделеев Д. И. Изоморфизм в связи с другими отношениями кристаллической формы к составу. Диссертация, представленная при окончании курса в Главном педагогическом институте студентом Д. Менделеевым. — СПб., 1856. — 234 с. Менделеев Д. И. О сопротивлении жидкостей и о воздухоплавании: Вып. 1. — СПб.: Тип. В. Демакова, 1880. — 80 с.: табл. Менделеев Д. И. Заветные мысли (1905) Менделеев Д. И. Попытка химического понимания мирового эфира (1902) 54 articles for the Brockhaus and Efron Encyclopedic Dictionary See also List of Russian chemists Mendeleev's predicted elements Periodic systems of small molecules Notes References Citations Works cited Further reading External links Babaev, Eugene V. (February 2009). Dmitriy Mendeleev: A Short CV, and A Story of Life – 2009 biography on the occasion of Mendeleev's 175th anniversary Babaev, Eugene V., Moscow State University. Dmitriy Mendeleev Online Original Periodic Table, annotated. "Everything in its Place", essay by Oliver Sacks Dmitri Mendeleev's official site 1834 births 1907 deaths People from Tobolsk People from Tobolsky Uyezd Discoverers of chemical elements Inorganic chemists People involved with the periodic table Chemists from the Russian Empire Russian deists Russian former Christians Encyclopedists from the Russian Empire Saint Petersburg State Institute of Technology alumni Saint Petersburg State University alumni Academic staff of Military Engineering-Technical University Corresponding members of the Saint Petersburg Academy of Sciences Igor Sikorsky Kyiv Polytechnic Institute Members of the Prussian Academy of Sciences Members of the Royal Swedish Academy of Sciences Members of the Serbian Academy of Sciences and Arts Foreign members of the Royal Society Foreign associates of the National Academy of Sciences Demidov Prize laureates Recipients of the Copley Medal Rare earth scientists Manchester Literary and Philosophical Society Privy Councillor (Russian Empire) Academic staff of the Saint Petersburg State Institute of Technology Foreign members of the Serbian Academy of Sciences and Arts Members of the American Philosophical Society
Dmitri Mendeleev
[ "Chemistry" ]
5,452
[ "Periodic table", "People involved with the periodic table", "Inorganic chemists" ]
60,635
https://en.wikipedia.org/wiki/Glucose%20tolerance%20test
The glucose tolerance test (GTT, not to be confused with GGT test) is a medical test in which glucose is given and blood samples taken afterward to determine how quickly it is cleared from the blood. The test is usually used to test for diabetes, insulin resistance, impaired beta cell function, and sometimes reactive hypoglycemia and acromegaly, or rarer disorders of carbohydrate metabolism. In the most commonly performed version of the test, an oral glucose tolerance test (OGTT), a standard dose of glucose is ingested by mouth and blood levels are checked two hours later. Many variations of the GTT have been devised over the years for various purposes, with different standard doses of glucose, different routes of administration, different intervals and durations of sampling, and various substances measured in addition to blood glucose. History The glucose tolerance test was first described in 1923 by Jerome W. Conn. The test was based on the previous work in 1913 by A. T. B. Jacobson in determining that carbohydrate ingestion results in blood glucose fluctuations, and the premise (named the Staub-Traugott Phenomenon after its first observers H. Staub in 1921 and K. Traugott in 1922) that a normal patient fed glucose will rapidly return to normal levels of blood glucose after an initial spike, and will see improved reaction to subsequent glucose feedings. Testing Since the 1970s, the World Health Organization and other organizations interested in diabetes agreed on a standard dose and duration. Preparation The patient is instructed not to restrict carbohydrate intake in the days or weeks before the test. The test should not be done during an illness, as results may not reflect the patient's glucose metabolism when healthy. A full adult dose should not be given to a person weighing less than 42.6 kg (94 lb), or the excessive glucose may produce a false positive result. Usually the OGTT is performed in the morning as glucose tolerance can exhibit a diurnal rhythm with a significant decrease in the afternoon. The patient is instructed to fast (water is allowed) for 8–12 hours prior to the tests. Medication such as large doses of salicylates, diuretics, anticonvulsants, and oral contraceptives affect the glucose tolerance test. Procedure A zero time (baseline) blood sample is drawn. The patient is then given a measured dose (below) of glucose solution to drink within a 5-minute time frame. Blood is drawn at intervals for measurement of glucose (blood sugar), and sometimes insulin levels. The intervals and number of samples vary according to the purpose of the test. For simple diabetes screening, the most important sample is the 2 hour sample and the 0 and 2 hour samples may be the only ones collected. A laboratory may continue to collect blood for up to 6 hours depending on the protocol requested by the physician. Dose of glucose and variations 75 g of oral dose is the recommendation of the WHO to be used in all adults, and is the main dosage used in the United States. The dose is adjusted for weight only in children. The dose should be drunk within 5 minutes. A variant is often used in pregnancy to screen for gestational diabetes, with a screening test of 50 g over one hour. If elevated, this is followed with a test of 100 g over three hours. In UK general practice, the standard glucose load was provided by 394 ml of the energy drink Lucozade with original carbonated flavour, but this is being superseded by purpose-made drinks. 
Substances measured and variations If renal glycosuria (sugar excreted in the urine despite normal levels in the blood) is suspected, urine samples may also be collected for testing along with the fasting and 2 hour blood tests. Results Fasting plasma glucose (measured before the OGTT begins) should be below 5.6 mmol/L (100 mg/dL). Fasting levels between 5.6 and 6.9 mmol/L (100 and 125 mg/dL) indicate prediabetes ("impaired fasting glucose"), and fasting levels repeatedly at or above 7.0 mmol/L (>126 mg/dL) are diagnostic of diabetes. For a 2 hour GTT with 75 g intake, a glucose level below 7.8 mmol/L (140 mg/dL) is normal, whereas higher levels indicate hyperglycemia. Blood plasma glucose between 7.8 mmol/L (140 mg/dL) and 11.1 mmol/L (200 mg/dL) indicate "impaired glucose tolerance", and levels at or above 11.1 mmol/L at 2 hours confirm a diagnosis of diabetes. For gestational diabetes, the American College of Obstetricians and Gynecologists (ACOG) recommends a two-step procedure, wherein the first step is a 50 g glucose dose. If after 1 hour the blood glucose level is more than 7.8 mmol/L (140 mg/dL), it is followed by a 100 g glucose dose. The diagnosis of gestational diabetes is then defined by a blood glucose level meeting or exceeding the cutoff values on at least two intervals, with cutoffs as follows: Before glucose intake (fasting): 5.3 mmol/L (95 mg/dL) 1 hour after drinking the glucose solution: 10.0 mmol/L (180 mg/dL) 2 hours: 8.6 mmol/L (155 mg/dL) 3 hours: 7.8 mmol/L (140 mg/dL) Sample method The diagnosis criteria stated above by the World Health Organization (WHO) are for venous samples only (a blood sample taken from a vein in the arm). An increasingly popular method for measuring blood glucose is to sample capillary or finger-prick blood, which is less invasive, more convenient for the patient and requires minimal training to conduct. Though fasting blood glucose levels have been shown to be similar in both capillary and venous samples, postprandial blood glucose levels (those measured after a meal) can vary. The diagnosis criteria issued by the WHO are only suitable for venous blood samples. Given the increasing popularity of capillary testing, the WHO has recommended that a conversion factor between the two sample types be calculated, but no conversion factor had been issued by the WHO, despite some medical professionals adopting their own. A 2020 study on pregnant women for gestational diabetes mellitus (GDM) found that 0-hour venous and capillary levels were similar, but that 2-hour samples were different. The authors compared their study with others, and concluded that capillary samples could be used for diagnosis of GDM during pregnancy using corrected cutoffs with acceptable accuracy in an antenatal care setting. Variations A standard two-hour GTT (glucose tolerance test) is sufficient to diagnose or exclude all forms of diabetes mellitus at all but the earliest stages of development. Longer tests have been used for a variety of other purposes, such as detecting reactive hypoglycemia or defining subsets of hypothalamic obesity. Insulin levels are sometimes measured to detect insulin resistance or deficiency. The GTT (glucose tolerance test) is of limited value in the diagnosis of reactive hypoglycemia, since normal levels do not preclude the diagnosis, abnormal levels do not prove that the patient's other symptoms are related to a demonstrated atypical OGTT, and many people without symptoms of reactive hypoglycemia may have the late low glucose. 
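As an illustrative aid only, the venous-plasma cutoffs quoted above can be expressed as a short classification routine. The following is a minimal sketch in Python under those thresholds (values in mmol/L); the function name and structure are hypothetical, and this is not clinical guidance.

def classify_ogtt(fasting_mmol_l, two_hour_mmol_l):
    # Fasting plasma glucose: < 5.6 normal; 5.6-6.9 impaired fasting glucose; >= 7.0 diabetic range.
    if fasting_mmol_l < 5.6:
        fasting = "normal"
    elif fasting_mmol_l < 7.0:
        fasting = "impaired fasting glucose (prediabetes)"
    else:
        fasting = "diabetic range"
    # Two-hour value after a 75 g load: < 7.8 normal; 7.8-11.1 impaired glucose tolerance; >= 11.1 diabetic range.
    if two_hour_mmol_l < 7.8:
        two_hour = "normal"
    elif two_hour_mmol_l < 11.1:
        two_hour = "impaired glucose tolerance"
    else:
        two_hour = "diabetic range"
    return {"fasting": fasting, "two_hour": two_hour}

# Example: a fasting value of 5.2 mmol/L with a 2-hour value of 8.4 mmol/L
# is classified as normal fasting glucose with impaired glucose tolerance.
print(classify_ogtt(5.2, 8.4))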
Oral glucose challenge test The oral glucose challenge test (OGCT) is a short version of the OGTT, used to check pregnant women for signs of gestational diabetes. It can be done at any time of day, not on an empty stomach. The test involves 50 g of glucose, with a reading after one hour. Limitations of OGTT The OGTT does not distinguish between insulin resistance in peripheral tissues and reduced capacity of the pancreas beta-cells to produce insulin. The OGTT is less accurate than the hyperinsulinemic-euglycemic clamp technique (the "gold standard" for measuring insulin resistance), or the insulin tolerance test, but is technically less difficult. Neither of the two technically demanding tests can be easily applied in a clinical setting or used in epidemiological studies. HOMA-IR (homeostatic model assessment) is a convenient way of measuring insulin resistance in normal subjects, which can be used in epidemiological studies, but can give erroneous results for diabetic patients. See also Metabolic Score for Insulin Resistance (METS-IR) Homeostatic model assessment SPINA-GBeta SPINA-GR Disposition index Diabetes mellitus Diabetes management References Diabetes-related tests Blood tests Dynamic endocrine function tests
Glucose tolerance test
[ "Chemistry" ]
1,831
[ "Blood tests", "Chemical pathology" ]
60,637
https://en.wikipedia.org/wiki/Skene%27s%20gland
In female human anatomy, Skene's glands or the Skene glands ( , also known as the lesser vestibular glands or paraurethral glands) are two glands located towards the lower end of the urethra. The glands are surrounded by tissue that swells with blood during sexual arousal, and secrete a fluid, carried by the Skene's ducts to openings near the urethral meatus, particularly during orgasm. Structure and function The Skene's glands' openings are located in the vestibule of the vulva, around the lower end of the urethra. The two Skene's ducts lead from the Skene's glands to the vulvar vestibule, to the left and right of the urethral opening, from which they are structurally capable of secreting fluid. Although there remains debate about the function of the Skene's glands, one purpose is to secrete a fluid that helps lubricate the urethral opening. Skene's glands produce a milk-like ultrafiltrate of blood plasma. The glands may be the source of female ejaculation, but this has not been proven. Because they and the male prostate act similarly by secreting prostate-specific antigen (PSA), which is an ejaculate protein produced in males, and prostatic acid phosphatase, some authors refer to the Skene's glands as the "female prostate". They are homologous to the male prostate (developed from the same embryological tissues), but the homology is still a matter of research. Female ejaculate may result from sexual activity for some women, especially during orgasm. In addition to PSA and acid phosphatase, Skene's gland fluid contains high concentrations of glucose and fructose. In an amount of a few milliliters, fluid is secreted from these glands when stimulated from inside the vagina. Female ejaculation and squirting (secretion of large amounts of fluid) are believed by researchers to be two different processes. They may occur in combination during orgasm. Squirting alone is a sudden expulsion of liquid that at least partly comes from the bladder and contains urine, whereas ejaculation fluid includes a whitish transparent ejaculate that appears to come from the Skene's gland. Clinical significance Disorders of the Skene's glands may include: Infection (called skenitis, urethral syndrome, or female prostatitis) Skene's duct cyst: lined by stratified squamous epithelium, the cyst is caused by obstruction of the Skene's glands. It is located lateral to the urinary meatus. Magnetic resonance imaging (MRI) is used for diagnosis. The cyst is treated by surgical excision or marsupialization. Trichomoniasis: the Skene's glands (along with other structures) act as a reservoir for Trichomonas vaginalis, which explains why topical treatments are not as effective as oral medication for this condition. History While the glands were first described in 1672 by Regnier de Graaf and by the French surgeon Alphonse Guérin (1816–1895), they were named after the Scottish gynaecologist Alexander Skene, who wrote about it in Western medical literature in 1880. In 2002, the term female prostate as a second term after paraurethral gland was added in Terminologia Histologica by the Federative International Committee on Anatomical Terminology. The 2008 edition notes that the term was introduced "because of the morphological and immunological significance of the structure". Other animals Horses, dogs, sheep, and pigs are examples of other mammals that have these glands (minor vestibular glands). 
See also Bartholin's gland List of related male and female reproductive organs Mesonephric duct Pudendal nerve Vaginal lubrication References Further reading Glands Exocrine system Human female reproductive system Mammal female reproductive system Anatomy named for one who described it Sex organs
Skene's gland
[ "Biology" ]
844
[ "Exocrine system", "Organ systems" ]
60,644
https://en.wikipedia.org/wiki/Peripheral
A peripheral device, or simply peripheral, is an auxiliary hardware device that a computer uses to transfer information externally. A peripheral is a hardware component that is accessible to and controlled by a computer but is not a core component of the computer. A peripheral can be categorized based on the direction in which information flows relative to the computer: The computer receives data from an input device; examples: mouse, keyboard, scanner, game controller, microphone and webcam The computer sends data to an output device; examples: monitor, printer, headphones, and speakers The computer sends and receives data via an input/output device; examples: storage device (such as disk drive, solid-state drive, USB flash drive, memory card and tape drive), modem, router, gateway and network adapter Many modern electronic devices, such as Internet-enabled digital watches, video game consoles, smartphones, and tablet computers, have interfaces for use as a peripheral. See also Display device Expansion card Punched card input/output Punched tape Video game accessory References External links Peripheral – Encyclopædia Britannica
Peripheral
[ "Technology" ]
223
[ "Computer peripherals", "Components" ]
60,647
https://en.wikipedia.org/wiki/Netwide%20Assembler
The Netwide Assembler (NASM) is an assembler and disassembler for the Intel x86 architecture. It can be used to write 16-bit, 32-bit (IA-32) and 64-bit (x86-64) programs. It is considered one of the most popular assemblers for Linux and x86 chips. It was originally written by Simon Tatham with assistance from Julian Hall. It is maintained by a small team led by H. Peter Anvin. It is open-source software released under the terms of a simplified (2-clause) BSD license. Features NASM can output several binary formats, including COFF, OMF, a.out, Executable and Linkable Format (ELF), Mach-O and binary file (.bin, binary disk image, used to compile operating systems), though position-independent code is supported only for ELF object files. It also has its own binary format called RDOFF. The variety of output formats allows retargeting programs to virtually any x86 operating system (OS). It can also create flat binary files, usable to write boot loaders, read-only memory (ROM) images, and in various facets of OS development. It can run on non-x86 platforms as a cross assembler, such as PowerPC and SPARC, though it cannot generate programs usable by those machines. NASM uses a variant of Intel assembly syntax instead of AT&T syntax. It also avoids features such as automatic generation of segment overrides (and the related ASSUME directive) used by MASM and compatible assemblers. Development NASM version 0.90 was released in October 1996. Version 2.00 was released on 28 November 2007, adding support for x86-64 extensions. The development versions are not uploaded to SourceForge.net, but are checked into GitHub with binary snapshots available from the project web page. In July 2009, as of version 2.07, NASM was released under the Simplified (2-clause) BSD license. Previously, because it was licensed under the LGPL, this led to the development of Yasm, a complete rewrite of NASM under the New BSD License. Yasm offered support for x86-64 earlier than NASM. It also added support for GNU Assembler syntax. RDOFF Relocatable Dynamic Object File Format (RDOFF) is used by developers to test the integrity of NASM's object file output abilities. It is based heavily on the internal structure of NASM, essentially consisting of a header containing a serialization of the output driver function calls followed by an array of sections containing executable code or data. Tools for using the format, including a linker and loader, are included in the NASM distribution. Until version 0.90 was released in October 1996, NASM supported output of only flat-format executable files (e.g., DOS COM files). In version 0.90, Simon Tatham added support for an object-file output interface, and for DOS .OBJ files for 16-bit code only. NASM thus lacked a 32-bit object format. To address this lack, and as an exercise to learn the object-file interface, developer Julian Hall put together the first version of RDOFF, which was released in NASM version 0.91. Since this initial version, there has been one major update to the RDOFF format, which added a record-length indicator on each header record, allowing programs to skip over records whose format they do not recognise, and support for multiple segments; RDOFF1 only supported three segments: text, data and bss (containing uninitialized data). The RDOFF format is strongly deprecated and has been disabled starting in NASM 2.15.04. See also Assembly language Comparison of assemblers References Further reading External links Special edition for Win32 and BeOS. 
A comparison of GAS and NASM at IBM : a converter between the source format of the assemblers NASM and GAS 1996 software Assemblers Disassemblers DOS software Free and open source compilers Linux programming tools MacOS MacOS programming tools Programming tools for Windows Software using the BSD license
Netwide Assembler
[ "Engineering" ]
875
[ "Reverse engineering", "Disassemblers" ]
60,670
https://en.wikipedia.org/wiki/Lists%20of%20scientists
This article contains links to lists of scientists. By academic genealogy Academic genealogy of chemists List of people considered father or mother of a scientific field List of the 72 names on the Eiffel Tower Apostles of Linnaeus List of Arab scientists and scholars List of modern Arab scientists and engineers List of archaeologists Astronomer Royal List of astronomers List of French astronomers List of Fellows of the Australian Academy of Science List of biologists List of biochemists List of carcinologists List of coleopterists List of entomologists List of geneticists List of herpetologists List of immunologists List of marine biologists List of microbiologists List of paleoethnobotanists List of plant scientists List of plant pathologists List of biophysicists List of Catholic clergy scientists List of lay Catholic scientists List of chemists List of Christians in science and technology List of Christian Nobel laureates List of Christian scientists and scholars of medieval Islam List of climate scientists List of women climate scientists and activists List of cognitive scientists List of computer scientists List of cosmologists List of criminologists List of ecologists List of Ethiopian scientists List of participants in the Evolving Genes and Proteins symposium List of Fellows of the Royal Society by election year List of foresters Fullerian Professor of Chemistry Fullerian Professor of Physiology List of geologists List of women geologists List of geophysicists List of Germans relocated to the US via the Operation Paperclip List of Jewish Nobel laureates List of Kyoto Prize winners List of atheists in science and technology List of loop quantum gravity researchers Maria Goeppert-Mayer Award List of medieval and pre-modern Persian doctors List of meteorologists List of mineralogists List of minor planet discoverers List of Muslim Nobel laureates List of National Medal of Science laureates List of neurochemists List of neurologists and neurosurgeons List of nominees for the Nobel Prize in Chemistry List of physicians and scientists of Upstate New York List of ornithologists List of paleontologists List of pathologists List of pharmacists List of photochemists List of physicists List of plasma physicists List of presidents of the Geological Society of London List of presidents of the Geologists' Association List of psephologists Quakers in science List of quantum gravity researchers Racah Lectures in Physics List of Researchers at Racah Institute List of rheologists RNA Tie Club List of runologists Savilian Professor of Astronomy List of scientists whose names are used as units List of people whose names are used in chemical element names List of scientists whose names are used in physical constants List of soil scientists List of spectroscopists List of statisticians List of systems scientists List of taxonomic authorities by name List of undersea explorers List of authors of names published under the ICZN By country, religion, gender or ethnic background List of African educators, scientists and scholars List of Argentine scientists List of Armenian scientists and philosophers List of African-American inventors and scientists List of Arab scientists and scholars List of Austrian scientists List of Azerbaijani scientists and philosophers List of Brazilian scientists List of Bangladeshi scientists List of British Jewish scientists List of Cornish scientists List of Scottish scientists List of Welsh scientists List of Byzantine scholars (including scientists) List of Chinese scientists 
List of Christian scientists List of Catholic scientists List of Christian Nobel laureates List of Jesuit scientists List of Roman Catholic cleric-scientists List of Quaker scientists List of Croatian scientists List of Czech scientists List of Egyptian scientists List of Estonian scientists List of female scientists List of female scientists before the 20th century List of female scientists in the 20th century List of female scientists in the 21st century List of French scientists List of Indian scientists List of Nepalese scientists List of Persian scientists and scholars List of contemporary Iranian scientists, scholars, and engineers List of Italian scientists List of Jewish scientists and philosophers List of Jewish American chemists List of Muslim scientists Lists of Muslim scientists and scholars List of New Zealand scientists List of Nigerian scientists and scholars List of Pakistani scientists List of Romanian scientists List of Russian scientists List of Serbian scientists List of Swedish scientists By achievement List of Nobel laureates Lists of Nobel laureates See also Academic genealogy History of science and technology List of forms of electricity named after scientists List of science communicators Lists of people in STEM fields
Lists of scientists
[ "Technology" ]
856
[ "Lists of scientists", "Lists of people in STEM fields" ]
60,692
https://en.wikipedia.org/wiki/Prime%20element
In mathematics, specifically in abstract algebra, a prime element of a commutative ring is an object satisfying certain properties similar to the prime numbers in the integers and to irreducible polynomials. Care should be taken to distinguish prime elements from irreducible elements, a concept that is the same in UFDs but not the same in general. Definition An element p of a commutative ring R is said to be prime if it is not the zero element or a unit and whenever p divides ab for some a and b in R, then p divides a or p divides b. With this definition, Euclid's lemma is the assertion that prime numbers are prime elements in the ring of integers. Equivalently, an element p is prime if, and only if, the principal ideal (p) generated by p is a nonzero prime ideal. (Note that in an integral domain, the ideal (0) is a prime ideal, but 0 is an exception in the definition of 'prime element'.) Interest in prime elements comes from the fundamental theorem of arithmetic, which asserts that each nonzero integer can be written in essentially only one way as 1 or −1 multiplied by a product of positive prime numbers. This led to the study of unique factorization domains, which generalize what was just illustrated in the integers. Being prime is relative to which ring an element is considered to be in; for example, 2 is a prime element in Z but it is not in Z[i], the ring of Gaussian integers, since 2 = (1 + i)(1 − i) and 2 does not divide any factor on the right. Connection with prime ideals An ideal I in the ring R (with unity) is prime if the factor ring R/I is an integral domain. Equivalently, I is prime if whenever ab ∈ I then either a ∈ I or b ∈ I. In an integral domain, a nonzero principal ideal is prime if and only if it is generated by a prime element. Irreducible elements Prime elements should not be confused with irreducible elements. In an integral domain, every prime is irreducible but the converse is not true in general. However, in unique factorization domains, or more generally in GCD domains, primes and irreducibles are the same. Examples The following are examples of prime elements in rings: the integers ±2, ±3, ±5, ±7, ±11, ... in the ring Z of integers; Gaussian primes such as 1 + i and 3 in the ring Z[i] of Gaussian integers; irreducible polynomials such as x and x^2 + 1 in R[x], the ring of polynomials over the real numbers R. In the quotient ring Z/6Z, 2 is prime but not irreducible, since 2 = 2 · 4 and neither factor is a unit. In the ring Z × Z of pairs of integers, (1, 0) is prime but not irreducible (one has (1, 0) = (1, 0)(1, 0)). In the ring Z[√−5] of algebraic integers, the element 3 is irreducible but not prime (as 3 divides (2 + √−5)(2 − √−5) = 9 and 3 does not divide any factor on the right). References Notes Sources Section III.3 of Ring theory
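As an editorial aside (the notation here is chosen for illustration and is not taken from the article), the definition and the Gaussian-integer example above can be restated compactly in LaTeX:

\[
p \text{ is prime in } R \iff p \neq 0,\; p \notin R^{\times},\; \text{and } \forall a, b \in R:\; p \mid ab \implies p \mid a \ \text{or}\ p \mid b,
\]
\[
\text{equivalently, } (p) \text{ is a nonzero prime ideal of } R.
\]
\[
\text{In } \mathbb{Z}[i]:\quad 2 = (1+i)(1-i), \qquad 2 \nmid (1+i), \quad 2 \nmid (1-i),
\]
so 2 divides the product but neither factor, and hence 2 is not prime in the Gaussian integers even though it is prime in the ordinary integers.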
Prime element
[ "Mathematics" ]
555
[ "Fields of abstract algebra", "Ring theory" ]
60,693
https://en.wikipedia.org/wiki/Irreducible%20element
In algebra, an irreducible element of an integral domain is a non-zero element that is not invertible (that is, is not a unit), and is not the product of two non-invertible elements. The irreducible elements are the terminal elements of a factorization process; that is, they are the factors that cannot be further factorized. If the irreducible factors of every non-zero non-unit element are uniquely defined, up to the multiplication by a unit, then the integral domain is called a unique factorization domain, but this does not need to happen in general for every integral domain. It was discovered in the 19th century that the rings of integers of some number fields are not unique factorization domains, and, therefore, that some irreducible elements can appear in some factorization of an element and not in other factorizations of the same element. The ignorance of this fact is the main error in many of the wrong proofs of Fermat's Last Theorem that were given during the three centuries between Fermat's statement and Wiles's proof. If D is an integral domain, then an element x is an irreducible element of D if and only if, for all y and z in D, the equation x = yz implies that the ideal generated by x is equal to the ideal generated by y or equal to the ideal generated by z. This equivalence does not hold for general commutative rings, which is why the assumption of the ring having no nonzero zero divisors is commonly made in the definition of irreducible elements. It results also that there are several ways to extend the definition of an irreducible element to an arbitrary commutative ring. Relationship with prime elements Irreducible elements should not be confused with prime elements. (A non-zero non-unit element a in a commutative ring R is called prime if, whenever a divides bc for some b and c in R, then a divides b or a divides c.) In an integral domain, every prime element is irreducible, but the converse is not true in general. The converse is true for unique factorization domains (or, more generally, GCD domains). Moreover, while an ideal generated by a prime element is a prime ideal, it is not true in general that an ideal generated by an irreducible element is an irreducible ideal. However, if D is a GCD domain and x is an irreducible element of D, then as noted above x is prime, and so the ideal generated by x is a prime (hence irreducible) ideal of D. Example In the quadratic integer ring Z[√−5] it can be shown using norm arguments that the number 3 is irreducible. However, it is not a prime element in this ring since, for example, 3 divides (2 + √−5)(2 − √−5) = 9, but 3 does not divide either of the two factors. See also Irreducible polynomial Notes References Ring theory Algebraic properties of elements
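A worked version of the norm argument mentioned above, added here for illustration (it assumes the quadratic integer ring in question is Z[√−5], the standard example; the presentation is not taken verbatim from the article):

\[
N(a + b\sqrt{-5}) = a^2 + 5b^2, \qquad N(xy) = N(x)\,N(y).
\]
\[
N(3) = 9; \text{ if } 3 = xy \text{ with } x, y \text{ non-units, then } N(x) = N(y) = 3,
\]
but \(a^2 + 5b^2 = 3\) has no integer solutions, so 3 is irreducible. On the other hand,
\[
3 \mid (2 + \sqrt{-5})(2 - \sqrt{-5}) = 9, \qquad 3 \nmid (2 \pm \sqrt{-5}),
\]
so 3 is not prime.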
Irreducible element
[ "Mathematics" ]
580
[ "Fields of abstract algebra", "Ring theory" ]
60,710
https://en.wikipedia.org/wiki/Ferrocene
Ferrocene is an organometallic compound with the formula Fe(C5H5)2. The molecule is a complex consisting of two cyclopentadienyl rings sandwiching a central iron atom. It is an orange solid with a camphor-like odor that sublimes above room temperature, and is soluble in most organic solvents. It is remarkable for its stability: it is unaffected by air, water, strong bases, and can be heated to 400 °C without decomposition. In oxidizing conditions it can reversibly react with strong acids to form the ferrocenium cation. Ferrocene and the ferrocenium cation are sometimes abbreviated as Fc and Fc+ respectively. The first reported synthesis of ferrocene was in 1951. Its unusual stability puzzled chemists, and required the development of new theory to explain its formation and bonding. The discovery of ferrocene and its many analogues, known as metallocenes, sparked excitement and led to a rapid growth in the discipline of organometallic chemistry. Geoffrey Wilkinson and Ernst Otto Fischer, both of whom worked on elucidating the structure of ferrocene, later shared the 1973 Nobel Prize in Chemistry for their work on organometallic sandwich compounds. Ferrocene itself has no large-scale applications, but has found more niche uses in catalysis, as a fuel additive, and as a tool in undergraduate education. History Discovery Ferrocene was discovered by accident twice. The first known synthesis may have been made in the late 1940s by unknown researchers at Union Carbide, who tried to pass hot cyclopentadiene vapor through an iron pipe. The vapor reacted with the pipe wall, creating a "yellow sludge" that clogged the pipe. Years later, a sample of the sludge that had been saved was obtained and analyzed by Eugene O. Brimm shortly after he read Kealy and Pauson's article, and was found to consist of ferrocene. The second time was around 1950, when Samuel A. Miller, John A. Tebboth, and John F. Tremaine, researchers at British Oxygen, were attempting to synthesize amines from hydrocarbons and nitrogen in a modification of the Haber process. When they tried to react cyclopentadiene with nitrogen at 300 °C, at atmospheric pressure, they were disappointed to see the hydrocarbon react with some source of iron, yielding ferrocene. While they too observed its remarkable stability, they put the observation aside and did not publish it until after Pauson reported his findings. Kealy and Pauson were later provided with a sample by Miller et al., who confirmed that the products were the same compound. In 1951, Peter L. Pauson and Thomas J. Kealy at Duquesne University attempted to prepare fulvalene (C10H8) by oxidative dimerization of cyclopentadiene (C5H6). To that end, they reacted the Grignard compound cyclopentadienyl magnesium bromide in diethyl ether with ferric chloride as an oxidizer. However, instead of the expected fulvalene, they obtained a light orange powder of "remarkable stability", with the formula C10H10Fe. Determining the structure Pauson and Kealy conjectured that the compound had two cyclopentadienyl groups, each with a single covalent bond from the saturated carbon atom to the iron atom. However, that structure was inconsistent with then-existing bonding models and did not explain the unexpected stability of the compound, and chemists struggled to find the correct structure. The structure was deduced and reported independently by three groups in 1952. Robert Burns Woodward, Geoffrey Wilkinson, and co-workers deduced the structure after observing that the compound was diamagnetic and nonpolar. 
A few months later they described its reactions as being typical of aromatic compounds such as benzene. The name ferrocene was coined by Mark Whiting, a postdoc with Woodward. Ernst Otto Fischer and Wolfgang Pfab also noted ferrocene's diamagnetism and high symmetry. They also synthesized nickelocene and cobaltocene and confirmed that they had the same structure. Fischer described the structure as Doppelkegelstruktur ("double-cone structure"), although the term "sandwich" came to be preferred by British and American chemists. Philip Frank Eiland and Raymond Pepinsky confirmed the structure through X-ray crystallography, and it was later confirmed by NMR spectroscopy. The "sandwich" structure of ferrocene was shockingly novel and led to intensive theoretical studies. Application of molecular orbital theory with the assumption of an Fe2+ centre between two cyclopentadienide anions resulted in the successful Dewar–Chatt–Duncanson model, allowing correct prediction of the geometry of the molecule as well as explaining its remarkable stability. Impact The discovery of ferrocene was considered so significant that Wilkinson and Fischer shared the 1973 Nobel Prize in Chemistry "for their pioneering work, performed independently, on the chemistry of the organometallic, so called sandwich compounds". Structure and bonding Mössbauer spectroscopy indicates that the iron center in ferrocene should be assigned the +2 oxidation state. Each cyclopentadienyl (Cp) ring should then be allocated a single negative charge. Thus ferrocene could be described as iron(II) bis(cyclopentadienide). Each ring has six π-electrons, which makes them aromatic according to Hückel's rule. These π-electrons are then shared with the metal via covalent bonding. Since Fe2+ has six d-electrons, the complex attains an 18-electron configuration, which accounts for its stability. In modern notation, this sandwich structural model of the ferrocene molecule is denoted as Fe(η5-C5H5)2, where η denotes hapticity, the number of atoms through which each ring binds. The carbon–carbon bond distances around each five-membered ring are all 1.40 Å, and all Fe–C bond distances are 2.04 Å. From room temperature down to 164 K, X-ray crystallography yields a monoclinic space group; the cyclopentadienide rings are in a staggered conformation, resulting in a centrosymmetric molecule, with symmetry group D5d. However, below 110 K, ferrocene crystallizes in an orthorhombic crystal lattice in which the Cp rings are ordered and eclipsed, so that the molecule has symmetry group D5h. In the gas phase, electron diffraction and computational studies show that the Cp rings are eclipsed. While ferrocene has no permanent dipole moment at room temperature, between 172.8 and 163.5 K the molecule exhibits an "incommensurate modulation", breaking the D5 symmetry and acquiring an electric dipole. The Cp rings rotate with a low barrier about the Cp(centroid)–Fe–Cp(centroid) axis, as observed by measurements on substituted derivatives of ferrocene using 1H and 13C nuclear magnetic resonance spectroscopy. For example, methylferrocene (CH3C5H4FeC5H5) exhibits a singlet for the C5H5 ring. In solution, and at room temperature, eclipsed D5h ferrocene was determined to dominate over the staggered D5d conformer, as suggested by both Fourier-transform infrared spectroscopy and DFT calculations. Synthesis Early methods The first reported syntheses of ferrocene were nearly simultaneous. Pauson and Kealy synthesised ferrocene using iron(III) chloride and cyclopentadienyl magnesium bromide. 
A redox reaction produces iron(II) chloride. The formation of fulvalene, the intended outcome, does not occur. Another early synthesis of ferrocene was by Miller et al., who treated metallic iron with gaseous cyclopentadiene at elevated temperature. An approach using iron pentacarbonyl was also reported. Fe(CO)5 + 2 C5H6 → Fe(C5H5)2 + 5 CO + H2 Via alkali cyclopentadienide More efficient preparative methods are generally a modification of the original transmetalation sequence using either commercially available sodium cyclopentadienide or freshly cracked cyclopentadiene deprotonated with potassium hydroxide and reacted with anhydrous iron(II) chloride in ethereal solvents. Modern modifications of Pauson and Kealy's original Grignard approach are known: Using sodium cyclopentadienide:   2 NaC5H5   +   FeCl2   →   Fe(C5H5)2   +   2 NaCl Using freshly-cracked cyclopentadiene:   FeCl2·4H2O   +   2 C5H6   +   2 KOH   →   Fe(C5H5)2   +   2 KCl   +   6 H2O Using an iron(II) salt with a Grignard reagent:   2 C5H5MgBr   +   FeCl2   →   Fe(C5H5)2   +   2 MgBrCl Even some amine bases (such as diethylamine) can be used for the deprotonation, though the reaction proceeds more slowly than when using stronger bases: 2 C5H6   +   2 (CH3CH2)2NH   +   FeCl2   →   Fe(C5H5)2   +   2 (CH3CH2)2NH2Cl Direct transmetalation can also be used to prepare ferrocene from some other metallocenes, such as manganocene: FeCl2   +   Mn(C5H5)2   →   MnCl2   +   Fe(C5H5)2 Properties Ferrocene is an air-stable orange solid with a camphor-like odor. As expected for a symmetric, uncharged species, ferrocene is soluble in normal organic solvents, such as benzene, but is insoluble in water. It is stable to temperatures as high as 400 °C. Ferrocene readily sublimes, especially upon heating in a vacuum. Its vapor pressure is about 1 Pa at 25 °C, 10 Pa at 50 °C, 100 Pa at 80 °C, 1000 Pa at 116 °C, and 10,000 Pa (nearly 0.1 atm) at 162 °C. Reactions With electrophiles Ferrocene undergoes many reactions characteristic of aromatic compounds, enabling the preparation of substituted derivatives. A common undergraduate experiment is the Friedel–Crafts reaction of ferrocene with acetic anhydride (or acetyl chloride) in the presence of phosphoric acid as a catalyst. Under conditions for a Mannich reaction, ferrocene gives N,N-dimethylaminomethylferrocene. Ferrocene can itself be oxidized to the ferrocenium cation (Fc+); the ferrocene/ferrocenium couple is often used as a reference in electrochemistry. It is an aromatic substance and undergoes substitution reactions rather than addition reactions on the cyclopentadienyl ligands. For example, Friedel-Crafts acylation of ferrocene with acetic anhydride yields acetylferrocene just as acylation of benzene yields acetophenone under similar conditions. Vilsmeier-Haack reaction (formylation) using formylanilide and phosphorus oxychloride gives ferrocenecarboxaldehyde. Diformylation does not occur readily, showing the electronic communication between the two rings. Protonation of ferrocene allows isolation of [Cp2FeH]PF6. In the presence of aluminium chloride, Me2NPCl2 and ferrocene react to give ferrocenyl dichlorophosphine, whereas treatment with phenyldichlorophosphine under similar conditions forms P,P-diferrocenyl-P-phenyl phosphine. Ferrocene reacts with P4S10 to form a diferrocenyl-dithiadiphosphetane disulfide. Lithiation Ferrocene reacts with butyllithium to give 1,1′-dilithioferrocene, which is a versatile nucleophile. 
In contrast, tert-butyllithium produces monolithioferrocene. Redox chemistry Ferrocene undergoes a one-electron oxidation at around 0.4 V versus a saturated calomel electrode (SCE), becoming ferrocenium. This reversible oxidation has been used as standard in electrochemistry as Fc+/Fc = 0.64 V versus the standard hydrogen electrode, however other values have been reported. Ferrocenium tetrafluoroborate is a common reagent. The remarkably reversible oxidation-reduction behaviour has been extensively used to control electron-transfer processes in electrochemical and photochemical systems. Substituents on the cyclopentadienyl ligands alter the redox potential in the expected way: electron-withdrawing groups such as a carboxylic acid shift the potential in the anodic direction (i.e. more positive), whereas electron-releasing groups such as methyl groups shift the potential in the cathodic direction (more negative). Thus, decamethylferrocene is much more easily oxidised than ferrocene and can even be oxidised to the corresponding dication. Ferrocene is often used as an internal standard for calibrating redox potentials in non-aqueous electrochemistry. Stereochemistry of substituted ferrocenes Disubstituted ferrocenes can exist as either 1,2-, 1,3- or 1,1′- isomers, none of which are interconvertible. Ferrocenes that are asymmetrically disubstituted on one ring are chiral – for example [CpFe(EtC5H3Me)]. This planar chirality arises despite no single atom being a stereogenic centre. The substituted ferrocene shown at right (a 4-(dimethylamino)pyridine derivative) has been shown to be effective when used for the kinetic resolution of racemic secondary alcohols. Several approaches have been developed to asymmetrically 1,1′-functionalise the ferrocene. Applications of ferrocene and its derivatives Ferrocene and its numerous derivatives have no large-scale applications, but have many niche uses that exploit the unusual structure (ligand scaffolds, pharmaceutical candidates), robustness (anti-knock formulations, precursors to materials), and redox properties (reagents and redox standards). Ligand scaffolds Chiral ferrocenyl phosphines are employed as ligands for transition-metal catalyzed reactions. Some of them have found industrial applications in the synthesis of pharmaceuticals and agrochemicals. For example, the diphosphine 1,1′-bis(diphenylphosphino)ferrocene (dppf) is a valued ligand for palladium-coupling reactions, and the Josiphos ligands are useful for hydrogenation catalysis. They are named after the technician who made the first one, Josi Puleo. Fuel additives Ferrocene and its derivatives are antiknock agents used in the fuel for petrol engines. They are safer than previously used tetraethyllead. Petrol additive solutions containing ferrocene can be added to unleaded petrol to enable its use in vintage cars designed to run on leaded petrol. The iron-containing deposits formed from ferrocene can form a conductive coating on spark plug surfaces. Ferrocene polyglycol copolymers, prepared by effecting a polycondensation reaction between a ferrocene derivative and a substituted dihydroxy alcohol, have promise as a component of rocket propellants. These copolymers provide rocket propellants with heat stability, serving as a propellant binder and controlling propellant burn rate. Ferrocene has been found to be effective at reducing smoke and sulfur trioxide produced when burning coal. 
The addition by any practical means, impregnating the coal or adding ferrocene to the combustion chamber, can significantly reduce the amount of these undesirable byproducts, even with a small amount of the metal cyclopentadienyl compound. Pharmaceuticals Ferrocene derivatives have been investigated as drugs, with one compound approved for use in the USSR in the 1970s as an iron supplement, though it is no longer marketed today. Only one drug has entered clinical trials in recent years, Ferroquine (7-chloro-N-(2-((dimethylamino)methyl)ferrocenyl)quinolin-4-amine), an antimalarial, which has reached Phase IIb trials. Ferrocene-containing polymer-based drug delivery systems have been investigated. The anticancer activity of ferrocene derivatives was first investigated in the late 1970s, when derivatives bearing amine or amide groups were tested against lymphocytic leukemia. Some ferrocenium salts exhibit anticancer activity, but no compound has seen evaluation in the clinic. Ferrocene derivatives have strong inhibitory activity against human lung cancer cell line A549, colorectal cancer cell line HCT116, and breast cancer cell line MCF-7. An experimental drug was reported which is a ferrocenyl version of tamoxifen. The idea is that the tamoxifen will bind to the estrogen binding sites, resulting in cytotoxicity. Ferrocifens are exploited for cancer applications by a French biotech, Feroscan, founded by Pr. Gerard Jaouen. Solid rocket propellant Ferrocene and related derivatives are used as powerful burn rate catalysts in ammonium perchlorate composite propellant. Derivatives and variations Ferrocene analogues can be prepared with variants of cyclopentadienyl. For example, bisindenyliron and bisfluorenyliron. Carbon atoms can be replaced by heteroatoms as illustrated by Fe(η5-C5Me5)(η5-P5) and Fe(η5-C5H5)(η5-C4H4N) ("azaferrocene"). Azaferrocene arises from decarbonylation of Fe(η5-C5H5)(CO)2(η1-pyrrole) in cyclohexane. This compound on boiling under reflux in benzene is converted to ferrocene. Because of the ease of substitution, many structurally unusual ferrocene derivatives have been prepared. For example, the penta(ferrocenyl)cyclopentadienyl ligand, features a cyclopentadienyl anion derivatized with five ferrocene substituents. In hexaferrocenylbenzene, C6[(η5-C5H4)Fe(η5-C5H5)]6, all six positions on a benzene molecule have ferrocenyl substituents (R). X-ray diffraction analysis of this compound confirms that the cyclopentadienyl ligands are not co-planar with the benzene core but have alternating dihedral angles of +30° and −80°. Due to steric crowding the ferrocenyls are slightly bent with angles of 177° and have elongated C-Fe bonds. The quaternary cyclopentadienyl carbon atoms are also pyramidalized. Also, the benzene core has a chair conformation with dihedral angles of 14° and displays bond length alternation between 142.7 pm and 141.1 pm, both indications of steric crowding of the substituents. The synthesis of hexaferrocenylbenzene has been reported using Negishi coupling of hexaiodidobenzene and diferrocenylzinc, using tris(dibenzylideneacetone)dipalladium(0) as catalyst, in tetrahydrofuran: The yield is only 4%, which is further evidence consistent with substantial steric crowding around the arene core. Materials chemistry Ferrocene, a precursor to iron nanoparticles, can be used as a catalyst for the production of carbon nanotubes. 
Vinylferrocene can be converted to polyvinylferrocene (PVFc), a ferrocenyl version of polystyrene (the phenyl groups are replaced with ferrocenyl groups). Another polyferrocene which can be formed is poly(2-(methacryloyloxy)ethyl ferrocenecarboxylate), PFcMA. In addition to using organic polymer backbones, these pendant ferrocene units have been attached to inorganic backbones such as polysiloxanes, polyphosphazenes, and polyphosphinoboranes, (–PH(R)–BH2–)n, and the resulting materials exhibit unusual physical and electronic properties relating to the ferrocene/ferrocenium redox couple. Both PVFc and PFcMA have been tethered onto silica wafers and the wettability measured when the polymer chains are uncharged and when the ferrocene moieties are oxidised to produce positively charged groups. The contact angle with water on the PFcMA-coated wafers was 70° smaller following oxidation, while in the case of PVFc the decrease was 30°, and the switching of wettability is reversible. In the PFcMA case, the effect of lengthening the chains, and hence introducing more ferrocene groups, is significantly larger reductions in the contact angle upon oxidation. See also Josiphos ligands References External links Ferrocene at The Periodic Table of Videos (University of Nottingham) NIOSH Pocket Guide to Chemical Hazards (Centers for Disease Control and Prevention) Antiknock agents Sandwich compounds Cyclopentadienyl complexes Substances discovered in the 1950s
Ferrocene
[ "Chemistry" ]
4,617
[ "Organometallic chemistry", "Cyclopentadienyl complexes", "Sandwich compounds" ]
60,731
https://en.wikipedia.org/wiki/Anachronism
An anachronism (from the Greek , 'against' and , 'time') is a chronological inconsistency in some arrangement, especially a juxtaposition of people, events, objects, language terms and customs from different time periods. The most common type of anachronism is an object misplaced in time, but it may be a verbal expression, a technology, a philosophical idea, a musical style, a material, a plant or animal, a custom, or anything else associated with a particular period that is placed outside its proper temporal domain. An anachronism may be either intentional or unintentional. Intentional anachronisms may be introduced into a literary or artistic work to help a contemporary audience engage more readily with a historical period. Anachronism can also be used intentionally for purposes of rhetoric, propaganda, comedy, or shock. Unintentional anachronisms may occur when a writer, artist, or performer is unaware of differences in technology, terminology and language, customs and attitudes, or even fashions between different historical periods and eras. Types The metachronism-prochronism contrast is nearly synonymous with parachronism-anachronism, and involves postdating-predating respectively. Parachronism A parachronism (from the Greek , "on the side", and , "time") postdates. It is anything that appears in a time period in which it is not normally found (though not sufficiently out of place as to be impossible). This may be an object, idiomatic expression, technology, philosophical idea, musical style, material, custom, or anything else so closely bound to a particular time period as to seem strange when encountered in a later era. They may be objects or ideas that were once common but are now considered rare or inappropriate. They can take the form of obsolete technology or outdated fashion or idioms. Prochronism A prochronism (from the Greek , "before", and , "time") predates. It is an impossible anachronism which occurs when an object or idea has not yet been invented when the situation takes place, and therefore could not have possibly existed at the time. A prochronism may be an object not yet developed, a verbal expression that had not yet been coined, a philosophy not yet formulated, a breed of animal not yet evolved or bred, or use of a technology that had not yet been created. Metachronism A metachronism (from the Greek , "after", and , "time") postdates. It is the use of older cultural artifacts in modern settings which may seem inappropriate. For example, it could be considered metachronistic for a modern-day person to be depicted wearing a top hat or writing with a quill. Politically motivated anachronism Works of art and literature promoting a political, nationalist or revolutionary cause may use anachronism to depict an institution or custom as being more ancient than it actually is, or otherwise intentionally blur the distinctions between past and present. For example, the 19th-century Romanian painter Constantin Lecca depicts the peace agreement between Ioan Bogdan Voievod and Radu Voievod—two leaders in Romania's 16th-century history—with the flags of Moldavia (blue-red) and of Wallachia (yellow-blue) seen in the background. These flags date only from the 1830s: anachronism promotes legitimacy for the unification of Moldavia and Wallachia into the Kingdom of Romania at the time the painting was made. 
The Russian artist Vasily Vereshchagin, in his painting Suppression of the Indian Revolt by the English (), depicts the aftermath of the Indian Rebellion of 1857, when mutineers were executed by being blown from guns. In order to make the argument that the method of execution would again be utilized by the British if another rebellion broke out in India, Vereshchagin depicted the British soldiers conducting the executions in late 19th-century uniforms. Art and literature Anachronism is used especially in works of imagination that rest on a historical basis. Anachronisms may be introduced in many ways: for example, in the disregard of the different modes of life and thought that characterize different periods, or in ignorance of the progress of the arts and sciences and other facts of history. They vary from glaring inconsistencies to scarcely perceptible misrepresentation. Anachronisms may be the unintentional result of ignorance, or may be a deliberate aesthetic choice. Sir Walter Scott justified the use of anachronism in historical literature: "It is necessary, for exciting interest of any kind, that the subject assumed should be, as it were, translated into the manners as well as the language of the age we live in." However, as fashions, conventions and technologies move on, such attempts to use anachronisms to engage an audience may have quite the reverse effect, as the details in question are increasingly recognized as belonging neither to the historical era being represented, nor to the present, but to the intervening period in which the artwork was created. "Nothing becomes obsolete like a period vision of an older period", writes Anthony Grafton; "Hearing a mother in a historical movie of the 1940s call out 'Ludwig! Ludwig van Beethoven! Come in and practice your piano now!' we are jerked from our suspension of disbelief by what was intended as a means of reinforcing it, and plunged directly into the American bourgeois world of the filmmaker." It is only since the beginning of the 19th century that anachronistic deviations from historical reality have jarred on a general audience. C. S. Lewis wrote: Anachronisms abound in the works of Raphael and Shakespeare, as well as in those of less celebrated painters and playwrights of earlier times. Carol Meyers says that anachronisms in ancient texts can be used to better understand the stories by asking what the anachronism represents. Repeated anachronisms and historical errors can become an accepted part of popular culture, such as the belief that Roman legionaries wore leather armor. Comical anachronism Comedy fiction set in the past may use anachronism for humorous effect. Comedic anachronism can be used to make serious points about both historical and modern society, such as drawing parallels to political or social conventions. Future anachronism Even with careful research, science fiction writers risk anachronism as their works age because they cannot predict all political, social, and technological change. For example, many books, television shows, radio productions and films nominally set in the mid-21st century or later refer to the Soviet Union, to Saint Petersburg in Russia as Leningrad, to the continuing struggle between the Eastern and Western Blocs and to divided Germany and divided Berlin. Star Trek has suffered from future anachronisms; instead of "retconning" these errors, the 2009 film retained them for consistency with older franchises. 
Buildings or natural features, such as the World Trade Center in New York City, can become out of place once they disappear, with some works having been edited to remove the World Trade Center to avoid this situation. Futuristic technology may appear alongside technology which would be obsolete by the time in which the story is set. For example, in the stories of Robert A. Heinlein, interplanetary space travel coexists with calculation using slide rules. Language anachronism Language anachronisms in novels and films are quite common, both intentional and unintentional. Intentional anachronisms inform the audience more readily about a film set in the past. In this regard, language and pronunciation change so fast that most modern people (even many scholars) would find it difficult, or even impossible, to understand a film with dialogue in 15th-century English; thus, audiences willingly accept characters speaking an updated language, and modern slang and figures of speech are often used in these films. Unconscious anachronism Unintentional anachronisms may occur even in what are intended as wholly objective and accurate records or representations of historic artifacts and artworks, because the perspectives of historical recorders are conditioned by the assumptions and practices of their own times, in a form of cultural bias. One example is the attribution of historically inaccurate beards to various medieval tomb effigies and figures in stained glass in records made by English antiquaries of the late 16th and early 17th centuries. Working in an age in which beards were in fashion and widespread, the antiquaries seem to have unconsciously projected the fashion back into an era in which they were rare. In academia In historical writing, the most common type of anachronism is the adoption of the political, social or cultural concerns and assumptions of one era to interpret or evaluate the events and actions of another. The anachronistic application of present-day perspectives to comment on the historical past is sometimes described as presentism. Empiricist historians, working in the traditions established by Leopold von Ranke in the 19th century, regard this as a great error, and a trap to be avoided. Arthur Marwick has argued that "a grasp of the fact that past societies are very different from our own, and ... very difficult to get to know" is an essential and fundamental skill of the professional historian; and that "anachronism is still one of the most obvious faults when the unqualified (those expert in other disciplines, perhaps) attempt to do history". Detection of forgery The ability to identify anachronisms may be employed as a critical and forensic tool to demonstrate the fraudulence of a document or artifact purporting to be from an earlier time. Anthony Grafton discusses, for example, the work of the 3rd-century philosopher Porphyry, of Isaac Casaubon (1559–1614), and of Richard Reitzenstein (1861–1931), all of whom succeeded in exposing literary forgeries and plagiarisms, such as those included in the "Hermetic Corpus", through – among other techniques – the recognition of anachronisms. The detection of anachronisms is an important element within the scholarly discipline of diplomatics, the critical analysis of the forms and language of documents, developed by the Maurist scholar Jean Mabillon (1632–1707) and his successors René-Prosper Tassin (1697–1777) and Charles-François Toustain (1700–1754). 
The philosopher and reformer Jeremy Bentham wrote at the beginning of the 19th century: Examples are: The exposure by Lorenzo Valla in 1440 of the so-called Donation of Constantine, a decree purportedly issued by the Emperor Constantine the Great in either 315 or 317 AD, as a later forgery, depended to a considerable degree on the identification of anachronisms, such as references to the city of Constantinople (a name not in fact bestowed until 330 AD). A large number of apparent anachronisms in the Book of Mormon have served to convince critics that the book was written in the 19th century, and not, as its adherents claim, in pre-Columbian America. The use of 19th- and 20th-century anti-semitic terminology demonstrates that the purported "Franklin Prophecy" (attributed to Benjamin Franklin, who died in 1790) is a forgery. The "William Lynch speech", an address, supposedly delivered in 1712, on the control of slaves in Virginia, is now considered to be a 20th-century forgery, partly on account of its use of anachronistic terms such as "program" and "refueling". See also Anachronisms in the Book of Mormon Anatopism Evolutionary anachronism Invented traditions List of stories set in a future now past Retrofuturism Skeuomorph Society for Creative Anachronism Steampunk Tiffany Problem Whig history References Bibliography External links
Anachronism
[ "Physics" ]
2,451
[ "Spacetime", "Anachronism", "Physical quantities", "Time" ]
60,744
https://en.wikipedia.org/wiki/Cubic%20zirconia
Cubic zirconia (abbreviated CZ) is the cubic crystalline form of zirconium dioxide (ZrO2). The synthesized material is hard and usually colorless, but may be made in a variety of different colors. It should not be confused with zircon, which is a zirconium silicate (ZrSiO4). It is sometimes erroneously called cubic zirconium. Because of its low cost, durability, and close visual likeness to diamond, synthetic cubic zirconia has remained the most gemologically and economically important competitor for diamonds since commercial production began in 1976. Its main competitor as a synthetic gemstone is a more recently cultivated material, synthetic moissanite. Technical aspects Cubic zirconia (also known as cubic zircon) is crystallographically isometric, an important attribute of a would-be diamond simulant. During synthesis zirconium oxide naturally forms monoclinic crystals, which are stable under normal atmospheric conditions. A stabilizer is required for cubic crystals (taking on the fluorite structure) to form, and remain stable at ordinary temperatures; typically this is either yttrium or calcium oxide, the amount of stabilizer used depending on the many recipes of individual manufacturers. Therefore, the physical and optical properties of synthesized CZ vary, all values being ranges. It is a dense substance, with a density between 5.6 and 6.0 g/cm3—about 1.65 times that of diamond. Cubic zirconia is relatively hard, 8–8.5 on the Mohs scale—slightly harder than most semi-precious natural gems. Its refractive index is high at 2.15–2.18 (compared to 2.42 for diamonds) and its luster is adamantine. Its dispersion is very high at 0.058–0.066, exceeding that of diamond (0.044). Cubic zirconia has no cleavage and exhibits a conchoidal fracture. Because of its high hardness, it is generally considered brittle. Under shortwave UV cubic zirconia typically fluoresces a yellow, greenish yellow or "beige". Under longwave UV the effect is greatly diminished, with a whitish glow sometimes being seen. Colored stones may show a strong, complex rare earth absorption spectrum. History Discovered in 1892, the yellowish monoclinic mineral baddeleyite is a natural form of zirconium oxide. The high melting point of zirconia (2750 °C or 4976 °F) hinders controlled growth of single crystals. However, stabilization of cubic zirconium oxide had been realized early on, with the synthetic product stabilized zirconia introduced in 1929. Although cubic, it was in the form of a polycrystalline ceramic: it was used as a refractory material, highly resistant to chemical and thermal attack (up to 2540 °C or 4604 °F). In 1937, German mineralogists M. V. Stackelberg and K. Chudoba discovered naturally occurring cubic zirconia in the form of microscopic grains included in metamict zircon. This was thought to be a byproduct of the metamictization process, but the two scientists did not think the mineral important enough to give it a formal name. The discovery was confirmed through X-ray diffraction, proving the existence of a natural counterpart to the synthetic product. As with the majority of grown diamond substitutes, the idea of producing single-crystal cubic zirconia arose in the minds of scientists seeking a new and versatile material for use in lasers and other optical applications. Its production eventually exceeded that of earlier synthetics, such as synthetic strontium titanate, synthetic rutile, YAG (yttrium aluminium garnet) and GGG (gadolinium gallium garnet). 
Some of the earliest research into controlled single-crystal growth of cubic zirconia occurred in 1960s France, much work being done by Y. Roulin and R. Collongues. This technique involved molten zirconia being contained within a thin shell of still-solid zirconia, with crystal growth from the melt. The process was named cold crucible, an allusion to the system of water cooling used. Though promising, these attempts yielded only small crystals. Later, Soviet scientists under V. V. Osiko in the Laser Equipment Laboratory at the Lebedev Physical Institute in Moscow perfected the technique, which was then named skull crucible (an allusion either to the shape of the water-cooled container or to the form of crystals sometimes grown). They named the jewel Fianit after the institute's name FIAN (Physical Institute of the Academy of Science), but the name was not used outside of the USSR. This was known at the time as the Institute of Physics at the Russian Academy of Science. Their breakthrough was published in 1973, and commercial production began in 1976. In 1977, cubic zirconia began to be mass-produced in the jewelry marketplace by the Ceres Corporation, with crystals stabilized with 94% yttria. Other major producers as of 1993 include Taiwan Crystal Company Ltd, Swarovski and ICT inc. By 1980, annual global production had reached 60 million carats (12 tonnes) and continued to increase, with production reaching around 400 tonnes per year in 1998. Because the natural form of cubic zirconia is so rare, all cubic zirconia used in jewelry has been synthesized, one method of which was patented by Josep F. Wenckus & Co. in 1997. Synthesis The skull-melting method refined by Josep F. Wenckus and coworkers in 1997 remains the industry standard. This is largely due to the process allowing for temperatures of over 3000  °C to be achieved, lack of contact between crucible and material as well as the freedom to choose any gas atmosphere. Primary downsides to this method include the inability to predict the size of the crystals produced and it is impossible to control the crystallization process through temperature changes. The apparatus used in this process consists of a cup-shaped crucible surrounded by radio frequency-activated (RF-activated) copper coils and a water-cooling system. Zirconium dioxide thoroughly mixed with a stabilizer (normally 10% yttrium oxide) is fed into a cold crucible. Metallic chips of either zirconium or the stabilizer are introduced into the powder mix in a compact pile manner. The RF generator is switched on and the metallic chips quickly start heating up and readily oxidize into more zirconia. Consequently, the surrounding powder heats up by thermal conduction, begins melting and, in turn, becomes electroconductive, and thus it begins to heat up via the RF generator as well. This continues until the entire product is molten. Due to the cooling system surrounding the crucible, a thin shell of sintered solid material is formed. This causes the molten zirconia to remain contained within its own powder which prevents it from being contaminated from the crucible and reduces heat loss. The melt is left at high temperatures for some hours to ensure homogeneity and ensure that all impurities have evaporated. Finally, the entire crucible is slowly removed from the RF coils to reduce the heating and let it slowly cool down (from bottom to top). 
The rate at which the crucible is removed from the RF coils is chosen as a function of the stability of crystallization dictated by the phase transition diagram. This provokes the crystallization process to begin and useful crystals begin to form. Once the crucible has been completely cooled to room temperature, the resulting crystals are multiple elongated-crystalline blocks. This shape is dictated by a concept known as crystal degeneration according to Tiller. The size and diameter of the obtained crystals are a function of the cross-sectional area of the crucible, volume of the melt and composition of the melt. The diameter of the crystals is heavily influenced by the concentration of Y2O3 stabilizer. Phase relations in zirconia solid solutions As seen on the phase diagram, the cubic phase will crystallize first as the solution is cooled down no matter the concentration of Y2O3. If the concentration of Y2O3 is not high enough the cubic structure will start to break down into the tetragonal state which will then break down into a monoclinic phase. If the concentration of Y2O3 is between 2.5% and 5% the resulting product will be PSZ (partially stabilized zirconia), while monophasic cubic crystals will form from around 8–40% (a small illustrative sketch of these composition ranges is given at the end of this section). Below 14%, crystals grown at low growth rates tend to be opaque, indicating partial phase separation in the solid solution (likely due to diffusion in the crystals remaining in the high temperature region for a longer time). Above this threshold crystals tend to remain clear at reasonable growth rates and maintain good annealing conditions. Doping Because of cubic zirconia's isomorphic capacity, it can be doped with several elements to change the color of the crystal. A list of specific dopants and colors produced by their addition can be seen below. Primary growth defects The vast majority of YCZ (yttrium bearing cubic zirconia) crystals are clear with high optical perfection and with gradients of the refractive index lower than . However some samples contain defects with the most characteristic and common ones listed below. Growth striations: These are located perpendicular to the growth direction of the crystal and are caused mainly by either fluctuations in the crystal growth rate or the non-congruent nature of liquid-solid transition, thus leading to non-uniform distribution of Y2O3. Light-scattering phase inclusions: Caused by contaminants in the crystal (primarily precipitates of silicates or aluminates of yttrium), typically of magnitude 0.03-10 μm. Mechanical stresses: Typically caused by the high temperature gradients of the growth and cooling processes, causing the crystal to form with internal mechanical stresses acting on it. This causes refractive index values of up to , although the effect of this can be reduced by annealing at 2100 °C followed by a slow enough cooling process. Dislocations: Similar to mechanical stresses, dislocations can be greatly reduced by annealing. Uses outside jewelry Due to its optical properties yttrium cubic zirconia (YCZ) has been used for windows, lenses, prisms, filters and laser elements. Particularly in the chemical industry it is used as window material for the monitoring of corrosive liquids due to its chemical stability and mechanical toughness. YCZ has also been used as a substrate for semiconductor and superconductor films in similar industries. 
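Returning to the phase relations described above, the following is a minimal, illustrative sketch (not a validated materials model) that maps a Y2O3 stabilizer concentration onto the phase behaviour quoted in this article. The function name, the example concentrations, and the treatment of ranges the text does not cover are all assumptions.

// Illustrative mapping of Y2O3 stabilizer concentration (percent, as quoted above)
// to the phase behaviour described in the text. Thresholds come from the article;
// behaviour between the quoted ranges is not specified there and is flagged as such.
#include <cstdio>

const char* zirconia_phase(double y2o3_percent) {
    if (y2o3_percent < 2.5)   return "insufficiently stabilized: cubic breaks down toward tetragonal/monoclinic";
    if (y2o3_percent <= 5.0)  return "PSZ (partially stabilized zirconia)";
    if (y2o3_percent < 8.0)   return "between quoted ranges (not specified in the text)";
    if (y2o3_percent <= 40.0) return "monophasic cubic (opacity reported below ~14% at low growth rates)";
    return "above quoted range (not specified in the text)";
}

int main() {
    for (double c : {1.0, 4.0, 10.0, 20.0}) {
        std::printf("%4.1f%% Y2O3 -> %s\n", c, zirconia_phase(c));
    }
    return 0;
}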
Mechanical properties of partially stabilized zirconia (high hardness and shock resistance, low friction coefficient, high chemical and thermal resistance as well as high wear and tear resistance) allow it to be used as a very particular building material, especially in the bio-engineering industry: It has been used to make reliable, super-sharp medical scalpels that are compatible with biological tissue and hold an edge much smoother than one made of steel. Innovations In recent years manufacturers have sought ways of distinguishing their product by supposedly "improving" cubic zirconia. Coating finished cubic zirconia with a film of diamond-like carbon (DLC) is one such innovation, a process using chemical vapor deposition. The resulting material is purportedly harder, more lustrous and more like diamond overall. The coating is thought to quench the excess fire of cubic zirconia, while improving its refractive index, thus making it appear more like diamond. Additionally, because of the high percentage of diamond bonds in the amorphous diamond coating, the finished simulant will show a positive diamond signature in Raman spectra. Another technique first applied to quartz and topaz has also been adapted to cubic zirconia: An iridescent effect created by vacuum-sputtering onto finished stones an extremely thin layer of a precious metal (typically gold), or certain metal oxides, metal nitrides, or other coatings. This material is marketed as "mystic" by many dealers. Unlike diamond-like carbon and other hard synthetic ceramic coatings, the iridescent effect made with precious metal coatings is not durable, due to their extremely low hardness and poor abrasion wear properties, compared to the remarkably durable cubic zirconia substrate. Cubic zirconia vis-à-vis diamond Key features of cubic zirconia distinguish it from diamond: Hardness: cubic zirconia has a rating of approximately 8 on Mohs hardness scale vs. a rating of 10 for diamond. This may cause dull and rounded edges in CZ facets; the edges of diamond facets are much sharper by comparison. Furthermore, diamond rarely shows polish marks, and those which are apparent are oriented in different directions on adjoining facets, whereas CZ shows marks in the same direction of the polish throughout. Specific gravity: the density of cubic zirconia is approximately 1.7 times that of diamond. This allows gemologists to differentiate the two substances by weight alone. This property can also be exploited, for example, by dropping the stones in a heavy liquid and comparing their relative rates of descent: diamond will sink more slowly than CZ. Refractive index: cubic zirconia has a refractive index of 2.15–2.18, compared to a diamond's 2.42. This has led to the development of other immersion techniques for identification. In these methods, stones with refractive indices higher than that of the liquid used will have dark borders around the girdle and light facet edges whereas those with indices lower than the liquid will have light borders around the girdle and dark facet junctions. Dispersion: very high at 0.058–0.066, exceeding a diamond's 0.044. Cut: Cubic zirconia gemstones can be cut differently than diamonds: The facet edges can be rounded or "smooth". Color: only the rarest of diamonds are truly colorless, most having a tinge of yellow or brown to some extent. A cubic zirconia is often entirely colorless: equivalent to a perfect "D" on diamond's color grading scale. 
That said, desirable colors of cubic zirconia can be produced including near colorless, yellow, pink, purple, green, and even multicolored. Thermal conductivity: Cubic zirconia is a thermal insulator whereas diamond is the most powerful thermal conductor. This provides the basis for Wenckus’ canonical identification method, the industry standard. Effects on the diamond market Cubic zirconia, as a diamond simulant and jewel competitor, can potentially reduce demand for conflict diamonds, and impact the controversy surrounding the rarity and value of diamonds. Regarding value, the paradigm that diamonds are costly due to their rarity and visual beauty has been replaced by an artificial rarity attributed to the price-fixing practices of the De Beers Company, which held a monopoly on the market from the 1870s to the early 2000s. The company pleaded guilty to these charges in an Ohio court on 13 July 2004. However, while De Beers has less market power, the price of diamonds continues to increase due to the demand in emerging markets such as India and China. The emergence of artificial stones such as cubic zirconia with optical properties similar to diamonds could be an alternative for jewelry buyers given their lower price and noncontroversial history. An issue closely related to monopoly is the emergence of conflict diamonds. The Kimberley Process (KP) was established to deter the illicit trade of diamonds that fund civil wars in Angola and Sierra Leone. However, the KP is not as effective in decreasing the number of conflict diamonds reaching the European and American markets. Its definition does not include forced labor conditions or human rights violations. A 2015 study from the Enough Project showed that groups in the Central African Republic have reaped between US$3 million and US$6 million annually from conflict diamonds. UN reports show that more than US$24 million in conflict diamonds have been smuggled since the establishment of the KP. Diamond simulants have become an alternative to boycott the funding of unethical practices. Terms such as "Eco-friendly Jewelry" define them as being of conflict-free origin and environmentally sustainable. However, concerns from mining countries such as the Democratic Republic of Congo are that a boycott on purchases of diamonds would only worsen their economies. According to the Ministry of Mines in Congo, 10% of its population relies on the income from diamonds. Therefore, cubic zirconia is a short-term alternative to reduce conflict, but a long-term solution would be to establish a more rigorous system of identifying the origin of these stones. See also Diamond Diamond simulant Shelby Gem Factory Synthetic diamond Yttria-stabilized zirconia References Further reading Crystals Diamond simulants Gemstones Refractory materials Synthetic minerals Zirconium dioxide Fluorite crystal structure fr:Zircone
Cubic zirconia
[ "Physics", "Chemistry", "Materials_science" ]
3,533
[ "Refractory materials", "Synthetic materials", "Materials", "Crystallography", "Crystals", "Gemstones", "Synthetic minerals", "Matter" ]
60,758
https://en.wikipedia.org/wiki/Box%E2%80%93Muller%20transform
The Box–Muller transform, by George Edward Pelham Box and Mervin Edgar Muller, is a random number sampling method for generating pairs of independent, standard, normally distributed (zero expectation, unit variance) random numbers, given a source of uniformly distributed random numbers. The method was first mentioned explicitly by Raymond E. A. C. Paley and Norbert Wiener in their 1934 treatise on Fourier transforms in the complex domain. Given the status of these latter authors and the widespread availability and use of their treatise, it is almost certain that Box and Muller were well aware of its contents. The Box–Muller transform is commonly expressed in two forms. The basic form as given by Box and Muller takes two samples from the uniform distribution on the interval (0, 1) and maps them to two standard, normally distributed samples. The polar form takes two samples from a different interval, [−1, +1], and maps them to two normally distributed samples without the use of sine or cosine functions. The Box–Muller transform was developed as a more computationally efficient alternative to the inverse transform sampling method. The ziggurat algorithm gives a more efficient method for scalar processors (e.g. old CPUs), while the Box–Muller transform is superior for processors with vector units (e.g. GPUs or modern CPUs). Basic form Suppose U1 and U2 are independent samples chosen from the uniform distribution on the unit interval (0, 1). Let Z0 = √(−2 ln U1) cos(2π U2) and Z1 = √(−2 ln U1) sin(2π U2). Then Z0 and Z1 are independent random variables with a standard normal distribution. The derivation is based on a property of a two-dimensional Cartesian system, where X and Y coordinates are described by two independent and normally distributed random variables: the random variables for R² and Θ (shown above) in the corresponding polar coordinates are also independent and can be expressed as R² = −2 ln U1 and Θ = 2π U2. Because R² is the square of the norm of the standard bivariate normal variable (X, Y), it has the chi-squared distribution with two degrees of freedom. In the special case of two degrees of freedom, the chi-squared distribution coincides with the exponential distribution, and the equation for R² above is a simple way of generating the required exponential variate. Polar form The polar form was first proposed by J. Bell and then modified by R. Knop. While several different versions of the polar method have been described, the version of R. Knop will be described here because it is the most widely used, in part due to its inclusion in Numerical Recipes. A slightly different form is described as "Algorithm P" by D. Knuth in The Art of Computer Programming. Given u and v, independent and uniformly distributed in the closed interval [−1, +1], set s = u² + v². If s = 0 or s ≥ 1, discard u and v, and try another pair (u, v). Because u and v are uniformly distributed and because only points within the unit circle have been admitted, the values of s will be uniformly distributed in the open interval (0, 1), too. The latter can be seen by calculating the cumulative distribution function for s in the interval (0, 1). This is the area of a circle with radius √s, divided by π. From this we find the probability density function to have the constant value 1 on the interval (0, 1). Equally so, the angle θ divided by 2π is uniformly distributed in the interval (0, 1) and independent of s. We now identify the value of s with that of U1 and θ/(2π) with that of U2 in the basic form. As shown in the figure, the values of cos(2π U2) and sin(2π U2) in the basic form can be replaced with the ratios u/√s and v/√s, respectively. The advantage is that calculating the trigonometric functions directly can be avoided. 
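As an illustration, here is a minimal C++ sketch of the polar variant just described; it is not the article's reference implementation, and the function name, the fixed seed, and the use of std::mt19937 are illustrative choices.

// Minimal sketch of the polar variant: pairs (u, v) uniform on [-1, 1] x [-1, 1]
// are rejected unless 0 < s < 1, where s = u*u + v*v; the surviving pair is then
// scaled by sqrt(-2 ln s / s) to give two standard normal deviates.
#include <cmath>
#include <cstdio>
#include <random>
#include <utility>

std::pair<double, double> polar_gaussian_pair(std::mt19937& rng) {
    std::uniform_real_distribution<double> unif(-1.0, 1.0);
    double u, v, s;
    do {
        u = unif(rng);
        v = unif(rng);
        s = u * u + v * v;
    } while (s >= 1.0 || s == 0.0);            // keep only points strictly inside the unit circle
    double factor = std::sqrt(-2.0 * std::log(s) / s);
    return std::make_pair(u * factor, v * factor);  // two independent N(0, 1) deviates
}

int main() {
    std::mt19937 rng(12345);                   // fixed seed, for reproducibility of the example
    std::pair<double, double> z = polar_gaussian_pair(rng);
    std::printf("%f %f\n", z.first, z.second);
    return 0;
}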
This is helpful when trigonometric functions are more expensive to compute than the single division that replaces each one. Just as the basic form produces two standard normal deviates, so does this alternate calculation: z0 = u √(−2 ln s / s) and z1 = v √(−2 ln s / s). Contrasting the two forms The polar method differs from the basic method in that it is a type of rejection sampling. It discards some generated random numbers, but can be faster than the basic method because it is simpler to compute (provided that the random number generator is relatively fast) and is more numerically robust. Avoiding the use of expensive trigonometric functions improves speed over the basic form. It discards 1 − π/4 ≈ 21.46% of the total input uniformly distributed random number pairs generated, i.e. discards 4/π − 1 ≈ 0.2732 uniformly distributed random number pairs per Gaussian random number pair generated, requiring 4/π ≈ 1.2732 input random numbers per output random number. The basic form requires two multiplications, 1/2 logarithm, 1/2 square root, and one trigonometric function for each normal variate. On some processors, the cosine and sine of the same argument can be calculated in parallel using a single instruction. Notably for Intel-based machines, one can use the fsincos assembler instruction or the expi instruction (usually available from C as an intrinsic function), to calculate the complex exponential and just separate the real and imaginary parts. Note: To explicitly calculate the complex-polar form use the following substitutions in the general form: let r = √(−2 ln U1) and θ = 2π U2; then the real and imaginary parts of r e^(iθ) are the two normal deviates. The polar form requires 3/2 multiplications, 1/2 logarithm, 1/2 square root, and 1/2 division for each normal variate. The effect is to replace one multiplication and one trigonometric function with a single division and a conditional loop. Tails truncation When a computer is used to produce a uniform random variable it will inevitably have some inaccuracies because there is a lower bound on how close numbers can be to 0. If the generator uses 32 bits per output value, the smallest non-zero number that can be generated is 2⁻³². When U1 and U2 are equal to this, the Box–Muller transform produces a normal random deviate equal to about 6.660. This means that the algorithm will not produce random variables more than 6.660 standard deviations from the mean. This corresponds to a proportion of about 2Φ(−6.660) lost due to the truncation, where Φ is the standard cumulative normal distribution. With 64 bits the limit is pushed to about 9.42 standard deviations, for which the truncated proportion is vanishingly small. Implementation C++ The standard Box–Muller transform generates values from the standard normal distribution (i.e. standard normal deviates) with mean 0 and standard deviation 1. The implementation below in standard C++ generates values from any normal distribution with mean μ and variance σ². If Z is a standard normal deviate, then X = Zσ + μ will have a normal distribution with mean μ and standard deviation σ. The random number generator has been seeded to ensure that new, pseudo-random values will be returned from sequential calls to the generateGaussianNoise function.
#include <cmath>
#include <limits>
#include <random>
#include <utility>

//"mu" is the mean of the distribution, and "sigma" is the standard deviation.
std::pair<double, double> generateGaussianNoise(double mu, double sigma)
{
    constexpr double two_pi = 2.0 * M_PI;

    //initialize the random uniform number generator (runif) in a range 0 to 1
    static std::mt19937 rng(std::random_device{}()); // Standard mersenne_twister_engine seeded with rd()
    static std::uniform_real_distribution<> runif(0.0, 1.0);

    //create two random numbers, make sure u1 is greater than zero
    double u1, u2;
    do
    {
        u1 = runif(rng);
    } while (u1 == 0);
    u2 = runif(rng);

    //compute z0 and z1
    auto mag = sigma * sqrt(-2.0 * log(u1));
    auto z0 = mag * cos(two_pi * u2) + mu;
    auto z1 = mag * sin(two_pi * u2) + mu;

    return std::make_pair(z0, z1);
}

JavaScript
/* Syntax:
 *
 * [ x, y ] = rand_normal();
 * x = rand_normal()[0];
 * y = rand_normal()[1];
 */
function rand_normal() {
    let phi = 2 * Math.PI * Math.random();
    let R = Math.sqrt(-2 * Math.log(Math.random()));
    let x = R * Math.cos(phi);
    let y = R * Math.sin(phi);
    return [ x, y ];
}

Julia
"""
    boxmullersample(N)

Generate `2N` samples from the standard normal distribution using the Box-Muller method.
"""
function boxmullersample(N)
    z = Array{Float64}(undef, N, 2);
    for i in axes(z, 1)
        z[i,:] .= sincospi(2 * rand());
        z[i,:] .*= sqrt(-2 * log(rand()));
    end
    vec(z)
end

"""
    boxmullersample(n,μ,σ)

Generate `n` samples from the normal distribution with mean `μ` and standard deviation `σ` using the Box-Muller method.
"""
function boxmullersample(n,μ,σ)
    μ .+ σ*boxmullersample(cld(n,2))[1:n];
end

See also Inverse transform sampling Marsaglia polar method, similar transform to Box–Muller, which uses Cartesian coordinates, instead of polar coordinates References External links How to Convert a Uniform Distribution to a Gaussian Distribution (C Code) Transforms Non-uniform random numbers Articles with example C++ code
Box–Muller transform
[ "Mathematics" ]
1,968
[ "Mathematical objects", "Functions and mappings", "Mathematical relations", "Transforms" ]
60,770
https://en.wikipedia.org/wiki/Curvature
In mathematics, curvature is any of several strongly related concepts in geometry that intuitively measure the amount by which a curve deviates from being a straight line or by which a surface deviates from being a plane. If a curve or surface is contained in a larger space, curvature can be defined extrinsically relative to the ambient space. Curvature of Riemannian manifolds of dimension at least two can be defined intrinsically without reference to a larger space. For curves, the canonical example is that of a circle, which has a curvature equal to the reciprocal of its radius. Smaller circles bend more sharply, and hence have higher curvature. The curvature at a point of a differentiable curve is the curvature of its osculating circle — that is, the circle that best approximates the curve near this point. The curvature of a straight line is zero. In contrast to the tangent, which is a vector quantity, the curvature at a point is typically a scalar quantity, that is, it is expressed by a single real number. For surfaces (and, more generally for higher-dimensional manifolds), that are embedded in a Euclidean space, the concept of curvature is more complex, as it depends on the choice of a direction on the surface or manifold. This leads to the concepts of maximal curvature, minimal curvature, and mean curvature. History In Tractatus de configurationibus qualitatum et motuum, the 14th-century philosopher and mathematician Nicole Oresme introduces the concept of curvature as a measure of departure from straightness; for circles he has the curvature as being inversely proportional to the radius; and he attempts to extend this idea to other curves as a continuously varying magnitude. The curvature of a differentiable curve was originally defined through osculating circles. In this setting, Augustin-Louis Cauchy showed that the center of curvature is the intersection point of two infinitely close normal lines to the curve. Plane curves Intuitively, the curvature describes for any part of a curve how much the curve direction changes over a small distance travelled (e.g. angle in ), so it is a measure of the instantaneous rate of change of direction of a point that moves on the curve: the larger the curvature, the larger this rate of change. In other words, the curvature measures how fast the unit tangent vector to the curve at point p rotates when point p moves at unit speed along the curve. In fact, it can be proved that this instantaneous rate of change is exactly the curvature. More precisely, suppose that the point is moving on the curve at a constant speed of one unit, that is, the position of the point is a function of the parameter , which may be thought as the time or as the arc length from a given origin. Let be a unit tangent vector of the curve at , which is also the derivative of with respect to . Then, the derivative of with respect to is a vector that is normal to the curve and whose length is the curvature. To be meaningful, the definition of the curvature and its different characterizations require that the curve is continuously differentiable near , for having a tangent that varies continuously; it requires also that the curve is twice differentiable at , for insuring the existence of the involved limits, and of the derivative of . The characterization of the curvature in terms of the derivative of the unit tangent vector is probably less intuitive than the definition in terms of the osculating circle, but formulas for computing the curvature are easier to deduce. 
Therefore, and also because of its use in kinematics, this characterization is often given as a definition of the curvature. Osculating circle Historically, the curvature of a differentiable curve was defined through the osculating circle, which is the circle that best approximates the curve at a point. More precisely, given a point on a curve, every other point of the curve defines a circle (or sometimes a line) passing through and tangent to the curve at . The osculating circle is the limit, if it exists, of this circle when tends to . Then the center and the radius of curvature of the curve at are the center and the radius of the osculating circle. The curvature is the reciprocal of radius of curvature. That is, the curvature is where is the radius of curvature (the whole circle has this curvature, it can be read as turn over the length ). This definition is difficult to manipulate and to express in formulas. Therefore, other equivalent definitions have been introduced. In terms of arc-length parametrization Every differentiable curve can be parametrized with respect to arc length. In the case of a plane curve, this means the existence of a parametrization , where and are real-valued differentiable functions whose derivatives satisfy This means that the tangent vector has a length equal to one and is thus a unit tangent vector. If the curve is twice differentiable, that is, if the second derivatives of and exist, then the derivative of exists. This vector is normal to the curve, its length is the curvature , and it is oriented toward the center of curvature. That is, Moreover, because the radius of curvature is (assuming 𝜿(s) ≠ 0) and the center of curvature is on the normal to the curve, the center of curvature is the point (In case the curvature is zero, the center of curvature is not located anywhere on the plane R2 and is often said to be located "at infinity".) If is the unit normal vector obtained from by a counterclockwise rotation of , then with . The real number is called the oriented curvature or signed curvature. It depends on both the orientation of the plane (definition of counterclockwise), and the orientation of the curve provided by the parametrization. In fact, the change of variable provides another arc-length parametrization, and changes the sign of . In terms of a general parametrization Let be a proper parametric representation of a twice differentiable plane curve. Here proper means that on the domain of definition of the parametrization, the derivative is defined, differentiable and nowhere equal to the zero vector. With such a parametrization, the signed curvature is where primes refer to derivatives with respect to . The curvature is thus These can be expressed in a coordinate-free way as These formulas can be derived from the special case of arc-length parametrization in the following way. The above condition on the parametrisation imply that the arc length is a differentiable monotonic function of the parameter , and conversely that is a monotonic function of . Moreover, by changing, if needed, to , one may suppose that these functions are increasing and have a positive derivative. Using notation of the preceding section and the chain rule, one has and thus, by taking the norm of both sides where the prime denotes differentiation with respect to . The curvature is the norm of the derivative of with respect to . 
By using the above formula and the chain rule this derivative and its norm can be expressed in terms of and only, with the arc-length parameter completely eliminated, giving the above formulas for the curvature. Graph of a function The graph of a function , is a special case of a parametrized curve, of the form As the first and second derivatives of are 1 and 0, previous formulas simplify to for the curvature, and to for the signed curvature. In the general case of a curve, the sign of the signed curvature is somewhat arbitrary, as it depends on the orientation of the curve. In the case of the graph of a function, there is a natural orientation by increasing values of . This makes significant the sign of the signed curvature. The sign of the signed curvature is the same as the sign of the second derivative of . If it is positive then the graph has an upward concavity, and, if it is negative the graph has a downward concavity. If it is zero, then one has an inflection point or an undulation point. When the slope of the graph (that is the derivative of the function) is small, the signed curvature is well approximated by the second derivative. More precisely, using big O notation, one has It is common in physics and engineering to approximate the curvature with the second derivative, for example, in beam theory or for deriving the wave equation of a string under tension, and other applications where small slopes are involved. This often allows systems that are otherwise nonlinear to be treated approximately as linear. Polar coordinates If a curve is defined in polar coordinates by the radius expressed as a function of the polar angle, that is is a function of , then its curvature is where the prime refers to differentiation with respect to . This results from the formula for general parametrizations, by considering the parametrization Implicit curve For a curve defined by an implicit equation with partial derivatives denoted , , , , , the curvature is given by The signed curvature is not defined, as it depends on an orientation of the curve that is not provided by the implicit equation. Note that changing into would not change the curve defined by , but it would change the sign of the numerator if the absolute value were omitted in the preceding formula. A point of the curve where is a singular point, which means that the curve is not differentiable at this point, and thus that the curvature is not defined (most often, the point is either a crossing point or a cusp). The above formula for the curvature can be derived from the expression of the curvature of the graph of a function by using the implicit function theorem and the fact that, on such a curve, one has Examples It can be useful to verify on simple examples that the different formulas given in the preceding sections give the same result. Circle A common parametrization of a circle of radius is . The formula for the curvature gives It follows, as expected, that the radius of curvature is the radius of the circle, and that the center of curvature is the center of the circle. The circle is a rare case where the arc-length parametrization is easy to compute, as it is It is an arc-length parametrization, since the norm of is equal to one. This parametrization gives the same value for the curvature, as it amounts to division by in both the numerator and the denominator in the preceding formula. The same circle can also be defined by the implicit equation with . 
Then, the formula for the curvature in this case gives Parabola Consider the parabola . It is the graph of a function, with derivative , and second derivative . So, the signed curvature is It has the sign of for all values of . This means that, if , the concavity is upward directed everywhere; if , the concavity is downward directed; for , the curvature is zero everywhere, confirming that the parabola degenerates into a line in this case. The (unsigned) curvature is maximal for , that is at the stationary point (zero derivative) of the function, which is the vertex of the parabola. Consider the parametrization . The first derivative of is , and the second derivative is zero. Substituting into the formula for general parametrizations gives exactly the same result as above, with replaced by . If we use primes for derivatives with respect to the parameter . The same parabola can also be defined by the implicit equation with . As , and , one obtains exactly the same value for the (unsigned) curvature. However, the signed curvature is meaningless here, as is a valid implicit equation for the same parabola, which gives the opposite sign for the curvature. Frenet–Serret formulas for plane curves The expression of the curvature In terms of arc-length parametrization is essentially the first Frenet–Serret formula where the primes refer to the derivatives with respect to the arc length , and is the normal unit vector in the direction of . As planar curves have zero torsion, the second Frenet–Serret formula provides the relation For a general parametrization by a parameter , one needs expressions involving derivatives with respect to . As these are obtained by multiplying by the derivatives with respect to , one has, for any proper parametrization Curvature comb A curvature comb can be used to represent graphically the curvature of every point on a curve. If is a parametrised curve its comb is defined as the parametrized curve where are the curvature and normal vector and is a scaling factor (to be chosen as to enhance the graphical representation). Space curves As in the case of curves in two dimensions, the curvature of a regular space curve in three dimensions (and higher) is the magnitude of the acceleration of a particle moving with unit speed along a curve. Thus if is the arc-length parametrization of then the unit tangent vector is given by and the curvature is the magnitude of the acceleration: The direction of the acceleration is the unit normal vector , which is defined by The plane containing the two vectors and is the osculating plane to the curve at . The curvature has the following geometrical interpretation. There exists a circle in the osculating plane tangent to whose Taylor series to second order at the point of contact agrees with that of . This is the osculating circle to the curve. The radius of the circle is called the radius of curvature, and the curvature is the reciprocal of the radius of curvature: The tangent, curvature, and normal vector together describe the second-order behavior of a curve near a point. In three dimensions, the third-order behavior of a curve is described by a related notion of torsion, which measures the extent to which a curve tends to move as a helical path in space. The torsion and curvature are related by the Frenet–Serret formulas (in three dimensions) and their generalization (in higher dimensions). 
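To make the arc-length definition of space-curve curvature concrete, the following sketch (again sympy-based, with a circular helix chosen purely as an illustration) writes a helix of radius a and pitch 2πb in arc-length parametrization, checks that its tangent vector has unit length, and recovers the classical value κ = a/(a^2 + b^2) together with the unit normal pointing at the axis.

import sympy as sp

s = sp.symbols('s', real=True)
a, b = sp.symbols('a b', positive=True)
c = sp.sqrt(a**2 + b**2)

# Circular helix of radius a and pitch 2*pi*b, written in arc-length parametrization.
gamma = sp.Matrix([a * sp.cos(s / c), a * sp.sin(s / c), b * s / c])

T = gamma.diff(s)                     # unit tangent vector T(s)
print(sp.simplify(T.dot(T)))          # 1, so s really is the arc length

kappa = sp.simplify(T.diff(s).norm())
print(kappa)                          # a/(a**2 + b**2)

# Unit normal N = T'(s)/kappa: horizontal and pointing from the curve toward the helix axis.
N = sp.simplify(T.diff(s) / kappa)
print(N.T)                            # (-cos(s/c), -sin(s/c), 0)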
General expressions For a parametrically-defined space curve in three dimensions given in Cartesian coordinates by , the curvature is where the prime denotes differentiation with respect to the parameter . This can be expressed independently of the coordinate system by means of the formula where × denotes the vector cross product. The following formula is valid for the curvature of curves in a Euclidean space of any dimension: Curvature from arc and chord length Given two points and on , let be the arc length of the portion of the curve between and and let denote the length of the line segment from to . The curvature of at is given by the limit where the limit is taken as the point approaches on . The denominator can equally well be taken to be . The formula is valid in any dimension. Furthermore, by considering the limit independently on either side of , this definition of the curvature can sometimes accommodate a singularity at . The formula follows by verifying it for the osculating circle. Surfaces The curvature of curves drawn on a surface is the main tool for the defining and studying the curvature of the surface. Curves on surfaces For a curve drawn on a surface (embedded in three-dimensional Euclidean space), several curvatures are defined, which relates the direction of curvature to the surface's unit normal vector, including the: normal curvature geodesic curvature geodesic torsion Any non-singular curve on a smooth surface has its tangent vector contained in the tangent plane of the surface. The normal curvature, , is the curvature of the curve projected onto the plane containing the curve's tangent and the surface normal ; the geodesic curvature, , is the curvature of the curve projected onto the surface's tangent plane; and the geodesic torsion (or relative torsion), , measures the rate of change of the surface normal around the curve's tangent. Let the curve be arc-length parametrized, and let so that form an orthonormal basis, called the Darboux frame. The above quantities are related by: Principal curvature All curves on the surface with the same tangent vector at a given point will have the same normal curvature, which is the same as the curvature of the curve obtained by intersecting the surface with the plane containing and . Taking all possible tangent vectors, the maximum and minimum values of the normal curvature at a point are called the principal curvatures, and , and the directions of the corresponding tangent vectors are called principal normal directions. Normal sections Curvature can be evaluated along surface normal sections, similar to above (see for example the Earth radius of curvature). Developable surfaces Some curved surfaces, such as those made from a smooth sheet of paper, can be flattened down into the plane without distorting their intrinsic features in any way. Such developable surfaces have zero Gaussian curvature (see below). Gaussian curvature In contrast to curves, which do not have intrinsic curvature, but do have extrinsic curvature (they only have a curvature given an embedding), surfaces can have intrinsic curvature, independent of an embedding. The Gaussian curvature, named after Carl Friedrich Gauss, is equal to the product of the principal curvatures, . It has a dimension of length−2 and is positive for spheres, negative for one-sheet hyperboloids and zero for planes and cylinders. It determines whether a surface is locally convex (when it is positive) or locally saddle-shaped (when it is negative). 
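The sign behaviour just described can be checked on explicit surfaces. For a surface written as a graph z = f(x, y), the Gaussian curvature has the standard closed form K = (f_xx f_yy - f_xy^2) / (1 + f_x^2 + f_y^2)^2; that formula is not derived in the text above and is quoted here only as a convenient shortcut for a sketch. The Python snippet below confirms K > 0 for a bowl-shaped paraboloid, K < 0 for a saddle, and K = 0 for a plane and for a parabolic cylinder (curved, but developable).

import sympy as sp

x, y = sp.symbols('x y', real=True)

def gaussian_curvature(f):
    # K = (f_xx*f_yy - f_xy**2) / (1 + f_x**2 + f_y**2)**2 for the graph z = f(x, y)
    fx, fy = sp.diff(f, x), sp.diff(f, y)
    fxx, fyy, fxy = sp.diff(f, x, 2), sp.diff(f, y, 2), sp.diff(f, x, y)
    return sp.simplify((fxx * fyy - fxy**2) / (1 + fx**2 + fy**2) ** 2)

print(gaussian_curvature(x**2 + y**2))  # 4/(4*x**2 + 4*y**2 + 1)**2 > 0 (locally convex bowl)
print(gaussian_curvature(x**2 - y**2))  # -4/(4*x**2 + 4*y**2 + 1)**2 < 0 (saddle)
print(gaussian_curvature(sp.S(3)))      # 0 (horizontal plane z = 3)
print(gaussian_curvature(x**2))         # 0 (parabolic cylinder: curved but developable)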
Gaussian curvature is an intrinsic property of the surface, meaning it does not depend on the particular embedding of the surface; intuitively, this means that ants living on the surface could determine the Gaussian curvature. For example, an ant living on a sphere could measure the sum of the interior angles of a triangle and determine that it was greater than 180 degrees, implying that the space it inhabited had positive curvature. On the other hand, an ant living on a cylinder would not detect any such departure from Euclidean geometry; in particular the ant could not detect that the two surfaces have different mean curvatures (see below), which is a purely extrinsic type of curvature. Formally, Gaussian curvature only depends on the Riemannian metric of the surface. This is Gauss's celebrated Theorema Egregium, which he found while concerned with geographic surveys and mapmaking. An intrinsic definition of the Gaussian curvature at a point is the following: imagine an ant which is tied to with a short thread of length . It runs around while the thread is completely stretched and measures the length of one complete trip around . If the surface were flat, the ant would find . On curved surfaces, the formula for will be different, and the Gaussian curvature at the point can be computed by the Bertrand–Diguet–Puiseux theorem as The integral of the Gaussian curvature over the whole surface is closely related to the surface's Euler characteristic; see the Gauss–Bonnet theorem. The discrete analog of curvature, corresponding to curvature being concentrated at a point and particularly useful for polyhedra, is the (angular) defect; the analog for the Gauss–Bonnet theorem is Descartes' theorem on total angular defect. Because (Gaussian) curvature can be defined without reference to an embedding space, it is not necessary that a surface be embedded in a higher-dimensional space in order to be curved. Such an intrinsically curved two-dimensional surface is a simple example of a Riemannian manifold. Mean curvature The mean curvature is an extrinsic measure of curvature equal to half the sum of the principal curvatures, . It has a dimension of length−1. Mean curvature is closely related to the first variation of surface area. In particular, a minimal surface such as a soap film has mean curvature zero and a soap bubble has constant mean curvature. Unlike Gauss curvature, the mean curvature is extrinsic and depends on the embedding, for instance, a cylinder and a plane are locally isometric but the mean curvature of a plane is zero while that of a cylinder is nonzero. Second fundamental form The intrinsic and extrinsic curvature of a surface can be combined in the second fundamental form. This is a quadratic form in the tangent plane to the surface at a point whose value at a particular tangent vector to the surface is the normal component of the acceleration of a curve along the surface tangent to ; that is, it is the normal curvature to a curve tangent to (see above). Symbolically, where is the unit normal to the surface. For unit tangent vectors , the second fundamental form assumes the maximum value and minimum value , which occur in the principal directions and , respectively. Thus, by the principal axis theorem, the second fundamental form is Thus the second fundamental form encodes both the intrinsic and extrinsic curvatures. 
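The intrinsic recipe just described can be carried out explicitly on a sphere: a geodesic circle of intrinsic radius r on a sphere of radius R has circumference C(r) = 2πR sin(r/R), and the Bertrand–Diguet–Puiseux limit K = lim_{r→0} 3(2πr - C(r)) / (πr^3) should therefore return 1/R^2. The short sympy sketch below (the symbols and the sphere example are choices made here) evaluates the limit and, as a sanity check on the Gauss–Bonnet theorem mentioned above, multiplies the constant curvature by the sphere's area 4πR^2 to obtain 2π times the Euler characteristic χ = 2.

import sympy as sp

r, R = sp.symbols('r R', positive=True)

# Circumference of a geodesic circle of intrinsic radius r on a sphere of radius R.
C = 2 * sp.pi * R * sp.sin(r / R)

# Bertrand-Diguet-Puiseux: K = lim_{r->0} 3*(2*pi*r - C(r)) / (pi*r**3)
K = sp.limit(3 * (2 * sp.pi * r - C) / (sp.pi * r**3), r, 0, '+')
print(K)                                  # 1/R**2

# Gauss-Bonnet sanity check: total curvature of the sphere equals 2*pi*chi with chi = 2.
total = sp.simplify(K * 4 * sp.pi * R**2)  # K is constant, area = 4*pi*R**2
print(total, sp.Eq(total, 2 * sp.pi * 2))  # 4*pi, True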
Shape operator An encapsulation of surface curvature can be found in the shape operator, , which is a self-adjoint linear operator from the tangent plane to itself (specifically, the differential of the Gauss map). For a surface with tangent vectors and normal , the shape operator can be expressed compactly in index summation notation as (Compare the alternative expression of curvature for a plane curve.) The Weingarten equations give the value of in terms of the coefficients of the first and second fundamental forms as The principal curvatures are the eigenvalues of the shape operator, the principal curvature directions are its eigenvectors, the Gauss curvature is its determinant, and the mean curvature is half its trace. Curvature of space By extension of the former argument, a space of three or more dimensions can be intrinsically curved. The curvature is intrinsic in the sense that it is a property defined at every point in the space, rather than a property defined with respect to a larger space that contains it. In general, a curved space may or may not be conceived as being embedded in a higher-dimensional ambient space; if not then its curvature can only be defined intrinsically. After the discovery of the intrinsic definition of curvature, which is closely connected with non-Euclidean geometry, many mathematicians and scientists questioned whether ordinary physical space might be curved, although the success of Euclidean geometry up to that time meant that the radius of curvature must be astronomically large. In the theory of general relativity, which describes gravity and cosmology, the idea is slightly generalised to the "curvature of spacetime"; in relativity theory spacetime is a pseudo-Riemannian manifold. Once a time coordinate is defined, the three-dimensional space corresponding to a particular time is generally a curved Riemannian manifold; but since the time coordinate choice is largely arbitrary, it is the underlying spacetime curvature that is physically significant. Although an arbitrarily curved space is very complex to describe, the curvature of a space which is locally isotropic and homogeneous is described by a single Gaussian curvature, as for a surface; mathematically these are strong conditions, but they correspond to reasonable physical assumptions (all points and all directions are indistinguishable). A positive curvature corresponds to the inverse square radius of curvature; an example is a sphere or hypersphere. An example of negatively curved space is hyperbolic geometry (see also: non-positive curvature). A space or space-time with zero curvature is called flat. For example, Euclidean space is an example of a flat space, and Minkowski space is an example of a flat spacetime. There are other examples of flat geometries in both settings, though. A torus or a cylinder can both be given flat metrics, but differ in their topology. Other topologies are also possible for curved space . Generalizations The mathematical notion of curvature is also defined in much more general contexts. Many of these generalizations emphasize different aspects of the curvature as it is understood in lower dimensions. One such generalization is kinematic. The curvature of a curve can naturally be considered as a kinematic quantity, representing the force felt by a certain observer moving along the curve; analogously, curvature in higher dimensions can be regarded as a kind of tidal force (this is one way of thinking of the sectional curvature). 
This generalization of curvature depends on how nearby test particles diverge or converge when they are allowed to move freely in the space; see Jacobi field. Another broad generalization of curvature comes from the study of parallel transport on a surface. For instance, if a vector is moved around a loop on the surface of a sphere keeping parallel throughout the motion, then the final position of the vector may not be the same as the initial position of the vector. This phenomenon is known as holonomy. Various generalizations capture in an abstract form this idea of curvature as a measure of holonomy; see curvature form. A closely related notion of curvature comes from gauge theory in physics, where the curvature represents a field and a vector potential for the field is a quantity that is in general path-dependent: it may change if an observer moves around a loop. Two more generalizations of curvature are the scalar curvature and Ricci curvature. In a curved surface such as the sphere, the area of a disc on the surface differs from the area of a disc of the same radius in flat space. This difference (in a suitable limit) is measured by the scalar curvature. The difference in area of a sector of the disc is measured by the Ricci curvature. Each of the scalar curvature and Ricci curvature are defined in analogous ways in three and higher dimensions. They are particularly important in relativity theory, where they both appear on the side of Einstein's field equations that represents the geometry of spacetime (the other side of which represents the presence of matter and energy). These generalizations of curvature underlie, for instance, the notion that curvature can be a property of a measure; see curvature of a measure. Another generalization of curvature relies on the ability to compare a curved space with another space that has constant curvature. Often this is done with triangles in the spaces. The notion of a triangle makes senses in metric spaces, and this gives rise to spaces. See also Curvature form for the appropriate notion of curvature for vector bundles and principal bundles with connection Curvature of a measure for a notion of curvature in measure theory Curvature of parametric surfaces Curvature of Riemannian manifolds for generalizations of Gauss curvature to higher-dimensional Riemannian manifolds Curvature vector and geodesic curvature for appropriate notions of curvature of curves in Riemannian manifolds, of any dimension Degree of curvature Differential geometry of curves for a full treatment of curves embedded in a Euclidean space of arbitrary dimension Dioptre, a measurement of curvature used in optics Evolute, the locus of the centers of curvature of a given curve Fundamental theorem of curves Gauss–Bonnet theorem for an elementary application of curvature Gauss map for more geometric properties of Gauss curvature Gauss's principle of least constraint, an expression of the Principle of Least Action Mean curvature at one point on a surface Minimum railway curve radius Radius of curvature Second fundamental form for the extrinsic curvature of hypersurfaces in general Sinuosity Torsion of a curve Notes References () () External links The Feynman Lectures on Physics Vol. II Ch. 42: Curved Space The History of Curvature Curvature, Intrinsic and Extrinsic at MathPages Multivariable calculus Articles containing video clips
Curvature
[ "Physics", "Mathematics" ]
5,569
[ "Geometric measurement", "Physical quantities", "Calculus", "Multivariable calculus", "Curvature (mathematics)" ]
60,773
https://en.wikipedia.org/wiki/List%20of%20woods
This is a list of woods, most commonly used in the timber and lumber trade. Soft woods (coniferous) Araucaria Hoop pine (Araucaria cunninghamii) Monkey puzzle tree (Araucaria araucana) Paraná pine (Araucaria angustifolia) Cedar (Cedrus) Celery-top pine (Phyllocladus aspleniifolius) Cypress (Chamaecyparis, Cupressus, Taxodium) Arizona cypress (Cupressus arizonica) Bald cypress, southern cypress (Taxodium distichum) Alerce (Fitzroya cupressoides) Hinoki cypress (Chamaecyparis obtusa) Lawson's cypress (Chamaecyparis lawsoniana) Mediterranean cypress (Cupressus sempervirens) Douglas-fir (Pseudotsuga menziesii) Coast Douglas-fir (Pseudotsuga menziesii var. menziesii) Rocky Mountain Douglas-fir (Pseudotsuga menziesii var. glauca) European yew (Taxus baccata) Fir (Abies) Balsam fir (Abies balsamea) Silver fir (Abies alba) Noble fir (Abies procera) Pacific silver fir (Abies amabilis) Hemlock (Tsuga) Eastern hemlock (Tsuga canadensis) Mountain hemlock (Tsuga mertensiana) Western hemlock (Tsuga heterophylla) Huon pine, Macquarie pine (Lagarostrobos franklinii) Kauri (New Zealand) (Agathis australis) Queensland kauri (Australia) (Agathis robusta) Japanese nutmeg-yew, kaya (Torreya nucifera) Larch (Larix) European larch (Larix decidua) Japanese larch (Larix kaempferi) Tamarack (Larix laricina) Western larch (Larix occidentalis) Pine (Pinus) European black pine (Pinus nigra) Jack pine (Pinus banksiana) Lodgepole pine (Pinus contorta) Monterey pine (Pinus radiata) Ponderosa pine (Pinus ponderosa) Red pine (North America) (Pinus resinosa) Scots pine, red pine (UK) (Pinus sylvestris) White pine Eastern white pine (Pinus strobus) Western white pine (Pinus monticola) Sugar pine (Pinus lambertiana) Southern yellow pine Loblolly pine (Pinus taeda) Longleaf pine (Pinus palustris) Pitch pine (Pinus rigida) Shortleaf pine (Pinus echinata) Red cedar Eastern red cedar, (Juniperus virginiana) Western red cedar (Thuja plicata) Coast redwood (Sequoia sempervirens) Rimu (Dacrydium cupressinum) Spruce (Picea) Norway spruce (Picea abies) Black spruce (Picea mariana) Red spruce (Picea rubens) Sitka spruce (Picea sitchensis) White spruce (Picea glauca) Sugi (Cryptomeria japonica) White cedar Northern white cedar (Thuja occidentalis) Atlantic white cedar (Chamaecyparis thyoides) Nootka cypress (Cupressus nootkatensis) Hardwoods (angiosperms) Abachi (Triplochiton scleroxylon) Acacia (Acacia sp., Robinia pseudoacacia) African padauk (Pterocarpus soyauxii) Afzelia, doussi (Afzelia africana) Agba, tola (Gossweilerodendron balsamiferum) Alder (Alnus) Black alder (Alnus glutinosa) Red alder (Alnus rubra) Ash (Fraxinus) Black ash (Fraxinus nigra) Blue ash (Fraxinus quadrangulata) Common ash (Fraxinus excelsior) Green ash (Fraxinus pennsylvanica) Oregon ash (Fraxinus latifolia) Pumpkin ash (Fraxinus profunda) White ash (Fraxinus americana) Aspen (Populus) Bigtooth aspen (Populus gradidentata) European aspen (Populus tremula) Quaking aspen (Populus tremuloides) Australian red cedar (Toona ciliata) Ayan, movingui (Distemonanthus benthamianus) Balsa (Ochroma pyramidale) Basswood, linden American basswood (Tilia americana) White basswood (Tilia heterophylla) American beech (Fagus grandifolia) Birch (Betula) American birches Gray birch (Betula populifolia) Black birch (Betula nigra) Paper birch (Betula papyrifera) Sweet birch (Betula lenta) Yellow birch (Betula alleghaniensis) European birches Silver birch (Betula pendula) Downy birch (Betula pubescens) Blackbean (Castanospermum australe) Blackwood Australian blackwood (Acacia 
melanoxylon) African blackwood, mpingo (Dalbergia melanoxylon) Bloodwood (Brosimum rubescens) Boxelder (Acer negundo) Boxwood, common box (Buxus sempervirens) Brazilian walnut (Ocotea porosa) Brazilwood (Caesalpinia echinata) Buckeye, Horse-chestnut (Aesculus) Horse-chestnut (Aesculus hippocastanum) Ohio buckeye (Aesculus glabra) Yellow buckeye (Aesculus flava) Butternut (Juglans cinerea) California bay laurel (Umbellularia californica) Camphor tree (Cinnamomum camphora) Cape chestnut (Calodendrum capense) Catalpa, catawba (Catalpa) Ceylon satinwood (Chloroxylon swietenia) Cherry (Prunus) Black cherry (Prunus serotina) Red cherry (Prunus pensylvanica) Wild cherry (Prunus avium) Chestnut (Castanea spp.) Chestnut (Castanea sativa) American Chestnut (Castanea dentata) Coachwood (Ceratopetalum apetalum) Cocobolo (Dalbergia retusa) Corkwood (Leitneria floridana) Cottonwood, popular Eastern cottonwood (Populus deltoides) Swamp cottonwood (Populus heterophylla) Cucumbertree (Magnolia acuminata) Cumaru (Dipteryx spp.) Dogwood (Cornus spp.) Flowering dogwood (Cornus florida) Pacific dogwood (Cornus nuttallii) Ebony (Diospyros) Andaman marblewood (Diospyros kurzii) Ebène marbre (Diospyros melanida) African ebony (Diospyros crassiflora) Ceylon ebony (Diospyros ebenum) Rare Brown (Rareay Brownibium) Elm American elm (Ulmus americana) English elm (Ulmus procera) Rock elm (Ulmus thomasii) Slippery elm, red elm (Ulmus rubra) Wych elm (Ulmus glabra) Eucalyptus Lyptus: Flooded gum (Eucalyptus grandis) White mahogany (Eucalyptus acmenoides) Brown mallet (Eucalyptus astringens) Banglay, southern mahogany (Eucalyptus botryoides) River red gum (Eucalyptus camaldulensis) Karri (Eucalyptus diversicolor) Blue gum (Eucalyptus globulus) Flooded gum, rose gum (Eucalyptus grandis) York gum (Eucalyptus loxophleba) Jarrah (Eucalyptus marginata) Tallowwood (Eucalyptus microcorys) Grey ironbark (Eucalyptus paniculata) Blackbutt (Eucalyptus pilularis) Mountain ash (Eucalyptus regnans) Australian oak (Eucalyptus obliqua) Alpine ash (Eucalyptus delegatensis) Red mahogany (Eucalyptus resinifera) Swamp mahogany, swamp messmate (Eucalyptus robusta) Sydney blue gum (Eucalyptus saligna) Mugga, red ironbark (Eucalyptus sideroxylon) Redwood (Eucalyptus transcontinentalis) Wandoo (Eucalyptus wandoo) European crabapple (Malus sylvestris) European pear (Pyrus communis) Gonçalo alves (Astronium spp.) Greenheart (Chlorocardium rodiei) Grenadilla, mpingo (Dalbergia melanoxylon) Guanandi (Calophyllum brasiliense) Gum (Eucalyptus) Gumbo limbo (Bursera simaruba) Hackberry (Celtis occidentalis) Hickory (Carya) Pecan (Carya illinoinensis) Pignut hickory (Carya glabra) Shagbark hickory (Carya ovata) Shellbark hickory (Carya laciniosa) Hornbeam (Carpinus spp.) American hophornbeam (Ostrya virginiana) Ipê (Handroanthus spp.) Iroko, African teak (Milicia excelsa) Ironwood Balau (Shorea spp.) American hornbeam (Carpinus caroliniana) Sheoak, Polynesian ironwood (Casuarina equisetifolia) Giant ironwood (Choricarpia subargentea) Diesel tree (Copaifera langsdorffii) Borneo ironwood (Eusideroxylon zwageri) Lignum vitae Guaiacwood (Guaiacum officinale) Holywood (Guaiacum sanctum) Takian (Hopea odorata) Black ironwood (Krugiodendron ferreum) Black ironwood, olive (Olea spp.) 
Lebombo ironwood Androstachys johnsonii Catalina ironwood (Lyonothamnus floribundus) Ceylon ironwood (Mesua ferrea) Desert ironwood (Olneya tesota) Persian ironwood (Parrotia persica) Brazilian ironwood, pau ferro (Caesalpinia ferrea) Yellow lapacho (Tabebuia serratifolia) Jacarandá-boca-de-sapo (Jacaranda brasiliana) Jacarandá de Brasil (Dalbergia nigra) Jatobá (Hymenaea courbaril) Kingwood (Dalbergia cearensis) Lacewood Northern silky oak (Cardwellia sublimis) American sycamore (Platanus occidentalis) London plane (Platanus × hispanica) Limba (Terminalia superba) Locust Black locust (Robinia pseudoacacia) Honey locust (Gleditsia triacanthos) Mahogany Genuine mahogany (Swietenia) West Indies mahogany (Swietenia mahagoni) Bigleaf mahogany (Swietenia macrophylla) Pacific Coast mahogany (Swietenia humilis) other mahogany African mahogany (Khaya spp.) Chinese mahogany (Toona sinensis) Australian red cedar, Indian mahogany (Toona ciliata) Philippine mahogany, calantis, kalantis (Toona calantas) Indonesian mahogany, suren (Toona sureni) Sapele (Entandrophragma cylindricum) Sipo, utile (Entandrophragma utile) Tiama, (Entandrophragma angolense) Kosipo, (Entandrophragma candollei) Mountain mahogany, bottle tree (Entandrophragma caudatumi) Indian mahogany, chickrassy, chittagong wood (Chukrasia velutina) Spanish Cedar, cedro, Brazilian mahogany (Cedrela odorata) Light bosse, pink mahogany (Guarea cedrata) Dark bosse, pink Mahogany (Guarea thompsonii) American muskwood (Guarea grandifolia) Carapa, royal mahogany, demerara mahogany, bastard mahogany, andiroba, crabwood (Carapa guianensis) Bead-tree, white cedar, Persian lilac (Melia azedarach) Maple (Acer) Hard maple Sugar maple (Acer saccharum) Black maple (Acer nigrum) Soft maple Boxelder (Acer negundo) Red maple (Acer rubrum) Silver maple (Acer saccharinum) European maple Sycamore maple (Acer pseudoplatanus) Marblewood (Marmaroxylon racemosum) Marri, red gum (Corymbia calophylla) Meranti (Shorea spp.) Merbau, ipil (Intsia bijuga), Kwila Mesquite White mesquite (Prosopis alba) Chilean mesquite (Prosopis chilensis) Honey mesquite (Prosopis glandulosa) Black mesquite (Prosopis nigra) Screwbean mesquite (Prosopis pubescens) Velvet mesquite (Prosopis velutina) Mopane (Colophospermum mopane) Oak (Quercus) White oak White oak (Quercus alba) Bur oak (Quercus macrocarpa) Post oak (Quercus stellata) Swamp white oak (Quercus bicolor) Southern live oak (Quercus virginiana) Swamp chestnut oak (Quercus michauxii) Chestnut oak (Quercus prinus) Chinkapin oak (Quercus muhlenbergii) Canyon live oak (Quercus chrysolepis) Overcup oak (Quercus lyrata) English oak (Quercus robur) Red oak Northern red oak (Quercus rubra) Eastern black oak (Quercus velutina) Laurel oak (Quercus laurifolia) Southern red oak (Quercus falcata) Water oak (Quercus nigra) Willow oak (Quercus phellos) Nuttall's oak (Quercus texana) Okoumé (Aucoumea klaineana) Olive (Olea europaea) Pearl tree (Poliothyrsis sinensis) Pink ivory (Berchemia zeyheri) Poplar Balsam poplar (Populus balsamifera) Black poplar (Populus nigra) Hybrid black poplar (Populus × canadensis) Purpleheart (Peltogyne spp.) Queensland maple (Flindersia brayleyana) Queensland walnut (Endiandra palmerstonii) Ramin (Gonystylus spp.) Redheart, chakté-coc (Erythroxylon mexicanum) Sal (Shorea robusta) Sweetgum (Liquidambar styraciflua) Sandalwood (Santalum spp.) 
Indian sandalwood (Santalum album) Sassafras (Sassafras albidum) Southern sassafras (Atherosperma moschatum) Satiné, satinwood (Brosimum rubescens) Silky oak (Grevillea robusta) Silver wattle (Acacia dealbata) Sourwood (Oxydendrum arboreum) Spanish-cedar (Cedrela odorata) Spanish elm (Cordia alliodora) Tamboti (Spirostachys africana) Teak (Tectona grandis) Philippine teak (Tectona philippinensis) Thailand rosewood (Dalbergia cochinchinensis) Tupelo (Nyssa spp.) Black tupelo (Nyssa sylvatica) Tulip tree (Liriodendron tulipifera) Turpentine (Syncarpia glomulifera) Walnut (Juglans) Eastern black walnut (Juglans nigra) Common walnut (Juglans regia) Wenge (Millettia laurentii) Panga-panga (Millettia stuhlmannii) Willow (Salix) Black willow (Salix nigra) Cricket-bat willow (Salix alba 'Caerulea') White willow (Salix alba) Weeping willow (Salix babylonica) Zingana, African zebrawood (Microberlinia brazzavillensis) Pseudowoods Other wood-like materials: Bamboo Palm tree Coconut timber (Cocos nucifera) Toddy palm timber (Borassus flabellifer) See also Janka hardness test List of Indian timber trees References External links Global Wood Density Database National Hardwood and Lumber Association American Hardwood Information Center American Hardwood Export Council Australian National Association of Forest Industries Canadian Wood Group FSC Lesser Known Timber Species NCSU Inside Wood project Reproduction of The American Woods: exhibited by actual specimens and with copious explanatory text by Romeyn B. Hough US Forest Products Laboratory, "Characteristics and Availability of Commercially Important Wood" from the Wood Handbook PDF 916K International Wood Collectors Society Xiloteca Manuel Soler (One of the largest private collection of wood samples) African Timber Export Statistics Woods Woods Woodworking materials Woods Woods Woods
List of woods
[ "Physics", "Engineering" ]
3,584
[ "Natural materials", "Building engineering", "Construction", "Materials", "Building materials", "Architecture lists", "Matter", "Architecture" ]
60,777
https://en.wikipedia.org/wiki/Limonite
Limonite () is an iron ore consisting of a mixture of hydrated iron(III) oxide-hydroxides in varying composition. The generic formula is frequently written as , although this is not entirely accurate as the ratio of oxide to hydroxide can vary quite widely. Limonite is one of the three principal iron ores, the others being hematite and magnetite, and has been mined for the production of iron since at least 400 BC. Names Limonite is named for the Ancient Greek word ( ), meaning "wet meadow", or ( ), meaning "marshy lake", as an allusion to its occurrence as in meadows and marshes. In its brown form, it is sometimes called brown hematite or brown iron ore. Characteristics Limonite is relatively dense with a specific gravity varying from 2.7 to 4.3. It is usually medium to dark yellowish brown in color. The streak of limonite on an unglazed porcelain plate is always yellowish brown, a character which distinguishes it from hematite with a red streak, or from magnetite with a black streak. The hardness is quite variable, ranging from 1 to 5. In thin section it appears as red, yellow, or brown and has a high index of refraction, 2.0–2.4. Limonite minerals are strongly birefringent, but grain sizes are usually too small for this to be detectable. Although originally defined as a single mineral, limonite is now recognized as a field term for a mixture of related hydrated iron oxide minerals, among them goethite, lepidocrocite, akaganeite, and jarosite. Determination of the precise mineral composition is practical only with X-ray diffraction techniques. Individual minerals in limonite may form crystals, but limonite does not, although specimens may show a fibrous or microcrystalline structure, and limonite often occurs in concretionary forms or in compact and earthy masses; sometimes mammillary, botryoidal, reniform or stalactitic. Because of its amorphous nature, and occurrence in hydrated areas limonite often presents as a clay or mudstone. However, there are limonite pseudomorphs after other minerals such as pyrite. This means that chemical weathering transforms the crystals of pyrite into limonite by hydrating the molecules, but the external shape of the pyrite crystal remains. Limonite pseudomorphs have also been formed from other iron oxides, hematite and magnetite; from the carbonate siderite and from iron rich silicates such as almandine garnets. Formation Limonite usually forms from the hydration of hematite and magnetite, from the oxidation and hydration of iron rich sulfide minerals, and chemical weathering of other iron rich minerals such as olivine, pyroxene, amphibole, and biotite. It is often the major iron component in lateritic soils, and limonite laterite ores are a source of nickel and potentially cobalt and other valuable metals, present as trace elements. It is often deposited in run-off streams from mining operations. Uses Nickel-rich limonite ores represent the largest reserves of nickel. Such minerals are classified as lateritic nickel ore deposits. One of the first uses was as a pigment. The yellow form produced yellow ochre for which Cyprus was famous, while the darker forms produced more earthy tones. Roasting the limonite changed it partially to hematite, producing red ochres, burnt umbers and siennas. Bog iron ore and limonite mudstones are mined as a source of iron. Iron caps or gossans of siliceous iron oxide typically form as the result of intensive oxidation of sulfide ore deposits. These gossans were used by prospectors as guides to buried ore. 
Limonite was mined for its ancillary gold content. The oxidation of sulfide deposits which contained gold, often resulted in the concentration of gold in the iron oxide and quartz of the gossans. The gold of the primary veins was concentrated into the limonites of the deeply weathered rocks. In another example the deeply weathered iron formations of Brazil served to concentrate gold with the limonite of the resulting soils. History Limonite was one of the earliest materials used as a pigment by humans, and can be seen in Neolithic cave paintings and pictographs. While the first iron ore was likely meteoric iron, and hematite was far easier to smelt, in Africa, where the first evidence of iron metallurgy occurs, limonite is the most prevalent iron ore. Before smelting, as the ore was heated and the water driven off, more and more of the limonite was converted to hematite. The ore was then pounded as it was heated above 1250 °C, at which temperature the metallic iron begins sticking together and non-metallic impurities are thrown off as sparks. Complex systems developed, notably in Tanzania, to process limonite. Nonetheless, hematite and magnetite remained the ores of choice when smelting was by bloomeries, and it was only with the development of blast furnaces in the 1st century BCE in China and about 1150 CE in Europe, that the brown iron ore of limonite could be used to best advantage. Bog iron ore and limonite were mined in the US, but this ended with the development of advanced mining techniques. Goldbearing limonite gossans were productively mined in the Shasta County, California mining district. Similar deposits were mined near Rio Tinto in Spain and Mount Morgan in Australia. In the Dahlonega gold belt in Lumpkin County, Georgia gold was mined from limonite-rich lateritic or saprolite soil. As saprolite deposits have been exhausted in many mining sites, limonite has become the most prominent source of nickel for use in energy dense batteries. See also Ore genesis Notes External links Mineral galleries Mindat Gold and limonite Iron ores Rocks
Limonite
[ "Physics" ]
1,271
[ "Rocks", "Physical objects", "Matter" ]
60,784
https://en.wikipedia.org/wiki/Boulder
In geology, a boulder (or rarely bowlder) is a rock fragment with size greater than 25.6 centimetres (about 10 inches) in diameter. Smaller pieces are called cobbles and pebbles. While a boulder may be small enough to move or roll manually, others are extremely massive. In common usage, a boulder is too large for a person to move. Smaller boulders are usually just called rocks or stones. Etymology The word boulder derives from boulder stone, from Middle English bulderston or Swedish bullersten. About In places covered by ice sheets during ice ages, such as Scandinavia, northern North America, and Siberia, glacial erratics are common. Erratics are boulders picked up by ice sheets during their advance, and deposited when they melt. These boulders are called "erratic" because they typically are of a different rock type than the bedrock on which they are deposited. One such boulder is used as the pedestal of the Bronze Horseman in Saint Petersburg, Russia. Some noted rock formations involve giant boulders exposed by erosion, such as the Devil's Marbles in Australia's Northern Territory, the Horeke basalts in New Zealand, where an entire valley contains only boulders, and The Baths on the island of Virgin Gorda in the British Virgin Islands. Boulder-sized clasts are found in some sedimentary rocks, such as coarse conglomerate and boulder clay. See also Bouldering, free climbing performed on small rock formations or artificial climbing walls Moeraki Boulders, unusually large spherical boulders found in New Zealand Monolith, a geological feature consisting of a single massive rock List of individual rocks References External links Rocks Rock formations Garden features Natural materials
Boulder
[ "Physics" ]
325
[ "Natural materials", "Materials", "Physical objects", "Rocks", "Matter" ]
60,790
https://en.wikipedia.org/wiki/Castor%20and%20Pollux
Castor and Pollux (or Polydeuces) are twin half-brothers in Greek and Roman mythology, known together as the Dioscuri or Dioskouroi. Their mother was Leda, but they had different fathers; Castor was the mortal son of Tyndareus, the king of Sparta, while Pollux was the divine son of Zeus, who seduced Leda in the guise of a swan. The pair are thus an example of heteropaternal superfecundation. Though accounts of their birth are varied, they are sometimes said to have been born from an egg, along with their twin sisters Helen of Troy and Clytemnestra. In Latin, the twins are also known as the Gemini ("twins") or Castores, as well as the Tyndaridae or Tyndarids. Pollux asked Zeus to let him share his own immortality with his twin to keep them together, and they were transformed into the constellation Gemini. The pair were regarded as the patrons of sailors, to whom they appeared as St. Elmo's fire. They were also associated with horsemanship, in keeping with their origin as the Indo-European horse twins. Birth There is much contradictory information regarding the parentage of the Dioscuri. In the Homeric Odyssey (11.298–304), they are the sons of Tyndareus alone, but they were sons of Zeus in the Hesiodic Catalogue (fr. 24 M–W). The conventional account (attested first in Pindar, Nemean 10) combined these paternities so that only Pollux was fathered by Zeus, while Leda and her husband Tyndareus conceived Castor. This explains why they were granted an alternate immortality. The figure of Tyndareus may have entered their tradition to explain their archaic name Tindaridai in Spartan inscriptions, or Tyndaridai in literature, in turn occasioning incompatible accounts of their parentage. Their other sisters were Timandra, Phoebe, and Philonoe. Castor and Pollux are sometimes both mortal, sometimes both divine. One consistent point is that if only one of them is immortal, it is Pollux. In Homer's Iliad, Helen looks down from the walls of Troy and wonders why she does not see her brothers among the Achaeans. The narrator remarks that they are both already dead and buried back in their homeland of Lacedaemon, thus suggesting that at least in some early traditions, both were mortal. Their death and shared immortality offered by Zeus was material of the lost Cypria in the Epic cycle. The Dioscuri were regarded as helpers of mankind and held to be patrons of travellers and of sailors in particular, who invoked them to seek favourable winds. Their role as horsemen and boxers also led to them being regarded as the patrons of athletes and athletic contests. They characteristically intervened at the moment of crisis, aiding those who honoured or trusted them. Classical sources Ancient Greek authors tell a number of versions of the story of Castor and Pollux. Homer portrays them initially as ordinary mortals, treating them as dead in the Iliad: "... there are two commanders I do not see, Castor the horse breaker and the boxer Polydeuces, my brothers ..." – Helen, Iliad but in the Odyssey they are described as both being alive, even though "the grain-bearing earth holds them". The author describes them as "having honour equal to gods", living on alternate days because of the intervention of Zeus. In both the Odyssey and in Hesiod, they are described as the sons of Tyndareus and Leda. In Pindar, Pollux is the son of Zeus, while Castor is the son of the mortal Tyndareus. The theme of ambiguous parentage is not unique to Castor and Pollux; similar characterisations appear in the stories of Herakles and Theseus. 
The Dioscuri are also invoked in Alcaeus' fragment 34a, though whether this poem antedates the Homeric Hymn to the twins is unknown. They appear together in two plays by Euripides, Helen and Elektra. Cicero tells the story of how Simonides of Ceos was rebuked by Scopas, his patron, for devoting too much space to praising Castor and Pollux in an ode celebrating Scopas' victory in a chariot race. Shortly afterwards, Simonides was told that two young men wished to speak to him; after he had left the banqueting room, the roof fell in and crushed Scopas and his guests. According to the ancient sources the horse of Castor was named Cyllarus. Mythology Both Dioscuri were excellent horsemen and hunters who participated in the hunting of the Calydonian Boar and later joined the crew of Jason's ship, the Argo. As Argonauts During the expedition of the Argonauts, Pollux took part in a boxing contest and defeated King Amycus of the Bebryces, a savage mythical people in Bithynia. After returning from the voyage, the Dioscuri helped Jason and Peleus to destroy the city of Iolcus in revenge for the treachery of its king Pelias. Rescuing Helen When their sister Helen was abducted by Theseus, the half-brothers invaded his kingdom of Attica to rescue her. In revenge they abducted Theseus's mother Aethra and took her to Sparta while setting his rival, Menestheus, on the throne of Athens. Aethra was then forced to become Helen's slave. She was ultimately returned to her home by her grandsons Demophon and Acamas after the fall of Troy. Leucippides, Lynceus, and death Castor and Pollux aspired to marry the Leucippides ("daughters of the white horse"), Phoebe and Hilaeira, whose father was Leucippus ("white horse"). Both women were already betrothed to cousins of the Dioscuri, the twin brothers Lynceus and Idas of Messenia, sons of Tyndareus's brother Aphareus. Castor and Pollux carried the women off to Sparta wherein each had a son; Phoebe bore Mnesileos to Pollux and Hilaeira bore Anogon to Castor. This began a family feud among the four sons of the brothers Tyndareus and Aphareus. The cousins carried out a cattle-raid in Arcadia together but fell out over the division of the meat. After stealing the herd, but before dividing it, the cousins butchered, quartered, and roasted a calf. As they prepared to eat, the gigantic Idas suggested that the herd be divided into two parts instead of four, based on which pair of cousins finished their meal first. Castor and Pollux agreed. Idas quickly ate both his portion and Lynceus' portion. Castor and Pollux had been duped. They allowed their cousins to take the entire herd, but vowed someday to take revenge. Some time later, Idas and Lynceus visited their uncle's home in Sparta. The uncle was on his way to Crete, so he left Helen in charge of entertaining the guests, which included both sets of cousins, as well as Paris, prince of Troy. Castor and Pollux recognized the opportunity to exact revenge, made an excuse that justified leaving the feast, and set out to steal their cousins' herd. Idas and Lynceus eventually set out for home, leaving Helen alone with Paris, who then kidnapped her. Thus, the four cousins helped set into motion the events that gave rise to the Trojan War. Meanwhile, Castor and Pollux had reached their destination. Castor climbed a tree to keep a watch as Pollux began to free the cattle. Far away, Idas and Lynceus approached. Lynceus, named for the lynx because he could see in the dark, spied Castor hiding in the tree. 
Idas and Lynceus immediately understood what was happening. Idas, furious, ambushed Castor, fatally wounding him with a blow from his spear – but not before Castor called out to warn Pollux. In the ensuing brawl, Pollux killed Lynceus. As Idas was about to kill Pollux, Zeus, who had been watching from Mount Olympus, hurled a thunderbolt, killing Idas and saving his son. Returning to the dying Castor, Pollux was given the choice by Zeus of spending all his time on Mount Olympus or giving half his immortality to his mortal brother. He opted for the latter, enabling the twins to alternate between Olympus and Hades. The brothers became the two brightest stars in the constellation Gemini ("the twins"): Castor (Alpha Geminorum) and Pollux (Beta Geminorum). As emblems of immortality and death, the Dioscuri, like Heracles, were said to have been initiated into the Eleusinian mysteries. In some myths, Poseidon rewarded them with horses to ride and power to aid shipwrecked men. Iconography Castor and Pollux are consistently associated with horses in art and literature. They are widely depicted as helmeted horsemen carrying spears. The Pseudo-Oppian manuscript depicts the brothers hunting, both on horseback and on foot. On votive reliefs they are depicted with a variety of symbols representing the concept of twinhood, such as the dokana (δόκανα – two upright pieces of wood connected by two cross-beams), a pair of amphorae, a pair of shields, or a pair of snakes. They are also often shown wearing felt caps, sometimes with stars above. They are depicted on metopes (an element of a Doric frieze) from Delphi showing them on the voyage of the Argo (Ἀργώ) and rustling cattle with Idas. Greek vases regularly show them capturing Phoebe and Hilaeira, as Argonauts, as well as in religious ceremonies and at the delivery to Leda of the egg containing Helen. They can be recognized in some vase-paintings by the skull-cap they wear, the pilos (πῖλος), which was already explained in antiquity as the remnants of the egg from which they hatched. They were described by Dares Phrygius as "blond haired, large eyed, fair complexioned, and well-built with trim bodies". Dokana Dokana were ancient symbolical representation of the Dioscuri. It consisted of two upright beams with others laid across them transversely. The Dioscuri were worshipped as gods of war, and their images accompanied the Spartan kings whenever they took the field against an enemy. But when in the year 504 B.C. the two kings, during their invasion of Attica, failed in their undertaking on account of their secret enmity towards each other, it was decreed at Sparta, that in future only one king should command the army, and in consequence should only be accompanied by one of the images of the Dioscuri. It is not improbable that these images, accompanying the kings into the field, were the ancient δόκανα, which were now disjointed, so that one-half of the symbol remained at Sparta, while the other was taken into the field by one of the kings. The name δόκανα seems that it comes from δοκός which meant beam, but Suda and the Etymologicum Magnum state that δόκανα was the name of the graves of the Dioscuri at Sparta, and derived from the verb δέχομαι. Shrines and rites The Dioskouroi were worshipped by the Greeks and Romans alike; there were temples to the twins in Athens, such as the Anakeion, and Rome, as well as shrines in many other locations in the ancient world. 
The Dioskouroi and their sisters grew up in Sparta, in the royal household of Tyndareus; they were particularly important to the Spartans, who associated them with the Spartan tradition of dual kingship and appreciated that two princes of their ruling house were elevated to immortality. Their connection there was very ancient: a uniquely Spartan aniconic representation of the Tyndaridai was as two upright posts joined by a cross-bar; as the protectors of the Spartan army the "beam figure" or dókana was carried in front of the army on campaign. Sparta's unique dual kingship reflects the divine influence of the Dioscuri. When the Spartan army marched to war, one king remained behind at home, accompanied by one of the Twins. "In this way the real political order is secured in the realm of the Gods". Their herōon or grave-shrine was on a mountain top at Therapne across the Eurotas from Sparta, at a shrine known as the Meneláeion where Helen, Menelaus, Castor and Pollux were all said to be buried. Castor himself was also venerated in the region of Kastoria in northern Greece. They were commemorated both as gods on Olympus worthy of holocaust, and as deceased mortals in Hades, whose spirits had to be propitiated by libations. Lesser shrines to Castor, Pollux and Helen were also established at a number of other locations around Sparta. The pear tree was regarded by the Spartans as sacred to Castor and Pollux, and images of the twins were hung in its branches. The standard Spartan oath was to swear "by the two gods" (in Doric Greek: νά τώ θεὼ, ná tō theō, in the Dual number). The rite of theoxenia (θεοξενία), "god-entertaining", was particularly associated with Castor and Pollux. The two deities were summoned to a table laid with food, whether at individuals' own homes or in the public hearths or equivalent places controlled by states. They are sometimes shown arriving at a gallop over a food-laden table. Although such "table offerings" were a fairly common feature of Greek cult rituals, they were normally made in the shrines of the gods or heroes concerned. The domestic setting of the theoxenia was a characteristic distinction accorded to the Dioskouroi. The image of the twins attending a goddess are widespread and link the Dioskouroi with the male societies of initiates under the aegis of the Anatolian Great Goddess and the great gods of Samothrace. During the Archaic period, the Dioscuri were venerated in Naukratis. The Dioscuri are the inventors of war dances, which characterize the Kuretes. Anakeia (ἀνάκεια) or Anakeion (ἀνάκειον) was a festival held at Athens in honor of the Dioscuri who also had the name Anakes (Ἄνακες). City of Dioscurias The ancient city of Dioscurias or Dioskurias (Διοσκουριάς) on the Black Sea coast, modern Sokhumi, was named after them. In addition, according to legend the city was founded by them. According to another legend, the city was founded by their charioteers, Amphitus and Cercius of Sparta. Island of Dioscuri The island of Socotra, located between the Guardafui Channel and the Arabian Sea, was called by the Greeks Dioskouridou (Διοσκουρίδου νήσος), meaning "the island of the Dioscuri". Indo-European analogues The heavenly twins appear in Indo-European tradition as the effulgent Vedic brother-horsemen called the Ashvins, Lithuanian Ašvieniai, and possibly Germanic Alcis. Etruscan Kastur and Pultuce The Etruscans venerated the twins as Kastur and Pultuce, collectively as the tinas cliniiaras, "Sons of Tinia", Etruscan counterpart of Zeus. 
They were often portrayed on Etruscan mirrors. As was the fashion in Greece, they could also be portrayed symbolically; one example is seen in the Tomb of the Funereal Bed at Tarquinia where a lectisternium is painted for them. Another is symbolised in a painting depicted as two pointed caps crowned with laurel, referring to the Phrygian caps. Italy and the Roman Empire From the 5th century BCE onwards, the brothers were revered by the Romans, probably as the result of cultural transmission via the Greek colonies of Magna Graecia in southern Italy. An archaic Latin inscription of the 6th or 5th century BCE found at Lavinium, which reads Castorei Podlouqueique qurois ("To Castor and Pollux, the Dioskouroi"), suggests a direct transmission from the Greeks; the word "qurois" is virtually a transliteration of the Greek word κούροις, while "Podlouquei" is effectively a transliteration of the Greek Πολυδεύκης. The construction of the Temple of Castor and Pollux, located in the Roman Forum at the heart of their city, was undertaken to fulfill a vow (votum) made by Aulus Postumius Albus Regillensis in gratitude at the Roman victory in the Battle of Lake Regillus in 495 BCE. The establishment of a temple may also be a form of evocatio, the transferral of a tutelary deity from a defeated town to Rome, where cult would be offered in exchange for favor. According to legend, the twins fought at the head of the Roman army and subsequently brought news of the victory back to Rome. The Locrians of Magna Graecia had attributed their success at a legendary battle on the banks of the Sagras to the intervention of the Twins. The Roman legend could have had its origins in the Locrian account and possibly supplies further evidence of cultural transmission between Rome and Magna Graecia. The Romans believed that the twins aided them on the battlefield. Their role as horsemen made them particularly attractive to the Roman equites and cavalry. Each year on July 15, Feast Day of the Dioskouroi, 1,800 equestrians would parade through the streets of Rome in an elaborate spectacle in which each rider wore full military attire and whatever decorations he had earned. Castor and Pollux are also represented in the Circus Maximus by the use of eggs as lap counters. In translations of comedies by Plautus, women generally swear by Castor, and men by Pollux; this is exemplified by the slave-woman character Staphyla in A Pot of Gold (act i, ll. 67–71) where she swears by Castor in line 67, then the negative prefix in line 71 denotes a refutation against swearing by Pollux. Photius wrote that Polydeuces was a lover of Hermes, and the god made him a gift of Dotor (), the Thessalian horse. Christianization Even after the rise of Christianity, the Dioskouroi continued to be venerated. The 5th century pope Gelasius I attested to the presence of a "cult of Castores" that the people did not want to abandon. In some instances, the twins appear to have simply been absorbed into a Christian framework; thus 4th century CE pottery and carvings from North Africa depict the Dioskouroi alongside the Twelve Apostles, the Raising of Lazarus or with Saint Peter. The church took an ambivalent attitude, rejecting the immortality of the Dioskouroi but seeking to replace them with equivalent Christian pairs. Saints Peter and Paul were thus adopted in place of the Dioskouroi as patrons of travelers, and Saints Cosmas and Damian took over their function as healers. 
Some have also associated Saints Speusippus, Eleusippus, and Melapsippus with the Dioskouroi. The New Testament scholar Dennis MacDonald identifies Castor and Pollux as models for James son of Zebedee and his brother John in the Gospel of Mark. MacDonald cites the origin of this identification to 1913 when J. Rendel Harris published his work Boanerges, a Greek version probably of an Aramaic name meaning "Sons of Thunder", thunder being associated with Zeus, father of Pollux, in what MacDonald calls a form of early Christian Dioscurism. More directly, the Acts of the Apostles mentions the Dioskouroi in a neutral context, as the figurehead of an Alexandrian ship boarded by Paul in Malta (Acts 28:11). Gallery The iconography of Castor and Pollux influenced or has close parallels with depictions of divine male twins in cultures with Greco-Roman relations. See also Ambulia, a Spartan epithet used for Athena, Zeus, and Castor and Pollux Alexiares and Anicetus, twin-sons of Heracles/Hercules and Hebe/Juventas; alongside their father, they are the guardians of the gates of Mount Olympus. Ashvins, the divine twins of Vedic mythology Ašvieniai, the divine twins in Lithuanian mythology Castorian (dog), extinct dog breed said to have been bred by Castor Lugal-irra and Meslamta-ea, twins gods in Mesopotamian mythology also thought to be represented by the constellation Gemini Heteropaternal superfecundation, when two males father fraternal twins Janus Nio Gozu and Mezu Thracian horseman, sometimes linked to the Dioscuri Notes References Sources . . . . . Pindar's themes of the unequal brothers and faithfulness and salvation, with the Christian parallels in the dual nature of Christ. . Excerpts in English of classical sources. Walker, Henry J. The Twin Horse Gods: The Dioskouroi in Mythologies of the Ancient World. London, New York: I. B. Tauris, 2015. Further reading External links The Warburg Institute Iconographic Database (images of Castor and Pollux—the Dioscuri) Argonauts Astronomical myths Characters in the Argonautica Children of Leda (mythology) Children of Zeus Chthonic beings Cybele Deeds of Poseidon Divine twins Family of Calyce (mythology) Gemini in astrology Greek gods Greek mythological heroes Greek underworld Dioscuri Horse deities Mythological Laconians Life-death-rebirth gods Princes in Greek mythology
Castor and Pollux
[ "Astronomy" ]
4,685
[ "Castor and Pollux", "Astronomical myths" ]
60,825
https://en.wikipedia.org/wiki/Endorphins
Endorphins (contracted from endogenous morphine) are peptides produced in the brain that block the perception of pain and increase feelings of wellbeing. They are produced and stored in the pituitary gland of the brain. Endorphins are endogenous painkillers, often produced in the brain and adrenal medulla during physical exercise or orgasm, that inhibit pain and muscle cramps and relieve stress. History Opioid peptides in the brain were first discovered in 1973 by investigators at the University of Aberdeen, John Hughes and Hans Kosterlitz. They isolated "enkephalins" (from the Greek ) from pig brain, identified as Met-enkephalin and Leu-enkephalin. This came after the discovery of a receptor that was proposed to produce the pain-relieving analgesic effects of morphine and other opioids, which led Kosterlitz and Hughes to their discovery of the endogenous opioid ligands. Research during this time was focused on the search for a painkiller that did not have the addictive character or overdose risk of morphine. Rabi Simantov and Solomon H. Snyder isolated morphine-like peptides from calf brain. Eric J. Simon, who independently discovered opioid receptors, later termed these peptides endorphins. This term was essentially assigned to any peptide that demonstrated morphine-like activity. In 1976, Choh Hao Li and David Chung recorded the sequences of α-, β-, and γ-endorphin isolated from camel pituitary glands and characterized their opioid activity. Li determined that β-endorphin produced strong analgesic effects. Wilhelm Feldberg and Derek George Smyth in 1977 confirmed this, finding β-endorphin to be more potent than morphine. They also confirmed that its effects were reversed by naloxone, an opioid antagonist. Studies have subsequently distinguished between enkephalins, endorphins, and endogenously produced morphine, which is not a peptide. Opioid peptides are classified based on their precursor propeptide: all endorphins are synthesized from the precursor proopiomelanocortin (POMC), enkephalins from proenkephalin A, and dynorphins from prodynorphin. Etymology The word endorphin is derived from / meaning "within" (endogenous, / , "proceeding from within"), and morphine, from Morpheus (), the god of dreams in Greek mythology. Thus, endorphin is a contraction of 'endo(genous) (mo)rphin' (morphin being the old spelling of morphine). Types The class of endorphins consists of three endogenous opioid peptides: α-endorphin, β-endorphin, and γ-endorphin. The endorphins are all synthesized from the precursor protein, proopiomelanocortin, and all contain a Met-enkephalin motif at their N-terminus: Tyr-Gly-Gly-Phe-Met. α-endorphin and γ-endorphin result from proteolytic cleavage of β-endorphin between the Thr(16)-Leu(17) and Leu(17)-Phe(18) residues, respectively. α-endorphin has the shortest sequence, and β-endorphin has the longest sequence. α-endorphin and γ-endorphin are primarily found in the anterior and intermediate pituitary. While β-endorphin is studied for its opioid activity, α-endorphin and γ-endorphin both lack affinity for opiate receptors and thus do not affect the body in the same way that β-endorphin does. Some studies have characterized α-endorphin activity as similar to that of psychostimulants, and γ-endorphin activity as similar to that of neuroleptics. Synthesis Endorphin precursors are primarily produced in the pituitary gland. All three types of endorphins are fragments of the precursor protein proopiomelanocortin (POMC). 
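The cleavage positions given in the Types section can be made concrete with a short sketch. The 31-residue sequence used below is the commonly cited human β-endorphin sequence and is included only as an illustrative assumption; it is not stated in this article.

```python
# A minimal sketch (not from the article) of the cleavages described above.
# The sequence is the commonly cited human beta-endorphin sequence (assumption).
BETA_ENDORPHIN = "YGGFMTSEKSQTPLVTLFKNAIIKNAYKKGE"  # one-letter amino acid codes

# Met-enkephalin motif at the N-terminus: Tyr-Gly-Gly-Phe-Met
MET_ENKEPHALIN = "YGGFM"
assert BETA_ENDORPHIN.startswith(MET_ENKEPHALIN)

# Cleavage between Thr(16)-Leu(17) yields alpha-endorphin (residues 1-16);
# cleavage between Leu(17)-Phe(18) yields gamma-endorphin (residues 1-17).
alpha_endorphin = BETA_ENDORPHIN[:16]
gamma_endorphin = BETA_ENDORPHIN[:17]

print(len(alpha_endorphin), alpha_endorphin)  # 16 YGGFMTSEKSQTPLVT
print(len(gamma_endorphin), gamma_endorphin)  # 17 YGGFMTSEKSQTPLVTL
```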
At the trans-Golgi network, POMC binds to a membrane-bound protein, carboxypeptidase E (CPE). CPE facilitates POMC transport into immature budding vesicles. In mammals, pro-peptide convertase 1 (PC1) cleaves POMC into adrenocorticotropin (ACTH) and beta-lipotropin (β-LPH). β-LPH, a pituitary hormone with little opiate activity, is then continually fragmented into different peptides, including α-endorphin, β-endorphin, and γ-endorphin. Peptide convertase 2 (PC2) is responsible for cleaving β-LPH into β-endorphin and γ-lipotropin. Formation of α-endorphin and γ-endorphin results from proteolytic cleavage of β-endorphin. Regulation Noradrenaline has been shown to increase endorphins production within inflammatory tissues, resulting in an analgesic effect; the stimulation of sympathetic nerves by electro-acupuncture is believed to be the cause of its analgesic effects. Mechanism of action Endorphins are released from the pituitary gland, typically in response to pain, and can act in both the central nervous system (CNS) and the peripheral nervous system (PNS). In the PNS, β-endorphin is the primary endorphin released from the pituitary gland. Endorphins inhibit transmission of pain signals by binding μ-receptors of peripheral nerves, which block their release of neurotransmitter substance P. The mechanism in the CNS is similar but works by blocking a different neurotransmitter: gamma-aminobutyric acid (GABA). In turn, inhibition of GABA increases the production and release of dopamine, a neurotransmitter associated with reward learning. Functions Endorphins play a major role in the body's inhibitory response to pain. Research has demonstrated that meditation by trained individuals can be used to trigger endorphin release. Laughter may also stimulate endorphin production and elevate one's pain threshold. Endorphin production can be triggered by vigorous aerobic exercise. The release of β-endorphin has been postulated to contribute to the phenomenon known as "runner's high". However, several studies have supported the hypothesis that the runner's high is due to the release of endocannabinoids rather than that of endorphins. Endorphins may contribute to the positive effect of exercise on anxiety and depression. The same phenomenon may also play a role in exercise addiction. Regular intense exercise may cause the brain to downregulate the production of endorphins in periods of rest to maintain homeostasis, causing a person to exercise more intensely in order to receive the same feeling. See also Neurobiological effects of physical exercise Enkephalin References External links Opioid peptides Analgesics Neuropeptides Stress (biological and psychological) Stress (biology) Psychological stress Motivation Pain Grief Anxiety Happy hormones
Endorphins
[ "Biology" ]
1,585
[ "Ethology", "Behavior", "Motivation", "Human behavior" ]
60,828
https://en.wikipedia.org/wiki/Lepton
In particle physics, a lepton is an elementary particle of half-integer spin (spin ) that does not undergo strong interactions. Two main classes of leptons exist: charged leptons (also known as the electron-like leptons or muons), including the electron, muon, and tauon, and neutral leptons, better known as neutrinos. Charged leptons can combine with other particles to form various composite particles such as atoms and positronium, while neutrinos rarely interact with anything, and are consequently rarely observed. The best known of all leptons is the electron. There are six types of leptons, known as flavours, grouped in three generations. The first-generation leptons, also called electronic leptons, comprise the electron () and the electron neutrino (); the second are the muonic leptons, comprising the muon () and the muon neutrino (); and the third are the tauonic leptons, comprising the tau () and the tau neutrino (). Electrons have the least mass of all the charged leptons. The heavier muons and taus will rapidly change into electrons and neutrinos through a process of particle decay: the transformation from a higher mass state to a lower mass state. Thus electrons are stable and the most common charged lepton in the universe, whereas muons and taus can only be produced in high-energy collisions (such as those involving cosmic rays and those carried out in particle accelerators). Leptons have various intrinsic properties, including electric charge, spin, and mass. Unlike quarks, however, leptons are not subject to the strong interaction, but they are subject to the other three fundamental interactions: gravitation, the weak interaction, and to electromagnetism, of which the latter is proportional to charge, and is thus zero for the electrically neutral neutrinos. For every lepton flavor, there is a corresponding type of antiparticle, known as an antilepton, that differs from the lepton only in that some of its properties have equal magnitude but opposite sign. According to certain theories, neutrinos may be their own antiparticle. It is not currently known whether this is the case. The first charged lepton, the electron, was theorized in the mid-19th century by several scientists and was discovered in 1897 by J. J. Thomson. The next lepton to be observed was the muon, discovered by Carl D. Anderson in 1936, which was classified as a meson at the time. After investigation, it was realized that the muon did not have the expected properties of a meson, but rather behaved like an electron, only with higher mass. It took until 1947 for the concept of "leptons" as a family of particles to be proposed. The first neutrino, the electron neutrino, was proposed by Wolfgang Pauli in 1930 to explain certain characteristics of beta decay. It was first observed in the Cowan–Reines neutrino experiment conducted by Clyde Cowan and Frederick Reines in 1956. The muon neutrino was discovered in 1962 by Leon M. Lederman, Melvin Schwartz, and Jack Steinberger, and the tau discovered between 1974 and 1977 by Martin Lewis Perl and his colleagues from the Stanford Linear Accelerator Center and Lawrence Berkeley National Laboratory. The tau neutrino remained elusive until July 2000, when the DONUT collaboration from Fermilab announced its discovery. Leptons are an important part of the Standard Model. Electrons are one of the components of atoms, alongside protons and neutrons. 
Exotic atoms with muons and taus instead of electrons can also be synthesized, as well as lepton–antilepton particles such as positronium. Etymology The name lepton comes from the Greek leptós, "fine, small, thin" (neuter nominative/accusative singular form: λεπτόν leptón); the earliest attested form of the word is the Mycenaean Greek , re-po-to, written in Linear B syllabic script. Lepton was first used by physicist Léon Rosenfeld in 1948: Following a suggestion of Prof. C. Møller, I adopt—as a pendant to "nucleon"—the denomination "lepton" (from λεπτός, small, thin, delicate) to denote a particle of small mass. Rosenfeld chose the name as the common name for electrons and (then hypothesized) neutrinos. Additionally, the muon, initially classified as a meson, was reclassified as a lepton in the 1950s. The masses of those particles are small compared to nucleons—the mass of an electron () and the mass of a muon (with a value of ) are fractions of the mass of the "heavy" proton (), and the mass of a neutrino is nearly zero. However, the mass of the tau (discovered in the mid-1970s) () is nearly twice that of the proton and times that of the electron. History The first lepton identified was the electron, discovered by J.J. Thomson and his team of British physicists in 1897. Then in 1930, Wolfgang Pauli postulated the electron neutrino to preserve conservation of energy, conservation of momentum, and conservation of angular momentum in beta decay. Pauli theorized that an undetected particle was carrying away the difference between the energy, momentum, and angular momentum of the initial and observed final particles. The electron neutrino was simply called the neutrino, as it was not yet known that neutrinos came in different flavours (or different "generations"). Nearly 40 years after the discovery of the electron, the muon was discovered by Carl D. Anderson in 1936. Due to its mass, it was initially categorized as a meson rather than a lepton. It later became clear that the muon was much more similar to the electron than to mesons, as muons do not undergo the strong interaction, and thus the muon was reclassified: electrons, muons, and the (electron) neutrino were grouped into a new group of particles—the leptons. In 1962, Leon M. Lederman, Melvin Schwartz, and Jack Steinberger showed that more than one type of neutrino exists by first detecting interactions of the muon neutrino, which earned them the 1988 Nobel Prize, although by then the different flavours of neutrino had already been theorized. The tau was first detected in a series of experiments between 1974 and 1977 by Martin Lewis Perl with his colleagues at the SLAC LBL group. Like the electron and the muon, it too was expected to have an associated neutrino. The first evidence for tau neutrinos came from the observation of "missing" energy and momentum in tau decay, analogous to the "missing" energy and momentum in beta decay leading to the discovery of the electron neutrino. The first detection of tau neutrino interactions was announced in 2000 by the DONUT collaboration at Fermilab, making it the second-to-latest particle of the Standard Model to have been directly observed, with Higgs boson being discovered in 2012. Although all present data is consistent with three generations of leptons, some particle physicists are searching for a fourth generation. The current lower limit on the mass of such a fourth charged lepton is , while its associated neutrino would have a mass of at least . 
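The mass comparisons quoted in the Etymology section above (whose numerical values are elided in this text) are easy to check. The figures below are approximate, commonly quoted lepton and proton masses and are assumptions made for illustration, not values taken from the article.

```python
# Rough check of the mass comparisons quoted above. The numbers are approximate,
# commonly quoted masses in MeV/c^2 and are assumptions, not article values.
m_electron = 0.511      # MeV/c^2
m_muon     = 105.66     # MeV/c^2
m_tau      = 1776.9     # MeV/c^2
m_proton   = 938.27     # MeV/c^2

print(f"electron / proton  ~ {m_electron / m_proton:.5f}")  # ~ 0.00054
print(f"muon     / proton  ~ {m_muon / m_proton:.3f}")      # ~ 0.113
print(f"tau      / proton  ~ {m_tau / m_proton:.2f}")       # ~ 1.89 ("nearly twice")
print(f"tau      / electron ~ {m_tau / m_electron:.0f}")    # ~ 3477
```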
Properties Spin and chirality Leptons are spin  particles. The spin-statistics theorem thus implies that they are fermions and thus that they are subject to the Pauli exclusion principle: no two leptons of the same species can be in the same state at the same time. Furthermore, it means that a lepton can have only two possible spin states, namely up or down. A closely related property is chirality, which in turn is closely related to a more easily visualized property called helicity. The helicity of a particle is the direction of its spin relative to its momentum; particles with spin in the same direction as their momentum are called right-handed and they are otherwise called left-handed. When a particle is massless, the direction of its momentum relative to its spin is the same in every reference frame, whereas for massive particles it is possible to 'overtake' the particle by choosing a faster-moving reference frame; in the faster frame, the helicity is reversed. Chirality is a technical property, defined through transformation behaviour under the Poincaré group, that does not change with reference frame. It is contrived to agree with helicity for massless particles, and is still well defined for particles with mass. In many quantum field theories, such as quantum electrodynamics and quantum chromodynamics, left- and right-handed fermions are identical. However, the Standard Model's weak interaction treats left-handed and right-handed fermions differently: only left-handed fermions (and right-handed anti-fermions) participate in the weak interaction. This is an example of parity violation explicitly written into the model. In the literature, left-handed fields are often denoted by a capital L subscript (e.g. the normal electron e) and right-handed fields are denoted by a capital R subscript (e.g. a positron e). Right-handed neutrinos and left-handed anti-neutrinos have no possible interaction with other particles (see Sterile neutrino) and so are not a functional part of the Standard Model, although their exclusion is not a strict requirement; they are sometimes listed in particle tables to emphasize that they would have no active role if included in the model. Even though electrically charged right-handed particles (electron, muon, or tau) do not engage in the weak interaction specifically, they can still interact electrically, and hence still participate in the combined electroweak force, although with different strengths (W). Electromagnetic interaction One of the most prominent properties of leptons is their electric charge, . The electric charge determines the strength of their electromagnetic interactions. It determines the strength of the electric field generated by the particle (see Coulomb's law) and how strongly the particle reacts to an external electric or magnetic field (see Lorentz force). Each generation contains one lepton with and one lepton with zero electric charge. The lepton with electric charge is commonly simply referred to as a charged lepton while a neutral lepton is called a neutrino. For example, the first generation consists of the electron with a negative electric charge and the electrically neutral electron neutrino . In the language of quantum field theory, the electromagnetic interaction of the charged leptons is expressed by the fact that the particles interact with the quantum of the electromagnetic field, the photon. The Feynman diagram of the electron–photon interaction is shown on the right. 
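The helicity convention described under "Spin and chirality" above can be made concrete with a small sketch; the function name and the vector representation are my own, chosen only for illustration.

```python
# A minimal sketch (not from the article) of helicity as the sign of the
# projection of spin onto momentum: positive -> "right-handed", negative ->
# "left-handed". For a massive particle the result is frame-dependent, since
# a faster-moving observer sees the momentum (but not the spin) reversed.
def helicity(spin, momentum):
    """Classify helicity from the sign of S . p."""
    dot = sum(s * p for s, p in zip(spin, momentum))
    if dot > 0:
        return "right-handed"
    if dot < 0:
        return "left-handed"
    return "undefined"

print(helicity((0, 0, 0.5), (0, 0, 1.0)))   # right-handed
print(helicity((0, 0, 0.5), (0, 0, -1.0)))  # left-handed (same spin, reversed momentum)
```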
Because leptons possess an intrinsic rotation in the form of their spin, charged leptons generate a magnetic field. The size of their magnetic dipole moment is given by where is the mass of the lepton and is the so-called " factor" for the lepton. First-order quantum mechanical approximation predicts that the  factor is 2 for all leptons. However, higher-order quantum effects caused by loops in Feynman diagrams introduce corrections to this value. These corrections, referred to as the anomalous magnetic dipole moment, are very sensitive to the details of a quantum field theory model, and thus provide the opportunity for precision tests of the Standard Model. The theoretical and measured values for the electron anomalous magnetic dipole moment are within agreement within eight significant figures. The results for the muon, however, are problematic, hinting at a small, persistent discrepancy between the Standard Model and experiment. Weak interaction In the Standard Model, the left-handed charged lepton and the left-handed neutrino are arranged in doublet that transforms in the spinor representation () of the weak isospin SU(2) gauge symmetry. This means that these particles are eigenstates of the isospin projection with eigenvalues and respectively. In the meantime, the right-handed charged lepton transforms as a weak isospin scalar () and thus does not participate in the weak interaction, while there is no evidence that a right-handed neutrino exists at all. The Higgs mechanism recombines the gauge fields of the weak isospin SU(2) and the weak hypercharge U(1) symmetries to three massive vector bosons (, , ) mediating the weak interaction, and one massless vector boson, the photon (γ), responsible for the electromagnetic interaction. The electric charge can be calculated from the isospin projection and weak hypercharge through the Gell-Mann–Nishijima formula, To recover the observed electric charges for all particles, the left-handed weak isospin doublet must thus have , while the right-handed isospin scalar must have . The interaction of the leptons with the massive weak interaction vector bosons is shown in the figure on the right. Mass In the Standard Model, each lepton starts out with no intrinsic mass. The charged leptons (i.e. the electron, muon, and tau) obtain an effective mass through interaction with the Higgs field, but the neutrinos remain massless. For technical reasons, the masslessness of the neutrinos implies that there is no mixing of the different generations of charged leptons as there is for quarks. The zero mass of neutrino is in close agreement with current direct experimental observations of the mass. However, it is known from indirect experiments—most prominently from observed neutrino oscillations—that neutrinos have to have a nonzero mass, probably less than . This implies the existence of physics beyond the Standard Model. The currently most favoured extension is the so-called seesaw mechanism, which would explain both why the left-handed neutrinos are so light compared to the corresponding charged leptons, and why we have not yet seen any right-handed neutrinos. Lepton flavor quantum numbers The members of each generation's weak isospin doublet are assigned leptonic numbers that are conserved under the Standard Model. Electrons and electron neutrinos have an electronic number of , while muons and muon neutrinos have a muonic number of , while tau particles and tau neutrinos have a tauonic number of . 
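These family-number assignments amount to simple bookkeeping. A minimal sketch follows; the reactions used are standard textbook examples assumed for illustration (they are not the elided reactions of this article), and the antiparticle assignments anticipate the next sentence.

```python
# A minimal bookkeeping sketch (standard textbook examples, not article data)
# of the lepton flavour numbers (L_e, L_mu, L_tau); antiparticles carry -1.
L = {
    "e-": (1, 0, 0), "nu_e": (1, 0, 0),
    "mu-": (0, 1, 0), "nu_mu": (0, 1, 0),
    "tau-": (0, 0, 1), "nu_tau": (0, 0, 1),
    "e+": (-1, 0, 0), "anti_nu_e": (-1, 0, 0),
    "mu+": (0, -1, 0), "anti_nu_mu": (0, -1, 0),
    "gamma": (0, 0, 0),
}

def conserves_lepton_numbers(initial, final):
    total = lambda names: tuple(sum(L[n][i] for n in names) for i in range(3))
    return total(initial) == total(final)

# Ordinary muon decay conserves each family number separately:
print(conserves_lepton_numbers(["mu-"], ["e-", "anti_nu_e", "nu_mu"]))  # True
# The unobserved decay mu- -> e- + gamma would violate L_e and L_mu:
print(conserves_lepton_numbers(["mu-"], ["e-", "gamma"]))               # False
```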
The antileptons have their respective generation's leptonic numbers of −1. Conservation of the leptonic numbers means that the number of leptons of the same type remains the same, when particles interact. This implies that leptons and antileptons must be created in pairs of a single generation. For example, the following processes are allowed under conservation of leptonic numbers:    →   + ,  →   + , but none of these:      →   + ,  →   + ,    →   + . However, neutrino oscillations are known to violate the conservation of the individual leptonic numbers. Such a violation is considered to be smoking gun evidence for physics beyond the Standard Model. A much stronger conservation law is the conservation of the total number of leptons ( ), conserved even in the case of neutrino oscillations, but even it is still violated by a tiny amount by the chiral anomaly. Universality The coupling of leptons to all types of gauge boson are flavour-independent: The interaction between leptons and a gauge boson measures the same for each lepton. This property is called lepton universality and has been tested in measurements of the muon and tau lifetimes and of boson partial decay widths, particularly at the Stanford Linear Collider (SLC) and Large Electron–Positron Collider (LEP) experiments. The decay rate () of muons through the process is approximately given by an expression of the form (see muon decay for more details) where is some constant, and is the Fermi coupling constant. The decay rate of tau particles through the process is given by an expression of the same form where is some other constant. Muon–tauon universality implies that . On the other hand, electron–muon universality implies The branching ratios for the electronic mode (17.82%) and muonic (17.39%) mode of tau decay are not equal due to the mass difference of the final state leptons. Universality also accounts for the ratio of muon and tau lifetimes. The lifetime of a lepton (with = "" or "") is related to the decay rate by , where denotes the branching ratios and denotes the resonance width of the process with and replaced by two different particles from "" or "" or "". The ratio of tau and muon lifetime is thus given by Using values from the 2008 Review of Particle Physics for the branching ratios of the muon and tau yields a lifetime ratio of ~ , comparable to the measured lifetime ratio of ~ . The difference is due to and not actually being constants: They depend slightly on the mass of leptons involved. Recent tests of lepton universality in meson decays, performed by the LHCb, BaBar, and Belle experiments, have shown consistent deviations from the Standard Model predictions. However the combined statistical and systematic significance is not yet high enough to claim an observation of new physics. In July 2021 results on lepton flavour universality have been published testing W decays, previous measurements by the LEP had given a slight imbalance but the new measurement by the ATLAS collaboration have twice the precision and give a ratio of , which agrees with the standard-model prediction of unity. In 2024 a preprint by the ATLAS collaboration has published a new value of the most precise ratio so far testing the lepton flavour universality. Table of leptons {| class="wikitable" style="text-align:center" |+Properties of leptons |- !rowspan=2| Spin !rowspan=2| Particle or antiparticle name !rowspan=2| Symbol !rowspan=2| Charge !colspan=3| Lepton flavor number !rowspan=2| Mass !rowspan=2| Lifetime |- ! ! ! 
|- |rowspan=13| |style="text-align:left"| electron | | −1 | +1 |rowspan=2| 0 |rowspan=2| 0 |rowspan=2 style="text-align:right"| |rowspan=2| stable |- |style="text-align:left"| positron | | +1 | −1 |- |style="text-align:left"| muon | | −1 |rowspan=2| 0 | +1 |rowspan=2| 0 |rowspan=2 style="text-align:right"| |rowspan=2 style="text-align:right"|          |- |style="text-align:left"|antimuon | | +1 | −1 |- |style="text-align:left"| tau | | −1 |rowspan=2| 0 |rowspan=2| 0 | +1 |rowspan=2 style="text-align:right"| |rowspan=2 style="text-align:right"| |- |style="text-align:left"| antitau | | +1 | −1 |- !colspan=8| |- |style="text-align:left"| electron neutrino | | rowspan="6" | 0 | +1 |rowspan=2| 0 |rowspan=2| 0 |rowspan=2 style="text-align:left"| <  |rowspan=2| unknown |- |style="text-align:left"| electron antineutrino | | −1 |- |style="text-align:left"| muon neutrino | |rowspan=2| 0 | +1 |rowspan=2| 0 |rowspan=2 style="text-align:left"| < 0.17 |rowspan=2| unknown |- |style="text-align:left"| muon antineutrino | | −1 |- |style="text-align:left"| tau neutrino | |rowspan=2| 0 |rowspan=2| 0 | +1 |rowspan=2 style="text-align:left"| < 15.5 |rowspan=2| unknown |- |style="text-align:left"| tau antineutrino | | −1 |- |} See also Koide formula List of particles Preons – hypothetical particles that were once postulated to be subcomponents of quarks and leptons Notes References Bibliography External links – The PDG compiles authoritative information on particle properties. – a summary of leptons. Elementary particles 1897 in science
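The lifetime-ratio argument in the Universality subsection above can be checked numerically. The sketch below uses approximate, commonly quoted masses, branching ratios, and lifetimes as assumed inputs for illustration; the article's own numerical values are elided in this text.

```python
# Rough numerical check of the tau/muon lifetime ratio discussed under
# "Universality". All inputs are approximate, commonly quoted values assumed
# for illustration; they are not taken from the article text.
m_mu, m_tau = 105.66, 1776.9          # masses in MeV/c^2
B_tau_to_e = 0.1782                   # branching ratio tau -> e nu nu

# Gamma(l -> e nu nu) scales as m_l^5, and the muon decays almost exclusively
# through this channel, so  tau_tau / tau_mu ~ B(tau -> e nu nu) * (m_mu/m_tau)^5.
predicted_ratio = B_tau_to_e * (m_mu / m_tau) ** 5
print(f"predicted  tau_tau/tau_mu ~ {predicted_ratio:.2e}")   # ~ 1.3e-07

measured_ratio = 2.9e-13 / 2.197e-6   # measured lifetimes in seconds
print(f"measured   tau_tau/tau_mu ~ {measured_ratio:.2e}")    # ~ 1.3e-07
```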
Lepton
[ "Physics" ]
4,472
[ "Elementary particles", "Subatomic particles", "Matter" ]
60,840
https://en.wikipedia.org/wiki/Pistacia%20lentiscus
Pistacia lentiscus (also lentisk or mastic) is a dioecious evergreen shrub or small tree of the genus Pistacia native to the Mediterranean Basin. It grows up to tall and is cultivated for its aromatic resin, mainly on the Greek island of Chios, around the Turkish town of Çeşme and northern parts of Iraq. Description The plant is evergreen, from high, with a strong smell of resin, growing in dry and rocky areas in North Africa and Mediterranean Europe. It resists mild to heavy frosts but prefers milder winters and grows on all types of soils, and can grow well in limestone areas and even in salty or saline environments, making it more abundant near the sea. It is also found in woodlands, dehesas (almost deforested pasture areas), Kermes oak woods, wooded areas dominated by other oaks, garrigues, maquis shrublands, hills, gorges, canyons, and rocky hillsides of the entire Mediterranean area. It is a typical species of Mediterranean mixed communities which include myrtle, Kermes oak, Mediterranean dwarf palm, buckthorn and sarsaparilla, and serves as protection and food for birds and other fauna in this ecosystem. It is a very hardy pioneer species dispersed by birds. When older, it develops some large trunks and numerous thicker and longer branches. In appropriate areas, when allowed to grow freely and age, it often becomes a tree of up to . However, logging, grazing, and fires often prevent its development. The leaves are alternate, leathery, and paripinnately compound (i.e., pinnately compound without terminal leaflet) with five or six pairs of deep-green leaflets. It presents very small flowers, the male with five stamens, the female with a 3-part style. The fruit is a drupe, first red and then black when ripe, about in diameter. The fruit, although not commonly consumed, is edible and has a tart raisin-like flavour. Pistacia lentiscus is related to Pistacia terebinthus, with which it hybridizes frequently in contact zones. Pistacia terebinthus is more abundant in the mountains and inland and the mastic is usually found more frequently in areas where the Mediterranean influence of the sea moderates the climate. The mastic tree does not reach the size of the Pistacia terebinthus, but the hybrids are very difficult to distinguish. The mastic has winged stalks to its leaflets, i.e., the stalks are flattened and with side fins, whereas these stems in Pistacia terebinthus are simple. On the west coast of the Mediterranean, Canary Islands and Middle East, it can be confused with P. atlantica. Distribution Pistacia lentiscus is native throughout the Mediterranean region, from Morocco and the Iberian Peninsula in the west through southern France and Turkey to Iraq and Iran in the east. It is also native to the Canary Islands. Ornamental use In urban areas near the sea, where "palmitos" or Mediterranean dwarf palms grow, and other exotic plants, it is often used in gardens and resorts, because of its strength and attractive appearance. Unlike other species of Pistacia, it retains its leaves throughout the year. It has been introduced as an ornamental shrub in Mexico, where it has naturalized and is often seen primarily in suburban and semiarid areas where the summer rainfall climate, contrary to the Mediterranean, does not affect it. Resin The aromatic, ivory-coloured resin, also known as mastic, is harvested as a spice from the cultivated mastic trees grown in the south of the Greek island of Chios in the Aegean Sea, where it is also known by the name "Chios tears". 
Originally liquid, it is hardened, when the weather turns cold, into drops or patties of hard, brittle, translucent resin. When chewed, the resin softens and becomes a bright white and opaque gum. The word mastic derives from the Latin word masticare (to chew), in Greek: μαστιχάω verb mastichein ("to gnash the teeth", the English word completely from the Latin masticate) or massein ("to chew"). In the Kurdish parts of Iraq, the dried resin is used to make rosaries Within the European Union, mastic production in Chios is granted protected designation of origin and protected geographical indication names. Although the tree is native to all of the Mediterranean region, it will release its resin only on selected places, most notably, around Cesme, Turkey and in the southern portion of the Greek island of Chios, the latter being the only place in the world where it is cultivated regularly. The island's mastic production is controlled by a co-operative of "medieval" villages, collectively known as the 'mastichochoria' (Μαστιχοχώρια, lit. "mastic villages"). Cultivation history The resin is collected by bleeding the trees from small cuts made in the bark of the main branches, and allowing the sap to drip onto the specially prepared ground below. The harvesting is done during the summer between July and September. After the mastic is collected, it is washed manually and is set aside to dry, away from the sun, as it will start melting again. Mastic resin is a relatively expensive kind of spice; it has been used principally as a chewing gum for at least 2,400 years. The flavour can be described as a strong, slightly smoky, resiny aroma and can be an acquired taste. Some scholars identify the bakha בכא mentioned in the Bible—as in the Valley of Baca () of Psalm 84—with the mastic plant. The word bakha appears to be derived from the Hebrew word for crying or weeping, and is thought to refer to the "tears" of resin secreted by the mastic plant, along with a sad weeping noise which occurs when the plant is walked on and branches are broken. The Valley of Baca is thought to be a valley near Jerusalem that was covered with low mastic shrubbery, much like some hillsides in northern Israel today. In an additional biblical reference, King David receives divine counsel to place himself opposite the Philistines coming up the Valley of Rephaim, southwest of Jerusalem, such that the "sound of walking on the tops of the bakha shrubs" (קול צעדה בראשי הבכאים) signals the moment to attack (II Samuel V: 22–24). Mastic is known to have been popular in Roman times when children chewed it, and in medieval times, it was highly prized for the sultan's harem both as a breath freshener and for cosmetics. It was the sultan's privilege to chew mastic, and it was considered to have healing properties. The spice's use was widened when Chios became part of the Ottoman Empire, and it remains popular in North Africa and the Near East. An unflattering reference to mastic-chewing was made in Shakespeare's Troilus and Cressida (published 1609) when Agamemnon dismisses the views of the cynic and satirist Thersites as graceless productions of "his mastic jaws". Culinary use Mastic gum is principally used either as a flavouring or for its gum properties, as in mastic chewing gum. As a spice, it continues to be used in Greece to flavour spirits and liqueurs (such as Chios's native drink mastiha), chewing gum, and a number of cakes, pastries, spoon sweets, and desserts. Sometimes, it is even used in making cheese. 
Mastic resin is a key ingredient in dondurma and Turkish puddings, giving those confections their unusual texture and bright whiteness. In Lebanon and Egypt, the spice is used to flavour many dishes, ranging from soups to meats to desserts, while in Morocco, smoke from the resin is used to flavour water. In Turkey, mastic is used as a flavor of Turkish delight. Recently, a mastic-flavoured fizzy drink has also been launched, called "Mast". In the Kurdish parts of Iraq, the fresh resin is used as a spice particularly used for Torshi. Mastic resin is a key ingredient in Greek festival breads, for example, the sweet bread tsoureki and the traditional New Year's vasilopita. Furthermore, mastic is also essential to myron, the holy oil used for chrismation by the Orthodox Churches. Mastic continues to be used for its gum and medicinal properties, as well as its culinary uses. Jordanian chewing gum manufacturer, Sharawi Bros., use the mastic of this shrub as a primary ingredient in their mastic-flavoured products and they distribute the gum to many deli stores worldwide. The resin is used as a primary ingredient in the production of cosmetics such as toothpaste, lotions for the hair and skin, and perfumes. Medicine People in the Mediterranean region have used mastic as a medicine for gastrointestinal ailments for several thousand years. First-century Greek physician and botanist Dioscorides wrote about the medicinal properties of mastic in his classic treatise De Materia Medica (About Medical Substances). Some centuries later, Markellos Empeirikos and Pavlos Eginitis also noticed the effect of mastic on the digestive system. Mastic oil has antibacterial and antifungal properties, and as such is widely used in the preparation of ointments for skin disorders and afflictions. It is also used in the manufacture of plasters. In recent years, university researchers have provided the scientific evidence for the medicinal properties of mastic. A 1985 study by the University of Thessaloniki and by the Meikai University discovered that mastic can reduce bacterial dental plaque in the mouth by 41.5%. A 1998 study by the University of Athens found that mastic oil has antibacterial and antifungal properties. Another 1998 University of Nottingham study claims that mastic can heal peptic ulcers by killing Helicobacter pylori, which causes peptic ulcers, gastritis, and duodenitis. Some in vivo studies have shown that mastic gum has no effect on H. pylori when taken for short periods of time. However, a recent and more extensive study showed that mastic gum reduced H. pylori populations after an insoluble and sticky polymer (poly-β-myrcene) constituent of mastic gum was removed, and if taken for a longer period of time. Miscellanea Apart from its medicinal properties and cosmetic and culinary uses, mastic gum is also used in the production of high-grade varnish. The mastic tree has been introduced into Mexico as an ornamental plant, where it is very prized and fully naturalized. The trees are grown mainly in suburban areas in semiarid zones, and remain undamaged, although the summer rainfall is contrary to its original Mediterranean climate. A related species, P. saportae, has been shown by DNA analysis to be a hybrid between maternal P. lentiscus and paternal P. terebinthus (terebinth or turpentine). The hybrid has imparipinnate leaves, with leaflets semipersistent, subsessile terminal, and sometimes reduced. Usually, P. terebinthus and P. 
lentiscus occupy different biotopes and barely overlap: Mastic appears at lower elevations and near the sea, while the P. terebinthus most frequently inhabits inland and mountainous areas such as the Iberian System. "Dufte-Zeichen" (Scents-signs), the fourth scene from Sonntag aus Licht by Karlheinz Stockhausen, is centred around seven scents, each one associated with one day of the week. "Mastix" is assigned to Wednesday and comes third. Biblical Narrative This tree plays a central role in the narrative of Susanna in the book of Daniel in the Bible. In the story two old men falsely accuse Susanna of adultery. Their lies are exposed when one says it happened under a mastic tree, while the other says it happened under a holly oak. Since the mastic is, at most, , while the oak is, at least, their lies were obvious to all. See also False mastic Greek cuisine Greek food products Mastic (plant resin) Mastichochoria Turkish cuisine References Further reading lentiscus Flora of North Africa Flora of Western Asia Trees of Europe Greek cuisine Resins Spices Chios Trees of Mediterranean climate Garden plants of Europe Garden plants of Africa Garden plants of Asia Drought-tolerant trees Ornamental trees Plants described in 1753 Taxa named by Carl Linnaeus Flora of the Mediterranean basin
Pistacia lentiscus
[ "Physics" ]
2,658
[ "Amorphous solids", "Unsolved problems in physics", "Resins" ]
60,854
https://en.wikipedia.org/wiki/Additive%20category
In mathematics, specifically in category theory, an additive category is a preadditive category C admitting all finitary biproducts. Definition There are two equivalent definitions of an additive category: one as a category equipped with additional structure, and another as a category equipped with no extra structure but whose objects and morphisms satisfy certain equations. Via preadditive categories A category C is preadditive if all its hom-sets are abelian groups and composition of morphisms is bilinear; in other words, C is enriched over the monoidal category of abelian groups. In a preadditive category, every finitary product (including the empty product, i.e., a final object) is necessarily a coproduct (an initial object in the case of the empty diagram), and hence a biproduct, and conversely every finitary coproduct is necessarily a product (this is a consequence of the definition, not a part of it). Thus an additive category is equivalently described as a preadditive category admitting all finitary products and having a null object, or as a preadditive category admitting all finitary coproducts and having a null object. Via semiadditive categories We give an alternative definition. Define a semiadditive category to be a category (note: not a preadditive category) which admits a zero object and all binary biproducts. It is then a remarkable theorem that the Hom sets naturally admit an abelian monoid structure. A proof of this fact is given below. An additive category may then be defined as a semiadditive category in which every morphism has an additive inverse. This then gives the Hom sets an abelian group structure instead of merely an abelian monoid structure. Generalization More generally, one also considers additive -linear categories for a commutative ring . These are categories enriched over the monoidal category of -modules and admitting all finitary biproducts. Examples The original example of an additive category is the category of abelian groups Ab. The zero object is the trivial group, the addition of morphisms is given pointwise, and biproducts are given by direct sums. More generally, every module category over a ring is additive, and so in particular, the category of vector spaces over a field is additive. The algebra of matrices over a ring, thought of as a category as described below, is also additive. Internal characterisation of the addition law Let C be a semiadditive category, so a category having all finitary biproducts. Then every hom-set has an addition, endowing it with the structure of an abelian monoid, and such that the composition of morphisms is bilinear. Moreover, if C is additive, then the two additions on hom-sets must agree. In particular, a semiadditive category is additive if and only if every morphism has an additive inverse. This shows that the addition law for an additive category is internal to that category. To define the addition law, we will use the convention that for a biproduct, pk will denote the projection morphisms, and ik will denote the injection morphisms. The diagonal morphism is the canonical morphism , induced by the universal property of products, such that for . Dually, the codiagonal morphism is the canonical morphism , induced by the universal property of coproducts, such that for . For each object , we define: the addition of the injections to be the diagonal morphism, that is ; the addition of the projections to be the codiagonal morphism, that is . 
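Written out in standard notation (the symbols have been lost in extraction, so the following is a reconstruction using the usual conventions for biproducts, not a verbatim restoration), the diagonal, the codiagonal, and the two sums just defined read:

```latex
% Reconstruction with standard symbols; the notation \Delta, \nabla, i_k, p_k is assumed.
\Delta_A \colon A \to A \oplus A, \qquad p_k \circ \Delta_A = 1_A \quad (k = 1, 2), \\
\nabla_A \colon A \oplus A \to A, \qquad \nabla_A \circ i_k = 1_A \quad (k = 1, 2), \\
i_1 + i_2 := \Delta_A, \qquad p_1 + p_2 := \nabla_A .
```

With these, the sum of two parallel morphisms f, g : A → B constructed in the next paragraph is the composite f + g = ∇_B ∘ (f ⊕ g) ∘ Δ_A.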
Next, given two morphisms , there exists a unique morphism such that equals if , and 0 otherwise. We can therefore define . This addition is both commutative and associative. The associativity can be seen by considering the composition We have , using that . It is also bilinear, using for example that and that . We remark that for a biproduct we have . Using this, we can represent any morphism as a matrix. Matrix representation of morphisms Given objects and in an additive category, we can represent morphisms as -by- matrices where Using that , it follows that addition and composition of matrices obey the usual rules for matrix addition and multiplication. Thus additive categories can be seen as the most general context in which the algebra of matrices makes sense. Recall that the morphisms from a single object  to itself form the endomorphism ring . If we denote the -fold product of  with itself by , then morphisms from to are m-by-n matrices with entries from the ring . Conversely, given any ring , we can form a category  by taking objects An indexed by the set of natural numbers (including 0) and letting the hom-set of morphisms from to be the set of -by- matrices over , and where composition is given by matrix multiplication. Then is an additive category, and equals the -fold power . This construction should be compared with the result that a ring is a preadditive category with just one object, shown here. If we interpret the object as the left module , then this matrix category becomes a subcategory of the category of left modules over . This may be confusing in the special case where or is zero, because we usually don't think of matrices with 0 rows or 0 columns. This concept makes sense, however: such matrices have no entries and so are completely determined by their size. While these matrices are rather degenerate, they do need to be included to get an additive category, since an additive category must have a zero object. Thinking about such matrices can be useful in one way, though: they highlight the fact that given any objects and in an additive category, there is exactly one morphism from to 0 (just as there is exactly one 0-by-1 matrix with entries in ) and exactly one morphism from 0 to (just as there is exactly one 1-by-0 matrix with entries in ) – this is just what it means to say that 0 is a zero object. Furthermore, the zero morphism from to is the composition of these morphisms, as can be calculated by multiplying the degenerate matrices. Additive functors A functor between preadditive categories is additive if it is an abelian group homomorphism on each hom-set in C. If the categories are additive, then a functor is additive if and only if it preserves all biproduct diagrams. That is, if is a biproduct of  in C with projection morphisms and injection morphisms , then should be a biproduct of  in D with projection morphisms and injection morphisms . Almost all functors studied between additive categories are additive. In fact, it is a theorem that all adjoint functors between additive categories must be additive functors (see here). Most of the interesting functors studied in category theory are adjoints. Generalization When considering functors between -linear additive categories, one usually restricts to -linear functors, so those functors giving an -module homomorphism on each hom-set. Special cases A pre-abelian category is an additive category in which every morphism has a kernel and a cokernel. 
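The matrix category described above can be sketched concretely. The following is my own illustration over the ring of integers, with the convention (chosen here, since the article's indices are elided) that a morphism m → n is an n-by-m matrix so that composition is ordinary matrix multiplication; it is not code from the article.

```python
# A concrete sketch (illustration only) of the matrix category Mat(Z):
# objects are natural numbers, a morphism m -> n is an n-by-m integer matrix,
# composition is matrix multiplication, addition of parallel morphisms is
# entrywise addition, and the biproduct of m and n is m + n with block
# matrices as injections and projections.
import numpy as np

def compose(G, F):
    """Composition g o f for f: m -> n (n x m matrix F) and g: n -> p (p x n matrix G)."""
    return G @ F

def add(F1, F2):
    """Addition of two parallel morphisms m -> n."""
    return F1 + F2

def biproduct_injections(m, n):
    """Injections i1: m -> m+n and i2: n -> m+n as block column matrices."""
    i1 = np.vstack([np.eye(m, dtype=int), np.zeros((n, m), dtype=int)])
    i2 = np.vstack([np.zeros((m, n), dtype=int), np.eye(n, dtype=int)])
    return i1, i2

def biproduct_projections(m, n):
    """Projections p1: m+n -> m and p2: m+n -> n as block row matrices."""
    p1 = np.hstack([np.eye(m, dtype=int), np.zeros((m, n), dtype=int)])
    p2 = np.hstack([np.zeros((n, m), dtype=int), np.eye(n, dtype=int)])
    return p1, p2

# Biproduct identities: p_k o i_k = id, and i1 o p1 + i2 o p2 = id on m+n.
m, n = 2, 3
i1, i2 = biproduct_injections(m, n)
p1, p2 = biproduct_projections(m, n)
assert (compose(p1, i1) == np.eye(m, dtype=int)).all()
assert (compose(p2, i2) == np.eye(n, dtype=int)).all()
assert (add(compose(i1, p1), compose(i2, p2)) == np.eye(m + n, dtype=int)).all()
```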
An abelian category is a pre-abelian category such that every monomorphism and epimorphism is normal. Many commonly studied additive categories are in fact abelian categories; for example, Ab is an abelian category. The free abelian groups provide an example of a category that is additive but not abelian. References Nicolae Popescu; 1973; Abelian Categories with Applications to Rings and Modules; Academic Press, Inc. (out of print) goes over all of this very slowly
Additive category
[ "Mathematics" ]
1,670
[ "Mathematical structures", "Category theory", "Additive categories" ]
60,871
https://en.wikipedia.org/wiki/Luminescence
Luminescence is a spontaneous emission of radiation from an electronically or vibrationally excited species not in thermal equilibrium with its environment. A luminescent object emits cold light in contrast to incandescence, where an object only emits light after heating. Generally, the emission of light is due to the movement of electrons between different energy levels within an atom after excitation by external factors. However, the exact mechanism of light emission in vibrationally excited species is unknown. The dials, hands, scales, and signs of aviation and navigational instruments and markings are often coated with luminescent materials in a process known as luminising. Types Ionoluminescence, a result of bombardment by fast ions Radioluminescence, a result of bombardment by ionizing radiation Electroluminescence, a result of an electric current passed through a substance Cathodoluminescence, a result of a luminescent material being struck by electrons Chemiluminescence, the emission of light as a result of a chemical reaction Bioluminescence, a result of biochemical reactions in a living organism Electrochemiluminescence, a result of an electrochemical reaction Lyoluminescence, a result of dissolving a solid (usually heavily irradiated) in a liquid solvent Candoluminescence, is light emitted by certain materials at elevated temperatures, which differs from the blackbody emission expected at the temperature in question. Mechanoluminescence, a result of a mechanical action on a solid Triboluminescence, generated when bonds in a material are broken when that material is scratched, crushed, or rubbed Fractoluminescence, generated when bonds in certain crystals are broken by fractures Piezoluminescence, produced by the action of pressure on certain solids Sonoluminescence, a result of imploding bubbles in a liquid when excited by sound Crystalloluminescence, produced during crystallization Thermoluminescence, the re-emission of absorbed energy when a substance is heated Cryoluminescence, the emission of light when an object is cooled (an example of this is wulfenite) Photoluminescence, a result of the absorption of photons Fluorescence, traditionally defined as the emission of light that ends immediately after the source of excitation is removed. As the definition does not fully describe the phenomenon, quantum mechanics is employed where it is defined as there is no change in spin multiplicity from the state of excitation to emission of light. Phosphorescence, traditionally defined as persistent emission of light after the end of excitation. As the definition does not fully describe the phenomenon, quantum mechanics is employed where it is defined as there is a change in spin multiplicity from the state of excitation to the emission of light. Applications Light-emitting diodes (LEDs) emit light via electro-luminescence. Phosphors, materials that emit light when irradiated by higher-energy electromagnetic radiation or particle radiation Laser, and lamp industry Phosphor thermometry, measuring temperature using phosphorescence Thermoluminescence dating Thermoluminescent dosimeter Non-disruptive observation of processes within a cell. Luminescence occurs in some minerals when they are exposed to low-powered sources of ultraviolet or infrared electromagnetic radiation (for example, portable UV lamps) at atmospheric pressure and atmospheric temperatures. 
This property can be used to help identify minerals at rock outcrops in the field or in the laboratory. History The term luminescence was first introduced in 1888 by Eilhard Wiedemann. See also List of light sources Scientific American, "Luminous Paint" (historical aspects), 10-Dec-1881, p. 368 High-visibility clothing References External links Fluorophores.org A database of luminescent dyes Light sources 1880s neologisms
Luminescence
[ "Chemistry" ]
803
[ "Luminescence", "Molecular physics" ]
60,874
https://en.wikipedia.org/wiki/Photoluminescence
Photoluminescence (abbreviated as PL) is light emission from any form of matter after the absorption of photons (electromagnetic radiation). It is one of many forms of luminescence (light emission) and is initiated by photoexcitation (i.e. photons that excite electrons to a higher energy level in an atom), hence the prefix photo-. Following excitation, various relaxation processes typically occur in which other photons are re-radiated. Time periods between absorption and emission may vary, ranging from the femtosecond regime for emission involving free-carrier plasma in inorganic semiconductors up to milliseconds for phosphorescence processes in molecular systems; under special circumstances the delay of emission may even span minutes or hours. Observation of photoluminescence at a certain energy can be viewed as an indication that an electron populated an excited state associated with this transition energy. While this is generally true in atoms and similar systems, correlations and other more complex phenomena also act as sources for photoluminescence in many-body systems such as semiconductors. A theoretical approach to handle this is given by the semiconductor luminescence equations. Forms Photoluminescence processes can be classified by various parameters such as the energy of the exciting photon with respect to the emission. Resonant excitation describes a situation in which photons of a particular wavelength are absorbed and equivalent photons are very rapidly re-emitted. This is often referred to as resonance fluorescence. For materials in solution or in the gas phase, this process involves electrons but no significant internal energy transitions involving molecular features of the chemical substance between absorption and emission. In crystalline inorganic semiconductors, where an electronic band structure is formed, secondary emission can be more complicated, as events may contain both coherent contributions, such as resonant Rayleigh scattering, where a fixed phase relation with the driving light field is maintained (i.e. energetically elastic processes in which no losses are involved), and incoherent contributions (inelastic modes in which some energy channels into an auxiliary loss mode). The latter originate, e.g., from the radiative recombination of excitons, Coulomb-bound electron-hole pair states in solids. Resonance fluorescence may also show significant quantum optical correlations. More processes may occur when a substance undergoes internal energy transitions before re-emitting the energy from the absorption event. Electrons change energy states by either resonantly gaining energy from absorption of a photon or losing energy by emitting photons. In chemistry-related disciplines, one often distinguishes between fluorescence and phosphorescence. The former is typically a fast process, yet some amount of the original energy is dissipated, so that the re-emitted photons have lower energy than the absorbed excitation photons. The re-emitted photon in this case is said to be red shifted, referring to the reduced energy it carries following this loss (as the Jablonski diagram shows). For phosphorescence, electrons which have absorbed photons undergo intersystem crossing, entering a state with altered spin multiplicity (see term symbol), usually a triplet state. 
Once the excited electron is transferred into this triplet state, electron transition (relaxation) back to the lower singlet state energies is quantum mechanically forbidden, meaning that it happens much more slowly than other transitions. The result is a slow process of radiative transition back to the singlet state, sometimes lasting minutes or hours. This is the basis for "glow in the dark" substances. Photoluminescence is an important technique for measuring the purity and crystalline quality of semiconductors such as GaN and InP and for quantification of the amount of disorder present in a system. Time-resolved photoluminescence (TRPL) is a method where the sample is excited with a light pulse and then the decay in photoluminescence with respect to time is measured. This technique is useful for measuring the minority carrier lifetime of III-V semiconductors like gallium arsenide (GaAs). Photoluminescence properties of direct-gap semiconductors In a typical PL experiment, a semiconductor is excited with a light-source that provides photons with an energy larger than the bandgap energy. The incoming light excites a polarization that can be described with the semiconductor Bloch equations. Once the photons are absorbed, electrons and holes are formed with finite momenta in the conduction and valence bands, respectively. The excitations then undergo energy and momentum relaxation towards the band-gap minimum. Typical mechanisms are Coulomb scattering and the interaction with phonons. Finally, the electrons recombine with holes under emission of photons. Ideal, defect-free semiconductors are many-body systems where the interactions of charge-carriers and lattice vibrations have to be considered in addition to the light-matter coupling. In general, the PL properties are also extremely sensitive to internal electric fields and to the dielectric environment (such as in photonic crystals) which impose further degrees of complexity. A precise microscopic description is provided by the semiconductor luminescence equations. Ideal quantum-well structures An ideal, defect-free semiconductor quantum well structure is a useful model system to illustrate the fundamental processes in typical PL experiments. The discussion is based on results published in Klingshirn (2012) and Balkan (1998). The fictive model structure for this discussion has two confined quantized electronic and two hole subbands, e1, e2 and h1, h2, respectively. The linear absorption spectrum of such a structure shows the exciton resonances of the first (e1h1) and the second quantum well subbands (e2, h2), as well as the absorption from the corresponding continuum states and from the barrier. Photoexcitation In general, three different excitation conditions are distinguished: resonant, quasi-resonant, and non-resonant. For the resonant excitation, the central energy of the laser corresponds to the lowest exciton resonance of the quantum well. No, or only a negligible amount of the excess, energy is injected to the carrier system. For these conditions, coherent processes contribute significantly to the spontaneous emission. The decay of polarization creates excitons directly. The detection of PL is challenging for resonant excitation as it is difficult to discriminate contributions from the excitation, i.e., stray-light and diffuse scattering from surface roughness. Thus, speckle and resonant Rayleigh-scattering are always superimposed to the incoherent emission. 
In case of the non-resonant excitation, the structure is excited with some excess energy. This is the typical situation used in most PL experiments as the excitation energy can be discriminated using a spectrometer or an optical filter. One has to distinguish between quasi-resonant excitation and barrier excitation. For quasi-resonant conditions, the energy of the excitation is tuned above the ground state but still below the barrier absorption edge, for example, into the continuum of the first subband. The polarization decay for these conditions is much faster than for resonant excitation and coherent contributions to the quantum well emission are negligible. The initial temperature of the carrier system is significantly higher than the lattice temperature due to the surplus energy of the injected carriers. Finally, only the electron-hole plasma is initially created. It is then followed by the formation of excitons. In case of barrier excitation, the initial carrier distribution in the quantum well strongly depends on the carrier scattering between barrier and the well. Relaxation Initially, the laser light induces coherent polarization in the sample, i.e., the transitions between electron and hole states oscillate with the laser frequency and a fixed phase. The polarization dephases typically on a sub-100 fs time-scale in case of nonresonant excitation due to ultra-fast Coulomb- and phonon-scattering. The dephasing of the polarization leads to creation of populations of electrons and holes in the conduction and the valence bands, respectively. The lifetime of the carrier populations is rather long, limited by radiative and non-radiative recombination such as Auger recombination. During this lifetime a fraction of electrons and holes may form excitons, this topic is still controversially discussed in the literature. The formation rate depends on the experimental conditions such as lattice temperature, excitation density, as well as on the general material parameters, e.g., the strength of the Coulomb-interaction or the exciton binding energy. The characteristic time-scales are in the range of hundreds of picoseconds in GaAs; they appear to be much shorter in wide-gap semiconductors. Directly after the excitation with short (femtosecond) pulses and the quasi-instantaneous decay of the polarization, the carrier distribution is mainly determined by the spectral width of the excitation, e.g., a laser pulse. The distribution is thus highly non-thermal and resembles a Gaussian distribution, centered at a finite momentum. In the first hundreds of femtoseconds, the carriers are scattered by phonons, or at elevated carrier densities via Coulomb-interaction. The carrier system successively relaxes to the Fermi–Dirac distribution typically within the first picosecond. Finally, the carrier system cools down under the emission of phonons. This can take up to several nanoseconds, depending on the material system, the lattice temperature, and the excitation conditions such as the surplus energy. Initially, the carrier temperature decreases fast via emission of optical phonons. This is quite efficient due to the comparatively large energy associated with optical phonons, (36meV or 420K in GaAs) and their rather flat dispersion, allowing for a wide range of scattering processes under conservation of energy and momentum. Once the carrier temperature decreases below the value corresponding to the optical phonon energy, acoustic phonons dominate the relaxation. 
Here, cooling is less efficient due to their dispersion and small energies, and the temperature decreases much more slowly beyond the first tens of picoseconds. At elevated excitation densities, the carrier cooling is further inhibited by the so-called hot-phonon effect. The relaxation of a large number of hot carriers leads to a high generation rate of optical phonons which exceeds the decay rate into acoustic phonons. This creates a non-equilibrium "over-population" of optical phonons and thus causes their increased reabsorption by the charge-carriers, significantly suppressing any cooling. Thus, the higher the carrier density, the more slowly the system cools. Radiative recombination The emission directly after the excitation is spectrally very broad, yet still centered in the vicinity of the strongest exciton resonance. As the carrier distribution relaxes and cools, the width of the PL peak decreases and the emission energy shifts to match the exciton ground state for ideal samples without disorder. The PL spectrum approaches its quasi-steady-state shape defined by the distribution of electrons and holes. Increasing the excitation density changes the emission spectra. They are dominated by the excitonic ground state for low densities. Additional peaks from higher subband transitions appear as the carrier density or lattice temperature is increased, as these states become more and more populated. Also, the width of the main PL peak increases significantly with rising excitation due to excitation-induced dephasing, and the emission peak experiences a small shift in energy due to the Coulomb-renormalization and phase-filling. In general, both exciton populations and plasma, i.e., uncorrelated electrons and holes, can act as sources for photoluminescence as described in the semiconductor-luminescence equations. Both yield very similar spectral features which are difficult to distinguish; their emission dynamics, however, vary significantly. The decay of excitons yields a single-exponential decay function, since the probability of their radiative recombination does not depend on the carrier density. The probability of spontaneous emission for uncorrelated electrons and holes is approximately proportional to the product of electron and hole populations, eventually leading to a non-single-exponential decay described by a hyperbolic function. Effects of disorder Real material systems always incorporate disorder. Examples are structural defects in the lattice or disorder due to variations of the chemical composition. Their treatment is extremely challenging for microscopic theories due to the lack of detailed knowledge about perturbations of the ideal structure. Thus, the influence of the extrinsic effects on the PL is usually addressed phenomenologically. In experiments, disorder can lead to localization of carriers and hence drastically increase the photoluminescence lifetimes, as localized carriers cannot as easily find nonradiative recombination centers as free ones can. Researchers from the King Abdullah University of Science and Technology (KAUST) have studied the photoinduced entropy (i.e. thermodynamic disorder) of InGaN/GaN p-i-n double-heterostructure and AlGaN nanowires using temperature-dependent photoluminescence. They defined the photoinduced entropy as a thermodynamic quantity that represents the unavailability of a system's energy for conversion into useful work due to carrier recombination and photon emission. 
They have also related the change in entropy generation to the change in photocarrier dynamics in the nanowire active regions using results from time-resolved photoluminescence study. They hypothesized that the amount of generated disorder in the InGaN layers eventually increases as the temperature approaches room temperature because of the thermal activation of surface states, while an insignificant increase was observed in AlGaN nanowires, indicating lower degrees of disorder-induced uncertainty in the wider bandgap semiconductor. To study the photoinduced entropy, the scientists have developed a mathematical model that considers the net energy exchange resulting from photoexcitation and photoluminescence. Photoluminescent materials for temperature detection In phosphor thermometry, the temperature dependence of the photoluminescence process is exploited to measure temperature. Experimental methods Photoluminescence spectroscopy is a widely used technique for characterisation of the optical and electronic properties of semiconductors and molecules. The technique itself is fast, contactless, and nondestructive. Therefore, it can be used to study the optoelectronic properties of materials of various sizes (from microns to centimeters) during the fabrication process without complex sample preparation. For example, photoluminescence measurements of solar cell absorbers can predict the maximum voltage the material could produce. In chemistry, the method is more often referred to as fluorescence spectroscopy, but the instrumentation is the same. The relaxation processes can be studied using time-resolved fluorescence spectroscopy to find the decay lifetime of the photoluminescence. These techniques can be combined with microscopy, to map the intensity (confocal microscopy) or the lifetime (fluorescence-lifetime imaging microscopy) of the photoluminescence across a sample (e.g. a semiconducting wafer, or a biological sample that has been marked with fluorescent molecules). Modulated photoluminescence is a specific method for measuring the complex frequency response of the photoluminescence signal to a sinusoidal excitation, allowing for the direct extraction of minority carrier lifetime without the need for intensity calibrations. It has been used to study the influence of interface defects on the recombination of excess carriers in crystalline silicon wafers with different passivation schemes. See also Chemiluminescence Phosphorescence References Further reading Spectroscopy Luminescence
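For the modulated photoluminescence method described above, the simplest case of a single effective lifetime τ gives a closed form: under sinusoidal excitation at angular frequency ω the PL responds as a one-pole low-pass filter, so the lifetime follows directly from the measured phase lag φ. This single-lifetime relation is the textbook limit and is stated here as an illustration, not as a claim about any particular sample:

\[ \tan\varphi = \omega\tau \quad\Longrightarrow\quad \tau = \frac{\tan\varphi}{\omega}. \]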
Photoluminescence
[ "Physics", "Chemistry" ]
3,308
[ "Luminescence", "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Spectroscopy" ]
60,875
https://en.wikipedia.org/wiki/Triboluminescence
Triboluminescence is a phenomenon in which light is generated when a material is mechanically pulled apart, ripped, scratched, crushed, or rubbed (see tribology). The phenomenon is not fully understood but appears in most cases to be caused by the separation and reunification of static electric charges (see also the triboelectric effect). The term comes from the Greek τρίβειν ("to rub"; see tribology) and the Latin lumen (light). Triboluminescence can be observed when breaking sugar crystals and peeling adhesive tapes. Triboluminescence is often used as a synonym for fractoluminescence (a term mainly used when referring only to light emitted from fractured crystals). Triboluminescence differs from piezoluminescence in that a piezoluminescent material emits light when deformed, as opposed to broken. These are examples of mechanoluminescence, which is luminescence resulting from any mechanical action on a solid. History Quartz rattlers of the Uncompahgre Ute indigenous people The Uncompahgre Ute indigenous people from Central Colorado are among the first documented groups of people in the world credited with applying mechanoluminescence, using quartz crystals to generate light. The Ute constructed unique ceremonial rattles made from buffalo rawhide, which they filled with clear quartz crystals collected from the mountains of Colorado and Utah. When the rattles were shaken at night during ceremonies, the friction and mechanical stress of the quartz crystals impacting together produced flashes of light visible through the translucent buffalo hide. Early scientific reports The first recorded observation is attributed to English scholar Francis Bacon, who recorded in his 1620 Novum Organum that "It is well known that all sugar, whether candied or plain, if it be hard, will sparkle when broken or scraped in the dark." The scientist Robert Boyle also reported on some of his work on triboluminescence in 1663. In 1675, the astronomer Jean-Felix Picard observed that his barometer was glowing in the dark as he carried it. His barometer consisted of a glass tube that was partially filled with mercury. The empty space above the mercury would glow whenever the mercury slid down the glass tube. In the late 1790s, sugar production began to produce more refined sugar crystals. These crystals were formed into a large solid cone for transport and sale. This solid sugar cone had to be broken into usable chunks using a sugar nips device. People began to notice that tiny bursts of light were visible as sugar was "nipped" in low light, an established example of triboluminescence. Mechanism of action There remain a few ambiguities about the effect. The current theory of triboluminescence (based upon crystallographic, spectroscopic, and other experimental evidence) is that upon fracture of asymmetrical materials, charge is separated. When the charges recombine, the electrical discharge ionizes the surrounding air, causing a flash of light. Research further suggests that crystals that display triboluminescence often lack symmetry and are poor conductors. However, there are substances which break this rule, and which do not possess asymmetry yet display triboluminescence, such as hexakis(antipyrine)terbium iodide. It is thought that these materials contain impurities, which make the substance locally asymmetric. Further information on some of the possible processes involved can be found in the page on the triboelectric effect. 
The biological phenomenon of triboluminescence is thought to be controlled by recombination of free radicals during mechanical activation. Examples In common materials Certain household materials and substances can be seen to exhibit the property: Ordinary pressure-sensitive tape ("Scotch tape") displays a glowing line where the end of the tape is being pulled away from the roll. Soviet scientists observed in 1953 that unpeeling a roll of tape in a vacuum produced X-rays. The mechanism of X-ray generation was studied further in 2008. Similar X-ray emissions have also been observed with metals. Opening an envelope sealed with polymer glue may generate light that can be viewed as blue flashes in darkness. When sugar crystals are crushed, tiny electrical fields are created, separating positive and negative charges that create sparks while trying to reunite. Wint-O-Green Life Savers work especially well for creating such sparks, because wintergreen oil (methyl salicylate) is fluorescent and converts ultraviolet light into blue light. A diamond may begin to glow while being rubbed; this occasionally happens to diamonds while a facet is being ground or the diamond is being sawn during the cutting process. Diamonds may fluoresce blue or red. Some other minerals, such as quartz, are triboluminescent, emitting light when rubbed together. Triboluminescence as a biological phenomenon is observed in mechanical deformation and contact electrification of epidermal surface of osseous and soft tissues, during chewing food, at friction in joints of vertebrae, during sexual intercourse, and during blood circulation. Water jet abrasive cutting of ceramics (e.g., tiles) creates a yellow/orange glow at the point of impact of very high-speed flow. Chemicals notable for their triboluminescence Europium tetrakis (dibenzoylmethide)triethylammonium emits particularly bright red flashes upon the destruction of its crystals. Triphenylphosphinebis(pyridine)thiocyanatocopper(I) emits a reasonably strong blue light when crystals of it are fractured. This luminescence is not as extreme as the red luminescence; however, it is still very clearly visible to the naked eye in standard settings. N-acetylanthranilic acid emits a deep blue light when its crystals are fractured. Fractoluminescence Fractoluminescence is often used as a synonym for triboluminescence. It is the emission of light from the fracture (rather than rubbing) of a crystal, but fracturing often occurs with rubbing. Depending upon the atomic and molecular composition of the crystal, when the crystal fractures, a charge separation can occur, making one side of the fractured crystal positively charged and the other side negatively charged. Like in triboluminescence, if the charge separation results in a large enough electric potential, a discharge across the gap and through the bath gas between the interfaces can occur. The potential at which this occurs depends upon the dielectric properties of the bath gas. EMR propagation during fracturing The emission of electromagnetic radiation (EMR) during plastic deformation and crack propagation in metals and rocks has been studied. The EMR emissions from metals and alloys have also been explored and confirmed. Molotskii presented a dislocation mechanism for this type of EMR emission. In 2005, Srilakshmi and Misra reported an additional phenomenon of secondary EMR during plastic deformation and crack propagation in uncoated and metal-coated metals and alloys. 
EMR during micro-plastic deformation and crack propagation in several metals and alloys, and transient magnetic field generation during necking in ferromagnetic metals, were reported by Misra (1973–75); these findings have been confirmed and explored by several researchers. Tudik and Valuev (1980) were able to measure the EMR frequency during tensile fracture of iron and aluminum in the region of 100 THz by using photomultipliers. Srilakshmi and Misra (2005a) also reported an additional phenomenon of secondary electromagnetic radiation in uncoated and metal-coated metals and alloys. If a solid material is subjected to stresses of large amplitude, which can cause plastic deformation and fracture, emissions such as thermal, acoustic, ion, and exo-emissions occur. Deformation-induced EMR The study of deformation is essential for the development of new materials. Deformation in metals depends on temperature, type of stress applied, strain rate, oxidation, and corrosion. Deformation-induced EMR can be divided into three categories: effects in ionic crystal materials, effects in rocks and granites, and effects in metals and alloys. EMR emission depends on the orientation of the grains in individual crystals, since material properties differ in differing directions. The amplitude of the EMR pulse increases as long as the crack grows, as new atomic bonds are broken and lead to EMR. The pulse starts to decay as the cracking halts. Observations from experiments showed that emitted EMR signals contain mixed frequency components. Test methods to measure EMR The tensile test is the most widely used method for characterizing the mechanical properties of materials. From any complete tensile test record, one can obtain important information about the material's elastic properties, the character and extent of plastic deformation, yield and tensile strengths, and toughness. The information obtained from one test justifies the extensive use of tensile tests in engineering materials research. Therefore, investigations of EMR emissions are mainly based on tensile tests of the specimens. Experiments show that tensile crack formation excites more intense EMR than shear cracking, and that increasing the elasticity, strength, and loading rate during uniaxial loading increases the amplitude. Poisson's ratio is a key parameter for EMR characterization during triaxial compression. If the Poisson's ratio is lower, it is harder for the material to strain transversally and hence there is a higher probability of new fractures. See also Earthquake light List of light sources Piezoelectricity Sonoluminescence Triboelectric effect References Further reading External links Triboluminescence Discussion on Tribo Net (2010) Bandaids glow when opening?! - Everyday Mysteries on Youtube (2018) Luminescence Light sources Electromagnetic radiation Photochemistry Chemistry
Triboluminescence
[ "Physics", "Chemistry" ]
2,022
[ "Physical phenomena", "Luminescence", "Molecular physics", "Electromagnetic radiation", "Radiation", "nan" ]
60,876
https://en.wikipedia.org/wiki/Markov%20chain
In probability theory and statistics, a Markov chain or Markov process is a stochastic process describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Informally, this may be thought of as, "What happens next depends only on the state of affairs now." A countably infinite sequence, in which the chain moves state at discrete time steps, gives a discrete-time Markov chain (DTMC). A continuous-time process is called a continuous-time Markov chain (CTMC). Markov processes are named in honor of the Russian mathematician Andrey Markov. Markov chains have many applications as statistical models of real-world processes. They provide the basis for general stochastic simulation methods known as Markov chain Monte Carlo, which are used for simulating sampling from complex probability distributions, and have found application in areas including Bayesian statistics, biology, chemistry, economics, finance, information theory, physics, signal processing, and speech processing. The adjectives Markovian and Markov are used to describe something that is related to a Markov process. Principles Definition A Markov process is a stochastic process that satisfies the Markov property (sometimes characterized as "memorylessness"). In simpler terms, it is a process for which predictions can be made regarding future outcomes based solely on its present state and—most importantly—such predictions are just as good as the ones that could be made knowing the process's full history. In other words, conditional on the present state of the system, its future and past states are independent. A Markov chain is a type of Markov process that has either a discrete state space or a discrete index set (often representing time), but the precise definition of a Markov chain varies. For example, it is common to define a Markov chain as a Markov process in either discrete or continuous time with a countable state space (thus regardless of the nature of time), but it is also common to define a Markov chain as having discrete time in either countable or continuous state space (thus regardless of the state space). Types of Markov chains The system's state space and time parameter index need to be specified. The following table gives an overview of the different instances of Markov processes for different levels of state space generality and for discrete time v. continuous time: Note that there is no definitive agreement in the literature on the use of some of the terms that signify special cases of Markov processes. Usually the term "Markov chain" is reserved for a process with a discrete set of times, that is, a discrete-time Markov chain (DTMC), but a few authors use the term "Markov process" to refer to a continuous-time Markov chain (CTMC) without explicit mention. In addition, there are other extensions of Markov processes that are referred to as such but do not necessarily fall within any of these four categories (see Markov model). Moreover, the time index need not necessarily be real-valued; like with the state space, there are conceivable processes that move through index sets with other mathematical constructs. Notice that the general state space continuous-time Markov chain is general to such a degree that it has no designated term. While the time parameter is usually discrete, the state space of a Markov chain does not have any generally agreed-on restrictions: the term may refer to a process on an arbitrary state space. 
However, many applications of Markov chains employ finite or countably infinite state spaces, which have a more straightforward statistical analysis. Besides time-index and state-space parameters, there are many other variations, extensions and generalizations (see Variations). For simplicity, most of this article concentrates on the discrete-time, discrete state-space case, unless mentioned otherwise. Transitions The changes of state of the system are called transitions. The probabilities associated with various state changes are called transition probabilities. The process is characterized by a state space, a transition matrix describing the probabilities of particular transitions, and an initial state (or initial distribution) across the state space. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state, and the process does not terminate. A discrete-time random process involves a system which is in a certain state at each step, with the state changing randomly between steps. The steps are often thought of as moments in time, but they can equally well refer to physical distance or any other discrete measurement. Formally, the steps are the integers or natural numbers, and the random process is a mapping of these to states. The Markov property states that the conditional probability distribution for the system at the next step (and in fact at all future steps) depends only on the current state of the system, and not additionally on the state of the system at previous steps. Since the system changes randomly, it is generally impossible to predict with certainty the state of a Markov chain at a given point in the future. However, the statistical properties of the system's future can be predicted. In many applications, it is these statistical properties that are important. History Andrey Markov studied Markov processes in the early 20th century, publishing his first paper on the topic in 1906. Markov Processes in continuous time were discovered long before his work in the early 20th century in the form of the Poisson process. Markov was interested in studying an extension of independent random sequences, motivated by a disagreement with Pavel Nekrasov who claimed independence was necessary for the weak law of large numbers to hold. In his first paper on Markov chains, published in 1906, Markov showed that under certain conditions the average outcomes of the Markov chain would converge to a fixed vector of values, so proving a weak law of large numbers without the independence assumption, which had been commonly regarded as a requirement for such mathematical laws to hold. Markov later used Markov chains to study the distribution of vowels in Eugene Onegin, written by Alexander Pushkin, and proved a central limit theorem for such chains. In 1912 Henri Poincaré studied Markov chains on finite groups with an aim to study card shuffling. Other early uses of Markov chains include a diffusion model, introduced by Paul and Tatyana Ehrenfest in 1907, and a branching process, introduced by Francis Galton and Henry William Watson in 1873, preceding the work of Markov. After the work of Galton and Watson, it was later revealed that their branching process had been independently discovered and studied around three decades earlier by Irénée-Jules Bienaymé. Starting in 1928, Maurice Fréchet became interested in Markov chains, eventually resulting in him publishing in 1938 a detailed study on Markov chains. 
Andrey Kolmogorov developed in a 1931 paper a large part of the early theory of continuous-time Markov processes. Kolmogorov was partly inspired by Louis Bachelier's 1900 work on fluctuations in the stock market as well as Norbert Wiener's work on Einstein's model of Brownian movement. He introduced and studied a particular set of Markov processes known as diffusion processes, where he derived a set of differential equations describing the processes. Independent of Kolmogorov's work, Sydney Chapman derived in a 1928 paper an equation, now called the Chapman–Kolmogorov equation, in a less mathematically rigorous way than Kolmogorov, while studying Brownian movement. The differential equations are now called the Kolmogorov equations or the Kolmogorov–Chapman equations. Other mathematicians who contributed significantly to the foundations of Markov processes include William Feller, starting in 1930s, and then later Eugene Dynkin, starting in the 1950s. Examples Mark V. Shaney is a third-order Markov chain program, and a Markov text generator. It ingests the sample text (the Tao Te Ching, or the posts of a Usenet group) and creates a massive list of every sequence of three successive words (triplet) which occurs in the text. It then chooses two words at random, and looks for a word which follows those two in one of the triplets in its massive list. If there is more than one, it picks at random (identical triplets count separately, so a sequence which occurs twice is twice as likely to be picked as one which only occurs once). It then adds that word to the generated text. Then, in the same way, it picks a triplet that starts with the second and third words in the generated text, and that gives a fourth word. It adds the fourth word, then repeats with the third and fourth words, and so on. Random walks based on integers and the gambler's ruin problem are examples of Markov processes. Some variations of these processes were studied hundreds of years earlier in the context of independent variables. Two important examples of Markov processes are the Wiener process, also known as the Brownian motion process, and the Poisson process, which are considered the most important and central stochastic processes in the theory of stochastic processes. These two processes are Markov processes in continuous time, while random walks on the integers and the gambler's ruin problem are examples of Markov processes in discrete time. A famous Markov chain is the so-called "drunkard's walk", a random walk on the number line where, at each step, the position may change by +1 or −1 with equal probability. From any position there are two possible transitions, to the next or previous integer. The transition probabilities depend only on the current position, not on the manner in which the position was reached. For example, the transition probabilities from 5 to 4 and 5 to 6 are both 0.5, and all other transition probabilities from 5 are 0. These probabilities are independent of whether the system was previously in 4 or 6. A series of independent states (for example, a series of coin flips) satisfies the formal definition of a Markov chain. However, the theory is usually applied only when the probability distribution of the next state depends on the current one. A non-Markov example Suppose that there is a coin purse containing five quarters (each worth 25¢), five dimes (each worth 10¢), and five nickels (each worth 5¢), and one by one, coins are randomly drawn from the purse and are set on a table. 
If represents the total value of the coins set on the table after draws, with , then the sequence is not a Markov process. To see why this is the case, suppose that in the first six draws, all five nickels and a quarter are drawn. Thus . If we know not just , but the earlier values as well, then we can determine which coins have been drawn, and we know that the next coin will not be a nickel; so we can determine that with probability 1. But if we do not know the earlier values, then based only on the value we might guess that we had drawn four dimes and two nickels, in which case it would certainly be possible to draw another nickel next. Thus, our guesses about are impacted by our knowledge of values prior to . However, it is possible to model this scenario as a Markov process. Instead of defining to represent the total value of the coins on the table, we could define to represent the count of the various coin types on the table. For instance, could be defined to represent the state where there is one quarter, zero dimes, and five nickels on the table after 6 one-by-one draws. This new model could be represented by possible states, where each state represents the number of coins of each type (from 0 to 5) that are on the table. (Not all of these states are reachable within 6 draws.) Suppose that the first draw results in state . The probability of achieving now depends on ; for example, the state is not possible. After the second draw, the third draw depends on which coins have so far been drawn, but no longer only on the coins that were drawn for the first state (since probabilistically important information has since been added to the scenario). In this way, the likelihood of the state depends exclusively on the outcome of the state. Formal definition Discrete-time Markov chain A discrete-time Markov chain is a sequence of random variables X1, X2, X3, ... with the Markov property, namely that the probability of moving to the next state depends only on the present state and not on the previous states: if both conditional probabilities are well defined, that is, if The possible values of Xi form a countable set S called the state space of the chain. Variations Time-homogeneous Markov chains are processes where for all n. The probability of the transition is independent of n. Stationary Markov chains are processes where for all n and k. Every stationary chain can be proved to be time-homogeneous by Bayes' rule.A necessary and sufficient condition for a time-homogeneous Markov chain to be stationary is that the distribution of is a stationary distribution of the Markov chain. A Markov chain with memory (or a Markov chain of order m) where m is finite, is a process satisfying In other words, the future state depends on the past m states. It is possible to construct a chain from which has the 'classical' Markov property by taking as state space the ordered m-tuples of X values, i.e., . Continuous-time Markov chain A continuous-time Markov chain (Xt)t ≥ 0 is defined by a finite or countable state space S, a transition rate matrix Q with dimensions equal to that of the state space and initial probability distribution defined on the state space. For i ≠ j, the elements qij are non-negative and describe the rate of the process transitions from state i to state j. The elements qii are chosen such that each row of the transition rate matrix sums to zero, while the row-sums of a probability transition matrix in a (discrete) Markov chain are all equal to one. 
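To make the two conventions concrete: the rows of a discrete-time transition matrix sum to one, while the rows of a continuous-time rate matrix Q sum to zero, and the transition probabilities over a finite time t follow from the matrix exponential of tQ. The sketch below illustrates this with an invented two-state rate matrix; all rates are purely illustrative.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative 2-state continuous-time chain (e.g. "on"/"off"); the rates are made up.
Q = np.array([[-0.5, 0.5],     # leave state 0 at rate 0.5
              [ 2.0, -2.0]])   # leave state 1 at rate 2.0
print(Q.sum(axis=1))           # rows of a rate matrix sum to 0

# Transition probabilities over a finite time t: P(t) = expm(t * Q).
for t in (0.1, 1.0, 10.0):
    P_t = expm(t * Q)
    print(t, P_t, P_t.sum(axis=1))   # each P(t) is stochastic: its rows sum to 1
```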
There are three equivalent definitions of the process. Infinitesimal definition Let be the random variable describing the state of the process at time t, and assume the process is in a state i at time t. Then, knowing , is independent of previous values , and as h → 0 for all j and for all t, where is the Kronecker delta, using the little-o notation. The can be seen as measuring how quickly the transition from i to j happens. Jump chain/holding time definition Define a discrete-time Markov chain Yn to describe the nth jump of the process and variables S1, S2, S3, ... to describe holding times in each of the states where Si follows the exponential distribution with rate parameter −qYiYi. Transition probability definition For any value n = 0, 1, 2, 3, ... and times indexed up to this value of n: t0, t1, t2, ... and all states recorded at these times i0, i1, i2, i3, ... it holds that where pij is the solution of the forward equation (a first-order differential equation) with initial condition P(0) is the identity matrix. Finite state space If the state space is finite, the transition probability distribution can be represented by a matrix, called the transition matrix, with the (i, j)th element of P equal to Since each row of P sums to one and all elements are non-negative, P is a right stochastic matrix. Stationary distribution relation to eigenvectors and simplices A stationary distribution is a (row) vector, whose entries are non-negative and sum to 1, is unchanged by the operation of transition matrix P on it and so is defined by By comparing this definition with that of an eigenvector we see that the two concepts are related and that is a normalized () multiple of a left eigenvector e of the transition matrix P with an eigenvalue of 1. If there is more than one unit eigenvector then a weighted sum of the corresponding stationary states is also a stationary state. But for a Markov chain one is usually more interested in a stationary state that is the limit of the sequence of distributions for some initial distribution. The values of a stationary distribution are associated with the state space of P and its eigenvectors have their relative proportions preserved. Since the components of π are positive and the constraint that their sum is unity can be rewritten as we see that the dot product of π with a vector whose components are all 1 is unity and that π lies on a simplex. Time-homogeneous Markov chain with a finite state space If the Markov chain is time-homogeneous, then the transition matrix P is the same after each step, so the k-step transition probability can be computed as the k-th power of the transition matrix, Pk. If the Markov chain is irreducible and aperiodic, then there is a unique stationary distribution . Additionally, in this case Pk converges to a rank-one matrix in which each row is the stationary distribution : where 1 is the column vector with all entries equal to 1. This is stated by the Perron–Frobenius theorem. If, by whatever means, is found, then the stationary distribution of the Markov chain in question can be easily determined for any starting distribution, as will be explained below. For some stochastic matrices P, the limit does not exist while the stationary distribution does, as shown by this example: (This example illustrates a periodic Markov chain.) Because there are a number of different special cases to consider, the process of finding this limit if it exists can be a lengthy task. 
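For a small chain, this limit and the stationary distribution can simply be computed numerically, either by raising P to a high power or by taking the left eigenvector for eigenvalue 1. A minimal sketch with an illustrative irreducible, aperiodic matrix:

```python
import numpy as np

P = np.array([[0.7, 0.2, 0.1],      # illustrative irreducible, aperiodic chain
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

# Left eigenvector of P for eigenvalue 1: solve pi P = pi, i.e. P.T v = v.
eigvals, eigvecs = np.linalg.eig(P.T)
v = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = v / v.sum()                     # normalize so the entries sum to 1
print("stationary distribution:", pi)

# P^k approaches a rank-one matrix in which every row equals pi.
print(np.linalg.matrix_power(P, 50))
```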
However, there are many techniques that can assist in finding this limit. Let P be an n×n matrix, and define It is always true that Subtracting Q from both sides and factoring then yields where In is the identity matrix of size n, and 0n,n is the zero matrix of size n×n. Multiplying together stochastic matrices always yields another stochastic matrix, so Q must be a stochastic matrix (see the definition above). It is sometimes sufficient to use the matrix equation above and the fact that Q is a stochastic matrix to solve for Q. Including the fact that the sum of each the rows in P is 1, there are n+1 equations for determining n unknowns, so it is computationally easier if on the one hand one selects one row in Q and substitutes each of its elements by one, and on the other one substitutes the corresponding element (the one in the same column) in the vector 0, and next left-multiplies this latter vector by the inverse of transformed former matrix to find Q. Here is one method for doing so: first, define the function f(A) to return the matrix A with its right-most column replaced with all 1's. If [f(P − In)]−1 exists then Explain: The original matrix equation is equivalent to a system of n×n linear equations in n×n variables. And there are n more linear equations from the fact that Q is a right stochastic matrix whose each row sums to 1. So it needs any n×n independent linear equations of the (n×n+n) equations to solve for the n×n variables. In this example, the n equations from "Q multiplied by the right-most column of (P-In)" have been replaced by the n stochastic ones. One thing to notice is that if P has an element Pi,i on its main diagonal that is equal to 1 and the ith row or column is otherwise filled with 0's, then that row or column will remain unchanged in all of the subsequent powers Pk. Hence, the ith row or column of Q will have the 1 and the 0's in the same positions as in P. Convergence speed to the stationary distribution As stated earlier, from the equation (if exists) the stationary (or steady state) distribution is a left eigenvector of row stochastic matrix P. Then assuming that P is diagonalizable or equivalently that P has n linearly independent eigenvectors, speed of convergence is elaborated as follows. (For non-diagonalizable, that is, defective matrices, one may start with the Jordan normal form of P and proceed with a bit more involved set of arguments in a similar way.) Let U be the matrix of eigenvectors (each normalized to having an L2 norm equal to 1) where each column is a left eigenvector of P and let Σ be the diagonal matrix of left eigenvalues of P, that is, Σ = diag(λ1,λ2,λ3,...,λn). Then by eigendecomposition Let the eigenvalues be enumerated such that: Since P is a row stochastic matrix, its largest left eigenvalue is 1. If there is a unique stationary distribution, then the largest eigenvalue and the corresponding eigenvector is unique too (because there is no other which solves the stationary distribution equation above). Let ui be the i-th column of U matrix, that is, ui is the left eigenvector of P corresponding to λi. Also let x be a length n row vector that represents a valid probability distribution; since the eigenvectors ui span we can write If we multiply x with P from right and continue this operation with the results, in the end we get the stationary distribution . In other words, = a1 u1 ← xPP...P = xPk as k → ∞. 
That means Since is parallel to u1(normalized by L2 norm) and (k) is a probability vector, (k) approaches to a1 u1 = as k → ∞ with a speed in the order of λ2/λ1 exponentially. This follows because hence λ2/λ1 is the dominant term. The smaller the ratio is, the faster the convergence is. Random noise in the state distribution can also speed up this convergence to the stationary distribution. General state space Harris chains Many results for Markov chains with finite state space can be generalized to chains with uncountable state space through Harris chains. The use of Markov chains in Markov chain Monte Carlo methods covers cases where the process follows a continuous state space. Locally interacting Markov chains "Locally interacting Markov chains" are Markov chains with an evolution that takes into account the state of other Markov chains. This corresponds to the situation when the state space has a (Cartesian-) product form. See interacting particle system and stochastic cellular automata (probabilistic cellular automata). See for instance Interaction of Markov Processes or. Properties Two states are said to communicate with each other if both are reachable from one another by a sequence of transitions that have positive probability. This is an equivalence relation which yields a set of communicating classes. A class is closed if the probability of leaving the class is zero. A Markov chain is irreducible if there is one communicating class, the state space. A state has period if is the greatest common divisor of the number of transitions by which can be reached, starting from . That is: The state is periodic if ; otherwise and the state is aperiodic. A state i is said to be transient if, starting from i, there is a non-zero probability that the chain will never return to i. It is called recurrent (or persistent) otherwise. For a recurrent state i, the mean hitting time is defined as: State i is positive recurrent if is finite and null recurrent otherwise. Periodicity, transience, recurrence and positive and null recurrence are class properties — that is, if one state has the property then all states in its communicating class have the property. A state i is called absorbing if there are no outgoing transitions from the state. Irreducibility Since periodicity is a class property, if a Markov chain is irreducible, then all its states have the same period. In particular, if one state is aperiodic, then the whole Markov chain is aperiodic. If a finite Markov chain is irreducible, then all states are positive recurrent, and it has a unique stationary distribution given by . Ergodicity A state i is said to be ergodic if it is aperiodic and positive recurrent. In other words, a state i is ergodic if it is recurrent, has a period of 1, and has finite mean recurrence time. If all states in an irreducible Markov chain are ergodic, then the chain is said to be ergodic. Equivalently, there exists some integer such that all entries of are positive. It can be shown that a finite state irreducible Markov chain is ergodic if it has an aperiodic state. More generally, a Markov chain is ergodic if there is a number N such that any state can be reached from any other state in any number of steps less or equal to a number N. In case of a fully connected transition matrix, where all transitions have a non-zero probability, this condition is fulfilled with N = 1. 
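The criterion just stated, that some power of the transition matrix has all entries positive, is easy to check numerically for a finite chain; the matrix below is illustrative.

```python
import numpy as np

P = np.array([[0.0, 1.0, 0.0],      # illustrative chain: not fully connected,
              [0.5, 0.0, 0.5],      # but still irreducible and aperiodic
              [0.5, 0.5, 0.0]])

Pk = np.eye(P.shape[0])
for k in range(1, 20):
    Pk = Pk @ P
    if np.all(Pk > 0):
        print(f"all entries of P^{k} are positive -> the chain is ergodic")
        break
```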
A Markov chain with more than one state and just one out-going transition per state is either not irreducible or not aperiodic, hence cannot be ergodic. Terminology Some authors call any irreducible, positive recurrent Markov chains ergodic, even periodic ones. In fact, merely irreducible Markov chains correspond to ergodic processes, defined according to ergodic theory. Some authors call a matrix primitive iff there exists some integer such that all entries of are positive. Some authors call it regular. Index of primitivity The index of primitivity, or exponent, of a regular matrix, is the smallest such that all entries of are positive. The exponent is purely a graph-theoretic property, since it depends only on whether each entry of is zero or positive, and therefore can be found on a directed graph with as its adjacency matrix. There are several combinatorial results about the exponent when there are finitely many states. Let be the number of states, then The exponent is . The only case where it is an equality is when the graph of goes like . If has diagonal entries, then its exponent is . If is symmetric, then has positive diagonal entries, which by previous proposition means its exponent is . (Dulmage-Mendelsohn theorem) The exponent is where is the girth of the graph. It can be improved to , where is the diameter of the graph. Measure-preserving dynamical system If a Markov chain has a stationary distribution, then it can be converted to a measure-preserving dynamical system: Let the probability space be , where is the set of all states for the Markov chain. Let the sigma-algebra on the probability space be generated by the cylinder sets. Let the probability measure be generated by the stationary distribution, and the Markov chain transition. Let be the shift operator: . Similarly we can construct such a dynamical system with instead. Since irreducible Markov chains with finite state spaces have a unique stationary distribution, the above construction is unambiguous for irreducible Markov chains. In ergodic theory, a measure-preserving dynamical system is called "ergodic" iff any measurable subset such that implies or (up to a null set). The terminology is inconsistent. Given a Markov chain with a stationary distribution that is strictly positive on all states, the Markov chain is irreducible iff its corresponding measure-preserving dynamical system is ergodic. Markovian representations In some cases, apparently non-Markovian processes may still have Markovian representations, constructed by expanding the concept of the "current" and "future" states. For example, let X be a non-Markovian process. Then define a process Y, such that each state of Y represents a time-interval of states of X. Mathematically, this takes the form: If Y has the Markov property, then it is a Markovian representation of X. An example of a non-Markovian process with a Markovian representation is an autoregressive time series of order greater than one. Hitting times The hitting time is the time, starting in a given set of states until the chain arrives in a given state or set of states. The distribution of such a time period has a phase type distribution. The simplest such distribution is that of a single exponentially distributed transition. 
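A hitting time of this kind can be estimated for a discrete-time chain by plain simulation; the chain, the start state, and the target state below are invented for illustration, and the next subsection gives the exact linear-system characterization of the expectation.

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],     # illustrative chain; we time how long it takes
              [0.5, 0.0, 0.5],     # to reach state 2 when starting from state 0
              [0.0, 0.0, 1.0]])    # state 2 is absorbing here
rng = np.random.default_rng(7)

def hitting_time(start, target, max_steps=10_000):
    state, steps = start, 0
    while state != target and steps < max_steps:
        state = rng.choice(3, p=P[state])
        steps += 1
    return steps

samples = [hitting_time(0, 2) for _ in range(20_000)]
print("estimated expected hitting time:", np.mean(samples))   # the exact value is 6
```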
Expected hitting times For a subset of states A ⊆ S, the vector kA of hitting times (where element represents the expected value, starting in state i that the chain enters one of the states in the set A) is the minimal non-negative solution to Time reversal For a CTMC Xt, the time-reversed process is defined to be . By Kelly's lemma this process has the same stationary distribution as the forward process. A chain is said to be reversible if the reversed process is the same as the forward process. Kolmogorov's criterion states that the necessary and sufficient condition for a process to be reversible is that the product of transition rates around a closed loop must be the same in both directions. Embedded Markov chain One method of finding the stationary probability distribution, , of an ergodic continuous-time Markov chain, Q, is by first finding its embedded Markov chain (EMC). Strictly speaking, the EMC is a regular discrete-time Markov chain, sometimes referred to as a jump process. Each element of the one-step transition probability matrix of the EMC, S, is denoted by sij, and represents the conditional probability of transitioning from state i into state j. These conditional probabilities may be found by From this, S may be written as where I is the identity matrix and diag(Q) is the diagonal matrix formed by selecting the main diagonal from the matrix Q and setting all other elements to zero. To find the stationary probability distribution vector, we must next find such that with being a row vector, such that all elements in are greater than 0 and = 1. From this, may be found as (S may be periodic, even if Q is not. Once is found, it must be normalized to a unit vector.) Another discrete-time process that may be derived from a continuous-time Markov chain is a δ-skeleton—the (discrete-time) Markov chain formed by observing X(t) at intervals of δ units of time. The random variables X(0), X(δ), X(2δ), ... give the sequence of states visited by the δ-skeleton. Special types of Markov chains Markov model Markov models are used to model changing systems. There are 4 main types of models, that generalize Markov chains depending on whether every sequential state is observable or not, and whether the system is to be adjusted on the basis of observations made: Bernoulli scheme A Bernoulli scheme is a special case of a Markov chain where the transition probability matrix has identical rows, which means that the next state is independent of even the current state (in addition to being independent of the past states). A Bernoulli scheme with only two possible states is known as a Bernoulli process. Note, however, by the Ornstein isomorphism theorem, that every aperiodic and irreducible Markov chain is isomorphic to a Bernoulli scheme; thus, one might equally claim that Markov chains are a "special case" of Bernoulli schemes. The isomorphism generally requires a complicated recoding. The isomorphism theorem is even a bit stronger: it states that any stationary stochastic process is isomorphic to a Bernoulli scheme; the Markov chain is just one such example. Subshift of finite type When the Markov matrix is replaced by the adjacency matrix of a finite graph, the resulting shift is termed a topological Markov chain or a subshift of finite type. A Markov matrix that is compatible with the adjacency matrix can then provide a measure on the subshift. 
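Returning to the embedded Markov chain described above: for a small rate matrix, the jump matrix S and the stationary distribution of the continuous-time chain can be computed in a few lines. The rate matrix is illustrative, and the final step reweights the jump-chain vector by the mean holding times, which is the standard way to recover the continuous-time stationary distribution.

```python
import numpy as np

Q = np.array([[-3.0,  2.0,  1.0],   # illustrative transition rate matrix (rows sum to 0)
              [ 1.0, -1.0,  0.0],
              [ 1.0,  1.0, -2.0]])

rates = -np.diag(Q)                  # holding-time rates q_i
S = Q / rates[:, None]               # s_ij = q_ij / q_i for i != j
np.fill_diagonal(S, 0.0)             # the jump chain never stays put

# Stationary vector phi of the embedded chain: phi S = phi.
eigvals, eigvecs = np.linalg.eig(S.T)
phi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
phi = phi / phi.sum()

# CTMC stationary distribution: weight each state by its mean holding time 1/q_i.
pi = (phi / rates) / (phi / rates).sum()
print("pi =", pi, "check pi Q ~ 0:", np.allclose(pi @ Q, 0.0))
```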
Many chaotic dynamical systems are isomorphic to topological Markov chains; examples include diffeomorphisms of closed manifolds, the Prouhet–Thue–Morse system, the Chacon system, sofic systems, context-free systems and block-coding systems. Applications Markov chains have been employed in a wide range of topics across the natural and social sciences, and in technological applications. They have been used for forecasting in several areas: for example, price trends, wind power, stochastic terrorism, and solar irradiance. The Markov chain forecasting models utilize a variety of settings, from discretizing the time series, to hidden Markov models combined with wavelets, and the Markov chain mixture distribution model (MCM). Physics Markovian systems appear extensively in thermodynamics and statistical mechanics, whenever probabilities are used to represent unknown or unmodelled details of the system, if it can be assumed that the dynamics are time-invariant, and that no relevant history need be considered which is not already included in the state description. For example, a thermodynamic state operates under a probability distribution that is difficult or expensive to acquire. Therefore, Markov Chain Monte Carlo method can be used to draw samples randomly from a black-box to approximate the probability distribution of attributes over a range of objects. Markov chains are used in lattice QCD simulations. Chemistry A reaction network is a chemical system involving multiple reactions and chemical species. The simplest stochastic models of such networks treat the system as a continuous time Markov chain with the state being the number of molecules of each species and with reactions modeled as possible transitions of the chain. Markov chains and continuous-time Markov processes are useful in chemistry when physical systems closely approximate the Markov property. For example, imagine a large number n of molecules in solution in state A, each of which can undergo a chemical reaction to state B with a certain average rate. Perhaps the molecule is an enzyme, and the states refer to how it is folded. The state of any single enzyme follows a Markov chain, and since the molecules are essentially independent of each other, the number of molecules in state A or B at a time is n times the probability a given molecule is in that state. The classical model of enzyme activity, Michaelis–Menten kinetics, can be viewed as a Markov chain, where at each time step the reaction proceeds in some direction. While Michaelis-Menten is fairly straightforward, far more complicated reaction networks can also be modeled with Markov chains. An algorithm based on a Markov chain was also used to focus the fragment-based growth of chemicals in silico towards a desired class of compounds such as drugs or natural products. As a molecule is grown, a fragment is selected from the nascent molecule as the "current" state. It is not aware of its past (that is, it is not aware of what is already bonded to it). It then transitions to the next state when a fragment is attached to it. The transition probabilities are trained on databases of authentic classes of compounds. Also, the growth (and composition) of copolymers may be modeled using Markov chains. Based on the reactivity ratios of the monomers that make up the growing polymer chain, the chain's composition may be calculated (for example, whether monomers tend to add in alternating fashion or in long runs of the same monomer). 
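As a sketch of the first-order (terminal) copolymerization model mentioned above: the probability of adding a given monomer depends only on the terminal unit of the growing chain, so a sequence can be generated from a two-state Markov chain. The transition probabilities below use the standard Mayo-Lewis terminal-model form, and the reactivity ratios and feed ratio are invented for illustration.

```python
import numpy as np

# Terminal (first-order Markov) model of copolymerization.
# r1, r2 are reactivity ratios and x = [M1]/[M2]; the values are illustrative.
r1, r2, x = 2.0, 0.5, 1.0
p11 = r1 * x / (r1 * x + 1.0)        # chain ending in M1 adds another M1
p22 = r2 / (r2 + x)                  # chain ending in M2 adds another M2
P = np.array([[p11, 1.0 - p11],
              [1.0 - p22, p22]])

rng = np.random.default_rng(1)
state, sequence = 0, []
for _ in range(40):
    sequence.append("AB"[state])
    state = rng.choice(2, p=P[state])
print("".join(sequence))             # long runs of A for r1 > 1, alternation for r1, r2 < 1
```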
Due to steric effects, second-order Markov effects may also play a role in the growth of some polymer chains. Similarly, it has been suggested that the crystallization and growth of some epitaxial superlattice oxide materials can be accurately described by Markov chains. Biology Markov chains are used in various areas of biology. Notable examples include: Phylogenetics and bioinformatics, where most models of DNA evolution use continuous-time Markov chains to describe the nucleotide present at a given site in the genome. Population dynamics, where Markov chains are in particular a central tool in the theoretical study of matrix population models. Neurobiology, where Markov chains have been used, e.g., to simulate the mammalian neocortex. Systems biology, for instance with the modeling of viral infection of single cells. Compartmental models for disease outbreak and epidemic modeling. Testing Several theorists have proposed the idea of the Markov chain statistical test (MCST), a method of conjoining Markov chains to form a "Markov blanket", arranging these chains in several recursive layers ("wafering") and producing more efficient test sets—samples—as a replacement for exhaustive testing. Solar irradiance variability Solar irradiance variability assessments are useful for solar power applications. Solar irradiance variability at any location over time is mainly a consequence of the deterministic variability of the sun's path across the sky dome and the variability in cloudiness. The variability of accessible solar irradiance on Earth's surface has been modeled using Markov chains, also including modeling the two states of clear and cloudiness as a two-state Markov chain. Speech recognition Hidden Markov models have been used in automatic speech recognition systems. Information theory Markov chains are used throughout information processing. Claude Shannon's famous 1948 paper A Mathematical Theory of Communication, which in a single step created the field of information theory, opens by introducing the concept of entropy by modeling texts in a natural language (such as English) as generated by an ergodic Markov process, where each letter may depend statistically on previous letters. Such idealized models can capture many of the statistical regularities of systems. Even without describing the full structure of the system perfectly, such signal models can make possible very effective data compression through entropy encoding techniques such as arithmetic coding. They also allow effective state estimation and pattern recognition. Markov chains also play an important role in reinforcement learning. Markov chains are also the basis for hidden Markov models, which are an important tool in such diverse fields as telephone networks (which use the Viterbi algorithm for error correction), speech recognition and bioinformatics (such as in rearrangements detection). The LZMA lossless data compression algorithm combines Markov chains with Lempel-Ziv compression to achieve very high compression ratios. Queueing theory Markov chains are the basis for the analytical treatment of queues (queueing theory). Agner Krarup Erlang initiated the subject in 1917. This makes them critical for optimizing the performance of telecommunications networks, where messages must often compete for limited resources (such as bandwidth). Numerous queueing models use continuous-time Markov chains. 
For example, an M/M/1 queue is a CTMC on the non-negative integers where upward transitions from i to i + 1 occur at rate λ according to a Poisson process and describe job arrivals, while transitions from i to i – 1 (for i > 1) occur at rate μ (job service times are exponentially distributed) and describe completed services (departures) from the queue. Internet applications The PageRank of a webpage as used by Google is defined by a Markov chain. It is the probability to be at page in the stationary distribution on the following Markov chain on all (known) webpages. If is the number of known webpages, and a page has links to it then it has transition probability for all pages that are linked to and for all pages that are not linked to. The parameter is taken to be about 0.15. Markov models have also been used to analyze web navigation behavior of users. A user's web link transition on a particular website can be modeled using first- or second-order Markov models and can be used to make predictions regarding future navigation and to personalize the web page for an individual user. Statistics Markov chain methods have also become very important for generating sequences of random numbers to accurately reflect very complicated desired probability distributions, via a process called Markov chain Monte Carlo (MCMC). In recent years this has revolutionized the practicability of Bayesian inference methods, allowing a wide range of posterior distributions to be simulated and their parameters found numerically. Conflict and combat In 1971 a Naval Postgraduate School Master's thesis proposed to model a variety of combat between adversaries as a Markov chain "with states reflecting the control, maneuver, target acquisition, and target destruction actions of a weapons system" and discussed the parallels between the resulting Markov chain and Lanchester's laws. In 1975 Duncan and Siverson remarked that Markov chains could be used to model conflict between state actors, and thought that their analysis would help understand "the behavior of social and political organizations in situations of conflict." Economics and finance Markov chains are used in finance and economics to model a variety of different phenomena, including the distribution of income, the size distribution of firms, asset prices and market crashes. D. G. Champernowne built a Markov chain model of the distribution of income in 1953. Herbert A. Simon and co-author Charles Bonini used a Markov chain model to derive a stationary Yule distribution of firm sizes. Louis Bachelier was the first to observe that stock prices followed a random walk. The random walk was later seen as evidence in favor of the efficient-market hypothesis and random walk models were popular in the literature of the 1960s. Regime-switching models of business cycles were popularized by James D. Hamilton (1989), who used a Markov chain to model switches between periods of high and low GDP growth (or, alternatively, economic expansions and recessions). A more recent example is the Markov switching multifractal model of Laurent E. Calvet and Adlai J. Fisher, which builds upon the convenience of earlier regime-switching models. It uses an arbitrarily large Markov chain to drive the level of volatility of asset returns. Dynamic macroeconomics makes heavy use of Markov chains. An example is using Markov chains to exogenously model prices of equity (stock) in a general equilibrium setting. 
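A two-state regime-switching chain of the kind popularized by Hamilton can be summarized by its long-run regime shares and the expected duration of each regime; the persistence probabilities below are invented for illustration.

```python
import numpy as np

# Two-state regime chain (expansion / recession); persistence probabilities are illustrative.
p_ee, p_rr = 0.95, 0.80
P = np.array([[p_ee, 1 - p_ee],
              [1 - p_rr, p_rr]])

# Stationary shares of time spent in each regime.
pi = np.array([1 - p_rr, 1 - p_ee]) / ((1 - p_rr) + (1 - p_ee))
print("long-run share (expansion, recession):", pi)

# A regime with persistence p lasts 1 / (1 - p) periods on average (geometric holding time).
print("expected durations:", 1 / (1 - p_ee), "and", 1 / (1 - p_rr), "periods")
```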
Credit rating agencies produce annual tables of the transition probabilities for bonds of different credit ratings. Social sciences Markov chains are generally used in describing path-dependent arguments, where current structural configurations condition future outcomes. An example is the reformulation of the idea, originally due to Karl Marx's Das Kapital, tying economic development to the rise of capitalism. In current research, it is common to use a Markov chain to model how once a country reaches a specific level of economic development, the configuration of structural factors, such as the size of the middle class, the ratio of urban to rural residence, the rate of political mobilization, etc., will generate a higher probability of transitioning from an authoritarian to a democratic regime. Music Markov chains are employed in algorithmic music composition, particularly in software such as Csound, Max, and SuperCollider. In a first-order chain, the states of the system become note or pitch values, and a probability vector for each note is constructed, completing a transition probability matrix. An algorithm is constructed to produce output note values based on the transition matrix weightings, which could be MIDI note values, frequency (Hz), or any other desirable metric. A second-order Markov chain can be introduced by considering the current state and also the previous state. Higher, nth-order chains tend to "group" particular notes together, while 'breaking off' into other patterns and sequences occasionally. These higher-order chains tend to generate results with a sense of phrasal structure, rather than the 'aimless wandering' produced by a first-order system. Markov chains can be used structurally, as in Xenakis's Analogique A and B. Markov chains are also used in systems which use a Markov model to react interactively to music input. Usually musical systems need to enforce specific control constraints on the finite-length sequences they generate, but control constraints are not compatible with Markov models, since they induce long-range dependencies that violate the Markov hypothesis of limited memory. In order to overcome this limitation, a new approach has been proposed. Games and sports Markov chains can be used to model many games of chance. The children's games Snakes and Ladders and "Hi Ho! Cherry-O", for example, are represented exactly by Markov chains. At each turn, the player starts in a given state (on a given square) and from there has fixed odds of moving to certain other states (squares). Markov chain models have been used in advanced baseball analysis since 1960, although their use is still rare. Each half-inning of a baseball game fits the Markov chain model when the number of runners and outs are considered. During any at-bat, there are 24 possible combinations of number of outs and position of the runners. Mark Pankin shows that Markov chain models can be used to evaluate runs created for both individual players as well as a team. He also discusses various kinds of strategies and play conditions: how Markov chain models have been used to analyze statistics for game situations such as bunting and base stealing and differences when playing on grass vs. AstroTurf. Markov text generators Markov processes can also be used to generate superficially real-looking text given a sample document. Markov processes are used in a variety of recreational "parody generator" software (see dissociated press, Jeff Harrison, Mark V.
Shaney, and Academias Neutronium). Several open-source text generation libraries using Markov chains exist. See also Dynamics of Markovian particles Gauss–Markov process Markov chain approximation method Markov chain geostatistics Markov chain mixing time Markov chain tree theorem Markov decision process Markov information source Markov odometer Markov operator Markov random field Master equation Quantum Markov chain Semi-Markov process Stochastic cellular automaton Telescoping Markov chain Variable-order Markov model Notes References External links Markov Chains chapter in American Mathematical Society's introductory probability book A visual explanation of Markov Chains Original paper by A. A. Markov (1913): An Example of Statistical Investigation of the Text Eugene Onegin Concerning the Connection of Samples in Chains (translated from Russian) Markov processes Markov models Graph theory Random text generation
Markov chain
[ "Mathematics" ]
10,430
[ "Discrete mathematics", "Mathematical relations", "Graph theory", "Combinatorics" ]
60,879
https://en.wikipedia.org/wiki/Electroluminescence
Electroluminescence (EL) is an optical and electrical phenomenon, in which a material emits light in response to the passage of an electric current or to a strong electric field. This is distinct from black body light emission resulting from heat (incandescence), chemical reactions (chemiluminescence), reactions in a liquid (electrochemiluminescence), sound (sonoluminescence), or other mechanical action (mechanoluminescence), or organic electroluminescence. Mechanism Electroluminescence is the result of radiative recombination of electrons and holes in a material, usually a semiconductor. The excited electrons release their energy as photons – light. Prior to recombination, electrons and holes may be separated either by doping the material to form a p-n junction (in semiconductor electroluminescent devices such as light-emitting diodes) or through excitation by impact of high-energy electrons accelerated by a strong electric field (as with the phosphors in electroluminescent displays). It has been recently shown that as a solar cell improves its light-to-electricity efficiency (improved open-circuit voltage), it will also improve its electricity-to-light (EL) efficiency. Characteristics Electroluminescent technologies have low power consumption compared to competing lighting technologies, such as neon or fluorescent lamps. This, together with the thinness of the material, has made EL technology valuable to the advertising industry. Relevant advertising applications include electroluminescent billboards and signs. EL manufacturers can control precisely which areas of an electroluminescent sheet illuminate, and when. This has given advertisers the ability to create more dynamic advertising that is still compatible with traditional advertising spaces. An EL film is a so-called Lambertian radiator: unlike with neon lamps, filament lamps, or LEDs, the brightness of the surface appears the same from all angles of view; electroluminescent light is not directional. The light emitted from the surface is perfectly homogeneous and is well-perceived by the eye. EL film produces single-frequency (monochromatic) light that has a very narrow bandwidth, is uniform and visible from a great distance. In principle, EL lamps can be made in any color. However, the commonly used greenish color closely matches the peak sensitivity of human vision, producing the greatest apparent light output for the least electrical power input. Unlike neon and fluorescent lamps, EL lamps are not negative resistance devices so no extra circuitry is needed to regulate the amount of current flowing through them. A new technology now being used is based on multispectral phosphors that emit light from 600 to 400nm depending on the drive frequency; this is similar to the color-changing effect seen with aqua EL sheet but on a larger scale. Examples of electroluminescent materials Electroluminescent devices are fabricated using either organic or inorganic electroluminescent materials. The active materials are generally semiconductors of wide enough bandwidth to allow the exit of the light. The most typical inorganic thin-film EL (TFEL) is ZnS:Mn with yellow-orange emission. Examples of the range of EL material include: Powdered zinc sulfide doped with copper (producing greenish light) or silver (producing bright blue light) Thin-film zinc sulfide doped with manganese (producing orange-red color) Naturally blue diamond, which includes a trace of boron that acts as a dopant. 
Semiconductors containing Group III and Group V elements, such as indium phosphide (InP), gallium arsenide (GaAs), and gallium nitride (GaN) (Light-emitting diodes). Certain organic semiconductors, such as [Ru(bpy)3]2+(PF6−)2, where bpy is 2,2'-bipyridine Terbium oxide (yellow-green light) Practical implementations The most common electroluminescent (EL) devices are composed of either powder (primarily used in lighting applications) or thin films (for information displays.) Light-emitting capacitor (LEC) Light-emitting capacitor, or LEC, is a term used since at least 1961 to describe electroluminescent panels. General Electric has patents dating to 1938 on flat electroluminescent panels that are still made as night lights and backlights for instrument panel displays. Electroluminescent panels are a capacitor where the dielectric between the outside plates is a phosphor that gives off photons when the capacitor is charged. By making one of the contacts transparent, the large area exposed emits light. Electroluminescent automotive instrument panel backlighting, with each gauge pointer also an individual light source, entered production on 1960 Chrysler and Imperial passenger cars, and was continued successfully on several Chrysler vehicles through 1967 and marketed as "Panelescent Lighting". Night lights The Sylvania Lighting Division in Salem and Danvers, Massachusetts, produced and marketed an EL night light, under the trade name Panelescent at roughly the same time that the Chrysler instrument panels entered production. These lamps have proven extremely reliable, with some samples known to be still functional after nearly 50 years of continuous operation. Later in the 1960s, Sylvania's Electronic Systems Division in Needham, Massachusetts developed and manufactured several instruments for the Apollo Lunar Module and Command Module using electroluminescent display panels manufactured by the Electronic Tube Division of Sylvania at Emporium, Pennsylvania. Raytheon in Sudbury, Massachusetts manufactured the Apollo Guidance Computer, which used a Sylvania electroluminescent display panel as part of its display-keyboard interface (DSKY). Display backlighting Powder phosphor-based electroluminescent panels are frequently used as backlights for liquid crystal displays. They readily provide gentle, even illumination for the entire display while consuming relatively little electric power. This makes them convenient for battery-operated devices such as pagers, wristwatches, and computer-controlled thermostats, and their gentle green-cyan glow is common in the technological world. EL backlights require relatively high voltage (between 60 and 600 volts). For battery-operated devices, this voltage must be generated by a boost converter circuit within the device. This converter often makes a faintly audible whine or siren sound while the backlight is activated. Line-voltage-operated devices may be activated directly from the power line; some electroluminescent nightlights operate in this fashion. Brightness per unit area increases with increased voltage and frequency. Thin-film phosphor electroluminescence was first commercialized during the 1980s by Sharp Corporation in Japan, Finlux (Oy Lohja Ab) in Finland, and Planar Systems in the US. In these devices, bright, long-life light emission is achieved in thin-film yellow-emitting manganese-doped zinc sulfide material. 
Displays using this technology were manufactured for medical and vehicle applications where ruggedness and wide viewing angles were crucial, and liquid crystal displays were not well developed. In 1992, Timex introduced its Indiglo EL display on some watches. Recently, blue-, red-, and green-emitting thin film electroluminescent materials that offer the potential for long life and full-color electroluminescent displays have been developed. The EL material must be enclosed between two electrodes and at least one electrode must be transparent to allow the escape of the produced light. Glass coated with indium tin oxide is commonly used as the front (transparent) electrode, while the back electrode is coated with reflective metal. Additionally, other transparent conducting materials, such as carbon nanotube coatings or PEDOT can be used as the front electrode. The display applications are primarily passive (i.e., voltages are driven from the edge of the display cf. driven from a transistor on the display). Similar to LCD trends, there have also been Active Matrix EL (AMEL) displays demonstrated, where the circuitry is added to prolong voltages at each pixel. The solid-state nature of TFEL allows for a very rugged and high-resolution display fabricated even on silicon substrates. AMEL displays of 1280×1024 at over 1000 lines per inch (LPI) have been demonstrated by a consortium including Planar Systems. Thick-film dielectric electroluminescent technology Thick-film dielectric electroluminescent technology (TDEL) is a phosphor-based flat panel display technology developed by Canadian company iFire Technology Corp. TDEL is based on inorganic electroluminescent (IEL) technology that combines both thick-and thin-film processes. The TDEL structure is made with glass or other substrates, consisting of a thick-film dielectric layer and a thin-film phosphor layer sandwiched between two sets of electrodes to create a matrix of pixels. Inorganic phosphors within this matrix emit light in the presence of an alternating electric field. Color By Blue Color By Blue (CBB) was developed in 2003. The Color By Blue process achieves higher luminance and better performance than the previous triple pattern process, with increased contrast, grayscale rendition, and color uniformity across the panel. Color By Blue is based on the physics of photoluminescence. High luminance inorganic blue phosphor is used in combination with specialized color conversion materials, which absorb the blue light and re-emit red or green light, to generate the other colors. New applications Electroluminescent lighting is now used as an application for public safety identification involving alphanumeric characters on the roof of vehicles for clear visibility from an aerial perspective. Electroluminescent lighting, especially electroluminescent wire (EL wire), has also made its way into clothing as many designers have brought this technology to the entertainment and nightlife industry. From 2006, t-shirts with an electroluminescent panel stylized as an audio equalizer, the T-Qualizer, saw a brief period of popularity. Engineers have developed an electroluminescent "skin" that can stretch more than six times its original size while still emitting light. This hyper-elastic light-emitting capacitor (HLEC) can endure more than twice the strain of previously tested stretchable displays. It consists of layers of transparent hydrogel electrodes sandwiching an insulating elastomer sheet. 
The elastomer changes luminance and capacitance when stretched, rolled, and otherwise deformed. In addition to its ability to emit light under a strain of greater than 480% of its original size, the group's HLEC was shown to be capable of being integrated into a soft robotic system. Three six-layer HLEC panels were bound together to form a crawling soft robot, with the top four layers making up the light-up skin and the bottom two the pneumatic actuators. The discovery could lead to significant advances in health care, transportation, electronic communication and other areas. See also List of light sources OLED Photoelectric effect References External links Overview of electroluminescent display technology, and the discovery of electroluminescence Chrysler Corporation press release introducing Panelescent (EL) Lighting on 8 September 1959. Condensed matter physics Electrical phenomena Light sources Lighting Luminescence
Electroluminescence
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,366
[ "Physical phenomena", "Luminescence", "Molecular physics", "Phases of matter", "Materials science", "Electrical phenomena", "Condensed matter physics", "Matter" ]
60,880
https://en.wikipedia.org/wiki/List%20of%20electrical%20phenomena
This is a list of electrical phenomena. Electrical phenomena are a somewhat arbitrary division of electromagnetic phenomena. Some examples are: Atmospheric electricity Biefeld–Brown effect — Thought by the person who coined the name, Thomas Townsend Brown, to be an anti-gravity effect, it is generally attributed to electrohydrodynamics (EHD) or sometimes electro-fluid-dynamics, a counterpart to the well-known magneto-hydrodynamics. Bioelectrogenesis — The generation of electricity by living organisms. Capacitive coupling — Transfer of energy within an electrical network or between distant networks by means of displacement current. Contact electrification — The phenomenon of electrification by contact. When two objects are touched together, the objects sometimes become spontaneously charged (one negative charge, one positive charge). Corona effect — Build-up of charges in a high-voltage conductor (common in AC transmission lines), which ionizes the air and produces visible light, usually purple. Dielectric polarization — Orientation of charges in certain insulators inside an external static electric field, such as when a charged object is brought close, which produces an electric field inside the insulator. Direct current (old: galvanic current) or "continuous current" — The continuous flow of electricity through a conductor such as a wire from high to low potential. Electromagnetic induction — Production of a voltage by a time-varying magnetic flux. Electroluminescence — The phenomenon wherein a material emits light in response to an electric current passed through it, or to a strong electric field. Electrostatic induction — Redistribution of charges in a conductor inside an external static electric field, such as when a charged object is brought close. Electrical conduction — The movement of electrically charged particles through a transmission medium. Electric shock — Physiological reaction of a biological organism to the passage of electric current through its body. Ferranti effect — A rise in the amplitude of the AC voltage at the receiving end of a transmission line, compared with the sending-end voltage, due to the capacitance between the conductors, when the receiving end is open-circuited. Ferroelectric effect — The phenomenon whereby certain ionic crystals may exhibit a spontaneous dipole moment. Hall effect — Separation of charges in a current-carrying conductor inside an external magnetic field, which produces a voltage across the conductor. Inductance — The property of a circuit by which energy is stored in the form of an electromagnetic field. Induction heating — Heat produced in a conductor when eddy currents pass through it. Joule heating — Heat produced in a conductor when charges move through it, such as in resistors and wires. Lightning — Powerful natural electrostatic discharge produced during a thunderstorm. Lightning's abrupt electric discharge is accompanied by the emission of light. Noise and electromagnetic interference — Unwanted and usually random disturbance in an electrical signal. A Faraday cage can be used to attenuate electromagnetic fields, even to avoid the discharge from a Tesla coil. Photoconductivity — The phenomenon in which a material becomes more conductive due to the absorption of electromagnetic radiation such as visible light, ultraviolet light, or gamma radiation.
Photoelectric effect — Emission of electrons from a surface (usually metallic) upon exposure to, and absorption of, electromagnetic radiation (such as visible light and ultraviolet radiation). Photovoltaic effect — Production of a voltage by light exposure. Piezoelectric effect — Ability of certain crystals to generate a voltage in response to applied mechanical stress. Plasma — Plasma occurs when a gas is heated to very high temperatures and dissociates into positive and negative charges. Proximity effect — Redistribution of charge flow in a conductor carrying alternating current when there are other nearby current-carrying conductors. Pyroelectric effect — The potential created in certain materials when they are heated. Redox — (short for reduction-oxidation reaction) A chemical reaction in which the oxidation states of atoms are changed. Skin effect — Tendency of charges to distribute at the surface of a conductor when an alternating current passes through it. Static electricity — Class of phenomena involving the imbalanced charge present on an object, typically referring to charge with voltages of sufficient magnitude to produce visible attraction (e.g., static cling), repulsion, and sparks. Sparks — Electrical breakdown of a medium that produces an ongoing plasma discharge, similar to the instant spark, resulting from a current flowing through normally nonconductive media such as air. Telluric currents — Extremely low frequency electric current that occurs naturally over large underground areas at or near the surface of the Earth. Thermionic emission — The emission of electrons from a heated electrode, usually the cathode; the principle underlying most vacuum tubes. Thermoelectric effect — The Seebeck effect, the Peltier effect, and the Thomson effect. Thunderstorm — Also called an electrical storm; a form of weather characterized by the presence of lightning and its acoustic effect on the Earth's atmosphere, known as thunder. Triboelectric effect — Type of contact electrification in which objects become electrically charged after coming into contact and are then separated. A Van de Graaff generator is based on this principle. Whistlers — Very low frequency radio wave generated by lightning. References External links A Beginner's Guide to Natural VLF Radio Phenomena Lists of phenomena
List of electrical phenomena
[ "Physics" ]
1,085
[ "Physical phenomena", "Electrical phenomena" ]
60,885
https://en.wikipedia.org/wiki/Photoconductivity
Photoconductivity is an optical and electrical phenomenon in which a material becomes more electrically conductive due to the absorption of electromagnetic radiation such as visible light, ultraviolet light, infrared light, or gamma radiation. When light is absorbed by a material such as a semiconductor, the number of free electrons and holes increases, resulting in increased electrical conductivity. To cause excitation, the light that strikes the semiconductor must have enough energy to raise electrons across the band gap, or to excite the impurities within the band gap. When a bias voltage and a load resistor are used in series with the semiconductor, a voltage drop across the load resistors can be measured when the change in electrical conductivity of the material varies the current through the circuit. Classic examples of photoconductive materials include: photographic film: Kodachrome, Fujifilm, Agfachrome, Ilford, etc., based on silver sulfide and silver bromide. the conductive polymer polyvinylcarbazole, used extensively in photocopying (xerography); lead sulfide, used in infrared detection applications, such as the U.S. Sidewinder and Soviet (now Russian) Atoll heat-seeking missiles; selenium, employed in early television and xerography. Molecular photoconductors include organic, inorganic, and – more rarely – coordination compounds. Applications When a photoconductive material is connected as part of a circuit, it functions as a resistor whose resistance depends on the light intensity. In this context, the material is called a photoresistor (also called light-dependent resistor or photoconductor). The most common application of photoresistors is as photodetectors, i.e. devices that measure light intensity. Photoresistors are not the only type of photodetector—other types include charge-coupled devices (CCDs), photodiodes and phototransistors—but they are among the most common. Some photodetector applications in which photoresistors are often used include camera light meters, street lights, clock radios, infrared detectors, nanophotonic systems and low-dimensional photo-sensors devices. Sensitization Sensitization is an important engineering procedure to amplify the response of photoconductive materials. The photoconductive gain is proportional to the lifetime of photo-excited carriers (either electrons or holes). Sensitization involves intentional impurity doping that saturates native recombination centers with a short characteristic lifetime, and replacing these centers with new recombination centers having a longer lifetime. This procedure, when done correctly, results in an increase in the photoconductive gain of several orders of magnitude and is used in the production of commercial photoconductive devices. The text by Albert Rose is the work of reference for sensitization. Negative photoconductivity Some materials exhibit deterioration in photoconductivity upon exposure to illumination. One prominent example is hydrogenated amorphous silicon (a-Si:H) in which a metastable reduction in photoconductivity is observable (see Staebler–Wronski effect). Other materials that were reported to exhibit negative photoconductivity include ZnO nanowires, molybdenum disulfide, graphene, indium arsenide nanowires, decorated carbon nanotubes, and metal nanoparticles. Under an applied AC voltage and upon UV illumination, ZnO nanowires exhibit a continuous transition from positive to negative photoconductivity as a function of the AC frequency. 
ZnO nanowires also display a frequency-driven metal-insulator transition at room temperature. The mechanism responsible for both transitions has been attributed to a competition between bulk conduction and surface conduction. The frequency-driven bulk-to-surface transition of conductivity is expected to be a generic characteristic of semiconductor nanostructures with a large surface-to-volume ratio. Magnetic photoconductivity In 2016 it was demonstrated that a magnetic order can exist in some photoconductive materials. One prominent example is CH3NH3(Mn:Pb)I3. In this material, light-induced melting of the magnetization was also demonstrated, which could be exploited in magneto-optical devices and data storage. Photoconductivity spectroscopy The characterization technique called photoconductivity spectroscopy (also known as photocurrent spectroscopy) is widely used in studying the optoelectronic properties of semiconductors. See also Photodiode Photoresistor (LDR) Photocurrent Photoconductive polymer Infrared detector Lead selenide (PbSe) Indium antimonide (InSb) References Condensed matter physics Electrical phenomena Optical phenomena
Photoconductivity
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
976
[ "Physical phenomena", "Phases of matter", "Materials science", "Optical phenomena", "Electrical phenomena", "Condensed matter physics", "Matter" ]
60,889
https://en.wikipedia.org/wiki/Skull%20crucible
The skull crucible process was developed at the Lebedev Physical Institute in Moscow to manufacture cubic zirconia. It was invented to solve the problem of cubic zirconia's melting-point being too high for even platinum crucibles. In essence, by heating only the center of a volume of cubic zirconia, the material forms its own "crucible" from its cooler outer layers. The term "skull" refers to these outer layers forming a shell enclosing the molten volume. Zirconium oxide powder is heated then gradually allowed to cool. Heating is accomplished by radio frequency induction using a coil wrapped around the apparatus. The outside of the device is water-cooled in order to keep the radio frequency coil from melting and also to cool the outside of the zirconium oxide and thus maintain the shape of the zirconium powder. Since zirconium oxide in its solid state does not conduct electricity, a piece of zirconium metal is placed inside the gob of zirconium oxide. As the zirconium melts it oxidizes and blends with the now molten zirconium oxide, a conductor, and is heated by radio frequency induction. When the zirconium oxide is melted on the inside (but not completely, since the outside needs to remain solid) the amplitude of the RF induction coil is gradually reduced and crystals form as the material cools. Normally this would form a monoclinic crystal system of zirconium oxide. In order to maintain a cubic crystal system a stabilizer is added, magnesium oxide, calcium oxide or yttrium oxide as well as any material to color the crystal. After the mixture cools the outer shell is broken off and the interior of the gob is then used to manufacture gemstones. References Ceramic engineering Soviet inventions Methods of crystal growth
Skull crucible
[ "Chemistry", "Materials_science", "Engineering" ]
384
[ "Ceramic engineering", "Crystallography", "Methods of crystal growth" ]
60,891
https://en.wikipedia.org/wiki/Surveying
Surveying or land surveying is the technique, profession, art, and science of determining the terrestrial two-dimensional or three-dimensional positions of points and the distances and angles between them. These points are usually on the surface of the Earth, and they are often used to establish maps and boundaries for ownership, locations, such as the designated positions of structural components for construction or the surface location of subsurface features, or other purposes required by government or civil law, such as property sales. A professional in land surveying is called a land surveyor. Surveyors work with elements of geodesy, geometry, trigonometry, regression analysis, physics, engineering, metrology, programming languages, and the law. They use equipment, such as total stations, robotic total stations, theodolites, GNSS receivers, retroreflectors, 3D scanners, lidar sensors, radios, inclinometers, handheld tablets, optical and digital levels, subsurface locators, drones, GIS, and surveying software. Surveying has been an element in the development of the human environment since the beginning of recorded history. It is used in the planning and execution of most forms of construction. It is also used in transportation, communications, mapping, and the definition of legal boundaries for land ownership. It is an important tool for research in many other scientific disciplines. Definition The International Federation of Surveyors defines the function of surveying as follows: A surveyor is a professional person with the academic qualifications and technical expertise to conduct one, or more, of the following activities; to determine, measure and represent land, three-dimensional objects, point-fields and trajectories; to assemble and interpret land and geographically related information, to use that information for the planning and efficient administration of the land, the sea and any structures thereon; and, to conduct research into the above practices and to develop them. History Ancient history Surveying has occurred since humans built the first large structures. In ancient Egypt, a rope stretcher would use simple geometry to re-establish boundaries after the annual floods of the Nile River. The almost perfect squareness and north–south orientation of the Great Pyramid of Giza affirm the Egyptians' command of surveying. The groma instrument may have originated in Mesopotamia (early 1st millennium BC). The prehistoric monument at Stonehenge was set out by prehistoric surveyors using peg and rope geometry. The mathematician Liu Hui described ways of measuring distant objects in his work Haidao Suanjing or The Sea Island Mathematical Manual, published in 263 AD. The Romans recognized land surveying as a profession. They established the basic measurements under which the Roman Empire was divided, such as a tax register of conquered lands (300 AD). Roman surveyors were known as Gromatici. In medieval Europe, beating the bounds maintained the boundaries of a village or parish. This was the practice of gathering a group of residents and walking around the parish or village to establish a communal memory of the boundaries. Young boys were included to ensure the memory lasted as long as possible. In England, William the Conqueror commissioned the Domesday Book in 1086. It recorded the names of all the land owners, the area of land they owned, the quality of the land, and specific information of the area's content and inhabitants.
It did not include maps showing exact locations. Modern era Abel Foullon described a plane table in 1551, but it is thought that the instrument was in use earlier as his description is of a developed instrument. Gunter's chain was introduced in 1620 by English mathematician Edmund Gunter. It enabled plots of land to be accurately surveyed and plotted for legal and commercial purposes. Leonard Digges described a theodolite that measured horizontal angles in his book A geometric practice named Pantometria (1571). Joshua Habermel (Erasmus Habermehl) created a theodolite with a compass and tripod in 1576. Johnathon Sission was the first to incorporate a telescope on a theodolite in 1725. In the 18th century, modern techniques and instruments for surveying began to be used. Jesse Ramsden introduced the first precision theodolite in 1787. It was an instrument for measuring angles in the horizontal and vertical planes. He created his great theodolite using an accurate dividing engine of his own design. Ramsden's theodolite represented a great step forward in the instrument's accuracy. William Gascoigne invented an instrument that used a telescope with an installed crosshair as a target device, in 1640. James Watt developed an optical meter for the measuring of distance in 1771; it measured the parallactic angle from which the distance to a point could be deduced. Dutch mathematician Willebrord Snellius (a.k.a. Snel van Royen) introduced the modern systematic use of triangulation. In 1615 he surveyed the distance from Alkmaar to Breda, approximately . He underestimated this distance by 3.5%. The survey was a chain of quadrangles containing 33 triangles in all. Snell showed how planar formulae could be corrected to allow for the curvature of the Earth. He also showed how to resect, or calculate, the position of a point inside a triangle using the angles cast between the vertices at the unknown point. These could be measured more accurately than bearings of the vertices, which depended on a compass. His work established the idea of surveying a primary network of control points, and locating subsidiary points inside the primary network later. Between 1733 and 1740, Jacques Cassini and his son César undertook the first triangulation of France. They included a re-surveying of the meridian arc, leading to the publication in 1745 of the first map of France constructed on rigorous principles. By this time triangulation methods were well established for local map-making. It was only towards the end of the 18th century that detailed triangulation network surveys mapped whole countries. In 1784, a team from General William Roy's Ordnance Survey of Great Britain began the Principal Triangulation of Britain. The first Ramsden theodolite was built for this survey. The survey was finally completed in 1853. The Great Trigonometric Survey of India began in 1801. The Indian survey had an enormous scientific impact. It was responsible for one of the first accurate measurements of a section of an arc of longitude, and for measurements of the geodesic anomaly. It named and mapped Mount Everest and the other Himalayan peaks. Surveying became a professional occupation in high demand at the turn of the 19th century with the onset of the Industrial Revolution. The profession developed more accurate instruments to aid its work. Industrial infrastructure projects used surveyors to lay out canals, roads and rail. In the US, the Land Ordinance of 1785 created the Public Land Survey System. 
It formed the basis for dividing the western territories into sections to allow the sale of land. The PLSS divided states into township grids which were further divided into sections and fractions of sections. Napoleon Bonaparte founded continental Europe's first cadastre in 1808. This gathered data on the number of parcels of land, their value, land usage, and names. This system soon spread around Europe. Robert Torrens introduced the Torrens system in South Australia in 1858. Torrens intended to simplify land transactions and provide reliable titles via a centralized register of land. The Torrens system was adopted in several other nations of the English-speaking world. Surveying became increasingly important with the arrival of railroads in the 1800s. Surveying was necessary so that railroads could plan technologically and financially viable routes. 20th century At the beginning of the century, surveyors had improved the older chains and ropes, but they still faced the problem of accurate measurement of long distances. Trevor Lloyd Wadley developed the Tellurometer during the 1950s. It measures long distances using two microwave transmitter/receivers. During the late 1950s Geodimeter introduced electronic distance measurement (EDM) equipment. EDM units use a multi frequency phase shift of light waves to find a distance. These instruments eliminated the need for days or weeks of chain measurement by measuring between points kilometers apart in one go. Advances in electronics allowed miniaturization of EDM. In the 1970s the first instruments combining angle and distance measurement appeared, becoming known as total stations. Manufacturers added more equipment by degrees, bringing improvements in accuracy and speed of measurement. Major advances include tilt compensators, data recorders and on-board calculation programs. The first satellite positioning system was the US Navy TRANSIT system. The first successful launch took place in 1960. The system's main purpose was to provide position information to Polaris missile submarines. Surveyors found they could use field receivers to determine the location of a point. Sparse satellite cover and large equipment made observations laborious and inaccurate. The main use was establishing benchmarks in remote locations. The US Air Force launched the first prototype satellites of the Global Positioning System (GPS) in 1978. GPS used a larger constellation of satellites and improved signal transmission, thus improving accuracy. Early GPS observations required several hours of observations by a static receiver to reach survey accuracy requirements. Later improvements to both satellites and receivers allowed for Real Time Kinematic (RTK) surveying. RTK surveys provide high-accuracy measurements by using a fixed base station and a second roving antenna. The position of the roving antenna can be tracked. 21st century The theodolite, total station and RTK GPS survey remain the primary methods in use. Remote sensing and satellite imagery continue to improve and become cheaper, allowing more commonplace use. Prominent new technologies include three-dimensional (3D) scanning and lidar-based topographical surveys. UAV technology along with photogrammetric image processing is also appearing. Equipment Hardware The main surveying instruments in use around the world are the theodolite, measuring tape, total station, 3D scanners, GPS/GNSS, level and rod. Most instruments screw onto a tripod when in use. 
Tape measures are often used for measurement of smaller distances. 3D scanners and various forms of aerial imagery are also used. The theodolite is an instrument for the measurement of angles. It uses two separate circles, protractors or alidades to measure angles in the horizontal and the vertical plane. A telescope mounted on trunnions is aligned vertically with the target object. The whole upper section rotates for horizontal alignment. The vertical circle measures the angle that the telescope makes against the vertical, known as the zenith angle. The horizontal circle uses an upper and lower plate. When beginning the survey, the surveyor points the instrument in a known direction (bearing), and clamps the lower plate in place. The instrument can then rotate to measure the bearing to other objects. If no bearing is known or direct angle measurement is wanted, the instrument can be set to zero during the initial sight. It will then read the angle between the initial object, the theodolite itself, and the item that the telescope aligns with. The gyrotheodolite is a form of theodolite that uses a gyroscope to orient itself in the absence of reference marks. It is used in underground applications. The total station is a development of the theodolite with an electronic distance measurement device (EDM). A total station can be used for leveling when set to the horizontal plane. Since their introduction, total stations have shifted from optical-mechanical to fully electronic devices. Modern top-of-the-line total stations no longer need a reflector or prism to return the light pulses used for distance measurements. They are fully robotic, and can even e-mail point data to a remote computer and connect to satellite positioning systems, such as Global Positioning System. Real Time Kinematic GPS systems have significantly increased the speed of surveying, and they are now horizontally accurate to within 1 cm ± 1 ppm in real-time, while vertically it is currently about half of that to within 2 cm ± 2 ppm. GPS surveying differs from other GPS uses in the equipment and methods used. Static GPS uses two receivers placed in position for a considerable length of time. The long span of time lets the receiver compare measurements as the satellites orbit. The changes as the satellites orbit also provide the measurement network with well conditioned geometry. This produces an accurate baseline that can be over 20 km long. RTK surveying uses one static antenna and one roving antenna. The static antenna tracks changes in the satellite positions and atmospheric conditions. The surveyor uses the roving antenna to measure the points needed for the survey. The two antennas use a radio link that allows the static antenna to send corrections to the roving antenna. The roving antenna then applies those corrections to the GPS signals it is receiving to calculate its own position. RTK surveying covers smaller distances than static methods. This is because divergent conditions further away from the base reduce accuracy. Surveying instruments have characteristics that make them suitable for certain uses. Theodolites and levels are often used by constructors rather than surveyors in first world countries. The constructor can perform simple survey tasks using a relatively cheap instrument. Total stations are workhorses for many professional surveyors because they are versatile and reliable in all conditions. 
The productivity improvements from GPS on large-scale surveys make it popular for major infrastructure or data gathering projects. One-person robotic-guided total stations allow surveyors to measure without extra workers to aim the telescope or record data. A fast but expensive way to measure large areas is with a helicopter, using a GPS to record the location of the helicopter and a laser scanner to measure the ground. To increase precision, surveyors place beacons on the ground (about apart). This method reaches precisions between 5–40 cm (depending on flight height). Surveyors use ancillary equipment such as tripods and instrument stands; staves and beacons used for sighting purposes; PPE; vegetation clearing equipment; digging implements for finding survey markers buried over time; hammers for placement of markers in various surfaces and structures; and portable radios for communication over long lines of sight. Software Land surveyors, construction professionals, geomatics engineers and civil engineers using total stations, GPS, 3D scanners, and other data collectors use land surveying software to increase efficiency, accuracy, and productivity. Land surveying software is a staple of contemporary land surveying. Typically, much if not all of the drafting and some of the designing for plans and plats of the surveyed property is done by the surveyor, and nearly everyone working in the area of drafting today (2021) utilizes CAD software and hardware both on PC, and more and more in newer generation data collectors in the field as well. Other computer platforms and tools commonly used today by surveyors are offered online by the U.S. Federal Government and other governments' survey agencies, such as the National Geodetic Survey and the CORS network, to get automated corrections and conversions for collected GPS data, and the data coordinate systems themselves. Techniques Surveyors determine the position of objects by measuring angles and distances. The factors that can affect the accuracy of their observations are also measured. They then use this data to create vectors, bearings, coordinates, elevations, areas, volumes, plans and maps. Measurements are often split into horizontal and vertical components to simplify calculation. GPS and astronomic measurements also need measurement of a time component. Distance measurement Before EDM (Electronic Distance Measurement) laser devices, distances were measured using a variety of means. In pre-colonial America, Natives would use the "bow shot" as a distance reference ("as far as an arrow can be slung out of a bow", or "flights of a Cherokee long bow"). Europeans used chains with links of a known length such as a Gunter's chain, or measuring tapes made of steel or invar. To measure horizontal distances, these chains or tapes were pulled taut to reduce sagging and slack. The distance had to be adjusted for heat expansion. Attempts to hold the measuring instrument level would also be made. When measuring up a slope, the surveyor might have to "break" (break chain) the measurement: use an increment less than the total length of the chain. Perambulators, or measuring wheels, were used to measure longer distances but not to a high level of accuracy. Tacheometry is the science of measuring distances by measuring the angle between two ends of an object with a known size. It was sometimes used before the invention of EDM where rough ground made chain measurement impractical.
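As a rough illustration of the tacheometric principle, the short sketch below computes the distance implied by the angle subtended at the instrument by a staff of known length; the staff length and angle are made-up example values, not field data.

import math

def tacheometric_distance(target_size_m: float, subtended_angle_deg: float) -> float:
    """Distance to a target of known size from the angle it subtends at the instrument."""
    half_angle = math.radians(subtended_angle_deg) / 2.0
    return (target_size_m / 2.0) / math.tan(half_angle)

# Example: a 2.0 m staff subtending 0.75 degrees is roughly 153 m away.
print(round(tacheometric_distance(2.0, 0.75), 1))

In the traditional stadia method the same geometry is folded into a fixed multiplying constant (typically 100) applied to the staff intercept read between the stadia hairs.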
Angle measurement Historically, horizontal angles were measured by using a compass to provide a magnetic bearing or azimuth. Later, more precise scribed discs improved angular resolution. Mounting telescopes with reticles atop the disc allowed more precise sighting (see theodolite). Levels and calibrated circles allowed the measurement of vertical angles. Verniers allowed measurement to a fraction of a degree, such as with a turn-of-the-century transit. The plane table provided a graphical method of recording and measuring angles, which reduced the amount of mathematics required. In 1829 Francis Ronalds invented a reflecting instrument for recording angles graphically by modifying the octant. By observing the bearing from every vertex in a figure, a surveyor can measure around the figure. The final observation will be between the two points first observed, except with a 180° difference. This is called a close. If the first and last bearings are different, this shows the error in the survey, called the angular misclose. The surveyor can use this information to prove that the work meets the expected standards. Leveling The simplest method for measuring height is with an altimeter using air pressure to find the height. When more precise measurements are needed, means like precise levels (also known as differential leveling) are used. When precise leveling, a series of measurements between two points are taken using an instrument and a measuring rod. Differences in height between the measurements are added and subtracted in a series to get the net difference in elevation between the two endpoints. With the Global Positioning System (GPS), elevation can be measured with satellite receivers. Usually, GPS is somewhat less accurate than traditional precise leveling, but may be similar over long distances. When using an optical level, the endpoint may be out of the effective range of the instrument. There may be obstructions or large changes of elevation between the endpoints. In these situations, extra setups are needed. Turning is a term used when referring to moving the level to take an elevation shot from a different location. To "turn" the level, one must first take a reading and record the elevation of the point the rod is located on. While the rod is being kept in exactly the same location, the level is moved to a new location where the rod is still visible. A reading is taken from the new location of the level and the height difference is used to find the new elevation of the level gun, which is why this method is referred to as differential levelling. This is repeated until the series of measurements is completed. The level must be horizontal to get a valid measurement. Because of this, if the horizontal crosshair of the instrument is lower than the base of the rod, the surveyor will not be able to sight the rod and get a reading. The rod can usually be raised up to high, allowing the level to be set much higher than the base of the rod. Determining position The primary way of determining one's position on the Earth's surface when no known positions are nearby is by astronomic observations. Observations to the Sun, Moon and stars could all be made using navigational techniques. Once the instrument's position and bearing to a star is determined, the bearing can be transferred to a reference point on Earth. The point can then be used as a base for further observations. 
Survey-accurate astronomic positions were difficult to observe and calculate and so tended to be a base off which many other measurements were made. Since the advent of the GPS system, astronomic observations are rare as GPS allows adequate positions to be determined over most of the surface of the Earth. Reference networks Few survey positions are derived from the first principles. Instead, most surveys points are measured relative to previously measured points. This forms a reference or control network where each point can be used by a surveyor to determine their own position when beginning a new survey. Survey points are usually marked on the earth's surface by objects ranging from small nails driven into the ground to large beacons that can be seen from long distances. The surveyors can set up their instruments in this position and measure to nearby objects. Sometimes a tall, distinctive feature such as a steeple or radio aerial has its position calculated as a reference point that angles can be measured against. Triangulation is a method of horizontal location favoured in the days before EDM and GPS measurement. It can determine distances, elevations and directions between distant objects. Since the early days of surveying, this was the primary method of determining accurate positions of objects for topographic maps of large areas. A surveyor first needs to know the horizontal distance between two of the objects, known as the baseline. Then the heights, distances and angular position of other objects can be derived, as long as they are visible from one of the original objects. High-accuracy transits or theodolites were used, and angle measurements were repeated for increased accuracy. See also Triangulation in three dimensions. Offsetting is an alternate method of determining the position of objects, and was often used to measure imprecise features such as riverbanks. The surveyor would mark and measure two known positions on the ground roughly parallel to the feature, and mark out a baseline between them. At regular intervals, a distance was measured at right angles from the first line to the feature. The measurements could then be plotted on a plan or map, and the points at the ends of the offset lines could be joined to show the feature. Traversing is a common method of surveying smaller areas. The surveyors start from an old reference mark or known position and place a network of reference marks covering the survey area. They then measure bearings and distances between the reference marks, and to the target features. Most traverses form a loop pattern or link between two prior reference marks so the surveyor can check their measurements. Datum and coordinate systems Many surveys do not calculate positions on the surface of the Earth, but instead, measure the relative positions of objects. However, often the surveyed items need to be compared to outside data, such as boundary lines or previous survey's objects. The oldest way of describing a position is via latitude and longitude, and often a height above sea level. As the surveying profession grew it created Cartesian coordinate systems to simplify the mathematics for surveys over small parts of the Earth. The simplest coordinate systems assume that the Earth is flat and measure from an arbitrary point, known as a 'datum' (singular form of data). The coordinate system allows easy calculation of the distances and direction between objects over small areas. Large areas distort due to the Earth's curvature. 
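As an example of such a calculation, the sketch below derives the plane distance and grid bearing between two points from their easting and northing coordinates; the coordinates are made-up values, and the bearing is reckoned clockwise from grid north.

import math

def distance_and_bearing(e1: float, n1: float, e2: float, n2: float):
    """Plane distance and grid bearing (degrees clockwise from north) from point 1 to point 2."""
    d_east, d_north = e2 - e1, n2 - n1
    distance = math.hypot(d_east, d_north)
    bearing = math.degrees(math.atan2(d_east, d_north)) % 360.0
    return distance, bearing

# Example with made-up grid coordinates (easting, northing) in metres.
dist, brg = distance_and_bearing(1000.0, 2000.0, 1250.0, 2433.0)
print(f"distance = {dist:.3f} m, bearing = {brg:.4f} deg")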
North is often defined as true north at the datum. For larger regions, it is necessary to model the shape of the Earth using an ellipsoid or a geoid. Many countries have created coordinate-grids customized to lessen error in their area of the Earth. Errors and accuracy A basic tenet of surveying is that no measurement is perfect, and that there will always be a small amount of error. There are three classes of survey errors: Gross errors or blunders: Errors made by the surveyor during the survey. Upsetting the instrument, misaiming a target, or writing down a wrong measurement are all gross errors. A large gross error may reduce the accuracy to an unacceptable level. Therefore, surveyors use redundant measurements and independent checks to detect these errors early in the survey. Systematic: Errors that follow a consistent pattern. Examples include effects of temperature on a chain or EDM measurement, or a poorly adjusted spirit-level causing a tilted instrument or target pole. Systematic errors that have known effects can be compensated or corrected. Random: Random errors are small unavoidable fluctuations. They are caused by imperfections in measuring equipment, eyesight, and conditions. They can be minimized by redundancy of measurement and avoiding unstable conditions. Random errors tend to cancel each other out, but checks must be made to ensure they are not propagating from one measurement to the next. Surveyors avoid these errors by calibrating their equipment, using consistent methods, and by good design of their reference network. Repeated measurements can be averaged and any outlier measurements discarded. Independent checks like measuring a point from two or more locations or using two different methods are used, and errors can be detected by comparing the results of two or more measurements, thus utilizing redundancy. Once the surveyor has calculated the level of the errors in his or her work, it is adjusted. This is the process of distributing the error between all measurements. Each observation is weighted according to how much of the total error it is likely to have caused and part of that error is allocated to it in a proportional way. The most common methods of adjustment are the Bowditch method, also known as the compass rule, and the principle of least squares method. The surveyor must be able to distinguish between accuracy and precision. In the United States, surveyors and civil engineers use units of feet wherein a survey foot breaks down into 10ths and 100ths. Many deed descriptions containing distances are often expressed using these units (125.25 ft). On the subject of accuracy, surveyors are often held to a standard of one one-hundredth of a foot; about 1/8 inch. Calculation and mapping tolerances are much smaller wherein achieving near-perfect closures are desired. Though tolerances will vary from project to project, in the field and day to day usage beyond a 100th of a foot is often impractical. Types Local organisations or regulatory bodies class specializations of surveying in different ways. Broad groups are: As-built survey: a survey that documents the location of recently constructed elements of a construction project. As-built surveys are done for record, completion evaluation and payment purposes. An as-built survey is also known as a 'works as executed survey'. As built surveys are often presented in red or redline and laid over existing plans for comparison with design information. 
Cadastral or boundary surveying: a survey that establishes or re-establishes boundaries of a parcel using a legal description. It involves the setting or restoration of monuments or markers at the corners or along the boundaries of a parcel. These take the form of iron rods, pipes, or concrete monuments in the ground, or nails set in concrete or asphalt. The ALTA/ACSM Land Title Survey is a standard proposed by the American Land Title Association and the American Congress on Surveying and Mapping. It incorporates elements of the boundary survey, mortgage survey, and topographic survey. Control surveying: Control surveys establish reference points to use as starting positions for future surveys. Most other forms of surveying will contain elements of control surveying. Construction surveying and engineering surveying: topographic, layout, and as-built surveys associated with engineering design. They often need geodetic computations beyond normal civil engineering practice. Deformation survey: a survey to determine if a structure or object is changing shape or moving. First the positions of points on an object are found. A period of time is allowed to pass and the positions are then re-measured and calculated. Then a comparison between the two sets of positions is made. Dimensional control survey: This is a type of survey conducted in or on a non-level surface. Common in the oil and gas industry to replace old or damaged pipes on a like-for-like basis, the advantage of a dimensional control survey is that the instrument used to conduct the survey does not need to be level. This is useful in the off-shore industry, as not all platforms are fixed and are thus subject to movement. Foundation survey: a survey done to collect the positional data on a foundation that has been poured and is cured. This is done to ensure that the foundation was constructed in the location, and at the elevation, authorized in the plot plan, site plan, or subdivision plan. Hydrographic survey: a survey conducted with the purpose of mapping the shoreline and bed of a body of water. Used for navigation, engineering, or resource management purposes. Leveling: either finds the elevation of a given point or establishes a point at a given elevation. LOMA survey: Survey to change the base flood line, removing property from a special flood hazard area (SFHA). Measured survey: a building survey to produce plans of the building. Such a survey may be conducted before renovation works, for commercial purposes, or at the end of the construction process. Mining surveying: Mining surveying includes directing the digging of mine shafts and galleries and the calculation of volume of rock. It uses specialised techniques due to the constraints on survey geometry, such as vertical shafts and narrow passages. Mortgage survey: A mortgage survey or physical survey is a simple survey that delineates land boundaries and building locations. It checks for encroachment, building setback restrictions and shows nearby flood zones. In many places a mortgage survey is a precondition for a mortgage loan. Photographic control survey: A survey that creates reference marks visible from the air to allow aerial photographs to be rectified. Stakeout, layout or setout: an element of many other surveys where the calculated or proposed position of an object is marked on the ground. This can be temporary or permanent. This is an important component of engineering and cadastral surveying.
Structural survey: a detailed inspection to report upon the physical condition and structural stability of a building or structure. It highlights any work needed to maintain it in good repair. Subdivision: A boundary survey that splits a property into two or more smaller properties. Topographic survey: a survey of the positions and elevations of points and objects on a land surface, presented as a topographic map with contour lines. Existing conditions: Similar to a topographic survey but instead focuses more on the specific location of key features and structures as they exist at that time within the surveyed area rather than primarily focusing on the elevation, often used alongside architectural drawings and blueprints to locate or place building structures. Underwater survey: a survey of an underwater site, object, or area. Plane vs. geodetic surveying Based on the considerations and true shape of the Earth, surveying is broadly classified into two types. Plane surveying assumes the Earth is flat. Curvature and spheroidal shape of the Earth is neglected. In this type of surveying all triangles formed by joining survey lines are considered as plane triangles. It is employed for small survey works where errors due to the Earth's shape are too small to matter. In geodetic surveying the curvature of the Earth is taken into account while calculating reduced levels, angles, bearings and distances. This type of surveying is usually employed for large survey works. Survey works up to 100 square miles (260 square kilometers ) are treated as plane and beyond that are treated as geodetic. In geodetic surveying necessary corrections are applied to reduced levels, bearings and other observations. On the basis of the instrument used Chain Surveying Compass Surveying Plane table Surveying Levelling Theodolite Surveying Traverse Surveying Tacheometric Surveying Aerial Surveying Profession The basic principles of surveying have changed little over the ages, but the tools used by surveyors have evolved. Engineering, especially civil engineering, often needs surveyors. Surveyors help determine the placement of roads, railways, reservoirs, dams, pipelines, retaining walls, bridges, and buildings. They establish the boundaries of legal descriptions and political divisions. They also provide advice and data for geographical information systems (GIS) that record land features and boundaries. Surveyors must have a thorough knowledge of algebra, basic calculus, geometry, and trigonometry. They must also know the laws that deal with surveys, real property, and contracts. Most jurisdictions recognize three different levels of qualification: Survey assistants or chainmen are usually unskilled workers who help the surveyor. They place target reflectors, find old reference marks, and mark points on the ground. The term 'chainman' derives from past use of measuring chains. An assistant would move the far end of the chain under the surveyor's direction. Survey technicians often operate survey instruments, run surveys in the field, do survey calculations, or draft plans. A technician usually has no legal authority and cannot certify his work. Not all technicians are qualified, but qualifications at the certificate or diploma level are available. Licensed, registered, or chartered surveyors usually hold a degree or higher qualification. They are often required to pass further exams to join a professional association or to gain certifying status. Surveyors are responsible for planning and management of surveys. 
They have to ensure that their surveys, or surveys performed under their supervision, meet the legal standards. Many principals of surveying firms hold this status. Related professions include cartographers, hydrographers, geodesists, photogrammetrists, and topographers, as well as civil engineers and geomatics engineers. Licensing Licensing requirements vary with jurisdiction, and are commonly consistent within national borders. Prospective surveyors usually have to receive a degree in surveying, followed by a detailed examination of their knowledge of surveying law and principles specific to the region they wish to practise in, and undergo a period of on-the-job training or portfolio building before they are awarded a license to practise. Licensed surveyors usually receive a post-nominal, which varies depending on where they qualified. The system has replaced older apprenticeship systems. A licensed land surveyor is generally required to sign and seal all plans. The state dictates the format, showing their name and registration number. In many jurisdictions, surveyors must mark their registration number on survey monuments when setting boundary corners. Monuments take the form of capped iron rods, concrete monuments, or nails with washers. Surveying institutions Most countries' governments regulate at least some forms of surveying. Their survey agencies establish regulations and standards. Standards control accuracy, surveying credentials, monumentation of boundaries and maintenance of geodetic networks. Many nations devolve this authority to regional entities or states/provinces. Cadastral surveys tend to be the most regulated because of the permanence of the work. Lot boundaries established by cadastral surveys may stand for hundreds of years without modification. Most jurisdictions also have a form of professional institution representing local surveyors. These institutes often endorse or license potential surveyors, as well as set and enforce ethical standards. The largest institution is the International Federation of Surveyors (abbreviated FIG, from its French name, Fédération Internationale des Géomètres). It represents the survey industry worldwide. Building surveying Most English-speaking countries consider building surveying a distinct profession. They have their own professional associations and licensing requirements. A building surveyor can provide technical building advice on existing buildings, new buildings, design, and compliance with regulations such as planning and building control. A building surveyor normally acts on behalf of his or her client, ensuring that their vested interests remain protected. The Royal Institution of Chartered Surveyors (RICS) is a world-recognised governing body for those working within the built environment. Cadastral surveying One of the primary roles of the land surveyor is to determine the boundary of real property on the ground. The surveyor must determine where the adjoining landowners wish to put the boundary. The boundary is established in legal documents and plans prepared by attorneys, engineers, and land surveyors. The surveyor then puts monuments on the corners of the new boundary. They might also find or resurvey the corners of the property monumented by prior surveys. Cadastral land surveyors are licensed by governments. The cadastral survey branch of the Bureau of Land Management (BLM) conducts most cadastral surveys in the United States.
They consult with Forest Service, National Park Service, Army Corps of Engineers, Bureau of Indian Affairs, Fish and Wildlife Service, Bureau of Reclamation, and others. The BLM used to be known as the United States General Land Office (GLO). In states organized per the Public Land Survey System (PLSS), surveyors must carry out BLM cadastral surveys under that system. Cadastral surveyors often have to work around changes to the earth that obliterate or damage boundary monuments. When this happens, they must consider evidence that is not recorded on the title deed. This is known as extrinsic evidence. Quantity surveying Quantity surveying is a profession that deals with the costs and contracts of construction projects. A quantity surveyor is an expert in estimating the costs of materials, labor, and time needed for a project, as well as managing the financial and legal aspects of the project. A quantity surveyor can work for either the client or the contractor, and can be involved in different stages of the project, from planning to completion. Quantity surveyors are also known as Chartered Surveyors in the UK. Notable surveyors Some U.S. Presidents were land surveyors. George Washington and Abraham Lincoln surveyed colonial or frontier territories early in their career, prior to serving in office. Ferdinand Rudolph Hassler is considered the "father" of geodetic surveying in the U.S. David T. Abercrombie practiced land surveying before starting an outfitter store of excursion goods. The business would later turn into Abercrombie & Fitch lifestyle clothing store. Percy Harrison Fawcett was a British surveyor that explored the jungles of South America attempting to find the Lost City of Z. His biography and expeditions were recounted in the book The Lost City of Z and were later adapted on film screen. Inō Tadataka produced the first map of Japan using modern surveying techniques starting in 1800, at the age of 55. See also Types and methods Photogrammetry, a method of collecting information using aerial photography and satellite images Cadastral surveyor, used to document land ownership by the production of documents, diagrams, plats, and maps Dominion Land Survey, the method used to divide most of Western Canada into one-square-mile sections for agricultural and other purposes Public Land Survey System, a method used in the United States to survey and identify land parcels Survey township, a square unit of land, six miles (~9.7 km) on a side, done by the U.S. Public Land Survey System Construction surveying, the locating of structures relative to a reference line, used in the construction of buildings, roads, mines, and tunnels Deviation survey, used in the oil industry to measure a borehole's departure from the vertical Geospatial survey organizations Survey of India, India's central agency in charge of mapping and surveying Ordnance Survey, a national mapping agency for Great Britain U.S. National Geodetic Survey, performing geographic surveys as part of the U.S. Department of Commerce United States Coast and Geodetic Survey, a former surveying agency of the United States Government Other References Further reading Keay J (2000), The Great Arc: The Dramatic Tale of How India was Mapped and Everest was Named, Harper Collins, 182pp, . Pugh J C (1975), Surveying for Field Scientists, Methuen, 230pp, Genovese I (2005), Definitions of Surveying and Associated Terms, ACSM, 314pp, . External links Géomètres sans Frontières : Association de géometres pour aide au développement. 
NGO Surveyors without borders The National Museum of Surveying The Home of the National Museum of Surveying in Springfield, Illinois Land Surveyors United Support Network Global social support network featuring surveyor forums, instructional videos, industry news and support groups based on geolocation. Natural Resources Canada – Surveying Good overview of surveying with references to construction surveys, cadastral surveys, photogrammetry surveys, mining surveys, hydrographic surveys, route surveys, control surveys and topographic surveys Table of Surveying, 1728 Cyclopaedia Surveying & Triangulation The History of Surveying And Survey Equipment NCEES National Council of Examiners for Engineering and Surveying (NCEES) NSPS National Society of Professional Surveyors (NSPS) Ground Penetrating Radar FAQ Using Ground Penetrating Radar for Land Surveying Survey Earth A Global event for professional land surveyors and students to remeasure planet in a single day during summer solstice as a community of land surveyors. Surveyors – Occupational Employment Statistics Public Land Survey System Foundation (2009), Manual of Surveying Instructions For the Survey of the Public Lands of the United States Ancient Egyptian technology Civil engineering Land use
Surveying
[ "Engineering" ]
8,247
[ "Construction", "Surveying", "Civil engineering" ]
60,913
https://en.wikipedia.org/wiki/Stigmergy
Stigmergy ( ) is a mechanism of indirect coordination, through the environment, between agents or actions. The principle is that the trace left in the environment by an individual action stimulates the performance of a succeeding action by the same or different agent. Agents that respond to traces in the environment receive positive fitness benefits, reinforcing the likelihood of these behaviors becoming fixed within a population over time. Stigmergy is a form of self-organization. It produces complex, seemingly intelligent structures, without need for any planning, control, or even direct communication between the agents. As such it supports efficient collaboration between extremely simple agents, who may lack memory or individual awareness of each other. History The term "stigmergy" was introduced by French biologist Pierre-Paul Grassé in 1959 to refer to termite behavior. He defined it as: "Stimulation of workers by the performance they have achieved." It is derived from the Greek words στίγμα stigma "mark, sign" and ἔργον ergon "work, action", and captures the notion that an agent’s actions leave signs in the environment, signs that it and other agents sense and that determine and incite their subsequent actions. Later on, a distinction was made between the stigmergic phenomenon, which is specific to the guidance of additional work, and the more general, non-work specific incitation, for which the term sematectonic communication was coined by E. O. Wilson, from the Greek words σῆμα sema "sign, token", and τέκτων tecton "craftsman, builder": "There is a need for a more general, somewhat less clumsy expression to denote the evocation of any form of behavior or physiological change by the evidences of work performed by other animals, including the special case of the guidance of additional work." Stigmergy is now one of the key concepts in the field of swarm intelligence. Stigmergic behavior in non-human organisms Stigmergy was first observed in social insects. For example, ants exchange information by laying down pheromones (the trace) on their way back to the nest when they have found food. In that way, they collectively develop a complex network of trails, connecting the nest in an efficient way to various food sources. When ants come out of the nest searching for food, they are stimulated by the pheromone to follow the trail towards the food source. The network of trails functions as a shared external memory for the ant colony. In computer science, this general method has been applied in a variety of techniques called ant colony optimization, which search for solutions to complex problems by depositing "virtual pheromones" along paths that appear promising. In the field of artificial neural networks, stigmergy can be used as a computational memory. Federico Galatolo showed that a stigmergic memory can achieve the same performances of more complex and well established neural networks architectures like LSTM. Other eusocial creatures, such as termites, use pheromones to build their complex nests by following a simple decentralized rule set. Each insect scoops up a 'mudball' or similar material from its environment, infuses the ball with pheromones, and deposits it on the ground, initially in a random spot. However, termites are attracted to their nestmates' pheromones and are therefore more likely to drop their own mudballs on top of their neighbors'. The larger the heap of mud becomes, the more attractive it is, and therefore the more mud will be added to it (positive feedback). 
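A toy simulation, not taken from the source, of this deposit-and-reinforce rule; the number of sites, deposits, and the attraction strength are arbitrary assumptions, but with even a mild preference for larger heaps the initially random deposits concentrate onto a few sites:

```python
import random

def simulate_deposits(sites=20, deposits=500, attraction=5.0):
    """Minimal sketch of the positive-feedback rule described above: each
    'mudball' is dropped at a site with probability proportional to
    1 + attraction * (material already deposited there)."""
    heaps = [0] * sites
    for _ in range(deposits):
        weights = [1.0 + attraction * h for h in heaps]
        site = random.choices(range(sites), weights=weights)[0]
        heaps[site] += 1
    return heaps

print(simulate_deposits())  # most material typically ends up piled on a few sites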
Over time this leads to the construction of pillars, arches, tunnels and chambers. Stigmergy has been observed in bacteria, various species of which differentiate into distinct cell types and which participate in group behaviors that are guided by sophisticated temporal and spatial control systems. Spectacular examples of multicellular behavior can be found among the myxobacteria. Myxobacteria travel in swarms containing many cells kept together by intercellular molecular signals. Most myxobacteria are predatory: individuals benefit from aggregation as it allows accumulation of extracellular enzymes which are used to digest prey microorganisms. When nutrients are scarce, myxobacterial cells aggregate into fruiting bodies, within which the swarming cells transform themselves into dormant myxospores with thick cell walls. The fruiting process is thought to benefit myxobacteria by ensuring that cell growth is resumed with a group (swarm) of myxobacteria, rather than isolated cells. Similar life cycles have developed among the cellular slime molds. The best known of the myxobacteria, Myxococcus xanthus and Stigmatella aurantiaca, are studied in various laboratories as prokaryotic models of development. Analysis of human behavior Stigmergy studied in eusocial creatures and physical systems, has been proposed as a model of analyzing some robotics systems, multi-agent systems, communication in computer networks, and online communities. On the Internet there are many collective projects where users interact only by modifying local parts of their shared virtual environment. Wikipedia is an example of this. The massive structure of information available in a wiki, or an open source software project such as the FreeBSD kernel could be compared to a termite nest; one initial user leaves a seed of an idea (a mudball) which attracts other users who then build upon and modify this initial concept, eventually constructing an elaborate structure of connected thoughts. In addition the concept of stigmergy has also been used to describe how cooperative work such as building design may be integrated. Designing a large contemporary building involves a large and diverse network of actors (e.g. architects, building engineers, static engineers, building services engineers). Their distributed activities may be partly integrated through practices of stigmergy. Analysis of human social movements The rise of open source software in the 21st century has disrupted the business models of some proprietary software providers, and open content projects like Wikipedia have threatened the business models of companies like Britannica. Researchers have studied collaborative open source projects, arguing they provide insights into the emergence of large-scale peer production and the growth of gift economy. Stigmergic society Heather Marsh, associated with the Occupy Movement, Wikileaks, and Anonymous, has proposed a new social system where competition as a driving force would be replaced with a more collaborative society. This proposed society would not use representative democracy but new forms of idea and action based governance and collaborative methods including stigmergy. "With stigmergy, an initial idea is freely given, and the project is driven by the idea, not by a personality or group of personalities. No individual needs permission (competitive) or consensus (cooperative) to propose an idea or initiate a project." Some at the Hong Kong Umbrella Movement in 2014 were quoted recommending stigmergy as a way forward. 
See also Ant mill Biosemiotics Extended mind thesis Path dependence Spontaneous order Watchmaker analogy r/place References Further reading Systems theory Self-organization
Stigmergy
[ "Mathematics" ]
1,466
[ "Self-organization", "Dynamical systems" ]
60,920
https://en.wikipedia.org/wiki/Tetrahydrocannabinol
Tetrahydrocannabinol (THC) is a cannabinoid found in cannabis. It is the principal psychoactive constituent of cannabis and one of at least 113 total cannabinoids identified in the plant. Although the chemical formula for THC (C21H30O2) describes multiple isomers, the term THC usually refers to the delta-9-THC isomer with chemical name (−)-trans-Δ9-tetrahydrocannabinol. It is a colorless oil. Medical uses THC, referred to as dronabinol in the pharmaceutical context, is approved in the United States as a capsule or solution to relieve chemotherapy-induced nausea and vomiting and HIV/AIDS-induced anorexia. THC is an active ingredient in nabiximols, a specific extract of Cannabis that was approved as a botanical drug in the United Kingdom in 2010 as a mouth spray for people with multiple sclerosis to alleviate neuropathic pain, spasticity, overactive bladder, and other symptoms. Nabiximols (as Sativex) is available as a prescription drug in Canada. In 2021, nabiximols was approved for medical use in Ukraine. Effects Effects of THC include red eyes, dry mouth, drowsiness, short-term memory impairment, difficulty concentrating, ataxia, increased appetite, anxiety, paranoia, psychosis (i.e., hallucinations, delusions), decreased motivation, and altered perception of time, among others. Chronic usage of THC may result in cannabinoid hyperemesis syndrome (CHS), a condition characterized by cyclic nausea, vomiting, and abdominal pain that may persist for months to years after discontinuation. Overdose The median lethal dose of THC in humans is not fully known as there is conflicting evidence. A 1972 study gave up to 90 mg/kg of THC to dogs and monkeys without any lethal effects. Some rats died within 72 hours after a dose of up to 36 mg/kg. A 2014 case study based on the toxicology reports and related testimony in two separate cases gave the median lethal dose in humans at 30 mg/kg (2.1 grams THC for a person who weighs 70 kg; 154 lb; 11 stone), observing cardiovascular death in the one otherwise healthy subject of the two cases studied. A different 1972 study gave the median lethal dose for intravenous THC in mice and rats at 30–40 mg/kg. Interactions Formal drug–drug interaction studies with THC have not been conducted, and data on interactions are limited. The elimination half-life of the barbiturate pentobarbital has been found to increase by 4 hours when concomitantly administered with oral THC. Pharmacology Mechanism of action The actions of Δ9-THC result from its partial agonist activity at the cannabinoid receptor CB1 (Ki = 40.7 nM), located mainly in the central nervous system, and the CB2 receptor (Ki = 36 nM), mainly expressed in cells of the immune system. The psychoactive effects of THC are primarily mediated by the activation of (mostly G protein-coupled) cannabinoid receptors, which results in a decrease in the concentration of the second messenger molecule cAMP through inhibition of adenylate cyclase. The presence of these specialized cannabinoid receptors in the brain led researchers to the discovery of endocannabinoids, such as anandamide and 2-arachidonoylglycerol (2-AG). THC is a lipophilic molecule and may bind non-specifically to a variety of entities in the brain and body, such as adipose tissue (fat). THC, as well as other cannabinoids that contain a phenol group, possess mild antioxidant activity sufficient to protect neurons against oxidative stress, such as that produced by glutamate-induced excitotoxicity.
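As a rough way to see what a Ki in the tens of nanomolar implies, one can plug concentrations into the Hill–Langmuir occupancy relation; this is an illustrative simplification (a single ligand, the reported Ki treated as an effective dissociation constant, no competing endocannabinoids), and the concentrations below are hypothetical rather than measured values:

```python
def fractional_occupancy(conc_nM, ki_nM):
    """Hill-Langmuir estimate of receptor occupancy for a single ligand,
    treating the reported Ki as an effective dissociation constant.
    Illustrative only; real occupancy depends on competing endogenous
    ligands and on local tissue concentrations."""
    return conc_nM / (conc_nM + ki_nM)

# Using the CB1 Ki of ~40.7 nM quoted above with hypothetical concentrations:
for conc in (10, 40.7, 200):
    print(f"{conc:>6} nM -> {fractional_occupancy(conc, 40.7):.0%} of CB1 occupied")
```

At a concentration equal to the Ki, half of the receptors are occupied, which is the usual reading of such binding constants.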
THC targets receptors in a manner far less selective than endocannabinoid molecules released during retrograde signaling, as the drug has a relatively low cannabinoid receptor affinity. THC is also limited in its efficacy compared to other cannabinoids due to its partial agonistic activity, as THC appears to result in greater downregulation of cannabinoid receptors than endocannabinoids. Furthermore, in populations of low cannabinoid receptor density, THC may even act to antagonize endogenous agonists that possess greater receptor efficacy. However, while THC's pharmacodynamic tolerance may limit the maximal effects of certain drugs, evidence suggests that this tolerance mitigates undesirable effects, thus enhancing the drug's therapeutic window. Recently, it has been shown that THC is also a partial autotaxin inhibitor, with an apparent IC50 of 407 ± 67 nM for the ATX-gamma isoform. THC was also co-crystallized with autotaxin, deciphering the binding interface of the complex. These results might explain some of the effects of THC on inflammation and neurological diseases, since autotaxin is responsible for LPA generation, a key lipid mediator involved in numerous diseases and physiological processes. However, clinical trials need to be performed in order to assess the importance of ATX inhibition by THC during medicinal cannabis consumption. Pharmacokinetics Absorption With oral administration of a single dose, THC is almost completely absorbed by the gastrointestinal tract. However, due to first-pass metabolism in the liver and the high lipid solubility of THC, only about 5 to 20% reaches circulation. Following oral administration, concentrations of THC and its major active metabolite 11-hydroxy-THC (11-OH-THC) peak after 0.5 to 4 hours, with median time to peak of 1.0 to 2.5 hours at different doses. In some cases, peak levels may not occur for as long as 6 hours. Concentrations of THC and 11-hydroxy-THC in the circulation are approximately equal with oral administration. There is a slight increase in dose proportionality in terms of peak and area-under-the-curve levels of THC with increasing oral doses over a range of 2.5 to 10 mg. A high-fat meal delays time to peak concentrations of oral THC by 4 hours on average and increases area-under-the-curve exposure by 2.9-fold, but peak concentrations are not significantly altered. A high-fat meal additionally increases absorption of THC via the lymphatic system and allows it to bypass first-pass metabolism. Consequently, a high-fat meal increases levels of 11-hydroxy-THC by only 25% and most of the increase in bioavailability is due to increased levels of THC. The bioavailability of THC when smoking or inhaling is approximately 25%, with a range of 2% to 56% (although most commonly between 10 and 35%). The large range and marked variability between individuals is due to variation in factors including product matrix, ignition temperature, and inhalational dynamics (e.g., number, duration, and intervals of inhalations, breath hold time, depth and volume of inhalations, size of inhaled particles, deposition site in the lungs). THC is detectable within seconds with inhalation and peak levels of THC occur after 3 to 10 minutes. Smoking or inhaling THC results in greater blood levels of THC and its metabolites and a much faster onset of action than oral administration of THC. Inhalation of THC bypasses the first-pass metabolism that occurs with oral administration. The bioavailability of THC with inhalation is increased in heavy users.
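A back-of-the-envelope comparison, purely illustrative and using only the approximate bioavailability figures quoted above together with an assumed 10 mg dose, shows how strongly route of administration changes systemic exposure:

```python
def systemic_exposure_mg(dose_mg, bioavailability_fraction):
    """Crude estimate: systemically available drug = dose x fraction that
    reaches circulation. Ignores active metabolites and kinetics."""
    return dose_mg * bioavailability_fraction

dose = 10.0  # assumed dose in mg, chosen only for illustration
oral_low = systemic_exposure_mg(dose, 0.05)    # lower end of the oral range
oral_high = systemic_exposure_mg(dose, 0.20)   # upper end of the oral range
inhaled = systemic_exposure_mg(dose, 0.25)     # typical inhaled figure
print(f"oral: {oral_low:.1f}-{oral_high:.1f} mg reach circulation; inhaled: ~{inhaled:.1f} mg")
```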
Transdermal administration of THC is limited by its extreme water insolubility. Efficient skin transport can only be obtained with permeation enhancement. Transdermal administration of THC, as with inhalation, avoids the first-pass metabolism that occurs with oral administration. Distribution The volume of distribution of THC is large and is approximately 10 L/kg (range 4–14 L/kg), which is due to its high lipid solubility. The plasma protein binding of THC and its metabolites is approximately 95 to 99%, with THC bound mainly to lipoproteins and to a lesser extent albumin. THC is rapidly distributed into well-vascularized organs such as lung, heart, brain, and liver, and is subsequently equilibrated into less vascularized tissue. It is extensively distributed into and sequestered by fat tissue due to its high lipid solubility, and is slowly released from this tissue. THC is able to cross the placenta and is excreted in human breast milk. Metabolism The metabolism of THC occurs mainly in the liver by cytochrome P450 enzymes CYP2C9, CYP2C19, and CYP3A4. CYP2C9 and CYP3A4 are the primary enzymes involved in metabolizing THC. Pharmacogenomic research has found that oral THC exposure is 2- to 3-fold greater in people with genetic variants associated with reduced CYP2C9 function. When taken orally, THC undergoes extensive first-pass metabolism in the liver, primarily via hydroxylation. The principal active metabolite of THC is 11-hydroxy-THC (11-OH-THC), which is formed by CYP2C9 and is psychoactive similarly to THC. This metabolite is further oxidized to 11-nor-9-carboxy-THC (THC-COOH). In animals, more than 100 metabolites of THC could be identified, but 11-OH-THC and THC-COOH are the predominant metabolites. Elimination More than 55% of THC is excreted in the feces and approximately 20% in the urine. The main metabolites in urine are the glucuronic acid ester of 11-OH-THC and free THC-COOH. In the feces, mainly 11-OH-THC was detected. Estimates of the elimination half-life of THC are variable. THC was reported to have a fast initial half-life of 6 minutes and a long terminal half-life of 22 hours in a population pharmacokinetic study. Conversely, the Food and Drug Administration label for dronabinol reports an initial half-life of 4 hours and a terminal half-life of 25 to 36 hours. Many studies report an elimination half-life of THC in the range of 20 to 30 hours. 11-Hydroxy-THC appears to have a similar terminal half-life to that of THC, for instance 12 to 36 hours relative to 25 to 36 hours in one study. The elimination half-life of THC is longer in heavy users. This may be due to slow redistribution from deep compartments such as fatty tissues, where THC accumulates with regular use. Chemistry THC is a molecule that combines polyketides (derived from acetyl CoA) and terpenoids (derived from isoprenylpyrophosphate). It is hydrophobic with very low solubility in water, but good solubility in many organic solvents. As a phytochemical, THC is assumed to be involved in the plant's evolutionary adaptation against insect predation, ultraviolet light, and environmental stress. The preparation of THC was reported in 1965. That procedure called for intramolecular alkyl lithium attack on a starting carbonyl to form the fused rings, and tosyl chloride-mediated formation of the ether. Biosynthesis In the Cannabis plant, THC occurs mainly as tetrahydrocannabinolic acid (THCA, 2-COOH-THC).
Geranyl pyrophosphate and olivetolic acid react, catalysed by an enzyme to produce cannabigerolic acid, which is cyclized by the enzyme THC acid synthase to give THCA. Over time, or when heated, THCA is decarboxylated, producing THC. The pathway for THCA biosynthesis is similar to that which produces the bitter acid humulone in hops. It can also be produced in genetically modified yeast. History Cannabidiol was isolated and identified from Cannabis sativa in 1940 by Roger Adams who was also the first to document the synthesis of THC (both Delta-9-THC and Delta-8-THC) from the acid-based cyclization of CBD in 1942. THC was first isolated from Cannabis by Raphael Mechoulam and Yehiel Gaoni in 1964. Society and culture Comparisons with medical cannabis Female cannabis plants contain at least 113 cannabinoids, including cannabidiol (CBD), thought to be the major anticonvulsant that helps people with multiple sclerosis, and cannabichromene (CBC), an anti-inflammatory which may contribute to the pain-killing effect of cannabis. Drug testing THC and its 11-OH-THC and THC-COOH metabolites can be detected and quantified in blood, urine, hair, oral fluid or sweat using a combination of immunoassay and chromatographic techniques as part of a drug use testing program or in a forensic investigation. There is ongoing research to create devices capable of detecting THC in breath. Regulation THC, along with its double bond isomers and their stereoisomers, is one of only three cannabinoids scheduled by the UN Convention on Psychotropic Substances (the other two are dimethylheptylpyran and parahexyl). It was listed under Schedule I in 1971, but reclassified to Schedule II in 1991 following a recommendation from the WHO. Based on subsequent studies, the WHO has recommended the reclassification to the less-stringent Schedule III. Cannabis as a plant is scheduled by the Single Convention on Narcotic Drugs (Schedule I and IV). It is specifically still listed under Schedule I by US federal law under the Controlled Substances Act for having "no accepted medical use" and "lack of accepted safety". However, dronabinol, a pharmaceutical form of THC, has been approved by the FDA as an appetite stimulant for people with AIDS and an antiemetic for people receiving chemotherapy under the trade names Marinol and Syndros. In 2003, the World Health Organization Expert Committee on Drug Dependence recommended transferring THC to Schedule IV of the convention, citing its medical uses and low abuse and addiction potential. In 2019, the Committee recommended transferring Δ9-THC to Schedule I of the Single Convention on Narcotic Drugs of 1961, but its recommendations were rejected by the United Nations Commission on Narcotic Drugs. In the United States As of 2023, 38 states, four territories, and the District of Columbia in the United States allow medical use of cannabis (in which THC is the primary psychoactive component), with the exception of Georgia, Idaho, Indiana, Iowa, Kansas, Nebraska, North Carolina, South Carolina, Tennessee, Texas, Wisconsin, and Wyoming. As of 2022, the U.S. federal government maintains cannabis as a schedule I controlled substance, while dronabinol is classified as Schedule III in capsule form (Marinol) and Schedule II in liquid oral form (Syndros). 
In Canada As of October 2018 when recreational use of cannabis was legalized in Canada, some 220 dietary supplements and 19 veterinary health products containing not more than 10 parts per million of THC extract were approved with general health claims for treating minor conditions. Research The status of THC as an illegal drug in most countries imposes restrictions on research material supply and funding, such as in the United States where the National Institute on Drug Abuse and Drug Enforcement Administration continue to control the sole federally-legal source of cannabis for researchers. Despite an August 2016 announcement that licenses would be provided to growers for supplies of medical marijuana, no such licenses were ever issued, despite dozens of applications. Although cannabis is legalized for medical uses in more than half of the states of the United States, no products have been approved for federal commerce by the Food and Drug Administration, a status that limits cultivation, manufacture, distribution, clinical research, and therapeutic applications. In April 2014, the American Academy of Neurology found evidence supporting the effectiveness of the cannabis extracts in treating certain symptoms of multiple sclerosis and pain, but there was insufficient evidence to determine effectiveness for treating several other neurological diseases. A 2015 review confirmed that medical marijuana was effective for treating spasticity and chronic pain, but caused numerous short-lasting adverse events, such as dizziness. Multiple sclerosis symptoms Spasticity. Based on the results of 3 high quality trials and 5 of lower quality, oral cannabis extract was rated as effective, and THC as probably effective, for improving people's subjective experience of spasticity. Oral cannabis extract and THC both were rated as possibly effective for improving objective measures of spasticity. Centrally mediated pain and painful spasms. Based on the results of 4 high quality trials and 4 low quality trials, oral cannabis extract was rated as effective, and THC as probably effective in treating central pain and painful spasms. Bladder dysfunction. Based on a single high quality study, oral cannabis extract and THC were rated as probably ineffective for controlling bladder complaints in multiple sclerosis Neurodegenerative disorders Huntington disease. No reliable conclusion could be drawn regarding the effectiveness of THC or oral cannabis extract in treating the symptoms of Huntington disease as the available trials were too small to reliably detect any difference Parkinson's disease. Based on a single study, oral CBD extract was rated probably ineffective in treating levodopa-induced dyskinesia in Parkinson's disease. Alzheimer's disease. A 2009 Cochrane Review found insufficient evidence to conclude whether cannabis products have any utility in the treatment of Alzheimer's disease. Other neurological disorders Tourette syndrome. The available data was determined to be insufficient to allow reliable conclusions to be drawn regarding the effectiveness of oral cannabis extract or THC in controlling tics. Cervical dystonia. Insufficient data was available to assess the effectiveness of oral cannabis extract of THC in treating cervical dystonia. Potential for toxicity Preliminary research indicates that prolonged exposure to high doses of THC may interfere with chromosomal stability, which may be hereditary as a factor affecting cell instability and cancer risk. 
The carcinogenicity of THC in the studied populations of so-called "heavy users" remains dubious due to various confounding variables, most significantly concurrent tobacco use. See also Cannabinoids Anandamide, 2-Arachidonoylglycerol, endogenous cannabinoid agonists Cannabidiol (CBD) Cannabinol (CBN), a metabolite of THC Dronabinol, the name of THC-based pharmaceutical (INN) HU-210, WIN 55,212-2, JWH-133, synthetic cannabinoid agonists (neocannabinoids) Hashish List of investigational analgesics Medical cannabis Dronabinol Epidiolex (prescription form of purified cannabidiol derived from hemp used for treating some rare neurological diseases) Sativex Effects of cannabis War on Drugs Vaping-associated pulmonary injury Cannabinoid hyperemesis syndrome (CHS) References External links U.S. National Library of Medicine: Drug Information Portal – Tetrahydrocannabinol Acetylcholinesterase inhibitors Amorphous solids Antiemetics Appetite stimulants Aromatase inhibitors Benzochromenes CB1 receptor agonists CB2 receptor agonists Entheogens Euphoriants Glycine receptor agonists Natural phenols metabolism Cannabis Phytocannabinoids Terpeno-phenolic compounds Transient receptor potential channel modulators Opioid receptor negative allosteric modulators
Tetrahydrocannabinol
[ "Physics" ]
4,221
[ "Amorphous solids", "Unsolved problems in physics" ]
60,925
https://en.wikipedia.org/wiki/Ship%20commissioning
Ship commissioning is the act or ceremony of placing a ship in active service and may be regarded as a particular application of the general concepts and practices of project commissioning. The term is most commonly applied to placing a warship in active duty with its country's military forces. The ceremonies involved are often rooted in centuries-old naval tradition. Ship naming and launching endow a ship hull with her identity, but many milestones remain before it is completed and considered ready to be designated a commissioned ship. The engineering plant, weapon and electronic systems, galley, and other equipment required to transform the new hull into an operating and habitable warship are installed and tested. The prospective commanding officer, ship's officers, the petty officers, and seamen who will form the crew report for training and familiarization with their new ship. Before commissioning, the new ship undergoes sea trials to identify any deficiencies needing correction. The preparation and readiness time between christening-launching and commissioning may be as much as three years for a nuclear-powered aircraft carrier to as brief as twenty days for a World War II landing ship. USS Monitor, of American Civil War fame, was commissioned less than three weeks after launch. Pre-commissioning Regardless of the type of ship in question, a vessel's journey towards commissioning in its nation's navy begins with a process known as sea trials. Sea trials usually take place some years after a vessel was laid down, and mark the interim step between the completion of a ship's construction and its official acceptance for service with its nation's navy. Sea trials begin when the ship is floated out of its dry dock (or more rarely, moved by a vehicle to the sea from its construction hangar, as was the case with the submarine ), at which time the initial crew for a ship (usually a skeleton crew composed of yard workers and naval personnel; in the modern era of increasingly complex ships the crew will include technical representatives of the ship builder and major system subcontractors) will assume command of the vessel in question. The ship is then sailed in littoral waters to test the design, equipment, and other ship specific systems to ensure that they work properly and can handle the equipment that they will be using in the future. Tests during this phase can include launching missiles from missile magazines, firing the ship's gun (if so equipped), conducting basic flight tests with rotary and fixed-wing aircraft that will be assigned to the ship, and various tests of the electronic and propulsion equipment. Often during this phase of testing problems arise relating to the state of the equipment on the ship, which can require returning to the builder's shipyard to address those concerns. In addition to problems with a ship's arms, armament, and equipment, the sea trial phase a ship undergoes prior to commissioning can identify issues with the ship's design that may need to be addressed before it can be accepted into service. During her sea trials in 1999 French Naval officials determined that the was too short to safely operate the E2C Hawkeye, resulting in her return to the builder's shipyard for enlargement. After a ship has successfully cleared its sea trial period, it will officially be accepted into service with its nation's navy. At this point, the ship in question will undergo a process of degaussing and/or deperming, to reduce the ship's magnetic signature. 
Commissioning Once a ship's sea trials are successfully completed, plans for the commissioning ceremony will take shape. Depending on the naval traditions of the nation in question, the commissioning ceremony may be an elaborately planned event with guests, the ship's future crew, and other persons of interest in attendance, or the nation may forgo a ceremony and administratively place the ship in commission. At a minimum, on the day on which the ship is to be commissioned the crew will report for duty aboard the ship and the commanding officer will read through the orders given for the ship and its personnel. If the ship's ceremony is a public affair, the Captain may make a speech to the audience, along with other VIPs as the ceremony dictates. Religious ceremonies, such as blessing the ship or the singing of traditional hymns or songs may also occur. Once a ship has been commissioned its final step toward becoming an active unit of the navy it serves is to report to its home port and officially load or accept any remaining equipment (such as munitions). Decommissioning To decommission a ship is to terminate its career in service in the armed forces of a nation. Unlike wartime ship losses, in which a vessel lost to enemy action is said to be struck, decommissioning confers that the ship has reached the end of its usable life and is being retired from a country's navy. Depending on the naval traditions of the country, a ceremony commemorating the decommissioning of the ship may take place, or the vessel may be removed administratively with minimal fanfare. The term "paid off" is alternatively used in British and Commonwealth contexts, originating in the age-of-sail practice of ending an officer's commission and paying crew wages once the ship completed its voyage. Ship decommissioning usually occurs some years after the ship was commissioned and is intended to serve as a means by which a vessel that has become too old or obsolete can be retired with honor from the country's armed forces. Decommissioning of the vessel may also occur due to treaty agreements (such as the Washington Naval Treaty) or for safety reasons (such as a ship's nuclear reactor and associated parts reaching the end of their service life), depending on the type of ship being decommissioned. In a limited number of cases a ship may be decommissioned if the vessel in question is judged to be damaged beyond economical repair, as was the case with , or . In rare cases, a navy or its associated country may recommission or leave a ship that is old or obsolete in commission with the regular force rather than decommissioning the vessel in question due to the historical significance or public sentiment for the ship in question. This is the case with the ships and . Vessels preserved in this manner typically do not relinquish their names to other, more modern ships that may be in the design, planning, or construction phase of the parent nation's navy. Prior to its formal decommissioning, the ship in question will begin the process of decommissioning by going through a preliminary step called inactivation or deactivation. During this phase, a ship will report to a naval facility owned by the country to permit the ship's crew to offload, remove, and dismantle the ship's weapons, ammunition, electronics, and other material that is judged to be of further use to the nation. 
The removed material from a ship usually ends up either rotating to another ship in the class with similar weapons and/or capabilities, or in storage pending a decision on equipment's fate. During this time a ship's crew may be thinned out via transfers and reassignments as the ongoing removal of equipment renders certain personnel (such as missile technicians or gun crews) unable to perform their duties on the ship in question. Certain aspects of a ship's deactivation – such as the removal or deactivation of a ship's nuclear weapons capabilities – may be governed by international treaties, which can result in the presence of foreign officials authorized to inspect the weapon or weapon system to ensure compliance with treaties. Other aspects of a ship's decommissioning, such as the reprocessing of nuclear fuel from a ship utilizing a nuclear reactor or the removal of hazardous materials from a ship, are handled by the government according to the nation's domestic policies. When a ship finishes its inactivation, it is then formally decommissioned, after which the ship is usually towed to a storage facility. In addition to the economic advantages of retiring a ship that has grown maintenance intensive or obsolete, the decommissioning frees up the name used by the ship, allowing vessels currently in the planning or building stages to inherit the name of that warship. Often, but not always, ships that are decommissioned spend the next few years in a reserve fleet before their ultimate fate is decided. Practices by nation United States Navy Commissioning in the early United States Navy under sail was attended by no ceremony. An officer designated to command a new ship received orders similar to those issued to Captain Thomas Truxtun in 1798: In Truxtun's time, the prospective commanding officer had responsibility for overseeing construction details, outfitting the ship, and recruiting his crew. When a captain determined that his new ship was ready to take to sea, he mustered the crew on deck, read his orders, broke the national ensign and distinctive commissioning pennant, and caused the watch to be set and the first entry to be made in the log. Thus, the ship was placed in commission. Commissionings were not public affairs, and unlike christening-and-launching ceremonies, were not recorded by newspapers. The first specific reference to commissioning located in naval records is a letter of November 6, 1863, from Secretary of the Navy Gideon Welles to all navy yards and stations. The Secretary directed: "Hereafter the commandants of navy yards and stations will inform the Department, by special report of the date when each vessel preparing for sea service at their respective commands, is placed in commission." Subsequently, various editions of Navy regulations mentioned the act of putting a ship in commission, but details of a commissioning ceremony were not prescribed. Through custom and usage, a fairly standard practice emerged, the essentials of which are outlined in current Navy regulations. Craft assigned to Naval Districts and shore bases for local use, such as harbor tugs and floating drydocks, are not usually placed in commission but are instead given an "in service" status. They do fly the national ensign, but not a commissioning pennant. In modern times, officers and crew members of a new warship are assembled on the quarterdeck or other suitable area. Formal transfer of the ship to the prospective commanding officer is done by the Chief of Naval Operations or his representative. 
The national anthem is played, the transferring officer reads the commissioning directive, the ensign is hoisted, and the commissioning pennant broken. The prospective commanding officer reads his orders, assumes command, and the first watch is set. Following, the sponsor is traditionally invited to give the first order to the ship's company: "Man our ship and bring her to life!", whereupon the ship's assigned crew would run on board and man the rails of the ship. In recent years, commissionings have become more public occasions. Most commonly assisted by a Commissioning Support Team (CST), the Prospective Commanding Officer and ship's crew, shipbuilder executives, and senior Navy representatives gather for a formal ceremony placing the ship in active service (in commission). Guests, including the ship's sponsor, are frequently invited to attend, and a prominent individual delivers a commissioning address. On May 3, 1975, more than 20,000 people witnessed the commissioning of at Norfolk, Virginia. The carrier's sponsor, daughter of Fleet Admiral Chester Nimitz, was introduced, and U.S. President Gerald R. Ford was the principal speaker. Regardless of the type of ship, the brief commissioning ceremony completes the cycle from christening and launching to bring the ship into full status as a warship of her nation. See also Shakedown cruise Taken on Strength Decommissioning of Russian nuclear-powered vessels Lists of ship commissionings and decommissionings References External links Navy Traditions and Customs from Naval Historical Center Photos from the 1986 commissioning of USS Samuel B. Roberts (FFG 58) Naval ceremonies Rituals attending construction
Ship commissioning
[ "Engineering" ]
2,401
[ "Construction", "Rituals attending construction" ]
60,930
https://en.wikipedia.org/wiki/Ceremonial%20ship%20launching
Ceremonial ship launching involves the performing of ceremonies associated with the process of transferring a vessel to the water. It is a nautical tradition in many cultures, dating back millennia, to accompany the physical process with ceremonies which have been observed as public celebration and a solemn blessing, usually but not always, in association with the launch itself. Ship launching imposes stresses on the ship not met during normal operation and in addition to the size and weight of the vessel represents a considerable engineering challenge as well as a public spectacle. The process also involves many traditions intended to invite good luck, such as christening by breaking a sacrificial bottle of champagne over the bow as the ship is named aloud and launched. Methods There are three principal methods of conveying a new ship from building site to water, only two of which are called "launching". The oldest, most familiar, and most widely used is the end-on launch, in which the vessel slides down an inclined slipway, usually stern first. With the side launch, the ship enters the water broadside. This method came into use in the 19th century on inland waters, rivers, and lakes, and was more widely adopted during World War II. The third method is float-out, used for ships that are built in basins or dry docks and then floated by admitting water into the dock. If launched in a restrictive waterway, drag chains are used to slow the ship speed to prevent it striking the opposite bank. Stern-first Normally, ways are arranged perpendicular to the shore line (or as nearly so as the water and maximum length of vessel allows) and the ship is built with its stern facing the water. Where the launch takes place into a narrow river, the building slips may be at a shallow angle rather than perpendicular, even though this requires a longer slipway when launching. Modern slipways take the form of a reinforced concrete mat of sufficient strength to support the vessel, with two "barricades" that extend well below the water level taking into account tidal variations. The barricades support the two launch ways. The vessel is built upon temporary cribbing that is arranged to give access to the hull's outer bottom and to allow the launchways to be erected under the complete hull. When it is time to prepare for launching, a pair of standing ways is erected under the hull and out onto the barricades. The surface of the ways is greased. (Tallow and whale oil were used as grease in sailing ship days.) A pair of sliding ways is placed on top, under the hull, and a launch cradle with bow and stern poppets is erected on these sliding ways. The weight of the hull is then transferred from the build cribbing onto the launch cradle. Provision is made to hold the vessel in place and then release it at the appropriate moment in the launching ceremony; common mechanisms include weak links designed to be cut at a signal and mechanical triggers controlled by a switch from the ceremonial platform. On launching, the vessel slides backwards down the slipway on the ways until it floats by itself. Sideways Some slipways are built so that the vessel is side-on to the water and is launched sideways. This is done where the limitations of the water channel would not allow lengthwise launching, but occupies a much greater length of shore. The Great Eastern designed by Brunel was built this way, as were many landing craft during World War II. This method requires many more sets of ways to support the weight of the ship. 
Air-bag Sometimes ships are launched using a series of inflated tubes underneath the hull, which deflate to cause a downward slope into the water. This procedure requires less permanent infrastructure and involves lower risk and cost. The airbags provide support to the hull of the ship and aid its launching motion into the water, so this method is arguably safer than other options such as sideways launching. These airbags are usually cylindrical in shape with hemispherical heads at both ends. Traditions Ancient A Babylonian narrative dating from the 3rd millennium BC describes the completion of a ship: Openings to the water I stopped; I searched for cracks and the wanting parts I fixed: Three sari of bitumen I poured over the outside; To the gods I caused oxen to be sacrificed. It is believed that ancient Egyptians, Greeks, and Romans called on their gods to protect seamen. Favor was invoked from the monarch of the seas—Poseidon in Greek mythology, Neptune in Roman mythology. Ship launching participants in ancient Greece wreathed their heads with olive branches, drank wine to honor the gods, and poured water on the new vessel as a symbol of blessing. Shrines were carried on board Greek and Roman ships, and this practice extended into the Middle Ages. The shrine was usually placed at the quarterdeck, an area which continues to have special ceremonial significance. Different peoples and cultures shaped the religious ceremonies surrounding a ship launching. Jews and Christians customarily used wine and water as they called upon God to safeguard them at sea. Intercession of the saints and the blessing of the church were asked by Christians. Ship launchings in the Ottoman Empire were accompanied by prayers to Allah, the sacrifice of sheep, and appropriate feasting. Chaplain Henry Teonge of Britain's Royal Navy left an interesting account of a warship launch, a "briganteen of 23 oars," by the Knights of Malta in 1675: Two fryers and an attendant went into the vessel, and kneeling down prayed halfe an houre, and layd their hands on every mast, and other places of the vessel, and sprinkled her all over with holy water. Then they came out and hoysted a pendent to signify she was a man of war; then at once thrust her into the water. Early Modern Age The liturgical aspects of ship christenings, or baptisms, continued in Catholic countries, while the Reformation seems to have put a stop to them for a time in Protestant Europe. By the 17th century, for example, English launchings were secular affairs. The christening party for the launch of the 64-gun ship of the line in 1610 included the Prince of Wales and famed naval constructor Phineas Pett, who was master shipwright at the Woolwich yard. Pett described the proceedings: The "standing cup" was a large cup fashioned of precious metal. When the ship began to slide down the ways, the presiding official took a ceremonial sip of wine from the cup, and poured the rest on the deck or over the bow. Usually the cup was thrown overboard and belonged to the lucky retriever. As navies grew larger and launchings more frequent, economy dictated that the costly cup be caught in a net for reuse at other launchings. Late in 17th-century Britain, the standing-cup ceremony was replaced by the practice of breaking a bottle across the bow. By country Launching could be said to mark the birth of a vessel; and people throughout history have performed launching ceremonies, in part to appeal for good fortune and the safety of each new vessel.
Canada In Canada, Aboriginal peoples perform ceremonies at the launching of vessels, alongside other launching customs. France French ship launchings and christenings in the 18th and early 19th centuries were accompanied by unique rites closely resembling marriage and baptismal ceremonies. A godfather for the new ship presented a godmother with a bouquet of flowers as both said the ship's name. No bottle was broken, but a priest pronounced the vessel's name and blessed it with holy water. India In India, ships have historically been launched with a Puja ceremony that dedicates the ship to a Hindu god or goddess, and seeks blessings for her and her sailors. Historically, Hindu priests would perform the puja ceremony at launch. In the 20th century, ships were launched with a lady breaking a coconut on the bow of the vessel, which is sometimes followed by a small Puja. Japan Japanese ship launchings incorporate silver axes which are thought to bring good luck and scare away evil. Japanese shipbuilders traditionally order the crafting of a special axe for each new vessel; and after the launching ceremony, they present the axe to the vessel's owner as a commemorative gift. The axe is used to cut the rope which tethers the ship to the place where she was built. United Kingdom Sponsors of British warships were customarily members of the royal family, senior naval officers, or Admiralty officials. A few civilians were invited to sponsor Royal Navy ships during the nineteenth century, and women became sponsors for the first time. In 1875, a religious element was returned to naval christenings by Princess Alexandra, wife of the Prince of Wales, when she introduced an Anglican choral service in the launching ceremony for battleship . The usage continues with the singing of Psalm 107 with its special meaning to mariners: They that go down to the sea in ships; That do business in great waters; These see the works of the Lord, and His wonders in the deep. In 1969, Queen Elizabeth II named the ocean liner Queen Elizabeth 2 after herself, instead of the older liner , by saying, "I name this ship Queen Elizabeth the Second. May God bless her and all who sail in her." On 4 July 2014, the Queen named the Royal Navy's new aircraft carrier with a bottle of single malt Scotch whisky from the Bowmore distillery on the island of Islay instead of champagne because the ship had been built and launched in Scotland. The Duchess of Rothesay similarly launched by pulling a lever which smashed a bottle of single malt Scotch whisky at the side of the ship. At the 2024 launching of CalMac ferry Glen Rosa, newly qualified welder Beth Atkinson named the ship and pulled a lever to similarly smash a bottle of single malt from the Ardgowan distillery at nearby Inverkip. Shipyard ephemera is a rich source of detail concerning a launch; such material was often produced for the audience of the day and then thrown away. Tyne & Wear Archives & Museums has many of these items from Tyne and Wear shipyards. A number can be seen in Commons. The 1900 piece for reproduced in this article lists a woman performing the launch. United States Ceremonial practices for christening and launching ships in the United States have their roots in Europe. Descriptions are not plentiful for launching American Revolutionary War naval vessels, but a local newspaper detailed the launch of Continental frigate at Portsmouth, New Hampshire, in May 1776: It was customary for the builders to celebrate a ship launching.
Rhode Island authorities were charged with overseeing construction of frigates and . They voted the sum of fifty dollars () to the master builder of each yard "to be expended in providing an entertainment for the carpenters that worked on the ships." Five pounds () was spent for lime juice for the launching festivities of frigate at Philadelphia, Pennsylvania, suggesting that the "entertainment" included a potent punch with lime juice as an ingredient. No mention has come to light of christening a Continental Navy ship during the American Revolution. The first ships of the Continental Navy were , , , and . These were former merchantmen, and their names were assigned during conversion and outfitting. Later, Congress authorized the construction of thirteen frigates, and no names were assigned until after four had launched. The first description that we have of an American warship christening is that of at Boston, October 21, 1797, famous as "Old Ironsides." Her sponsor was Captain James Sever, USN, who stood on the weather deck at the bow. "At fifteen minutes after twelve she commenced a movement into the water with such steadiness, majesty and exactness as to fill every heart with sensations of joy and delight." As Constitution ran out, Captain Sever broke a bottle of fine old Madeira over the heel of the bowsprit. Frigate had an interesting launching on April 10, 1800, at New York: As the 19th century progressed, American ship launchings continued to be festive occasions, but with no set ritual except that the sponsor(s) used some "christening fluid" as the ship received her name. Sloop of war was launched in 1828 and was "christened by a young lady of Portsmouth." This is the first known instance of a woman sponsoring a United States Navy vessel. The contemporaneous account does not name her. The first identified woman sponsor was Lavinia Fanning Watson, daughter of a prominent Philadelphian. She broke a bottle of wine and water over the bow of sloop-of-war at Philadelphia Navy Yard on August 22, 1846. Women as sponsors became increasingly the rule, but not universally so. As sloop-of-war "glided along the inclined plane" in 1846, "two young sailors, one stationed at each side of her head, anointed her with bottles, and named her as she left her cradle for the deep." As late as 1898, the torpedo boat was christened by the son of the builder. Wine is the traditional christening fluid, although numerous other liquids have been used. and were sent on their way in 1843 with whisky. Seven years later, "a bottle of best brandy was broken over the bow of steam sloop ." Steam frigate earned her place in naval history as Confederate States of America ironclad , and she was baptized with water from the Merrimack River. Admiral David Farragut's famous American Civil War flagship steam sloop was christened by three sponsors; two young ladies broke bottles of Connecticut River water and Hartford, Connecticut spring water, while a naval lieutenant completed the ceremony with a bottle of sea water. Champagne came into popular use as a christening fluid as the 19th century closed. A granddaughter of Secretary of the Navy Benjamin F. Tracy wet the bow of , the Navy's first steel battleship, with champagne at the New York Navy Yard on November 18, 1890. The effects of national prohibition on alcoholic beverages were reflected to some extent in ship christenings. Cruisers and , for example, were christened with water; the submarine V-6 with cider. 
However, battleship appropriately received her name with California wine in 1919. Champagne returned in 1922, but only for the launch of light cruiser . Rigid naval airships , , , and were built during the 1920s and early 1930s, carried on the Naval Vessel Register, and each was formally commissioned. The earliest First Lady of the United States to act as sponsor was Grace Coolidge who christened the airship Los Angeles. Lou Henry Hoover christened Akron in 1931, but the customary bottle was not used. Instead, the First Lady pulled a cord which opened a hatch in the airship's towering nose to release a flock of pigeons. Thousands of ships of every description came off the ways during World War II, the concerted effort of a mobilized American industry. The historic christening and launching ceremonies continued, but travel restrictions, other wartime considerations, and sheer numbers dictated that such occasions be less elaborate than those in the years before the war. On 15 December 1941, the United States Maritime Commission announced that all formal launching ceremonies would be discontinued for merchant ships being constructed under its authority, though simple informal ceremonies could continue without reimbursement to builders. In recent history, all U.S. Navy sponsors have been female. In addition to the ceremonial breaking of a champagne bottle on the bow, the sponsor remains in contact with the ship's crew and is involved in special events such as homecomings. Incidents sank moments after her launching at a shipyard in Govan, Glasgow, Scotland on 3 July 1883. As Daphne moved into the river, the anchors failed to stop the ship's forward progress. The starboard anchor moved only , but the port anchor was dragged . The current of the river caught Daphne and flipped it onto its port side, sinking it in deep water. 124 died including many young boys, some of whose relatives were there on shore. launched on 21 June 1898. Albion created a wave with her entry into the water after the Duchess of York christened her. The wave caused a stage to collapse on which 200 people were watching; it slid into a side creek, and 34 people drowned, mostly women and children. This was probably one of the first-ever ship launchings to be filmed. In 1907, the Italian ocean liner capsized and sank upon launch. In 2011, the luxury boat SS Jiugang sank at launch in Lanzhou, China. See also Ship class naming conventions United States ship naming conventions Russian ship naming conventions Japanese ship-naming conventions Hull classification symbol Ship commissioning Ship sponsor Lists of ship launches References Further reading Rodgers, Silvia The symbolism of ship launching in the Royal Navy (1983) (PhD thesis) External links Photos of the 8 Dec 1984 launching ceremony of the USS Samuel B. Roberts (FFsG 58) An online exhibit of ship launching ceremonies from the first half of the 20th Century Short video of ships being launched sideways Beginnings Ship launching Nautical terminology Rituals attending construction
Ceremonial ship launching
[ "Physics", "Engineering" ]
3,436
[ "Beginnings", "Physical quantities", "Time", "Rituals attending construction", "Construction", "Spacetime" ]
60,933
https://en.wikipedia.org/wiki/Triboelectric%20effect
The triboelectric effect (also known as triboelectricity, triboelectric charging, triboelectrification, or tribocharging) describes electric charge transfer between two objects when they contact or slide against each other. It can occur with different materials, such as the sole of a shoe on a carpet, or between two pieces of the same material. It is ubiquitous, and occurs with differing amounts of charge transfer (tribocharge) for all solid materials. There is evidence that tribocharging can occur between combinations of solids, liquids and gases, for instance liquid flowing in a solid tube or an aircraft flying through air. Often static electricity is a consequence of the triboelectric effect when the charge stays on one or both of the objects and is not conducted away. The term triboelectricity has been used to refer to the field of study or the general phenomenon of the triboelectric effect, or to the static electricity that results from it. When there is no sliding, tribocharging is sometimes called contact electrification, and any static electricity generated is sometimes called contact electricity. The terms are often used interchangeably, and may be confused. Triboelectric charge plays a major role in industries such as packaging of pharmaceutical powders, and in many processes such as dust storms and planetary formation. It can also increase friction and adhesion. While many aspects of the triboelectric effect are now understood and extensively documented, significant disagreements remain in the current literature about the underlying details. History The historical development of triboelectricity is interwoven with work on static electricity and electrons themselves. Experiments involving triboelectricity and static electricity occurred before the discovery of the electron. The name ēlektron (ἤλεκτρον) is Greek for amber, which is connected to the recording of electrostatic charging by Thales of Miletus around 585 BCE, and possibly others even earlier. The prefix tribo- (Greek for 'rub') refers to sliding, friction and related processes, as in tribology. From the axial age (8th to 3rd century BC) the attraction of materials due to static electricity by rubbing amber and the attraction of magnetic materials were considered to be similar or the same. There are indications that it was known both in Europe and outside it, for instance in China and other places. Syrian women used amber whorls in weaving and exploited the triboelectric properties, as noted by Pliny the Elder. The effect was mentioned in records from the medieval period. Archbishop Eustathius of Thessalonica, Greek scholar and writer of the 12th century, records that Woliver, king of the Goths, could draw sparks from his body. He also states that a philosopher was able, while dressing, to draw sparks from his clothes, similar to the report by Robert Symmer of his silk stocking experiments, which may be found in the 1759 Philosophical Transactions. It is generally considered that the first major scientific analysis was by William Gilbert in his publication De Magnete in 1600. He discovered that many more materials than amber, such as sulphur, wax and glass, could produce static electricity when rubbed, and that moisture prevented electrification. Others such as Sir Thomas Browne made important contributions slightly later, both in terms of materials and the first use of the word electricity in Pseudodoxia Epidemica. He noted that metals did not show triboelectric charging, perhaps because the charge was conducted away.
An important step was around 1663 when Otto von Guericke invented a machine that could automate triboelectric charge generation, making it much easier to produce more tribocharge; other electrostatic generators followed. One example is an electrostatic generator built by Francis Hauksbee the Younger. Another key development was in the 1730s when C. F. du Fay pointed out that there were two types of charge, which he named vitreous and resinous. These names corresponded to the glass (vitreous) rods and bituminous coal, amber, or sealing wax (resinous) used in du Fay's experiments. These names were used throughout the 19th century. The use of the terms positive and negative for types of electricity grew out of the independent work of Benjamin Franklin around 1747, in which he ascribed electricity to an over- or under-abundance of an electrical fluid. At about the same time Johan Carl Wilcke published in his 1757 PhD thesis a triboelectric series. In this work materials were listed in order of the polarity of charge separation when they are touched to or slid against one another. A material towards the bottom of the series, when touched to a material near the top of the series, will acquire a more negative charge. The first systematic analysis of triboelectricity is considered to be the work of Jean Claude Eugène Péclet in 1834. He studied triboelectric charging for a range of conditions such as the material, pressure and rubbing of surfaces. It was some time before there were further quantitative works by Owen in 1909 and Jones in 1915. The most extensive early set of experimental analyses was from 1914–1930 by the group of Professor Shaw, who laid much of the foundation of experimental knowledge. In a series of papers he: was one of the first to mention some of the failings of the triboelectric series, also showing that heat had a major effect on tribocharging; analyzed in detail where different materials would fall in a triboelectric series, at the same time pointing out anomalies; separately analyzed glass and solid elements and solid elements and textiles, carefully measuring both tribocharging and friction; analyzed charging due to air-blown particles; demonstrated that surface strain and relaxation played a critical role for a range of materials, and examined the tribocharging of many different elements with silica. Much of this work predates an understanding of solid state variations of energy levels with position, and also band bending. It was in the early 1950s in the work of authors such as Vick that these were taken into account along with concepts such as quantum tunnelling and behavior such as Schottky barrier effects, as well as including models such as asperities for contacts based upon the work of Frank Philip Bowden and David Tabor. Basic characteristics Triboelectric charging occurs when two materials are brought into contact and then separated, or slide against each other. An example is rubbing a plastic pen on a shirt sleeve made of cotton, wool, polyester, or the blended fabrics used in modern clothing. An electrified pen will attract and pick up pieces of paper less than a square centimeter, and will repel a similarly electrified pen. This repulsion is detectable by hanging both pens on threads and setting them near one another. Such experiments led to the theory of two types of electric charge, one being the negative of the other, with a simple sum respecting signs giving the total charge.
The electrostatic attraction of the charged plastic pen to neutral uncharged pieces of paper (for example) is due to induced dipoles in the paper. The triboelectric effect can be unpredictable because many details are often not controlled. Phenomena which do not have a simple explanation have been known for many years. For instance, as early as 1910, Jaimeson observed that for a piece of cellulose, the sign of the charge was dependent upon whether it was bent concave or convex during rubbing. The same behavior with curvature was reported in 1917 by Shaw, who noted that the effect of curvature with different materials made them either more positive or negative. In 1920, Richards pointed out that for colliding particles the velocity and mass played a role, not just what the materials were. In 1926, Shaw pointed out that with two pieces of identical material, the sign of the charge transfer from "rubber" to "rubbed" could change with time. There are other more recent experimental results which also do not have a simple explanation. For instance the work of Burgo and Erdemir, which showed that the sign of charge transfer reverses between when a tip is pushing into a substrate versus when it pulls out; the detailed work of Lee et al and Forward, Lacks and Sankaran and others measuring the charge transfer during collisions between particles of zirconia of different size but the same composition, with one size charging positive, the other negative; the observations using sliding or Kelvin probe force microscope of inhomogeneous charge variations between nominally identical materials. The details of how and why tribocharging occurs are not established science as of 2023. One component is the difference in the work function (also called the electron affinity) between the two materials. This can lead to charge transfer as, for instance, analyzed by Harper. As has been known since at least 1953, the contact potential is part of the process but does not explain many results, such as the ones mentioned in the last two paragraphs. Many studies have pointed out issues with the work function difference (Volta potential) as a complete explanation. There is also the question of why sliding is often important. Surfaces have many nanoscale asperities where the contact is taking place, which has been taken into account in many approaches to triboelectrification. Volta and Helmholtz suggested that the role of sliding was to produce more contacts per second. In modern terms, the idea is that electrons move many times faster than atoms, so the electrons are always in equilibrium when atoms move (the Born–Oppenheimer approximation). With this approximation, each asperity contact during sliding is equivalent to a stationary one; there is no direct coupling between the sliding velocity and electron motion. An alternative view (beyond the Born–Oppenheimer approximation) is that sliding acts as a quantum mechanical pump which can excite electrons to go from one material to another. A different suggestion is that local heating during sliding matters, an idea first suggested by Frenkel in 1941. Other papers have considered that local bending at the nanoscale produces voltages which help drive charge transfer via the flexoelectric effect. There are also suggestions that surface or trapped charges are important. More recently there have been attempts to include a full solid state description. 
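Before turning to the detailed explanations below, it helps to have a rough sense of the magnitudes behind the pen-and-paper observations described above. The Python sketch below compares the electrostatic pull of a small amount of tribocharge with the weight of a paper scrap. It is deliberately crude: the pen is treated as a point charge and the paper as if it carried an equal and opposite induced charge at a fixed separation, which overstates the force for an insulator such as paper, and the charge, separation and scrap size are all assumed values chosen only for illustration.

k = 8.99e9            # Coulomb constant, N*m^2/C^2
q = 10e-9             # tribocharge on the pen, C (~10 nC, assumed)
d = 0.005             # pen-to-paper separation, m (assumed)

# Crude image-charge-style estimate of the attractive force.
coulomb_force = k * q**2 / d**2

paper_grammage = 0.080        # kg/m^2 (80 g/m^2 office paper, assumed)
scrap_area = 1e-4             # a 1 cm^2 scrap, m^2
scrap_weight = paper_grammage * scrap_area * 9.81

print(f"attractive force ~ {coulomb_force:.1e} N")
print(f"scrap weight     ~ {scrap_weight:.1e} N")
# Even with this rough model the electrostatic force comfortably exceeds
# the weight of a sub-square-centimetre scrap, consistent with the
# everyday observation of a rubbed pen picking up bits of paper.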
Explanations and mechanisms From early work starting around the end of the 19th century a large amount of information is available about what, empirically, causes triboelectricity. While there is extensive experimental data on triboelectricity there is not as yet full scientific consensus on the source, or perhaps more probably the sources. Some aspects are established, and will be part of the full picture: Work function differences between the two materials. Local curvature, strain and roughness. The forces used during sliding, and the velocities when particles collide as well as the sizes. The electronic structure of the materials, and the crystallographic orientation of the two contacting materials. Surface or interface states, as well as environmental factors such as humidity. Triboelectric series An empirical approach to triboelectricity is a triboelectric series. This is a list of materials ordered by how they develop a charge relative to other materials on the list. Johan Carl Wilcke published the first one in a 1757 paper. The series was expanded by Shaw and Henniker by including natural and synthetic polymers, and included alterations in the sequence depending on surface and environmental conditions. Lists vary somewhat as to the order of some materials. Another triboelectric series based on measuring the triboelectric charge density of materials was proposed by the group of Zhong Lin Wang. The triboelectric charge density of the tested materials was measured with respect to liquid mercury in a glove box under well-defined conditions, with fixed temperature, pressure and humidity. It is known that this approach is too simple and unreliable. There are many cases where there are triangles: material A is positive when rubbed against B, B is positive when rubbed against C, and C is positive when rubbed against A, an issue mentioned by Shaw in 1914. This cannot be explained by a linear series; cyclic series are inconsistent with the empirical triboelectric series. Furthermore, there are many cases where charging occurs with contacts between two pieces of the same material. This has been modelled as a consequence of the electric fields from local bending (flexoelectricity). Work function differences In all materials there is a positive electrostatic potential from the positive atomic nuclei, partially balanced by a negative electrostatic potential of what can be described as a sea of electrons. The average potential is positive, what is called the mean inner potential (MIP). Different materials have different MIPs, depending upon the types of atoms and how close they are. At a surface the electrons also spill out a little into the vacuum, as analyzed in detail by Kohn and Liang. This leads to a dipole at the surface. Combined, the dipole and the MIP lead to a potential barrier for electrons to leave a material which is called the work function. A rationalization of the triboelectric series is that different members have different work functions, so electrons can go from the material with a small work function to one with a large. The potential difference between the two materials is called the Volta potential, also called the contact potential. Experiments have validated the importance of this for metals and other materials. However, because the surface dipoles vary for different surfaces of any solid the contact potential is not a universal parameter. By itself it cannot explain many of the results which were established in the early 20th century. 
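As a concrete illustration of the work-function picture just described, the short Python sketch below computes the contact (Volta) potential for a hypothetical material pair and the surface charge density that would be needed to screen it across a small gap, treating the contact as a parallel-plate capacitor. The work-function values and the gap are assumptions made for the example; as noted above, this difference by itself does not account for many experimental observations.

eps0 = 8.854e-12     # vacuum permittivity, F/m
phi_A = 4.5          # work function of material A, eV (assumed)
phi_B = 4.1          # work function of material B, eV (assumed)

# A difference of work functions in eV corresponds numerically to a
# contact potential in volts (one electron charge per electron).
V_contact = phi_A - phi_B

gap = 1e-9                        # effective separation at contact, m (assumed)
sigma = eps0 * V_contact / gap    # compensating charge density, C/m^2

print(f"contact potential: {V_contact:.2f} V")
print(f"screening charge density: {sigma:.2e} C/m^2")
# With these assumed numbers sigma is of order millicoulombs per square
# metre; real contacts are rough and, as the text notes, the simple
# work-function picture alone does not explain many observations.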
Electromechanical contributions Whenever a solid is strained, electric fields can be generated. One process is due to linear strains and is called piezoelectricity; the second depends upon how rapidly strains are changing with distance (their derivative) and is called flexoelectricity. Both are established science, and can be both measured and calculated using density functional theory methods. Because flexoelectricity depends upon a gradient it can be much larger at the nanoscale during sliding or asperity contact between two objects. There has been considerable work on the connection between piezoelectricity and triboelectricity. While it can be important, piezoelectricity only occurs in the small number of materials which do not have inversion symmetry, so it is not a general explanation. It has recently been suggested that flexoelectricity may be very important in triboelectricity as it occurs in all insulators and semiconductors. Quite a few of the experimental results such as the effect of curvature can be explained by this approach, although full details have not as yet been determined. There is also early work from Shaw and Hanstock, and from the group of Daniel Lacks demonstrating that strain matters. Capacitor charge compensation model An explanation that has appeared in different forms is analogous to charge on a capacitor. If there is a potential difference between two materials due to the difference in their work functions (contact potential), this can be thought of as equivalent to the potential difference across a capacitor. The charge to compensate this is that which cancels the electric field. If an insulating dielectric is in between the two materials, then this will lead to a polarization density P and a bound surface charge of σ = P · n̂, where n̂ is the surface normal. The total charge in the capacitor is then the combination of the bound surface charge from the polarization and that from the potential. The triboelectric charge from this compensation model has been frequently considered as a key component. If the additional polarization due to strain (piezoelectricity) or bending of samples (flexoelectricity) is included this can explain observations such as the effect of curvature or inhomogeneous charging. Electron and/or ion transfer There is debate about whether electrons or ions are transferred in triboelectricity. For instance, Harper discusses both possibilities, whereas Vick was more in favor of electron transfer. The debate remains to this day with, for instance, George M. Whitesides advocating for ions, while Diaz and Fenzel-Alexander as well as Laurence D. Marks support both, and others just electrons. Thermodynamic irreversibility In the latter half of the 20th century the Soviet school led by chemist Boris Derjaguin argued that triboelectricity and the associated phenomenon of triboluminescence are fundamentally irreversible. A similar point of view to Derjaguin's has been more recently advocated by Seth Putterman and his collaborators at the University of California, Los Angeles (UCLA). A proposed theory of triboelectricity as a fundamentally irreversible process was published in 2020 by theoretical physicists Robert Alicki and Alejandro Jenkins. They argued that the electrons in the two materials that slide against each other have different velocities, giving a non-equilibrium state. Quantum effects cause this imbalance to pump electrons from one material to the other.
This is a fermionic analog of the mechanism of rotational superradiance originally described by Yakov Zeldovich for bosons. Electrons are pumped in both directions, but small differences in the electronic potential landscapes for the two surfaces can cause net charging. Alicki and Jenkins argue that such an irreversible pumping is needed to understand how the triboelectric effect can generate an electromotive force. Humidity Generally, increased humidity (water in the air) leads to a decrease in the magnitude of triboelectric charging. The size of this effect varies greatly depending on the contacting materials; the decrease in charging ranges from up to a factor of 10 or more to very little humidity dependence. Some experiments find increased charging at moderate humidity compared to extremely dry conditions before a subsequent decrease at higher humidity. The most widespread explanation is that higher humidity leads to more water adsorbed at the surface of contacting materials, leading to a higher surface conductivity. The higher conductivity allows for greater charge recombination as contacts separate, resulting in a smaller transfer of charge. Another proposed explanation for humidity effects considers the case when charge transfer is observed to increase with humidity in dry conditions. Increasing humidity may lead to the formation of water bridges between contacting materials that promote the transfer of ions. Examples Friction and adhesion from tribocharging Friction is a retarding force due to different energy dissipation process such as elastic and plastic deformation, phonon and electron excitation, and also adhesion. As an example, in a car or any other vehicle the wheels elastically deform as they roll. Part of the energy needed for this deformation is recovered (elastic deformation), some is not and goes into heating the tires. The energy which is not recovered contributes to the back force, a process called rolling friction. Similar to rolling friction there are energy terms in charge transfer, which contribute to friction. In static friction there is coupling between elastic strains, polarization and surface charge which contributes to the frictional force. In sliding friction, when asperities contact and there is charge transfer, some of the charge returns as the contacts are released, some does not and will contribute to the macroscopically observed friction. There is evidence for a retarding Coulomb force between asperities of different charges, and an increase in the adhesion from contact electrification when geckos walk on water. There is also evidence of connections between jerky (stick–slip) processes during sliding with charge transfer, electrical discharge and x-ray emission. How large the triboelectric contribution is to friction has been debated. It has been suggested by some that it may dominate for polymers, whereas Harper has argued that it is small. Liquids and gases The generation of static electricity from the relative motion of liquids or gases is well established, with one of the first analyses in 1886 by Lord Kelvin in his water dropper which used falling drops to create an electric generator. Liquid mercury is a special case as it typically acts as a simple metal, so has been used as a reference electrode. More common is water, and electricity due to water droplets hitting surfaces has been documented since the discovery by Philipp Lenard in 1892 of the spray electrification or waterfall effect. 
This is when falling water generates static electricity either by collisions between water drops or with the ground, leading to the finer mist in updrafts being mainly negatively charged, with positive charge near the lower surface. It can also occur for sliding drops. Another type of charge can be produced during rapid solidification of water containing ions, which is called the Workman–Reynolds effect. During the solidification the positive and negative ions may not be equally distributed between the liquid and solid. For instance, in thunderstorms this can contribute (together with the waterfall effect) to separation of positive hydrogen ions and negative hydroxide ions, leading to static charge and lightning. A third class is associated with contact potential differences between liquids or gases and other materials, similar to the work function differences for solids. It has been suggested that a triboelectric series for liquids is useful. One difference from solids is that liquids often have charged double layers, and most of the work to date supports the view that ion transfer (rather than electron transfer) dominates for liquids, as first suggested by Irving Langmuir in 1938. Finally, with liquids there can be flow-rate gradients at interfaces, and also viscosity gradients. These can produce electric fields and also polarization of the liquid, a field called electrohydrodynamics. These are analogous to the electromechanical terms for solids where electric fields can occur due to elastic strains as described earlier. Powders During commercial powder processing or in natural processes such as dust storms, triboelectric charge transfer can occur. There can be electric fields of up to 160 kV/m with moderate wind conditions, which leads to Coulomb forces of about the same magnitude as gravity. Air does not need to be present; significant charging can occur, for instance, on airless planetary bodies. With pharmaceutical powders and other commercial powders the tribocharging needs to be controlled for quality control of the materials and doses. Static discharge is also a particular hazard in grain elevators owing to the danger of a dust explosion, in places that store explosive powders, and in many other cases. Triboelectric powder separation has been discussed as a method of separating powders, for instance different biopolymers. The principle here is that different degrees of charging can be exploited for electrostatic separation, a general concept for powders. In industry There are many areas in industry where triboelectricity is known to be an issue. Some examples are: Non-conducting pipes carrying combustible liquids or fuels such as petrol can result in tribocharge accumulation on the walls of the pipes, which can lead to potentials as large as 90 kV. Pneumatic transport systems in industry can lead to fires due to the tribocharge generated during use. On ships, contact between cargo and pipelines during loading and unloading, as well as flow in steam pipes and water jets in cleaning machines, can lead to dangerous charging. Courses exist to teach mariners the dangers. US authorities require nearly all industrial facilities to measure particulate dust emissions. Various sensors based on triboelectricity are used, and in 1997 the United States Environmental Protection Agency issued guidelines for triboelectric fabric-filter bag leak-detection systems. Commercial sensors are available for triboelectric dust detection.
Wiping a rail near a chemical tank while it is being filled with a flammable chemical can lead to sparks which ignite the chemical. This was the cause of a 2017 explosion that killed one and injured many. Other examples While the simple case of stroking a cat is familiar to many, there are other areas in modern technological civilization where triboelectricity is exploited or is a concern: Air moving past an aircraft can lead to a buildup of charge called "precipitation static" or "P-static"; aircraft typically have one or more static wicks to remove it. Checking the status of these is a standard task for pilots. Similarly, helicopter blades move fast, and tribocharging can generate voltages up to 200 kV. During planetary formation, a key step is aggregation of dust or smaller particles. There is evidence that triboelectric charging during collisions of granular material plays a key role in overcoming barriers to aggregation. Single-use medical protective clothing must fulfill certain triboelectric charging regulations in China. Space vehicles can accumulate significant tribocharge which can interfere with communications such as the sending of self-destruct signals. Some launches have been delayed by weather conditions where tribocharging could occur. Triboelectric nanogenerators are energy-harvesting devices which convert mechanical energy into electricity. Triboelectric noise within medical cable assemblies and lead wires is generated when the conductors, insulation, and fillers rub against each other as the cables are flexed during movement. Keeping triboelectric noise at acceptable levels requires careful material selection, design, and processing. It is also an issue with underwater electroacoustic transducers if there are flexing motions of the cables; the mechanism is believed to involve relative motion between a dielectric and a conductor in the cable. Vehicle tires are normally dark because carbon black is added to help conduct away tribocharge that can shock passengers when they exit. There are also discharging straps that can be purchased. See also Electrostatic generator, machine to produce static electricity Electrostatic induction, separation of charges and polarization due to other charges Electrostriction, coupling between an electric field and volume of unit cells Electrohydrodynamics, coupling in liquids between electric fields and properties Flexoelectricity, polarization due to bending and other strain gradients Mechanoluminescence, light produced by mechanical action, often involving triboelectric effect Nanotribology, science of tribology (friction, lubrication and wear processes) at the nanoscale Piezoelectricity, polarization due to linear strains Polarization density, general description of the physics of polarization Static electricity, electric charge often but not always due to triboelectricity Tribology, science of friction, lubrication and wear Triboluminescence, light associated with sliding or contacts Work function, the energy to remove an electron from a surface References External links The return of Static Man, a podcast for kids about a masked menace who is electrified and goes around zapping people. Video of a charged rod demonstration at the University of Minnesota showing repulsion after rods are tribocharged, different cases giving repulsive and attractive forces. Video demonstrating tribocharging with a plastic comb rubbed by a cotton cloth attracting small pieces of paper. Video on Triboelectric Charging from the Khan Academy.
It discusses the contact potential difference model, using the term electron affinity which has the same meaning as work function. Electrical phenomena Electrostatics Electricity Tribology
Triboelectric effect
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
5,522
[ "Tribology", "Physical phenomena", "Materials science", "Surface science", "Electrical phenomena", "Mechanical engineering" ]
60,943
https://en.wikipedia.org/wiki/Pleochroism
Pleochroism is an optical phenomenon in which a substance has different colors when observed at different angles, especially with polarized light. Etymology The roots of the word are from Greek (). It was first made compound in the German term Pleochroismus by mineralogist Wilhelm Haidinger in 1854, in the journal Annalen der Physik und Chemie. Its first known English usage is by geologist James Dana in 1854. Background Anisotropic crystals will have optical properties that vary with the direction of light. The direction of the electric field determines the polarization of light, and crystals will respond in different ways if this angle is changed. These kinds of crystals have one or two optical axes. If absorption of light varies with the angle relative to the optical axis in a crystal then pleochroism results. Anisotropic crystals have double refraction of light where light of different polarizations is bent different amounts by the crystal, and therefore follows different paths through the crystal. The components of a divided light beam follow different paths within the mineral and travel at different speeds. When the mineral is observed at some angle, light following some combination of paths and polarizations will be present, each of which will have had light of different colors absorbed. At another angle, the light passing through the crystal will be composed of another combination of light paths and polarizations, each with their own color. The light passing through the mineral will therefore have different colors when it is viewed from different angles, making the stone seem to be of different colors. Tetragonal, trigonal, and hexagonal minerals can only show two colors and are called dichroic. Orthorhombic, monoclinic, and triclinic crystals can show three and are trichroic. For example, hypersthene, which has two optical axes, can have a red, yellow, or blue appearance when oriented in three different ways in three-dimensional space. Isometric minerals cannot exhibit pleochroism. Tourmaline is notable for exhibiting strong pleochroism. Gems are sometimes cut and set either to display pleochroism or to hide it, depending on the colors and their attractiveness. The pleochroic colors are at their maximum when light is polarized parallel with a principal optical vector. The axes are designated X, Y, and Z for direction, and alpha, beta, and gamma in magnitude of the refractive index. These axes can be determined from the appearance of a crystal in a conoscopic interference pattern. Where there are two optical axes, the acute bisectrix of the axes gives Z for positive minerals and X for negative minerals and the obtuse bisectrix gives the alternative axis (X or Z). Perpendicular to these is the Y axis. The color is measured with the polarization parallel to each direction. An absorption formula records the amount of absorption parallel to each axis in the form of X < Y < Z with the left most having the least absorption and the rightmost the most. In mineralogy and gemology Pleochroism is an extremely useful tool in mineralogy and gemology for mineral and gem identification, since the number of colors visible from different angles can identify the possible crystalline structure of a gemstone or mineral and therefore help to classify it. Minerals that are otherwise very similar often have very different pleochroic color schemes. In such cases, a thin section of the mineral is used and examined under polarized transmitted light with a petrographic microscope. 
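The direction-dependent absorption described above can be made concrete with a small numerical sketch. The Python example below applies the Beer-Lambert law separately for light polarized along the X, Y and Z vibration directions, using invented absorption coefficients in three broad wavelength bands (red, green, blue) to show how the same crystal transmits a different colour balance for each polarization. The coefficients, thickness and three-band colour model are assumptions for illustration, not measured values for any real mineral.

import math

# Invented absorption coefficients (per mm of path) for polarization
# along the X, Y and Z directions, in three broad bands (R, G, B),
# chosen to mimic a strongly trichroic crystal.
absorption = {
    "X": {"R": 0.2, "G": 1.5, "B": 2.0},   # transmits mostly red
    "Y": {"R": 1.8, "G": 0.3, "B": 1.6},   # transmits mostly green
    "Z": {"R": 2.0, "G": 1.7, "B": 0.25},  # transmits mostly blue
}
thickness_mm = 1.0   # path length through the section (assumed)

for axis, coeffs in absorption.items():
    # Beer-Lambert law: transmitted fraction = exp(-alpha * thickness).
    transmitted = {band: math.exp(-alpha * thickness_mm)
                   for band, alpha in coeffs.items()}
    dominant = max(transmitted, key=transmitted.get)
    summary = ", ".join(f"{band}={frac:.2f}" for band, frac in transmitted.items())
    print(f"polarization along {axis}: {summary} -> appears mostly {dominant}")

In a mineral where only two of the three directions differ appreciably, the same calculation would yield only two distinct colours, matching the dichroic behaviour of tetragonal, trigonal and hexagonal crystals noted above.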
Another device using this property to identify minerals is the dichroscope. List of pleochroic minerals Purple and violet Amethyst (very low): different shades of purple Andalusite (strong): green-brown / dark red / purple Beryl (medium): purple / colorless Corundum (high): purple / orange Hypersthene (strong): purple / orange Spodumene (Kunzite) (strong): purple / clear / pink Tourmaline (strong): pale purple / purple Putnisite: pale purple / bluish grey Blue Aquamarine (medium): clear / light blue, or light blue / dark blue Alexandrite (strong): dark red-purple / orange / green Apatite (strong): blue-yellow / blue-colorless Benitoite (strong): colorless / dark blue Cordierite (aka Iolite) (orthorhombic; very strong): pale yellow / violet / pale blue Corundum (strong): dark violet-blue / light blue-green Tanzanite See Zoisite Topaz (very low): colorless / pale blue / pink Tourmaline (strong): dark blue / light blue Zoisite (strong): blue / red-purple / yellow-green Zircon (strong): blue / clear / gray Green Alexandrite (strong): dark red / orange / green Andalusite (strong): brown-green / dark red Corundum (strong): green / yellow-green Emerald (strong): green / blue-green Peridot (low): yellow-green / green / colorless Titanite (medium): brown-green / blue-green Tourmaline (strong): blue-green / brown-green / yellow-green Zircon (low): greenish brown / green Kornerupine (strong): green / pale yellowish-brown / reddish-brown Hiddenite (strong): blue-green / emerald-green / yellow-green Yellow Citrine (very weak): different shades of pale yellow Chrysoberyl (very weak): red-yellow / yellow-green / green Corundum (weak): yellow / pale yellow Danburite (weak): very pale yellow / pale yellow Kasolite (weak): pale yellow / grey Orthoclase (weak): different shades of pale yellow Phenacite (medium): colorless / yellow-orange Spodumene (medium): different shades of pale yellow Topaz (medium): tan / yellow / yellow-orange Tourmaline (medium): pale yellow / dark yellow Zircon (weak): tan / yellow Hornblende (strong): light green / dark green / yellow / brown Segnitite (weak): pale to medium yellow Brown and orange Corundum (strong): yellow-brown / orange Topaz (medium): brown-yellow / dull brown-yellow Tourmaline (very low): dark brown / light brown Zircon (very weak): brown-red / brown-yellow Biotite (medium): brown Red and pink Alexandrite (strong): dark red / orange / green Andalusite (strong): dark red / brown-red Corundum (strong): violet-red / orange-red Morganite (medium): light red / red-violet Tourmaline (strong): dark red / light red Zircon (medium): purple / red-brown See also Birefringence Medieval sunstone References “Pleochroism.” Merriam-Webster.com Dictionary, Merriam-Webster, https://www.merriam-webster.com/dictionary/pleochroism. Accessed 1 Jan. 2024. “Pleochroism, N.” Oxford English Dictionary, Oxford UP, July 2023, https://doi.org/10.1093/OED/5173776922. Accessed 1 Jan 2024. Mineralogy Optical mineralogy Microscopy Gemology Lists of colors
Pleochroism
[ "Chemistry" ]
1,594
[ "Microscopy" ]
60,951
https://en.wikipedia.org/wiki/Cardamom
Cardamom (), sometimes cardamon or cardamum, is a spice made from the seeds of several plants in the genera Elettaria and Amomum in the family Zingiberaceae. Both genera are native to the Indian subcontinent and Indonesia. They are recognized by their small seed pods: triangular in cross-section and spindle-shaped, with a thin, papery outer shell and small, black seeds; Elettaria pods are light green and smaller, while Amomum pods are larger and dark brown. Species used for cardamom are native throughout tropical and subtropical Asia. The first references to cardamom are found in Sumer, and in Ayurveda. In the 21st century, it is cultivated mainly in India, Indonesia, and Guatemala. Etymology The word cardamom is derived from the Latin , as a Latinisation of the Greek (), a compound of (, "cress") and (), of unknown origin. The earliest attested form of the word signifying "cress" is the Mycenaean Greek , written in Linear B syllabic script, in the list of flavorings on the spice tablets found among palace archives in the House of the Sphinxes in Mycenae. The modern genus name Elettaria is derived from the root attested in Dravidian languages. Types and distribution The two main types of cardamom are: True or green cardamom (or white cardamom when bleached) comes from the species Elettaria cardamomum and is distributed from India to Malaysia. What is often referred to as white cardamom is actually Siam cardamom, Amomum krervanh. Black cardamom, also known as brown, greater, large, longer, or Nepal cardamom, comes from the species Amomum subulatum and is native to the eastern Himalayas and mostly cultivated in Eastern Nepal, Sikkim, and parts of Darjeeling district in West Bengal of India, and southern Bhutan. Uses Both forms of cardamom are used as flavorings and cooking spices in both food and drink. E. cardamomum (green cardamom) is used as a spice or a masticatory, or is smoked. Food and beverage Cardamom has a strong taste, with an aromatic, resinous fragrance. Black cardamom has a more smoky – though not bitter – aroma, with a coolness some consider similar to mint. Green cardamom is one of the most expensive spices by weight, but little is needed to impart flavor. It is best stored in the pod, as exposed or ground seeds quickly lose their flavor. Grinding the pods and seeds together lowers both the quality and the price. For recipes requiring whole cardamom pods, a generally accepted equivalent is 10 pods equals 1½ teaspoons (7.4 ml) of ground cardamom. Cardamom is a common ingredient in Indian cooking. It is also often used in baking in the Nordic countries, in particular in Sweden, Norway, and Finland, where it is used in traditional treats such as the Scandinavian Yule bread , the Swedish sweet bun, and Finnish sweet bread . In the Middle East, green cardamom powder is used as a spice for sweet dishes, and as a traditional flavouring in coffee and tea. Cardamom is widely used in savoury dishes. In some Middle Eastern countries, coffee and cardamom are often ground in a wooden mortar, a , and cooked together in a skillet, a , over wood or gas, to produce mixtures with up to 40% cardamom. In Asia, both types of cardamom are widely used in both sweet and savoury dishes, particularly in the south. Both are frequent components in such spice mixes as Indian and Nepali masalas and Thai curry pastes. Green cardamom is often used in traditional Indian sweets and in masala chai (spiced tea). Both are also often used as a garnish in basmati rice and other dishes.
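A minimal sketch of the pod-to-ground substitution mentioned above, in Python, assuming the commonly cited figure of 10 pods to roughly 1½ teaspoons (about 7.4 ml) of ground cardamom and simple linear scaling; actual yield varies with pod size and freshness, so the helper name and constants here are illustrative only.

ML_PER_POD = 7.4 / 10      # millilitres of ground cardamom per pod (rule of thumb above)
ML_PER_TSP = 4.93          # millilitres in one US teaspoon

def pods_to_ground(pods: int) -> tuple[float, float]:
    """Return (millilitres, teaspoons) of ground cardamom equivalent to a pod count."""
    ml = pods * ML_PER_POD
    return ml, ml / ML_PER_TSP

for pods in (4, 10, 20):
    ml, tsp = pods_to_ground(pods)
    print(f"{pods:>2} pods ~ {ml:.1f} ml (~{tsp:.1f} tsp) ground cardamom")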
Individual seeds are sometimes chewed and used in much the same way as chewing gum. It is used by confectionery giant Wrigley; its Eclipse Breeze Exotic Mint packaging indicates the product contains "cardamom to neutralize the toughest breath odors". It is also included in aromatic bitters, gin, and herbal teas. In Korea, Tavoy cardamom (Wurfbainia villosa var. xanthioides) and red cardamom (Lanxangia tsao-ko) are used in tea called . Composition The essential oil content of cardamom seeds depends on storage conditions and may be as high as 8%. The oil is typically 45% α-terpineol, 27% myrcene, 8% limonene, 6% menthone, 3% β-phellandrene, 2% 1,8-cineol, 2% sabinene and 2% heptane. Other sources report the following contents: 1,8-cineol (20 to 50%), α-terpenylacetate (30%), sabinene, limonene (2 to 14%), and borneol. In the seeds of round cardamom from Java (Wurfbainia compacta), the content of essential oil is lower (2 to 4%), and the oil contains mainly 1,8-cineol (up to 70%) plus β-pinene (16%); furthermore, α-pinene, α-terpineol and humulene are found. Production In 2022, world production of cardamom (included with nutmeg and mace for reporting to the United Nations) was 138,888 tonnes, led by India, Indonesia and Guatemala, which together accounted for 85% of the total. Production practices According to Nair (2011), in the years when India achieves a good crop, it is still less productive than Guatemala. Other notable producers include Costa Rica, El Salvador, Honduras, Papua New Guinea, Sri Lanka, Tanzania, Thailand, and Vietnam. Much of India's cardamom is cultivated on private property or in areas the government leases out to farmers. Traditionally, small plots of land within the forests (called eld-kandies) where the wild or acclimatised plant existed are cleared during February and March. Brushwood is cut and burned, and the roots of powerful weeds are torn up to free the soil. Soon after clearing, cardamom plants spring up. After two years the cardamom plants may have eight-to-ten leaves and reach in height. In the third year, they may be in height. In the following May or June the ground is again weeded, and by September to November a light crop is obtained. In the fourth year, weeding again occurs, and if the cardamoms grow less than apart a few are transplanted to new positions. The plants bear for three or four years; and historically the life of each plantation was about eight or nine years. In Malabar the seasons run a little later than in Mysore, and – according to some reports – a full crop may be obtained in the third year. Cardamoms grown above elevation are considered to be of higher quality than those grown below that altitude. Plants may be raised from seed or by division of the rhizome. In about a year, the seedlings reach about in length, and are ready for transplantation. The flowering season is April to May; the fruits swell in August and September and usually attain the desired degree of ripening by the first half of October. The crop is accordingly gathered in October and November, and in exceptionally moist weather, the harvest extends into December. At the time of harvesting, the scapes or shoots bearing the clusters of fruits are broken off close to the stems and placed in baskets lined with fresh leaves. The fruits are spread out on carefully prepared floors, sometimes covered with mats, and are then exposed to the sun. Four or five days of careful drying and bleaching in the sun is usually enough.
In rainy weather, drying with artificial heat is necessary, though the fruits suffer greatly in colour; they are consequently sometimes bleached with steam and sulphurous vapour or with ritha nuts. The industry is highly labour-intensive, each hectare requiring considerable maintenance throughout the year. Production constraints include recurring climate vagaries, the absence of regular re-plantation, and ecological conditions associated with deforestation. Cultivation In 1873 and 1874, Ceylon (now Sri Lanka) exported about each year. In 1877, Ceylon exported , in 1879, , and in the 1881–82 season, In 1903, of cardamom growing areas were owned by European planters. The produce of the Travancore plantations was given as , or just a little under that of Ceylon. The yield of the Mysore plantations was approximately , and the cultivation was mainly in Kadur district. The volume for 1903–04 stated the value of the cardamoms exported to have been Rs. 3,37,000 as compared with Rs. 4,16,000 the previous year. India, which ranks second in world production, recorded a decline of 6.7 percent in cardamom production for 2012–13, and projected a production decline of 30–40% in 2013–14, compared with the previous year due to unfavorable weather. In India, the state of Kerala is by far the most productive producer, with the districts of Idukki, Palakkad and Wynad being the principal producing areas. Given that a number of bureaucrats have personal interests in the industry, in India, several organisations have been set up to protect cardamom producers such as the Cardamom Growers Association (est. 1992) and the Kerala Cardamom Growers Association (est. 1974). Research in India's cardamom plantations began in the 1970s while Kizhekethil Chandy held the office of Chairman of the Cardamom Board. The Kerala Land Reforms Act imposed restrictions on the size of certain agricultural holdings per household to the benefit of cardamom producers. In 1979–1980, Guatemala surpassed India in worldwide production. Guatemala cultivates Elettaria cardamomum, which is native to the Malabar Coast of India. Alta Verapaz Department produces 70 percent of Guatemala's cardamom. Cardamom was introduced to Guatemala before World War I by the German coffee planter Oscar Majus Kloeffer. After World War II, production was increased to 13,000 to 14,000 tons annually. The average annual income for a plantation-owning household in 1998 was US$3,408. Although the typical harvest requires over 210 days of labor per year, most cardamom farmers are better off than many other agricultural workers, and there are a significant number of those from the upper strata of society involved in the cultivation process. Increased demand since the 1980s, principally from China, for both Wurfbainia villosa and Lanxangia tsao-ko, has provided a key source of income for poor farmers living at higher altitudes in localized areas of China, Laos, and Vietnam, people typically isolated from many other markets. Laos exports about 400 tonnes annually through Thailand according to the FAO. Trade Cardamom production's demand and supply patterns of trade are influenced by price movements, nationally and internationally, in 5 to 6-year cycles. Importing leaders mentioned are Saudi Arabia and Kuwait, while other significant importers include Germany, Iran, Japan, Jordan, Pakistan, Qatar, United Arab Emirates, the UK, and the former USSR. 
According to the United Nations Conference on Trade and Development, 80 percent of cardamom's total consumption occurs in the Middle East. In the 19th century, Bombay and Madras were among the principal distributing ports of cardamom. India's exports to foreign countries increased during the early 20th century, particularly to the United Kingdom, followed by Arabia, Aden, Germany, Turkey, Japan, Persia and Egypt. However, some 95% of cardamom produced in India is for domestic purposes, and India is itself by far the most important consuming country for cardamoms in the world. India also imports cardamom from Sri Lanka. In 1903–1904, these imports came to , valued at Rs. 1,98,710. In contrast, Guatemala's local consumption is negligible, which supports the exportation of most of the cardamom that is produced. In the mid-1800s, Ceylon's cardamom was chiefly imported by Canada. After saffron and vanilla, cardamom is currently the third most expensive spice, and is used as a spice and flavouring for food and liqueurs. History Cardamom has been used in flavorings and food over centuries. During the Middle Ages, cardamom dominated the trade industry. The Arab states played a significant role in the trade of Indian spices, including cardamom. It is now ranked the third most expensive spice following saffron and vanilla. Cardamom production began in ancient times, and has been referred to in ancient Sanskrit texts as . The Babylonians and Assyrians used the spice early on, and trade in cardamom opened up along land routes and by the interlinked Persian Gulf route controlled from Dilmun as early as the third millennium BCE Early Bronze Age, into western Asia and the Mediterranean world. The ancient Greeks thought highly of cardamom, and the Greek physicians Dioscorides and Hippocrates wrote about its therapeutic properties, identifying it as a digestive aid. Due to demand in ancient Greece and Rome, the cardamom trade developed into a handsome luxury business; cardamom was one of the spices eligible for import tax in Alexandria in 126 CE. In medieval times, Venice became the principal importer of cardamom into the west, along with pepper, cloves and cinnamon, which was traded with merchants from the Levant with salt and meat products. In China, Amomum was an important part of the economy during the Song Dynasty (960–1279). In 1150, the Arab geographer Muhammad al-Idrisi noted that cardamom was being imported to Aden, in Yemen, from India and China. The Portuguese became involved in the trade in the 16th century, and the industry gained wide-scale European interest in the 19th century. Gallery See also Aframomum corrorima, known as Ethiopian cardamom References Bibliography CardamomHQ: In-depth information on Cardamom Mabberley, D.J. The Plant-book: A Portable Dictionary of the Higher Plants. Cambridge University Press, 1996, Gernot Katzer's Spice Pages: Cardamom Plant Cultures: botany and history of Cardamom Pham Hoang Ho 1993, Cay Co Vietnam [Plants of Vietnam: in Vietnamese], vols. I, II & III, Montreal. Buckingham, J.S. & Petheram, R.J. 2004, Cardamom cultivation and forest biodiversity in northwest Vietnam, Agricultural Research and Extension Network, Overseas Development Institute, London UK. Aubertine, C. 2004, Cardamom (Amomum spp.) in Lao PDR: the hazardous future of an agroforest system product, in 'Forest products, livelihoods and conservation: case studies of non-timber forest products systems vol. 1-Asia, Center for International Forestry Research. Bogor, Indonesia. 
Alpinioideae Spices Medicinal plants of Asia Arab cuisine Bangladeshi cuisine Bhutanese cuisine Indonesian cuisine Iranian cuisine Iraqi cuisine Pakistani spices Nepalese cuisine Plant common names Sri Lankan spices Indian spices Aphrodisiac foods Austronesian agriculture
Cardamom
[ "Biology" ]
3,252
[ "Plant common names", "Common names of organisms", "Plants" ]
60,972
https://en.wikipedia.org/wiki/Simple%20API%20for%20XML
SAX (Simple API for XML) is an event-driven online algorithm for lexing and parsing XML documents, with an API developed by the XML-DEV mailing list. SAX provides a mechanism for reading data from an XML document that is an alternative to that provided by the Document Object Model (DOM). Where the DOM operates on the document as a whole—building the full abstract syntax tree of an XML document for convenience of the user—SAX parsers operate on each piece of the XML document sequentially, issuing parsing events while making a single pass through the input stream. Definition Unlike DOM, there is no formal specification for SAX. The Java implementation of SAX is considered to be normative. SAX processes documents state-independently, in contrast to DOM which is used for state-dependent processing of XML documents. Benefits A SAX parser only needs to report each parsing event as it happens, and normally discards almost all of that information once reported (it does, however, keep some things, for example a list of all elements that have not been closed yet, in order to catch later errors such as end-tags in the wrong order). Thus, the minimum memory required for a SAX parser is proportional to the maximum depth of the XML file (i.e., of the XML tree) and the maximum data involved in a single XML event (such as the name and attributes of a single start-tag, or the content of a processing instruction, etc.). This much memory is usually considered negligible. A DOM parser, in contrast, has to build a tree representation of the entire document in memory to begin with, thus using memory that increases with the entire document length. This takes considerable time and space for large documents (memory allocation and data-structure construction take time). The compensating advantage, of course, is that once loaded any part of the document can be accessed in any order. Because of the event-driven nature of SAX, processing documents is generally far faster than DOM-style parsers, so long as the processing can be done in a start-to-end pass. Many tasks, such as indexing, conversion to other formats, very simple formatting and the like can be done that way. Other tasks, such as sorting, rearranging sections, getting from a link to its target, looking up information on one element to help process a later one and the like require accessing the document structure in complex orders and will be much faster with DOM than with multiple SAX passes. Some implementations do not neatly fit either category: a DOM approach can keep its persistent data on disk, cleverly organized for speed (editors such as SoftQuad Author/Editor and large-document browser/indexers such as DynaText do this); while a SAX approach can cleverly cache information for later use (any validating SAX parser keeps more information than described above). Such implementations blur the DOM/SAX tradeoffs, but are often very effective in practice. Due to the nature of DOM, streamed reading from disk requires techniques such as lazy evaluation, caches, virtual memory, persistent data structures, or other techniques (one such technique is disclosed in US patent 5557722). Processing XML documents larger than main memory is sometimes thought impossible because some DOM parsers do not allow it. However, it is no less possible than sorting a dataset larger than main memory using disk space as memory to sidestep this limitation. Drawbacks The event-driven model of SAX is useful for XML parsing, but it does have certain drawbacks. 
Virtually any kind of XML validation requires access to the document in full. The most trivial example is that an attribute declared in the DTD to be of type IDREF requires that there be only one element in the document that uses the same value for an ID attribute. To validate this in a SAX parser, one must keep track of all ID attributes (any one of them might end up being referenced by an IDREF attribute at the very end), as well as every IDREF attribute, until it is resolved. Similarly, to validate that each element has an acceptable sequence of child elements, information about what child elements have been seen for each parent must be kept until the parent closes. Additionally, some kinds of XML processing simply require having access to the entire document. XSLT and XPath, for example, need to be able to access any node at any time in the parsed XML tree. Editors and browsers likewise need to be able to display, modify, and perhaps re-validate at any time. While a SAX parser may well be used to construct such a tree initially, SAX provides no help for such processing as a whole. XML processing with SAX A parser that implements SAX (i.e., a SAX Parser) functions as a stream parser, with an event-driven API. The user defines a number of callback methods that will be called when events occur during parsing. The SAX events include (among others): XML Text nodes, XML Element Starts and Ends, XML Processing Instructions, and XML Comments. Some events correspond to XML objects that are easily returned all at once, such as comments. However, XML elements can contain many other XML objects, and so SAX represents them as does XML itself: by one event at the beginning, and another at the end. Properly speaking, the SAX interface does not deal in elements, but in events that largely correspond to tags. SAX parsing is unidirectional; previously parsed data cannot be re-read without starting the parsing operation again. There are many SAX-like implementations in existence. In practice, details vary, but the overall model is the same. For example, XML attributes are typically provided as name and value arguments passed to element events, but can also be provided as separate events, or via a hash table or similar collection of all the attributes. For another, some implementations provide "Init" and "Fin" callbacks for the very start and end of parsing; others do not. The exact names for given event types also vary slightly between implementations. Example Given the following XML document:

<?xml version="1.0" encoding="UTF-8"?>
<DocumentElement param="value">
  <FirstElement>
    ¶ Some Text
  </FirstElement>
  <?some_pi some_attr="some_value"?>
  <SecondElement param2="something">
    Pre-Text <Inline>Inlined text</Inline> Post-text.
  </SecondElement>
</DocumentElement>

This XML document, when passed through a SAX parser, will generate a sequence of events like the following:

XML Element start, named DocumentElement, with an attribute param equal to "value"
XML Element start, named FirstElement
XML Text node, with data equal to "&#xb6; Some Text" (note: certain white spaces can be changed)
XML Element end, named FirstElement
Processing Instruction event, with the target some_pi and data some_attr="some_value" (the content after the target is just text; however, it is very common to imitate the syntax of XML attributes, as in this example)
XML Element start, named SecondElement, with an attribute param2 equal to "something"
XML Text node, with data equal to "Pre-Text"
XML Element start, named Inline
XML Text node, with data equal to "Inlined text"
XML Element end, named Inline
XML Text node, with data equal to "Post-text."
XML Element end, named SecondElement
XML Element end, named DocumentElement

Note that the first line of the sample above is the XML Declaration and not a processing instruction; as such it will not be reported as a processing instruction event (although some SAX implementations provide a separate event just for the XML declaration). The result above may vary: the SAX specification deliberately states that a given section of text may be reported as multiple sequential text events. Many parsers, for example, return separate text events for numeric character references. Thus in the example above, a SAX parser may generate a different series of events, part of which might include:

XML Element start, named FirstElement
XML Text node, with data equal to "&#xb6;" (the Unicode character U+00b6)
XML Text node, with data equal to " Some Text"
XML Element end, named FirstElement
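The event-callback model above can be sketched with Python's standard-library xml.sax module. This is only an illustrative sketch, not the normative Java implementation: the handler class, variable names, and the slightly simplified document below are invented for the example.

import xml.sax

class TraceHandler(xml.sax.ContentHandler):
    # Each callback corresponds to one SAX event; nothing is retained
    # between calls, which is why memory use stays small.
    def startElement(self, name, attrs):
        print("Element start:", name, dict(attrs))

    def endElement(self, name):
        print("Element end:", name)

    def characters(self, content):
        if content.strip():
            print("Text node:", repr(content))

    def processingInstruction(self, target, data):
        print("Processing instruction:", target, data)

document = b"""<?xml version="1.0" encoding="UTF-8"?>
<DocumentElement param="value">
  <FirstElement>Some Text</FirstElement>
  <?some_pi some_attr="some_value"?>
  <SecondElement param2="something">Pre-Text <Inline>Inlined text</Inline> Post-text.</SecondElement>
</DocumentElement>"""

# parseString drives the handler callbacks in document order,
# making a single pass over the input.
xml.sax.parseString(document, TraceHandler())

Running the sketch prints one line per event, in the same order as the listing above.

See also Expat (XML) Flying Saucer (library) Java API for XML Processing LibXML List of XML markup languages List of XML schemas MSXML RapidJSON - a SAX-like API for JSON StAX Streaming XML VTD-XML Xerces XQuery API for Java References Further reading External links SAX home page Application programming interfaces XML-based standards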
Simple API for XML
[ "Technology" ]
1,846
[ "Computer standards", "XML-based standards" ]
60,980
https://en.wikipedia.org/wiki/Digital%20Visual%20Interface
Digital Visual Interface (DVI) is a video display interface developed by the Digital Display Working Group (DDWG). The digital interface is used to connect a video source, such as a video display controller, to a display device, such as a computer monitor. It was developed with the intention of creating an industry standard for the transfer of uncompressed digital video content. DVI devices manufactured as DVI-I have support for analog connections, and are compatible with the analog VGA interface by including VGA pins, while DVI-D devices are digital-only. This compatibility, along with other advantages, led to its widespread acceptance over competing digital display standards Plug and Display (P&D) and Digital Flat Panel (DFP). Although DVI is predominantly associated with computers, it is sometimes used in other consumer electronics such as television sets and DVD players. History An earlier attempt to promulgate an updated standard to the analog VGA connector was made by the Video Electronics Standards Association (VESA) in 1994 and 1995, with the Enhanced Video Connector (EVC), which was intended to consolidate cables between the computer and monitor. EVC used a 35-pin Molex MicroCross connector and carried analog video (input and output), analog stereo audio (input and output), and data (via USB and FireWire). At the same time, with the increasing availability of digital flat-panel displays, the priority shifted to digital video transmission, which would remove the extra analog/digital conversion steps required for VGA and EVC; the EVC connector was reused by VESA, which released the Plug & Display (P&D) standard in 1997. P&D offered single-link TMDS digital video with, as an option, analog video output and data (USB and FireWire), using a 35-pin MicroCross connector similar to EVC; the analog audio and video input lines from EVC were repurposed to carry digital video for P&D. Because P&D was a physically large, expensive connector, a consortium of companies developed the DFP standard (1999), which was focused solely on digital video transmission using a 20-pin micro ribbon connector and omitted the analog video and data capabilities of P&D. DVI instead chose to strip just the data functions from P&D, using a 29-pin MicroCross connector to carry digital and analog video. Critically, DVI allows dual-link TMDS signals, meaning it supports higher resolutions than the single-link P&D and DFP connectors, which led to its successful adoption as an industry standard. Compatibility of DVI with P&D and DFP is accomplished typically through passive adapters that provide appropriate physical interfaces, as all three standards use the same DDC/EDID handshaking protocols and TMDS digital video signals. DVI made its way into products starting in 1999. One of the first DVI monitors was Apple's original Cinema Display, which launched in 1999. Technical overview DVI's digital video transmission format is based on panelLink, a serial format developed by Silicon Image that utilizes a high-speed serial link called transition minimized differential signaling (TMDS). TMDS Digital video pixel data is transported using multiple TMDS twisted pairs. At the electrical level, these pairs are highly resistant to electrical noise and other forms of analog distortion. Single link A single link DVI connection has four TMDS pairs. Three data pairs carry their designated 8-bit RGB component (red, green, or blue) of the video signal for a total of 24 bits per pixel. The fourth pair carries the TMDS clock. 
The binary data is encoded using 8b/10b encoding. DVI does not use packetization, but rather transmits the pixel data as if it were a rasterized analog video signal. As such, the complete frame is drawn during each vertical refresh period. The full active area of each frame is always transmitted without compression. Video modes typically use horizontal and vertical refresh timings that are compatible with cathode-ray tube (CRT) displays, though this is not a requirement. In single link mode, the maximum TMDS clock frequency is 165 MHz, which supports a maximum resolution of 2.75 megapixels (including blanking interval) at 60 Hz refresh. For practical purposes, this allows a maximum 16:10 screen resolution of 1920 × 1200 at 60 Hz. Dual link To support higher-resolution display devices, the DVI specification contains a provision for dual link. Dual link DVI doubles the number of TMDS data pairs, effectively doubling the video bandwidth, which allows higher resolutions up to 2560 × 1600 at 60 Hz or higher refresh rates for lower resolutions. Compatibility For backward compatibility with displays using analog VGA signals, some of the contacts in the DVI connector carry the analog VGA signals. To ensure a basic level of interoperability, DVI compliant devices are required to support one baseline display mode, "low pixel format" (640 × 480 at 60 Hz). DDC Like modern analog VGA connectors, the DVI connector includes pins for the display data channel (DDC), which allows the graphics adapter to read the monitor's extended display identification data (EDID). When a source and display using the DDC2 revision are connected, the source first queries the display's capabilities by reading the monitor EDID block over an I²C link. The EDID block contains the display's identification, color characteristics (such as gamma value), and table of supported video modes. The table can designate a preferred mode or native resolution. Each mode is a set of timing values that define the duration and frequency of the horizontal/vertical sync, the positioning of the active display area, the horizontal resolution, vertical resolution, and refresh rate. Cable length The maximum length recommended for DVI cables is not included in the specification, since it is dependent on the TMDS clock frequency. In general, cable lengths up to will work for display resolutions up to 1920 × 1200. Longer cables up to in length can be used with display resolutions 1280 × 1024 or lower. For greater distances, the use of a DVI booster—a signal repeater which may use an external power supply—is recommended to help mitigate signal degradation. Connector The DVI connector on a device is given one of three names, depending on which signals it implements: DVI-I (integrated, combines digital and analog in the same connector; digital may be single or dual link) DVI-D (digital only, single link or dual link) DVI-A (analog only) Most DVI connector types—the exception is DVI-A—have pins that pass digital video signals. These come in two varieties: single link and dual link. Single link DVI employs a single transmitter with a TMDS clock up to 165 MHz that supports resolutions up to 1920 × 1200 at 60 Hz. Dual link DVI adds six pins, at the center of the connector, for a second transmitter increasing the bandwidth and supporting resolutions up to 2560 × 1600 at 60 Hz. A connector with these additional pins is sometimes referred to as DVI-DL (dual link). 
Dual link should not be confused with dual display (also known as dual head), which is a configuration consisting of a single computer connected to two monitors, sometimes using a DMS-59 connector for two single link DVI connections. In addition to digital, some DVI connectors also have pins that pass an analog signal, which can be used to connect an analog monitor. The analog pins are the four that surround the flat blade on a DVI-I or DVI-A connector. A VGA monitor, for example, can be connected to a video source with DVI-I through the use of a passive adapter. Since the analog pins are directly compatible with VGA signaling, passive adapters are simple and cheap to produce, providing a cost-effective solution to support VGA on DVI. The long flat pin on a DVI-I connector is wider than the same pin on a DVI-D connector, so even if the four analog pins were manually removed, it still wouldn't be possible to connect a male DVI-I to a female DVI-D. It is possible, however, to join a male DVI-D connector with a female DVI-I connector. DVI is the only widespread video standard that includes analog and digital transmission in the same connector. Competing standards are exclusively digital: these include a system using low-voltage differential signaling (LVDS), known by its proprietary names FPD-Link (flat-panel display) and FLATLINK; and its successors, the LVDS Display Interface (LDI) and OpenLDI. Some DVD players, HDTV sets, and video projectors have DVI connectors that transmit an encrypted signal for copy protection using the High-bandwidth Digital Content Protection (HDCP) protocol. Computers can be connected to HDTV sets over DVI, but the graphics card must support HDCP to play content protected by digital rights management (DRM). Specifications Digital
Minimum TMDS clock frequency: 25.175 MHz, used for the mandatory "low pixel format" display mode: VGA (640x480) @ 60 Hz
Maximum single link TMDS clock frequency: 165 MHz
Single link maximum gross bit rate (including 8b/10b overhead): 4.95 Gbit/s
Single link net bit rate (subtracting 8b/10b overhead): 3.96 Gbit/s
Dual link bit rates are twice that of single link at an identical clock frequency: gross bit rate (including 8b/10b overhead) at a 165 MHz clock: 9.90 Gbit/s; net bit rate (subtracting 8b/10b overhead): 7.92 Gbit/s. Clocks above 165 MHz are allowed in dual link mode.
Bits per pixel: 24 bits per pixel support is mandatory in all resolutions supported. Less than 24 bits per pixel is optional. Dual link optionally supports up to 48 bits per pixel. If a depth greater than 24 bits per pixel is desired, the least significant bits are sent on the second link.
Pixels per TMDS clock cycle: 1 (single link at 24 bits or less per pixel, and dual link for 25 to 48 bits per pixel) or 2 (dual link at 24 bits or less per pixel)
Example display modes (single link):
SXGA () @ 85 Hz with GTF blanking (159 MHz TMDS clock)
FHD () @ 60 Hz with CVT-RB blanking (139 MHz TMDS clock)
UXGA () @ 60 Hz with GTF blanking (161 MHz TMDS clock)
WUXGA () @ 60 Hz with CVT-RB blanking (154 MHz TMDS clock)
WQXGA () @ 30 Hz with CVT-RB blanking (132 MHz TMDS clock)
Example display modes (dual link):
QXGA () @ 72 Hz with CVT blanking (2 pixels per 163 MHz TMDS clock)
FHD () @ 144 Hz
WUXGA () @ 120 Hz with CVT-RB blanking (2 pixels per 154 MHz TMDS clock)
WQXGA () @ 60 Hz with CVT-RB blanking (2 pixels per 135 MHz TMDS clock)
WQUXGA () @ 30 Hz with CVT-RB blanking (2 pixels per 146 MHz TMDS clock)
Generalized Timing Formula (GTF) is a VESA standard which can easily be calculated with the Linux gtf utility. Coordinated Video Timings-Reduced Blanking (CVT-RB) is a VESA standard which offers reduced horizontal and vertical blanking for non-CRT based displays. Digital data encoding One of the purposes of DVI stream encoding is to provide a DC-balanced output that reduces decoding errors. This goal is achieved by using 10-bit symbols for 8-bit or less characters and using the extra bits for the DC balancing. Like other ways of transmitting video, there are two different regions: the active region, where pixel data is sent, and the control region, where synchronization signals are sent. The active region is encoded using transition-minimized differential signaling, whereas the control region is encoded with a fixed 8b/10b encoding. As the two schemes yield different 10-bit symbols, a receiver can fully differentiate between active and control regions. When DVI was designed, most computer monitors were still of the cathode-ray tube type that require analog video synchronization signals. The timing of the digital synchronization signals matches the equivalent analog ones, so the process of transforming DVI to and from an analog signal does not require extra (high-speed) memory, expensive at the time. HDCP is an extra layer that transforms the 10-bit symbols before transmitting. Only after correct authorization can the receiver undo the HDCP encryption. Control regions are not encrypted in order to let the receiver know when the active region starts. Clock and data relationship DVI provides one TMDS clock pair and 3 TMDS data pairs in single link mode or 6 TMDS data pairs in dual link mode. TMDS data pairs operate at a gross bit rate that is 10 times the frequency of the TMDS clock. In each TMDS clock period there is a 10-bit symbol per TMDS data pair representing 8 bits of pixel color. In single link mode each set of three 10-bit symbols represents one 24-bit pixel, while in dual link mode each set of six 10-bit symbols either represents two 24-bit pixels or one pixel of up to 48-bit color depth. The specification document allows the data and the clock to not be aligned. However, as the ratio between the TMDS clock and gross bit rate per TMDS pair is fixed at 1:10, the unknown alignment is kept over time. The receiver must recover the bits on the stream using any of the techniques of clock/data recovery to find the correct symbol boundary. The DVI specification allows the TMDS clock to vary between 25 MHz and 165 MHz. This 1:6.6 ratio can make clock recovery difficult, as phase-locked loops, if used, need to work over a large frequency range.
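As a quick check of the 1:10 relationship between TMDS clock and per-pair bit rate described above, the following sketch recomputes the single-link figures quoted in the Specifications section; the function and variable names are invented for this illustration.

def single_link_rates(tmds_clock_hz):
    # Three TMDS data pairs, each carrying one 10-bit symbol per clock;
    # only 8 of those 10 bits are pixel payload (8b/10b overhead).
    data_pairs = 3
    bits_per_symbol = 10
    payload_bits_per_symbol = 8
    gross = tmds_clock_hz * bits_per_symbol * data_pairs
    net = tmds_clock_hz * payload_bits_per_symbol * data_pairs
    return gross, net

gross, net = single_link_rates(165e6)        # maximum single-link TMDS clock
print(gross / 1e9, net / 1e9)                # 4.95 and 3.96 Gbit/s
print(2 * gross / 1e9, 2 * net / 1e9)        # dual link doubles both: 9.9 and 7.92 Gbit/s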
One benefit of DVI over other interfaces is that it is relatively straightforward to transform the signal from the digital domain into the analog domain using a video DAC, as both clock and synchronization signals are transmitted. Fixed frequency interfaces, like DisplayPort, need to reconstruct the clock from the transmitted data. Display power management The DVI specification includes signaling for reducing power consumption. Similar to the analog VESA display power management signaling (DPMS) standard, a connected device can turn a monitor off when the connected device is powered down, or programmatically if the display controller of the device supports it. Devices with this capability can also attain Energy Star certification. Analog The analog section of the DVI specification document is brief and points to other specifications like VESA VSIS for electrical characteristics and GTFS for timing information. The motivation for including analog is to keep compatibility with the previous VGA cables and connectors. VGA pins for HSync, Vsync and three video channels are available in both DVI-I or DVI-A (but not DVI-D) connectors and are electrically compatible, while pins for DDC (clock and data) and 5 V power and ground are kept in all DVI connectors. Thus, a passive adapter can interface between DVI-I or DVI-A (but not DVI-D) and VGA connectors. DVI and HDMI compatibility HDMI is a newer digital audio/video interface developed and promoted by the consumer electronics industry. DVI and HDMI have the same electrical specifications for their TMDS and VESA/DDC twisted pairs. However HDMI and DVI differ in several key ways. HDMI lacks VGA compatibility and does not include analog signals. DVI is limited to the RGB color model while HDMI also supports YCbCr 4:4:4 and YCbCr 4:2:2 color spaces, which are generally not used for computer graphics. In addition to digital video, HDMI supports the transport of packets used for digital audio. HDMI sources differentiate between legacy DVI displays and HDMI-capable displays by reading the display's EDID block. To promote interoperability between DVI-D and HDMI devices, HDMI source components and displays support DVI-D signaling. For example, an HDMI display can be driven by a DVI-D source because HDMI and DVI-D both define an overlapping minimum set of supported resolutions and frame buffer formats. Some DVI-D sources use non-standard extensions to output HDMI signals including audio (e.g. ATI 3000-series and NVIDIA GTX 200-series). Some multimedia displays use a DVI to HDMI adapter to input the HDMI signal with audio. Exact capabilities vary by video card specifications. In the reverse scenario, a DVI display that lacks optional support for HDCP might be unable to display protected content even though it is otherwise compatible with the HDMI source. Features specific to HDMI such as remote control, audio transport, xvYCC and deep color are not usable in devices that support only DVI signals. HDCP compatibility between source and destination devices is subject to manufacturer specifications for each device. Proposed successors IEEE 1394 was proposed by High-Definition Audio-Video Network Alliance (HANA Alliance) for all cabling needs, including video, over coaxial or 1394 cable as a combined data stream. However, this interface does not have enough throughput to handle uncompressed HD video, so it is unsuitable for applications such as video games and interactive program guides. 
High-Definition Multimedia Interface (HDMI), a forward-compatible standard that also includes digital audio transmission Unified Display Interface (UDI) was proposed by Intel to replace both DVI and HDMI, but was deprecated in favor of DisplayPort. DisplayPort (a license-free standard proposed by VESA to succeed DVI that has optional DRM mechanisms) / Mini DisplayPort Thunderbolt: an interface that uses the USB-C connector (from Thunderbolt 3 and onward; the Mini DisplayPort connector was used for Thunderbolt 1 and 2) but combines PCI Express (PCIe) and DisplayPort (DP) into one serial signal, permitting the connection of PCIe devices in addition to video displays. It provides DC power as well. In December 2010, Intel, AMD, and several computer and display manufacturers announced they would stop supporting DVI-I, VGA and LVDS-technologies from 2013/2015, and instead speed up adoption of DisplayPort and HDMI. They also stated: "Legacy interfaces such as VGA, DVI and LVDS have not kept pace, and newer standards such as DisplayPort and HDMI clearly provide the best connectivity options moving forward. In our opinion, DisplayPort 1.2 is the future interface for PC monitors, along with HDMI 1.4a for TV connectivity". See also DMS-59 – a single DVI sized connector providing two single link DVI or VGA channels List of video connectors DiiVA Lightning (connector) Notes References Further reading Computer connectors Computer display standards Computer-related introductions in 1999 Digital display connectors High-definition television American inventions Television technology Television transmission standards Video signal Audiovisual connectors
Digital Visual Interface
[ "Technology" ]
4,083
[ "Information and communications technology", "Television technology" ]
60,994
https://en.wikipedia.org/wiki/Radio-frequency%20induction
For the common use of RF induction to heat a metal object by electromagnetic induction, see induction heating. Radio-frequency induction (RF induction) is the use of a radio frequency magnetic field to transfer energy by means of electromagnetic induction in the near field. A radio-frequency alternating current is passed through a coil of wire that acts as the transmitter, and a second coil or conducting object, magnetically coupled to the first coil, acts as the receiver. See also Radio-frequency identification (RFID) Radio antenna Electromagnetic radiation Electromagnetic induction Induction plasma technology List of electronics topics List of radiation topics Transformer External articles Budyansky, A. and A. Zykov, "Static current-voltage characteristics for radio-frequency induction discharge". Plasma Science, 1995. IEEE Conference Record - Abstracts., 1995 Page(s):146 IBM Research Division, T. J. Watson Research Center, Yorktown Heights, New York 10598. Maurizio Vignati and Livio Giuliani, "Radiofrequency Exposure Near High-voltage Lines". Tenforde, T. S., and W. T. Kaune, "Interaction of extremely low frequency electric and magnetic fields with humans". Health Phys 53(6):585-606 (1987). Radio electronics
Radio-frequency induction
[ "Engineering" ]
269
[ "Radio electronics" ]
60,996
https://en.wikipedia.org/wiki/Firefly
The Lampyridae are a family of elateroid beetles with more than 2,000 described species, many of which are light-emitting. They are soft-bodied beetles commonly called fireflies, lightning bugs, or glowworms for their conspicuous production of light, mainly during twilight, to attract mates. The type species is Lampyris noctiluca: the common glow-worm of Europe. Light production in the Lampyridae is thought to have originated as a warning signal that the larvae were distasteful. This ability to create light was then co-opted as a mating signal and, in a further development, adult female fireflies of the genus Photuris mimic the flash pattern of the Photinus beetle to trap their males as prey. Fireflies are found in temperate and tropical climates. Many live in marshes or in wet, wooded areas where their larvae have abundant sources of food. While all known fireflies glow as larvae, only some species produce light in their adult stage, and the location of the light organ varies among species and between sexes of the same species. Fireflies have attracted human attention since classical antiquity; their presence has been taken to signify a wide variety of conditions in different cultures and is especially appreciated aesthetically in Japan, where parks are set aside for this specific purpose. Biology Fireflies are beetles and in many aspects resemble other beetles at all stages of their life cycle, undergoing complete metamorphosis. A few days after mating, a female lays her fertilized eggs on or just below the surface of the ground. The eggs hatch three to four weeks later. In certain firefly species with aquatic larvae, such as Aquatica leii, the female oviposits on emergent portions of aquatic plants, and the larvae descend into the water after hatching. The larvae feed until the end of the summer. Most fireflies hibernate as larvae. Some do this by burrowing underground, while others find places on or under the bark of trees. They emerge in the spring. At least one species, Ellychnia corrusca, overwinters as an adult. The larvae of most species are specialized predators and feed on other larvae, terrestrial snails, and slugs. Some are so specialized that they have grooved mandibles that deliver digestive fluids directly to their prey. The larval stage lasts from several weeks up to, in certain species, two or more years. The larvae pupate for one to two and a half weeks and emerge as adults. Adult diet varies among firefly species: some are predatory, while others feed on plant pollen or nectar. Some adults, like the European glow-worm, have no mouth, emerging only to mate and lay eggs before dying. In most species, adults live for a few weeks in summer. Fireflies vary widely in their general appearance, with differences in color, shape, size, and features such as antennae. Adults differ in size depending on the species, with the largest up to long. Many species have non-flying larviform females. These can often be distinguished from the larvae only because the adult females have compound eyes, unlike the simple eyes of larvae, though the females have much smaller (and often highly regressed) eyes than those of their males. The most commonly known fireflies are nocturnal, although numerous species are diurnal and usually not luminescent; however, some species that remain in shadowy areas may produce light. Most fireflies are distasteful to vertebrate predators, as they contain the steroid pyrones lucibufagins, similar to the cardiotonic bufadienolides found in some poisonous toads. 
All fireflies glow as larvae, where bioluminescence is an aposematic warning signal to predators. Light and chemical production Light production in fireflies is due to the chemical process of bioluminescence. This occurs in specialized light-emitting organs, usually on a female firefly's lower abdomen. The enzyme luciferase acts on luciferin, in the presence of magnesium ions, ATP, and oxygen to produce light. Oxygen is supplied via an abdominal trachea or breathing tube. Gene coding for these substances has been inserted into many different organisms. Firefly luciferase is used in forensics, and the enzyme has medical uses – in particular, for detecting the presence of ATP or magnesium. Fireflies produce a "cold light", with no infrared or ultraviolet frequencies. The light may be yellow, green, or pale red, with wavelengths from 510 to 670 nanometers. Some species such as the dimly glowing "blue ghost" of the Eastern US may seem to emit blueish-white light from a distance and in low light conditions, but their glow is bright green when observed up close. Their perceived blue tint may be due to the Purkinje effect. During a study on the genome of Aquatica leii, scientists discovered two key genes are responsible for the formation, activation, and positioning of this firefly's light organ: Alabd-B and AlUnc-4. Adults emit light primarily for mate selection. Early larval bioluminescence was adopted in the phylogeny of adult fireflies, and was repeatedly gained and lost before becoming fixed and retained as a mechanism of sexual communication in many species. Adult lampyrids have a variety of ways to communicate with mates in courtships: steady glows, flashing, and the use of chemical signals unrelated to photic systems. Chemical signals, or pheromones, are the ancestral form of sexual communication; this pre-dates the evolution of flash signaling in the lineage, and is retained today in diurnally-active species. Some species, especially lightning bugs of the genera Photinus, Photuris, and Pyractomena, are distinguished by the unique courtship flash patterns emitted by flying males in search of females. In general, females of the genus Photinus do not fly, but do give a flash response to males of their own species. Signals, whether photic or chemical, allow fireflies to identify mates of their own species. Flash signaling characteristics include differences in duration, timing, color, number and rate of repetitions, height of flight, and direction of flight (e.g. climbing or diving) and vary interspecifically and geographically. When flash signals are not sufficiently distinguished between species in a population, sexual selection encourages divergence of signaling patterns. Synchronization of flashing occurs in several species; it is explained as phase synchronization and spontaneous order. Tropical fireflies routinely synchronise their flashes among large groups, particularly in Southeast Asia. At night along river banks in the Malaysian jungles, fireflies synchronize their light emissions precisely. Current hypotheses about the causes of this behavior involve diet, social interaction, and altitude. In the Philippines, thousands of fireflies can be seen all year-round in the town of Donsol. In the United States, one of the most famous sightings of fireflies blinking in unison occurs annually near Elkmont, Tennessee, in the Great Smoky Mountains during the first weeks of June. Congaree National Park in South Carolina is another host to this phenomenon. 
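The phase synchronization mentioned above is often illustrated mathematically with coupled-oscillator models such as the Kuramoto model. The toy sketch below is only an illustration of that mathematical idea; the parameters are invented and not fitted to any firefly data. It shows a population of oscillators with slightly different natural frequencies drifting into a common rhythm.

import math
import random

# Kuramoto model: d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
N, K, dt, steps = 50, 2.0, 0.01, 4000
random.seed(1)
theta = [random.uniform(0.0, 2.0 * math.pi) for _ in range(N)]   # initial phases
omega = [random.gauss(2.0 * math.pi, 0.3) for _ in range(N)]     # natural frequencies

def coherence(phases):
    # Order parameter r: 0 = incoherent flashing, 1 = fully synchronized.
    re = sum(math.cos(p) for p in phases) / len(phases)
    im = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(re, im)

for _ in range(steps):
    coupling = [sum(math.sin(tj - ti) for tj in theta) for ti in theta]
    theta = [(ti + (wi + K * c / N) * dt) % (2.0 * math.pi)
             for ti, wi, c in zip(theta, omega, coupling)]

print(round(coherence(theta), 2))   # approaches 1 as the population locks together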
Female "femme fatale" Photuris fireflies mimic the photic signaling patterns of the smaller Photinus, attracting males to what appears to be a suitable mate, then eating them. This provides the females with a supply of the toxic defensive lucibufagin chemicals. Many fireflies do not produce light. Usually these species are diurnal, or day-flying, such as those in the genus Ellychnia. A few diurnal fireflies that inhabit primarily shadowy places, such as beneath tall plants or trees, are luminescent. One such genus is Lucidota. Non-bioluminescent fireflies use pheromones to signal mates. Some basal groups lack bioluminescence and use chemical signaling instead. Phosphaenus hemipterus has photic organs, yet is a diurnal firefly and displays large antennae and small eyes. These traits suggest that pheromones are used for sexual selection, while photic organs are used for warning signals. In controlled experiments, males coming from downwind arrived at females first, indicating that males travel upwind along a pheromone plume. Males can find females without the use of visual cues, so sexual communication in P. hemipterus appears to be mediated entirely by pheromones. Evolution Fossil history The oldest known fossils of the Lampyridae family are Protoluciola and Flammarionella from the Late Cretaceous (Cenomanian ~ 99 million years ago) Burmese amber of Myanmar, which belong to the subfamily Luciolinae. The light producing organs are clearly present. The ancestral glow colour for the last common ancestor of all living fireflies has been inferred to be green, based on genomic analysis. Taxonomy The fireflies (including the lightning bugs) are a family, Lampyridae, of some 2,000 species within the Coleoptera. The family forms a single clade, a natural phylogenetic group. The term glowworm is used for both adults and larvae of firefly species such as Lampyris noctiluca, the common European glowworm, in which only the nonflying adult females glow brightly; the flying males glow weakly and intermittently. In the Americas, "glow worms" are the closely related Coleopteran family Phengodidae, while in New Zealand and Australia, a "glow worm" is a luminescent larva of the fungus gnat Arachnocampa, within the true flies, Diptera. Phylogeny The phylogeny of the Lampyridae family, based on both phylogenetic and morphological evidence by Martin et al. 2019, is: Interaction with humans Conservation Firefly populations are thought to be declining worldwide. While monitoring data for many regions are scarce, a growing number of anecdotal reports, coupled with several published studies from Europe and Asia, suggest that fireflies are endangered. Recent IUCN Red List assessments for North American fireflies have identified species with heightened extinction risk in the US, with 18 taxa categorized as threatened with extinction. Fireflies face threats including habitat loss and degradation, light pollution, pesticide use, poor water quality, invasive species, over-collection, and climate change. Firefly tourism, a quickly growing sector of the travel and tourism industry, has also been identified as a potential threat to fireflies and their habitats when not managed appropriately. Like many other organisms, fireflies are directly affected by land-use change (e.g., loss of habitat area and connectivity), which is identified as the main driver of biodiversity changes in terrestrial ecosystems. Pesticides, including insecticides and herbicides, have also been indicated as a likely cause of firefly decline. 
These chemicals can not only harm fireflies directly but also potentially reduce prey populations and degrade habitat. Light pollution is an especially concerning threat to fireflies. Since the majority of firefly species use bioluminescent courtship signals, they are also sensitive to environmental levels of light and consequently to light pollution. A growing number of studies investigating the effects of artificial light at night on fireflies have shown that light pollution can disrupt fireflies' courtship signals and even interfere with larval dispersal. Researchers agree that protecting and enhancing firefly habitat is necessary to conserve their populations. Recommendations include reducing or limiting artificial light at night, restoring habitats where threatened species occur, and eliminating unnecessary pesticide use, among many others. Sundarbans Firefly Sanctuary in Bangladesh was established in 2019. In culture Fireflies have featured in human culture around the world for centuries. In Japan, the emergence of fireflies signifies the anticipated changing of the seasons; firefly viewing is a special aesthetic pleasure of midsummer, celebrated in parks that exist for that one purpose. The Japanese sword Hotarumaru, made in the 14th century, is so named for a legend that its flaws were repaired by fireflies. In Italy, the firefly appears in Canto XXVI of Dante's Inferno, written in the 14th century. References Sources Further reading Faust, Lynn Frierson (2017). Fireflies, Glow-worms, and Lightning Bugs. Lewis, Sara (2016). Silent Sparks: The Wondrous World of Fireflies. Princeton University Press. ISBN 978-1400880317. External links An introduction to European fireflies and glow-worms Firefly.org – Firefly & Lightning Bug Facts, Pictures, Information About Firefly Insect Disappearance Firefly simulating robot, China NCBI taxonomy database Museum of Science, Boston – Understanding Fireflies Video of a firefly larva in Austria FireflyExperience.org – Luminous Photography and Videos of Fireflies & Lightning Bugs. Bioluminescent insects Night Articles containing video clips Extant Cenomanian first appearances
Firefly
[ "Astronomy" ]
2,666
[ "Time in astronomy", "Night" ]
61,003
https://en.wikipedia.org/wiki/Prototype-based%20programming
Prototype-based programming is a style of object-oriented programming in which behavior reuse (known as inheritance) is performed via a process of reusing existing objects that serve as prototypes. This model can also be known as prototypal, prototype-oriented, classless, or instance-based programming. Prototype-based programming uses generalized objects, which can then be cloned and extended. Using fruit as an example, a "fruit" object would represent the properties and functionality of fruit in general. A "banana" object would be cloned from the "fruit" object, and properties specific to bananas would be appended. Each individual "banana" object would be cloned from the generic "banana" object. Compare this to the class-based paradigm, where a "fruit" class would be extended by a "banana" class. The first prototype-based programming languages were Director a.k.a. Ani (on top of MacLisp) (1976-1979), and contemporaneously and not independently, ThingLab (on top of Smalltalk) (1977-1981), respective PhD projects by Kenneth Michael Kahn at MIT and Alan Hamilton Borning at Stanford (but working with Alan Kay at Xerox PARC). Borning introduced the word "prototype" in his TOPLAS 1981 paper. The first prototype-based programming language with more than one implementer or user was probably Yale T Scheme (1981-1984), though, like Director and ThingLab initially, it just speaks of objects without classes. The language that made the name and notion of prototypes popular was Self (1985-1995), developed by David Ungar and Randall Smith to research topics in object-oriented language design. Since the late 1990s, the classless paradigm has grown increasingly popular. Some current prototype-oriented languages are JavaScript (and other ECMAScript implementations such as JScript and Flash's ActionScript 1.0), Lua, Cecil, NewtonScript, Io, Ioke, MOO, REBOL and AHK. Since the 2010s, a new generation of languages with pure functional prototypes has appeared that reduces OOP to its very core: Jsonnet is a dynamic lazy pure functional language with a builtin prototype object system using mixin inheritance; Nix is a dynamic lazy pure functional language that builds an equivalent object system (Nix "extensions") in just two short function definitions (plus many other convenience functions). Both languages are used to define large distributed software configurations (Jsonnet being directly inspired by GCL, the Google Configuration Language, with which Google defines all its deployments, and having similar semantics though with dynamic binding of variables). Since then, other languages like Gerbil Scheme have implemented pure functional lazy prototype systems based on similar principles. Design and implementation Etymologically, a "prototype" means "first cast" ("cast" in the sense of being manufactured). A prototype is a concrete thing, from which other objects can be created by copying and modifying. For example, the International Prototype of the Kilogram is an actual object that really exists, from which new kilogram-objects can be created by copying. In comparison, a "class" is an abstract thing, to which objects can belong. For example, all kilogram-objects are in the class of KilogramObject, which might be a subclass of MetricObject, and so on.
Prototypal inheritance in JavaScript is described by Douglas Crockford as Advocates of prototype-based programming argue that it encourages the programmer to focus on the behavior of some set of examples and only later worry about classifying these objects into archetypal objects that are later used in a fashion similar to classes. Many prototype-based systems encourage the alteration of prototypes during run-time, whereas only very few class-based object-oriented systems (such as the dynamic object-oriented system, Common Lisp, Dylan, Objective-C, Perl, Python, Ruby, or Smalltalk) allow classes to be altered during the execution of a program. Almost all prototype-based systems are based on interpreted and dynamically typed languages. Systems based on statically typed languages are technically feasible, however. The Omega language discussed in Prototype-Based Programming is an example of such a system, though according to Omega's website even Omega is not exclusively static, but rather its "compiler may choose to use static binding where this is possible and may improve the efficiency of a program." Object construction In prototype-based languages there are no explicit classes. Objects inherit directly from other objects through a prototype property. The prototype property is called prototype in Self and JavaScript, or proto in Io. There are two methods of constructing new objects: ex nihilo ("from nothing") object creation or through cloning an existing object. The former is supported through some form of object literal, declarations where objects can be defined at runtime through special syntax such as {...} and passed directly to a variable. While most systems support a variety of cloning, ex nihilo object creation is not as prominent. In class-based languages, a new instance is constructed through a class's constructor function, a special function that reserves a block of memory for the object's members (properties and methods) and returns a reference to that block. An optional set of constructor arguments can be passed to the function and are usually held in properties. The resulting instance will inherit all the methods and properties that were defined in the class, which acts as a kind of template from which similarly typed objects can be constructed. Systems that support ex nihilo object creation allow new objects to be created from scratch without cloning from an existing prototype. Such systems provide a special syntax for specifying the properties and behaviors of new objects without referencing existing objects. In many prototype languages there exists a root object, often called Object, which is set as the default prototype for all other objects created in run-time and which carries commonly needed methods such as a toString() function to return a description of the object as a string. One useful aspect of ex nihilo object creation is to ensure that a new object's slot (properties and methods) names do not have namespace conflicts with the top-level Object object. (In the JavaScript language, one can do this by using a null prototype, i.e. Object.create(null).) Cloning refers to a process whereby a new object is constructed by copying the behavior of an existing object (its prototype). The new object then carries all the qualities of the original. From this point on, the new object can be modified. 
In some systems the resulting child object maintains an explicit link (via delegation or resemblance) to its prototype, and changes in the prototype cause corresponding changes to be apparent in its clone. Other systems, such as the Forth-like programming language Kevo, do not propagate change from the prototype in this fashion and instead follow a more concatenative model where changes in cloned objects do not automatically propagate across descendants.

// Example of true prototypal inheritance style in JavaScript.

// Object creation using the literal object notation {}.
const foo = { name: "foo", one: 1, two: 2 };

// Another object.
const bar = { two: "two", three: 3 };

// Object.setPrototypeOf() is a method introduced in ECMAScript 2015.
// For the sake of simplicity, let us pretend that the following
// line works regardless of the engine used:
Object.setPrototypeOf(bar, foo); // foo is now the prototype of bar.

// If we try to access foo's properties from bar from now on,
// we'll succeed.
bar.one; // Resolves to 1.

// The child object's properties are also accessible.
bar.three; // Resolves to 3.

// Own properties shadow prototype properties.
bar.two; // Resolves to "two".
bar.name; // Unaffected, resolves to "foo".
foo.name; // Resolves to "foo".

For another example:

const foo = { one: 1, two: 2 };

// bar.[[prototype]] = foo
const bar = Object.create(foo);

bar.three = 3;

bar.one; // 1
bar.two; // 2
bar.three; // 3

Delegation In prototype-based languages that use delegation, the language runtime is capable of dispatching the correct method or finding the right piece of data simply by following a series of delegation pointers (from object to its prototype) until a match is found. All that is required to establish this behavior-sharing between objects is the delegation pointer. Unlike the relationship between class and instance in class-based object-oriented languages, the relationship between the prototype and its offshoots does not require that the child object have a memory or structural similarity to the prototype beyond this link. As such, the child object can continue to be modified and amended over time without rearranging the structure of its associated prototype as in class-based systems. It is also important to note that not only data, but also methods can be added or changed. For this reason, some prototype-based languages refer to both data and methods as "slots" or "members". Concatenation In concatenative prototyping - the approach implemented by the Kevo programming language - there are no visible pointers or links to the original prototype from which an object is cloned. The prototype (parent) object is copied rather than linked to and there is no delegation. As a result, changes to the prototype will not be reflected in cloned objects. Incidentally, the Cosmos programming language achieves the same through the use of persistent data structures. The main conceptual difference under this arrangement is that changes made to a prototype object are not automatically propagated to clones. This may be seen as an advantage or disadvantage. (However, Kevo does provide additional primitives for publishing changes across sets of objects based on their similarity — so-called family resemblances or clone family mechanism — rather than through taxonomic origin, as is typical in the delegation model.) It is also sometimes claimed that delegation-based prototyping has an additional disadvantage in that changes to a child object may affect the later operation of the parent.
However, this problem is not inherent to the delegation-based model and does not exist in delegation-based languages such as JavaScript, which ensure that changes to a child object are always recorded in the child object itself and never in parents (i.e. the child's value shadows the parent's value rather than changing the parent's value). In simplistic implementations, concatenative prototyping will have faster member lookup than delegation-based prototyping (because there is no need to follow the chain of parent objects), but will conversely use more memory (because all slots are copied, rather than there being a single slot pointing to the parent object). More sophisticated implementations can avoid this problem, however, although trade-offs between speed and memory are required. For example, systems with concatenative prototyping can use a copy-on-write implementation to allow for behind-the-scenes data sharing — and such an approach is indeed followed by Kevo. Conversely, systems with delegation-based prototyping can use caching to speed up data lookup. Criticism Advocates of class-based object models who criticize prototype-based systems often have concerns similar to the concerns that proponents of static type systems for programming languages have of dynamic type systems (see datatype). Usually, such concerns involve correctness, safety, predictability, efficiency and programmer unfamiliarity. On the first three points, classes are often seen as analogous to types (in most statically typed object-oriented languages they serve that role) and are proposed to provide contractual guarantees to their instances, and to users of their instances, that they will behave in some given fashion. Regarding efficiency, declaring classes simplifies many compiler optimizations that allow developing efficient method and instance-variable lookup. For the Self language, much development time was spent on developing, compiling, and interpreting techniques to improve the performance of prototype-based systems versus class-based systems. A common criticism made against prototype-based languages is that the community of software developers is unfamiliar with them, despite the popularity and market permeation of JavaScript. However, knowledge about prototype-based systems is increasing with the proliferation of JavaScript frameworks and the complex use of JavaScript as the World Wide Web (Web) matures. ECMAScript 6 introduced classes as syntactic sugar over JavaScript's existing prototype-based inheritance, providing an alternative way to create objects and manage inheritance. Languages supporting prototype-based programming Actor-Based Concurrent Language (ABCL): ABCL/1, ABCL/R, ABCL/R2, ABCL/c+ Agora AutoHotkey Cecil and Diesel of Craig Chambers ColdC COLA Common Lisp Cyan ECMAScript ActionScript 1.0, used by Adobe Flash and Adobe Flex ECMAScript for XML (E4X) JavaScript JScript TypeScript Io Ioke Jsonnet Logtalk LPC Lua M2000 Maple MOO Neko NewtonScript Nim Nix Object Lisp Obliq Omega OpenLaszlo Perl, with the Class::Prototyped module Python with prototype.py. R, with the proto package REBOL Red (programming language) Ruby (programming language) Self Seph Slate (programming language) SmartFrog Snap! Etoys TADS Tcl with snit extension Umajin See also Class-based programming (contrast) Differential inheritance Programming paradigm References Further reading Class Warfare: Classes vs. Prototypes, by Brian Foote. 
Languages supporting prototype-based programming

Actor-Based Concurrent Language (ABCL): ABCL/1, ABCL/R, ABCL/R2, ABCL/c+
Agora
AutoHotkey
Cecil and Diesel of Craig Chambers
ColdC
COLA
Common Lisp
Cyan
ECMAScript
  ActionScript 1.0, used by Adobe Flash and Adobe Flex
  ECMAScript for XML (E4X)
  JavaScript
  JScript
  TypeScript
Io
Ioke
Jsonnet
Logtalk
LPC
Lua
M2000
Maple
MOO
Neko
NewtonScript
Nim
Nix
Object Lisp
Obliq
Omega
OpenLaszlo
Perl, with the Class::Prototyped module
Python, with prototype.py
R, with the proto package
REBOL
Red (programming language)
Ruby (programming language)
Self
Seph
Slate (programming language)
SmartFrog
Snap!
Etoys
TADS
Tcl, with the snit extension
Umajin

See also

Class-based programming (contrast)
Differential inheritance
Programming paradigm

References

Further reading

Class Warfare: Classes vs. Prototypes, by Brian Foote.
Using Prototypical Objects to Implement Shared Behavior in Object Oriented Systems, by Henry Lieberman, 1986.

Object-oriented programming Programming paradigms Type theory
Prototype-based programming
[ "Mathematics" ]
2,907
[ "Type theory", "Mathematical logic", "Mathematical structures", "Mathematical objects" ]
61,026
https://en.wikipedia.org/wiki/West
West is one of the four cardinal directions or points of the compass. It is the opposite direction from east and is the direction in which the Sun sets on the Earth.

Etymology

The word "west" is a Germanic word passed into some Romance languages (ouest in French, oest in Catalan, ovest in Italian, oeste in Spanish and Portuguese). As in other languages, the word formation stems from the fact that west is the direction of the setting sun in the evening: 'west' derives from the Indo-European root *wes reduced from *wes-pero 'evening, night', cognate with Ancient Greek ἕσπερος hesperos 'evening; evening star; western' and Latin vesper 'evening; west'. Examples of the same formation in other languages include Latin occidens 'west' from occidō 'to go down, to set' and Hebrew מַעֲרָב (maarav) 'west' from עֶרֶב (erev) 'evening'. West is sometimes abbreviated as W.

Navigation

To go west using a compass for navigation (in a place where magnetic north is the same direction as true north) one needs to set a bearing or azimuth of 270°. West is the direction opposite that of the Earth's rotation on its axis, and is therefore the general direction towards which the Sun appears to constantly progress and eventually set. This is not true on the planet Venus, which rotates in the opposite direction from the Earth (retrograde rotation). To an observer on the surface of Venus, the Sun would rise in the west and set in the east, although Venus's opaque clouds prevent observing the Sun from the planet's surface. In a map with north at the top, west is on the left. Moving continuously west is following a circle of latitude.

Weather

Due to the direction of the Earth's rotation, the prevailing wind in many places in the middle latitudes (i.e. between 35 and 65 degrees latitude) is from the west, known as the westerlies.

Cultural

The phrase "the West" is often spoken in reference to the Western world, which includes the European Union (also the EFTA countries), the United Kingdom, the Americas, Israel, Australia, New Zealand and (in part) South Africa. The concept of the Western part of the earth has its roots in the Western Roman Empire and Western Christianity. During the Cold War "the West" was often used to refer to the NATO camp as opposed to the Warsaw Pact and non-aligned nations. The expression survives, with an increasingly ambiguous meaning.

Symbolic meanings

In Chinese Buddhism, the West represents movement toward the Buddha or enlightenment (see Journey to the West). The ancient Aztecs believed that the West was the realm of the great goddess of water, mist, and maize. In Ancient Egypt, the West was considered to be the portal to the netherworld, and is the cardinal direction regarded in connection with death, though not always with a negative connotation. Ancient Egyptians also believed that the Goddess Amunet was a personification of the West. The Celts believed that beyond the western sea off the edges of all maps lay the Otherworld, or Afterlife. In Judaism, west is seen to be toward the Shekinah (presence) of God, as in Jewish history the Tabernacle and subsequent Jerusalem Temple faced east, with God's Presence in the Holy of Holies up the steps to the west. According to the Bible, the Israelites crossed the Jordan River westward into the Promised Land. In Islam, people in India pray facing towards the west out of respect for Mecca, which lies in the westward direction from India.
In American literature (e.g., in The Great Gatsby) moving West has sometimes symbolized gaining freedom, perhaps as an association with the settling of the Wild West (see also the American frontier and Manifest Destiny).

References

External links

Orientation (geometry)
West
[ "Physics", "Mathematics" ]
820
[ "Topology", "Space", "Geometry", "Spacetime", "Orientation (geometry)" ]
61,028
https://en.wikipedia.org/wiki/East
East is one of the four cardinal directions or points of the compass. It is the opposite direction from west and is the direction from which the Sun rises on the Earth.

Etymology

As in other languages, the word is formed from the fact that east is the direction where the Sun rises: east comes from Middle English est, from Old English ēast, which itself comes from the Proto-Germanic *aus-to- or *austra- "east, toward the sunrise", from Proto-Indo-European *aus- "to shine," or "dawn", cognate with Old High German *ōstar "to the east", Latin aurora 'dawn', and Greek ēōs 'dawn, east'. Examples of the same formation in other languages include Latin oriens 'east, sunrise' from orior 'to rise, to originate', Greek ανατολή anatolé 'east' from ἀνατέλλω 'to rise' and Hebrew מִזְרָח mizraḥ 'east' from זָרַח zaraḥ 'to rise, to shine'. Ēostre, a Germanic goddess of dawn, might have been a personification of both dawn and the cardinal points. East is sometimes abbreviated as E.

Navigation

By convention, the right-hand side of a map is east. This convention has developed from the use of a compass, which places north at the top. However, on maps of planets such as Venus and Uranus, which rotate retrograde, the left-hand side is east. To go east using a compass for navigation, one sets a bearing or azimuth of 90°.

Cultural

East is the direction toward which the Earth rotates about its axis, and therefore the general direction from which the Sun appears to rise. The practice of praying towards the East is older than Christianity, but has been adopted by this religion as the Orient was thought of as containing mankind's original home. Hence, Christian churches have traditionally been oriented towards the east. After some early exceptions, this tradition of placing the altar at the liturgical east became part of the concept of liturgical east and west in church orientation. The Orient is the East, traditionally comprising anything that belongs to the Eastern world, in relation to Europe. In English, it is largely a metonym for, and refers to the same area as, the continent of Asia, divided into the Far East, Middle East, and Near East. Despite this Eurocentric origin, these regions are still located to the east of the Geographical centre of Earth. Within an individual city in the Northern Hemisphere, the east end is typically poorer, because the prevailing winds blow from the west and historically carried industrial smoke towards the eastern districts.

See also

Intermediate Region
Easting
Oriental

References

External links

Orientation (geometry)
East
[ "Physics", "Mathematics" ]
558
[ "Topology", "Space", "Geometry", "Spacetime", "Orientation (geometry)" ]
61,087
https://en.wikipedia.org/wiki/Glycomics
Glycomics is the comprehensive study of glycomes (the entire complement of sugars, whether free or present in more complex molecules of an organism), including genetic, physiologic, pathologic, and other aspects. Glycomics "is the systematic study of all glycan structures of a given cell type or organism" and is a subset of glycobiology. The term glycomics is derived from the chemical prefix for sweetness or a sugar, "glyco-", and was formed to follow the omics naming convention established by genomics (which deals with genes) and proteomics (which deals with proteins).

Challenges

The complexity of sugars: their structures are not linear; instead, they are highly branched. Moreover, glycans can be modified (modified sugars), which increases their complexity.
Complex biosynthetic pathways for glycans.
Glycans are usually found either bound to proteins (glycoproteins) or conjugated with lipids (glycolipids).
Unlike genomes, glycans are highly dynamic.

This area of research has to deal with an inherent level of complexity not seen in other areas of applied biology. 68 building blocks (molecules for DNA, RNA and proteins; categories for lipids; types of sugar linkages for saccharides) provide the structural basis for the molecular choreography that constitutes the entire life of a cell. DNA and RNA have four building blocks each (the nucleosides or nucleotides). Lipids are divided into eight categories based on ketoacyl and isoprene. Proteins have 20 (the amino acids). Saccharides have 32 types of sugar linkages. While these building blocks can be attached only linearly for proteins and genes, they can be arranged in a branched array for saccharides, further increasing the degree of complexity.

Add to this the complexity of the numerous proteins involved, not only as carriers of carbohydrate (the glycoproteins), but also proteins specifically involved in binding and reacting with carbohydrate:
Carbohydrate-specific enzymes for synthesis, modulation, and degradation
Lectins, carbohydrate-binding proteins of all sorts
Receptors, circulating or membrane-bound carbohydrate-binding receptors

Importance

To understand the importance of glycomics, one should know the different and important functions of glycans. The following are some of those functions:
Glycoproteins and glycolipids found on the cell surface play a critical role in bacterial and viral recognition.
They are involved in cellular signaling pathways and modulate cell function.
They are important in innate immunity.
They determine cancer development.
They orchestrate the cellular fate, inhibit proliferation, and regulate circulation and invasion.
They affect the stability and folding of proteins.
They affect the pathway and fate of glycoproteins.
There are many glycan-specific diseases, often hereditary diseases.

There are important medical applications of aspects of glycomics:
Lectins fractionate cells to avoid graft-versus-host disease in hematopoietic stem cell transplantation.
Activation and expansion of cytolytic CD8 T cells in cancer treatment.

Glycomics is particularly important in microbiology because glycans play diverse roles in bacterial physiology.
Research in bacterial glycomics could lead to the development of:
novel drugs
bioactive glycans
glycoconjugate vaccines

Tools used

The following are examples of the techniques commonly used in glycan analysis.

High-resolution mass spectrometry (MS) and high-performance liquid chromatography (HPLC)

The most commonly applied methods are MS and HPLC, in which the glycan part is cleaved either enzymatically or chemically from the target and subjected to analysis. In the case of glycolipids, they can be analyzed directly without separation of the lipid component. N-glycans from glycoproteins are analyzed routinely by high-performance liquid chromatography (reversed phase, normal phase and ion exchange HPLC) after tagging the reducing end of the sugars with a fluorescent compound (reductive labeling). A large variety of different labels have been introduced in recent years, of which 2-aminobenzamide (AB), anthranilic acid (AA), 2-aminopyridin (PA), 2-aminoacridone (AMAC) and 3-(acetylamino)-6-aminoacridine (AA-Ac) are just a few. O-glycans are usually analysed without any tags, because the chemical release conditions prevent them from being labeled. Fractionated glycans from high-performance liquid chromatography (HPLC) instruments can be further analyzed by MALDI-TOF-MS(MS) to get further information about structure and purity. Sometimes glycan pools are analyzed directly by mass spectrometry without prefractionation, although discrimination between isobaric glycan structures is more challenging and not always possible. In any case, direct MALDI-TOF-MS analysis can provide a fast and straightforward overview of the glycan pool. In recent years, high-performance liquid chromatography coupled online to mass spectrometry has become very popular. By choosing porous graphitic carbon as a stationary phase for liquid chromatography, even non-derivatized glycans can be analyzed. Electrospray ionisation (ESI) is frequently used for this application.

Multiple Reaction Monitoring (MRM)

Although MRM has been used extensively in metabolomics and proteomics, its high sensitivity and linear response over a wide dynamic range make it especially suited for glycan biomarker research and discovery. MRM is performed on a triple quadrupole (QqQ) instrument, which is set to select a predetermined precursor ion in the first quadrupole, fragment it in the collision quadrupole, and detect a predetermined fragment ion in the third quadrupole. It is a non-scanning technique, wherein each transition is detected individually and the detection of multiple transitions occurs concurrently in duty cycles. This technique is being used to characterize the immune glycome.

Table 1: Advantages and disadvantages of mass spectrometry in glycan analysis

Arrays

Lectin and antibody arrays provide high-throughput screening of many samples containing glycans. This method uses either naturally occurring lectins or artificial monoclonal antibodies, where both are immobilized on a certain chip and incubated with a fluorescent glycoprotein sample. Glycan arrays, like those offered by the Consortium for Functional Glycomics and Z Biotech LLC, contain carbohydrate compounds that can be screened with lectins or antibodies to define carbohydrate specificity and identify ligands.

Metabolic and covalent labeling of glycans

Metabolic labeling of glycans can be used as a way to detect glycan structures. A well-known strategy involves the use of azide-labeled sugars, which can be reacted using the Staudinger ligation.
This method has been used for in vitro and in vivo imaging of glycans.

Tools for glycoproteins

Complete structural analysis of complex glycans by X-ray crystallography and nuclear magnetic resonance (NMR) spectroscopy is a difficult and complex field. However, the structure of the binding site of numerous lectins, enzymes and other carbohydrate-binding proteins has revealed a wide variety of structural bases for glycome function. The purity of test samples has been obtained through chromatography (affinity chromatography, etc.) and analytical electrophoresis (PAGE (polyacrylamide gel electrophoresis), capillary electrophoresis, affinity electrophoresis, etc.).

Software and databases

Several online software tools and databases are available for glycomic research. These include:
GlyCosmos
GlyTouCan
GlycomeDB
UniCarb-DB

See also

Cytomics
Glycobiology
GlycomeDB
UniCarb-DB
Carbohydrate Structure Database (CSDB)
Glycoconjugate
Glyquest
Interactomics
Lipidomics
List of omics topics in biology
Metabolomics
Omics
Proteomics
Systems biology
Minimum Information Required About a Glycomics Experiment

References

External links

glycosciences.de: databases and bioinformatics tools for glycobiology and glycomics.
GlycomeDB: a carbohydrate structure metadatabase.
GlycoBase: a web HPLC/UPLC resource that contains elution positions expressed as glucose unit values.
ProGlycAn: a short introduction to glycan analysis and a nomenclature for N-Glycans.
CD BioGlyco: databases and tools in the field of glycomics, from glycan release, separation, and purification, and glycan derivatization to glycan characterization and quantification.

Omics Carbohydrate chemistry Sugar
Glycomics
[ "Chemistry", "Biology" ]
1,920
[ "Carbohydrates", "Glycomics", "Bioinformatics", "Omics", "Sugar", "Carbohydrate chemistry", "nan", "Chemical synthesis", "Glycobiology" ]
61,095
https://en.wikipedia.org/wiki/Flight%20and%20expulsion%20of%20Germans%20%281944%E2%80%931950%29
During the later stages of World War II and the post-war period, Germans fled and were expelled from various Eastern and Central European countries, including Czechoslovakia, and from the former German provinces of Lower and Upper Silesia, East Prussia, and the eastern parts of Brandenburg (Neumark) and Pomerania (Hinterpommern), which were annexed by Poland and the Soviet Union. The idea to expel the Germans from the annexed territories had been proposed by Winston Churchill, in conjunction with the Polish and Czechoslovak exile governments in London, at least since 1942. Tomasz Arciszewski, the Polish prime minister in exile, supported the annexation of German territory but opposed the idea of expulsion, wanting instead to naturalize the Germans as Polish citizens and to assimilate them. Joseph Stalin, in concert with other Communist leaders, planned to expel all ethnic Germans from east of the Oder and from lands which from May 1945 fell inside the Soviet occupation zones. In 1941, his government had already transported Germans from Crimea to Central Asia. Between 1944 and 1948, millions of people, including ethnic Germans (Volksdeutsche) and German citizens (Reichsdeutsche), were permanently or temporarily moved from Central and Eastern Europe. By 1950, a total of about 12 million Germans had fled or been expelled from east-central Europe into Allied-occupied Germany and Austria. The West German government put the total at 14.6 million, including a million ethnic Germans who had settled in territories conquered by Nazi Germany during World War II, ethnic German migrants to Germany after 1950, and the children born to expelled parents. The largest numbers came from former eastern territories of Germany ceded to the People's Republic of Poland and the Soviet Union (about seven million), and from Czechoslovakia (about three million). The areas affected included the former eastern territories of Germany, which were annexed by Poland and the Soviet Union after the war, as well as areas where Germans were living within the borders of the pre-war Second Polish Republic, Czechoslovakia, Hungary, Romania, Yugoslavia, and the Baltic states. The Nazis had made plans—only partially completed before the Nazi defeat—to remove Jews and many Slavic people from Eastern Europe and settle the area with Germans. The death toll attributable to the flight and expulsions is disputed, with estimates ranging from 500,000 up to 2.5 million according to the German government. The removals occurred in three overlapping phases, the first of which was the organized evacuation of ethnic Germans by the Nazi government in the face of the advancing Red Army, from mid-1944 to early 1945. The second phase was the disorganised fleeing of ethnic Germans immediately following the Wehrmacht's surrender. The third phase was a more organised expulsion following the Allied leaders' Potsdam Agreement, which redefined the Central European borders and approved expulsions of ethnic Germans from the former German territories transferred to Poland, Russia and Czechoslovakia. Many German civilians were sent to internment and labour camps, where they were used as forced labour as part of German reparations to countries in Eastern Europe. The major expulsions were completed in 1950. Estimates for the total number of people of German ancestry still living in Central and Eastern Europe in 1950 ranged from 700,000 to 2.7 million. Background Before World War II, East-Central Europe generally lacked clearly delineated ethnic settlement areas. 
There were some ethnic-majority areas, but there were also vast mixed areas and abundant smaller pockets settled by various ethnicities. Within these areas of diversity, including the major cities of Central and Eastern Europe, people in various ethnic groups had interacted every day for centuries, while not always harmoniously, on every civic and economic level. With the rise of nationalism in the 19th century, the ethnicity of citizens became an issue in territorial claims, the self-perception/identity of states, and claims of ethnic superiority. The German Empire introduced the idea of ethnicity-based settlement in an attempt to ensure its territorial integrity. It was also the first modern European state to propose population transfers as a means of solving "nationality conflicts", intending the removal of Poles and Jews from the projected post–World War I "Polish Border Strip" and its resettlement with Christian ethnic Germans. Following the collapse of Austria-Hungary, the Russian Empire, and the German Empire at the end of World War I, the Treaty of Versailles pronounced the formation of several independent states in Central and Eastern Europe, in territories previously controlled by these imperial powers. None of the new states were ethnically homogeneous. After 1919, many ethnic Germans emigrated from the former imperial lands back to the Weimar Republic and the First Austrian Republic after losing their privileged status in those foreign lands, where they had maintained minority communities. In 1919 ethnic Germans became national minorities in Poland, Czechoslovakia, Hungary, Yugoslavia, and Romania. In the following years, the Nazi ideology encouraged them to demand local autonomy. In Germany during the 1930s, Nazi propaganda claimed that Germans elsewhere were subject to persecution. Nazi supporters throughout eastern Europe (Czechoslovakia's Konrad Henlein, Poland's Deutscher Volksverband and Jungdeutsche Partei, Hungary's Volksbund der Deutschen in Ungarn) formed local Nazi political parties sponsored financially by the German Ministry of Foreign Affairs, e.g. by the Hauptamt Volksdeutsche Mittelstelle. However, by 1939 more than half of Polish Germans lived outside of the formerly German territories of Poland due to improving economic opportunities. Population movements Notes: According to the national census figures the percentage of ethnic Germans in the total population was: Poland 2.3%; Czechoslovakia 22.3%; Hungary 5.5%; Romania 4.1% and Yugoslavia 3.6%. The West German figures are the base used to estimate losses in the expulsions. The West German figure for Poland is broken out as 939,000 monolingual German and 432,000 bi-lingual Polish/German. The West German figure for Poland includes 60,000 in Trans-Olza which was annexed by Poland in 1938. In the 1930 census, this region was included in the Czechoslovak population. A West German analysis of the wartime Deutsche Volksliste by Alfred Bohmann (de) put the number of Polish nationals in the Polish areas annexed by Nazi Germany who identified themselves as German at 709,500 plus 1,846,000 Poles who were considered candidates for Germanisation. In addition, there were 63,000 Volksdeutsch in the General Government. Martin Broszat cited a document with different Volksliste figures 1,001,000 were identified as Germans and 1,761,000 candidates for Germanisation. The figures for the Deutsche Volksliste exclude ethnic Germans resettled in Poland during the war. 
The national census figures for Germans include German-speaking Jews. Poland (7,000) Czech territory not including Slovakia (75,000) Hungary 10,000, Yugoslavia (10,000) During the Nazi German occupation, many citizens of German descent in Poland registered with the Deutsche Volksliste. Some were given important positions in the hierarchy of the Nazi administration, and some participated in Nazi atrocities, causing resentment towards German speakers in general. These facts were later used by the Allied politicians as one of the justifications for the expulsion of the Germans. The contemporary position of the German government is that, while the Nazi-era war crimes resulted in the expulsion of the Germans, the deaths due to the expulsions were an injustice. During the German occupation of Czechoslovakia, especially after the reprisals for the assassination of Reinhard Heydrich, most of the Czech resistance groups demanded that the "German problem" be solved by transfer/expulsion. These demands were adopted by the Czechoslovak government-in-exile, which sought the support of the Allies for this proposal, beginning in 1943. The final agreement for the transfer of the Germans was not reached until the Potsdam Conference. The expulsion policy was part of a geopolitical and ethnic reconfiguration of postwar Europe. In part, it was retribution for Nazi Germany's initiation of the war and subsequent atrocities and ethnic cleansing in Nazi-occupied Europe. Allied leaders Franklin D. Roosevelt of the United States, Winston Churchill of the United Kingdom, and Joseph Stalin of the USSR, had agreed in principle before the end of the war that the border of Poland's territory would be moved west (though how far was not specified) and that the remaining ethnic German population were subject to expulsion. They assured the leaders of the émigré governments of Poland and Czechoslovakia, both occupied by Nazi Germany, of their support on this issue. Reasons and justifications for the expulsions Given the complex history of the affected regions and the divergent interests of the victorious Allied powers, it is difficult to ascribe a definitive set of motives to the expulsions. The respective paragraph of the Potsdam Agreement only states vaguely: "The Three Governments, having considered the question in all its aspects, recognize that the transfer to Germany of German populations, or elements thereof, remaining in Poland, Czechoslovakia and Hungary, will have to be undertaken. They agreed that any transfers that take place should be effected in an orderly and humane manner." The major motivations revealed were: A desire to create ethnically homogeneous nation-states: This is presented by several authors as a key issue that motivated the expulsions. View of a German minority as potentially troublesome: From the Soviet perspective, shared by the communist administrations installed in Soviet-occupied Europe, the remaining large German populations outside postwar Germany were seen as a potentially troublesome 'fifth column' that would, because of its social structure, interfere with the envisioned Sovietisation of the respective countries. The Western allies also saw the threat of a potential German 'fifth column', especially in Poland after the agreed-to compensation with former German territory. In general, the Western allies hoped to secure a more lasting peace by eliminating the German minorities, which they thought could be done in a humane manner. 
The proposals from the Polish and Czech governments-in-exile to expel ethnic Germans after the war received support from Winston Churchill and Anthony Eden. Another motivation was to punish the Germans: the Allies declared them collectively guilty of German war crimes. Soviet political considerations: Stalin saw the expulsions as a means of creating antagonism between Germany and its Eastern neighbors, who would thus need Soviet protection. The expulsions served several practical purposes as well. Ethnically homogeneous nation-state The creation of ethnically homogeneous nation states in Central and Eastern Europe was presented as the key reason for the official decisions of the Potsdam and previous Allied conferences as well as the resulting expulsions. The principle of every nation inhabiting its own nation state gave rise to a series of expulsions and resettlements of Germans, Poles, Ukrainians and others who after the war found themselves outside their supposed home states. The 1923 population exchange between Greece and Turkey lent legitimacy to the concept. Churchill cited the operation as a success in a speech discussing the German expulsions. In view of the desire for ethnically homogeneous nation-states, it did not make sense to draw borders through regions that were already inhabited homogeneously by Germans without any minorities. As early as 9 September 1944, Soviet leader Joseph Stalin and Polish communist Edward Osóbka-Morawski of the Polish Committee of National Liberation signed a treaty in Lublin on population exchanges of Ukrainians and Poles living on the "wrong" side of the Curzon Line. Many of the 2.1 million Poles expelled from the Soviet-annexed Kresy, so-called 'repatriants', were resettled to former German territories, then dubbed 'Recovered Territories'. Czech Edvard Beneš, in his decree of 19 May 1945, termed ethnic Hungarians and Germans "unreliable for the state", clearing a way for confiscations and expulsions. View of German minorities as potential fifth columns Distrust and enmity One of the reasons given for the population transfer of Germans from the former eastern territories of Germany was the claim that these areas had been a stronghold of the Nazi movement. Neither Stalin nor the other influential advocates of this argument required that expellees be checked for their political attitudes or their activities. Even in the few cases when this happened and expellees were proven to have been bystanders, opponents or even victims of the Nazi regime, they were rarely spared from expulsion. Polish Communist propaganda used and manipulated hatred of the Nazis to intensify the expulsions. With German communities living within the pre-war borders of Poland, there was an expressed fear of disloyalty of Germans in Eastern Upper Silesia and Pomerelia, based on wartime Nazi activities. Created on order of Reichsführer-SS Heinrich Himmler, a Nazi ethnic German organisation called Selbstschutz carried out executions during Intelligenzaktion alongside operational groups of German military and police, in addition to such activities as identifying Poles for execution and illegally detaining them. To Poles, expulsion of Germans was seen as an effort to avoid such events in the future. As a result, Polish exile authorities proposed a population transfer of Germans as early as 1941. The Czechoslovak government-in-exile worked with the Polish government-in-exile towards this end during the war. 
Preventing ethnic violence The participants at the Potsdam Conference asserted that expulsions were the only way to prevent ethnic violence. As Winston Churchill expounded in the House of Commons in 1944, "Expulsion is the method which, insofar as we have been able to see, will be the most satisfactory and lasting. There will be no mixture of populations to cause endless trouble... A clean sweep will be made. I am not alarmed by the prospect of disentanglement of populations, not even of these large transferences, which are more possible in modern conditions than they have ever been before". Polish resistance fighter, statesman and courier Jan Karski warned President Franklin D. Roosevelt in 1943 of the possibility of Polish reprisals, describing them as "unavoidable" and "an encouragement for all the Germans in Poland to go west, to Germany proper, where they belong." Punishment for Nazi crimes The expulsions were also driven by a desire for retribution, given the brutal way German occupiers treated non-German civilians in the German-occupied territories during the war. Thus, the expulsions were at least partly motivated by the animus engendered by the war crimes and atrocities perpetrated by the German belligerents and their proxies and supporters. Czechoslovak President Edvard Beneš, in the National Congress, justified the expulsions on 28 October 1945 by stating that the majority of Germans had acted in full support of Hitler; during a ceremony in remembrance of the Lidice massacre, he blamed all Germans as responsible for the actions of the German state. In Poland and Czechoslovakia, newspapers, leaflets and politicians across the political spectrum, which narrowed during the post-war Communist take-over, asked for retribution for wartime German activities. Responsibility of the German population for the crimes committed in its name was also asserted by commanders of the late and post-war Polish military. Karol Świerczewski, commander of the Second Polish Army, briefed his soldiers to "exact on the Germans what they enacted on us, so they will flee on their own and thank God they saved their lives." In Poland, which had suffered the loss of six million citizens, including its elite and almost its entire Jewish population due to Lebensraum and the Holocaust, most Germans were seen as Nazi-perpetrators who could now finally be collectively punished for their past deeds. Soviet political considerations Stalin, who had earlier directed several population transfers in the Soviet Union, strongly supported the expulsions, which worked to the Soviet Union's advantage in several ways. The satellite states would now feel the need to be protected by the Soviets from German anger over the expulsions. The assets left by expellees in Poland and Czechoslovakia were successfully used to reward cooperation with the new governments, and support for the Communists was especially strong in areas that had seen significant expulsions. Settlers in these territories welcomed the opportunities presented by their fertile soils and vacated homes and enterprises, increasing their loyalty. Movements in the later stages of the war Evacuation and flight to areas within Germany Late in the war, as the Red Army advanced westward, many Germans were apprehensive about the impending Soviet occupation. Most were aware of the Soviet reprisals against German civilians. Soviet soldiers committed numerous rapes and other crimes. 
News of atrocities such as the Nemmersdorf massacre were exaggerated and disseminated by the Nazi propaganda machine. Plans to evacuate the ethnic German population westward into Germany, from Poland and the eastern territories of Germany, were prepared by various Nazi authorities toward the end of the war. In most cases, implementation was delayed until Soviet and Allied forces had defeated the German forces and advanced into the areas to be evacuated. The abandonment of millions of ethnic Germans in these vulnerable areas until combat conditions overwhelmed them can be attributed directly to the measures taken by the Nazis against anyone suspected of 'defeatist' attitudes (as evacuation was considered) and the fanaticism of many Nazi functionaries in their execution of Hitler's 'no retreat' orders. The first exodus of German civilians from the eastern territories was composed of both spontaneous flight and organized evacuation, starting in mid-1944 and continuing until early 1945. Conditions turned chaotic during the winter when kilometers-long queues of refugees pushed their carts through the snow trying to stay ahead of the advancing Red Army. Refugee treks which came within reach of the advancing Soviets suffered casualties when targeted by low-flying aircraft, and some people were crushed by tanks. The German Federal Archive has estimated that 100–120,000 civilians (1% of the total population) were killed during the flight and evacuations. Polish historians Witold Sienkiewicz and Grzegorz Hryciuk maintain that civilian deaths in the flight and evacuation were "between 600,000 and 1.2 million. The main causes of death were cold, stress, and bombing." The mobilized Strength Through Joy liner Wilhelm Gustloff was sunk in January 1945 by Soviet Navy submarine S-13, killing about 9,000 civilians and military personnel escaping East Prussia in the largest loss of life in a single ship sinking in history. Many refugees tried to return home when the fighting ended. Before 1 June 1945, 400,000 people crossed back over the Oder and Neisse rivers eastward, before Soviet and Polish communist authorities closed the river crossings; another 800,000 entered Silesia through Czechoslovakia. In accordance with the Potsdam Agreement, at the end of 1945—wrote Hahn & Hahn—4.5 million Germans who had fled or been expelled were under the control of the Allied governments. From 1946 to 1950 around 4.5 million people were brought to Germany in organized mass transports from Poland, Czechoslovakia, and Hungary. An additional 2.6 million released POWs were listed as expellees. Evacuation and flight to Denmark From the Baltic coast, many soldiers and civilians were evacuated by ship in the course of Operation Hannibal. Between 23 January and 5 May 1945, up to 250,000 Germans, primarily from East Prussia, Pomerania, and the Baltic states, were evacuated to Nazi-occupied Denmark, based on an order issued by Hitler on 4 February 1945. When the war ended, the German refugee population in Denmark amounted to 5% of the total Danish population. The evacuation focused on women, the elderly and children—a third of whom were under the age of fifteen. After the war, the Germans were interned in several hundred refugee camps throughout Denmark, the largest of which was the Oksbøl Refugee Camp with 37,000 inmates. The camps were guarded by Danish Defence units. 
The situation eased after 60 Danish clergymen spoke in defence of the refugees in an open letter, and Social Democrat Johannes Kjærbøl took over the administration of the refugees on 6 September 1945. On 9 May 1945, the Red Army occupied the island of Bornholm; between 9 May and 1 June 1945, the Soviets shipped 3,000 refugees and 17,000 Wehrmacht soldiers from there to Kolberg. In 1945, 13,492 German refugees died, among them 7,000 children under five years of age. According to Danish physician and historian Kirsten Lylloff, these deaths were partially due to denial of medical care by Danish medical staff, as both the Danish Association of Doctors and the Danish Red Cross began refusing medical treatment to German refugees starting in March 1945. The last refugees left Denmark on 15 February 1949. In the Treaty of London, signed 26 February 1953, West Germany and Denmark agreed on compensation payments of 160 million Danish kroner for its extended care of the refugees, which West Germany paid between 1953 and 1958. Following Germany's defeat The Second World War ended in Europe with Germany's defeat in May 1945. By this time, all of Eastern and much of Central Europe was under Soviet occupation. This included most of the historical German settlement areas, as well as the Soviet occupation zone in eastern Germany. The Allies settled on the terms of occupation, the territorial truncation of Germany, and the expulsion of ethnic Germans from post-war Poland, Czechoslovakia and Hungary to the Allied Occupation Zones in the Potsdam Agreement, drafted during the Potsdam Conference between 17 July and 2 August 1945. Article XII of the agreement is concerned with the expulsions to post-war Germany and reads: The Three Governments, having considered the question in all its aspects, recognize that the transfer to Germany of German populations, or elements thereof, remaining in Poland, Czechoslovakia, and Hungary, will have to be undertaken. They agree that any transfers that take place should be effected in an orderly and humane manner. The agreement further called for equal distribution of the transferred Germans for resettlement among American, British, French and Soviet occupation zones comprising post–World War II Germany. Expulsions that took place before the Allies agreed on the terms at Potsdam are referred to as "irregular" expulsions (Wilde Vertreibungen). They were conducted by military and civilian authorities in Soviet-occupied post-war Poland and Czechoslovakia in the first half of 1945. In Yugoslavia, the remaining Germans were not expelled; ethnic German villages were turned into internment camps where over 50,000 perished from deliberate starvation and direct murders by Yugoslav guards. In late 1945 the Allies requested a temporary halt to the expulsions, due to the refugee problems created by the expulsion of Germans. While expulsions from Czechoslovakia were temporarily slowed, this was not true in Poland and the former eastern territories of Germany. Sir Geoffrey Harrison, one of the drafters of the cited Potsdam article, stated that the "purpose of this article was not to encourage or legalize the expulsions, but rather to provide a basis for approaching the expelling states and requesting them to co-ordinate transfers with the Occupying Powers in Germany." After Potsdam, a series of expulsions of ethnic Germans occurred throughout the Soviet-controlled Eastern European countries. 
Property and materiel in the affected territory that had belonged to Germany or to Germans was confiscated; it was either transferred to the Soviet Union, nationalised, or redistributed among the citizens. Of the many post-war forced migrations, the largest was the expulsion of ethnic Germans from Central and Eastern Europe, primarily from the territory of 1937 Czechoslovakia (which included the historically German-speaking area in the Sudeten mountains along the German-Czech-Polish border (Sudetenland)), and the territory that became post-war Poland. Poland's post-war borders were moved west to the Oder-Neisse line, deep into former German territory and within 80 kilometers of Berlin. Polish refugees expelled from the Soviet Union were resettled in the former German territories that were awarded to Poland after the war. During and after the war, 2,208,000 Poles fled or were expelled from the former eastern Polish regions that were merged to the USSR after the 1939 Soviet invasion of Poland; 1,652,000 of these refugees were resettled in the former German territories. Czechoslovakia The final agreement for the transfer of the Germans was reached at the Potsdam Conference. According to the West German Schieder commission, there were 4.5 million German civilians present in Bohemia-Moravia in May 1945, including 100,000 from Slovakia and 1.6 million refugees from Poland. Between 700,000 and 800,000 Germans were affected by irregular expulsions between May and August 1945. The expulsions were encouraged by Czechoslovak politicians and were generally executed by order of local authorities, mostly by groups of armed volunteers and the army. Transfers of population under the Potsdam agreements lasted from January until October 1946. 1.9 million ethnic Germans were expelled to the American zone, part of what would become West Germany. More than 1 million were expelled to the Soviet zone, which later became East Germany. About 250,000 ethnic Germans were allowed to remain in Czechoslovakia. According to the West German Schieder commission 250,000 persons who had declared German nationality in the 1939 Nazi census remained in Czechoslovakia; however the Czechs counted 165,790 Germans remaining in December 1955. Male Germans with Czech wives were expelled, often with their spouses, while ethnic German women with Czech husbands were allowed to stay. According to the Schieder commission, Sudeten Germans considered essential to the economy were held as forced labourers. The West German government estimated the expulsion death toll at 273,000 civilians, and this figure is cited in historical literature. However, in 1995, research by a joint German and Czech commission of historians found that the previous demographic estimates of 220,000 to 270,000 deaths to be overstated and based on faulty information. They concluded that the death toll was between 15,000 and 30,000 dead, assuming that not all deaths were reported. The German Red Cross Search Service (Suchdienst) confirmed the deaths of 18,889 people during the expulsions from Czechoslovakia. (Violent deaths 5,556; Suicides 3,411; Deported 705; In camps 6,615; During the wartime flight 629; After wartime flight 1,481; Cause undetermined 379; Other misc. 73.) Hungary In contrast to expulsions from other nations or states, the expulsion of the Germans from Hungary was dictated from outside Hungary. It began on 22 December 1944 when the Soviet Red Army Commander-in-Chief ordered the expulsions. 
In February 1945 the Soviet-dominated Allied Control Commission ordered the Hungarian Ministry of Interior to compile lists of all ethnic Germans living in the country. Initially the Census Bureau refused to divulge information on Hungarians who had registered as Volksdeutsche, but acceded under pressure from the Hungarian State Protection Authority. Three percent of the German pre-war population (about 20,000 people) had been evacuated by the Volksbund before that. They went to Austria, but many had returned. Overall, 60,000 ethnic Germans had fled. According to the West German Schieder commission report of 1956, in early 1945 between 30 and 35,000 ethnic German civilians and 30,000 military POW were arrested and transported from Hungary to the Soviet Union as forced labourers. In some villages, the entire adult population was taken to labor camps in the Donbas. 6,000 died there as a result of hardships and ill-treatment. Data from the Russian archives, which were based on an actual enumeration, put the number of ethnic Germans registered by the Soviets in Hungary at 50,292 civilians, of whom 31,923 were deported to the USSR for reparation labor implementing Order 7161. 9% (2,819) were documented as having died. In 1945, official Hungarian figures showed 477,000 German speakers in Hungary, including German-speaking Jews, 303,000 of whom had declared German nationality. Of the German nationals, 33% were children younger than 12 or elderly people over 60; 51% were women. On 29 December 1945, the postwar Hungarian Government, obeying the directions of the Potsdam Conference agreements, ordered the expulsion of anyone identified as German in the 1941 census, or who had been a member of the Volksbund, the SS, or any other armed German organisation. Accordingly, mass expulsions began. The rural population was affected more than the urban population or those ethnic Germans determined to have needed skills, such as miners. Germans married to Hungarians were not expelled, regardless of sex. The first 5,788 expellees departed from Wudersch on 19 January 1946. About 180,000 German-speaking Hungarian citizens were stripped of their citizenship and possessions, and expelled to the Western zones of Germany. By July 1948, 35,000 others had been expelled to the Soviet occupation zone of Germany. Most of the expellees found new homes in the south-west German province of Baden-Württemberg, but many others settled in Bavaria and Hesse. Other research indicates that, between 1945 and 1950, 150,000 were expelled to western Germany, 103,000 to Austria, and none to eastern Germany. During the expulsions, numerous organized protest demonstrations by the Hungarian population took place. Acquisition of land for distribution to Hungarian refugees and nationals was one of the main reasons stated by the government for the expulsion of the ethnic Germans from Hungary. The botched organization of the redistribution led to social tensions. 22,445 people were identified as German in the 1949 census. An order of 15 June 1948 halted the expulsions. A governmental decree of 25 March 1950 declared all expulsion orders void, allowing the expellees to return if they so wished. After the fall of Communism in the early 1990s, German victims of expulsion and Soviet forced labor were rehabilitated. Post-Communist laws allowed expellees to be compensated, to return, and to buy property. There were reportedly no tensions between Germany and Hungary regarding expellees. 
In 1958, the West German government estimated, based on a demographic analysis, that by 1950, 270,000 Germans remained in Hungary; 60,000 had been assimilated into the Hungarian population, and there were 57,000 "unresolved cases" that remained to be clarified. The editor for the section of the 1958 report for Hungary was Wilfried Krallert, a scholar dealing with Balkan affairs since the 1930s, when he was a Nazi Party member. During the war, he was an officer in the SS and was directly implicated in the plundering of cultural artifacts in eastern Europe. After the war, he was chosen to author the sections of the demographic report on the expulsions from Hungary, Romania, and Yugoslavia. The figure of 57,000 "unresolved cases" in Hungary is included in the figure of 2 million dead expellees, which is often cited in official German and historical literature. Netherlands After World War II, the Dutch government decided to expel the German expatriates (25,000) living in the Netherlands. Germans, including those with Dutch spouses and children, were labelled as "hostile subjects" ("vijandelijke onderdanen"). The operation began on 10 September 1946 in Amsterdam, when German expatriates and their families were arrested at their homes in the middle of the night and given one hour to pack their luggage. They were only allowed to take 100 guilders with them. The remainder of their possessions were seized by the state. They were taken to internment camps near the German border, the largest of which was Mariënbosch concentration camp, near Nijmegen. In total, 3,691 Germans (less than 15% of the total number of German expatriates in the Netherlands) were expelled. The Allied forces occupying the Western zone of Germany opposed this operation, fearing that other nations might follow suit. Poland, including former German territories Throughout 1944 and until May 1945, as the Red Army advanced through Eastern Europe and the provinces of eastern Germany, some German civilians were killed in the fighting. While many had already fled ahead of the advancing Soviet Army, frightened by rumors of Soviet atrocities, which in some cases were exaggerated and exploited by Nazi Germany's propaganda, millions still remained. A 2005 study by the Polish Academy of Sciences estimated that during the final months of the war, 4 to 5 million German civilians fled with the retreating German forces, and in mid-1945, 4.5 to 4.6 million Germans remained in the territories under Polish control. By 1950, 3,155,000 had been transported to Germany, 1,043,550 were naturalized as Polish citizens and 170,000 Germans still remained in Poland. According to the West German Schieder commission of 1953, 5,650,000 Germans remained in what would become Poland's new borders in mid-1945, 3,500,000 had been expelled and 910,000 remained in Poland by 1950. According to the Schieder commission, the civilian death toll was 2 million; in 1974, the German Federal Archives estimated the death toll at about 400,000. (The controversy regarding the casualty figures is covered below in the section on casualties.) During the 1945 military campaign, most of the male German population remaining east of the Oder–Neisse line were considered potential combatants and held by the Soviet military in detention camps, subject to verification by the NKVD. Members of Nazi party organizations and government officials were segregated and sent to the USSR for forced labour as reparations. 
In mid-1945, the eastern territories of pre-war Germany were turned over to the Soviet-controlled Polish military forces. Early expulsions were undertaken by the Polish Communist military authorities even before the Potsdam Conference placed them under temporary Polish administration pending the final Peace Treaty, in an effort to ensure later territorial integration into an ethnically homogeneous Poland. The Polish Communists wrote: "We must expel all the Germans because countries are built on national lines and not on multinational ones." The Polish government defined Germans as either Reichsdeutsche, people enlisted in first or second Volksliste groups; or those who held German citizenship. Around 1,165,000 German citizens of Slavic descent were "verified" as "autochthonous" Poles. Of these, most were not expelled; but many chose to migrate to Germany between 1951 and 1982, including most of the Masurians of East Prussia. At the Potsdam Conference (17 July – 2 August 1945), the territory to the east of the Oder–Neisse line was assigned to Polish and Soviet Union administration pending the final peace treaty. All Germans had their property confiscated and were placed under restrictive jurisdiction. The Silesian voivode Aleksander Zawadzki in part had already expropriated the property of the German Silesians on 26 January 1945, another decree of 2 March expropriated that of all Germans east of the Oder and Neisse, and a subsequent decree of 6 May declared all "abandoned" property as belonging to the Polish state. Germans were also not permitted to hold Polish currency, the only legal currency since July, other than earnings from work assigned to them. The remaining population faced theft and looting, and also in some instances rape and murder by the criminal elements, crimes that were rarely prevented nor prosecuted by the Polish Militia Forces and newly installed communist judiciary. In mid-1945, 4.5 to 4.6 million Germans resided in territory east of the Oder–Neisse Line. By early 1946, 550,000 Germans had already been expelled from there, and 932,000 had been verified as having Polish nationality. In the February 1946 census, 2,288,000 people were classified as Germans and subject to expulsion, and 417,400 were subject to verification action, to determine nationality. The negatively verified people, who did not succeed in demonstrating their "Polish nationality", were directed for resettlement. Those Polish citizens who had collaborated or were believed to have collaborated with the Nazis, were considered "traitors of the nation" and sentenced to forced labor prior to being expelled. By 1950, 3,155,000 German civilians had been expelled and 1,043,550 were naturalized as Polish citizens. 170,000 Germans considered "indispensable" for the Polish economy were retained until 1956, although almost all had left by 1960. 200,000 Germans in Poland were employed as forced labor in communist-administered camps prior to being expelled from Poland. These included Central Labour Camp Jaworzno, Central Labour Camp Potulice, Łambinowice and Zgoda labour camp. Besides these large camps, numerous other forced labor, punitive and internment camps, urban ghettos and detention centers, sometimes consisting only of a small cellar, were set up. The German Federal Archives estimated in 1974 that more than 200,000 German civilians were interned in Polish camps; they put the death rate at 20–50% and estimated that over 60,000 probably died. 
Polish historians Witold Sienkiewicz and Grzegorz Hryciuk maintain that the internment:resulted in numerous deaths, which cannot be accurately determined because of lack of statistics or falsification. At certain periods, they could be in the tens of percent of the inmate numbers. Those interned are estimated at 200–250,000 German nationals and the indigenous population and deaths might range from 15,000 to 60,000 persons." Note: The indigenous population were former German citizens who declared Polish ethnicity. Historian R. M. Douglas describes a chaotic and lawless regime in the former German territories in the immediate postwar era. The local population was victimized by criminal elements who arbitrarily seized German property for personal gain. Bilingual people who were on the Volksliste during the war were declared Germans by Polish officials who then seized their property for personal gain. The Federal Statistical Office of Germany estimated that in mid-1945, 250,000 Germans remained in the northern part of the former East Prussia, which became the Kaliningrad Oblast. They also estimated that more than 100,000 people surviving the Soviet occupation were evacuated to Germany beginning in 1947. German civilians were held as "reparation labor" by the USSR. Data from the Russian archives, newly published in 2001 and based on an actual enumeration, put the number of German civilians deported from Poland to the USSR in early 1945 for reparation labor at 155,262; 37% (57,586) died in the USSR. The West German Red Cross had estimated in 1964 that 233,000 German civilians were deported to the USSR from Poland as forced laborers and that 45% (105,000) were dead or missing. The West German Red Cross estimated at that time that 110,000 German civilians were held as forced labor in the Kaliningrad Oblast, where 50,000 were dead or missing. The Soviets deported 7,448 Poles of the Armia Krajowa from Poland. Soviet records indicated that 506 Poles died in captivity. Tomasz Kamusella maintains that in early 1945, 165,000 Germans were transported to the Soviet Union. According to Gerhardt Reichling, an official in the German Finance office, 520,000 German civilians from the Oder–Neisse region were conscripted for forced labor by both the USSR and Poland; he maintains that 206,000 perished. The attitudes of surviving Poles varied. Many had suffered brutalities and atrocities by the Germans, surpassed only by the German policies against Jews, during the Nazi occupation. The Germans had recently expelled more than a million Poles from territories they annexed during the war. Some Poles engaged in looting and various crimes, including murders, beatings, and rapes against Germans. On the other hand, in many instances Poles, including some who had been made slave laborers by the Germans during the war, protected Germans, for instance by disguising them as Poles. Moreover, in the Opole (Oppeln) region of Upper Silesia, citizens who claimed Polish ethnicity were allowed to remain, even though some, not all, had uncertain nationality, or identified as ethnic Germans. Their status as a national minority was accepted in 1955, along with state subsidies, with regard to economic assistance and education. The attitude of Soviet soldiers was ambiguous. Many committed atrocities, most notably rape and murder, and did not always distinguish between Poles and Germans, mistreating them equally. Other Soviets were taken aback by the brutal treatment of the German civilians and tried to protect them. 
Richard Overy cites an approximate total of 7.5 million Germans evacuated, migrated, or expelled from Poland between 1944 and 1950. Tomasz Kamusella cites estimates of 7 million expelled in total during both the "wild" and "legal" expulsions from the recovered territories from 1945 to 1948, plus an additional 700,000 from areas of pre-war Poland. Romania The ethnic German population of Romania in 1939 was estimated at 786,000. In 1940, Bessarabia and Bukovina were occupied by the USSR, and the ethnic German population of 130,000 was deported to German-held territory during the Nazi–Soviet population transfers, as well as 80,000 from Romania. 140,000 of these Germans were resettled in German-occupied Poland; in 1945, they were caught up in the flight and expulsion from Poland. Most of the ethnic Germans in Romania resided in Transylvania, the northern part of which was annexed by Hungary during World War II. The pro-German Hungarian government, as well as the pro-German Romanian government of Ion Antonescu, allowed Germany to enlist the German population in Nazi-sponsored organizations. During the war, 54,000 of the male population was conscripted by Nazi Germany, many into the Waffen-SS. In mid-1944, roughly 100,000 Germans fled from Romania with the retreating German forces. According to the West German Schieder commission report of 1957, 75,000 German civilians were deported to the USSR as forced labour and 15% (approximately 10,000) did not return. Data from the Russian archives which were based on an actual enumeration put the number of ethnic Germans registered by the Soviets in Romania at 421,846 civilians, of whom 67,332 were deported to the USSR for reparation labour, where 9% (6,260) died. The roughly 400,000 ethnic Germans who remained in Romania were treated as guilty of collaboration with Nazi Germany and were deprived of their civil liberties and property. Many were impressed into forced labour and deported from their homes to other regions of Romania. In 1948, Romania began a gradual rehabilitation of the ethnic Germans: they were not expelled, and the communist regime gave them the status of a national minority, the only Eastern Bloc country to do so. In 1958, the West German government estimated, based on a demographic analysis, that by 1950, 253,000 were counted as expellees in Germany or the West, 400,000 Germans still remained in Romania, 32,000 had been assimilated into the Romanian population, and that there were 101,000 "unresolved cases" that remained to be clarified. The figure of 101,000 "unresolved cases" in Romania is included in the total German expulsion dead of 2 million which is often cited in historical literature. 355,000 Germans remained in Romania in 1977. During the 1980s, many began to leave, with over 160,000 leaving in 1989 alone. By 2002, the number of ethnic Germans in Romania was 60,000. Soviet Union and annexed territories The Baltic, Bessarabian and ethnic Germans in areas that became Soviet-controlled following the Molotov–Ribbentrop Pact of 1939 were resettled to Nazi Germany, including annexed areas like Warthegau, during the Nazi-Soviet population exchange. Only a few returned to their former homes when Germany invaded the Soviet Union and temporarily gained control of those areas. These returnees were employed by the Nazi occupation forces to establish a link between the German administration and the local population. Those resettled elsewhere shared the fate of the other Germans in their resettlement area. 
The ethnic German minority in the USSR was considered a security risk by the Soviet government, and they were deported during the war in order to prevent their possible collaboration with the Nazi invaders. In August 1941 the Soviet government ordered ethnic Germans to be deported from the European USSR, by early 1942, 1,031,300 Germans were interned in "special settlements" in Central Asia and Siberia. Life in the special settlements was harsh and severe, food was limited, and the deported population was governed by strict regulations. Shortages of food plagued the whole Soviet Union and especially the special settlements. According to data from the Soviet archives, by October 1945, 687,300 Germans remained alive in the special settlements; an additional 316,600 Soviet Germans served as labour conscripts during World War II. Soviet Germans were not accepted in the regular armed forces but were employed instead as conscript labour. The labor army members were arranged into worker battalions that followed camp-like regulations and received Gulag rations. In 1945 the USSR deported to the special settlements 203,796 Soviet ethnic Germans who had been previously resettled by Germany in Poland. These post-war deportees increased the German population in the special settlements to 1,035,701 by 1949. According to J. Otto Pohl, 65,599 Germans perished in the special settlements. He believes that an additional 176,352 unaccounted for people "probably died in the labor army". Under Stalin, Soviet Germans continued to be confined to the special settlements under strict supervision, in 1955 they were rehabilitated but were not allowed to return to the European USSR. The Soviet-German population grew despite deportations and forced labor during the war; in the 1939 Soviet census the German population was 1.427 million. By 1959 it had increased to 1.619 million. The calculations of the West German researcher Gerhard Reichling do not agree to the figures from the Soviet archives. According to Reichling a total of 980,000 Soviet ethnic Germans were deported during the war; he estimated that 310,000 died in forced labour. During the early months of the invasion of the USSR in 1941 the Germans occupied the western regions of the USSR that had German settlements. A total of 370,000 ethnic Germans from the USSR were deported to Poland by Germany during the war. In 1945 the Soviets found 280,000 of these resettlers in Soviet-held territory and returned them to the USSR; 90,000 became refugees in Germany after the war. Those ethnic Germans who remained in the 1939 borders of the Soviet Union occupied by Nazi Germany in 1941 remained where they were until 1943, when the Red Army liberated Soviet territory and the Wehrmacht withdrew westward. From January 1943, most of these ethnic Germans moved in treks to the Warthegau or to Silesia, where they were to settle. Between 250,000 and 320,000 had reached Nazi Germany by the end of 1944. On their arrival, they were placed in camps and underwent 'racial evaluation' by the Nazi authorities, who dispersed those deemed 'racially valuable' as farm workers in the annexed provinces, while those deemed to be of "questionable racial value" were sent to work in Germany. The Red Army captured these areas in early 1945, and 200,000 Soviet Germans had not yet been evacuated by the Nazi authorities, who were still occupied with their 'racial evaluation'. They were regarded by the USSR as Soviet citizens and repatriated to camps and special settlements in the Soviet Union. 
70,000 to 80,000 who found themselves in the Soviet occupation zone after the war were also returned to the USSR, based on an agreement with the Western Allies. The death toll during their capture and transportation was estimated at 15–30%, and many families were torn apart. The special "German settlements" in the post-war Soviet Union were controlled by the Internal Affairs Commissioner, and the inhabitants had to perform forced labor until the end of 1955. They were released from the special settlements by an amnesty decree of 13 September 1955, and the Nazi collaboration charge was revoked by a decree of 23 August 1964. They were not allowed to return to their former homes and remained in the eastern regions of the USSR, and no individual's former property was restored. Since the 1980s, the Soviet and Russian governments have allowed ethnic Germans to emigrate to Germany. Different situations emerged in northern East Prussia regarding Königsberg (renamed Kaliningrad) and the adjacent Memel territory around Memel (Klaipėda). The Königsberg area of East Prussia was annexed by the Soviet Union, becoming an exclave of the Russian Soviet Republic. Memel was integrated into the Lithuanian Soviet Republic. Many Germans were evacuated from East Prussia and the Memel territory by Nazi authorities during Operation Hannibal or fled in panic as the Red Army approached. The remaining Germans were conscripted for forced labor. Ethnic Russians and the families of military staff were settled in the area. In June 1946, 114,070 Germans and 41,029 Soviet citizens were registered as living in the Kaliningrad Oblast, with an unknown number of unregistered Germans ignored. Between June 1945 and 1947, roughly half a million Germans were expelled. Between 24 August and 26 October 1948, 21 transports with a total of 42,094 Germans left the Kaliningrad Oblast for the Soviet Occupation Zone. The last remaining Germans were expelled between November 1949 (1,401 people) and January 1950 (7). Thousands of German children, called the "wolf children", had been left orphaned and unattended or died with their parents during the harsh winter without food. Between 1945 and 1947, around 600,000 Soviet citizens settled in the oblast. Yugoslavia Before World War II, roughly 500,000 German-speaking people (mostly Danube Swabians) lived in the Kingdom of Yugoslavia. Most fled during the war or emigrated after 1950 thanks to the Displaced Persons Act of 1948; some were able to emigrate to the United States. During the final months of World War II a majority of the ethnic Germans fled Yugoslavia with the retreating Nazi forces. After the liberation, Yugoslav Partisans exacted revenge on ethnic Germans for the wartime atrocities of Nazi Germany, in which many ethnic Germans had participated, especially in the Banat area of the Territory of the Military Commander in Serbia. The approximately 200,000 ethnic Germans remaining in Yugoslavia suffered persecution and sustained personal and economic losses. About 7,000 were killed as local populations and partisans took revenge for German wartime atrocities. From 1945 to 1948 ethnic Germans were held in labour camps where about 50,000 perished. Those surviving were allowed to emigrate to Germany after 1948. According to West German figures in late 1944 the Soviets transported 27,000 to 30,000 ethnic Germans, a majority of whom were women aged 18 to 35, to Ukraine and the Donbas for forced labour; about 20% (5,683) were reported dead or missing. 
Data from Russian archives published in 2001, based on an actual enumeration, put the number of German civilians deported from Yugoslavia to the USSR in early 1945 for reparation labour at 12,579, where 16% (1,994) died. After March 1945, a second phase began in which ethnic Germans were massed into villages such as Gakowa and Kruševlje that were converted into labour camps. All furniture was removed, straw placed on the floor, and the expellees housed like animals under military guard, with minimal food and rampant, untreated disease. Families were divided into the unfit women, old, and children, and those fit for slave labour. A total of 166,970 ethnic Germans were interned, and 48,447 (29%) perished. The camp system was shut down in March 1948. In Slovenia, the ethnic German population at the end of World War II was concentrated in Slovenian Styria, more precisely in Maribor, Celje, and a few other smaller towns (like Ptuj and Dravograd), and in the rural area around Apače on the Austrian border. The second-largest ethnic German community in Slovenia was the predominantly rural Gottschee County around Kočevje in Lower Carniola, south of Ljubljana. Smaller numbers of ethnic Germans also lived in Ljubljana and in some western villages in the Prekmurje region. In 1931, the total number of ethnic Germans in Slovenia was around 28,000: around half of them lived in Styria and in Prekmurje, while the other half lived in the Gottschee County and in Ljubljana. In April 1941, southern Slovenia was occupied by Italian troops. By early 1942, ethnic Germans from Gottschee/Kočevje were forcefully transferred to German-occupied Styria by the new German authorities. Most resettled to the Posavje region (a territory along the Sava river between the towns of Brežice and Litija), from where around 50,000 Slovenes had been expelled. Gottschee Germans were generally unhappy about their forced transfer from their historical home region. One reason was that the agricultural value of their new area of settlement was perceived as much lower than the Gottschee area. As German forces retreated before the Yugoslav Partisans, most ethnic Germans fled with them in fear of reprisals. By May 1945, only a few Germans remained, mostly in the Styrian towns of Maribor and Celje. The Liberation Front of the Slovenian People expelled most of the remainder after it seized complete control in the region in May 1945. The Yugoslavs set up internment camps at Sterntal and Teharje. The government nationalized their property on a "decision on the transition of enemy property into state ownership, on state administration over the property of absent people, and on sequestration of property forcibly appropriated by occupation authorities" of 21 November 1944 by the Presidency of the Anti-Fascist Council for the People's Liberation of Yugoslavia. After March 1945, ethnic Germans were placed in so-called "village camps". Separate camps existed for those able to work and for those who were not. In the latter camps, containing mainly children and the elderly, the mortality rate was about 50%. Most of the children under 14 were then placed in state-run homes, where conditions were better, though the German language was banned. These children were later given to Yugoslav families, and not all German parents seeking to reclaim their children in the 1950s were successful. West German government figures from 1958 put the death toll at 135,800 civilians. 
A recent study published by the ethnic Germans of Yugoslavia based on an actual enumeration has revised the death toll down to about 58,000. A total of 48,447 people had died in the camps; 7,199 were shot by partisans, and another 1,994 perished in Soviet labour camps. Those Germans still considered Yugoslav citizens were employed in industry or the military, but could buy themselves free of Yugoslav citizenship for the equivalent of three months' salary. By 1950, 150,000 of the Germans from Yugoslavia were classified as "expelled" in Germany, another 150,000 in Austria, 10,000 in the United States, and 3,000 in France. According to West German figures 82,000 ethnic Germans remained in Yugoslavia in 1950. After 1950, most emigrated to Germany or were assimilated into the local population. Kehl, Germany The population of Kehl (12,000 people), on the east bank of the Rhine opposite Strasbourg, fled and was evacuated in the course of the Liberation of France, on 23 November 1944. The French Army occupied the town in March 1945 and prevented the inhabitants from returning until 1953. Latin America Fearing a Nazi Fifth Column, between 1941 and 1945 the US government facilitated the expulsion of 4,058 German citizens from 15 Latin American countries to internment camps in Texas and Louisiana. Subsequent investigations showed many of the internees to be harmless, and three-quarters of them were returned to Germany during the war in exchange for citizens of the Americas, while the remainder returned to their homes in Latin America. Palestine At the start of World War II, colonists with German citizenship were rounded up by the British authorities and sent to internment camps in Bethlehem in Galilee. 661 Templers were deported to Australia via Egypt on 31 July 1941, leaving 345 in Palestine. Internment continued in Tatura, Victoria, Australia, until 1946–47. In 1962 the State of Israel paid 54 million Deutsche Marks in compensation to property owners whose assets were nationalized. Human losses Estimates of total deaths of German civilians in the flight and expulsions, including forced labour of Germans in the Soviet Union, range from 500,000 to a maximum of 3 million people. Although the German government's official estimate of deaths has stood at 2 million since the 1960s, the publication in 1987–89 of previously classified West German studies has led some historians to the conclusion that the actual number was much lower—in the range of 500,000–600,000. English-language sources have put the death toll at 2–3 million based on West German government figures from the 1960s. West German government estimates of the death toll In 1950 the West German Government made a preliminary estimate of 3.0 million missing people (1.5 million in prewar Germany and 1.5 million in Eastern Europe) whose fate needed to be clarified. These figures were superseded by the publication of the 1958 study by the Statistisches Bundesamt. In 1953 the West German government ordered a survey by the Suchdienst (search service) of the German churches to trace the fate of 16.2 million people in the area of the expulsions; the survey was completed in 1964 but kept secret until 1987. The search service was able to confirm 473,013 civilian deaths; there were an additional 1,905,991 cases of persons whose fate could not be determined. From 1954 to 1961 the Schieder commission issued five reports on the flight and expulsions. 
The head of the commission, Theodor Schieder, was a rehabilitated former Nazi party member who was involved in the preparation of the Nazi plans to colonize eastern Europe. The commission estimated a total death toll of about 2.3 million civilians, including 2 million east of the Oder–Neisse line. The figures of the Schieder commission were superseded by the publication in 1958 of the study by the West German government Statistisches Bundesamt, Die deutschen Vertreibungsverluste (The German Expulsion Casualties). The authors of the report included former Nazi party members Wilfried Krallert, Walter Kuhn and Alfred Bohmann. The Statistisches Bundesamt put losses at 2,225,000 (1,339,000 in prewar Germany and 886,000 in Eastern Europe). In 1961 the West German government published slightly revised figures that put losses at 2,111,000 (1,225,000 in prewar Germany and 886,000 in Eastern Europe). In 1969, the federal West German government ordered a further study to be conducted by the German Federal Archives, which was finished in 1974 and kept secret until 1989. The study was commissioned to survey crimes against humanity such as deliberate killings, which according to the report included deaths caused by military activity in the 1944–45 campaign, forced labor in the USSR and civilians kept in post-war internment camps. The authors maintained that the figures included only those deaths caused by violent acts and inhumanities (Unmenschlichkeiten) and do not include post-war deaths due to malnutrition and disease. Also not included are those who were raped or suffered mistreatment and did not die immediately. They estimated 600,000 deaths (150,000 during flight and evacuations, 200,000 as forced labour in the USSR and 250,000 in post-war internment camps); by region, 400,000 east of the Oder–Neisse line, 130,000 in Czechoslovakia and 80,000 in Yugoslavia. No figures were given for Romania and Hungary. A 1986 study by Gerhard Reichling, "Die deutschen Vertriebenen in Zahlen" (the German expellees in figures), concluded that 2,020,000 ethnic Germans perished after the war, including 1,440,000 as a result of the expulsions and 580,000 deaths due to deportation as forced labourers in the Soviet Union. Reichling was an employee of the Federal Statistical Office who had been involved in the study of German expulsion statistics since 1953. The Reichling study is cited by the German government to support its estimate of 2 million expulsion deaths. Discourse The West German figure of 2 million deaths in the flight and expulsions was widely accepted by historians in the West prior to the fall of communism in Eastern Europe and the end of the Cold War. The recent disclosure of the German Federal Archives study and the Search Service figures has caused some scholars in Germany and Poland to question the validity of the figure of 2 million deaths; they estimate the actual total at 500,000–600,000. The German government continues to maintain that the figure of 2 million deaths is correct. The issue of the "expellees" has been a contentious one in German politics, with the Federation of Expellees staunchly defending the higher figure. Analysis by Rüdiger Overmans In 2000 the German historian Rüdiger Overmans published a study of German military casualties; his research project did not investigate civilian expulsion deaths. In 1994, Overmans provided a critical analysis of the previous studies by the German government, which he believes are unreliable. 
Overmans maintains that the studies of expulsion deaths by the German government lack adequate support; he argues that there are more arguments for the lower figures than for the higher figures. In a 2006 interview, Overmans maintained that new research is needed to clarify the fate of those reported as missing. He found the 1965 figures of the Search Service to be unreliable because they include non-Germans; the figures according to Overmans include military deaths; the numbers of surviving people, natural deaths and births after the war in Eastern Europe are unreliable because the Communist governments in Eastern Europe did not extend full cooperation to West German efforts to trace people in Eastern Europe; and the reports given by eyewitnesses surveyed are not reliable in all cases. In particular, Overmans maintains that the figure of 1.9 million missing people was based on incomplete information and is unreliable. Overmans found the 1958 demographic study to be unreliable because it inflated the figures of ethnic German deaths by including missing people of doubtful German ethnic identity who survived the war in Eastern Europe; the figure for military deaths is understated; and the numbers of surviving people, natural deaths and births after the war in Eastern Europe are unreliable because the Communist governments in Eastern Europe did not extend full cooperation to West German efforts to trace people in Eastern Europe. Overmans maintains that the figure of 600,000 deaths found by the German Federal Archives in 1974 is only a rough estimate of those killed, not a definitive figure. He pointed out that some deaths were not reported because there were no surviving eyewitnesses of the events; also, there was no estimate of losses in Hungary, Romania and the USSR. Overmans conducted a research project that studied the casualties of the German military during the war and found that the previous estimate of 4.3 million dead and missing, especially in the final stages of the war, was about one million short of the actual toll. In his study Overmans researched only military deaths; his project did not investigate civilian expulsion deaths; he merely noted the difference between the 2.2 million dead estimated in the 1958 demographic study and the roughly 500,000 deaths that have so far been verified. He found that German military deaths from areas in Eastern Europe were about 1.444 million, and thus 334,000 higher than the 1.1 million figure in the 1958 demographic study, which, lacking documents available today, had counted these deaths among the civilian losses. Overmans believes this will reduce the number of civilian deaths in the expulsions. Overmans further pointed out that the 2.225 million number estimated by the 1958 study would imply that the casualty rate among the expellees was equal to or higher than that of the military, which he found implausible. Analysis by historian Ingo Haar In 2006, Haar called into question the validity of the official government figure of 2 million expulsion deaths in an article in the German newspaper Süddeutsche Zeitung. Since then Haar has published three articles in academic journals that covered the background of the research by the West German government on the expulsions. Haar maintains that all reasonable estimates of deaths from expulsions lie between around 500,000 and 600,000, based on the information of the Red Cross Search Service and the German Federal Archives. 
Harr pointed out that some members of the Schieder commission and officials of the Statistisches Bundesamt involved in the study of the expulsions were involved in the Nazi plan to colonize Eastern Europe. Haar posits that figures have been inflated in Germany due to the Cold War and domestic German politics, and he maintains that the 2.225 million number relies on improper statistical methodology and incomplete data, particularly in regard to the expellees who arrived in East Germany. Haar questions the validity of population balances in general. He maintains that 27,000 German Jews who were Nazi victims are included in the West German figures. He rejects the statement by the German government that the figure of 500–600,000 deaths omitted those people who died of disease and hunger, and has stated that this is a "mistaken interpretation" of the data. He maintains that deaths due to disease, hunger and other conditions are already included in the lower numbers. According to Haar the numbers were set too high for decades, for postwar political reasons. Studies in Poland In 2001, Polish researcher Bernadetta Nitschke puts total losses for Poland at 400,000 (the same figure as the German Federal Archive study). She noted that historians in Poland have maintained that most of the deaths occurred during the flight and evacuation during the war, the deportations to the USSR for forced labour and, after the resettlement, due to the harsh conditions in the Soviet occupation zone in postwar Germany. Polish demographer Piotr Eberhardt found that, "Generally speaking, the German estimates... are not only highly arbitrary, but also clearly tendentious in presentation of the German losses." He maintains that the German government figures from 1958 overstated the total number of the ethnic Germans living in Poland prior to the war as well as the total civilian deaths due to the expulsions. For example, Eberhardt points out that "the total number of Germans in Poland is given as equal to 1,371,000. According to the Polish census of 1931, there were altogether only 741,000 Germans in the entire territory of Poland." Study by Hans Henning Hahn and Eva Hahn German historians Hans Henning Hahn and Eva Hahn published a detailed study of the flight and expulsions that is sharply critical of German accounts of the Cold War era. The Hahns regard the official German figure of 2 million deaths as an historical myth, lacking foundation. They place the ultimate blame for the mass flight and expulsion on the wartime policy of the Nazis in Eastern Europe. The Hahns maintain that most of the reported 473,013 deaths occurred during the Nazi organized flight and evacuation during the war, and the forced labor of Germans in the Soviet Union; they point out that there are 80,522 confirmed deaths in the postwar internment camps. They put the postwar losses in eastern Europe at a fraction of the total losses: Poland –15,000 deaths from 1945 to 1949 in internment camps; Czechoslovakia – 15,000–30,000 dead, including 4,000–5,000 in internment camps and ca. 15,000 in the Prague uprising; Yugoslavia – 5,777 deliberate killings and 48,027 deaths in internment camps; Denmark – 17,209 dead in internment camps; Hungary and Romania – no postwar losses reported. The Hahns point out that the official 1958 figure of 273,000 deaths for Czechoslovakia was prepared by Alfred Bohmann, a former Nazi Party member who had served in the wartime SS. 
Bohmann was a journalist for an ultra-nationalist Sudeten-Deutsch newspaper in postwar West Germany. The Hahns believe the population figures of ethnic Germans for eastern Europe include German-speaking Jews killed in the Holocaust. They believe that the fate of German-speaking Jews in Eastern Europe deserves the attention of German historians. ("Deutsche Vertreibungshistoriker haben sich mit der Geschichte der jüdischen Angehörigen der deutschen Minderheiten kaum beschäftigt.") German and Czech commission of historians In 1995, research by a joint German and Czech commission of historians found that the previous demographic estimates of 220,000 to 270,000 deaths in Czechoslovakia to be overstated and based on faulty information. They concluded that the death toll was at least 15,000 people and that it could range up to a maximum of 30,000 dead, assuming that not all deaths were reported. Rebuttal by the German government The German government maintains that the figure of 2–2.5 million expulsion deaths is correct. In 2005 the German Red Cross Search Service put the death toll at 2,251,500 but did not provide details for this estimate. On 29 November 2006, State Secretary in the German Federal Ministry of the Interior, Christoph Bergner, outlined the stance of the respective governmental institutions on Deutschlandfunk (a public-broadcasting radio station in Germany) saying that the numbers presented by the German government and others are not contradictory to the numbers cited by Haar and that the below 600,000 estimate comprises the deaths directly caused by atrocities during the expulsion measures and thus only includes people who were raped, beaten, or else killed on the spot, while the above two million estimate includes people who on their way to postwar Germany died of epidemics, hunger, cold, air raids and the like. Schwarzbuch der Vertreibung by Heinz Nawratil A German lawyer, Heinz Nawratil, published a study of the expulsions entitled Schwarzbuch der Vertreibung ("Black Book of Expulsion"). Nawratil claimed the death toll was 2.8 million: he includes the losses of 2.2 million listed in the 1958 West German study, and an estimated 250,000 deaths of Germans resettled in Poland during the war, plus 350,000 ethnic Germans in the USSR. In 1987, German historian Martin Broszat (former head of the Institute of Contemporary History in Munich) described Nawratil's writings as "polemics with a nationalist-rightist point of view and exaggerates in an absurd manner the scale of 'expulsion crimes'." Broszat found Nawratil's book to have "factual errors taken out of context." German historian Thomas E. Fischer calls the book "problematic". James Bjork (Department of History, King's College London) has criticized German educational DVDs based on Nawratil's book. Condition of the expellees after arriving in post-war Germany Those who arrived were in bad conditionparticularly during the harsh winter of 1945–46, when arriving trains carried "the dead and dying in each carriage (other dead had been thrown from the train along the way)". After experiencing Red Army atrocities, Germans in the expulsion areas were subject to harsh punitive measures by Yugoslav partisans and in post-war Poland and Czechoslovakia. Beatings, rapes and murders accompanied the expulsions. 
Some had experienced massacres, such as the Ústí (Aussig) massacre, in which 80–100 ethnic Germans died, or Postoloprty massacre, or conditions like those in the Upper Silesian Camp Łambinowice (Lamsdorf), where interned Germans were exposed to sadistic practices and at least 1,000 died. Many expellees had experienced hunger and disease, separation from family members, loss of civil rights and familiar environment, and sometimes internment and forced labour. Once they arrived, they found themselves in a country devastated by war. Housing shortages lasted until the 1960s, which along with other shortages led to conflicts with the local population. The situation eased only with the West German economic boom in the 1950s that drove unemployment rates close to zero. France did not participate in the Potsdam Conference, so it felt free to approve some of the Potsdam Agreements and dismiss others. France maintained the position that it had not approved the expulsions and therefore was not responsible for accommodating and nourishing the destitute expellees in its zone of occupation. While the French military government provided for the few refugees who arrived before July 1945 in the area that became the French zone, it succeeded in preventing entrance by later-arriving ethnic Germans deported from the East. Britain and the US protested against the actions of the French military government but had no means to force France to bear the consequences of the expulsion policy agreed upon by American, British and Soviet leaders in Potsdam. France persevered with its argument to clearly differentiate between war-related refugees and post-war expellees. In December 1946 it absorbed into its zone German refugees from Denmark, where 250,000 Germans had traveled by sea between February and May 1945 to take refuge from the Soviets. These were refugees from the eastern parts of Germany, not expellees; Danes of German ethnicity remained untouched and Denmark did not expel them. With this humanitarian act the French saved many lives, due to the high death toll German refugees faced in Denmark. Until mid-1945, the Allies had not reached an agreement on how to deal with the expellees. France suggested immigration to South America and Australia and the settlement of 'productive elements' in France, while the Soviets' SMAD suggested a resettlement of millions of expellees in Mecklenburg-Vorpommern. The Soviets, who encouraged and partly carried out the expulsions, offered little cooperation with humanitarian efforts, thereby requiring the Americans and British to absorb the expellees in their zones of occupation. In contradiction with the Potsdam Agreements, the Soviets neglected their obligation to provide supplies for the expellees. In Potsdam, it was agreed that 15% of all equipment dismantled in the Western zones—especially from the metallurgical, chemical and machine manufacturing industries—would be transferred to the Soviets in return for food, coal, potash (a basic material for fertiliser), timber, clay products, petroleum products, etc. The Western deliveries started in 1946, but this turned out to be a one-way street. The Soviet deliveries—desperately needed to provide the expellees with food, warmth, and basic necessities and to increase agricultural production in the remaining cultivation area—did not materialize. Consequently, the US stopped all deliveries on 3 May 1946, while the expellees from the areas under Soviet rule were deported to the West until the end of 1947. 
In the British and US zones the supply situation worsened considerably, especially in the British zone. Due to its location on the Baltic, the British zone already harbored a great number of refugees who had come by sea, and the already modest rations had to be further shortened by a third in March 1946. In Hamburg, for instance, the average living space per capita, reduced by air raids from in 1939 to 8.3 in 1945, was further reduced to in 1949 by billeting refugees and expellees. In May 1947, Hamburg trade unions organized a strike against the small rations, with protesters complaining about the rapid absorption of expellees. The US and Britain had to import food into their zones, even as Britain was financially exhausted and dependent on food imports having fought Nazi Germany for the entire war, including as the sole opponent from June 1940 to June 1941 (the period when Poland and France were defeated, the Soviet Union supported Nazi Germany, and the United States had not yet entered the war). Consequently, Britain had to incur additional debt to the US, and the US had to spend more for the survival of its zone, while the Soviets gained applause among Eastern Europeans—many of whom were impoverished by the war and German occupation—who plundered the belongings of expellees, often before they were actually expelled. Since the Soviet Union was the only power among the Allies that allowed and/or encouraged the looting and robbery in the area under its military influence, the perpetrators and profiteers blundered into a situation in which they became dependent on the perpetuation of Soviet rule in their countries to not be dispossessed of the booty and to stay unpunished. With ever more expellees sweeping into post-war Germany, the Allies moved towards a policy of assimilation, which was believed to be the best way to stabilise Germany and ensure peace in Europe by preventing the creation of a marginalised population. This policy led to the granting of German citizenship to the ethnic German expellees who had held citizenship of Poland, Czechoslovakia, Hungary, Yugoslavia, Romania, etc. before World War II. This effort was led by the Sonne Commission, a 14-member body consisting of nine Americans and five Germans within the Economic Cooperation Administration which was tasked with devising strategies to solve the refugee crisis. When the Federal Republic of Germany was founded, a law was drafted on 24 August 1952 that was primarily intended to ease the financial situation of the expellees. The law, termed the Lastenausgleichsgesetz, granted partial compensation and easy credit to the expellees; the loss of their civilian property had been estimated at 299.6 billion Deutschmarks (out of a total loss of German property due to the border changes and expulsions of 355.3 billion Deutschmarks). Administrative organisations were set up to integrate the expellees into post-war German society. While the Stalinist regime in the Soviet occupation zone did not allow the expellees to organise, in the Western zones expellees over time established a variety of organizations, including the All-German Bloc/League of Expellees and Deprived of Rights. The most prominent—still active today—is the Federation of Expellees (Bund der Vertriebenen, or BdV). "War children" of German ancestry in Western and Northern Europe In countries occupied by Nazi Germany during the war, sexual relations between Wehrmacht soldiers and local women resulted in the birth of significant numbers of children. 
Relationships between German soldiers and local women were particularly common in countries whose population was not dubbed "inferior" (Untermensch) by the Nazis. After the Wehrmacht's withdrawal, these women and their children of German descent were often ill-treated. Legacy of the expulsions With at least 12 million Germans directly involved, possibly 14 million or more, it was the largest movement or transfer of any single ethnic population in European history and the largest among the post-war expulsions in Central and Eastern Europe (which displaced 20 to 31 million people in total). The exact number of Germans expelled after the war is still unknown, because most recent research provides a combined estimate which includes those who were evacuated by the German authorities, fled or were killed during the war. It is estimated that between 12 and 14 million German citizens and foreign ethnic Germans and their descendants were displaced from their homes. The exact number of casualties is still unknown and is difficult to establish due to the chaotic nature of the last months of the war. Census figures placed the total number of ethnic Germans still living in Eastern Europe in 1950, after the major expulsions were complete, at approximately 2.6 million, about 12 percent of the pre-war total. The events have been usually classified as population transfer, or as ethnic cleansing. R.J. Rummel has classified these events as democide, and a few scholars go as far as calling it a genocide. Polish sociologist and philosopher Lech Nijakowski objects to the term "genocide" as inaccurate agitprop. The expulsions created major social disruptions in the receiving territories, which were tasked with providing housing and employment for millions of refugees. West Germany established a ministry dedicated to the problem, and several laws created a legal framework. The expellees established several organisations, some demanding compensation. Their grievances, while remaining controversial, were incorporated into public discourse. During 1945 the British press aired concerns over the refugees' situation; this was followed by limited discussion of the issue during the Cold War outside West Germany. East Germany sought to avoid alienating the Soviet Union and its neighbours; the Polish and Czechoslovakian governments characterised the expulsions as "a just punishment for Nazi crimes". Western analysts were inclined to see the Soviet Union and its satellites as a single entity, disregarding the national disputes that had preceded the Cold War. The fall of the Soviet Union and the reunification of Germany opened the door to a renewed examination of the expulsions in both scholarly and political circles. A factor in the ongoing nature of the dispute may be the relatively large proportion of German citizens who were among the expellees and/or their descendants, estimated at 20% in 2000. A 1993 novel, Summer of Dead Dreams, written by Harry Thürk—a German author who left Upper Silesia annexed by Poland shortly after the war had ended—contained graphic depictions of the treatment of Germans by Soviets and Poles in Thürk's hometown of Prudnik. It depicted the maltreatment of Germans while also acknowledging German guilt, as well as Polish animosity toward Germans and, in specific instances, friendships between Poles and Germans despite the circumstances. 
Thürk's novel, when serialized in Polish translation by the Tygodnik Prudnicki ("Prudnik Weekly") magazine, was met with criticism from some Polish residents of Prudnik, but also with praise, because it revealed to many local citizens that there had been a post-war German ghetto in the town and addressed the tensions between Poles and Soviets in post-war Poland. The serialization was followed by an exhibition on Thurk's life in Prudnik's town museum. Status in international law International law on population transfer underwent considerable evolution during the 20th century. Before World War II, several major population transfers were the result of bilateral treaties and had the support of international bodies such as the League of Nations. The tide started to turn when the charter of the Nuremberg trials of German Nazi leaders declared forced deportation of civilian populations to be both a war crime and a crime against humanity, and this opinion was progressively adopted and extended through the remainder of the century. Underlying the change was the trend to assign rights to individuals, thereby limiting the rights of nation-states to impose fiats which could adversely affect such individuals. The Charter of the then-newly formed United Nations stated that its Security Council could take no enforcement actions regarding measures taken against World War II "enemy states", defined as enemies of a Charter signatory in WWII. The Charter did not preclude action in relation to such enemies "taken or authorized as a result of that war by the Governments having responsibility for such action." Thus, the Charter did not invalidate or preclude action against World War II enemies following the war. This argument is contested by Alfred de Zayas, an American professor of international law. ICRC's legal adviser Jean-Marie Henckaerts posited that the contemporary expulsions conducted by the Allies of World War II themselves were the reason why expulsion issues were included neither in the UN Declaration of Human Rights of 1948, nor in the European Convention on Human Rights in 1950, and says it "may be called 'a tragic anomaly' that while deportations were outlawed at Nuremberg they were used by the same powers as a 'peacetime measure'". It was only in 1955 that the Settlement Convention regulated expulsions, yet only in respect to expulsions of individuals of the states who signed the convention. The first international treaty condemning mass expulsions was a document issued by the Council of Europe on 16 September 1963, Protocol No 4 to the Convention for the Protection of Human Rights and Fundamental Freedoms Securing Certain Rights and Freedoms Other than Those Already Included in the Convention and in the First Protocol, stating in Article 4: "collective expulsion of aliens is prohibited." This protocol entered into force on 2 May 1968, and as of 1995 was ratified by 19 states. There is now general consensus about the legal status of involuntary population transfers: "Where population transfers used to be accepted as a means to settle ethnic conflict, today, forced population transfers are considered violations of international law." No legal distinction is made between one-way and two-way transfers, since the rights of each individual are regarded as independent of the experience of others. 
Although the signatories to the Potsdam Agreements and the expelling countries may have considered the expulsions to be legal under international law at the time, there are historians and scholars in international law and human rights who argue that the expulsions of Germans from Central and Eastern Europe should now be considered as episodes of ethnic cleansing, and thus a violation of human rights. For example, Timothy V. Waters argues in "On the Legal Construction of Ethnic Cleansing" that if similar circumstances arise in the future, the precedent of the expulsions of the Germans without legal redress would also allow the future ethnic cleansing of other populations under international law. In the 1970s and 1980s, a Harvard-trained lawyer and historian, Alfred de Zayas, published Nemesis at Potsdam and A Terrible Revenge, both of which became bestsellers in Germany. De Zayas argues that the expulsions were war crimes and crimes against humanity even in the context of international law of the time, stating, "the only applicable principles were the Hague Conventions, in particular, the Hague Regulations, Articles 42–56, which limited the rights of occupying powers—and obviously occupying powers have no rights to expel the populations—so there was the clear violation of the Hague Regulations." He argued that the expulsions violated the Nuremberg Principles. In November 2000, a major conference on ethnic cleansing in the 20th century was held at Duquesne University in Pittsburgh, along with the publication of a book containing participants' conclusions. The former United Nations High Commissioner for Human Rights José Ayala Lasso of Ecuador endorsed the establishment of the Centre Against Expulsions in Berlin. José Ayala Lasso recognized the "expellees" as victims of gross violations of human rights. De Zayas, a member of the advisory board of the Centre Against Expulsions, endorses the full participation of the organisation representing the expellees, the Bund der Vertriebenen (Federation of Expellees), in the Centre in Berlin. Berlin Centre A Centre Against Expulsions was to be set up in Berlin by the German government based on an initiative and with active participation of the German Federation of Expellees. The centre's creation has been criticized in Poland. It was strongly opposed by the Polish government and president Lech Kaczyński. Former Polish prime minister Donald Tusk restricted his comments to a recommendation that Germany pursue a neutral approach at the museum. The museum apparently did not materialize. The only project along the same lines in Germany is "Visual Sign" (Sichtbares Zeichen) under the auspices of the Stiftung Flucht, Vertreibung, Versöhnung (SFVV). Several members of two consecutive international Advisory (scholar) Councils criticised some activities of the foundation and the new Director Winfried Halder resigned. Dr Gundula Bavendamm is a current Director. Historiography British historian Richard J. Evans wrote that although the expulsions of ethnic Germans from Eastern Europe was done in an extremely brutal manner that could not be defended, the basic aim of expelling the ethnic German population of Poland and Czechoslovakia was justified by the subversive role played by the German minorities before World War II. 
Evans wrote that under the Weimar Republic the vast majority of ethnic Germans in Poland and Czechoslovakia made it clear that they were not loyal to the states they happened to live under, and under Nazi rule, the German minorities in Eastern Europe were willing tools of German foreign policy. Evans also wrote that many areas of eastern Europe featured a jumble of various ethnic groups aside from Germans, and that it was the destructive role played by ethnic Germans as instruments of Nazi Germany that led to their expulsion after the war. Evans concluded by positing that the expulsions were justified as they put an end to a major problem that plagued Europe before the war; that gains to the cause of peace were a further benefit of the expulsions; and that if the Germans had been allowed to remain in Eastern Europe after the war, West Germany would have used their presence to make territorial claims against Poland and Czechoslovakia, and that given the Cold War, this could have helped cause World War III. Historian Gerhard Weinberg wrote that the expulsions of the Sudeten Germans was justified as the Germans themselves had scrapped the Munich Agreement. Political issues In January 1990, the president of Czechoslovakia, Václav Havel, requested forgiveness on his country's behalf, using the term expulsion rather than transfer. Public approval for Havel's stance was limited; in a 1996 opinion poll, 86% of Czechs stated they would not support a party that endorsed such an apology. The expulsion issue surfaced in 2002 during the Czech Republic's application for membership in the European Union, since the authorisation decrees issued by Edvard Beneš had not been formally renounced. In October 2009, Czech president Václav Klaus stated that the Czech Republic would require exemption from the European Charter of Fundamental Rights to ensure that the descendants of expelled Germans could not press legal claims against the Czech Republic. Five years later, in 2014, the government of Prime Minister Bohuslav Sobotka decided that the exemption was "no longer relevant" and that the withdrawal of the opt-out "would help improve Prague's position with regard to other EU international agreements." On 20 June 2018, which was World Refugee Day, German Chancellor Angela Merkel said that there had been "no moral or political justification" for the post-war expulsion of ethnic Germans. Misuse of graphical materials Nazi propaganda pictures produced during the Heim ins Reich and pictures of expelled Poles are sometimes published to show the flight and expulsion of Germans. See also Dutch annexation of German territory after World War II Expulsion of Poles by Germany Expulsion of Poles by Nazi Germany German reparations for World War II Integration of immigrants Internment of German Americans Istrian-Dalmatian exodus Operation Paperclip Persecution of Germans Population transfer in the Soviet Union Pursuit of Nazi collaborators Treaty of Zgorzelec Victor Gollancz War crimes in occupied Poland during World War II World War II evacuation and expulsion Deportation of Germans from Latin America during World War II Displaced persons camps in post–World War II Europe References Sources Baziur, Grzegorz. Armia Czerwona na Pomorzu Gdańskim 1945–1947 [Red Army Gdańsk Pomerania 1945–1947], Warsaw: Instytut Pamięci Narodowej, 2003; Beneš, Z., D. 
Jančík et al., Facing History: The Evolution of Czech and German Relations in the Czech Provinces, 1848–1948, Prague: Gallery; Blumenwitz, Dieter: Flucht und Vertreibung, Cologne: Carl Heymanns, 1987; Brandes, Detlef: Flucht und Vertreibung (1938–1950), European History Online, Mainz: Institute of European History, 2011, retrieved 25 February 2013. De Zayas, Alfred M.: A terrible Revenge. Palgrave Macmillan, New York, 1994. . De Zayas, Alfred M.: Nemesis at Potsdam, London, 1977; . Douglas, R.M.: Orderly and Humane. The Expulsion of the Germans after the Second World War. Yale University Press, 2012; German statistics (Statistical and graphical data illustrating German population movements in the aftermath of the Second World War published in 1966 by the West German Ministry of Refugees and Displaced Persons) Grau, Karl F. Silesian Inferno, War Crimes of the Red Army on its March into Silesia in 1945, Valley Forge, PA: The Landpost Press, 1992; Jankowiak, Stanisław. Wysiedlenie i emigracja ludności niemieckiej w polityce władz polskich w latach 1945–1970 [Expulsion and emigration of German population in the policies of Polish authorities in 1945–1970], Warsaw: Instytut Pamięci Narodowej, 2005; Naimark, Norman M. The Russians in Germany: A History of the Soviet Zone of Occupation, 1945–1949, Cambridge: Harvard University Press, 1995; Naimark, Norman M.: Fires of Hatred. Ethnic Cleansing in Twentieth–Century Europe. Cambridge: Harvard University Press, 2001; Overy, Richard. The Penguin Historical Atlas of the Third Reich, London: Penguin Books, 1996; , p. 111. Podlasek, Maria. Wypędzenie Niemców z terenów na wschód od Odry i Nysy Łużyckiej, Warsaw: Wydawnictwo Polsko-Niemieckie, 1995; Steffen Prauser, Arfon Rees (2004), The Expulsion of 'German' Communities from Eastern Europe at the end of the Second World War (PDF file, direct download), EUI Working Paper HEC No. 2004/1; Florence: European University Institute. Contributors: Steffen Prauser and Arfon Rees, Piotr Pykel, Tomasz Kamusella, Balazs Apor, Stanislav Sretenovic, Markus Wien, Tillmann Tegeler, and Luigi Cajani. Accessed 26 May 2015. Reichling, Gerhard. Die deutschen Vertriebenen in Zahlen, 1986; Truman Presidential Library: Marshall Plan Documents, trumanlibrary.org; accessed 6 December 2014. Zybura, Marek. Niemcy w Polsce [Germans in Poland], Wrocław: Wydawnictwo Dolnośląskie, 2004; . Suppan, Arnold: "Hitler – Benes – Tito". Wien 2014. Verlag der Österreichischen Akademie der Wissenschaften. Drei Bände. . External links A documentary film about the expulsion of the Germans from Hungary Timothy V. Waters, On the Legal Construction of Ethnic Cleansing, Paper 951, 2006, University of Mississippi School of Law (PDF) Interest of the United States in the transfer of German populations from Poland, Czechoslovakia, Hungary, Rumania, and Austria, Foreign relations of the United States: diplomatic papers, Volume II (1945) pp. 1227–1327 (Note: p. 1227 begins with a Czechoslovak document dated 23 November 1944, several months before Czechoslovakia was "liberated" by the Soviet Army.) (Main URL, wisc.edu) Frontiers and areas of administration. Foreign relations of the United States (the Potsdam Conference), Volume I (1945), wisc.edu History and Memory: mass expulsions and transfers 1939–1945–1949, M. Rutowska, Z. Mazur, H. Orłowski Forced Migration in Central and Eastern Europe, 1939–1950 "Unsere Heimat ist uns ein fremdes Land geworden..." Die Deutschen östlich von Oder und Neiße 1945–1950. Dokumente aus polnischen Archiven. 
Band 1: Zentrale Behörden, Wojewodschaft Allenstein Dokumentation der Vertreibung Displaced Persons Act of 1948 Flucht und Vertreibung Gallerie- Flight & Expulsion Gallery Deutsche Vertriebenen – German Expulsions (Histories & Documentation) 1940s in Germany 1950 in Germany Germans Forced migration in the Soviet Union Sudetenland Aftermath of World War II in Germany Aftermath of World War II in Poland Aftermath of World War II in the Soviet Union German diaspora in Europe German diaspora in Poland Germany–Soviet Union relations Czechoslovakia–Germany relations Estonia–Germany relations Germany–Latvia relations German occupation of Lithuania during World War II Ethnic cleansing of Germans Ethnic cleansing in Europe Anti-German sentiment in Europe Genocides in Europe Collective punishment 1944 in Germany American collusion with Soviet World War II crimes British collusion with Soviet World War II crimes Polish war crimes in World War II Soviet World War II crimes
Flight and expulsion of Germans (1944–1950)
[ "Biology" ]
19,726
[ "Behavior", "Aggression", "Human behavior", "Violence" ]
61,097
https://en.wikipedia.org/wiki/Roche%20limit
In celestial mechanics, the Roche limit, also called Roche radius, is the distance from a celestial body within which a second celestial body, held together only by its own force of gravity, will disintegrate because the first body's tidal forces exceed the second body's self-gravitation. Inside the Roche limit, orbiting material disperses and forms rings, whereas outside the limit, material tends to coalesce. The Roche radius depends on the radius of the first body and on the ratio of the bodies' densities. The term is named after Édouard Roche (, ), the French astronomer who first calculated this theoretical limit in 1848. Explanation The Roche limit typically applies to a satellite's disintegrating due to tidal forces induced by its primary, the body around which it orbits. Parts of the satellite that are closer to the primary are attracted more strongly by gravity from the primary than parts that are farther away; this disparity effectively pulls the near and far parts of the satellite apart from each other, and if the disparity (combined with any centrifugal effects due to the object's spin) is larger than the force of gravity holding the satellite together, it can pull the satellite apart. Some real satellites, both natural and artificial, can orbit within their Roche limits because they are held together by forces other than gravitation. Objects resting on the surface of such a satellite would be lifted away by tidal forces. A weaker satellite, such as a comet, could be broken up when it passes within its Roche limit. Since, within the Roche limit, tidal forces overwhelm the gravitational forces that might otherwise hold the satellite together, no satellite can gravitationally coalesce out of smaller particles within that limit. Indeed, almost all known planetary rings are located within their Roche limit. (Notable exceptions are Saturn's E-Ring and Phoebe ring. These two rings could possibly be remnants from the planet's proto-planetary accretion disc that failed to coalesce into moonlets, or conversely have formed when a moon passed within its Roche limit and broke apart.) The gravitational effect occurring below the Roche limit is not the only factor that causes comets to break apart. Splitting by thermal stress, internal gas pressure, and rotational splitting are other ways for a comet to split under stress. Determination The limiting distance to which a satellite can approach without breaking up depends on the rigidity of the satellite. At one extreme, a completely rigid satellite will maintain its shape until tidal forces break it apart. At the other extreme, a highly fluid satellite gradually deforms leading to increased tidal forces, causing the satellite to elongate, further compounding the tidal forces and causing it to break apart more readily. Most real satellites would lie somewhere between these two extremes, with tensile strength rendering the satellite neither perfectly rigid nor perfectly fluid. For example, a rubble-pile asteroid will behave more like a fluid than a solid rocky one; an icy body will behave quite rigidly at first but become more fluid as tidal heating accumulates and its ices begin to melt. But note that, as defined above, the Roche limit refers to a body held together solely by the gravitational forces which cause otherwise unconnected particles to coalesce, thus forming the body in question. 
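As a rough sketch of where the rigid-body limit quoted in the next paragraph comes from, one can equate the primary's tidal acceleration on a test mass at the satellite's surface with the satellite's own surface gravity. This is only an order-of-magnitude argument, assuming a spherical, non-rotating, rigid satellite; the symbols used here (d for the orbital distance, M_M and R_M for the mass and radius of the primary, M_m and r for the mass and radius of the satellite, ρ_M and ρ_m for the two densities) are introduced purely for illustration.
a_{\mathrm{tidal}} \approx \frac{2 G M_M r}{d^{3}}, \qquad a_{\mathrm{self}} = \frac{G M_m}{r^{2}}
\quad\Longrightarrow\quad
d = r \left( \frac{2 M_M}{M_m} \right)^{1/3} = R_M \left( \frac{2 \rho_M}{\rho_m} \right)^{1/3},
where the last step uses M = \tfrac{4}{3}\pi \rho R^{3} for both bodies. Inside this distance the tidal term dominates and loose surface material is lifted away; outside it, self-gravity wins.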
The Roche limit is also usually calculated for the case of a circular orbit, although it is straightforward to modify the calculation to apply to the case (for example) of a body passing the primary on a parabolic or hyperbolic trajectory. Rigid satellites The rigid-body Roche limit is a simplified calculation for a spherical satellite. Irregular shapes such as those of tidal deformation on the body or the primary it orbits are neglected. It is assumed to be in hydrostatic equilibrium. These assumptions, although unrealistic, greatly simplify calculations. The Roche limit for a rigid spherical satellite is the distance, $d$, from the primary at which the gravitational force on a test mass at the surface of the object is exactly equal to the tidal force pulling the mass away from the object: $d = R_M \left( 2\,\frac{\rho_M}{\rho_m} \right)^{1/3}$, where $R_M$ is the radius of the primary, $\rho_M$ is the density of the primary, and $\rho_m$ is the density of the satellite. This can be equivalently written as $d = r \left( 2\,\frac{M_M}{M_m} \right)^{1/3}$, where $r$ is the radius of the secondary, $M_M$ is the mass of the primary, and $M_m$ is the mass of the secondary. A third equivalent form, which uses only one property for each of the two bodies, the mass of the primary and the density of the secondary, is $d = \left( \frac{3 M_M}{2 \pi \rho_m} \right)^{1/3}$. These all represent the orbital distance inside of which loose material (e.g. regolith) on the surface of the satellite closest to the primary would be pulled away, and likewise material on the side opposite the primary will also go away from, rather than toward, the satellite. Fluid satellites A more accurate approach for calculating the Roche limit takes the deformation of the satellite into account. An extreme example would be a tidally locked liquid satellite orbiting a planet, where any force acting upon the satellite would deform it into a prolate spheroid. The calculation is complex and its result cannot be represented in an exact algebraic formula. Roche himself derived the following approximate solution for the Roche limit: $d \approx 2.44\, R_M \left( \frac{\rho_M}{\rho_m} \right)^{1/3}$. However, a better approximation that takes into account the primary's oblateness and the satellite's mass is $d \approx 2.423\, R_M \left( \frac{\rho_M}{\rho_m} \right)^{1/3} \left( \frac{\left(1 + \frac{m}{3M}\right) + \frac{c}{3R_M}\left(1 + \frac{m}{M}\right)}{1 - c/R_M} \right)^{1/3}$, where $c/R_M$ is the oblateness of the primary. The fluid solution is appropriate for bodies that are only loosely held together, such as a comet. For instance, comet Shoemaker–Levy 9's decaying orbit around Jupiter passed within its Roche limit in July 1992, causing it to fragment into a number of smaller pieces. On its next approach in 1994 the fragments crashed into the planet. Shoemaker–Levy 9 was first observed in 1993, but its orbit indicated that it had been captured by Jupiter a few decades prior. See also Roche lobe Chandrasekhar limit Spaghettification (the extreme case of tidal distortion) Hill sphere Sphere of influence (black hole) Black hole Triton (moon) (Neptune's satellite) Comet Shoemaker–Levy 9 References Sources 2.44 is mentioned on page 258. External links Discussion of the Roche Limit; Audio: Cain/Gay – Astronomy Cast Tidal Forces Across the Universe – August 2007 Roche Limit Description from NASA Concepts in astrophysics Equations of astronomy Gravity Planetary rings Space science Tidal forces Solar System
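As a quick numerical illustration of the rigid-body formula and the simpler fluid approximation above, the following Python sketch evaluates both limits for the Earth–Moon pair. The radius and density figures are approximate textbook values assumed here only for the example; they are not taken from this article, and the fluid case uses only Roche's bare coefficient without the oblateness and mass corrections.

```python
# Sketch: rigid-body and (simplified) fluid Roche limits from the formulas above.
# Earth/Moon values below are approximate reference figures, assumed for illustration.

def roche_rigid(r_primary, rho_primary, rho_satellite):
    """Rigid-body limit: d = R * (2 * rho_M / rho_m) ** (1/3)."""
    return r_primary * (2.0 * rho_primary / rho_satellite) ** (1.0 / 3.0)

def roche_fluid(r_primary, rho_primary, rho_satellite, coefficient=2.44):
    """Fluid limit with Roche's approximate coefficient, d ~= 2.44 * R * (rho_M / rho_m) ** (1/3)."""
    return coefficient * r_primary * (rho_primary / rho_satellite) ** (1.0 / 3.0)

R_EARTH = 6.371e6      # mean radius of Earth, m (approximate)
RHO_EARTH = 5514.0     # mean density of Earth, kg/m^3 (approximate)
RHO_MOON = 3344.0      # mean density of the Moon, kg/m^3 (approximate)

print(f"Rigid-body limit: {roche_rigid(R_EARTH, RHO_EARTH, RHO_MOON) / 1e3:,.0f} km")
print(f"Fluid limit:      {roche_fluid(R_EARTH, RHO_EARTH, RHO_MOON) / 1e3:,.0f} km")
```

With these assumed inputs the rigid-body limit comes out near 9,500 km and the fluid limit near 18,400 km from Earth's centre, both far inside the Moon's actual orbital distance of roughly 384,000 km.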
Roche limit
[ "Physics", "Astronomy" ]
1,273
[ "Concepts in astrophysics", "Outer space", "Concepts in astronomy", "Space science", "Astrophysics", "Equations of astronomy", "Solar System" ]
61,098
https://en.wikipedia.org/wiki/Ethnic%20cleansing
Ethnic cleansing is the systematic forced removal of ethnic, racial, or religious groups from a given area, with the intent of making the society ethnically homogeneous. Along with direct removal such as deportation or population transfer, it also includes indirect methods aimed at forced migration by coercing the victim group to flee and preventing its return, such as murder, rape, and property destruction. Both the definition and the charge of ethnic cleansing are often disputed, with some researchers including and others excluding coercive assimilation or mass killings as a means of depopulating an area of a particular group. Although scholars do not agree on which events constitute ethnic cleansing, many instances have occurred throughout history. The term was first used to describe Albanian nationalist treatment of the Kosovo Serbs in the 1980s, and entered widespread use during the Yugoslav Wars in the 1990s. Since then, the term has gained widespread acceptance through its use in journalism and the media. Although research originally focused on deep-rooted animosities as an explanation for ethnic cleansing events, more recent studies depict ethnic cleansing as "a natural extension of the homogenizing tendencies of nation states" or emphasize security concerns and the effects of democratization, portraying ethnic tensions as a contributing factor. Research has also focused on the role of war as a causative or potentiating factor in ethnic cleansing. However, states in a similar strategic situation can have widely varying policies towards minority ethnic groups perceived as a security threat. Ethnic cleansing has no legal definition under international criminal law, but the methods by which it is carried out are considered crimes against humanity and may also fall under the Genocide Convention. Etymology An antecedent to the term is a Greek word used in ancient texts, e.g., to describe atrocities that accompanied Alexander the Great's conquest of Thebes in 335 BCE. The expulsion of the Moriscos from Spain between 1609 and 1614 is considered by some authors to be one of the first episodes of state-sponsored ethnic cleansing in the modern western world. Raphael Lemkin, who coined the term "genocide", considered the displacement of Native Americans by American settlers as a historical example of genocide. Others, like historian Gary Anderson, contend that genocide does not accurately characterize any aspect of American history, suggesting instead that ethnic cleansing is a more appropriate term. The Circassian genocide, also known as "Tsitsekun", is often regarded by various historians as the first large-scale ethnic cleansing campaign launched by a state during the 19th-century industrial era. Imperial Russian general Nikolay Yevdokimov, who supervised the operations of the Circassian genocide during the 1860s, dehumanised Muslim Circassians as "a pestilence" to be expelled from their native lands. The Russian objective was the annexation of land, and the Russian military operations that forcibly deported Circassians were designated by Yevdokimov as "ochishchenie" (cleansing). In the early 1900s, regional variants of the term could be found among the Czechs, the Poles, the French and the Germans. A 1913 Carnegie Endowment report condemning the actions of all participants in the Balkan Wars contained various new terms to describe brutalities committed toward ethnic groups. During the Holocaust in World War II, Nazi Germany pursued a policy of ensuring that Europe was "cleaned of Jews".
The Nazis called for the genocide and ethnic cleansing of most Slavic peoples in central and eastern Europe for the purpose of providing more living space for the Germans. During the Genocide of Serbs in the Independent State of Croatia, a euphemism meaning "cleansing the terrain" was used by the Croatian Ustaše to describe military actions in which non-Croats were purposely and systematically killed or otherwise uprooted from their homes. The term was also used in the December 20, 1941 directive of Serbian Chetniks in reference to the genocidal massacres they committed against Bosniaks and Croats between 1941 and 1945. A Russian phrase literally meaning "cleansing of borders" was used in Soviet documents of the early 1930s to refer to the forced resettlement of Polish people from the border zone in the Byelorussian and Ukrainian SSRs. This process of population transfer in the Soviet Union was repeated on an even larger scale in 1939–1941, involving many other groups suspected of disloyalty. In its complete form, the term appeared for the first time in the Romanian language in an address by Vice Prime Minister Mihai Antonescu to cabinet members in July 1941. After the beginning of the invasion of the Soviet Union, he concluded: "I do not know when the Romanians will have such chance for ethnic cleansing." In the 1980s, the Soviets used the term "etnicheskoye chishcheniye", which literally translates to "ethnic cleansing", to describe Azerbaijani efforts to drive Armenians away from Nagorno-Karabakh. It was widely popularized by the Western media during the Bosnian War (1992–1995). In 1992, the German equivalent of ethnic cleansing (ethnische Säuberung) was named German Un-word of the Year by the Gesellschaft für deutsche Sprache due to its euphemistic, inappropriate nature. Definitions The Final Report of the Commission of Experts established pursuant to Security Council Resolution 780 offered one definition of ethnic cleansing. The official United Nations definition of ethnic cleansing is "rendering an area ethnically homogeneous by using force or intimidation to remove from a given area persons of another ethnic or religious group." As a category, ethnic cleansing encompasses a continuum or spectrum of policies. In the words of Andrew Bell-Fialkoff, "ethnic cleansing ... defies easy definition. At one end it is virtually indistinguishable from forced emigration and population exchange while at the other it merges with deportation and genocide. At the most general level, however, ethnic cleansing can be understood as the expulsion of a population from a given territory." Terry Martin has defined ethnic cleansing as "the forcible removal of an ethnically defined population from a given territory" and as "occupying the central part of a continuum between genocide on one end and nonviolent pressured ethnic emigration on the other end." Gregory Stanton, the founder of Genocide Watch, has criticised the rise of the term and its use for events that he feels should be called "genocide": because "ethnic cleansing" has no legal definition, its media use can detract attention from events that should be prosecuted as genocide. As a crime under international law There is no international treaty that specifies a specific crime of ethnic cleansing; however, ethnic cleansing in the broad sense—the forcible deportation of a population—is defined as a crime against humanity under the statutes of both the International Criminal Court (ICC) and the International Criminal Tribunal for the Former Yugoslavia (ICTY).
The gross human rights violations integral to stricter definitions of ethnic cleansing are treated as separate crimes falling under public international law as crimes against humanity and, in certain circumstances, genocide. There are also situations, such as the expulsion of Germans after World War II, where ethnic cleansing has taken place without legal redress (see Preussische Treuhand v. Poland). Timothy V. Waters argues that similar ethnic cleansing could go unpunished in the future. Mutual ethnic cleansing Mutual ethnic cleansing occurs when two groups commit ethnic cleansing against minority members of the other group within their own territories. For instance, in the 1920s, Turkey expelled its Greek minority and Greece expelled its Turkish minority following the Greco-Turkish War. Other examples where mutual ethnic cleansing occurred include the First Nagorno-Karabakh War and the population transfers by the Soviets of Germans, Poles, and Ukrainians after World War II. Causes According to Michael Mann, in The Dark Side of Democracy (2004), murderous ethnic cleansing is strongly related to the creation of democracies. He argues that murderous ethnic cleansing is due to the rise of nationalism, which associates citizenship with a specific ethnic group. Democracy, therefore, is tied to ethnic and national forms of exclusion. Nevertheless, it is not democratic states that are more prone to commit ethnic cleansing, because minorities tend to have constitutional guarantees; nor are stable authoritarian regimes (except the Nazi and communist regimes) the likely perpetrators of murderous ethnic cleansing, but rather those regimes that are in the process of democratization. Ethnic hostility appears where ethnicity overshadows social classes as the primordial system of social stratification. Usually, in deeply divided societies, categories such as class and ethnicity are deeply intertwined, and when an ethnic group is seen as oppressor or exploiter of the other, serious ethnic conflict can develop. Michael Mann holds that when two ethnic groups claim sovereignty over the same territory and can feel threatened, their differences can lead to severe grievances and danger of ethnic cleansing. The perpetration of murderous ethnic cleansing tends to occur in unstable geopolitical environments and in contexts of war. As ethnic cleansing requires high levels of organisation and is usually directed by states or other authoritative powers, the perpetrators are usually state powers or institutions with some coherence and capacity, not failed states as is generally perceived. The perpetrator powers tend to draw support from core constituencies that favour combinations of nationalism, statism, and violence. Ethnic cleansing was prevalent during the Age of Nationalism in Europe (19th and 20th centuries). Multi-ethnic European states engaged in ethnic cleansing against minorities in order to pre-empt their secession and the loss of territory. Ethnic cleansing was particularly prevalent during periods of interstate war. Genocide Ethnic cleansing has been described as part of a continuum of violence whose most extreme form is genocide. Ethnic cleansing is similar to forced deportation or population transfer. While ethnic cleansing and genocide may share the same goal and methods (e.g., forced displacement), ethnic cleansing is intended to displace a persecuted population from a given territory, while genocide is intended to destroy a group. Some academics consider genocide to be a subset of "murderous ethnic cleansing".
Norman Naimark writes that these concepts are different but related, for "literally and figuratively, ethnic cleansing bleeds into genocide, as mass murder is committed in order to rid the land of a people." William Schabas states "ethnic cleansing is also a warning sign of genocide to come. Genocide is the last resort of the frustrated ethnic cleanser." Multiple genocide scholars have criticized distinguishing between ethnic cleansing and genocide, with Martin Shaw arguing that forced deportation necessarily results in the destruction of a group and this must be foreseen by the perpetrators. As a military, political, and economic tactic The resettlement policy of the Neo-Assyrian Empire in the 9th and 7th centuries BC is considered by some scholars to be one of the first cases of ethnic cleansing. During the 1980s, in Lebanon, ethnic cleansing was common during all phases of the conflict, notable incidents were seen in the early phase of the war, such as the Damour massacre, the Karantina massacre, the Siege of the Tel al-Zaatar Palestinian refugee camp, and during the 1982 Lebanon War such as the Sabra and Shatila Massacre committed by Lebanese Maronite forces backed by Israel against Palestinian refugees and Lebanese Shia civilians. After the Israeli withdrawal from the Chouf, the Mountain War broke out, where ethnic cleansings (mostly in the form of tit-for-tat killings) occurred. During that time, the Syrian backed, mostly Druze dominated People's Liberation Army used a policy they called "territorial cleansing" to "drain" the Chouf of Maronite Christians in order to deny them of resisting the advance of the PSP. As a result, 163,670 Christian villagers were displaced due to these operations. In response to these massacres, the Lebanese Forces conducted a similar policy, which resulted in 20,000 Druze displaced. Ethnic cleansing was a common phenomenon in the wars in Croatia, Kosovo, and Bosnia and Herzegovina. This entailed intimidation, forced expulsion, or killing of the unwanted ethnic group as well as the destruction of the places of worship, cemeteries and cultural and historical buildings of that ethnic group in order to alter the population composition of an area in the favour of another ethnic group which would become the majority. According to numerous ICTY verdicts and indictments, Serb and Croat forces performed ethnic cleansing of their territories planned by their political leadership to create ethnically pure states (Republika Srpska and Republic of Serbian Krajina by the Serbs; and Herzeg-Bosnia by the Croats). Survivors of the ethnic cleansing were left severely traumatized as a consequence of this campaign. Israeli herders have engaged in a systemic displacement of Palestinian herders in Area C of the West Bank as a form of nationalist and economic warfare. When enforced as part of a political settlement, as happened with the expulsion of Germans after World War II through the forced resettlement of ethnic Germans to Germany in its reduced borders after 1945, the forced population movements, constituting a type of ethnic cleansing, may contribute to long-term stability of a post-conflict nation. Some justifications may be made as to why the targeted group will be moved in the conflict resolution stages, as in the case of the ethnic Germans, some individuals of the large German population in Czechoslovakia and prewar Poland had encouraged Nazi jingoism before World War II, but this was forcibly resolved. 
According to historian Norman Naimark, during an ethnic cleansing process, there may be destruction of physical symbols of the victims including temples, books, monuments, graveyards, and street names: "Ethnic cleansing involves not only the forced deportation of entire nations but the eradication of the memory of their presence." In many cases, the side perpetrating the alleged ethnic cleansing and its allies have fiercely disputed the charge. Instances See also Cultural genocide Discrimination based on skin tone Ethnic conflict Ethnic violence Ethnocentrism Ethnocide Forced displacement Identity cleansing Identity politics Nativism (politics) Political cleansing of population Population cleansing Racial segregation Racism Redlining Religious persecution Religious discrimination Religious segregation Religious violence Sectarian violence Social cleansing Sundown town Supremacism Xenophobia Explanatory notes Notes References Vladimir Petrović (2007), Etnicizacija čišćenja u reči i nedelu (Ethnicisation of Cleansing), Hereticus 1/2007, 11–36 Further reading External links Collective punishment Ethnic conflict Military-related euphemisms Forced migration Human rights abuses Persecution Racially motivated violence Racism Violence 1940s neologisms
Ethnic cleansing
[ "Biology" ]
2,927
[ "Behavior", "Aggression", "Human behavior", "Violence" ]
61,162
https://en.wikipedia.org/wiki/Taurine
Taurine, or 2-aminoethanesulfonic acid, is a non-proteinogenic, naturally occurring amino sulfonic acid that is widely distributed in animal tissues. It is a major constituent of bile and can be found in the large intestine, and accounts for up to 0.1% of total human body weight. Taurine is named after the Latin taurus (cognate to Ancient Greek ταῦρος, taûros) meaning bull or ox, as it was first isolated from ox bile in 1827 by German scientists Friedrich Tiedemann and Leopold Gmelin. It was discovered in human bile in 1846 by Edmund Ronalds. Although taurine is abundant in human organs with diverse putative roles, it is not an essential human dietary nutrient and is not included among nutrients with a recommended intake level. Taurine is synthesized naturally in the human liver from methionine and cysteine. Taurine is commonly sold as a dietary supplement, but there is no good clinical evidence that taurine supplements provide any benefit to human health. Taurine is used as a food additive for cats (who require it as an essential nutrient), dogs, and poultry. Taurine concentrations in land plants are low or undetectable, but appreciable amounts have been found in algae. Chemical and biochemical features Taurine exists as a zwitterion, +H3N–CH2–CH2–SO3−, as verified by X-ray crystallography. The sulfonic acid has a low pKa, ensuring that it is fully ionized to the sulfonate at the pHs found in the intestinal tract. Synthesis Synthetic taurine is obtained by the ammonolysis of isethionic acid (2-hydroxyethanesulfonic acid), which in turn is obtained from the reaction of ethylene oxide with aqueous sodium bisulfite. A direct approach involves the reaction of aziridine with sulfurous acid. In 1993, taurine was produced for commercial purposes on the scale of thousands of tonnes: 50% for pet food and 50% in pharmaceutical applications. As of 2010, China alone has more than 40 manufacturers of taurine. Most of these enterprises employ the ethanolamine method, which accounts for a large combined annual production. In the laboratory, taurine can be produced by alkylation of ammonia with bromoethanesulfonate salts. Biosynthesis Taurine is naturally derived from cysteine. Mammalian taurine synthesis occurs in the liver via the cysteine sulfinic acid pathway. In this pathway, cysteine is first oxidized to its sulfinic acid, catalyzed by the enzyme cysteine dioxygenase. Cysteine sulfinic acid, in turn, is decarboxylated by sulfinoalanine decarboxylase to form hypotaurine. Hypotaurine is enzymatically oxidized to yield taurine by hypotaurine dehydrogenase. Taurine is also produced by the transsulfuration pathway, which converts homocysteine into cystathionine. The cystathionine is then converted to hypotaurine by the sequential action of three enzymes: cystathionine gamma-lyase, cysteine dioxygenase, and cysteine sulfinic acid decarboxylase. Hypotaurine is then oxidized to taurine as described above. A pathway for taurine biosynthesis from serine and sulfate is reported in microalgae, developing chicken embryos, and chick liver. Serine dehydratase converts serine to 2-aminoacrylate, which is converted to cysteic acid by 3′-phosphoadenylyl sulfate:2-aminoacrylate C-sulfotransferase. Cysteic acid is converted to taurine by cysteine sulfinic acid decarboxylase. In food Taurine occurs naturally in fish and meat. The mean daily intake from omnivore diets has been estimated at a few tens of milligrams (with a wide range), and is low or negligible from a vegan diet. Typical taurine consumption in the American diet is likewise modest.
Taurine is partially destroyed by heat in processes such as baking and boiling. This is a concern for cat food, as cats have a dietary requirement for taurine and can easily become deficient. Either raw feeding or supplementing taurine can satisfy this requirement. Both lysine and taurine can mask the metallic flavor of potassium chloride, a salt substitute. Breast milk Prematurely born infants are believed to lack the enzymes needed to convert cystathionine to cysteine, and may, therefore, become deficient in taurine. Taurine is present in breast milk, and has been added to many infant formulas as a measure of prudence since the early 1980s. However, this practice has never been rigorously studied, and as such it has yet to be proven to be necessary, or even beneficial. Energy drinks and dietary supplements Taurine is an ingredient in some energy drinks in amounts of 1–3 g per serving. A 1999 assessment of European consumption of energy drinks found that taurine intake was per day. Research Taurine is not regarded as an essential human dietary nutrient and has not been assigned recommended intake levels. High-quality clinical studies to determine possible effects of taurine in the body or following dietary supplementation are absent from the literature. Preliminary human studies on the possible effects of taurine supplementation have been inadequate due to low subject numbers, inconsistent designs, and variable doses. Preliminary studies have suggested that supplementing with taurine may increase exercise capacity and affects lipid profiles in individuals with diabetes. Safety and toxicity According to the European Food Safety Authority, taurine is "considered to be a skin and eye irritant and skin sensitiser, and to be hazardous if inhaled;" it may be safe to consume up to 6 grams of taurine per day. Other sources indicate that taurine is safe for supplemental intake in normal healthy adults at up to 3 grams per day. A 2008 review found no documented reports of negative or positive health effects associated with the amount of taurine used in energy drinks, concluding, "The amounts of guarana, taurine, and ginseng found in popular energy drinks are far below the amounts expected to deliver either therapeutic benefits or adverse events". Animal dietary requirement Cats Cats lack the enzymatic machinery (sulfinoalanine decarboxylase) to produce taurine and must therefore acquire it from their diet. A taurine deficiency in cats can lead to retinal degeneration and eventually blindness – a condition known as central retinal degeneration as well as hair loss and tooth decay. Other effects of a diet lacking in this essential amino acid are dilated cardiomyopathy, and reproductive failure in female cats. Decreased plasma taurine concentration has been demonstrated to be associated with feline dilated cardiomyopathy. Unlike CRD, the condition is reversible with supplementation. Taurine is now a requirement of the Association of American Feed Control Officials (AAFCO) and any dry or wet food product labeled approved by the AAFCO should have a minimum of 0.1% taurine in dry food and 0.2% in wet food. Studies suggest the amino acid should be supplied at per kilogram of bodyweight per day for domestic cats. Other mammals A number of other mammals also have a requirement for taurine. 
While the majority of dogs can synthesize taurine, case reports have described a single American cocker spaniel, 19 Newfoundland dogs, and a family of golden retrievers suffering from taurine deficiency treatable with supplementation. Foxes on fur farms also appear to require dietary taurine. The rhesus, cebus and cynomolgus monkeys each require taurine at least in infancy. The giant anteater also requires taurine. Birds Taurine appears to be essential for the development of passerine birds. Many passerines seek out taurine-rich spiders to feed their young, particularly just after hatching. Researchers compared the behaviours and development of birds fed a taurine-supplemented diet to a control diet and found that the juveniles fed taurine-rich diets as neonates were much larger risk takers and more adept at spatial learning tasks. Under natural conditions, each blue tit nestling receives taurine from its parents every day. Taurine can be synthesized by chickens. Supplementation has no effect on chickens raised under adequate lab conditions, but seems to help with growth under stresses such as heat and dense housing. Fish Species of fish, mostly carnivorous ones, show reduced growth and survival when the fish-based feed in their food is replaced with soy meal or feather meal. Taurine has been identified as the factor responsible for this phenomenon; supplementation of taurine to plant-based fish feed reverses these effects. Future aquaculture is expected to use more of these more environmentally friendly protein sources, so supplementation would become more important. The need for taurine in fish is conditional, differing by species and growth stage. The olive flounder, for example, has a lower capacity to synthesize taurine compared to the rainbow trout. Juvenile fish are less efficient at taurine biosynthesis due to reduced cysteine sulfinate decarboxylase levels. Derivatives Taurine is used in the preparation of the anthelmintic drug netobimin (Totabin). Taurolidine Taurocholic acid and tauroselcholic acid Tauromustine 5-Taurinomethyluridine and 5-taurinomethyl-2-thiouridine are modified uridines in (human) mitochondrial tRNA. Tauryl is the functional group attaching at the sulfur, 2-aminoethylsulfonyl. Taurino is the functional group attaching at the nitrogen, 2-sulfoethylamino. Thiotaurine Peroxytaurine, which is a degradation product formed by both superoxide and heat degradation. See also Homotaurine (tramiprosate), precursor to acamprosate Taurates, a group of surfactants References Amines Sulfonic acids Glycine receptor agonists Inhibitory amino acids
Taurine
[ "Chemistry" ]
2,119
[ "Functional groups", "Sulfonic acids" ]
61,190
https://en.wikipedia.org/wiki/Cauchy%27s%20integral%20theorem
In mathematics, the Cauchy integral theorem (also known as the Cauchy–Goursat theorem) in complex analysis, named after Augustin-Louis Cauchy (and Édouard Goursat), is an important statement about line integrals for holomorphic functions in the complex plane. Essentially, it says that if $f(z)$ is holomorphic in a simply connected domain Ω, then for any simply closed contour $C$ in Ω, that contour integral is zero: $\oint_C f(z)\,dz = 0$. Statement Fundamental theorem for complex line integrals If $f(z)$ is a holomorphic function on an open region $U$, and $\gamma$ is a curve in $U$ from $z_0$ to $z_1$, then $\int_\gamma f'(z)\,dz = f(z_1) - f(z_0)$. Also, when $f(z)$ has a single-valued antiderivative in an open region $U$, then the path integral $\int_\gamma f(z)\,dz$ is path independent for all paths in $U$. Formulation on simply connected regions Let $U \subseteq \mathbb{C}$ be a simply connected open set, and let $f: U \to \mathbb{C}$ be a holomorphic function. Let $\gamma$ be a smooth closed curve in $U$. Then: $\oint_\gamma f(z)\,dz = 0$. (The condition that $U$ be simply connected means that $U$ has no "holes", or in other words, that the fundamental group of $U$ is trivial.) General formulation Let $U \subseteq \mathbb{C}$ be an open set, and let $f: U \to \mathbb{C}$ be a holomorphic function. Let $\gamma$ be a smooth closed curve in $U$. If $\gamma$ is homotopic to a constant curve, then: $\oint_\gamma f(z)\,dz = 0$. (Recall that a curve is homotopic to a constant curve if there exists a smooth homotopy (within $U$) from the curve to the constant curve. Intuitively, this means that one can shrink the curve into a point without exiting the space.) The first version is a special case of this because on a simply connected set, every closed curve is homotopic to a constant curve. Main example In both cases, it is important to remember that the curve $\gamma$ does not surround any "holes" in the domain, or else the theorem does not apply. A famous example is the following curve: $\gamma(t) = e^{it}$, $t \in [0, 2\pi]$, which traces out the unit circle. Here the following integral: $\oint_\gamma \frac{1}{z}\,dz = 2\pi i$ is nonzero. The Cauchy integral theorem does not apply here since $f(z) = 1/z$ is not defined at $z = 0$. Intuitively, $\gamma$ surrounds a "hole" in the domain of $f$, so $\gamma$ cannot be shrunk to a point without exiting the space. Thus, the theorem does not apply. Discussion As Édouard Goursat showed, Cauchy's integral theorem can be proven assuming only that the complex derivative $f'(z)$ exists everywhere in $U$. This is significant because one can then prove Cauchy's integral formula for these functions, and from that deduce these functions are infinitely differentiable. The condition that $U$ be simply connected means that $U$ has no "holes" or, in homotopy terms, that the fundamental group of $U$ is trivial; for instance, every open disk $U = B_r(z_0)$, for $r > 0$, qualifies. The condition is crucial; consider $\gamma(t) = e^{it}$, $t \in [0, 2\pi]$, which traces out the unit circle, and then the path integral $\oint_\gamma \frac{1}{z}\,dz = 2\pi i$ is nonzero; the Cauchy integral theorem does not apply here since $f(z) = 1/z$ is not defined (and is certainly not holomorphic) at $z = 0$. One important consequence of the theorem is that path integrals of holomorphic functions on simply connected domains can be computed in a manner familiar from the fundamental theorem of calculus: let $U$ be a simply connected open subset of $\mathbb{C}$, let $f: U \to \mathbb{C}$ be a holomorphic function, and let $\gamma$ be a piecewise continuously differentiable path in $U$ with start point $a$ and end point $b$. If $F$ is a complex antiderivative of $f$, then $\int_\gamma f(z)\,dz = F(b) - F(a)$. The Cauchy integral theorem is valid with a weaker hypothesis than given above, e.g. given $U$, a simply connected open subset of $\mathbb{C}$, we can weaken the assumptions to $f$ being holomorphic on $U$ and continuous on its closure $\overline{U}$, and $\gamma$ a rectifiable simple loop in $\overline{U}$. The Cauchy integral theorem leads to Cauchy's integral formula and the residue theorem.
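The statement and the counterexample above can be checked numerically. The short Python sketch below approximates contour integrals over the unit circle with a crude Riemann sum (written for illustration, not for accuracy): for an entire function such as exp the result is essentially zero, as the theorem predicts, while for 1/z, whose singularity at the origin is enclosed by the curve, the result is approximately 2πi.

```python
# Riemann-sum approximation of a contour integral over the unit circle
# gamma(t) = exp(i*t), t in [0, 2*pi]:  integral ~= sum f(gamma(t_k)) * gamma'(t_k) * dt
import cmath
from math import pi

def unit_circle_integral(f, n=200_000):
    dt = 2 * pi / n
    total = 0j
    for k in range(n):
        z = cmath.exp(1j * k * dt)   # gamma(t_k)
        dz = 1j * z                  # gamma'(t_k) = i * exp(i*t_k)
        total += f(z) * dz * dt
    return total

print(unit_circle_integral(cmath.exp))        # ~ 0: exp is holomorphic everywhere
print(unit_circle_integral(lambda z: 1 / z))  # ~ 2*pi*i: theorem does not apply
print(2j * pi)                                # reference value for comparison
```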
Proof If one assumes that the partial derivatives of a holomorphic function are continuous, the Cauchy integral theorem can be proven as a direct consequence of Green's theorem and the fact that the real and imaginary parts of $f = u + iv$ must satisfy the Cauchy–Riemann equations in the region bounded by $\gamma$, and moreover in the open neighborhood $U$ of this region. Cauchy provided this proof, but it was later proven by Goursat without requiring techniques from vector calculus, or the continuity of partial derivatives. We can break the integrand $f$, as well as the differential $dz$, into their real and imaginary components: $f = u + iv$ and $dz = dx + i\,dy$. In this case we have $\oint_\gamma f(z)\,dz = \oint_\gamma (u + iv)(dx + i\,dy) = \oint_\gamma (u\,dx - v\,dy) + i \oint_\gamma (v\,dx + u\,dy)$. By Green's theorem, we may then replace the integrals around the closed contour $\gamma$ with an area integral throughout the domain $D$ that is enclosed by $\gamma$ as follows: $\oint_\gamma (u\,dx - v\,dy) = \iint_D \left( -\frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} \right) dx\,dy$ and $\oint_\gamma (v\,dx + u\,dy) = \iint_D \left( \frac{\partial u}{\partial x} - \frac{\partial v}{\partial y} \right) dx\,dy$. But as the real and imaginary parts of a function holomorphic in the domain $D$, $u$ and $v$ must satisfy the Cauchy–Riemann equations there: $\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}$ and $\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}$. We therefore find that both integrands (and hence their integrals) are zero. This gives the desired result $\oint_\gamma f(z)\,dz = 0$. See also Morera's theorem Methods of contour integration Star domain References External links Jeremy Orloff, 18.04 Complex Variables with Applications Spring 2018 Massachusetts Institute of Technology: MIT OpenCourseWare Creative Commons. Augustin-Louis Cauchy Theorems in complex analysis
Cauchy's integral theorem
[ "Mathematics" ]
1,019
[ "Theorems in mathematical analysis", "Theorems in complex analysis" ]
61,193
https://en.wikipedia.org/wiki/Externality
In economics, an externality or external cost is an indirect cost or benefit to an uninvolved third party that arises as an effect of another party's (or parties') activity. Externalities can be considered as unpriced components that are involved in either consumer or producer market transactions. Air pollution from motor vehicles is one example. The cost of air pollution to society is not paid by either the producers or the users of motorized transport. Water pollution from mills and factories is another example. All (water) consumers are made worse off by pollution but are not compensated by the market for this damage. A positive externality is when an individual's consumption in a market increases the well-being of others, but the individual does not charge the third party for the benefit. The third party is essentially getting a free product or service. An example of this might be the apartment above a bakery receiving some free heat in winter. The people who live in the apartment do not compensate the bakery for this benefit. The concept of externality was first developed by Alfred Marshall in the 1890s and achieved broader attention in the works of economist Arthur Pigou in the 1920s. The prototypical example of a negative externality is environmental pollution. Pigou argued that a tax equal to the marginal damage or marginal external cost (later called a "Pigouvian tax") on negative externalities could be used to reduce their incidence to an efficient level. Subsequent thinkers have debated whether it is preferable to tax or to regulate negative externalities, the optimally efficient level of the Pigouvian taxation, and what factors cause or exacerbate negative externalities, such as providing investors in corporations with limited liability for harms committed by the corporation. Externalities often occur when the production or consumption of a product or service's private price equilibrium cannot reflect the true costs or benefits of that product or service for society as a whole. This causes the externality competitive equilibrium to not adhere to the condition of Pareto optimality. Thus, since resources can be better allocated, externalities are an example of market failure. Externalities can be either positive or negative. Governments and institutions often take actions to internalize externalities, so that market-priced transactions can incorporate all the benefits and costs associated with transactions between economic agents. The most common way this is done is by imposing taxes on the producers of the externality. This is usually done in a manner similar to a quota: no tax is imposed at first, and then once the externality reaches a certain point, a very high tax is imposed. However, since regulators do not always have all the information on the externality, it can be difficult to impose the right tax. Once the externality is internalized through imposing a tax, the competitive equilibrium is Pareto optimal. History of the concept The term "externality" was first coined by the British economist Alfred Marshall in his seminal work, "Principles of Economics," published in 1890. Marshall introduced the concept to elucidate the effects of production and consumption activities that extend beyond the immediate parties involved in a transaction. Marshall's formulation of externalities laid the groundwork for subsequent scholarly inquiry into the broader societal impacts of economic actions.
While Marshall provided the initial conceptual framework for externalities, it was Arthur Pigou, a British economist, who further developed the concept in his influential work, "The Economics of Welfare," published in 1920. Pigou expanded upon Marshall's ideas and introduced the concept of "Pigovian taxes" or corrective taxes aimed at internalizing externalities by aligning private costs with social costs. His work emphasized the role of government intervention in addressing market failures resulting from externalities. Additionally, the American economist Frank Knight contributed to the understanding of externalities through his writings on social costs and benefits in the 1920s and 1930s. Knight's work highlighted the inherent challenges in quantifying and mitigating externalities within market systems, underscoring the complexities involved in achieving optimal resource allocation. Throughout the 20th century, the concept of externalities continued to evolve with advancements in economic theory and empirical research. Scholars such as Ronald Coase and Harold Hotelling made significant contributions to the understanding of externalities and their implications for market efficiency and welfare. The recognition of externalities as a pervasive phenomenon with wide-ranging implications has led to its incorporation into various fields beyond economics, including environmental science, public health, and urban planning. Contemporary debates surrounding issues such as climate change, pollution, and resource depletion underscore the enduring relevance of the concept of externalities in addressing pressing societal challenges. Definitions A negative externality is any difference between the private cost of an action or decision to an economic agent and the social cost. In simple terms, a negative externality is anything that causes an indirect cost to individuals. An example is the toxic gases that are released from industries or mines, these gases cause harm to individuals within the surrounding area and have to bear a cost (indirect cost) to get rid of that harm. Conversely, a positive externality is any difference between the private benefit of an action or decision to an economic agent and the social benefit. A positive externality is anything that causes an indirect benefit to individuals and for which the producer of that positive externality is not compensated. For example, planting trees makes individuals' property look nicer and it also cleans the surrounding areas. In microeconomic theory, externalities are factored into competitive equilibrium analysis as the social effect, as opposed to the private market which only factors direct economic effects. The social effect of economic activity is the sum of the indirect (the externalities) and direct factors. The Pareto optimum, therefore, is at the levels in which the social marginal benefit equals the social marginal cost. Externalities are the residual effects of economic activity on persons not directly participating in the transaction. The consequences of producer or consumer behaviors that result in external costs or advantages imposed on others are not taken into account by market pricing and can have both positive and negative effects. To further elaborate on this, when expenses associated with the production or use of an item or service are incurred by others but are not accounted for in the market price, this is known as a negative externality. 
The health and well-being of local populations may be negatively impacted by environmental deterioration resulting from the extraction of natural resources. Comparably, the tranquility of surrounding inhabitants might be disturbed by noise pollution from industry or transit, which lowers their quality of life. On the other hand, positive externalities occur when the activities of producers or consumers benefit other parties in ways that are not accounted for in market exchanges. A prime example of a positive externality is education, as those who invest in it gain knowledge and production for society as a whole in addition to personal profit. Government involvement is frequently necessary to address externalities. This can be done by enacting laws, Pigovian taxes, or other measures that encourage positive externalities or internalize external costs. Through the integration of externalities into economic research and policy formulation, society may endeavor to get results that optimize aggregate well-being and foster sustainable growth. Implications A voluntary exchange may reduce societal welfare if external costs exist. The person who is affected by the negative externalities in the case of air pollution will see it as lowered utility: either subjective displeasure or potentially explicit costs, such as higher medical expenses. The externality may even be seen as a trespass on their health or violating their property rights (by reduced valuation). Thus, an external cost may pose an ethical or political problem. Negative externalities are Pareto inefficient, and since Pareto efficiency underpins the justification for private property, they undermine the whole idea of a market economy. For these reasons, negative externalities are more problematic than positive externalities. Although positive externalities may appear to be beneficial, while Pareto efficient, they still represent a failure in the market as it results in the production of the good falling under what is optimal for the market. By allowing producers to recognise and attempt to control their externalities production would increase as they would have motivation to do so. With this comes the free rider problem. The free rider problem arises when people overuse a shared resource without doing their part to produce or pay for it. It represents a failure in the market where goods and services are not able to be distributed efficiently, allowing people to take more than what is fair. For example, if a farmer has honeybees a positive externality of owning these bees is that they will also pollinate the surrounding plants. This farmer has a next door neighbour who also benefits from this externality even though he does not have any bees himself. From the perspective of the neighbour he has no incentive to purchase bees himself as he is already benefiting from them at zero cost. But for the farmer, he is missing out on the full benefits of his own bees which he paid for, because they are also being used by his neighbour. There are a number of theoretical means of improving overall social utility when negative externalities are involved. The market-driven approach to correcting externalities is to internalize third party costs and benefits, for example, by requiring a polluter to repair any damage caused. But in many cases, internalizing costs or benefits is not feasible, especially if the true monetary values cannot be determined. 
Laissez-faire economists such as Friedrich Hayek and Milton Friedman sometimes refer to externalities as "neighborhood effects" or "spillovers", although externalities are not necessarily minor or localized. Similarly, Ludwig von Mises argues that externalities arise from lack of "clear personal property definition." Examples Externalities may arise between producers, between consumers or between consumers and producers. Externalities can be negative when the action of one party imposes costs on another, or positive when the action of one party benefits another. Negative A negative externality (also called "external cost" or "external diseconomy") is an economic activity that imposes a negative effect on an unrelated third party, not captured by the market price. It can arise either during the production or the consumption of a good or service. Pollution is termed an externality because it imposes costs on people who are "external" to the producer and consumer of the polluting product. Barry Commoner commented on the costs of externalities: "Clearly, we have compiled a record of serious failures in recent technological encounters with the environment. In each case, the new technology was brought into use before the ultimate hazards were known. We have been quick to reap the benefits and slow to comprehend the costs." (Barry Commoner, "Frail Reeds in a Harsh World", Natural History, Journal of the American Museum of Natural History, Vol. LXXVIII, No. 2, February 1969, p. 44.) Many negative externalities are related to the environmental consequences of production and use. The article on environmental economics also addresses externalities and how they may be addressed in the context of environmental issues. Negative production externalities Examples of negative production externalities include: Air pollution from burning fossil fuels. This activity causes damage to crops, materials, (historic) buildings and public health. Anthropogenic climate change as a consequence of greenhouse gas emissions from the burning of fossil fuels and the rearing of livestock. The Stern Review on the Economics of Climate Change says "Climate change presents a unique challenge for economics: it is the greatest example of market failure we have ever seen." Water pollution from industrial effluents, which can harm plants, animals, and humans. Spam emails: the sending of unsolicited messages by email. Government regulation: any costs required to comply with a law, regulation, or policy, either in terms of time or money, that are not covered by the entity issuing the edict (see also unfunded mandate). Noise pollution during the production process, which may be mentally and psychologically disruptive. Systemic risk: the risks to the overall economy arising from the risks that the banking system takes. A condition of moral hazard can occur in the absence of well-designed banking regulation, or in the presence of badly designed regulation. Negative effects of industrial farm animal production, including "the increase in the pool of antibiotic-resistant bacteria because of the overuse of antibiotics; air quality problems; the contamination of rivers, streams, and coastal waters with concentrated animal waste; animal welfare problems, mainly as a result of the extremely close quarters in which the animals are housed." The depletion of the stock of fish in the ocean due to overfishing.
This is an example of a common property resource, which is vulnerable to the tragedy of the commons in the absence of appropriate environmental governance. In the United States, the cost of storing nuclear waste from nuclear plants for more than 1,000 years (over 100,000 for some types of nuclear waste) is, in principle, included in the cost of the electricity the plant produces in the form of a fee paid to the government and held in the nuclear waste superfund, although much of that fund was spent on Yucca Mountain nuclear waste repository without producing a solution. Conversely, the costs of managing the long-term risks of disposal of chemicals, which may remain hazardous on similar time scales, is not commonly internalized in prices. The USEPA regulates chemicals for periods ranging from 100 years to a maximum of 10,000 years. Negative consumption externalities Examples of negative consumption externalities include: Noise pollution: Sleep deprivation due to a neighbor listening to loud music late at night. Antibiotic resistance, caused by increased usage of antibiotics: Individuals do not consider this efficacy cost when making usage decisions. Government policies proposed to preserve future antibiotic effectiveness include educational campaigns, regulation, Pigouvian taxes, and patents. Passive smoking: Shared costs of declining health and vitality caused by smoking or alcohol abuse. Here, the "cost" is that of providing minimum social welfare. Economists more frequently attribute this problem to the category of moral hazards, the prospect that parties insulated from risk may behave differently from the way they would if they were fully exposed to the risk. For example, individuals with insurance against automobile theft may be less vigilant about locking their cars, because the negative consequences of automobile theft are (partially) borne by the insurance company. Traffic congestion: When more people use public roads, road users experience congestion costs such as more waiting in traffic and longer trip times. Increased road users also increase the likelihood of road accidents. Price increases: Consumption by one party causes prices to rise and therefore makes other consumers worse off, perhaps by preventing, reducing or delaying their consumption. These effects are sometimes called "pecuniary externalities" and are distinguished from "real externalities" or "technological externalities". Pecuniary externalities appear to be externalities, but occur within the market mechanism and are not considered to be a source of market failure or inefficiency, although they may still result in substantial harm to others. Weak public infrastructure, air pollution, climate change, work misallocation, resource requirements and land/space requirements as in the externalities of automobiles. Positive A positive externality (also called "external benefit" or "external economy" or "beneficial externality") is the positive effect an activity imposes on an unrelated third party. Similar to a negative externality, it can arise either on the production side, or on the consumption side. A positive production externality occurs when a firm's production increases the well-being of others but the firm is uncompensated by those others, while a positive consumption externality occurs when an individual's consumption benefits other but the individual is uncompensated by those others. Positive production externalities Examples of positive production externalities A beekeeper who keeps the bees for their honey. 
A side effect or externality associated with such activity is the pollination of surrounding crops by the bees. The value generated by the pollination may be more important than the value of the harvested honey. The corporate development of some free software (studied notably by Jean Tirole and Steven Weber) Research and development, since much of the economic benefits of research are not captured by the originating firm. An industrial company providing first aid classes for employees to increase on-the-job safety. This may also save lives outside the factory. Restored historic buildings may encourage more people to visit the area and patronize nearby businesses. A foreign firm that demonstrates up-to-date technologies to local firms and improves their productivity. Public transport can increase economic welfare by providing transit services to other economic activities, although the benefits of those other economic activities are not felt by the operator; it can also decrease the negative externalities of increasing road patronage in the absence of a congestion charge. Positive consumption externalities Examples of positive consumption externalities include: An individual who maintains an attractive house may confer benefits to neighbors in the form of increased market values for their properties. This is an example of a pecuniary externality, because the positive spillover is accounted for in market prices. In this case, house prices in the neighborhood will increase to match the increased real estate value from maintaining their aesthetic (such as by mowing the lawn, keeping the trash orderly, and getting the house painted). Anything that reduces the rate of transmission of an infectious disease carries positive externalities. This includes vaccines, quarantine, tests and other diagnostic procedures. For airborne infections, it also includes masking. For waterborne diseases, it includes improved sewers and sanitation. (See herd immunity.) Increased education of individuals, as this can lead to broader society benefits in the form of greater economic productivity, a lower unemployment rate, greater household mobility and higher rates of political participation. An individual buying a product that is interconnected in a network (e.g., a smartphone). This will increase the usefulness of such phones to other people who own one. When each new user of a product increases the value of the same product owned by others, the phenomenon is called a network externality or a network effect. Network externalities often have "tipping points" where, suddenly, the product reaches general acceptance and near-universal usage. In an area that does not have a public fire department, homeowners who purchase private fire protection services provide a positive externality to neighboring properties, which are less at risk of the protected neighbor's fire spreading to their (unprotected) house. Collective solutions or public policies are implemented to regulate activities with positive or negative externalities. Positional The sociological basis of positional externalities is rooted in the theories of conspicuous consumption and positional goods. Conspicuous consumption (originally articulated by Veblen, 1899) refers to the consumption of goods or services primarily for the purpose of displaying social status or wealth. In simpler terms, individuals engage in conspicuous consumption to signal their economic standing or to gain social recognition.
Positional goods (introduced by Hirsch, 1977) are goods whose value is heavily contingent upon how they compare to similar goods owned by others. Their desirability, or derived utility, is intrinsically tied to their relative scarcity or exclusivity within a particular social context. The economic concept of positional externalities originates from Duesenberry's Relative Income Hypothesis. This hypothesis challenges the conventional microeconomic model, as outlined by the Common Pool Resource (CPR) mechanism, which typically assumes that an individual's utility derived from consuming a particular good or service remains unaffected by others' consumption choices. Instead, Duesenberry posits that individuals gauge the utility of their consumption based on a comparison with other consumption bundles, thus introducing the notion of relative income into economic analysis. Consequently, the consumption of positional goods becomes highly sought after, as it directly impacts one's perceived status relative to others in their social circle. Example: consider a scenario where individuals within a social group vie for the latest luxury cars. As one member acquires a top-of-the-line vehicle, others may feel compelled to upgrade their own cars to preserve their status within the group. This cycle of competitive consumption can result in inefficient allocation of resources and exacerbate income inequality within society. The consumption of positional goods engenders negative externalities, wherein the acquisition of such goods by one individual diminishes the utility or value of similar goods held by others within the same reference group. This positional externality can lead to a cascade of overconsumption, as individuals strive to maintain or improve their relative position through excessive spending. Positional externalities are related to, but not the same as, pecuniary externalities. Pecuniary Pecuniary externalities are those which affect a third party's profit but not their ability to produce or consume. These externalities "occur when new purchases alter the relevant context within which an existing positional good is evaluated." Robert H. Frank gives the following example: if some job candidates begin wearing expensive custom-tailored suits, a side effect of their action is that other candidates become less likely to make favorable impressions on interviewers. From any individual job seeker's point of view, the best response might be to match the higher expenditures of others, lest her chances of landing the job fall. But this outcome may be inefficient, since when all spend more, each candidate's probability of success remains unchanged. "All may agree that some form of collective restraint on expenditure would be useful." Frank notes that treating positional externalities like other externalities might lead to "intrusive economic and social regulation." He argues, however, that less intrusive and more efficient means of "limiting the costs of expenditure cascades"—i.e., the hypothesized increase in spending of middle-income families beyond their means "because of indirect effects associated with increased spending by top earners"—exist; one such method is the personal income tax. The effect that rising demand has on prices in marketplaces with intense competition is a typical illustration of pecuniary externalities. Prices rise in response to shifts in consumer preferences or income levels, which raise demand for a product and benefit suppliers by increasing sales and profits.
But other customers who now have to pay more for identical goods might also suffer from this price hike. As a result, consumers who were not involved in the initial transaction suffer a monetary externality in the form of diminished buying power, while producers profit from increased prices. Furthermore, markets with economies of scale or network effects may experience pecuniary externalities. For example, when it comes to network products, like social media platforms or communication networks, the more people use the technology or engage in it, the more valuable the product becomes. Consequently, early adopters could gain financially from positive pecuniary externalities such as enhanced network effects or greater resale prices of related products or services. In sum, pecuniary externalities draw attention to the intricate relationships that exist between market players and the effects that market transactions have on distribution. Comprehending pecuniary externalities is essential for assessing market results and formulating policies that advance economic efficiency and equality, even if they might not have the same direct impact on welfare or resource allocation as traditional externalities. Inframarginal The concept of inframarginal externalities was introduced by James Buchanan and Craig Stubblebine in 1962. Inframarginal externalities differ from other externalities in that there is no benefit or loss to the marginal consumer. At the relevant margin to the market, the externality does not affect the consumer and does not cause a market inefficiency. The externality is felt only over the inframarginal range, away from the point where the market clears. These types of externalities do not cause inefficient allocation of resources and do not require policy action. Technological Technological externalities directly affect a firm's production and thereby indirectly influence an individual's consumption and society as a whole; an example is open-source or free software development by corporations. These externalities occur when technology spillovers from the acts of one economic agent impact the production or consumption potential of another agent. Depending on their nature, these spillovers may produce positive or negative externalities. The creation of new technologies that benefit people beyond the original inventor is one instance of positive technological externalities. Let us examine the instance of research and development (R&D) inside the pharmaceutical sector. In addition to possible financial gain, a pharmaceutical company's R&D investment in the creation of a new medicine helps society in other ways. Better health outcomes, higher productivity, and lower healthcare expenses for both people and society at large might result from the new medication. Furthermore, the information created via research and development frequently spreads to other businesses and sectors, promoting additional innovation and economic expansion. For example, biotechnology advances could have uses in agriculture, environmental cleanup, or renewable energy, not just in the pharmaceutical industry. However, technological externalities can also take the form of detrimental spillovers that cost society money. Pollution from industrial manufacturing processes is a prime example. Businesses might not be entirely responsible for the expenses of environmental deterioration if they release toxins into the air or rivers as a result of their production processes. 
Rather, these expenses are shifted to society in the form of decreased quality of life for impacted populations, harm to the environment, and health risks. In addition, workers in some industries may experience job displacement and unemployment as a result of disruptive developments in labor markets brought about by technological improvements. For instance, individuals with outdated skills may lose their jobs as a result of the automation of manufacturing processes through robots and artificial intelligence, causing social and economic unrest in the affected areas. Supply and demand diagram The usual economic analysis of externalities can be illustrated using a standard supply and demand diagram if the externality can be valued in terms of money. An extra supply or demand curve is added, as in the diagrams below. One of the curves is the private cost that consumers pay as individuals for additional quantities of the good, which, in competitive markets, is the marginal private cost. The other curve is the true cost that society as a whole pays for the production and consumption of the good, or the marginal social cost. Similarly, there might be two curves for the demand or benefit of the good. The social demand curve would reflect the benefit to society as a whole, while the normal demand curve reflects the benefit to consumers as individuals and is reflected as effective demand in the market. Which curve is added depends on the type of externality that is described, but not on whether it is positive or negative. Whenever an externality arises on the production side, there will be two supply curves (private and social cost). However, if the externality arises on the consumption side, there will be two demand curves instead (private and social benefit). This distinction is essential when it comes to resolving inefficiencies that are caused by externalities. External costs The graph shows the effects of a negative externality. For example, the steel industry is assumed to be selling in a competitive market – before pollution-control laws were imposed and enforced (e.g. under laissez-faire). The marginal private cost is less than the marginal social or public cost by the amount of the external cost, i.e., the cost of air pollution and water pollution. This is represented by the vertical distance between the two supply curves. It is assumed that there are no external benefits, so that social benefit equals individual benefit. If the consumers only take into account their own private cost, they will end up at price Pp and quantity Qp, instead of the more efficient price Ps and quantity Qs. These latter reflect the idea that the marginal social benefit should equal the marginal social cost, that is, that production should be increased only as long as the marginal social benefit exceeds the marginal social cost. The result is that a free market is inefficient since at the quantity Qp, the social benefit is less than the social cost, so society as a whole would be better off if the goods between Qp and Qs had not been produced. The problem is that people are buying and consuming too much steel. This discussion implies that negative externalities (such as pollution) are more than merely an ethical problem. The problem is one of the disjunctures between marginal private and social costs that are not solved by the free market. It is a problem of societal communication and coordination to balance costs and benefits. 
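The gap between Qp and Qs can be made concrete with a small numerical sketch. The demand and cost curves below are illustrative assumptions, not data from any particular market; a constant marginal external cost stands in for the pollution damage, and a per-unit tax of the same size (the Pigovian tax discussed later in this article) restores the efficient quantity.

```python
# Illustrative linear market with a constant marginal external cost (MEC).
# All numbers are assumptions chosen for clarity, not empirical estimates.

def demand_price(q):          # marginal benefit: inverse demand P = 100 - q
    return 100 - q

def private_mc(q):            # marginal private cost (supply): MPC = 20 + q
    return 20 + q

MEC = 20                      # marginal external cost per unit (e.g. pollution damage)

def social_mc(q):             # marginal social cost = MPC + MEC
    return private_mc(q) + MEC

# Market outcome: demand = marginal private cost  ->  100 - q = 20 + q
q_private = (100 - 20) / 2            # Qp = 40
# Efficient outcome: demand = marginal social cost ->  100 - q = 40 + q
q_social = (100 - 20 - MEC) / 2       # Qs = 30

# Deadweight loss of the unpriced externality: for each unit between Qs and Qp,
# social cost exceeds willingness to pay; with linear curves this is a triangle.
dwl = 0.5 * (q_private - q_social) * MEC

# A per-unit tax equal to MEC shifts the private cost curve up to the social
# cost curve, so the taxed market clears at the efficient quantity Qs.
q_with_tax = (100 - 20 - MEC) / 2

print(q_private, q_social, dwl, q_with_tax)   # 40.0 30.0 100.0 30.0
```

The same template, with an external benefit added to the demand side instead of an external cost added to the supply side, reproduces the vaccination case discussed below.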
This also implies that pollution is not something solved by competitive markets. Some collective solution is needed, such as a court system to allow parties affected by the pollution to be compensated, government intervention banning or discouraging pollution, or economic incentives such as green taxes. External benefits The graph shows the effects of a positive or beneficial externality. For example, the industry supplying smallpox vaccinations is assumed to be selling in a competitive market. The marginal private benefit of getting the vaccination is less than the marginal social or public benefit by the amount of the external benefit (for example, society as a whole is increasingly protected from smallpox by each vaccination, including those who refuse to participate). This marginal external benefit of getting a smallpox shot is represented by the vertical distance between the two demand curves. Assume there are no external costs, so that social cost equals individual cost. If consumers only take into account their own private benefits from getting vaccinations, the market will end up at price Pp and quantity Qp as before, instead of the more efficient price Ps and quantity Qs. This latter again reflect the idea that the marginal social benefit should equal the marginal social cost, i.e., that production should be increased as long as the marginal social benefit exceeds the marginal social cost. The result in an unfettered market is inefficient since at the quantity Qp, the social benefit is greater than the societal cost, so society as a whole would be better off if more goods had been produced. The problem is that people are buying too few vaccinations. The issue of external benefits is related to that of public goods, which are goods where it is difficult if not impossible to exclude people from benefits. The production of a public good has beneficial externalities for all, or almost all, of the public. As with external costs, there is a problem here of societal communication and coordination to balance benefits and costs. This also implies that vaccination is not something solved by competitive markets. The government may have to step in with a collective solution, such as subsidizing or legally requiring vaccine use. If the government does this, the good is called a merit good. Examples include policies to accelerate the introduction of electric vehicles or promote cycling, both of which benefit public health. Causes Externalities often arise from poorly defined property rights. While property rights to some things, such as objects, land, and money can be easily defined and protected, air, water, and wild animals often flow freely across personal and political borders, making it much more difficult to assign ownership. This incentivizes agents to consume them without paying the full cost, leading to negative externalities. Positive externalities similarly accrue from poorly defined property rights. For example, a person who gets a flu vaccination cannot own part of the herd immunity this confers on society, so they may choose not to be vaccinated. When resources are managed poorly or there are no well-defined property rights, externalities frequently result, especially when it comes to common pool resources. Due to their rivalrous usage and non-excludability, common pool resources including fisheries, forests, and grazing areas are vulnerable to abuse and deterioration when access is unrestrained. 
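How unrestricted access dissipates the value of a shared resource can be sketched with a simple open-access model in the spirit of Gordon's fishery analysis; the functional form and the numbers below are assumptions chosen only for illustration, not estimates for any real fishery.

```python
# Stylized open-access fishery, a common illustration of the common-pool problem.
# Parameter values are assumptions for illustration only.

a, b = 100, 1        # total revenue from fishing effort E: R(E) = a*E - b*E**2
c = 20               # cost per unit of effort

# Social optimum: maximize R(E) - c*E  ->  a - 2*b*E - c = 0
E_optimal = (a - c) / (2 * b)          # 40 units of effort

# Open access: entry continues until profit per unit of effort is zero,
# i.e. average revenue equals cost  ->  a - b*E - c = 0
E_open_access = (a - c) / b            # 80 units of effort, twice the optimum

def net_benefit(E):
    return a * E - b * E**2 - c * E

print(E_optimal, net_benefit(E_optimal))          # 40.0 1600.0
print(E_open_access, net_benefit(E_open_access))  # 80.0 0.0  (resource rent fully dissipated)
```

In this sketch open access doubles the effort applied to the fishery and drives the net benefit of the resource to zero, which is the quantitative face of the overuse described here.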
Without clearly defined property rights or efficient management structures, people or organizations may misuse common pool resources without thinking through the long-term effects, which may impose detrimental externalities on other users and society at large. This phenomenon—famously referred to by Garrett Hardin as the "tragedy of the commons"—highlights people's propensity to put their immediate self-interests ahead of the sustainability of shared resources. Imagine, for instance, that there are no rules or limits in place and that several fishers have access to a single fishing area. In order to maintain their way of life and earn income, fishers are motivated to maximize their catches, which eventually causes overfishing and the depletion of fish populations. Fish populations decrease, and as a result, ecosystems are disrupted and the fishing industry experiences financial losses. These consequences have an adverse effect on subsequent generations and other people who depend on the resource. Reducing the externalities linked to common pool resources frequently necessitates the adoption of collaborative management approaches, like community-based management frameworks, tradable permits, and quotas. Communities can lessen the tragedy of the commons and encourage sustainable resource use and conservation for the benefit of current and future generations by establishing property rights or controlling access to shared resources. Another common cause of externalities is the presence of transaction costs. Transaction costs are the costs of making an economic trade. These costs prevent economic agents from making exchanges they should be making. The costs of the transaction outweigh the benefit to the agent. When not all mutually beneficial exchanges occur in a market, that market is inefficient. Without transaction costs, agents could freely negotiate and internalize all externalities. In order to further understand transaction costs, it is crucial to discuss Ronald Coase's methodologies. The standard theory of externalities, which holds that internalizing external costs or benefits requires government action through measures like Pigovian taxes or regulations, has been challenged by Coase. He presents the idea of transaction costs, which include the expenses related to reaching, upholding, and monitoring agreements between parties. In the presence of externalities, transaction costs may hinder the effectiveness of private bargaining and lead to worse-than-ideal outcomes, according to Coase. He does, however, contend that private parties can establish mutually advantageous arrangements to internalize externalities without the involvement of the government, provided that there are minimal transaction costs and clearly defined property rights. To support his claims, Coase uses the example of the distribution of property rights between a farmer and a rancher. Assume there is a negative externality because the farmer's crops are harmed by the rancher's livestock. In a society where property rights are well-defined and transaction costs are minimal, the farmer and rancher can work out a voluntary agreement to settle the dispute. For example, the farmer may invest in preventive measures to lessen the impact, or the rancher could pay the farmer back for the harm the cattle caused. Coase's approach emphasizes how crucial it is to take property rights and transaction costs into account when managing externalities. 
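A small numerical sketch makes the farmer-rancher bargain concrete. The damage and fencing figures are assumptions chosen for illustration, not numbers taken from Coase's paper.

```python
# Stylized farmer-rancher bargain in the spirit of Coase's example.
# The damage and fencing figures are assumptions chosen for illustration.

crop_damage = 100      # annual crop loss if cattle roam freely
fence_cost = 60        # annual cost of a fence that prevents all damage

def outcome(rancher_liable: bool) -> str:
    """With costless bargaining, the cheaper remedy is chosen regardless of
    who holds the right; only the direction of payment changes."""
    if fence_cost < crop_damage:
        payer = "rancher" if rancher_liable else "farmer"
        return f"fence is built; the {payer} bears (at most) its cost"
    return "no fence; the damage is tolerated or compensated instead"

print(outcome(rancher_liable=True))    # fence is built; the rancher bears (at most) its cost
print(outcome(rancher_liable=False))   # fence is built; the farmer bears (at most) its cost
```

Either assignment of rights yields the same efficient allocation (the fence is built because it costs less than the damage); only the distribution of the cost differs. With substantial transaction costs, however, the bargain may never be struck.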
He highlights that voluntary transactions between private parties can allow private parties to internalise externalities and that property rights distribution and transaction cost reduction can help make this possible. Possible solutions Solutions in non-market economies In planned economies, production is typically limited only to necessity, which would eliminate externalities created by overproduction. The central planner can decide to create and allocate jobs in industries that work to mitigate externalities, rather than waiting for the market to create a demand for these jobs. Solutions in market economies There are several general types of solutions to the problem of externalities, including both public- and private-sector resolutions: Corporations or partnerships will allow confidential sharing of information among members, reducing the positive externalities that would occur if the information were shared in an economy consisting only of individuals. Pigovian taxes or subsidies intended to redress economic injustices or imbalances. Regulation to limit activity that might cause negative externalities Government provision of services with positive externalities Lawsuits to compensate affected parties for negative externalities Voting to cause participants to internalize externalities subject to the conditions of the efficient voter rule. Mediation or negotiation between those affected by externalities and those causing them A Pigovian tax (also called Pigouvian tax, after economist Arthur C. Pigou) is a tax imposed that is equal in value to the negative externality. In order to fully correct the negative externality, the per unit tax should equal the marginal external cost. The result is that the market outcome would be reduced to the efficient amount. A side effect is that revenue is raised for the government, reducing the amount of distortionary taxes that the government must impose elsewhere. Governments justify the use of Pigovian taxes saying that these taxes help the market reach an efficient outcome because this tax bridges the gap between marginal social costs and marginal private costs. Some arguments against Pigovian taxes say that the tax does not account for all the transfers and regulations involved with an externality. In other words, the tax only considers the amount of externality produced. Another argument against the tax is that it does not take private property into consideration. Under the Pigovian system, one firm, for example, can be taxed more than another firm, even though the other firm is actually producing greater amounts of the negative externality. Further arguments against Pigou disagree with his assumption every externality has someone at fault or responsible for the damages. Coase argues that externalities are reciprocal in nature. Both parties must be present for an externality to exist. He uses the example of two neighbors. One neighbor possesses a fireplace, and often lights fires in his house without issue. Then one day, the other neighbor builds a wall that prevents the smoke from escaping and sends it back into the fire-building neighbor’s home. This illustrates the reciprocal nature of externalities. Without the wall, the smoke would not be a problem, but without the fire, the smoke would not exist to cause problems in the first place. Coase also takes issue with Pigou’s assumption of a “benevolent despot” government. 
Pigou assumes the government’s role is to see the external costs or benefits of a transaction and assign an appropriate tax or subsidy. Coase argues that the government faces costs and benefits just like any other economic agent, so other factors play into its decision-making. However, the most common type of solution is a tacit agreement through the political process. Governments are elected to represent citizens and to strike political compromises between various interests. Normally governments pass laws and regulations to address pollution and other types of environmental harm. These laws and regulations can take the form of "command and control" regulation (such as enforcing standards and limiting process variables), or environmental pricing reform (such as ecotaxes or other Pigovian taxes, tradable pollution permits or the creation of markets for ecological services). The second type of resolution is a purely private agreement between the parties involved. Government intervention might not always be needed. Traditional ways of life may have evolved as ways to deal with external costs and benefits. Alternatively, democratically run communities can agree to deal with these costs and benefits in an amicable way. Externalities can sometimes be resolved by agreement between the parties involved. This resolution may even come about because of the threat of government action. The use of taxes and subsidies in solving the problem of externalities Correction tax, respectively subsidy, means essentially any mechanism that increases, respectively decreases, the costs (and thus price) associated with the activities of an individual or company. The private-sector may sometimes be able to drive society to the socially optimal resolution. Ronald Coase argued that an efficient outcome can sometimes be reached without government intervention. Some take this argument further, and make the political argument that government should restrict its role to facilitating bargaining among the affected groups or individuals and to enforcing any contracts that result. This result, often known as the Coase theorem, requires that Property rights be well-defined People act rationally Transaction costs be minimal (costless bargaining) Complete information If all of these conditions apply, the private parties can bargain to solve the problem of externalities. The second part of the Coase theorem asserts that, when these conditions hold, whoever holds the property rights, a Pareto efficient outcome will be reached through bargaining. This theorem would not apply to the steel industry case discussed above. For example, with a steel factory that trespasses on the lungs of a large number of individuals with pollution, it is difficult if not impossible for any one person to negotiate with the producer, and there are large transaction costs. Hence the most common approach may be to regulate the firm (by imposing limits on the amount of pollution considered "acceptable") while paying for the regulation and enforcement with taxes. The case of the vaccinations would also not satisfy the requirements of the Coase theorem. Since the potential external beneficiaries of vaccination are the people themselves, the people would have to self-organize to pay each other to be vaccinated. But such an organization that involves the entire populace would be indistinguishable from government action. In some cases, the Coase theorem is relevant. 
For example, if a logger is planning to clear-cut a forest in a way that has a negative impact on a nearby resort, the resort-owner and the logger could, in theory, get together to agree to a deal. For example, the resort-owner could pay the logger not to clear-cut – or could buy the forest. The most problematic situation, from Coase's perspective, occurs when the forest literally does not belong to anyone, or in any example in which there are not well-defined and enforceable property rights; the question of "who" owns the forest is not important, as any specific owner will have an interest in coming to an agreement with the resort owner (if such an agreement is mutually beneficial). However, the Coase theorem is difficult to implement because Coase does not offer a negotiation method. Moreover, Coasian solutions are unlikely to be reached due to the possibility of running into the assignment problem, the holdout problem, the free-rider problem, or transaction costs. Additionally, firms could potentially bribe each other since there is little to no government interaction under the Coase theorem. For example, if one oil firm has a high pollution rate and its neighboring firm is bothered by the pollution, then the latter firm may move depending on incentives. Thus, if the oil firm were to bribe the second firm, the first oil firm would suffer no negative consequences because the government would not know about the bribing. In a dynamic setup, Rosenkranz and Schmitz (2007) have shown that the impossibility to rule out Coasean bargaining tomorrow may actually justify Pigouvian intervention today. To see this, note that unrestrained bargaining in the future may lead to an underinvestment problem (the so-called hold-up problem). Specifically, when investments are relationship-specific and non-contractible, then insufficient investments will be made when it is anticipated that parts of the investments’ returns will go to the trading partner in future negotiations (see Hart and Moore, 1988). Hence, Pigouvian taxation can be welfare-improving precisely because Coasean bargaining will take place in the future. Antràs and Staiger (2012) make a related point in the context of international trade. Kenneth Arrow suggests another private solution to the externality problem. He believes setting up a market for the externality is the answer. For example, suppose a firm produces pollution that harms another firm. A competitive market for the right to pollute may allow for an efficient outcome. Firms could bid the price they are willing to pay for the amount they want to pollute, and then have the right to pollute that amount without penalty. This would allow firms to pollute at the amount where the marginal cost of polluting equals the marginal benefit of another unit of pollution, thus leading to efficiency. Frank Knight also argued against government intervention as the solution to externalities. He proposed that externalities could be internalized with privatization of the relevant markets. He uses the example of road congestion to make his point. Congestion could be solved through the taxation of public roads. Knight shows that government intervention is unnecessary if roads were privately owned instead. If roads were privately owned, their owners could set tolls that would reduce traffic and thus congestion to an efficient level. This argument forms the basis of the traffic equilibrium. This argument supposes that two points are connected by two different highways. 
One highway is in poor condition, but is wide enough to fit all traffic that desires to use it. The other is a much better road, but has limited capacity. Knight argues that, if a large number of vehicles operate between the two destinations and have freedom to choose between the routes, they will distribute themselves in proportions such that the cost per unit of transportation will be the same for every truck on both highways. This is true because as more trucks use the narrow road, congestion develops and as congestion increases it becomes equally profitable to use the poorer highway. This solves the externality issue without requiring any government tax or regulations. Solutions to greenhouse gas emission externalities The negative effects of carbon emissions and other greenhouse gases produced in production exacerbate the numerous environmental and human impacts of anthropogenic climate change. These negative effects are not reflected in the cost of production, nor in the market price of the final goods. There are many public and private solutions proposed to combat this externality. Emissions fee An emissions fee, or carbon tax, is a tax levied on each unit of pollution produced in the production of a good or service. The tax incentivizes producers either to lower their production levels or to undertake abatement activities that reduce emissions by switching to cleaner technology or inputs. Cap-and-trade systems The cap-and-trade system enables the efficient level of pollution (determined by the government) to be achieved by setting a total quantity of emissions and issuing tradable permits to polluting firms, allowing them to pollute a certain share of the permissible level. Permits will be traded from firms that have low abatement costs to firms with higher abatement costs and therefore the system is both cost-effective and cost-efficient. The cap-and-trade system has some practical advantages over an emissions fee, such as the fact that: (1) it reduces uncertainty about the ultimate pollution level; (2) if firms are profit-maximizing, they will utilize cost-minimizing technology to attain the standard, which is efficient for individual firms and provides incentives for the research and development market to innovate; and (3) the market price of pollution rights would keep pace with the price level while the economy experiences inflation. The emissions fee and cap-and-trade systems are both incentive-based approaches to solving a negative externality problem. Command-and-control regulations Command-and-control regulations act as an alternative to the incentive-based approach. They require a set quantity of pollution reduction and can take the form of either a technology standard or a performance standard. A technology standard requires pollution-producing firms to use specified technology. While it may reduce pollution, it is not cost-effective and stifles innovation by removing the incentive to research and develop technology that would work better than the mandated one. Performance standards set emissions goals for each polluting firm. The free choice of the firm to determine how to reach the desired emissions level makes this option slightly more efficient than the technology standard; however, it is not as cost-effective as the cap-and-trade system, since the burden of emissions reduction cannot be shifted to firms with lower abatement costs. 
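The cost-effectiveness claim behind cap-and-trade can be checked with a two-firm sketch. The abatement cost curves below are assumptions chosen only to show the mechanism, not estimates for any real industry.

```python
# Why tradable permits are cost-effective: abatement is reallocated toward the
# firm that can abate cheaply. Cost figures are illustrative assumptions.

# Marginal abatement cost (per tonne) rises linearly with tonnes abated.
def mac_low(x):   # firm with cheap abatement opportunities
    return 2 * x

def mac_high(x):  # firm with expensive abatement opportunities
    return 6 * x

def total_cost(slope, x):      # area under a linear marginal abatement cost curve
    return 0.5 * slope * x**2

required_abatement = 40        # total reduction demanded by the cap

# Uniform command-and-control standard: each firm abates 20 tonnes.
uniform = total_cost(2, 20) + total_cost(6, 20)            # 400 + 1200 = 1600

# Trading: permits flow until marginal abatement costs are equal,
# 2*x_low = 6*x_high with x_low + x_high = 40  ->  x_low = 30, x_high = 10.
x_low, x_high = 30, 10
trading = total_cost(2, x_low) + total_cost(6, x_high)     # 900 + 300 = 1200

print(uniform, trading)        # 1600.0 1200.0 -> same total abatement, lower total cost
```

The cap is met either way; trading simply shifts abatement toward the firm with the lower marginal abatement cost until the two marginal costs are equal, which is why the total bill falls.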
Scientific calculation of external costs A 2020 scientific analysis of the external climate costs of foods indicates that external greenhouse gas costs are typically highest for animal-based products – conventional and organic to about the same extent within that ecosystem-subdomain – followed by conventional dairy products, and lowest for organic plant-based foods, and concludes that contemporary monetary evaluations are "inadequate" and that policy-making leading to reductions of these costs is possible, appropriate and urgent. Criticism Ecological economics criticizes the concept of externality because there is not enough system thinking and integration of different sciences in the concept. Ecological economics is founded upon the view that the neoclassical economics (NCE) assumption that environmental and community costs and benefits are mutually cancelling "externalities" is not warranted. Joan Martinez Alier, for instance, shows that the bulk of consumers are automatically excluded from having an impact upon the prices of commodities, as these consumers are future generations who have not been born yet. The assumptions behind future discounting, which assume that future goods will be cheaper than present goods, have been criticized by Fred Pearce and by the Stern Report (although the Stern report itself does employ discounting and has been criticized for this and other reasons by ecological economists such as Clive Spash). Concerning these externalities, some, like the eco-businessman Paul Hawken, argue an orthodox economic line that the only reason why goods produced unsustainably are usually cheaper than goods produced sustainably is a hidden subsidy, paid by the non-monetized human environment, community or future generations. These arguments are developed further by Hawken, Amory Lovins and Hunter Lovins to promote their vision of an environmental capitalist utopia in Natural Capitalism: Creating the Next Industrial Revolution. In contrast, ecological economists, like Joan Martinez-Alier, appeal to a different line of reasoning. Rather than assuming some (new) form of capitalism is the best way forward, an older ecological economic critique questions the very idea of internalizing externalities as providing some corrective to the current system. The work by Karl William Kapp argues that the concept of "externality" is a misnomer. In fact, the modern business enterprise operates on the basis of shifting costs onto others as normal practice to make profits. Charles Eisenstein has argued that this method of privatising profits while socialising the costs through externalities, passing the costs to the community, to the natural environment or to future generations, is inherently destructive. Social ecological economist Clive Spash argues that externality theory fallaciously assumes environmental and social problems are minor aberrations in an otherwise perfectly functioning efficient economic system. Internalizing the odd externality does nothing to address the structural systemic problem and fails to recognize the all-pervasive nature of these supposed 'externalities'. This is precisely why heterodox economists argue for a heterodox theory of social costs to effectively prevent the problem through the precautionary principle. See also References Further reading Anderson, David A. (2019) Environmental Economics and Natural Resource Management, 5th ed., New York: Routledge. Berger, Sebastian (2017) The Social Costs of Neoliberalism: Essays in the Economics of K. William Kapp. 
Nottingham: Spokesman. Berger, Sebastian (ed) (2015) The Heterodox Theory of Social Costs - by K. William Kapp. London: Routledge. Johnson, Paul M. Definition "A Glossary of Economic Terms" Jean-Jacques Laffont (2008) Externalities. In: Palgrave Macmillan (eds) The New Palgrave Dictionary of Economics. Palgrave Macmillan, London External links ExternE – European Union project to evaluate external costs Econ 120 – Externalities Environmental economics Market failure Welfare economics Public economics Inefficiency in game theory
Externality
[ "Mathematics", "Environmental_science" ]
10,502
[ "Environmental economics", "Game theory", "Inefficiency in game theory", "Environmental social science" ]
61,213
https://en.wikipedia.org/wiki/Laurent%20series
In mathematics, the Laurent series of a complex function is a representation of that function as a power series which includes terms of negative degree. It may be used to express complex functions in cases where a Taylor series expansion cannot be applied. The Laurent series was named after and first published by Pierre Alphonse Laurent in 1843. Karl Weierstrass had previously described it in a paper written in 1841 but not published until 1894. Definition The Laurent series for a complex function about an arbitrary point is given by where the coefficients are defined by a contour integral that generalizes Cauchy's integral formula: The path of integration is counterclockwise around a Jordan curve enclosing and lying in an annulus in which is holomorphic (analytic). The expansion for will then be valid anywhere inside the annulus. The annulus is shown in red in the figure on the right, along with an example of a suitable path of integration labeled . When is defined as the circle , where , this amounts to computing the complex Fourier coefficients of the restriction of to . The fact that these integrals are unchanged by a deformation of the contour is an immediate consequence of Green's theorem. One may also obtain the Laurent series for a complex function at . However, this is the same as when . In practice, the above integral formula may not offer the most practical method for computing the coefficients for a given function ; instead, one often pieces together the Laurent series by combining known Taylor expansions. Because the Laurent expansion of a function is unique whenever it exists, any expression of this form that equals the given function in some annulus must actually be the Laurent expansion of . Convergence Laurent series with complex coefficients are an important tool in complex analysis, especially to investigate the behavior of functions near singularities. Consider for instance the function with . As a real function, it is infinitely differentiable everywhere; as a complex function however it is not differentiable at . The Laurent series of is obtained via the power series representation, which converges to for all except at the singularity . The graph on the right shows in black and its Laurent approximations As , the approximation becomes exact for all (complex) numbers except at the singularity . More generally, Laurent series can be used to express holomorphic functions defined on an annulus, much as power series are used to express holomorphic functions defined on a disc. Suppose is a given Laurent series with complex coefficients and a complex center . Then there exists a unique inner radius and outer radius such that: The Laurent series converges on the open annulus . That is, both the positive- and negative degree power series converge. Furthermore, this convergence will be uniform on compact sets. Finally, the convergent series defines a holomorphic function on . Outside the annulus, the Laurent series diverges. That is, at each point in the exterior of , either the positive- or negative degree power series diverges. On the boundary of the annulus, one cannot make a general statement, except that there is at least one point on the inner boundary and one point on the outer boundary such that cannot be holomorphically extended to those points; giving rise to a Riemann-Hilbert problem. It is possible that may be zero or may be infinite; at the other extreme, it's not necessarily true that is less than . 
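Written out in the standard notation, the expansion about the point c and the contour-integral formula for its coefficients are:

```latex
f(z) \;=\; \sum_{n=-\infty}^{\infty} a_n \,(z-c)^n ,
\qquad
a_n \;=\; \frac{1}{2\pi i}\oint_{\gamma} \frac{f(z)}{(z-c)^{n+1}}\, dz ,
```

with the integral taken counterclockwise over a path γ enclosing c and lying in the annulus of holomorphy, as described above.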
These radii can be computed by taking the limit superior of the coefficients such that: When , the coefficient of the Laurent expansion is called the residue of at the singularity . For example, the function is holomorphic everywhere except at . The Laurent expansion about can then be obtained from the power series representation: hence, the residue is given by . Conversely, for a holomorphic function defined on the annulus , there always exists a unique Laurent series with center which converges (at least on ) to . For example, consider the following rational function, along with its partial fraction expansion: This function has singularities at and , where the denominator is zero and the expression is therefore undefined. A Taylor series about (which yields a power series) will only converge in a disc of radius 1, since it "hits" the singularity at . However, there are three possible Laurent expansions about 0, depending on the radius of : One series is defined on the inner disc where ; it is the same as the Taylor series, This follows from the partial fraction form of the function, along with the formula for the sum of a geometric series, for . The second series is defined on the middle annulus where is caught between the two singularities: Here, we use the alternative form of the geometric series summation, for . The third series is defined on the infinite outer annulus where , (which is also the Laurent expansion at ) This series can be derived using geometric series as before, or by performing polynomial long division of 1 by , not stopping with a remainder but continuing into terms; indeed, the "outer" Laurent series of a rational function is analogous to the decimal form of a fraction. (The "inner" Taylor series expansion can be obtained similarly, just by reversing the term order in the division algorithm.) Uniqueness Suppose a function holomorphic on the annulus has two Laurent series: Multiply both sides by , where k is an arbitrary integer, and integrate on a path γ inside the annulus, The series converges uniformly on , where ε is a positive number small enough for γ to be contained in the constricted closed annulus, so the integration and summation can be interchanged. Substituting the identity into the summation yields Hence the Laurent series is unique. Laurent polynomials A Laurent polynomial is a Laurent series in which only finitely many coefficients are non-zero. Laurent polynomials differ from ordinary polynomials in that they may have terms of negative degree. Principal part The principal part of a Laurent series is the series of terms with negative degree, that is If the principal part of is a finite sum, then has a pole at of order equal to (negative) the degree of the highest term; on the other hand, if has an essential singularity at , the principal part is an infinite sum (meaning it has infinitely many non-zero terms). If the inner radius of convergence of the Laurent series for is 0, then has an essential singularity at if and only if the principal part is an infinite sum, and has a pole otherwise. If the inner radius of convergence is positive, may have infinitely many negative terms but still be regular at , as in the example above, in which case it is represented by a different Laurent series in a disk about . 
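As a concrete computation, take f(z) = 1/(z(z − 1)), a function with simple poles at 0 and 1; the choice is made here purely for illustration. A computer algebra system recovers the expansion, the principal part, and the residue directly, and the substitution z = 1/w produces the expansion on the outer annulus:

```python
# Laurent expansion, principal part and residue of f(z) = 1/(z*(z - 1)) about z = 0.
# The function is chosen for illustration; any function meromorphic at the point works.
import sympy as sp

z = sp.symbols('z')
f = 1 / (z * (z - 1))

# Series expansion about z = 0; for a function with a pole, sympy returns the
# Laurent terms, including those of negative degree.
expansion = sp.series(f, z, 0, 4)
print(expansion)              # -1/z - 1 - z - z**2 - z**3 + O(z**4)  (term order may differ)

# The principal part is the single term -1/z, so f has a simple pole at 0
# and the residue (the coefficient of 1/z) is -1.
print(sp.residue(f, z, 0))    # -1

# On the outer annulus |z| > 1 the roles reverse: expanding in powers of 1/z
# gives 1/z**2 + 1/z**3 + ..., a Laurent series with no positive-degree terms.
w = sp.symbols('w')
outer = sp.series(f.subs(z, 1/w), w, 0, 5)   # substitute z = 1/w and expand near w = 0
print(outer)                  # w**2 + w**3 + w**4 + O(w**5)
```

The finite principal part here signals an ordinary pole; an essential singularity would instead produce infinitely many negative-degree terms, as discussed above.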
Laurent series with only finitely many negative terms are well-behaved—they are a power series divided by , and can be analyzed similarly—while Laurent series with infinitely many negative terms have complicated behavior on the inner circle of convergence. Multiplication and sum Laurent series cannot in general be multiplied. Algebraically, the expression for the terms of the product may involve infinite sums which need not converge (one cannot take the convolution of integer sequences). Geometrically, the two Laurent series may have non-overlapping annuli of convergence. Two Laurent series with only finitely many negative terms can be multiplied: algebraically, the sums are all finite; geometrically, these have poles at , and inner radius of convergence 0, so they both converge on an overlapping annulus. Thus when defining formal Laurent series, one requires Laurent series with only finitely many negative terms. Similarly, the sum of two convergent Laurent series need not converge, though it is always defined formally, but the sum of two bounded below Laurent series (or any Laurent series on a punctured disk) has a non-empty annulus of convergence. Also, for a field , by the sum and multiplication defined above, formal Laurent series would form a field which is also the field of fractions of the ring of formal power series. See also Puiseux series Mittag-Leffler's theorem Formal Laurent series Laurent series considered formally, with coefficients from an arbitrary commutative ring, without regard for convergence, and with only finitely many negative terms, so that multiplication is always defined. Z-transform the special case where the Laurent series is taken about zero has much use in time-series analysis. Fourier series the substitution transforms a Laurent series into a Fourier series, or conversely. This is used in the q-series expansion of the j-invariant. Padé approximant Another technique used when a Taylor series is not viable. Notes References External links Laurent Series and Mandelbrot set by Robert Munafo Complex analysis Series expansions
Laurent series
[ "Mathematics" ]
1,769
[ "Series expansions", "Mathematical relations", "Approximations", "Algebra" ]
61,220
https://en.wikipedia.org/wiki/Spintronics
Spintronics (a portmanteau meaning spin transport electronics), also known as spin electronics, is the study of the intrinsic spin of the electron and its associated magnetic moment, in addition to its fundamental electronic charge, in solid-state devices. The field of spintronics concerns spin-charge coupling in metallic systems; the analogous effects in insulators fall into the field of multiferroics. Spintronics fundamentally differs from traditional electronics in that, in addition to charge state, electron spins are used as a further degree of freedom, with implications in the efficiency of data storage and transfer. Spintronic systems are most often realised in dilute magnetic semiconductors (DMS) and Heusler alloys and are of particular interest in the field of quantum computing and neuromorphic computing. History Spintronics emerged from discoveries in the 1980s concerning spin-dependent electron transport phenomena in solid-state devices. This includes the observation of spin-polarized electron injection from a ferromagnetic metal to a normal metal by Johnson and Silsbee (1985) and the discovery of giant magnetoresistance independently by Albert Fert et al. and Peter Grünberg et al. (1988). The origin of spintronics can be traced to the ferromagnet/superconductor tunneling experiments pioneered by Meservey and Tedrow and initial experiments on magnetic tunnel junctions by Julliere in the 1970s. The use of semiconductors for spintronics began with the theoretical proposal of a spin field-effect-transistor by Datta and Das in 1990 and of the electric dipole spin resonance by Rashba in 1960. Theory The spin of the electron is an intrinsic angular momentum that is separate from the angular momentum due to its orbital motion. The magnitude of the projection of the electron's spin along an arbitrary axis is , implying that the electron acts as a fermion by the spin-statistics theorem. Like orbital angular momentum, the spin has an associated magnetic moment, the magnitude of which is expressed as . In a solid, the spins of many electrons can act together to affect the magnetic and electronic properties of a material, for example endowing it with a permanent magnetic moment as in a ferromagnet. In many materials, electron spins are equally present in both the up and the down state, and no transport properties are dependent on spin. A spintronic device requires generation or manipulation of a spin-polarized population of electrons, resulting in an excess of spin up or spin down electrons. The polarization of any spin dependent property X can be written as . A net spin polarization can be achieved either through creating an equilibrium energy split between spin up and spin down. Methods include putting a material in a large magnetic field (Zeeman effect), the exchange energy present in a ferromagnet or forcing the system out of equilibrium. The period of time that such a non-equilibrium population can be maintained is known as the spin lifetime, . In a diffusive conductor, a spin diffusion length can be defined as the distance over which a non-equilibrium spin population can propagate. Spin lifetimes of conduction electrons in metals are relatively short (typically less than 1 nanosecond). An important research area is devoted to extending this lifetime to technologically relevant timescales. The mechanisms of decay for a spin polarized population can be broadly classified as spin-flip scattering and spin dephasing. 
Spin-flip scattering is a process inside a solid that does not conserve spin, and can therefore switch an incoming spin up state into an outgoing spin down state. Spin dephasing is the process wherein a population of electrons with a common spin state becomes less polarized over time due to different rates of electron spin precession. In confined structures, spin dephasing can be suppressed, leading to spin lifetimes of milliseconds in semiconductor quantum dots at low temperatures. Superconductors can enhance central effects in spintronics such as magnetoresistance effects, spin lifetimes and dissipationless spin-currents. The simplest method of generating a spin-polarised current in a metal is to pass the current through a ferromagnetic material. The most common applications of this effect involve giant magnetoresistance (GMR) devices. A typical GMR device consists of at least two layers of ferromagnetic materials separated by a spacer layer. When the two magnetization vectors of the ferromagnetic layers are aligned, the electrical resistance will be lower (so a higher current flows at constant voltage) than if the ferromagnetic layers are anti-aligned. This constitutes a magnetic field sensor. Two variants of GMR have been applied in devices: (1) current-in-plane (CIP), where the electric current flows parallel to the layers and (2) current-perpendicular-to-plane (CPP), where the electric current flows in a direction perpendicular to the layers. Other metal-based spintronics devices: Tunnel magnetoresistance (TMR), where CPP transport is achieved by using quantum-mechanical tunneling of electrons through a thin insulator separating ferromagnetic layers. Spin-transfer torque, where a current of spin-polarized electrons is used to control the magnetization direction of ferromagnetic electrodes in the device. Spin-wave logic devices carry information in the phase. Interference and spin-wave scattering can perform logic operations. Spintronic-logic devices Non-volatile spin-logic devices to enable scaling are being extensively studied. Spin-transfer, torque-based logic devices that use spins and magnets for information processing have been proposed. These devices are part of the ITRS exploratory road map. Logic-in memory applications are already in the development stage. A 2017 review article can be found in Materials Today. A generalized circuit theory for spintronic integrated circuits has been proposed so that the physics of spin transport can be utilized by SPICE developers and subsequently by circuit and system designers for the exploration of spintronics for “beyond CMOS computing.” Applications Read heads of magnetic hard drives are based on the GMR or TMR effect. Motorola developed a first-generation 256 kb magnetoresistive random-access memory (MRAM) based on a single magnetic tunnel junction and a single transistor that has a read/write cycle of under 50 nanoseconds. Everspin has since developed a 4 Mb version. Two second-generation MRAM techniques are in development: thermal-assisted switching (TAS) and spin-transfer torque (STT). Another design, racetrack memory, a novel memory architecture proposed by Dr. Stuart S. P. Parkin, encodes information in the direction of magnetization between domain walls of a ferromagnetic wire. In 2012, persistent spin helices of synchronized electrons were made to persist for more than a nanosecond, a 30-fold increase over earlier efforts, and longer than the duration of a modern processor clock cycle. 
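Returning to the GMR devices described above, the size of the effect in the simplest picture can be estimated with the two-current (Mott) resistor model: the spin-up and spin-down channels conduct in parallel, and each ferromagnetic layer presents a different resistance to majority and minority spins. The sketch below implements that textbook model; the resistance values are arbitrary assumptions, not measured device parameters.

```python
# Two-current (Mott) resistor model of a GMR spin valve (illustrative values only).

def gmr_ratio(r_majority: float, r_minority: float) -> float:
    """Return (R_AP - R_P) / R_P for two identical ferromagnetic layers.

    r_majority / r_minority: resistance one layer presents to majority /
    minority spin electrons. Spin-up and spin-down channels conduct in parallel.
    """
    # Parallel alignment: one channel sees 2*r_majority, the other 2*r_minority.
    r_parallel = (2 * r_majority * 2 * r_minority) / (2 * r_majority + 2 * r_minority)
    # Antiparallel alignment: each channel sees r_majority + r_minority.
    r_antiparallel = (r_majority + r_minority) / 2
    return (r_antiparallel - r_parallel) / r_parallel

# Example with a strongly spin-asymmetric layer (assumed values, arbitrary units)
print(gmr_ratio(1.0, 5.0))   # about 0.8 -> roughly 80% resistance change between alignments
```

In this picture the magnetoresistance ratio reduces to (r_majority − r_minority)² / (4 · r_majority · r_minority), so the effect vanishes when the two spin channels see identical resistances and grows with the spin asymmetry of the layers.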
Semiconductor-based spintronic devices Doped semiconductor materials display dilute ferromagnetism. In recent years, dilute magnetic oxides (DMOs) including ZnO-based DMOs and TiO2-based DMOs have been the subject of numerous experimental and computational investigations. Spin injection can also be pursued with non-oxide ferromagnetic semiconductor sources (like manganese-doped gallium arsenide), by increasing the interface resistance with a tunnel barrier, or by using hot-electron injection. Spin detection in semiconductors has been addressed with multiple techniques: Faraday/Kerr rotation of transmitted/reflected photons Circular polarization analysis of electroluminescence Nonlocal spin valve (adapted from Johnson and Silsbee's work with metals) Ballistic spin filtering The latter technique was used to overcome the lack of spin-orbit interaction and materials issues to achieve spin transport in silicon. Because external magnetic fields (and stray fields from magnetic contacts) can cause large Hall effects and magnetoresistance in semiconductors (which mimic spin-valve effects), the only conclusive evidence of spin transport in semiconductors is demonstration of spin precession and dephasing in a magnetic field non-collinear to the injected spin orientation, called the Hanle effect. Applications Applications using spin-polarized electrical injection have shown threshold current reduction and controllable circularly polarized coherent light output. Examples include semiconductor lasers. Future applications may include a spin-based transistor having advantages over MOSFET devices such as steeper sub-threshold slope. Magnetic-tunnel transistor: The magnetic-tunnel transistor with a single base layer has the following terminals: Emitter (FM1): Injects spin-polarized hot electrons into the base. Base (FM2): Spin-dependent scattering takes place in the base. It also serves as a spin filter. Collector (GaAs): A Schottky barrier is formed at the interface. It only collects electrons that have enough energy to overcome the Schottky barrier, and when states are available in the semiconductor. The magnetocurrent (MC) is given as , and the transfer ratio (TR) is . The MTT promises a highly spin-polarized electron source at room temperature. Storage media Antiferromagnetic storage media have been studied as an alternative to ferromagnetism, especially since with antiferromagnetic material the bits can be stored as well as with ferromagnetic material. Instead of the usual definition 0 ↔ 'magnetisation upwards', 1 ↔ 'magnetisation downwards', the states can be, e.g., 0 ↔ 'vertically-alternating spin configuration' and 1 ↔ 'horizontally-alternating spin configuration'. The main advantages of antiferromagnetic material are: insensitivity to data-damaging perturbations by stray fields due to zero net external magnetization; no effect on nearby particles, implying that antiferromagnetic device elements would not magnetically disturb their neighboring elements; far shorter switching times (antiferromagnetic resonance frequency is in the THz range compared to GHz ferromagnetic resonance frequency); broad range of commonly available antiferromagnetic materials including insulators, semiconductors, semimetals, metals, and superconductors. Research is being done into how to read and write information to antiferromagnetic spintronics as their net zero magnetization makes this difficult compared to conventional ferromagnetic spintronics. 
In modern MRAM, detection and manipulation of ferromagnetic order by magnetic fields has largely been abandoned in favor of more efficient and scalable reading and writing by electrical current. Methods of reading and writing information by current rather than fields are also being investigated in antiferromagnets as fields are ineffective anyway. Writing methods currently being investigated in antiferromagnets are through spin-transfer torque and spin-orbit torque from the spin Hall effect and the Rashba effect. Reading information in antiferromagnets via magnetoresistance effects such as tunnel magnetoresistance is also being explored. See also Stuart S. P. Parkin Electric dipole spin resonance Josephson effect Magnetoresistive random-access memory (MRAM) Magnonics Potential applications of graphene#Spintronics Rashba effect Spin pumping Spin-transfer torque Spinhenge@Home Spinmechatronics Spinplasmonics Unconventional computing Valleytronics List of emerging technologies Multiferroics References Further reading "Introduction to Spintronics". Marc Cahay, Supriyo Bandyopadhyay, CRC Press, "Spintronics Steps Forward.", University of South Florida News External links 23 milestones in the history of spin compiled by Nature Milestone 18: A Giant Leap for Electronics: Giant Magneto-resistance, compiled by Nature Milestone 20: Information in a Spin: Datta-Das, compiled by Nature Spintronics portal with news and resources RaceTrack:InformationWeek (April 11, 2008) Spintronics research targets GaAs. Spintronics Tutorial Lecture on Spin transport by S. Datta (from Datta Das transistor)—Part 1 and Part 2 Electronics Quantum electronics Condensed matter physics Theoretical computer science Non-volatile memory Solid-state computer storage
Spintronics
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
2,534
[ "Quantum electronics", "Theoretical computer science", "Applied mathematics", "Spintronics", "Phases of matter", "Quantum mechanics", "Materials science", "Condensed matter physics", "Nanotechnology", "Matter" ]