Dataset schema:
id: int64 (39 to 79M)
url: string (lengths 31 to 227)
text: string (lengths 6 to 334k)
source: string (lengths 1 to 150)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 71.8k)
subcategories: list (lengths 0 to 30)
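Assuming this dump follows a Hugging Face datasets layout, as the schema above suggests, the records can be loaded and filtered programmatically. A minimal sketch follows; the dataset identifier "example-user/wiki-categorized" is a hypothetical placeholder, not the real name of this dataset.

```python
# Minimal sketch of loading a dataset with the schema above via the
# Hugging Face `datasets` library. The dataset name is a hypothetical
# placeholder; substitute the actual identifier of this dump.
from datasets import load_dataset

ds = load_dataset("example-user/wiki-categorized", split="train")  # placeholder name

# Each record carries the columns listed in the schema.
row = ds[0]
print(row["id"], row["source"], row["token_count"])

# Example: keep only articles tagged "Physics" with fewer than 2,000 tokens.
physics = ds.filter(
    lambda r: "Physics" in r["categories"] and r["token_count"] < 2000
)
print(len(physics))
```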
7,214,368
https://en.wikipedia.org/wiki/Energy%20%26%20Environment
Energy & Environment is an academic journal "covering the direct and indirect environmental impacts of energy acquisition, transport, production and use". Under its editor-in-chief from 1998 to 2017, Sonja Boehmer-Christiansen, it was known for easygoing peer-review and publishing climate change denial papers. Yiu Fai Tsang became its editor-in-chief in May 2017. Abstracting and indexing The journal is abstracted and indexed in the Social Sciences Citation Index, Scopus, EBSCO databases, Current Contents/Social & Behavioral Sciences, and Compendex. According to the Journal Citation Reports, the journal had a 2021 impact factor of 2.945, ranking it 65th out of 125 journals in the category "Environmental Studies". History The journal was first published in 1989; David Everest (Department of the Environment, United Kingdom) was its founding editor. Following his death in 1998, Boehmer-Christiansen became the journal's editor. She and several members of the journal's editorial advisory board had previously been associated with "the Energy and Environment Groups" at the Science and Technology Policy Unit (University of Sussex), with John Surrey. Its publisher, Multi-Science, ceased trading on 31 December 2015 and the journal was transferred to SAGE. In May 2017, Yiu Fai Tsang became the journal's editor. Climate change denial and criticism The journal was regarded as "a small journal that caters to climate change denialists". It has played an important role in attacking climate science and scientists, for example Michael E. Mann. In 2011, a number of scientists, such as Gavin Schmidt, Roger A. Pielke Jr., Stephan Lewandowsky and Michael Ashley, criticised E&E for its low standards of peer review and little impact. In addition, Ralph Keeling criticized a paper in the journal which claimed that CO2 levels were above 400 ppm in 1825, 1857 and 1942, writing in a letter to the editor, "Is it really the intent of E&E to provide a forum for laundering pseudo-science?" A 2005 article in Environmental Science & Technology stated that the journal is "obscure" and that "scientific claims made in Energy & Environment have little credibility among scientists." Boehmer-Christiansen acknowledged that the journal's "impact rating has remained too low for many ambitious young researchers to use it", but blamed this on "the negative attitudes of the Intergovernmental Panel on Climate Change (IPCC)/Climatic Research Unit people." According to Hans von Storch, the journal "tries to give people who do not have a platform a platform," which "is then attractive for skeptic papers. They know they can come through and that interested people make sure the paper enters the political realm." When asked about the publication in the spring of 2003 of a revised version of the paper at the center of the Soon and Baliunas controversy, Boehmer-Christiansen said, "I'm following my political agenda -- a bit, anyway. But isn't that the right of the editor?" The journal has also been accused of publishing papers that could not have passed any reasonable peer review process, such as one in 2011 that claimed that the Sun was made of iron. See also Environmental engineering science References External links Energy and fuel journals English-language journals Environmental social science journals Academic journals established in 1989 Climate change denial 8 times per year journals
Energy & Environment
[ "Environmental_science" ]
703
[ "Environmental science journals", "Energy and fuel journals", "Environmental social science journals", "Environmental social science" ]
7,214,369
https://en.wikipedia.org/wiki/Global%20distance%20test
The global distance test (GDT), also written as GDT_TS to represent "total score", is a measure of similarity between two protein structures with known amino acid correspondences (e.g. identical amino acid sequences) but different tertiary structures. It is most commonly used to compare the results of protein structure prediction to the experimentally determined structure as measured by X-ray crystallography, protein NMR, or, increasingly, cryoelectron microscopy. The GDT metric was developed by Adam Zemla at Lawrence Livermore National Laboratory and originally implemented in the Local-Global Alignment (LGA) program. It is intended as a more accurate measurement than the common root-mean-square deviation (RMSD) metric, which is sensitive to outlier regions created, for example, by poor modeling of individual loop regions in a structure that is otherwise reasonably accurate. The conventional GDT_TS score is computed over the alpha carbon atoms and is reported as a percentage, ranging from 0 to 100. In general, the higher the GDT_TS score, the more closely a model approximates a given reference structure. GDT_TS measurements are used as major assessment criteria in the production of results from the Critical Assessment of Structure Prediction (CASP), a large-scale experiment in the structure prediction community dedicated to assessing current modeling techniques. The metric was first introduced as an evaluation standard in the third iteration of the biennial experiment (CASP3) in 1998. Various extensions to the original method have been developed; variations that account for the positions of the side chains are known as global distance calculations (GDC). Calculation The GDT score is calculated as the largest set of amino acid residues' alpha carbon atoms in the model structure falling within a defined distance cutoff of their position in the experimental structure, after iteratively superimposing the two structures. By the original design, the GDT algorithm calculates 20 GDT scores, i.e. for each of 20 consecutive distance cutoffs (0.5 Å, 1.0 Å, 1.5 Å, ... 10.0 Å). For structure similarity assessment, the GDT scores from several cutoff distances are intended to be used together, and scores generally increase with increasing cutoff. A plateau in this increase may indicate an extreme divergence between the experimental and predicted structures, such that no additional atoms are included in any cutoff of a reasonable distance. The conventional GDT_TS total score in CASP is the average result of cutoffs at 1, 2, 4, and 8 Å. Variations and extensions The original GDT_TS is calculated based on the superimpositions and GDT scores produced by the Local-Global Alignment (LGA) program. A "high accuracy" version called GDT_HA is computed by selection of smaller cutoff distances (half the size of GDT_TS) and thus more heavily penalizes larger deviations from the reference structure. It was used in the high accuracy category of CASP7. CASP8 defined a new "TR score", which is GDT_TS minus a penalty for residues clustered too close, meant to penalize steric clashes in the predicted structure, which are sometimes introduced to game the cutoff measure of GDT. The primary GDT assessment uses only the alpha carbon atoms. To apply superposition‐based scoring to the amino acid residue side chains, a GDT‐like score called "global distance calculation for sidechains" (GDC_sc) was designed and implemented within the LGA program in 2008.
Instead of comparing residue positions on the basis of alpha carbons, GDC_sc uses a predefined "characteristic atom" near the end of each residue for the evaluation of inter-residue distance deviations. An "all atoms" variant of the GDC score (GDC_all) is calculated using full-model information, and is one of the standard measures used by CASP's organizers and assessors to evaluate the accuracy of predicted structural models. GDT scores are generally computed with respect to a single reference structure. In some cases, structural models with lower GDT scores to a reference structure determined by protein NMR are nevertheless better fits to the underlying experimental data. Methods have been developed to estimate the uncertainty of GDT scores due to protein flexibility and uncertainty in the reference structure. See also Root mean square deviation (bioinformatics) — A different structure comparison measure. TM-score — A different structure comparison measure. References External links CASP14 results - summary tables of the latest CASP experiment run in 2020, including example plots of GDT score as a function of cutoff distance GDT, GDC, LCS and LGA description services and documentation on structure comparison and similarity measures. Bioinformatics Computational chemistry
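To make the averaging concrete, here is a minimal sketch of the GDT_TS computation described in the entry above. It assumes the model and reference alpha-carbon coordinates have already been superimposed; the real LGA program instead searches many superpositions and keeps, for each cutoff, the largest residue set that fits under it, so this simplification understates the true GDT_TS.

```python
import numpy as np

def gdt_ts(model_ca: np.ndarray, ref_ca: np.ndarray) -> float:
    """Simplified GDT_TS: the percentage of alpha-carbon atoms within
    1, 2, 4, and 8 Angstroms of their reference positions, averaged
    over the four cutoffs. Assumes (N, 3) coordinate arrays that are
    already superimposed (the full algorithm optimizes superpositions).
    """
    distances = np.linalg.norm(model_ca - ref_ca, axis=1)
    fractions = [(distances <= cutoff).mean() for cutoff in (1.0, 2.0, 4.0, 8.0)]
    return 100.0 * float(np.mean(fractions))

# Toy usage: a model that deviates mildly from a synthetic reference.
rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 50.0, size=(100, 3))
model = ref + rng.normal(scale=1.5, size=ref.shape)
print(f"GDT_TS ~ {gdt_ts(model, ref):.1f}")  # higher means closer to the reference
```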
Global distance test
[ "Chemistry", "Engineering", "Biology" ]
984
[ "Bioinformatics", "Theoretical chemistry", "Computational chemistry", "Biological engineering" ]
7,214,571
https://en.wikipedia.org/wiki/Common%20Criteria%20Testing%20Laboratory
The Common Criteria model provides for the separation of the roles of evaluator and certifier. Product certificates are awarded by national schemes on the basis of evaluations carried out by independent testing laboratories. A Common Criteria testing laboratory is a third-party commercial security testing facility that is accredited to conduct security evaluations for conformance to the Common Criteria international standard. Such a facility must be accredited according to ISO/IEC 17025 with its national certification body. Examples List of laboratory designations by country: In the US they are called Common Criteria Testing Laboratory (CCTL) In Canada they are called Common Criteria Evaluation Facility (CCEF) In the UK they are called Commercial Licensed Evaluation Facilities (CLEF) In France they are called Centres d’Evaluation de la Sécurité des Technologies de l’Information (CESTI) In Germany they are called IT Security Evaluation Facility (ITSEF) Common Criteria Recognition Arrangement Common Criteria Recognition Arrangement (CCRA) or Common Criteria Mutual Recognition Arrangement (MRA) is an international agreement that recognizes evaluations against the Common Criteria standard performed in all participating countries. There are some limitations to this agreement and, in the past, only evaluations up to EAL4+ were recognized. With the ongoing transition away from EAL levels and the introduction of the NDPP, evaluations that "map" to assurance components up to EAL4 continue to be recognized. United States In the United States the National Institute of Standards and Technology (NIST) National Voluntary Laboratory Accreditation Program (NVLAP) accredits CCTLs to meet National Information Assurance Partnership (NIAP) Common Criteria Evaluation and Validation Scheme requirements and conduct IT security evaluations for conformance to the Common Criteria. CCTL requirements These laboratories must meet the following requirements: NIST Handbook 150, NVLAP Procedures and General Requirements NIST Handbook 150-20, NVLAP Information Technology Security Testing — Common Criteria NIAP specific criteria for IT security evaluations and other NIAP defined requirements CCTLs enter into contractual agreements with sponsors to conduct security evaluations of IT products and Protection Profiles which use the CCEVS, other NIAP approved test methods derived from the Common Criteria, Common Methodology and other technology based sources. CCTLs must observe the highest standards of impartiality, integrity and commercial confidentiality. CCTLs must operate within the guidelines established by the CCEVS. To become a CCTL, a testing laboratory must go through a series of steps that involve both the NIAP Validation Body and NVLAP. NVLAP accreditation is the primary requirement for achieving CCTL status. Some scheme requirements that cannot be satisfied by NVLAP accreditation are addressed by the NIAP Validation Body. At present, there are only three scheme-specific requirements imposed by the Validation Body. NIAP approved CCTLs must agree to the following: Be located in the U.S. and be a legal entity, duly organized and incorporated, validly existing and in good standing under the laws of the state where the laboratory intends to do business Accept U.S. Government technical oversight and validation of evaluation-related activities in accordance with the policies and procedures established by the CCEVS Accept U.S. Government participants in selected Common Criteria evaluations.
CCTL accreditation A testing laboratory becomes a CCTL when the laboratory is approved by the NIAP Validation Body and is listed on the Approved Laboratories List. To avoid unnecessary expense and delay in becoming a NIAP-approved testing laboratory, it is strongly recommended that prospective CCTLs ensure that they are able to satisfy the scheme-specific requirements prior to seeking accreditation from NVLAP. This can be accomplished by sending a letter of intent to the NIAP prior to entering the NVLAP process. Additional laboratory-related information can be found in CCEVS publications: #1 Common Criteria Evaluation and Validation Scheme for Information Technology Security — Organization, Management, and Concept of Operations and Scheme Publication #4 Common Criteria Evaluation and Validation Scheme for Information Technology Security — Guidance to Common Criteria Testing Laboratories Canada In Canada the Communications Security Establishment Canada (CSEC) Canadian Common Criteria Scheme (CCCS) oversees Common Criteria Evaluation Facilities (CCEF). Accreditation is performed by the Standards Council of Canada (SCC) under its Program for the Accreditation of Laboratories – Canada (PALCAN) according to CAN-P-1591, the SCC’s adaptation of ISO/IEC 17025:2005 for ITSET Laboratories. Approval is performed by the CCCS Certification Body, a body within the CSEC, and is the verification of the applicant's ability to perform competent Common Criteria evaluations. Notes External links US: Common Criteria Evaluation and Validation Scheme US: Common Criteria Testing Laboratories Canada: Common Criteria Scheme Canada: Common Criteria Evaluation Facilities Common Criteria Recognition Agreement List of Common Criteria evaluated products ISO/IEC 15408 — available free as a public standard Computer security procedures Tests
Common Criteria Testing Laboratory
[ "Engineering" ]
989
[ "Cybersecurity engineering", "Computer security procedures" ]
7,215,216
https://en.wikipedia.org/wiki/Enriched%20Xenon%20Observatory
The Enriched Xenon Observatory (EXO) is a particle physics experiment searching for neutrinoless double beta decay of xenon-136 at WIPP near Carlsbad, New Mexico, U.S. Neutrinoless double beta decay (0νββ) detection would prove the Majorana nature of neutrinos and impact the neutrino mass values and ordering. These are important open topics in particle physics. EXO currently has a 200-kilogram liquid xenon time projection chamber (EXO-200) with R&D efforts on a ton-scale experiment (nEXO). Xenon double beta decay was detected and limits have been set for 0νββ. Overview EXO measures the rate of neutrinoless decay events above the expected background of similar signals, to find or limit the double beta decay half-life, which relates to the effective neutrino mass using nuclear matrix elements. A limit on effective neutrino mass below 0.01 eV would determine the neutrino mass order. The effective neutrino mass depends on the lightest neutrino mass in such a way that such a bound would indicate the normal mass hierarchy. The expected rate of 0νββ events is very low, so background radiation is a significant problem. WIPP has of rock overburden—equivalent to of water—to screen incoming cosmic rays. Lead shielding and a cryostat also protect the setup. The neutrinoless decays would appear as a narrow spike in the energy spectrum around the xenon Q-value (Qββ = 2457.8 keV), which is fairly high and above most gamma decays. EXO-200 History EXO-200 was designed with a goal of less than 40 events per year within two standard deviations of the expected decay energy. This background was achieved by selecting and screening all materials for radiopurity. Originally the vessel was to be made of Teflon, but the final design of the vessel uses thin, ultra-pure copper. EXO-200 was relocated from Stanford to WIPP in the summer of 2007. Assembly and commissioning continued until the end of 2009, with data taking beginning in May 2011. Calibration was done using 228Th, 137Cs, and 60Co gamma sources. Design The prototype EXO-200 uses a copper cylindrical time projection chamber filled with of pure liquid xenon. Xenon is a scintillator, so decay particles produce prompt light which is detected by avalanche photodiodes, providing the event time. A large electric field drives ionization electrons to wires for collection. The time between the light and first collection determines the z coordinate of the event, while a grid of wires determines the radial and angular coordinates. Results The background from earth radioactivity (Th/U) and 137Xe contamination led to ≈2×10⁻³ counts/(keV·kg·yr) in the detector. Energy resolution near Qββ of 1.53% was achieved. In August 2011, EXO-200 was the first experiment to observe double beta decay of 136Xe, with a half-life of 2.11×10²¹ years. This is the slowest directly observed process. An improved half-life of 2.165 ±0.016(stat) ±0.059(sys) × 10²¹ years was published in 2014. EXO set a limit on neutrinoless beta decay of 1.6×10²⁵ years in 2012. A revised analysis of run 2 data with 100 kg·yr exposure, reported in the June issue of Nature, reduced the limits on half-life to 1.1×10²⁵ yr, and mass to 450 meV. This was used to confirm the power of the design and validate the proposed expansion. The experiment then ran for an additional two years. EXO-200 has performed two scientific operations, Phase I (2011-2014) and, after upgrades, Phase II (2016-2018), for a total exposure of 234.1 kg·yr.
No evidence of neutrinoless double beta decay has been found in the combined Phase I and II data, giving the lower bound of years for the half-life and an upper bound on the effective mass of 239 meV. Phase II was the final operation of EXO-200. nEXO A ton-scale experiment, nEXO ("next EXO"), must overcome many backgrounds. The EXO collaboration is exploring several possibilities to do so, including barium tagging in liquid xenon. Any double beta decay event will leave behind a daughter barium ion, while backgrounds, such as radioactive impurities or neutrons, will not. Requiring a barium ion at the location of an event eliminates all backgrounds. Tagging of a single ion of barium has been demonstrated and progress has been made on a method for extracting ions out of the liquid xenon. A freezing probe method has been demonstrated, and gaseous tagging is also being developed. The 2014 EXO-200 paper indicated that a 5000 kg TPC can improve the background by xenon self-shielding and better electronics. The diameter would be increased to 130 cm and a water tank would be added as shielding and muon veto. This is much larger than the attenuation length for gamma rays. Production of radiopure copper for nEXO has been completed. It is planned for installation in the SNOLAB "Cryopit". An October 2017 paper details the experiment and discusses the sensitivity and the discovery potential of nEXO for neutrinoless double beta decay. Details on the ionization readout of the TPC have also been published. The pre-Conceptual Design Report (pCDR) for nEXO was published in 2018. The planned location is SNOLAB, Canada. References External links EXO web site nEXO web site EXO experiment record on INSPIRE-HEP Particle experiments Neutrino experiments Radioactivity Xenon
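As a back-of-the-envelope check on the numbers quoted in this entry, the two-neutrino double beta decay rate implied by the measured half-life can be estimated from first-order decay kinetics. The sketch below simplifies by treating the full 200 kg as 136Xe and ignoring the enrichment fraction and fiducial cuts, so the real event count is somewhat lower.

```python
import math

AVOGADRO = 6.02214076e23   # atoms per mole
MOLAR_MASS_XE136 = 136.0   # g/mol, approximate

def decays_per_year(mass_kg: float, half_life_yr: float) -> float:
    """Expected decays per year from first-order kinetics: rate = N * ln(2) / T_half.

    Simplification: treats the entire mass as Xe-136; the real detector
    uses enriched xenon and fiducial cuts, so its effective mass is smaller.
    """
    n_atoms = (mass_kg * 1000.0 / MOLAR_MASS_XE136) * AVOGADRO
    return n_atoms * math.log(2) / half_life_yr

# Using the numbers quoted above: ~200 kg of xenon and the measured
# two-neutrino double beta decay half-life of 2.165e21 years.
print(f"~{decays_per_year(200.0, 2.165e21):.2e} decays per year")  # roughly 3e5
```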
Enriched Xenon Observatory
[ "Physics", "Chemistry" ]
1,230
[ "Radioactivity", "Nuclear physics" ]
7,215,439
https://en.wikipedia.org/wiki/Window%20sill
A windowsill (also written window sill or window-sill, and less frequently in British English, cill) is the horizontal structure or surface at the bottom of a window. Window sills serve to structurally support and hold the window in place. The exterior portion of a window sill provides a mechanism for shedding rainwater away from the wall at the window opening. Therefore, window sills are usually inclined slightly downward away from the window and wall, and often extend past the exterior face of the wall, so the water will drip off rather than run down the wall. Some windowsills are made of natural stone, cast stone, concrete, tile, or other non-porous materials to further increase their water resistance. Windows may not have a structural sill or the sill may not be sufficiently weather resistant. In these cases, a strip of waterproof and weather resistant material (steel, vinyl, PVC) called a sill pan may be used to protect the wall and shed the water. Like the sill, a sill pan will usually be inclined and protrude from the wall. Types of window sill A window sill in the most general sense is a horizontal structural element below a window opening or window unit in masonry construction or framed construction and is regarded as part of the window frame. The bottom of a window frame sits on top of the window sill of the wall opening. A window sill may span the entire width of a wall from inside to outside, as is often the case in basic masonry construction, making it visible on both the interior and exterior of the building. In such a case, the exterior window sill and interior window sill would be two sides of the same structural element. Conversely, a window sill may only extend from the internal wall structure to the outside and not be visible from the building's interior. In that case, the window likely has a shelf-like piece of interior trim work—often made of wood, tile, or stone—which is distinct from the exterior window sill. The technical term used by carpenters, window manufacturers, and other professionals for this piece of trim work is window stool, but it is also referred to as a window sill. In residential buildings, some people use this latter kind of interior window sill or stool to store houseplants, books, or other small personal items. See also Window § Terms Window box References Sill Architectural elements
Window sill
[ "Technology", "Engineering" ]
500
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
7,216,005
https://en.wikipedia.org/wiki/Unidirectional%20network
A unidirectional network (also referred to as a unidirectional gateway or data diode) is a network appliance or device that allows data to travel in only one direction. Data diodes can be found most commonly in high security environments, such as defense, where they serve as connections between two or more networks of differing security classifications. Given the rise of industrial IoT and digitization, this technology can now be found at the industrial control level for such facilities as nuclear power plants, power generation and safety critical systems like railway networks. After years of development, data diodes have evolved from simple network appliances that allow raw data to travel in only one direction, used to guarantee information security or to protect critical digital systems such as industrial control systems from inbound cyber attacks, into combinations of hardware and software running on proxy computers in the source and destination networks. The hardware enforces physical unidirectionality, and the software replicates databases and emulates protocol servers to handle bi-directional communication. Data diodes are now capable of transferring multiple protocols and data types simultaneously, and they incorporate a broader range of cybersecurity features such as secure boot, certificate management, data integrity, forward error correction (FEC), and secure communication via TLS. A unique characteristic is that data is transferred deterministically (to predetermined locations) with a protocol "break" that allows the data to be transferred through the data diode. Data diodes are commonly found in high security military and government environments, and are now becoming widely spread in sectors like oil & gas, water/wastewater, airplanes (between flight control units and in-flight entertainment systems), manufacturing and cloud connectivity for industrial IoT. New regulations have increased demand and, with increased capacity, major technology vendors have lowered the cost of the core technology. History The first data diodes were developed by governmental organizations in the eighties and nineties. Because these organizations work with confidential information, making sure their network is secure is of the highest priority. The primary solutions used by these organizations were air gaps. But, as the amount of transferable data increased, and a continuous and real-time data stream became more important, these organizations had to look for an automated solution. In the search for more standardization, an increasing number of organizations started to look for a solution that was a better fit for their activities. Commercial solutions created by stable organizations succeeded given the level of security and long-term support. In the United States, utilities and oil and gas companies have used data diodes for several years, and regulators have encouraged their use to protect equipment and processes in safety instrumented systems (SISs). The Nuclear Regulatory Commission (NRC) now mandates the use of data diodes, and many other sectors, in addition to electrical and nuclear, also use data diodes effectively. In Europe, regulators and operators of several safety-critical systems started recommending and implementing regulations on the use of unidirectional gateways.
In 2013, the working group on Industrial Control System Cybersecurity, directed by the French Network and Information Security Agency (ANSSI), stated that it is forbidden to use firewalls to connect any class 3 network, such as railway switching systems, to a lower class network or corporate network; only unidirectional technology is permitted. Applications Real-time monitoring of safety-critical networks Secure OT – IT bridge Secure cloud connectivity of critical OT networks Database replication Data mining Trusted back-end and hybrid cloud hosted solutions (private / public) Secure data exchange for data marketplaces Secure credential/ certificate provisioning Secure cross-database sharing Secure printing from a less secure network to a more secure network (reducing print costs) Transferring application and operating system updates from a less secure network to a more secure network Time synchronization in highly secure networks File transfer Streaming video Sending/receiving alerts or alarms from open to critical/confidential networks Sending/receiving emails from open to critical/confidential networks Government Commercial companies Usage Unidirectional network devices are typically used to guarantee information security or protection of critical digital systems, such as industrial control systems, from cyber attacks. While use of these devices is common in high security environments such as defense, where they serve as connections between two or more networks of differing security classifications, the technology is also being used to enforce one-way communications outbound from critical digital systems to untrusted networks connected to the Internet. The physical nature of unidirectional networks only allows data to pass from one side of a network connection to another, and not the other way around. This can be from the "low side" or untrusted network, to the "high side" or trusted network, or vice versa. In the first case, data in the high side network is kept confidential and users retain access to data from the low side. Such functionality can be attractive if sensitive data is stored on a network which requires connectivity with the Internet: the high side can receive Internet data from the low side, but no data on the high side are accessible to Internet-based intrusion. In the second case, a safety-critical physical system can be made accessible for online monitoring, yet be insulated from all Internet-based attacks that might seek to cause physical damage. In both cases, the connection remains unidirectional even if both the low and the high network are compromised, as the security guarantees are physical in nature. There are two general models for using unidirectional network connections. In the classical model, the purpose of the data diode is to prevent export of classified data from a secure machine while allowing import of data from an insecure machine. In the alternative model, the diode is used to allow export of data from a protected machine while preventing attacks on that machine. These are described in more detail below. One-way flow to less secure systems Involves systems that must be secured against remote/external attacks from public networks while publishing information to such networks. For example, an election management system used with electronic voting must make election results available to the public while at the same time it must be immune to attack.
This model is applicable to a variety of critical infrastructure protection problems, where protection of the data in a network is less important than reliable control and correct operation of the network. For example, the public living downstream from a dam needs up-to-date information on the outflow, and the same information is a critical input to the control system for the floodgates. In such a situation, it is critical that the flow of information be from the secure control system to the public, and not vice versa. One-way flow to more secure systems The majority of unidirectional network applications in this category are in defense, and defense contractors. These organizations traditionally have applied air gaps to keep classified data physically separate from any Internet connection. With the introduction of unidirectional networks in some of these environments, a degree of connectivity can safely exist between a network with classified data and a network with an Internet connection. In the Bell–LaPadula security model, users of a computer system can only create data at or above their own security level. This applies in contexts where there is a hierarchy of information classifications. If users at each security level share a machine dedicated to that level, and if the machines are connected by data diodes, the Bell–LaPadula constraints can be rigidly enforced. Benefits Traditionally, when the IT network provides DMZ server access for an authorized user, the data is vulnerable to intrusions from the IT network. However, with a unidirectional gateway separating a critical side or OT network holding sensitive data from an open side with business and Internet connectivity, normally the IT network, organizations can achieve the best of both worlds, enabling the required connectivity and assuring security. This holds true even if the IT network is compromised, because the traffic flow control is physical in nature. There are no reported cases of data diodes being bypassed or exploited to enable two-way traffic. Lower long-term operating cost (OPEX), as there are no rules to maintain, although there will be software updates to install, and these devices often need to be maintained by the vendors. The unidirectional software layer cannot be configured to allow two-way traffic due to the physical disconnection of the RX or TX line. Weaknesses As of June 2015, unidirectional gateways were not yet commonly used or well understood. Unidirectional gateways are unable to route the majority of network traffic and break most protocols. Cost; data diodes were originally expensive, although lower cost solutions are now available. Specific use cases that require a two-way data flow can be difficult to achieve. Variations The simplest form of a unidirectional network is a modified, fiber-optic network link, with send and receive transceivers removed or disconnected for one direction, and any link failure protection mechanisms disabled. Some commercial products rely on this basic design, but add other software functionality that provides applications with an interface which helps them pass data across the link. All-optical data diodes can support very high channel capacities and are among the simplest. In 2019, Controlled Interfaces demonstrated its (now patented) one-way optical fiber link using 100G commercial off-the-shelf transceivers in a pair of Arista network switch platforms. No specialized driver software is required.
Other more sophisticated commercial offerings enable simultaneous one-way data transfer of multiple protocols that usually require bidirectional links. The German companies INFODAS and GENUA have developed software-based ("logical") data diodes that use a microkernel operating system to ensure unidirectional data transfer. Due to the software architecture, these solutions offer higher speed than conventional hardware-based data diodes. ST Engineering has developed its own Secure e-Application Gateway, consisting of multiple data diodes and other software components, to enable real-time bi-directional HTTP(S) web services transactions over the internet while protecting the secured networks from both malicious injects and data leakage. In 2018, Siemens Mobility released an industrial-grade unidirectional gateway solution in which the data diode, the Data Capture Unit, uses electromagnetic induction and a new chip design to achieve an EBA safety assessment, guaranteeing secure connectivity of new and existing safety critical systems up to Safety integrity level (SIL) 4 to enable secure IoT and provide data analytics and other cloud hosted digital services. In 2022, Fend Incorporated released a data diode capable of acting as a Modbus gateway with full optical isolation. This diode is targeted at industrial markets and critical infrastructure, serving to bridge outdated technology with newer IT systems. The diode also functions as a Modbus converter, with the ability to connect to serial RTU systems on one side and Ethernet TCP systems on the other. The US Naval Research Laboratory (NRL) has developed its own unidirectional network called the Network Pump. This is in many ways similar to DSTO's work, except that it allows a limited backchannel going from the high side to the low side for the transmission of acknowledgments. This technology allows more protocols to be used over the network, but introduces a potential covert channel if both the high and low sides are compromised, through artificially delaying the timing of the acknowledgments. Different implementations also have differing levels of third party certification and accreditation. A cross domain guard intended for use in a military context may have or require extensive third party certification and accreditation. A data diode intended for industrial use, however, may not have or require third party certification and accreditation at all, depending on the application. Notable vendors BAE Systems - US/UK Fend Incorporated - US Siemens - Germany ST Engineering - Singapore Technolution - Netherlands See also Bell–LaPadula model for security Network tap Intrusion detection system References External links Patton Blog: Employing Simplex Data Circuits for Ultra-High-Security Networking SANS Institute Paper on Tactical Data Diodes in Industrial Automation and Control Systems. Guide to Industrial Control Systems (ICS) Security United States Department of Commerce - National Institute of Standards and Technology on data diode use on Industrial Control Systems. Improving Industrial Control System Cybersecurity with Defense-in-Depth Strategies United States Department of Homeland Security - Industrial Control Systems Cyber Emergency Response Team on data diode use. Networking hardware Computer network security
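Because the hardware removes the physical return path, software on either side of a diode must follow a send-and-forget pattern: the sender can never see acknowledgments, so it relies on redundancy rather than retransmission. The sketch below illustrates that pattern with plain UDP; it is an illustration of the software side only, not of diode hardware, and it uses naive repetition where a real product would use proper forward error correction. The destination address is a documentation placeholder.

```python
# One-way "send and forget" transfer sketch in the style used with data
# diodes: UDP datagrams, no acknowledgments, naive repetition instead of
# real forward error correction. 192.0.2.1:5005 is a placeholder address.
import socket

DIODE_ADDR = ("192.0.2.1", 5005)  # placeholder host behind the diode
REPEATS = 3  # crude redundancy, since the receiver can never request a resend

def send_file_one_way(path: str, chunk_size: int = 1024) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        with open(path, "rb") as f:
            seq = 0
            while chunk := f.read(chunk_size):
                # A sequence number lets the receiver detect gaps; it can
                # only log losses, never ask for retransmission.
                datagram = seq.to_bytes(8, "big") + chunk
                for _ in range(REPEATS):
                    sock.sendto(datagram, DIODE_ADDR)
                seq += 1
    finally:
        sock.close()
```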
Unidirectional network
[ "Engineering" ]
2,545
[ "Cybersecurity engineering", "Computer networks engineering", "Computer network security", "Networking hardware" ]
7,216,032
https://en.wikipedia.org/wiki/Journal%20of%20High%20Energy%20Physics
The Journal of High Energy Physics is a monthly peer-reviewed open access scientific journal covering the field of high energy physics. It is published by Springer Science+Business Media on behalf of the International School for Advanced Studies. The journal is part of the SCOAP3 initiative. According to the Journal Citation Reports, the journal has a 2020 impact factor of 5.810. References External links Journal page at International School for Advanced Studies website English-language journals Monthly journals Physics journals Academic journals established in 1997 Springer Science+Business Media academic journals Academic journals associated with learned and professional societies Particle physics journals
Journal of High Energy Physics
[ "Physics" ]
120
[ "Particle physics stubs", "Particle physics", "Particle physics journals" ]
7,216,822
https://en.wikipedia.org/wiki/Contact%20order
The contact order of a protein is a measure of the locality of the inter-amino acid contacts in the protein's native state tertiary structure. It is calculated as the average sequence distance between residues that form native contacts in the folded protein divided by the total length of the protein. Higher contact orders indicate longer folding times, and low contact order has been suggested as a predictor of potential downhill folding, or protein folding that occurs without a free energy barrier. This effect is thought to be due to the lower loss of conformational entropy associated with the formation of local as opposed to nonlocal contacts. Relative contact order (CO) is formally defined as $CO = \frac{1}{L \cdot N} \sum^{N} \Delta S_{i,j}$, where $N$ is the total number of contacts, $\Delta S_{i,j}$ is the sequence separation, in residues, between contacting residues i and j, and $L$ is the total number of residues in the protein. The value of contact order typically ranges from 5% to 25% for single-domain proteins, with lower contact order belonging to mainly helical proteins, and higher contact order belonging to proteins with a high beta-sheet content. Protein structure prediction methods are more accurate in predicting the structures of proteins with low contact orders. This may be partly because low contact order proteins tend to be small, but is likely to be explained by the smaller number of possible long-range residue-residue interactions to be considered during global optimization procedures that minimize an energy function. Even successful structure prediction methods such as the Rosetta method overproduce low-contact-order structure predictions compared to the distributions observed in experimentally determined protein structures. The percentage of the natively folded contact order can also be used as a measure of the "nativeness" of folding transition states. Phi value analysis in concert with molecular dynamics has produced transition-state models whose contact order is close to that of the folded state in proteins that are small and fast-folding. Further, contact orders in transition states as well as those in native states are highly correlated with overall folding time. In addition to their role in structure prediction, contact orders can themselves be predicted based on a sequence alignment, which can be useful in classifying the fold of a novel sequence with some degree of homology to known sequences. See also Circuit topology: topological arrangement of contacts References Bioinformatics Protein structure
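A minimal sketch of the relative contact order formula above, in Python. It assumes a simple contact definition (alpha-carbon atoms within 8 Å and at least two residues apart in sequence); published work often defines contacts over all heavy atoms with slightly different cutoffs, so the exact value depends on those choices.

```python
import numpy as np

def relative_contact_order(ca_coords: np.ndarray, cutoff: float = 8.0,
                           min_separation: int = 2) -> float:
    """Relative contact order: (1 / (L * N)) * sum of |i - j| over contacts.

    Sketch assumptions: a "contact" is a residue pair whose alpha carbons
    lie within `cutoff` Angstroms and that are at least `min_separation`
    apart in sequence; real studies often use all heavy atoms instead.
    """
    L = len(ca_coords)
    total_separation, n_contacts = 0, 0
    for i in range(L):
        for j in range(i + min_separation, L):
            if np.linalg.norm(ca_coords[i] - ca_coords[j]) <= cutoff:
                total_separation += j - i
                n_contacts += 1
    return total_separation / (L * n_contacts) if n_contacts else 0.0

# Toy usage on a random-walk chain; real input would be PDB coordinates.
rng = np.random.default_rng(1)
coords = np.cumsum(rng.normal(scale=2.0, size=(60, 3)), axis=0)
print(f"relative CO = {relative_contact_order(coords):.3f}")  # ~0.05-0.25 for real proteins
```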
Contact order
[ "Chemistry", "Engineering", "Biology" ]
463
[ "Bioinformatics", "Biological engineering", "Protein structure", "Structural biology" ]
7,217,055
https://en.wikipedia.org/wiki/Project%20workforce%20management
Project workforce management is the practice of combining the coordination of all logistic elements of a project through a single software application (or workflow engine). This includes planning and tracking of schedules and milestones, cost and revenue, resource allocation, as well as overall management of these project elements. Efficiency is improved by eliminating manual processes, like spreadsheet tracking to monitor project progress. It also allows for at-a-glance status updates and ideally integrates with existing legacy applications in order to unify ongoing projects, enterprise resource planning (ERP) and broader organizational goals. A project involves many logistic elements. Different team members are responsible for managing each element, and often the organisation has a mechanism to manage some logistic areas as well. By coordinating these various components of project management, workforce management and financials through a single solution, the process of configuring and changing project and workforce details is simplified. Introduction A project workforce management system defines project tasks, project positions, and assigns personnel to the project positions. The project tasks and positions are correlated to assign a responsible project position or even multiple positions to complete each project task. Because each project position may be assigned to a specific person, the qualifications and availabilities of that person can be taken into account when determining the assignment. By associating project tasks and project positions, a manager can better control the assignment of the workforce and complete the project more efficiently. Project workforce management is about managing all the logistic aspects of a project or an organisation through a software application. Usually, this software has a built-in workflow engine, so all the logistic processes take place in the workflow engine. About Technical field Project workforce management relates to project management systems and methods, more particularly to software-based systems and methods for project and workforce management. Software usage Because the process is software-driven, project workflow management tasks can be largely automated, leaving few manual tasks for the project managers. This brings high efficiency to project management for project tracking purposes. In addition to different tracking mechanisms, project workforce management software also offers a dashboard for the project team. Through the dashboard, the project team has an at-a-glance view of the overall progress of the project elements. In most cases, project workforce management software can work with existing legacy software systems such as ERP (enterprise resource planning) systems. This easy integration allows the organisation to use a combination of software systems for management purposes. Background Good project management is an important factor for the success of a project. A project may be thought of as a collection of activities and tasks designed to achieve a specific goal of the organisation, with specific performance or quality requirements while meeting any applicable time and cost constraints. Project management refers to managing the activities that lead to the successful completion of a project. Furthermore, it focuses on finite deadlines and objectives. A number of tools may be used to assist with this as well as with assessment.
Project management may be used when planning personnel resources and capabilities. The project may be linked to the objects in a professional services life cycle and may accompany the objects from the opportunity over quotation, contract, time and expense recording, billing, period-end-activities to the final reporting. Naturally the project gets even more detailed when moving through this cycle. For any given project, several project tasks should be defined. Project tasks describe the activities and phases that have to be performed in the project, such as writing of layouts, customising, and testing. What is needed is a system that allows project positions to be correlated with project tasks. Project positions describe project roles like project manager, consultant, tester, etc. Project positions are typically arranged linearly within the project. By correlating project tasks with project positions, the qualifications and availability of personnel assigned to the project positions may be considered. Benefits of project management Good project management should: Reduce the chance of a project failing Ensure a minimum level of quality and that results meet requirements and expectations Free up other staff members to get on with their area of work and increase efficiency both on the project and within the business Make things simpler and easier for staff with a single point of contact running the overall project Encourage consistent communications amongst staff and suppliers Keep costs, timeframes and resources to budget Workflow engine As described above, project workforce management handles the logistic aspects of a project or an organisation through a software application with a built-in workflow engine, in which all the logistic processes take place. The regular and most common types of tasks handled by project workforce management software or a similar workflow engine are: Planning and monitoring project schedules and milestones Regularly monitoring a project's schedule performance can provide early indications of possible activity-coordination problems, resource conflicts, and possible cost overruns. To monitor schedule performance, information must be collected and evaluated regularly to ensure accuracy. The project schedule outlines the intended result of the project and what's required to bring it to completion. The schedule needs to include all the resources involved and the cost and time constraints, organised through a work breakdown structure (WBS). The WBS outlines all the tasks and breaks them down into specific deliverables. Tracking the cost and revenue aspects of projects The importance of tracking actual costs and resource usage in projects depends upon the project situation. Tracking actual costs and resource usage is an essential aspect of the project control function. Resource utilisation and monitoring Organisational profitability is directly connected to project management efficiency and optimal resource utilisation. Organisations that struggle with either or both of these core competencies typically experience cost overruns, schedule delays and unhappy customers. The focus for project management is the analysis of project performance to determine whether a change is needed in the plan for the remaining project activities to achieve the project goals.
Other management aspects of project management Project risk management Risk identification consists of determining which risks are likely to affect the project and documenting the characteristics of each. Project communication management Project communication management is about how communication is carried out during the course of the project. Project quality management It is of no use completing a project within the set time and budget if the final product is of poor quality. The project manager has to ensure that the final product meets the quality expectations of the stakeholders. This is done through good: Quality planning: Identifying what quality standards are relevant to the project and determining how to meet them. Quality assurance: Evaluating overall project performance on a regular basis to provide confidence that the project will satisfy the relevant quality standards. Quality control: Monitoring specific project results to determine if they comply with relevant quality standards and identifying ways to remove causes of poor performance. Project workforce management vs. traditional management There are three main differences between Project Workforce Management and traditional project management and workforce management disciplines and solutions: Workflow-driven All project and workforce processes are designed, controlled and audited using a built-in graphical workflow engine. Users can design, control and audit the different processes involved in the project. The graphical workflow is quite attractive for users and gives them a clear view of the processes defined in the workflow engine. Organisation and work breakdown structures Project Workforce Management provides organization and work breakdown structures to create, manage and report on functional and approval hierarchies, and to track information at any level of detail. Users can create, manage, edit and report work breakdown structures. Work breakdown structures have different abstraction levels, so the information can be tracked at any level. Usually, project workforce management includes approval hierarchies. Each workflow created goes through several approval steps before it becomes an organisational or project standard. This helps the organisation reduce inefficiencies in the process, as it is audited by many stakeholders. Connected project, workforce and financial processes Unlike traditional disconnected project, workforce and billing management systems that are solely focused on tracking IT projects, internal workforce costs or billable projects, Project Workforce Management is designed to unify the coordination of all project and workforce processes, whether internal, shared (IT) or billable. Summary A project workforce management system defines project tasks, project positions and assigns personnel to the project positions. The project tasks and project positions are correlated to assign a responsible project position or positions to complete each project task. Because each project position may be assigned to a specific person, the qualifications and availabilities of the person can be taken into account when determining the assignment. By correlating the project tasks and project positions, a manager can better control the assignment of the workforce and complete projects more efficiently. Project workflow management is one of the best methods for managing different aspects of a project. If the project is complex, then the outcomes of project workforce management can be more effective.
For simple projects or small organisations, project workflow management may not add much value, but for more complex projects and big organisations, managing project workflow will make a big difference. This is because small organisations or projects do not have significant overhead when it comes to managing processes. There are many project workforce management products, but many organisations prefer to adopt unique solutions and therefore engage software development companies to develop custom project workflow management systems for them. This has proved to be a suitable way for a company to acquire the project workforce management system that best fits its needs. Literature References Data management ERP software Project management Workflow technology
Project workforce management
[ "Technology" ]
1,908
[ "Data management", "Data" ]
7,217,844
https://en.wikipedia.org/wiki/Haas%20House
The Haas House is a building in Vienna, Austria, at the Stock-im-Eisen-Platz. Designed by the Austrian architect Hans Hollein, it is a building in the postmodernist style and was completed in 1990. The building is located at the site of the former flagship department store dating to 1867, destroyed during World War II and rebuilt in 1953. The use of the Haas-Haus is divided between retail and a restaurant. The building is considered controversial owing to its contrast with the adjacent Stephansdom cathedral. In December 2014, Uniqa Insurance Group sold the building to the Austrian catering company Do & Co, which now uses it as their headquarters. References External links The "Haas House" in Vienna a short video about Hollein's building in the historical city center of Vienna Buildings and structures in Vienna Postmodern architecture Commercial buildings completed in 1990 Buildings and structures in Innere Stadt
Haas House
[ "Engineering" ]
190
[ "Postmodern architecture", "Architecture" ]
7,218,650
https://en.wikipedia.org/wiki/Coastal%E2%80%93Karst%20Statistical%20Region
The Coastal–Karst Statistical Region (, ) is a statistical region in southwest Slovenia. It covers the traditional and historical regions of Slovenian Istria and most of the Karst Plateau, which traditionally belonged to the County of Gorizia and Gradisca. The region has a sub-Mediterranean climate and is Slovenia's only statistical region bordering the sea. Its natural features enable the development of tourism, transport, and special agricultural crops. More than two-thirds of gross value added are generated by services (trade, accommodation, and transport); most was generated by activities at the Port of Koper and through seaside and spa tourism. The region recorded almost a quarter of all tourist nights in the country in 2013; slightly less than half by domestic tourists. Among foreign tourists, Italians, Austrians, and Germans predominated. In 2012 the region was one of four regions with a positive annual population growth rate (8.1‰). However, the age structure of the population was less favourable: in mid-2013 the ageing index was 133.3, which means that for every 100 inhabitants under 15 there were 133 inhabitants 65 or older. The farms in this region are among the smallest in Slovenia in terms of average utilised agricultural area per farm and in terms of the number of livestock on farms. Cities and towns The Coastal–Karst Statistical Region includes four cities and towns, the largest of which is Koper. Municipalities The Coastal–Karst Statistical Region comprises the following eight municipalities: Ankaran Divača Hrpelje-Kozina Izola Komen Koper Piran Sežana Demographics It has an area of 1,044 km2 and an estimated 112,942 inhabitants (at 1 July 2015)—of whom almost half live in the coastal city of Koper—and the second-highest GDP per capita of the Slovenian regions. It has a high percentage of foreigners, at 10% (after the Central Slovenia Statistical Region with 33%, the Drava Statistical Region with 12.6%, and the Savinja Statistical Region with 12%). Economy This region has the highest percentage of people employed in tertiary (services) activities. Employment structure: 77.8% services, 20.7% industry, 1.5% agriculture. 37.1% of the GDP is generated by transport, trade and catering business. 19.6% of all tourists visit this region, most of them from abroad (62.5%). Transportation Length of motorways: 83.6 km Length of other roads: 1551.6 km There are also railways. It has the country's only commercial port, situated in Koper, along with marinas in Koper, Izola and Portorož. There is also a small international airport. Sources Slovenian regions in figures 2014 Statistical regions of Slovenia
Coastal–Karst Statistical Region
[ "Mathematics" ]
570
[ "Statistical regions of Slovenia", "Statistical concepts", "Statistical regions" ]
7,218,707
https://en.wikipedia.org/wiki/Subtitle%20%28titling%29
In books and other works, the subtitle is an explanatory title added by the author to the title proper of a work. Another kind of subtitle, often used in the past, is the alternative title, also called alternate title, traditionally denoted and added to the title with the alternative conjunction "or", hence its appellation. As an example, Mary Shelley gave her most famous novel the title Frankenstein; or, The Modern Prometheus, where or, The Modern Prometheus is the alternative title, by which she references the Greek Titan as a hint of the novel's themes. A more modern usage is to simply separate the subtitle by punctuation, making the subtitle more of a continuation or sub-element of the title proper. In library cataloging and in bibliography, the subtitle does not include an alternative title, which is defined as part of the title proper: e.g., One Good Turn: A Natural History of the Screwdriver and the Screw is filed as One Good Turn (title) and A Natural History of the Screwdriver and the Screw (subtitle), while Twelfth Night, or What You Will is filed as Twelfth Night, or What You Will (title). Literature Subtitles and alternative titles for plays were fashionable in the Elizabethan era. William Shakespeare parodied this vogue by giving the comedy Twelfth Night his only subtitle, the deliberately uninformative or What You Will, implying that the subtitle can be whatever the audience wants it to be. In printing, subtitles often appear below the title in a less prominent typeface or following the title after a colon. Some modern publishers choose to drop subtitles when republishing historical works, such as Shelley's famous story, which is often now sold simply as Frankenstein. Non-fiction In political philosophy, for example, the 17th-century theorist Thomas Hobbes named his magnum opus Leviathan or The Matter, Forme and Power of a Common-Wealth Ecclesiasticall and Civil, using the subtitle to explain the subject matter of the book. Film and other media In film, examples of subtitles using "or" include Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb and Birdman or (The Unexpected Virtue of Ignorance). Subtitles are also used to distinguish different installments in a series, instead of or in addition to a number, such as: Pirates of the Caribbean: Dead Man's Chest, the second in the Pirates of the Caribbean film series; Mario Kart: Super Circuit, the third in the Mario Kart video game series; and Star Trek II: The Wrath of Khan, the second in the Star Trek film series. References Book design Publishing Names
Subtitle (titling)
[ "Engineering" ]
572
[ "Book design", "Design" ]
7,218,910
https://en.wikipedia.org/wiki/Minoan%20chronology
Minoan chronology is a framework of dates used to divide the history of the Minoan civilization. Two systems of relative chronology are used for the Minoans. One is based on sequences of pottery styles, while the other is based on the architectural phases of the Minoan palaces. These systems are often used alongside one another. Establishing an absolute chronology has proved difficult, since different methodologies provide different results. For instance, while carbon dating places the eruption of Thera around 1600 BC, synchronism with Egyptian records would place it roughly a century later. Relative chronology Ceramic periodization The standard relative chronology divides Minoan history into three eras: Early Minoan (EM), Middle Minoan (MM) and Late Minoan (LM). These eras are divided into sub-eras using Roman numerals (e.g. EM I, EM II, EM III) and sub-sub-eras using capital letters (e.g. LM IIIA, LM IIIB, LM IIIC). This system is based on the sequence of pottery styles excavated at Minoan sites. For instance, the transition from EM III to MM IA is characterized by the appearance of handmade polychrome pottery; the transition from MM IA to MM IB follows the appearance of wheel-made pottery. This framework was originated by Arthur Evans during his excavations at Knossos. It remains the standard in Minoan archaeology, though it has been revised and refined by subsequent researchers and some aspects remain under debate. Architectural periodization An alternative framework divides Minoan history based on the construction phases of the Minoan palaces. In this system, the Prepalatial period covers the timespan before the construction of the palaces. The Protopalatial era begins with the construction of the first palaces, and ends with their destruction. The Neopalatial period, often considered the zenith of Minoan civilization, begins with the rebuilding of the palaces, and ends with yet another wave of destructions. The Postpalatial period covers the era in which Minoan culture continued in the absence of the palaces. Some variants of this system include a Final palace period or a Monopalatial period between the Neo- and Postpalatial periods, corresponding to the era when the palace at Knossos was reoccupied. The architectural periodization was proposed by Nikolaos Platon in 1961, though later scholars have proposed variants and refinements. This system is often used side-by-side with the ceramic chronology, since the two are commensurate. For instance, the Prepalatial period covers the ceramic phases EM I through MM IA. Absolute dating Establishing an absolute chronology has proved difficult. Archaeologists have attempted to determine calendar dates by synchronizing the periods of Minoan relative chronology with those of better-understood neighbors. For example, Minoan artifacts from the LM IB ceramic period have been found in 18th Dynasty contexts in Egypt, for which Egyptian chronology provides generally accepted calendar dates. However, dates determined in this manner do not always match the results of carbon dating and other methods based on natural science. Much of the controversy concerns the dating of the eruption of Thera, which is known to have occurred towards the end of the LM IA period. While carbon dating places this event (and thus LM IA) around 1600 BC, synchronism with Egyptian records would place it roughly a century later.
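Because the two relative-chronology systems are commensurate, their correspondence can be laid out mechanically. The sketch below is illustrative only: the dates are the approximate ones given under "Minoan history" later in this article (all debated), and the variant Monopalatial period is folded into the Postpalatial span for simplicity.

```python
# Illustrative only: approximate dates from the "Minoan history" section
# below (all debated); the variant Monopalatial period is folded into
# the Postpalatial span here for simplicity.

CERAMIC_PHASES_BC = [  # (phase, start, end) in years BC
    ("EM I", 3100, 2650), ("EM II", 2650, 2200), ("EM III", 2200, 2100),
    ("MM IA", 2100, 1925), ("MM IB", 1925, 1875), ("MM II", 1875, 1750),
    ("MM III", 1750, 1700), ("LM I", 1700, 1470), ("LM II", 1470, 1420),
    ("LM III", 1420, 1075),
]

# Architectural periods as spans of ceramic phases (the two systems are
# commensurate; e.g. the Prepalatial period covers EM I through MM IA).
ARCHITECTURAL_SPANS = {
    "Prepalatial": ("EM I", "MM IA"),
    "Protopalatial": ("MM IB", "MM II"),
    "Neopalatial": ("MM III", "LM I"),
    "Postpalatial": ("LM II", "LM III"),
}

index = {name: i for i, (name, _, _) in enumerate(CERAMIC_PHASES_BC)}

for period, (first, last) in ARCHITECTURAL_SPANS.items():
    start = CERAMIC_PHASES_BC[index[first]][1]
    end = CERAMIC_PHASES_BC[index[last]][2]
    print(f"{period}: {first}-{last}, c. {start}-{end} BC")
```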
Theran eruption The timing of natural disasters is of importance to high and low chronologies, which can use the resulting geological evidence to date co-located artifacts. The eruption of the Thera volcano on what is now the island of Santorini is of particular significance to the chronology of Minoan history. The Theran eruption plays a role in both the high and low chronological approaches, although there is a difference in the date range each system assigns to the event. In his initial framework, Evans vaguely assigned the eruption to the 17th century BC. Low chronological assessments revise the eruption to the mid-15th century BC, while high and blended chronologies push the date back to a point between Evans's estimate and the low chronologies, most commonly to approximately 1628 BC, though no date is generally agreed upon. The precise date is of more concern to archaeologists of the Asian mainland and Ancient Egypt, where volcanic ash from Thera is widely evident and there are established competing chronologies, than to those of Crete. High chronological techniques such as radiocarbon dating can be used in conjunction with evidence from artifacts indirectly related to the eruption, such as eruption-caused tsunami debris, to pinpoint the exact timing of the event, and therefore which Minoan period it belongs in. However, the broadness of radiocarbon dating has also resulted in dates for the eruption of Thera that do not precisely match evidence from the archaeological record. Minoan history Early Minoan Early Minoan society developed largely continuously from local Neolithic predecessors, with some cultural influence and perhaps migration from eastern populations. This period saw a gradual shift from localized clan-based villages towards the more urbanized and stratified society of later periods. EM I (c. 3100-2650 BC) is marked by the appearance of the first painted ceramics. Continuing a trend that began during the Neolithic, settlements grew in size and complexity, and spread from fertile plains towards highland sites and islands as the Minoans learned to exploit less hospitable terrain. EM II (c. 2650-2200 BC) has been termed an international era. Trade intensified and Minoan ships began sailing beyond the Aegean to Egypt and Syria, possibly enabled by the invention of masted ships. Minoan material culture shows increased international influence, for instance in the adoption of Minoan seals based on older Near Eastern seals. Minoan settlements grew, some doubling in size, and monumental buildings were constructed at sites that would later become palaces. EM III (c. 2200-2100 BC) saw the continuation of these trends. Middle Minoan MM I (c. 2100-1875 BC) saw the emergence of Protopalatial society. During MM IA (c. 2100-1925 BC), populations increased dramatically at sites such as Knossos, Phaistos, and Malia, accompanied by major construction projects. During MM IB (c. 1925-1875 BC), the first palaces were built at these sites, in areas which had been used for communal ceremonies since the Neolithic. Middle Minoan artisans developed new colorful paints and adopted the potter's wheel during MM IB, producing wares such as Kamares ware. MM II (c. 1875-1750 BC) saw the development of the Minoan writing systems, Cretan hieroglyphic and Linear A. It ended with mass destructions generally attributed to earthquakes, though violent destruction has been considered as an alternative explanation. MM III (c. 1750-1700 BC) marks the beginning of the Neopalatial period.
Most of the palaces were rebuilt with architectural innovations, with the notable exception of Phaistos. Cretan hieroglyphs were abandoned in favor of Linear A, and Minoan cultural influence became significant in mainland Greece. Late Minoan The Late Minoan period was an eventful time that saw profound change in Minoan society. Many of the most recognizable Minoan artifacts date from this time, for instance the Snake goddess figurines, La Parisienne Fresco, and the marine style of pottery decoration. Late Minoan I (c. 1700-1470 BC) was a continuation of the prosperous Neopalatial culture. A notable event from this era was the eruption of the Thera volcano, which occurred around 1600 BC towards the end of the LM IA subperiod. One of the largest volcanic explosions in recorded history, it ejected an enormous volume of material and was measured at 7 on the Volcanic Explosivity Index. While the eruption destroyed Cycladic settlements such as Akrotiri and led to the abandonment of some sites in northeast Crete, other Minoan sites such as Knossos continued to prosper. The post-eruption LM IB period (c. 1625-1470 BC) saw ambitious new building projects, booming international trade, and artistic developments such as the marine style. Late Minoan IB ended with severe destructions throughout the island, marking the end of Neopalatial society. These destructions are thought to have been deliberate, since they spared certain sites in a manner inconsistent with natural disasters. For instance, the town at Knossos burned while the palace itself did not. The causes of these destructions have been a perennial topic of debate. While some researchers attributed them to Mycenaean conquerors, others have argued that they were the result of internal upheavals. Similarly, while some researchers have attempted to link them to lingering environmental disruption from the Thera eruption, others have argued that the two events are too distant in time for any causal relation. Late Minoan II (c. 1470-1420 BC) is sparsely represented in the archaeological record, but appears to have been a period of decline. It marks the beginning of the Monopalatial period, as the palace at Knossos was the sole one remaining in use. Late Minoan III (c. 1420-1075 BC) shows profound social and political changes. Among the palaces, only Knossos remained in use, though it too was destroyed by the LM IIIB2 phase, and possibly earlier. The language of administration shifted to Mycenaean Greek, written in Linear B, and material culture shows increased mainland influence, reflecting the rise of a Greek-speaking elite. In Late Minoan IIIC (c. 1200-1075 BC), coinciding with the wider Late Bronze Age collapse, coastal settlements were abandoned in favor of defensible locations on higher ground. These small villages, some of which grew out of earlier mountain shrines, continued aspects of recognizably Minoan culture until the Early Iron Age. Notes External links Ian Swindale, using the chronology of Andonis Vasilakis in his book on Minoan Crete, published by Adam Editions in 2000 Dartmouth College World History Encyclopedia Thera Foundation L. Marangou in the Foundation of the Hellenic World site Companion to Manning (Cornell) University of Oklahoma Chronology
Minoan chronology
[ "Physics" ]
2,137
[ "Spacetime", "Chronology", "Physical quantities", "Time" ]
7,218,913
https://en.wikipedia.org/wiki/Type%20704%20Radar
The Type 704 is a counter-battery radar designed to accurately locate hostile artillery, rocket, and ground-to-ground missile launchers immediately after they fire, and to support friendly artillery by providing guidance for counter fire. Built by NORINCO, it was first displayed publicly at the 1988 ASIADEX defence show. Development The Type 704 radar shares the same roots as its larger cousin, the SLC-2 Radar: four AN/TPQ-37 Firefinder radars had been sold to China, and these became the foundation of SLC-2 radar development. Aside from political reasons, the US$10 million plus unit price tag of the TPQ-37 (including after-sale logistic support) was simply too costly for the Chinese. A decision was made to develop a domestic equivalent after mastering the technologies of the TPQ-37. After the initial tests of the TPQ-37 at the Tangshan (汤山) Range near Nanjing in 1988, and in Xuanhua District in October of the same year, several shortcomings of the TPQ-37 were discovered, and further intensive tests were conducted and completed in 1994. The requirements for the Chinese domestic equivalent were subsequently modified to address the issues revealed in these trials. Due to the limitations of Chinese industrial capability at the time, it was decided to develop the domestic equivalent in several steps. The first step was to develop a smaller radar, which resulted in the Chinese equivalent of the AN/TPQ-36 Firefinder radar, the Type 704 series; based on the experience gained from this program, a more capable, larger version in the same class as the AN/TPQ-37 Firefinder radar would then be developed, which eventually resulted in the SLC-2 series. Type 704 radar The Type 704 is the first of the Type 704 series of counter-battery radars. Developmental work on the Type 704 began in parallel with the introduction of the AN/TPQ-37 radar into Chinese service, and the reported experience gained from the Chinese reverse engineering of the TPQ-37 influenced the Type 704 radar. One problem revealed in the tests was that the reliability of the TPQ-37 was much lower than claimed. The reason was that when the TPQ-37 was deployed in environments with high humidity and high levels of rainfall (southern China), high salinity (coastal regions), high altitude (southwestern China), or large daily temperature swings (northwestern China), malfunctions occurred more frequently. The Type 704 radar was designed specifically to improve reliability against these harsh environmental factors. Type 704A radar The Type 704 was followed by its successor, the Type 704A, a fully solid-state, fully digitized version, which further improved reliability and simplified logistics, thus reducing operational cost. One of the limitations of the TPQ-37 revealed in tests was that it was less effective against projectiles with flat trajectories, so it is much more effective against howitzer and mortar rounds than against rounds from the 130 mm towed field gun M1954 (M-46) and its Chinese derivative, the Type 59-1. The Type 704A radar was designed to overcome this shortcoming by improving capability against rounds with flat trajectories. BL904 radar A further improved variant based on the Type 704A, designated BL904, has also been introduced. This latest version of the Type 704 radar family reportedly utilizes a more advanced lens arrangement for its planar passive phased-array antenna, instead of the simpler horn arrangement used in earlier versions.
Unconfirmed Chinese claims also conclude that the BL904 radar incorporates technology from the former-Soviet counter-battery radar Zoopark-1, two of which were purchased by China from Ukraine, but such claims have yet to be verified by official sources or by sources outside China. Specifications S-band. Range (against an 81-mm mortar-round-sized target): > CS/RB1 radar At the 9th Zhuhai Airshow held in November 2012, a new, lightweight counter-battery radar designated CS/RB1 made its public debut. Like the Type 704 and BL904 radars, the CS/RB1 is designed primarily for detecting incoming projectiles down to the size of a mortar round, though larger objects can be tracked as well. The CS/RB1 is designed to be a lightweight version of the Type 704/BL904 that can be carried by individual soldiers when the system is broken down into portions. The CS/RB1 is a passive phased-array radar operating in the L-band; it is fully solid state and highly digitized, with a conformal array in a cylindrical shape, and it can be airdropped. References 1. Fire Control Radar Technology, Dec 1999 issue, Xi'an Electronics Research Institute (also known as Institute No. 206 of China Arms Industry Group Corporation), Xi'an, December 1999, Domestic Chinese SN: CN 61-1214/TJ. 2. Fire Control Radar Technology, Feb 1995 issue, Xi'an Electronics Research Institute (also known as Institute No. 206 of China Arms Industry Group Corporation), Xi'an, February 1995, Domestic Chinese SN: CN 61-1214/TJ. 3. Ordnance Knowledge, Jul 2007 issue, Ordnance Knowledge Magazine Publishing House, Beijing, July 2007, Domestic Chinese SN: CN 11-1470/TJ. Weapon locating radar Military radars of the People's Republic of China Military equipment introduced in the 1980s
Type 704 Radar
[ "Technology" ]
1,122
[ "Warning systems", "Weapon locating radar" ]
7,219,097
https://en.wikipedia.org/wiki/Synechococcus
Synechococcus (from the Greek synechos, in succession, and the Greek kokkos, granule) is a unicellular cyanobacterium that is very widespread in the marine environment. Its size varies from 0.8 to 1.5 μm. The photosynthetic coccoid cells are preferentially found in well-lit surface waters, where they can be very abundant (generally 1,000 to 200,000 cells per ml). Many freshwater species of Synechococcus have also been described. The genome of S. elongatus strain PCC7002 has a size of 3.4 Mbp, whereas the oceanic strain WH8102 has a genome of size 2.4 Mbp. Introduction Synechococcus is one of the most important components of the prokaryotic autotrophic picoplankton in the temperate to tropical oceans. The genus was first described in 1979, and was originally defined to include "small unicellular cyanobacteria with ovoid to cylindrical cells that reproduce by binary transverse fission in a single plane and lack sheaths". This definition of the genus Synechococcus contained organisms of considerable genetic diversity and was later subdivided into subgroups based on the presence of the accessory pigment phycoerythrin. The marine forms of Synechococcus are coccoid cells between 0.6 and 1.6 μm in size. They are Gram-negative cells with highly structured cell walls that may contain projections on their surface. Electron microscopy frequently reveals the presence of phosphate inclusions, glycogen granules, and, more importantly, highly structured carboxysomes. Cells are known to be motile by a gliding method and a novel uncharacterized, nonphototactic swimming method that does not involve flagellar motion. While some cyanobacteria are capable of photoheterotrophic or even chemoheterotrophic growth, all marine Synechococcus strains appear to be obligate photoautotrophs that are capable of supporting their nitrogen requirements using nitrate, ammonia, or in some cases urea as a sole nitrogen source. Marine Synechococcus species are traditionally not thought to fix nitrogen. In the last decade, several strains of Synechococcus elongatus have been produced in laboratory environments, including the fastest-growing cyanobacterium described to date, Synechococcus elongatus UTEX 2973. S. elongatus UTEX 2973 is a mutant hybrid from UTEX 625 and is most closely related to S. elongatus PCC 7942, with 99.8% similarity. It has the shortest doubling time at “1.9 hours in a BG11 medium at 41°C under continuous 500 μmoles photons·m−2·s−1 white light with 3% CO2”. Pigments The main photosynthetic pigment in Synechococcus is chlorophyll a, while its major accessory pigments are phycobiliproteins. The four commonly recognized phycobilins are phycocyanin, allophycocyanin, allophycocyanin B and phycoerythrin. In addition, Synechococcus also contains zeaxanthin, but no diagnostic pigment for this organism is known. Zeaxanthin is also found in Prochlorococcus, red algae and as a minor pigment in some chlorophytes and eustigmatophytes. Similarly, phycoerythrin is also found in rhodophytes and some cryptomonads. Phylogeny Phylogenetic description of Synechococcus is difficult. Isolates are morphologically very similar, yet exhibit a G+C content ranging from 39 to 71%, illustrating the large genetic diversity of this provisional taxon. Initially, attempts were made to divide the group into three subclusters, each with a specific range of genomic G+C content. The observation that open-ocean isolates alone nearly span the complete G+C spectrum, however, indicates that Synechococcus is composed of at least several species.
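Since the G+C statistic carries much of the weight in the argument above, a minimal sketch of how it is computed may help. This is a toy example only; real comparisons of the kind cited are made over whole genomes, not short strings.

```python
# Toy example of the G+C content statistic discussed above; real
# comparisons are made over whole genomes, not short strings.

def gc_content(seq: str) -> float:
    """Percentage of bases in a DNA sequence that are G or C."""
    seq = seq.upper()
    return 100 * sum(base in "GC" for base in seq) / len(seq)

print(f"{gc_content('ATGCGGCCATTA'):.1f}%")  # 50.0%
```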
Bergey's Manual (Herdman et al. 2001) now divides Synechococcus into five clusters (equivalent to genera) based on morphology, physiology, and genetic traits. Cluster 1 includes relatively large (1–1.5 μm) nonmotile obligate photoautotrophs that exhibit low salt tolerance. Reference strains for this cluster are PCC6301 (formerly Anacystis nidulans) and PCC6312, which were isolated from fresh water in Texas and California, respectively. Cluster 2 is also characterized by low salt tolerance. Cells are obligate photoautotrophs, lack phycoerythrin, and are thermophilic. The reference strain PCC6715 was isolated from a hot spring in Yellowstone National Park. Cluster 3 includes phycoerythrin-lacking marine Synechococcus species that are euryhaline, i.e. capable of growth in both marine and freshwater environments. Several strains, including the reference strain PCC7003, are facultative heterotrophs and require vitamin B12 for growth. Cluster 4 contains a single isolate, PCC7335. This strain is obligately marine. It contains phycoerythrin and was first isolated from the intertidal zone in Puerto Peñasco, Mexico. The last cluster contains what had previously been referred to as the ‘marine A and B clusters’ of Synechococcus. These cells are truly marine and have been isolated from both the coastal and the open ocean. All strains are obligate photoautotrophs and are around 0.6–1.7 μm in diameter. This cluster is, however, further divided into a population that either contains (cluster 5.1) or does not contain (cluster 5.2) phycoerythrin. The reference strains are WH8103 for the phycoerythrin-containing strains and WH5701 for those strains that lack this pigment. More recently, Badger et al. (2002) proposed the division of the cyanobacteria into an α- and a β-subcluster based on the type of rbcL (large subunit of ribulose 1,5-bisphosphate carboxylase/oxygenase) found in these organisms. α-cyanobacteria were defined to contain a form IA, while β-cyanobacteria were defined to contain a form IB of this gene. In support of this division, Badger et al. analyzed the phylogeny of carboxysomal proteins, which appears to support it. Also, two particular bicarbonate transport systems appear to be found only in α-cyanobacteria, which lack carboxysomal carbonic anhydrases. The complete phylogenetic tree of 16S rRNA sequences of Synechococcus revealed at least 12 groups, which morphologically correspond to Synechococcus but have not derived from a common ancestor. Moreover, it has been estimated based on molecular dating that the first Synechococcus lineage appeared 3 billion years ago in thermal springs, with subsequent radiation to marine and freshwater environments. As of 2020, the morphologically similar "Synechococcus collective" has been split into 15 genera under 5 different orders: Synechococcales (Cyanobium, Inmanicoccus, Lacustricoccus gen. nov., Parasynechococcus, Pseudosynechococcus, Regnicoccus, Synechospongium gen. nov., Synechococcus and Vulcanococcus); Cyanobacteriales (Limnothrix); Leptococcales (Brevicoccus and Leptococcus); Thermosynechococcales (Stenotopis and Thermosynechococcus); and Neosynechococcales (Neosynechococcus). (gen. nov. means that the genus was newly created in 2020.) Ecology and distribution Synechococcus has been observed to occur at concentrations ranging from a few cells to 10⁶ cells per ml in virtually all regions of the oceanic euphotic zone, except in samples from McMurdo Sound and the Ross Ice Shelf in Antarctica.
Cells are generally much more abundant in nutrient-rich environments than in the oligotrophic ocean and prefer the upper, well-lit portion of the euphotic zone. Synechococcus has also been observed to occur at high abundances in environments with low salinities and/or low temperatures. It is usually far outnumbered by Prochlorococcus in all environments where they co-occur. Exceptions to this rule are areas of permanently enriched nutrients such as upwelling areas and coastal watersheds. In the nutrient-depleted areas of the oceans, such as the central gyres, Synechococcus is apparently always present, although only at low concentrations, ranging from a few to 4×10³ cells per ml. Vertically, Synechococcus is usually relatively evenly distributed throughout the mixed layer and exhibits an affinity for the higher-light areas. Below the mixed layer, cell concentrations rapidly decline. Vertical profiles are strongly influenced by hydrologic conditions and can be very variable both seasonally and spatially. Overall, Synechococcus abundance often parallels that of Prochlorococcus in the water column. In the Pacific high-nutrient, low-chlorophyll zone and in temperate open seas where stratification was recently established, both profiles parallel each other and exhibit abundance maxima just above the subsurface chlorophyll maximum. The factors controlling the abundance of Synechococcus still remain poorly understood, especially considering that even in the most nutrient-depleted regions of the central gyres, where cell abundances are often very low, population growth rates are often high and not drastically limited. Factors such as grazing, viral mortality, genetic variability, light adaptation, and temperature, as well as nutrients, are certainly involved, but remain to be investigated on a rigorous and global scale. Despite the uncertainties, a relationship probably exists between ambient nitrogen concentrations and Synechococcus abundance, with an inverse relationship to Prochlorococcus in the upper euphotic zone, where light is not limiting. One environment where Synechococcus thrives particularly well is the coastal plumes of major rivers. Such plumes are coastally enriched with nutrients such as nitrate and phosphate, which drives large phytoplankton blooms. High productivity in coastal river plumes is often associated with large populations of Synechococcus and elevated form IA (cyanobacterial) rbcL mRNA. Prochlorococcus is thought to be at least 100 times more abundant than Synechococcus in warm oligotrophic waters. Assuming average cellular carbon concentrations, it has thus been estimated that Prochlorococcus accounts for at least 22 times more carbon in these waters, and thus may be of much greater significance to the global carbon cycle than Synechococcus. Evolutionary history Free-floating viruses have been found carrying photosynthetic genes, and Synechococcus samples have been found to have viral proteins associated with photosynthesis. It is estimated that 10% of all photosynthesis on Earth is carried out with viral genes. Not all viruses immediately kill their hosts: 'temperate' viruses co-exist with their host until stress or the approaching end of the host's natural life span switches them to virus production; if a mutation occurs that stops this final step, the host can carry the virus genes with no ill effects. And if a healthy host reproduces while infectious, its offspring can be infectious as well. It is likely that such a process gave Synechococcus its photosynthetic genes.
DNA recombination, repair and replication Marine Synechococcus species possess a set of genes that function in DNA recombination, repair and replication. This set of genes includes the recBCD gene complex whose product, exonuclease V, functions in recombinational repair of DNA, and the umuCD gene complex whose product, DNA polymerase V, functions in error-prone DNA replication. Some Synechococcus strains are naturally competent for genetic transformation, and thus can take up extracellular DNA and recombine it into their own genome. Synechococcus strains also encode the gene lexA that regulates an SOS response system, that is likely similar to the well-studied E. coli SOS system that is employed in the response to DNA damage. Species Synechococcus ambiguus Skuja Synechococcus arcuatus var. calcicolus Fjerdingstad Synechococcus bigranulatus Skuja Synechococcus brunneolus Rabenhorst Synechococcus caldarius Okada Synechococcus capitatus A. E. Bailey-Watts & J. Komárek Synechococcus carcerarius Norris Synechococcus elongatus (Nägeli) Nägeli Synechococcus endogloeicus F. Hindák Synechococcus epigloeicus F. Hindák Synechococcus ferrunginosus Wawrik Synechococcus intermedius Gardner Synechococcus koidzumii Yoneda Synechococcus lividus Copeland Synechococcus marinus Jao Synechococcus minutissimus Negoro Synechococcus mundulus Skuja Synechococcus nidulans (Pringsheim) Komárek Synechococcus rayssae Dor Synechococcus rhodobaktron Komárek & Anagnostidis Synechococcus roseo-persicinus Grunow Synechococcus roseo-purpureus G. S. West Synechococcus salinarum Komárek Synechococcus salinus Frémy Synechococcus sciophilus Skuja Synechococcus sigmoideus (Moore & Carter) Komárek Synechococcus spongiarum Usher et al. Synechococcus subsalsus Skuja Synechococcus sulphuricus Dor Synechococcus vantieghemii (Pringsheim) Bourrelly Synechococcus violaceus Grunow Synechococcus viridissimus Copeland Synechococcus vulcanus Copeland See also Gloeomargarita lithophora Photosynthetic picoplankton Prochlorococcus Synechocystis, another cyanobacterial model organism References Further reading External links Cyanobacteria genera Synechococcales Marine microorganisms
Synechococcus
[ "Biology" ]
3,082
[ "Marine microorganisms", "Microorganisms" ]
7,219,407
https://en.wikipedia.org/wiki/Open-Architecture-System
Open-Architecture-System (OAS) is the main User interface and synthesizer software of the Wersi keyboard line. OAS improves on prior organ interfaces by allowing the user to add sounds, rhythms, third party programs and future software enhancements without changing hardware. Compared to previous organs which relied on buttons, OAS uses a touch screen to make programming easier. OAS can host up to 4 separate VST software instruments, allowing for an expandable system similar to the Korg OASYS. OAS can support dynamic touch and aftertouch, but cannot support horizontal touch like the Yamaha Stagea Electone. OAS Version 7 OAS Version 7 expands on previous versions by adding a new effects section. Separate effects are available for the accompaniment section, sequencer and drums. Added effects include delay, reverb, phasing, wah wah, distortion, compressor, and flanger. In addition, version 7 includes 300 new sounds, 700 sounds in total. Version 7 adds the Wersi Open Art Arranger. This software enables the Wersi to use all Yamaha styles, including those from the Tyros 2. References External links Wersi USA home page Wersi international home Global OAS Users Group Electric and electronic keyboard instruments Software synthesizers
Open-Architecture-System
[ "Technology" ]
258
[ "Computing stubs" ]
7,219,559
https://en.wikipedia.org/wiki/Aurophilicity
In chemistry, aurophilicity refers to the tendency of gold complexes to aggregate via formation of weak metallophilic interactions. The main evidence for aurophilicity is from the crystallographic analysis of Au(I) complexes. The aurophilic bond has a length of about 3.0 Å and a strength of about 7–12 kcal/mol, which is comparable to the strength of a hydrogen bond. The effect is greater for gold than for copper or silver, the lighter elements above it in its periodic table group, due to increased relativistic effects. Observations and theory show that, on average, 28% of the binding energy in the aurophilic interaction can be attributed to relativistic expansion of the gold d orbitals. An example of aurophilicity is the propensity of gold centres to aggregate. While both intramolecular and intermolecular aurophilic interactions have been observed, only intramolecular aggregation has been observed at such nucleation sites. Role in self-assembly The similarity in strength between hydrogen bonding and the aurophilic interaction has proven to be a convenient tool in the field of polymer chemistry. Much research has been conducted on self-assembling supramolecular structures, both those that aggregate by aurophilicity alone and those that contain both aurophilic and hydrogen-bonding interactions. An important and exploitable property of aurophilic interactions relevant to their supramolecular chemistry is that while both inter- and intramolecular interactions are possible, intermolecular aurophilic linkages are comparatively weak and easily broken by solvation; most complexes that exhibit intramolecular aurophilic interactions retain such moieties in solution. References Gold Chemical bonding
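For readers more used to SI units, the quoted 7–12 kcal/mol range converts as follows (the snippet supplies only the conversion factor; the energies are those cited above):

```python
# Conversion only; the 7-12 kcal/mol range is the bond strength quoted
# above. 1 kcal = 4.184 kJ exactly (thermochemical calorie).

KJ_PER_KCAL = 4.184

for kcal in (7, 12):
    print(f"{kcal} kcal/mol = {kcal * KJ_PER_KCAL:.1f} kJ/mol")
# 7 kcal/mol = 29.3 kJ/mol and 12 kcal/mol = 50.2 kJ/mol, squarely in
# the range usually quoted for moderate hydrogen bonds.
```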
Aurophilicity
[ "Physics", "Chemistry", "Materials_science" ]
373
[ "Chemical bonding", "Condensed matter physics", "nan" ]
7,219,667
https://en.wikipedia.org/wiki/Core%20binding%20factor
The Core binding factor (CBF) is a group of heterodimeric transcription factors. Core binding factors are composed of: a non-DNA-binding CBFβ chain (CBFB) a DNA-binding CBFα chain (RUNX1, RUNX2, RUNX3) References See also AI-10-49, an anti-leukemic drug under development. External links Transcription factors
Core binding factor
[ "Chemistry", "Biology" ]
86
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
7,220,441
https://en.wikipedia.org/wiki/Heart%20%28Chinese%20constellation%29
The Heart mansion is one of the Twenty-eight mansions of the Chinese constellations. It is one of the eastern mansions of the Azure Dragon. Its prominent figure is the star Alpha Scorpii. Asterisms References Chinese constellations
Heart (Chinese constellation)
[ "Astronomy" ]
55
[ "Chinese constellations", "Constellations" ]
7,220,589
https://en.wikipedia.org/wiki/Bundle%20map
In mathematics, a bundle map (or bundle morphism) is a morphism in the category of fiber bundles. There are two distinct, but closely related, notions of bundle map, depending on whether the fiber bundles in question have a common base space. There are also several variations on the basic theme, depending on precisely which category of fiber bundles is under consideration. In the first three sections, we will consider general fiber bundles in the category of topological spaces. Then in the fourth section, some other examples will be given. Bundle maps over a common base Let πE : E → M and πF : F → M be fiber bundles over a space M. Then a bundle map from E to F over M is a continuous map φ : E → F such that πF ∘ φ = πE. That is, the square diagram formed by φ, πE, and πF should commute. Equivalently, for any point x in M, φ maps the fiber of E over x to the fiber of F over x. General morphisms of fiber bundles Let πE : E → M and πF : F → N be fiber bundles over spaces M and N respectively. Then a continuous map φ : E → F is called a bundle map from E to F if there is a continuous map f : M → N such that the diagram commutes, that is, πF ∘ φ = f ∘ πE. In other words, φ is fiber-preserving, and f is the induced map on the space of fibers of E: since πE is surjective, f is uniquely determined by φ. For a given f, such a bundle map φ is said to be a bundle map covering f. Relation between the two notions It follows immediately from the definitions that a bundle map over M (in the first sense) is the same thing as a bundle map covering the identity map of M. Conversely, general bundle maps can be reduced to bundle maps over a fixed base space using the notion of a pullback bundle. If πF : F → N is a fiber bundle over N and f : M → N is a continuous map, then the pullback of F by f is a fiber bundle f*F over M whose fiber over x is given by (f*F)x = Ff(x). It then follows that a bundle map from E to F covering f is the same thing as a bundle map from E to f*F over M. Variants and generalizations There are two kinds of variation of the general notion of a bundle map. First, one can consider fiber bundles in a different category of spaces. This leads, for example, to the notion of a smooth bundle map between smooth fiber bundles over a smooth manifold. Second, one can consider fiber bundles with extra structure in their fibers, and restrict attention to bundle maps which preserve this structure. This leads, for example, to the notion of a (vector) bundle homomorphism between vector bundles, in which the fibers are vector spaces, and a bundle map φ is required to be a linear map on each fiber. In this case, such a bundle map φ (covering f) may also be viewed as a section of the vector bundle Hom(E,f*F) over M, whose fiber over x is the vector space Hom(Ex,Ff(x)) (also denoted L(Ex,Ff(x))) of linear maps from Ex to Ff(x). Notes References Fiber bundles Theory of continuous functions
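As a standard concrete example (not specific to this article's sources; the LaTeX assumes the amsmath package): a smooth map f : M → N induces a bundle map of tangent bundles covering f, namely its differential df.

```latex
% A standard example: the differential of a smooth map f : M -> N is a
% bundle map df : TM -> TN covering f, i.e. the square below commutes.
\[
\begin{array}{ccc}
TM & \xrightarrow{\;df\;} & TN \\
\pi_M \downarrow & & \downarrow \pi_N \\
M & \xrightarrow{\;f\;} & N
\end{array}
\qquad \pi_N \circ df = f \circ \pi_M
\]
```

Restricted to each fiber, df is the linear map dfx : TxM → Tf(x)N, so this is in fact a vector bundle homomorphism in the sense of the last section above.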
Bundle map
[ "Mathematics" ]
664
[ "Theory of continuous functions", "Topology" ]
7,221,088
https://en.wikipedia.org/wiki/Air%20conditioning
Air conditioning, often abbreviated as A/C (US) or air con (UK), is the process of removing heat from an enclosed space to achieve a more comfortable interior temperature and in some cases also controlling the humidity of internal air. Air conditioning can be achieved using a mechanical 'air conditioner' or by other methods, including passive cooling and ventilative cooling. Air conditioning is a member of a family of systems and techniques that provide heating, ventilation, and air conditioning (HVAC). Heat pumps are similar in many ways to air conditioners, but use a reversing valve to allow them both to heat and to cool an enclosed space. Air conditioners, which typically use vapor-compression refrigeration, range in size from small units used in vehicles or single rooms to massive units that can cool large buildings. Air source heat pumps, which can be used for heating as well as cooling, are becoming increasingly common in cooler climates. Air conditioners can reduce mortality rates due to high temperatures. According to the International Energy Agency (IEA), 1.6 billion air conditioning units were used globally in 2016. The United Nations called for the technology to be made more sustainable to mitigate climate change and for the use of alternatives, like passive cooling, evaporative cooling, selective shading, windcatchers, and better thermal insulation. History Air conditioning dates back to prehistory. Double-walled living quarters, with a gap between the two walls to encourage air flow, were found in the ancient city of Hamoukar, in modern Syria. Ancient Egyptian buildings also used a wide variety of passive air-conditioning techniques. These became widespread from the Iberian Peninsula through North Africa, the Middle East, and Northern India. Passive techniques remained widespread until the 20th century, when they fell out of fashion and were replaced by powered air conditioning. Using information from engineering studies of traditional buildings, passive techniques are being revived and modified for 21st-century architectural designs. Air conditioners allow the building's indoor environment to remain relatively constant, largely independent of changes in external weather conditions and internal heat loads. They also enable deep plan buildings to be created and have allowed people to live comfortably in hotter parts of the world. Development Preceding discoveries In 1558, Giambattista della Porta described a method of chilling ice to temperatures far below its freezing point by mixing it with potassium nitrate (then called "nitre") in his popular science book Natural Magic. In 1620, Cornelis Drebbel demonstrated "Turning Summer into Winter" for James I of England, chilling part of the Great Hall of Westminster Abbey with an apparatus of troughs and vats. Drebbel's contemporary Francis Bacon, like della Porta a believer in science communication, may not have been present at the demonstration, but in a book published later the same year, he described it as "experiment of artificial freezing" and said that "Nitre (or rather its spirit) is very cold, and hence nitre or salt when added to snow or ice intensifies the cold of the latter, the nitre by adding to its cold, but the salt by supplying activity to the cold of the snow." In 1758, Benjamin Franklin and John Hadley, a chemistry professor at the University of Cambridge, conducted experiments applying the principle of evaporation as a means to cool an object rapidly.
Franklin and Hadley confirmed that the evaporation of highly volatile liquids (such as alcohol and ether) could be used to drive down the temperature of an object past the freezing point of water. They experimented with the bulb of a mercury-in-glass thermometer as their object. They used a bellows to speed up the evaporation. They lowered the temperature of the thermometer bulb far below the freezing point of water while the ambient temperature remained warm. Franklin noted that soon after they passed the freezing point of water, a thin film of ice formed on the surface of the thermometer's bulb, and that a noticeable mass of ice had built up by the time they stopped the experiment. Franklin concluded: "From this experiment, one may see the possibility of freezing a man to death on a warm summer's day." The 19th century included many developments in compression technology. In 1820, English scientist and inventor Michael Faraday discovered that compressing and liquefying ammonia could chill air when the liquefied ammonia was allowed to evaporate. In 1842, Florida physician John Gorrie used compressor technology to create ice, which he used to cool air for his patients in his hospital in Apalachicola, Florida. He hoped to eventually use his ice-making machine to regulate the temperature of buildings. He envisioned centralized air conditioning that could cool entire cities. Gorrie was granted a patent in 1851, but following the death of his main backer, he was not able to realize his invention. In 1851, James Harrison created the first mechanical ice-making machine in Geelong, Australia, and was granted a patent for an ether vapor-compression refrigeration system in 1855 that produced three tons of ice per day. In 1860, Harrison established a second ice company. He later entered the debate over competing against the American advantage of ice-refrigerated beef sales to the United Kingdom. First devices Electricity made the development of effective units possible. In 1901, American inventor Willis H. Carrier built what is considered the first modern electrical air conditioning unit. In 1902, he installed his first air-conditioning system, in the Sackett-Wilhelms Lithographing & Publishing Company in Brooklyn, New York. His invention controlled both the temperature and humidity, which helped maintain consistent paper dimensions and ink alignment at the printing plant. Later, together with six other employees, Carrier formed The Carrier Air Conditioning Company of America, a business that in 2020 employed 53,000 people and was valued at $18.6 billion. In 1906, Stuart W. Cramer of Charlotte, North Carolina, was exploring ways to add moisture to the air in his textile mill. Cramer coined the term "air conditioning" in a patent claim which he filed that year, where he suggested that air conditioning was analogous to "water conditioning", then a well-known process for making textiles easier to process. He combined moisture with ventilation to "condition" and change the air in the factories, thus controlling the humidity that is necessary in textile plants. Willis Carrier adopted the term and incorporated it into the name of his company. Domestic air conditioning soon took off. In 1914, the first domestic air conditioning was installed in Minneapolis in the home of Charles Gilbert Gates. It is, however, possible that the considerable device was never used, as the house remained uninhabited (Gates had already died in October 1913). In 1931, H.H. Schultz and J.Q. 
Sherman developed what would become the most common type of individual room air conditioner: one designed to sit on a window ledge. The units went on sale in 1932 at US$10,000 to $50,000. A year later, the first air conditioning systems for cars were offered for sale. Chrysler Motors introduced the first practical semi-portable air conditioning unit in 1935, and Packard became the first automobile manufacturer to offer an air conditioning unit in its cars in 1939. Further development Innovations in the latter half of the 20th century allowed more ubiquitous air conditioner use. In 1945, Robert Sherman of Lynn, Massachusetts, invented a portable, in-window air conditioner that cooled, heated, humidified, dehumidified, and filtered the air. In 1954, Ned Cole, a 1939 architecture graduate from the University of Texas at Austin, developed the first experimental "suburb" with inbuilt air conditioning in each house. 22 homes were developed on a flat, treeless tract in northwest Austin, Texas, and the community was christened the 'Austin Air-Conditioned Village.' The residents were subjected to a year-long study of the effects of air conditioning led by the nation’s premier air conditioning companies, builders, and social scientists. In addition, researchers from UT’s Health Service and Psychology Department studied the effects on the "artificially cooled humans." One of the more amusing discoveries was that each family reported being troubled with scorpions, the leading theory being that scorpions sought cool, shady places. Other reported changes in lifestyle were that mothers baked more, families ate heavier foods, and they were more apt to choose hot drinks. The first inverter air conditioners were released in 1980–1981. Air conditioner adoption tends to increase above around $10,000 annual household income in warmer areas. Global GDP growth explains around 85% of increased air conditioning adoption by 2050, while the remaining 15% can be explained by climate change. As of 2016, an estimated 1.6 billion air conditioning units were used worldwide, with over half of them in China and the United States, and a total cooling capacity of 11,675 gigawatts. The International Energy Agency predicted in 2018 that the number of air conditioning units would grow to around 4 billion units by 2050 and that the total cooling capacity would grow to around 23,000 GW, with the biggest increases in India and China. Between 1995 and 2004, the proportion of urban households in China with air conditioners increased from 8% to 70%. As of 2015, nearly 100 million homes, or about 87% of US households, had air conditioning systems. In 2019, it was estimated that 90% of new single-family homes constructed in the US included air conditioning (ranging from 99% in the South to 62% in the West). Operation Operating principles Cooling in traditional air conditioner systems is accomplished using the vapor-compression cycle, which uses a refrigerant's forced circulation and phase change between gas and liquid to transfer heat. The vapor-compression cycle can occur within a unitary, or packaged, piece of equipment, or within a chiller that is connected to terminal cooling equipment (such as a fan coil unit in an air handler) on its evaporator side and heat rejection equipment such as a cooling tower on its condenser side. An air source heat pump shares many components with an air conditioning system, but includes a reversing valve, which allows the unit to be used to heat as well as cool a space. 
Air conditioning equipment will reduce the absolute humidity of the air processed by the system if the surface of the evaporator coil is significantly cooler than the dew point of the surrounding air. An air conditioner designed for an occupied space will typically achieve a 30% to 60% relative humidity in the occupied space. Most modern air-conditioning systems feature a dehumidification cycle during which the compressor runs while the fan is slowed to reduce the evaporator temperature and condense more water. A dehumidifier uses the same refrigeration cycle but incorporates both the evaporator and the condenser into the same air path; the air first passes over the evaporator coil, where it is cooled and dehumidified, before passing over the condenser coil, where it is warmed again before it is released back into the room. Free cooling can sometimes be selected when the external air is cooler than the internal air; the compressor then does not need to be used, resulting in high cooling efficiencies for these times. This may also be combined with seasonal thermal energy storage. Heating Some air conditioning systems can reverse the refrigeration cycle and act as an air source heat pump, thus heating instead of cooling the indoor environment. They are also commonly referred to as "reverse cycle air conditioners". The heat pump is significantly more energy-efficient than electric resistance heating, because it moves energy from air or groundwater to the heated space in addition to the heat from purchased electrical energy. When the heat pump is in heating mode, the indoor evaporator coil switches roles and becomes the condenser coil, producing heat. The outdoor condenser unit also switches roles to serve as the evaporator and discharges cold air (colder than the ambient outdoor air). Most air source heat pumps become less efficient in outdoor temperatures lower than 4 °C or 40 °F. This is partly because ice forms on the outdoor unit's heat exchanger coil, which blocks air flow over the coil. To compensate for this, the heat pump system must temporarily switch back into the regular air conditioning mode to switch the outdoor evaporator coil back to the condenser coil, so that it can heat up and defrost. Therefore, some heat pump systems will have electric resistance heating in the indoor air path that is activated only in this mode to compensate for the temporary indoor air cooling, which would otherwise be uncomfortable in the winter. Newer models have improved cold-weather performance, with efficient heating capacity maintained down to much lower outdoor temperatures. However, there is always a chance that the humidity that condenses on the heat exchanger of the outdoor unit could freeze, even in models that have improved cold-weather performance, requiring a defrosting cycle to be performed. The icing problem becomes much more severe with lower outdoor temperatures, so heat pumps are sometimes installed in tandem with a more conventional form of heating, such as an electrical heater, a natural gas, heating oil, or wood-burning fireplace, or central heating, which is used instead of or in addition to the heat pump during harsher winter temperatures. In this case, the heat pump is used efficiently during milder temperatures, and the system is switched to the conventional heat source when the outdoor temperature is lower. Performance The coefficient of performance (COP) of an air conditioning system is a ratio of useful heating or cooling provided to the work required. Higher COPs equate to lower operating costs. 
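To make these quantities concrete, the sketch below (illustrative Python, not from the article's sources) applies the per-ton figures given in this Performance section, namely 12,000 BTU per hour or 3,517 watts per ton of refrigeration, and the COP ratio just defined:

```python
# A minimal illustrative sketch (not from the article's sources) of the
# unit relationships used in this section: 1 ton of refrigeration
# = 12,000 BTU/h ~= 3,517 W, and COP = useful cooling / input power.

BTU_PER_HOUR_IN_WATTS = 0.29307107  # 1 BTU/h expressed in watts

def tons_to_watts(tons: float) -> float:
    """Convert tons of refrigeration to watts of cooling capacity."""
    return tons * 12_000 * BTU_PER_HOUR_IN_WATTS

def cop(cooling_watts: float, input_watts: float) -> float:
    """Coefficient of performance: useful cooling per unit input power."""
    return cooling_watts / input_watts

print(f"{tons_to_watts(1):,.0f} W per ton")               # ~3,517 W
capacity = tons_to_watts(3)                                # a typical 3-ton unit
print(f"COP at 3.5 kW input: {cop(capacity, 3500):.2f}")   # ~3.0
```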
The COP usually exceeds 1; however, the exact value is highly dependent on operating conditions, especially absolute temperature and relative temperature between sink and system, and is often graphed or averaged against expected conditions. Air conditioner equipment power in the U.S. is often described in terms of "tons of refrigeration", with each approximately equal to the cooling power of one short ton of ice melting in a 24-hour period. The value is equal to 12,000 BTU per hour, or 3,517 watts. Residential central air systems are usually from 1 to 5 tons (3.5 to 18 kW) in capacity. The efficiency of air conditioners is often rated by the seasonal energy efficiency ratio (SEER), which is defined by the Air Conditioning, Heating and Refrigeration Institute in its 2008 standard AHRI 210/240, Performance Rating of Unitary Air-Conditioning and Air-Source Heat Pump Equipment. A similar standard is the European seasonal energy efficiency ratio (ESEER). Efficiency is strongly affected by the humidity of the air to be cooled. Dehumidifying the air before attempting to cool it can reduce subsequent cooling costs by as much as 90 percent. Thus, reducing dehumidifying costs can materially affect overall air conditioning costs. Control system Wireless remote control This type of controller uses an infrared LED to relay commands from a remote control to the air conditioner. The output of the infrared LED (like that of any infrared remote) is invisible to the human eye because its wavelength is beyond the range of visible light (940 nm). This system is commonly used on mini-split air conditioners because it is simple and portable. Some window and ducted central air conditioners use it as well. Wired controller A wired controller, also called a "wired thermostat," is a device that controls an air conditioner by switching heating or cooling on or off. It uses different sensors to measure temperatures and actuate control operations. Mechanical thermostats commonly use bimetallic strips, converting a temperature change into mechanical displacement, to actuate control of the air conditioner. Electronic thermostats, instead, use a thermistor or other semiconductor sensor, processing temperature change as electronic signals to control the air conditioner. These controllers are usually used in hotel rooms because they are permanently installed into a wall and hard-wired directly into the air conditioner unit, eliminating the need for batteries. Types Typical capacities in kilowatts are as follows: very small: <1.5 kW; small: 1.5–3.5 kW; medium: 4.2–7.1 kW; large: 7.2–14 kW; very large: >14 kW. Mini-split and multi-split systems Ductless systems (often mini-split, though ducted mini-splits also exist) typically supply conditioned and heated air to a single or a few rooms of a building, without ducts and in a decentralized manner. Multi-zone or multi-split systems are a common application of ductless systems and allow up to eight rooms (zones or locations) to be conditioned independently from each other, each with its own indoor unit and simultaneously from a single outdoor unit. The first mini-split system was sold in 1961 by Toshiba in Japan, and the first wall-mounted mini-split air conditioner was sold in 1968 in Japan by Mitsubishi Electric, where small home sizes motivated their development. The Mitsubishi model was the first air conditioner with a cross-flow fan. In 1969, the first mini-split air conditioner was sold in the US. 
Multi-zone ductless systems were invented by Daikin in 1973, and variable refrigerant flow systems (which can be thought of as larger multi-split systems) were also invented by Daikin in 1982. Both were first sold in Japan. Variable refrigerant flow systems, when compared with central plant cooling from an air handler, eliminate the need for large cool air ducts, air handlers, and chillers; instead, cool refrigerant is transported through much smaller pipes to the indoor units in the spaces to be conditioned, thus allowing for less space above dropped ceilings and a lower structural impact, while also allowing for more individual and independent temperature control of spaces. The outdoor and indoor units can be spread across the building. Variable refrigerant flow indoor units can also be turned off individually in unused spaces. The lower start-up power of VRF's DC inverter compressors and their inherent DC power requirements also allow VRF solar-powered heat pumps to be run using DC-providing solar panels. Ducted central systems Split-system central air conditioners consist of two heat exchangers, an outside unit (the condenser) from which heat is rejected to the environment and an internal heat exchanger (the evaporator, or fan coil unit, FCU), with the piped refrigerant being circulated between the two. The FCU is then connected to the spaces to be cooled by ventilation ducts. Floor-standing air conditioners are similar to this type of air conditioner but sit within the spaces that need cooling. Central plant cooling Large central cooling plants may use an intermediate coolant such as chilled water pumped into air handlers or fan coil units near or in the spaces to be cooled, which then duct or deliver cold air into the spaces to be conditioned, rather than ducting cold air directly to these spaces from the plant; the latter is not done due to the low density and heat capacity of air, which would require impractically large ducts. The chilled water is cooled by chillers in the plant, which use a refrigeration cycle to cool water, often transferring the heat to the atmosphere, even in liquid-cooled chillers, through the use of cooling towers. Chillers may be air- or liquid-cooled. Portable units A portable system has an indoor unit on wheels connected to an outdoor unit via flexible pipes, similar to a permanently fixed installed unit (such as a ductless split air conditioner). Hose systems, which can be monoblock or air-to-air, are vented to the outside via air ducts. The monoblock type collects the water in a bucket or tray and stops when full. The air-to-air type re-evaporates the water, discharges it through the ducted hose, and can run continuously. Many but not all portable units draw indoor air and expel it outdoors through a single duct, negatively impacting their overall cooling efficiency. Many portable air conditioners come with heating as well as a dehumidification function. Window unit and packaged terminal The packaged terminal air conditioner (PTAC), through-the-wall, and window air conditioners are similar. These units are installed in a window frame or in a wall opening. The unit usually has an internal partition separating its indoor and outdoor sides, which contain the unit's condenser and evaporator, respectively. PTAC systems may be adapted to provide heating in cold weather, either directly by using an electric strip, gas, or other heaters, or by reversing the refrigerant flow to heat the interior and draw heat from the exterior air, converting the air conditioner into a heat pump. 
They may be installed in a wall opening with the help of a special sleeve and a custom grill that is flush with the wall; window air conditioners can also be installed in a window, but without a custom grill. Packaged air conditioner Packaged air conditioners (also known as self-contained units) are central systems that integrate into a single housing all the components of a split central system, and deliver air, possibly through ducts, to the spaces to be cooled. Depending on their construction, they may be outdoors or indoors, on roofs (rooftop units), may draw the air to be conditioned from inside or outside a building, and may be water- or air-cooled. Often, outdoor units are air-cooled while indoor units are liquid-cooled using a cooling tower. Types of compressors Reciprocating This compressor consists of a crankcase, crankshaft, piston rod, piston, piston ring, cylinder head and valves. Scroll This compressor uses two interleaving scrolls to compress the refrigerant. It consists of one fixed and one orbiting scroll. This type of compressor is more efficient because it has 70 percent fewer moving parts than a reciprocating compressor. Screw This compressor uses two very closely meshing spiral rotors to compress the gas. The gas enters at the suction side and moves through the threads as the screws rotate. The meshing rotors force the gas through the compressor, and the gas exits at the end of the screws. The working area is the inter-lobe volume between the male and female rotors. It is larger at the intake end, and decreases along the length of the rotors until the exhaust port. This change in volume is the compression. Capacity modulation technologies There are several ways to modulate the cooling capacity in refrigeration or air conditioning and heating systems. The most common in air conditioning are: on-off cycling, hot gas bypass, the use (or not) of liquid injection, manifold configurations of multiple compressors, mechanical modulation (also called digital), and inverter technology. Hot gas bypass Hot gas bypass involves injecting a quantity of gas from the discharge side to the suction side. The compressor keeps operating at the same speed, but due to the bypass, the refrigerant mass flow circulating within the system is reduced, and thus so is the cooling capacity. This naturally causes the compressor to run uselessly during the periods when the bypass is operating. The turn-down capacity varies between 0 and 100%. Manifold configurations Several compressors can be installed in the system to provide the peak cooling capacity. Each compressor can run or not in order to stage the cooling capacity of the unit. The turn-down capacity is either 0/33/66 or 100% for a trio configuration and either 0/50 or 100% for a tandem. Mechanically modulated compressor This internal mechanical capacity modulation is based on a periodic compression process with a control valve: the two scroll sets move apart, stopping compression for a given time period. This method varies refrigerant flow by changing the average time of compression, but not the actual speed of the motor. Despite an excellent turndown ratio (from 10 to 100% of cooling capacity), mechanically modulated scrolls have high energy consumption, as the motor runs continuously. Variable-speed compressor This system uses a variable-frequency drive (also called an inverter) to control the speed of the compressor. The refrigerant flow rate is changed by the change in the speed of the compressor. 
The turn-down ratio depends on the system configuration and manufacturer. It modulates from 15 or 25% up to 100% of full capacity with a single inverter, and from 12 to 100% with a hybrid tandem. This method is the most efficient way to modulate an air conditioner's capacity. It is up to 58% more efficient than a fixed-speed system. Impact Health effects In hot weather, air conditioning can prevent heat stroke, dehydration due to excessive sweating, electrolyte imbalance, kidney failure, and other issues due to hyperthermia. Heat waves are the most lethal type of weather phenomenon in the United States. A 2020 study found that areas with lower use of air conditioning correlated with higher rates of heat-related mortality and hospitalizations. The August 2003 France heatwave resulted in approximately 15,000 deaths; 80% of the victims were over 75 years old. In response, the French government required all retirement homes to have at least one air-conditioned room per floor for use during heatwaves. Air conditioning (including filtration, humidification, cooling, and disinfection) can be used to provide a clean, safe, hypoallergenic atmosphere in hospital operating rooms and other environments where a proper atmosphere is critical to patient safety and well-being. It is sometimes recommended for home use by people with allergies, especially to mold. However, poorly maintained water cooling towers can promote the growth and spread of microorganisms such as Legionella pneumophila, the infectious agent responsible for Legionnaires' disease. As long as the cooling tower is kept clean (usually by means of a chlorine treatment), these health hazards can be avoided or reduced. The state of New York has codified requirements for registration, maintenance, and testing of cooling towers to protect against Legionella. Economic effects First designed to benefit targeted industries such as the press as well as large factories, the invention quickly spread to public agencies and administrations, with studies claiming productivity increases of close to 24% in places equipped with air conditioning. Air conditioning caused various shifts in demography, notably that of the United States starting from the 1970s. In the US, the birth rate was lower in the spring than during other seasons until the 1970s, but this difference has since declined. As of 2007, the Sun Belt contained 30% of the total US population, while it was inhabited by 24% of Americans at the beginning of the 20th century. Moreover, the summer mortality rate in the US, which had been higher in regions subject to a heat wave during the summer, also evened out. The spread of air conditioning is a main driver of the growth in global electricity demand. According to a 2018 report from the International Energy Agency (IEA), the energy consumption for cooling in the United States, a country of 328 million people, surpasses the combined energy consumption of 4.4 billion people in Africa, Latin America, the Middle East, and Asia (excluding China). A 2020 survey found that an estimated 88% of all US households use AC, increasing to 93% when solely looking at homes built between 2010 and 2020. Environmental effects Space cooling, including air conditioning, accounted globally for 2,021 terawatt-hours of energy usage in 2016, with around 99% in the form of electricity, according to a 2018 report on air-conditioning efficiency by the International Energy Agency.
The report predicts an increase of electricity usage due to space cooling to around 6,200 TWh by 2050, and that with the progress currently seen, greenhouse gas emissions attributable to space cooling will double: from 1,135 million tons (2016) to 2,070 million tons. There is some push to increase the energy efficiency of air conditioners. The United Nations Environment Programme (UNEP) and the IEA found that if air conditioners could be twice as efficient as now, 460 billion tons of GHG could be cut over 40 years. The UNEP and IEA also recommended legislation to decrease the use of hydrofluorocarbons, better building insulation, and more sustainable temperature-controlled food supply chains going forward. Refrigerants have also caused and continue to cause serious environmental issues, including ozone depletion and climate change, as several countries have not yet ratified the Kigali Amendment to reduce the consumption and production of hydrofluorocarbons. CFC and HCFC refrigerants such as R-12 and R-22, respectively, used within air conditioners have caused damage to the ozone layer, and hydrofluorocarbon refrigerants such as R-410A and R-404A, which were designed to replace CFCs and HCFCs, are instead exacerbating climate change. Both issues happen due to the venting of refrigerant to the atmosphere, such as during repairs. HFO refrigerants, used in some if not most new equipment, solve both issues with an ozone depletion potential (ODP) of zero and a much lower global warming potential (GWP) in the single or double digits vs. the three or four digits of hydrofluorocarbons. Hydrofluorocarbons would have raised global temperatures by around by 2100 without the Kigali Amendment. With the Kigali Amendment, the increase of global temperatures by 2100 due to hydrofluorocarbons is predicted to be around . Alternatives to continual air conditioning include passive cooling, passive solar cooling, natural ventilation, operating shades, and the use of trees, architectural shading, and window coatings to reduce solar gain. Social effects Socioeconomic groups with a household income below around $10,000 tend to have low air conditioning adoption, which worsens heat-related mortality. The lack of cooling can be hazardous, as areas with lower use of air conditioning correlate with higher rates of heat-related mortality and hospitalizations. Premature mortality in NYC is projected to grow between 47% and 95% in 30 years, with lower-income and vulnerable populations most at risk. Studies on the correlation between heat-related mortality and hospitalizations and living in low-socioeconomic locations have been conducted in Phoenix, Arizona; Hong Kong; China; Japan; and Italy. Additionally, health care costs can act as another barrier, as a lack of private health insurance was associated with heat-related hospitalization during a 2009 heat wave in Australia. Disparities in socioeconomic status and access to air conditioning are connected by some to institutionalized racism, which leads to the association of specific marginalized communities with lower economic status, poorer health, residing in hotter neighborhoods, engaging in physically demanding labor, and experiencing limited access to cooling technologies such as air conditioning. A study of Chicago, Illinois, and Detroit, Michigan, found that black households were half as likely to have central air conditioning units when compared to their white counterparts.
Especially in cities, redlining creates heat islands, increasing temperatures in certain parts of the city. This is due to heat-absorbing building materials and pavements and a lack of vegetation and shade coverage. There have been initiatives that provide cooling solutions to low-income communities, such as public cooling spaces. Other techniques Buildings designed with passive air conditioning are generally less expensive to construct and maintain than buildings with conventional HVAC systems, and have lower energy demands. While tens of air changes per hour, and cooling of tens of degrees, can be achieved with passive methods, site-specific microclimate must be taken into account, complicating building design. Many techniques can be used to increase comfort and reduce the temperature in buildings. These include evaporative cooling, selective shading, wind, thermal convection, and heat storage. Passive ventilation Passive cooling Daytime radiative cooling Passive daytime radiative cooling (PDRC) surfaces reflect incoming solar radiation and heat back into outer space through the infrared window for cooling during the daytime. Daytime radiative cooling became possible with the ability to suppress solar heating using photonic structures, which emerged through a study by Raman et al. (2014). PDRCs can come in a variety of forms, including paint coatings and films, designed to be high in solar reflectance and thermal emittance. PDRC applications on building roofs and envelopes have demonstrated significant decreases in energy consumption and costs. In suburban single-family residential areas, PDRC application on roofs can potentially lower energy costs by 26% to 46%. PDRCs are predicted to show a market size of ~$27 billion for indoor space cooling by 2025 and have undergone a surge in research and development since the 2010s. Fans Hand fans have existed since prehistory. Large human-powered fans built into buildings include the punkah. The 2nd-century Chinese inventor Ding Huan of the Han dynasty invented a rotary fan for air conditioning, with seven wheels in diameter and manually powered by prisoners. In 747, Emperor Xuanzong (r. 712–762) of the Tang dynasty (618–907) had the Cool Hall (Liang Dian) built in the imperial palace, which the Tang Yulin describes as having water-powered fan wheels for air conditioning as well as rising jet streams of water from fountains. During the subsequent Song dynasty (960–1279), written sources mention the air-conditioning rotary fan as even more widely used. Thermal buffering In areas that are cold at night or in winter, heat storage is used. Heat may be stored in earth or masonry; air is drawn past the masonry to heat or cool it. In areas that are below freezing at night in winter, snow and ice can be collected and stored in ice houses for later use in cooling. This technique is over 3,700 years old in the Middle East. Harvesting outdoor ice during winter and transporting and storing it for use in summer was practiced by wealthy Europeans in the early 1600s, and became popular in Europe and the Americas towards the end of the 1600s. This practice was eventually replaced by mechanical compression-cycle icemakers. Evaporative cooling In dry, hot climates, the evaporative cooling effect may be used by placing water at the air intake, such that the draft draws air over water and then into the house.
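The cooling achievable this way can be estimated with the standard effectiveness model for direct evaporative coolers, in which the supply air approaches the wet-bulb temperature. The following Python sketch is illustrative only; the 85% pad effectiveness and the example temperatures are assumed values, not measurements.

```python
# Direct evaporative cooler, standard effectiveness model:
#   T_out = T_db - e * (T_db - T_wb)
# With e = 1 the outlet would reach the wet-bulb temperature, the
# theoretical limit. The 0.85 effectiveness is an assumed, typical value.

def evaporative_outlet_temp(dry_bulb_c: float, wet_bulb_c: float,
                            effectiveness: float = 0.85) -> float:
    """Estimate the supply-air temperature of a direct evaporative cooler."""
    return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

# A hot, dry afternoon: 40 C dry-bulb and 20 C wet-bulb give 23 C supply air.
print(evaporative_outlet_temp(40.0, 20.0))
```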
For this reason, it is sometimes said that the fountain, in the architecture of hot, arid climates, is like the fireplace in the architecture of cold climates. Evaporative cooling also makes the air more humid, which can be beneficial in a dry desert climate. Evaporative coolers tend to perform poorly during times of high humidity, when there is little dry air with which the coolers can work to cool the air for dwelling occupants. Unlike other types of air conditioners, evaporative coolers rely on outside air being channeled through cooler pads, which cool the air before it reaches the inside of a house through its air duct system; this cooled outside air must be allowed to push the warmer air within the house out through an exhaust opening such as an open door or window. See also Air filter Air purifier Cleanroom Crankcase heater Energy recovery ventilation Indoor air quality Particulates References External links Carrier's original patent Scientific American, "Artificial Cold", 28 August 1880, p. 138 Scientific American, "The Presidential Cold Air Machine", 6 August 1881, p. 84 1902 introductions American inventions Ancient Egyptian technology Ancient Roman technology Building automation Chinese inventions Cooling technology Dutch inventions Gas technologies Home appliances
Air conditioning
[ "Physics", "Technology", "Engineering" ]
7,328
[ "Machines", "Building engineering", "Automation", "Physical systems", "Home appliances", "Building automation" ]
7,221,195
https://en.wikipedia.org/wiki/Page%20Up%20and%20Page%20Down%20keys
The Page Up and Page Down keys (sometimes abbreviated as PgUp and PgDn) are two keys commonly found on computer keyboards. The two keys are primarily used to scroll up or down in documents, but the scrolling distance varies between different applications. In word processors, for instance, they may jump by an emulated physical page or by a screen view that may show only part of one page or many pages at once, depending on zoom factor. When the document is shorter than the full screen, the keys often have no visible effect at all. Operating systems differ as to whether the keys (pressed without a modifier) simply move the view – e.g. in Mac OS X – or also the input caret – e.g. in Microsoft Windows. In right-to-left settings, Page Up will move either upwards or rightwards (instead of left) and Page Down will move down or leftwards (instead of right). The keys have been dubbed and , accordingly. The arrow keys and the scroll wheel can also be used to scroll a document, although usually by smaller incremental distances. Used together with a modifier key, such as , , or a combination thereof, they may act the same as the Page keys. In most operating systems, if the Page Up or Page Down key is pressed along with the Shift key in editable text, all the text scrolled over will be highlighted. In some applications, the Page Up and Page Down keys behave differently in caret navigation (toggled with the function key in Windows). For a claimed 30% of people, the paging keys move the text in the opposite direction to what they find natural, and software may contain settings to reverse the operation of these keys to accommodate that. In August 2008, Microsoft received US patent #7,415,666 for the functions of the two keys – Page Up and Page Down. See also Arrow keys Scroll wheel References Computer keys
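As an illustration of the screenful-based paging described above, the sketch below shows how a simple text viewer might handle these keys using Python's standard curses module, where curses.KEY_PPAGE and curses.KEY_NPAGE are the constants delivered for Page Up and Page Down. This is a minimal sketch, not a production pager: the 200-line sample document is a placeholder, and the curses module is not available in standard Windows builds of Python.

```python
# Minimal pager: Page Down / Page Up scroll by one screenful.
import curses

def pager(stdscr):
    lines = [f"line {i}" for i in range(200)]    # placeholder document
    top = 0
    while True:
        height, _ = stdscr.getmaxyx()
        page = height - 1                        # rows available for text
        stdscr.erase()
        for row, text in enumerate(lines[top:top + page]):
            stdscr.addstr(row, 0, text)
        stdscr.refresh()
        key = stdscr.getch()
        if key == curses.KEY_NPAGE:              # Page Down: one screenful
            top = min(max(len(lines) - page, 0), top + page)
        elif key == curses.KEY_PPAGE:            # Page Up: one screenful back
            top = max(0, top - page)
        elif key == ord("q"):                    # quit
            break

if __name__ == "__main__":
    curses.wrapper(pager)
```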
Page Up and Page Down keys
[ "Technology" ]
390
[ "Computing stubs", "Computer hardware stubs" ]
7,221,237
https://en.wikipedia.org/wiki/Tannakian%20formalism
In mathematics, a Tannakian category is a particular kind of monoidal category C, equipped with some extra structure relative to a given field K. The role of such categories C is to generalise the category of linear representations of an algebraic group G defined over K. A number of major applications of the theory have been made, or might be made, in pursuit of some of the central conjectures of contemporary algebraic geometry and number theory. The name is taken from Tadao Tannaka and Tannaka–Krein duality, a theory about compact groups G and their representation theory. The theory was developed first in the school of Alexander Grothendieck. It was later reconsidered by Pierre Deligne, and some simplifications made. The pattern of the theory is that of Grothendieck's Galois theory, which is a theory about finite permutation representations of groups G which are profinite groups. The gist of the theory is that the fiber functor Φ of the Galois theory is replaced by an exact and faithful tensor functor F from C to the category of finite-dimensional vector spaces over K. The group of natural transformations of Φ to itself, which turns out to be a profinite group in the Galois theory, is replaced by the group G of natural transformations of F into itself that respect the tensor structure. This is in general not an algebraic group but a more general group scheme that is an inverse limit of algebraic groups (a pro-algebraic group), and C is then found to be equivalent to the category of finite-dimensional linear representations of G. More generally, it may be that fiber functors F as above only exist with values in categories of finite-dimensional vector spaces over non-trivial extension fields L/K. In such cases the group scheme G is replaced by a gerbe on the fpqc site of Spec(K), and C is then equivalent to the category of (finite-dimensional) representations of . Formal definition of Tannakian categories Let K be a field and C a K-linear abelian rigid tensor (i.e., symmetric monoidal) category such that . Then C is a Tannakian category (over K) if there is an extension field L of K such that there exists a K-linear exact and faithful tensor functor (i.e., a strong monoidal functor) F from C to the category of finite-dimensional L-vector spaces. A Tannakian category over K is neutral if such an exact faithful tensor functor F exists with L=K. Applications The Tannakian construction is used in relations between Hodge structures and l-adic representations. Morally, the philosophy of motives tells us that the Hodge structure and the Galois representation associated to an algebraic variety are related to each other. The closely related algebraic groups, the Mumford–Tate group and the motivic Galois group, arise from categories of Hodge structures, categories of Galois representations, and motives through Tannakian categories. The Mumford–Tate conjecture proposes that the algebraic groups arising from the Hodge structure and the Galois representation by means of Tannakian categories are isomorphic to one another up to connected components. Those areas of application are closely connected to the theory of motives. Another place in which Tannakian categories have been used is in connection with the Grothendieck–Katz p-curvature conjecture; in other words, in bounding monodromy groups. The Geometric Satake equivalence establishes an equivalence between representations of the Langlands dual group of a reductive group G and certain equivariant perverse sheaves on the affine Grassmannian associated to G.
This equivalence provides a non-combinatorial construction of the Langlands dual group. It is proved by showing that the mentioned category of perverse sheaves is a Tannakian category and identifying its Tannaka dual group with . Extensions has established partial Tannaka duality results in the situation where the category is R-linear, where R is no longer a field (as in classical Tannakian duality), but certain valuation rings. has initiated and developed Tannaka duality in the context of infinity-categories. References Further reading M. Larsen and R. Pink. Determining representations from invariant dimensions. Invent. math., 102:377–389, 1990. Monoidal categories Algebraic groups Duality theories
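The neutral case of the reconstruction described above can be summarized in symbols. The display below is simply a LaTeX restatement of definitions already given in this article: F is the fiber functor, Vect and Rep denote finite-dimensional vector spaces and representations, and G is the affine group scheme of tensor-preserving natural automorphisms of F.

```latex
% Neutral Tannakian reconstruction, restating the definitions above.
\[
  F \colon C \longrightarrow \mathrm{Vect}^{\mathrm{fd}}_{K},
  \qquad
  G = \underline{\mathrm{Aut}}^{\otimes}(F),
  \qquad
  C \simeq \mathrm{Rep}^{\mathrm{fd}}_{K}(G).
\]
```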
Tannakian formalism
[ "Mathematics" ]
907
[ "Mathematical structures", "Monoidal categories", "Category theory", "Duality theories", "Geometry" ]
7,221,435
https://en.wikipedia.org/wiki/Wind%20direction
Wind direction is generally reported by the direction from which the wind originates. For example, a north or northerly wind blows from the north to the south; the exceptions are onshore winds (blowing onto the shore from the water) and offshore winds (blowing off the shore to the water). Wind direction is usually reported in cardinal (or compass) direction, or in degrees. Consequently, a wind blowing from the north has a wind direction referred to as 0° (360°); a wind blowing from the east has a wind direction referred to as 90°, etc. Weather forecasts typically give the direction of the wind along with its speed, for example a "northerly wind at 15 km/h" is a wind blowing from the north at a speed of 15 km/h. If wind gusts are present, their speed may also be reported. Measurement techniques A variety of instruments can be used to measure wind direction, such as the anemoscope, windsock, and wind vane. All these instruments work by moving to minimize air resistance. The way a weather vane is pointed by prevailing winds indicates the direction from which the wind is blowing. The larger opening of a windsock faces the direction that the wind is blowing from; its tail, with the smaller opening, points in the same direction as the wind is blowing. Modern instruments used to measure wind speed and direction are called anemoscopes, anemometers and wind vanes. These types of instruments are used by the wind energy industry, both for wind resource assessment and turbine control. When a high measurement frequency is needed (such as in research applications), wind can be measured by the propagation speed of ultrasound signals or by the effect of ventilation on the resistance of a heated wire. Another type of anemometer uses pitot tubes that take advantage of the pressure differential between an inner tube and an outer tube that is exposed to the wind to determine the dynamic pressure, which is then used to compute the wind speed. In situations where modern instruments are not available, an index finger can be used to test the direction of wind. This is accomplished by wetting the finger and pointing it upwards. The side of the finger that feels "cool" is (approximately) the direction from which the wind is blowing. The "cool" sensation is caused by an increased rate of evaporation of the moisture on the finger due to the air flow across the finger, and consequently the "finger technique" of measuring wind direction does not work well in either very humid or very hot conditions. The same principle is used to measure the dew point using a sling psychrometer (a more accurate instrument than the human finger). Another primitive technique for measuring wind direction is to take a pinch of grass and drop it; the direction that the grass falls is the direction the wind is blowing. This last technique is often used by golfers because it allows them to gauge the strength of the wind. See also Air masses Apparent wind Beaufort scale Wind fetch Wind power Wind rose Wind transducer Yamartino method for calculating the standard deviation of wind direction References Meteorological phenomena Meteorological quantities Wind
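The degree convention described above maps naturally onto compass points. The following Python sketch is a minimal illustration of converting a meteorological wind direction in degrees (the direction the wind blows from, with 0°/360° as north and 90° as east) to a 16-point compass label; the function name is illustrative.

```python
# Convert a wind direction in degrees to a 16-point compass label.
# Each named sector spans 22.5 degrees, centred on the named direction.

POINTS = ["N", "NNE", "NE", "ENE", "E", "ESE", "SE", "SSE",
          "S", "SSW", "SW", "WSW", "W", "WNW", "NW", "NNW"]

def degrees_to_compass(direction_deg: float) -> str:
    """Map a direction in degrees (wind blowing FROM) to a compass name."""
    index = int((direction_deg % 360.0) / 22.5 + 0.5) % 16
    return POINTS[index]

print(degrees_to_compass(0))     # N
print(degrees_to_compass(90))    # E
print(degrees_to_compass(350))   # N (wraps around north)
```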
Wind direction
[ "Physics", "Mathematics" ]
635
[ "Physical phenomena", "Earth phenomena", "Physical quantities", "Quantity", "Meteorological quantities", "Meteorological phenomena" ]
7,221,759
https://en.wikipedia.org/wiki/Humanist%20%28electronic%20seminar%29
Humanist is an international electronic seminar on humanities computing and the digital humanities, in the form of a long-running electronic mailing list and its associated archive. The primary aim of Humanist is to provide a forum for discussion of intellectual, scholarly, pedagogical, and social issues and for exchange of information among members. Humanist is also a publication of the Alliance of Digital Humanities Organizations (ADHO) and the Office for Humanities Communication (OHC) and an affiliated publication of the American Council of Learned Societies (ACLS). In 2008, there were 1,650 subscribers. History The Humanist list was created in 1987 by Willard McCarty, then at the University of Toronto, as a BITNET (NetNorth in Canada) electronic mail newsletter for people who support computing in the humanities for the Association for Literary and Linguistic Computing. McCarty, later at King's College London, continued to edit it. Although Humanist started off as a means of communication for people directly involved in the support of humanities computing, it grew in scope to become an extended conversation about the nature of "humanities computing" (or "digital humanities", or one of a contested range of other names), about what computing looks like viewed from the humanities, and humanities from computing: "Humanist remains the forum within which the technology, informed by the concerns of humane learning, can be viewed from an interdisciplinary common ground." On 12 August 2020 the list went on hiatus for "a few weeks" of technical improvements. However, in February 2021 the list moved to a new infrastructure, hosted at the University of Cologne, Germany. References External links (from February 2021) at the Department for Digital Humanities of the University of Cologne, Germany Old website (effective until August 2020) at King's College London, allied with Alliance of Digital Humanities Organizations (ADHO) Office for Humanities Communication (OHC) Digital humanities Electronic mailing lists Humanities education Computer-related introductions in 1987
Humanist (electronic seminar)
[ "Technology" ]
393
[ "Digital humanities", "Computing and society" ]
7,222,298
https://en.wikipedia.org/wiki/Earthpark
Earthpark is a proposed educational facility with indoor rain forest and aquarium elements, and a mission of "inspiring generations to learn from the natural world." It was previously called the Environmental Project. Inspired by The Eden Project in Cornwall, England, Earthpark was intended to be an educational ecosystem and a popular visitor attraction. The project has remained dormant since 2008, though it was briefly re-visited in 2018. History and Funding Earthpark was to be located around Lake Red Rock, near the town of Pella, Iowa. Proposals from a variety of potential hosts, some outside of Iowa, were considered. Talks between Earthpark and the city of Coralville, Iowa, the original planned location of Earthpark, ended in 2006. The project then found a new home in Pella, Iowa, but that did not become a reality. In December 2007, federal funding from the U.S. Department of Energy for the Pella location, in the form of $50 million, was rescinded. In August 2008, when asked if any efforts would be made to get additional federal money for the project, U.S. Senator Chuck Grassley said "Not by this senator, and I don't think there will be any by other senators." The same year, one of the prospective founders behind the project, Ray Townsend's son Ted, pledged $32.9 million of his own money. Size and scope The total project cost for Earthpark was estimated to be $155 million, though over a third of that financial demand was met. The complex was planned to be in area with a 600,000 gallon aquarium and outdoor wetland and prairie exhibits. The Earthpark project was expected to employ 150 people directly and create an additional 2000 indirect jobs. The economic impact was estimated to be US$130 million annually. The park was projected to draw 1 million visitors annually to the Pella area. References External links Entertainment venues in Iowa Ecological experiments Environmental design
Earthpark
[ "Engineering" ]
398
[ "Environmental design", "Design" ]
7,222,821
https://en.wikipedia.org/wiki/Cast%20stone
Cast stone or reconstructed stone is a highly refined building material, a form of precast concrete used as masonry intended to simulate natural-cut stone. It is used for architectural features: trim, or ornament; facing buildings or other structures; statuary; and for garden ornaments. Cast stone can be made from white and/or grey cements, manufactured or natural sands, crushed stone or natural gravels, and colored with mineral coloring pigments. Cast stone may replace such common natural building stones as limestone, brownstone, sandstone, bluestone, granite, slate, coral, and travertine. History The earliest known use of cast stone dates from about 1138 in the Cité de Carcassonne, France. Cast stone was first used extensively in London in the 19th century and gained widespread acceptance in America in 1920. One of the earliest developments in the industry was Coade stone, a fired ceramic form of stoneware. Today most artificial stone consists of fine Portland cement-based concrete placed to set in wooden, rubber-lined fiberglass or iron moulds. It is cheaper and more uniform than natural stone, and widely used. In engineering projects, it allows transporting the bulk materials and casting near the place of use, which is cheaper than transporting and carving very large pieces of stone. According to Rupert Gunnis, a Dutchman named Van Spangen set up an artificial stone manufactory at Bow in London in 1800. He later went into partnership with a Mr. Powell; the firm was broken up in 1828, and the moulds were sold to a sculptor, Felix Austin. Another well-known variety was Victoria stone, which is composed of three parts finely crushed Mount Sorrel (Leicestershire) granite to one of Portland cement, carefully mechanically mixed and filled into moulds. After setting, the blocks are placed in a solution of silicate of soda to indurate and harden them. Many manufacturers turned out a very non-porous product able to resist corrosive sea air and industrial and residential air pollution. Manufacturing Cast stone is commonly manufactured by two methods: the dry tamp method and the wet cast process. Both methods produce a simulated natural-cut stone look. Wood, plaster, glue, sand, sheet metal, and gelatin are the molding materials used to produce the drawing work and casting molds, such as section, bed, and face templates. The dry tamp method requires a low-slump mixture that is tamped into the mold. Dry tamp stone consists of two layers: an inner layer of concrete and an outer decorative layer, also known as the facing layer. In the wet cast method, an integrally colored mixture with enough water to flow easily into the mold is used. With dry tamp mixtures, molds can be used many times, but with wet cast mixtures, molds can only be used once. Standards In the US and some other countries, the industry standard today for physical properties and raw materials constituents is ASTM C 1364, the Standard Specification for Architectural Cast Stone. Membership in ASTM International (founded in 1898 as the American Chapter of the International Association for Testing and Materials and most recently known as the American Society for Testing and Materials) exceeds 30,000 technical experts from more than 100 countries, who comprise a worldwide standards forum. The ASTM method of developing standards has been based on consensus of both users and producers of all kinds of materials.
The ASTM process ensures that interested individuals and organizations representing industry, academia, consumers, and governments alike all have an equal vote in determining a standard's content. In the UK and Europe, it is more normal to use the standard "BS 1217 Cast stone - Specification" from the BSI Group. The European Commission's "Construction Products Regulations" legislation states that from mid-2013 CE marking is mandatory for certain construction products sold in Europe, including some cast stone items. See also Geopolymers Anthropic rock Fambrini & Daniels Cast stone manufacturers. References Dictionnaire raisonné de l'architecture française du XIe au XVIe siècle/Béton Concrete Building materials Masonry Building stone Artificial stone
Cast stone
[ "Physics", "Engineering" ]
857
[ "Structural engineering", "Matter", "Building engineering", "Architecture", "Construction", "Materials", "Concrete", "Masonry", "Building materials" ]
16,065,959
https://en.wikipedia.org/wiki/Visco%20Corporation
is a software company located in Japan. It was founded in 1982 by and later incorporated on August 8, 1983, when it took the name "Visco" in Japan. The company originally developed video games for several platforms, from arcades and the NES to the Nintendo 64 and Neo Geo. When Visco was one of the companies under the Taito umbrella, some of its titles were labeled "Taito". It also teamed up with Seta and Sammy in developing arcade games powered by the SSV (Sammy, Seta and Visco) arcade system board. Sammy acquired the noted game company Sega in 2004 under a new holding company, Sega Sammy Holdings, and Seta's parent company Aruze announced in December 2008 that Seta would close its doors after 23 years of existence; the SSV board was therefore no longer produced. From 2008, Visco began manufacturing slot machines for casinos, mostly in southeast Asian regions. Games released References External links Visco Corporation's Official website Video game companies established in 1982 Video game companies of Japan Japanese companies established in 1982 Computer hardware companies Video game development companies
Visco Corporation
[ "Technology" ]
230
[ "Computer hardware companies", "Computers" ]
16,066,056
https://en.wikipedia.org/wiki/Digital%20modeling%20and%20fabrication
Digital modeling and fabrication is a design and production process that combines 3D modeling or computer-aided design (CAD) with additive and subtractive manufacturing. Additive manufacturing is also known as 3D printing, while subtractive manufacturing may also be referred to as machining, and many other technologies can be exploited to physically produce the designed objects. Modeling Digitally fabricated objects are created with a variety of CAD software packages, using both 2D vector drawing and 3D modeling. Types of 3D models include wireframe, solid, surface and mesh. A design has one or more of these model types. Machines for fabrication Three machines are popular for fabrication: the CNC router, the laser cutter, and the 3D printer. CNC milling machine CNC stands for "computer numerical control". CNC mills or routers include proprietary software which interprets 2D vector drawings or 3D models and converts this information to G-code, which represents specific CNC functions in an alphanumeric format that the CNC mill can interpret. The G-codes drive a machine tool, a powered mechanical device typically used to fabricate components. CNC machines are classified according to the number of axes that they possess, with 3, 4 and 5 axis machines all being common, and industrial robots being described as having as many as 9 axes. CNC machines are especially successful in milling materials such as plywood, plastics, foam board, and metal at a fast speed. CNC machine beds are typically large enough to allow 4' × 8' (122 cm × 244 cm) sheets of material, including foam several inches thick, to be cut. Laser cutter The laser cutter is a machine that uses a laser to cut materials such as chip board, matte board, felt, wood, and acrylic up to 3/8 inch (1 cm) in thickness. The laser cutter is often bundled with driver software which interprets vector drawings produced by any number of CAD software platforms. The laser cutter is able to modulate the speed of the laser head, as well as the intensity and resolution of the laser beam, and as such is able both to cut and to score material, as well as to approximate raster graphics. Objects cut out of materials can be used in the fabrication of physical models, which will only require the assembly of the flat parts. 3D printers 3D printers use a variety of methods and technologies to assemble physical versions of digital objects. Typically, desktop 3D printers can make small plastic 3D objects. They use a roll of thin plastic filament, melting the plastic and then depositing it precisely to cool and harden. They normally build 3D objects from bottom to top in a series of many very thin plastic horizontal layers. This process often happens over the course of several hours. Fused deposition modeling Fused deposition modeling, also known as fused filament fabrication, uses a 3-axis robotic system that extrudes material, typically a thermoplastic, one thin layer at a time and progressively builds up a shape. Examples of machines that use this method are the Dimension 768 and the Ultimaker. Stereolithography Stereolithography uses a high-intensity light projector, usually using DLP technology, with a photosensitive polymer resin. It projects the profile of an object to build a single layer, curing the resin into a solid shape. Then the printer moves the object out of the way by a small amount and projects the profile of the next layer. Examples of devices that use this method are the Form-One printer and Os-RC Illios.
Selective laser sintering Selective laser sintering uses a laser to trace out the shape of an object in a bed of finely powdered material that can be fused together by the application of heat from the laser. After one layer has been traced by a laser, the bed and partially finished part is moved out of the way, a thin layer of the powdered material is spread, and the process is repeated. Typical materials used are alumide, steel, glass, thermoplastics (especially nylon), and certain ceramics. Example devices include the Formiga P 110 and the Eos EosINT P730. Powder printer Powder printers work in a similar manner to SLS machines, and typically use powders that can be cured, hardened, or otherwise made solid by the application of a liquid binder that is delivered via an inkjet printhead. Common materials are plaster of paris, clay, powdered sugar, wood-filler bonding putty, and flour, which are typically cured with water, alcohol, vinegar, or some combination thereof. The major advantage of powder and SLS machines is their ability to continuously support all parts of their objects throughout the printing process with unprinted powder. This permits the production of geometries not easily otherwise created. However, these printers are often more complex and expensive. Examples of printers using this method are the ZCorp Zprint 400 and 450. See also Direct digital manufacturing Industry 4.0 Rapid Prototyping Responsive computer-aided design Technology education References 3D imaging 3D printing Computer-aided design Building technology Numerical control Laser applications Modelling Geometry processing
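The G-code mentioned in the CNC section above, and the bottom-to-top layering described for 3D printers, can be illustrated with a toy generator. The following Python sketch emits standard G-code commands (G21 and G90 for units and positioning mode, G0 rapid moves, and G1 linear moves with an F feed rate) to trace a square perimeter at successive layer heights; the dimensions and feed rate are made-up values for illustration, not settings for any real machine.

```python
# Toy G-code generator: trace a square perimeter layer by layer,
# in the style of fused deposition modeling. Illustrative only.

def square_layers(side_mm: float, layers: int, layer_height_mm: float) -> str:
    gcode = ["G21 ; units: millimetres", "G90 ; absolute positioning"]
    for layer in range(layers):
        z = (layer + 1) * layer_height_mm
        gcode.append(f"G0 Z{z:.2f} ; lift to next layer")
        gcode.append("G0 X0.00 Y0.00 ; rapid move to start corner")
        for x, y in [(side_mm, 0.0), (side_mm, side_mm),
                     (0.0, side_mm), (0.0, 0.0)]:
            gcode.append(f"G1 X{x:.2f} Y{y:.2f} F1200 ; trace perimeter")
    return "\n".join(gcode)

print(square_layers(side_mm=20.0, layers=3, layer_height_mm=0.2))
```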
Digital modeling and fabrication
[ "Technology", "Engineering" ]
1,042
[ "Computer-aided design", "Design engineering", "Industrial computing", "Digital manufacturing" ]
16,066,413
https://en.wikipedia.org/wiki/John%20R.%20Yates
John R. Yates III is an American chemist and Ernest W. Hahn Professor in the Departments of Molecular Medicine and Neurobiology at The Scripps Research Institute in La Jolla, California. He is currently editor-in-chief of the Journal of Proteome Research. He appeared on the 2022, 2023 and 2024 lists of Highly Cited Researchers, placing him among the top one percent of the most cited researchers in the world. His work is focused on developing tools and methods in proteomics, and he specializes in mass spectrometry. He is best known for the development of the Sequest algorithm for automated peptide sequencing, Multidimensional Protein Identification Technology (MudPIT), and data-independent acquisition. His lab is known for its contributions to methods to identify structural changes in proteins in living cells, as well as methods to identify N-glycan processing and occupancy using mass spectrometry, such as covalent protein painting and DeGlyPHER (Deglycosylation-dependent Glycan/Proteomic Heterogeneity Evaluation Report). His laboratory has made important contributions to understanding the biochemical mechanisms behind the failure of the ΔF508 cystic fibrosis transmembrane conductance regulator to mature. References External links Page at Scripps Lab page 21st-century American chemists Mass spectrometrists Living people Scripps Research faculty Year of birth missing (living people) Thomson Medal recipients
John R. Yates
[ "Physics", "Chemistry" ]
290
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
16,066,580
https://en.wikipedia.org/wiki/Tarski%27s%20plank%20problem
In mathematics, Tarski's plank problem is a question about coverings of convex regions in n-dimensional Euclidean space by "planks": regions between two parallel hyperplanes. Alfred Tarski asked if the sum of the widths of the planks must be at least the minimum width of the convex region. The question was answered affirmatively by Thøger Bang. Statement Given a convex body C in Rn and a hyperplane H, the width of C parallel to H, w(C,H), is the distance between the two supporting hyperplanes of C that are parallel to H. The smallest such distance (i.e. the infimum over all possible hyperplanes) is called the minimal width of C, w(C). The (closed) set of points P between two distinct, parallel hyperplanes in Rn is called a plank, and the distance between the two hyperplanes is called the width of the plank, w(P). Tarski conjectured that if a convex body C of minimal width w(C) was covered by a collection of planks, then the sum of the widths of those planks must be at least w(C). That is, if P1,…,Pm are planks such that C ⊆ P1 ∪ … ∪ Pm, then w(P1) + … + w(Pm) ≥ w(C). Bang proved this is indeed the case. Nomenclature The name of the problem, specifically for the sets of points between parallel hyperplanes, comes from the visualisation of the problem in R2. Here, hyperplanes are just straight lines and so planks become the space between two parallel lines. Thus the planks can be thought of as (infinitely long) planks of wood, and the question becomes how many planks one needs to completely cover a convex tabletop of minimal width w. Bang's theorem shows that, for example, a circular table of diameter d feet can't be covered by fewer than d planks of wood of width one foot each. References Geometry
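Restated in symbols from the definitions above (this display is just a LaTeX transcription of the statement in the article):

```latex
% Bang's theorem: a convex body C covered by planks P_1, ..., P_m
% forces the plank widths to sum to at least the minimal width of C.
\[
  C \subseteq \bigcup_{i=1}^{m} P_i
  \quad\Longrightarrow\quad
  \sum_{i=1}^{m} w(P_i) \ \ge\ w(C).
\]
```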
Tarski's plank problem
[ "Mathematics" ]
393
[ "Geometry" ]
16,067,587
https://en.wikipedia.org/wiki/Friant-Kern%20Canal
The Friant-Kern Canal is an aqueduct managed by the United States Bureau of Reclamation in Central California to convey water to augment irrigation capacity in Fresno, Tulare, and Kern counties. A part of the Central Valley Project, canal construction began in 1949 and was completed in 1951 at a cost of $60.8 million. The Friant-Kern Canal begins at the Friant Dam of Millerton Lake, a reservoir on the San Joaquin River north of Fresno, and flows south along the eastern edge of the San Joaquin Valley, ending at the Kern River near Bakersfield. In a typical year, it diverts almost all the flow of the San Joaquin River, leaving the river dry for about downstream. The Central Valley Project Delta-Mendota Canal replenishes the San Joaquin River at the town of Mendota, and replaces the volume of water being delivered by the Friant-Kern Canal. Average annual throughput is , with a high of in 2005, and a low of in 2015. In recent years canal flows have been reduced due to river restoration projects requiring a greater release of water from the Friant Dam into the San Joaquin. The Friant-Kern Canal capacity is , gradually decreasing to at its terminus. The canal is built in both concrete-lined and unlined earth sections. It is up to wide at the top and is wide at the bottom of concrete segments, and wide in earth segments. Water depths range from . Introduction The Friant-Kern Canal delivers water to numerous districts, cities, and up to 15,000 family farms. The canal stems from the Friant Dam, located in the Sierra Nevada foothills near the town of Friant. Built by the Bureau of Reclamation, the dam reaches a height of 319 feet and a length of 3,488 feet, storing approximately 520,500 acre-feet of water. In addition to storing water, the dam produces renewable energy through a 25 MW power plant operated by the Friant Power Authority. The Friant-Kern Canal combats issues such as subsidence by providing water from the wetter northernmost part of the state to incentivize farmers to pump less groundwater. The Friant-Kern Canal is part of a much larger project called the Central Valley Project, or CVP. This water infrastructure system was created for many reasons, one being to ease the detrimental effects associated with excess groundwater pumping, particularly by farmers, and also to support the economic development necessary to withstand the massive influx of people entering the state, especially between 1920 and 1950. Central Valley Project The Central Valley Project was an ambitious project built to address many different problems affecting the state. The CVP was intended to reduce the impacts of flooding, provide water for varying purposes within the valley, distribute water to different urban centers around the region, generate electricity, and aid in conservation efforts. The entire project consists of 20 dams and reservoirs, which collectively store about 12 million acre-feet of water. Issues and concerns Environmental Impacts Environmental impacts associated with the Friant-Kern Canal vary across the state. Salmon populations are impacted due to the diversion of water from the natural stream flow. Along with depleted stream flows, the dam itself serves as a blockade against salmon traveling upstream in search of appropriate spawning grounds. Due to the diversion of water, dry reaches of riverbed are reported along some portions of the San Joaquin River. Along the river where the bed is dry, riparian habitats are suffering and native flora and fauna are detrimentally impacted.
With dry riverbeds and salmon populations suffering, a lawsuit was filed which led to a settlement urging restoration of the river. The river is replenished by the Delta-Mendota Canal, but not before negative impacts are observed. Subsidence Subsidence is caused by excess or unsustainable removal of groundwater, typically below an aquitard or confining layer. Up to 60% of the Friant-Kern Canal's water delivery capacity is negatively affected by land subsidence. This reduction in flow rates in the canal impacts both agricultural users and groundwater basins within the service area. Decreased flow rates mean more groundwater pumping by farmers and less groundwater recharge by state agencies. Both of these contribute to further subsidence and reductions in the ability to transport water during particularly wet years. By April 2017, the canal had subsided a total of twelve feet since its completion in 1949. The FWA estimates that current construction aimed at fixing the subsidence problem will reduce the delivery of class 2 supplies by about 100,000 acre-feet per year. Construction Construction is needed to fix the canal where subsidence has impacted its functionality. Proposed construction consists of the excavation of 400,000 cubic yards of soil and 17,000 cubic yards of rock. Other required materials include 450,000 cubic yards of backfill, along with 35,000 cubic yards of concrete lining material, 500,000 linear feet of aqualastic sealant, and 85,000 cubic yards of riprap. To minimize any possible negative effects on biological resources, construction will occur when canal flows are low enough to avoid in-water work. Construction will also have an effect on air quality, but only in the short term. The emission levels have been calculated to be under the federal and San Joaquin Valley Air Pollution Control District limits. Fugitive dust suppression is required to reduce air pollution as much as possible. Noise levels will also increase during construction; however, disturbance coordinators will be designated, with their contact information provided, and all machinery will be fine-tuned and equipped with the necessary noise mufflers. Restoring the canal has been postulated to provide an increase in local jobs for an economically depressed region. Myriophyllum hippuroides Myriophyllum hippuroides, also known as western watermilfoil, has been impacting the canal for some time. These weeds root themselves and reproduce in the earthen sections, which lack concrete canal lining. The weed can also attach itself to cracks in the concrete, so after floating down the canal for some time, the weed may find another home from which to reproduce and propagate. Chemical treatment is required for the successful removal of the aquatic weed, which can grow up to ten feet long. This weed has been reported to clog canals, water meters, and micro-irrigation sprinklers. Farmers who are trying to cut back on water usage by using micro-irrigation technology are especially susceptible to clogging by these weeds.
See also Madera Canal Temperance Flat Dam Footnotes References Friant Division Project, US Bureau of Reclamation Friant Division History , US Bureau of Reclamation Friant-Kern Canal Water Data, US Geological Survey Agriculture in California Aqueducts in California Transportation buildings and structures in Fresno County, California Irrigation in the United States Transportation buildings and structures in Kern County, California Transportation buildings and structures in Tulare County, California San Joaquin River United States Bureau of Reclamation Central Valley Project Interbasin transfer
Friant-Kern Canal
[ "Engineering", "Environmental_science" ]
1,413
[ "Hydrology", "Irrigation projects", "Central Valley Project", "Interbasin transfer" ]
16,067,595
https://en.wikipedia.org/wiki/Madera%20Canal
The Madera Canal is a -long aqueduct in the U.S. state of California. It is part of the Central Valley Project managed by the United States Bureau of Reclamation to convey water north to augment irrigation capacity in Madera County. It was also the subject of the United States Supreme Court's decision in Central Green Co. v. United States. The Madera Canal begins at Millerton Lake, a reservoir on the San Joaquin River north of Fresno. The canal runs north along the eastern edge of the San Joaquin Valley, ending at the Chowchilla River east of Chowchilla. Average annual throughput is . The Madera Canal has a capacity of , gradually decreasing to at the terminus. It was completed in 1945. The headworks was rebuilt in 1965 to deliver . References Central Valley Project - Friant Division, Bureau of Reclamation USGS flow data Agriculture in California Transportation buildings and structures in Madera County, California Central Valley Project Irrigation in the United States Aqueducts in California San Joaquin River United States Bureau of Reclamation 1945 establishments in California Transport infrastructure completed in 1945
Madera Canal
[ "Engineering" ]
220
[ "Irrigation projects", "Central Valley Project" ]
16,067,683
https://en.wikipedia.org/wiki/Montgomery%20Bell%20Tunnel
The Montgomery Bell Tunnel, also known as the Pattison Forge Tunnel, which Bell called "Pattison Forge" (often spelled, incorrectly, "Patterson") after his mother's maiden name, is a historic water diversion tunnel in Harpeth River State Park in Cheatham County, Tennessee. Built in 1819, the long tunnel is believed to be the first full-size tunnel built in the United States, and is the first used to divert water for industrial purposes. It was designated a Historic Civil Engineering Landmark in 1981, and a National Historic Landmark in 1994. Description and history The Montgomery Bell Tunnel is located in a unit of Harpeth River State Park, north of the town of Kingston Springs, Tennessee. In this area, the Harpeth River undergoes a series of meanders. In one of these, two parts of the river are quite close after a lengthy oxbow, known as the Narrows of the Harpeth. The tunnel runs roughly north-south across this isthmus. It is in length, and is dug entirely through limestone rock. Neither the tunnel nor its portals are lined in any way. The tunnel's profile shape is that of a rectangle topped by a segmented arch, and it is generally high and wide. The entrance portal is high and wide and the exit portal is high and wide. The tunnel has suffered some damage over the years due to erosive forces. Montgomery Bell, an entrepreneur from Pennsylvania who was involved in iron foundries in central Tennessee, purchased the land in this area in 1818. Recognizing the potential to apply water power to the process of producing wrought iron, he directed the construction of this tunnel, which facilitates use of a drop in river height for power generation. The tunnel is the first known example in the United States of a "full-scale" water diversion tunnel. It is also apparently the first "full-scale" tunnel of any type in the United States, its completion predating that of the Auburn Tunnel (1821) in Pennsylvania, whose construction was begun first. The tunnel is the only feature of Bell's iron works to survive; not surviving are a dam, head race, and the iron foundry itself, as well as Bell's house, which was built nearby. During the 1930s, the land in this area was leased to the Boy Scouts of America for use as a summer camp. It was deeded to the state in 1978. In the late evening of September 2, 2011, a fire was lit in the tunnel; driftwood deposited in the tunnel by the 2010 flood enlarged the fire. The fire was eventually extinguished in the early hours of the morning. The tunnel, and the road passing over it, were damaged, but both have since been stabilized, and the Montgomery Bell Tunnel is again safe. See also List of National Historic Landmarks in Tennessee National Register of Historic Places listings in Cheatham County, Tennessee References External links About Harpeth River State Park - Tennessee State Parks web site National Historic Landmarks in Tennessee Buildings and structures in Cheatham County, Tennessee Water tunnels in the United States Historic Civil Engineering Landmarks Tunnels in Tennessee Water tunnels on the National Register of Historic Places Tunnels completed in 1819 Transportation buildings and structures on the National Register of Historic Places in Tennessee National Register of Historic Places in Cheatham County, Tennessee
Montgomery Bell Tunnel
[ "Engineering" ]
666
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
16,068,771
https://en.wikipedia.org/wiki/Louis-Alexandre%20de%20Cessart
Louis-Alexandre de Cessart (25 August 1719, Paris – 12 April 1806, Rouen) was a French road and bridge engineer. He served in the "gendarmerie de la Maison du Roi", fighting at the battles of Fontenoy and Raucoux in 1745 and 1746. In 1747 he entered the school of Jean-Rodolphe Perronet, which later became the École nationale des ponts et chaussées. He contributed to the Encyclopédie with Perronet and Jean-Baptiste de Voglie. He was made under-engineer of the generality of Tours in 1751. Notably, it was he who conceived several bridges over the river Loire, along with the Pont des Arts over the Seine in Paris, the first dike project at Cherbourg, and several quays at ports in north-west France. Bibliography Louis-Victor Dubois d'Arneuville, Description des travaux hydrauliques de Louis-Alexandre de Cessart, A.-A. Renouard, Paris, 2 volumes, 1806–1808. External links Biography French civil engineers École des Ponts ParisTech alumni Corps des ponts 1719 births 1806 deaths 18th-century French architects French bridge engineers Architects from Paris Structural engineers
Louis-Alexandre de Cessart
[ "Engineering" ]
258
[ "Structural engineering", "Structural engineers" ]
16,068,922
https://en.wikipedia.org/wiki/Guidance%2C%20navigation%2C%20and%20control
Guidance, navigation and control (abbreviated GNC, GN&C, or G&C) is a branch of engineering dealing with the design of systems to control the movement of vehicles, especially automobiles, ships, aircraft, and spacecraft. In many cases these functions can be performed by trained humans. However, because of the speed of, for example, a rocket's dynamics, human reaction time is too slow to control this movement. Therefore, systems—now almost exclusively digital electronic—are used for such control. Even in cases where humans can perform these functions, it is often the case that GNC systems provide benefits such as alleviating operator work load, smoothing turbulence, fuel savings, etc. In addition, sophisticated applications of GNC enable automatic or remote control. Guidance refers to the determination of the desired path of travel (the "trajectory") from the vehicle's current location to a designated target, as well as desired changes in velocity, rotation and acceleration for following that path. Navigation refers to the determination, at a given time, of the vehicle's location and velocity (the "state vector") as well as its attitude. Control refers to the manipulation of the forces, by way of steering controls, thrusters, etc., needed to execute guidance commands while maintaining vehicle stability. Parts Guidance, navigation, and control systems consist of 3 essential parts: navigation, which tracks current location; guidance, which leverages navigation data and target information to direct flight control "where to go"; and control, which accepts guidance commands to effect changes in aerodynamic and/or engine controls. Navigation is the art of determining where you are, a science that received tremendous focus in 1714 with the Longitude prize. Navigation aids either measure position from a fixed point of reference (e.g. landmark, north star, LORAN beacon), measure relative position to a target (e.g. radar, infra-red, ...) or track movement from a known position/starting point (e.g. IMU). Today's complex systems use multiple approaches to determine current position. For example, some of today's most advanced navigation systems are embodied within the anti-ballistic missile; the RIM-161 Standard Missile 3 leverages GPS, IMU and ground segment data in the boost phase and relative position data for intercept targeting. Complex systems typically have multiple levels of redundancy to address drift, improve accuracy (e.g. relative to a target) and address isolated system failure. Navigation systems therefore take multiple inputs from many different sensors, both internal to the system and/or external (e.g. ground-based updates). The Kalman filter provides the most common approach to combining navigation data (from multiple sensors) to resolve current position. Guidance is the "driver" of a vehicle. It takes input from the navigation system (where am I) and uses targeting information (where do I want to go) to send signals to the flight control system that will allow the vehicle to reach its destination (within the operating constraints of the vehicle). The "targets" for guidance systems are one or more state vectors (position and velocity) and can be inertial or relative. During powered flight, guidance is continually calculating steering directions for flight control. For example, the Space Shuttle targets an altitude, velocity vector, and gamma to drive main engine cut-off. Similarly, an intercontinental ballistic missile also targets a vector.
The target vectors are developed to fulfill the mission and can be preplanned or dynamically created. Control Flight control is accomplished either aerodynamically or through powered controls such as engines. Guidance sends signals to flight control. A digital autopilot (DAP) is the interface between guidance and control. Guidance and the DAP are responsible for calculating the precise instruction for each flight control. The DAP provides feedback to guidance on the state of flight controls. Examples GNC systems are found in essentially all autonomous or semi-autonomous systems. These include: Autopilots Driverless cars, like Mars rovers or those participating in the DARPA Grand Challenge Guided missiles Precision-guided airdrop systems Reaction control systems for spacecraft Spacecraft launch vehicles Unmanned aerial vehicles Auto-steering tractors Autonomous underwater vehicles Related examples are: Celestial navigation is a position-fixing technique that was devised to help sailors cross the featureless oceans without having to rely on dead reckoning to enable them to strike land. Celestial navigation uses angular measurements (sights) between the horizon and a common celestial object. The Sun is most often measured. Skilled navigators can use the Moon, planets or one of 57 navigational stars whose coordinates are tabulated in nautical almanacs. Historical tools include a sextant, watch and ephemeris data. Today's space shuttle, and most interplanetary spacecraft, use optical systems to calibrate inertial navigation systems: Crewman Optical Alignment Sight (COAS), Star Tracker. Inertial Measurement Units (IMUs) are the primary inertial system for maintaining current position (navigation) and orientation in missiles and aircraft. They are complex machines with one or more rotating gyroscopes that can rotate freely in three degrees of freedom within a complex gimbal system. IMUs are "spun up" and calibrated prior to launch. A minimum of three separate IMUs are in place within most complex systems. In addition to relative position, the IMUs contain accelerometers which can measure acceleration in all axes. The position data, combined with the acceleration data, provide the necessary inputs to "track" the motion of a vehicle. IMUs have a tendency to "drift", due to friction and sensor inaccuracy. Error correction to address this drift can be provided via ground-link telemetry, GPS, radar, optical celestial navigation and other navigation aids. When targeting another (moving) vehicle, relative vectors become paramount. In this situation, navigation aids which provide updates of position relative to the target are more important. In addition to the current position, inertial navigation systems also typically estimate a predicted position for future computing cycles. See also Inertial navigation system. Astro-inertial guidance is a sensor fusion/information fusion of inertial guidance and celestial navigation. Long-range Navigation (LORAN): This was the predecessor of GPS and was (and to an extent still is) used primarily in commercial sea transportation. The system works by fixing the ship's position from the timing differences of signals received from known transmitters. Global Positioning System (GPS): GPS was designed by the US military with the primary purpose of addressing "drift" within the inertial navigation of submarine-launched ballistic missiles (SLBMs) prior to launch. GPS transmits two signal types: military and commercial.
The accuracy of the military signal is classified but can be assumed to be well under 0.5 meters. The GPS space segment is composed of 24 to 32 satellites in medium Earth orbit at an altitude of approximately 20,200 km (12,600 mi). The satellites are in six specific orbits and transmit highly accurate time and satellite location information which can be used to derive distances and calculate position. Radar/Infrared/Laser: This form of navigation provides information to guidance relative to a known target; it has both civilian (e.g. rendezvous) and military applications. Active homing (employs the missile's own radar to illuminate the target), passive homing (detects the target's radar emissions), and semi-active radar homing. Infrared homing: This form of guidance is used exclusively for military munitions, specifically air-to-air and surface-to-air missiles. The missile's seeker head homes in on the infrared (heat) signature from the target's engines (hence the term "heat-seeking missile"). Ultraviolet homing, used in the FIM-92 Stinger, is more resistant to countermeasures than IR homing systems. Laser guidance: A laser designator device calculates relative position to a highlighted target. Most are familiar with the military uses of the technology on laser-guided bombs. The space shuttle crew leverages a hand-held device to feed information into rendezvous planning. The primary limitation of this device is that it requires a line of sight between the target and the designator. Terrain contour matching (TERCOM) uses a ground-scanning radar to "match" topography against digital map data to fix current position. Used by cruise missiles such as the Tomahawk (missile family). See also Aeronautics Air navigation Aircraft flight control system Control engineering Flight control surfaces Missile guidance Navigation References External links AIAA GNC Conference (annual) Academic Earth: Aircraft Systems Engineering: Lecture 16 GNC. Phil Hattis – MIT Georgia Tech: GNC: Theory and Applications NASA Shuttle Technology: GNC Boeing: Defense, Space & Security: International Space Station: GNC Princeton Satellite Systems: GNC of High-Altitude Airships. Joseph Mueller CEAS: EuroGNC Conference Applications of control engineering Avionics Robot control Cybernetics Military electronics Uncrewed vehicles
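The article notes above that the Kalman filter is the most common way to fuse navigation data from several sensors into one position estimate. As a minimal illustration only, the following sketch runs a one-dimensional Kalman filter that propagates position with an IMU-style velocity estimate and corrects it with a GPS-style fix; all noise values and sensor numbers are invented for the example and do not describe any real GNC system.

```python
import random

# Minimal 1-D Kalman filter: the state is position; a velocity estimate
# drives the prediction step, a noisy position fix drives the update step.
def kalman_step(x, p, velocity, dt, gps_fix, q=0.5, r=4.0):
    # Predict: propagate position with the velocity estimate;
    # q is the assumed process-noise variance added each step.
    x_pred = x + velocity * dt
    p_pred = p + q
    # Update: blend in the position fix; r is the assumed fix variance.
    k = p_pred / (p_pred + r)            # Kalman gain
    x_new = x_pred + k * (gps_fix - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

# Simulate a vehicle moving at 5 m/s with noisy position fixes.
true_pos, est, var = 0.0, 0.0, 10.0
for step in range(10):
    true_pos += 5.0                              # truth advances 5 m each second
    gps = true_pos + random.gauss(0.0, 2.0)      # noisy fix
    est, var = kalman_step(est, var, velocity=5.0, dt=1.0, gps_fix=gps)
    print(f"t={step + 1:2d}s  true={true_pos:6.1f}  est={est:6.1f}  var={var:4.2f}")
```

The estimate's variance shrinks toward a steady state as fixes accumulate, which is why redundant sensors reduce the drift discussed above.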
Guidance, navigation, and control
[ "Technology", "Engineering" ]
1,815
[ "Robotics engineering", "Avionics", "Robot control", "Control engineering", "Aircraft instruments", "Applications of control engineering" ]
16,070,103
https://en.wikipedia.org/wiki/Electron-beam%20processing
Electron-beam processing or electron irradiation (EBI) is a process that involves using electrons, usually of high energy, to treat an object for a variety of purposes. This may take place under elevated temperatures and a nitrogen atmosphere. Possible uses for electron irradiation include sterilization, alteration of gemstone colors, and cross-linking of polymers. Electron energies typically vary from the keV to MeV range, depending on the depth of penetration required. The irradiation dose is usually measured in grays but also in Mrads (10 kGy is equivalent to 1 Mrad). The basic components of a typical electron-beam processing device include: an electron gun (consisting of a cathode, grid, and anode), used to generate and accelerate the primary beam; and a magnetic optical (focusing and deflection) system, used for controlling the way in which the electron beam impinges on the material being processed (the "workpiece"). In operation, the gun cathode is the source of thermally emitted electrons that are both accelerated and shaped into a collimated beam by the electrostatic field geometry established by the gun electrode (grid and anode) configuration used. The electron beam then emerges from the gun assembly through an exit hole in the ground-plane anode with an energy equal to the value of the negative high voltage (gun operating voltage) being applied to the cathode. This use of a direct high voltage to produce a high-energy electron beam allows the conversion of input electrical power to beam power at greater than 95% efficiency, making electron-beam material processing a highly energy-efficient technique. After exiting the gun, the beam passes through an electromagnetic lens and deflection coil system. The lens is used for producing either a focused or defocused beam spot on the workpiece, while the deflection coil is used either to position the beam spot on a stationary location or to provide some form of oscillatory motion. In polymers, an electron beam may be used on the material to induce effects such as chain scission (which makes the polymer chain shorter) and cross-linking. The result is a change in the properties of the polymer, which is intended to extend the range of applications for the material. The effects of irradiation may also include changes in crystallinity as well as microstructure. Usually, the irradiation process degrades the polymer. The irradiated polymers may sometimes be characterized using DSC, XRD, FTIR, or SEM. In poly(vinylidene fluoride-trifluoroethylene) copolymers, high-energy electron irradiation lowers the energy barrier for the ferroelectric-paraelectric phase transition and reduces polarization hysteresis losses in the material. Electron-beam processing involves irradiation (treatment) of products using a high-energy electron-beam accelerator. Electron-beam accelerators utilize an on-off technology, with a common design being similar to that of a cathode ray television. Electron-beam processing is used in industry primarily for three product modifications: crosslinking of polymer-based products to improve mechanical, thermal, chemical and other properties; material degradation, often used in the recycling of materials; and sterilization of medical and pharmaceutical goods. Nanotechnology is one of the fastest-growing new areas in science and engineering. Radiation was an early tool applied in this area; the arrangement of atoms and ions has been performed using ion or electron beams for many years. New applications concern the synthesis of nanoclusters and nanocomposites.
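Because the gray is defined as one joule of absorbed energy per kilogram of material, and 1 Mrad equals 10 kGy exactly, a rough dose calculation follows directly from beam power and throughput. The sketch below is a back-of-the-envelope illustration only; the beam power, absorbed fraction and mass flow are invented figures, not parameters of any particular accelerator.

```python
# Dose estimate for electron-beam processing.
# gray = J/kg, and 1 Mrad = 10 kGy exactly (since 1 rad = 0.01 Gy).

def dose_gray(beam_power_w, absorbed_fraction, mass_flow_kg_s):
    """Absorbed dose in gray for material moving through the beam."""
    return beam_power_w * absorbed_fraction / mass_flow_kg_s

def gray_to_mrad(dose_gy):
    return dose_gy / 1.0e4   # 10 kGy per Mrad

# Illustrative numbers: a 100 kW beam, 70% of the energy absorbed,
# product passing through the beam at 0.5 kg/s.
d = dose_gray(100e3, 0.70, 0.5)
print(f"dose = {d / 1e3:.0f} kGy = {gray_to_mrad(d):.1f} Mrad")
# -> dose = 140 kGy = 14.0 Mrad
```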
Crosslinking The cross-linking of polymers through electron-beam processing changes a thermoplastic material into a thermoset. When polymers are crosslinked, the molecular movement is severely impeded, making the polymer stable against heat. This locking together of molecules is the origin of all of the benefits of crosslinking, including the improvement of the following properties: Thermal: resistance to temperature, aging, low-temperature impact, etc. Mechanical: tensile strength, modulus, abrasion resistance, pressure rating, creep resistance, etc. Chemical: stress crack resistance, etc. Other: heat-shrink memory properties, positive temperature coefficient, etc. Cross-linking is the interconnection of adjacent long molecules with networks of bonds induced by chemical treatment or electron-beam treatment. Electron-beam processing of thermoplastic material results in an array of enhancements, such as an increase in tensile strength and resistance to abrasion, stress cracking and solvents. Following extensive research into its excellent wear characteristics, joint replacements such as knees and hips are being manufactured from cross-linked ultra-high-molecular-weight polyethylene. Polymers commonly crosslinked using the electron-beam irradiation process include polyvinyl chloride (PVC), thermoplastic polyurethanes and elastomers (TPUs), polybutylene terephthalate (PBT), polyamides / nylon (PA66, PA6, PA11, PA12), polyvinylidene fluoride (PVDF), polymethylpentene (PMP), polyethylenes (LLDPE, LDPE, MDPE, HDPE, UHMWPE), and ethylene copolymers such as ethylene-vinyl acetate (EVA) and ethylene tetrafluoroethylene (ETFE). Some of the polymers utilize additives to make the polymer more readily irradiation-crosslinkable. An example of an electron-beam crosslinked part is a connector made from polyamide, designed to withstand the higher temperatures needed for soldering with the lead-free solder required by the RoHS initiative. Cross-linked polyethylene piping, called PEX, is commonly used as an alternative to copper piping for water lines in newer home construction. PEX piping will outlast copper and has performance characteristics that are superior to copper in many ways. Electron-beam processing is also used to produce high-quality, fine-celled, aesthetically pleasing foam. Long-chain branching The resin pellets used to produce foam and thermoformed parts can be electron-beam-processed at a dose level lower than that at which crosslinking and gel formation occur. These resin pellets, such as polypropylene and polyethylene, can be used to create lower-density foams and other parts, as the "melt strength" of the polymer is increased. Chain scissioning Chain scissioning or polymer degradation can also be achieved through electron-beam processing. The effect of the electron beam can cause the degradation of polymers, breaking chains and therefore reducing the molecular weight. The chain scissioning effects observed in polytetrafluoroethylene (PTFE) have been used to create fine micropowders from scrap or off-grade materials. Chain scission is the breaking apart of molecular chains to produce required molecular sub-units from the chain. Electron-beam processing provides chain scission without the harsh chemicals usually used to initiate it.
An example of this process is the breaking down of cellulose fibers extracted from wood in order to shorten the molecules, thereby producing a raw material that can then be used to produce biodegradable detergents and diet-food substitutes. "Teflon" (PTFE) is also electron-beam-processed, allowing it to be ground to a fine powder for use in inks and as coatings for the automotive industry. Microbiological sterilization Electron-beam processing has the ability to break the chains of DNA in living organisms, such as bacteria, resulting in microbial death and rendering the space they inhabit sterile. E-beam processing has been used for the sterilization of medical products and aseptic packaging materials for foods, as well as disinfestation, the elimination of live insects from grain, tobacco, and other unprocessed bulk crops. Sterilization with electrons has significant advantages over other methods of sterilization currently in use. The process is quick, reliable, and compatible with most materials, and does not require any quarantine following the processing. For some materials and products that are sensitive to oxidative effects, radiation tolerance levels for electron-beam irradiation may be slightly higher than for gamma exposure. This is due to the higher dose rates and shorter exposure times of e-beam irradiation, which have been shown to reduce the degradative effects of oxygen. Notes Electromagnetism Electron beams in manufacturing Industrial processes Plastics industry
Electron-beam processing
[ "Physics" ]
1,792
[ "Electromagnetism", "Physical phenomena", "Fundamental interactions" ]
16,070,185
https://en.wikipedia.org/wiki/Struve%20function
In mathematics, the Struve functions H_α(x) are solutions y(x) of the non-homogeneous Bessel's differential equation x²y″ + xy′ + (x² − α²)y = 4(x/2)^(α+1) / (√π Γ(α + 1/2)), introduced by Hermann Struve in 1882. The complex number α is the order of the Struve function, and is often an integer. Struve further defined its second-kind version as K_α(x) = H_α(x) − Y_α(x). The modified Struve functions L_α(x) are equal to −ie^(−iαπ/2) H_α(ix) and are solutions y(x) of the non-homogeneous Bessel's differential equation x²y″ + xy′ − (x² + α²)y = 4(x/2)^(α+1) / (√π Γ(α + 1/2)), with second-kind version M_α(x) = L_α(x) − I_α(x). Definitions Since this is a non-homogeneous equation, solutions can be constructed from a single particular solution by adding the solutions of the homogeneous problem. In this case, the homogeneous solutions are the Bessel functions, and the particular solution may be chosen as the corresponding Struve function. Power series expansion Struve functions, denoted as H_α(x), have the power series form H_α(x) = Σ_{m=0}^∞ (−1)^m / (Γ(m + 3/2) Γ(m + α + 3/2)) · (x/2)^(2m+α+1), where Γ(z) is the gamma function. The modified Struve functions, denoted L_α(x), have the power series form L_α(x) = Σ_{m=0}^∞ 1 / (Γ(m + 3/2) Γ(m + α + 3/2)) · (x/2)^(2m+α+1). Integral form Another definition of the Struve function, for values of α satisfying Re(α) > −1/2, is possible using Poisson's integral representation: H_α(x) = 2(x/2)^α / (√π Γ(α + 1/2)) · ∫₀¹ (1 − t²)^(α−1/2) sin(xt) dt. Asymptotic forms For small x, the power series expansion given above applies. For large x, one obtains H_α(x) − Y_α(x) = (x/2)^(α−1) / (√π Γ(α + 1/2)) + O((x/2)^(α−3)), where Y_α(x) is the Neumann function. Properties The Struve functions satisfy the recurrence relations H_{α−1}(x) + H_{α+1}(x) = (2α/x) H_α(x) + (x/2)^α / (√π Γ(α + 3/2)) and H_{α−1}(x) − H_{α+1}(x) = 2 H′_α(x) − (x/2)^α / (√π Γ(α + 3/2)). Relation to other functions Struve functions of integer order can be expressed in terms of Weber functions and vice versa. Struve functions of order n + 1/2, where n is an integer, can be expressed in terms of elementary functions. In particular, if n is a non-negative integer then H_{−n−1/2}(x) = (−1)ⁿ J_{n+1/2}(x), where the right-hand side is a spherical Bessel function. Struve functions (of any order) can be expressed in terms of the generalized hypergeometric function ₁F₂: H_α(x) = (x/2)^(α+1) · 2/(√π Γ(α + 3/2)) · ₁F₂(1; 3/2, α + 3/2; −x²/4). Applications The Struve and Weber functions have been shown to have applications in beamforming, and in describing the effect of a confining interface on the Brownian motion of colloidal particles at low Reynolds numbers. References External links Struve functions at the Wolfram functions site. Special functions Struve family
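The power series above converges quickly for moderate arguments, so a direct partial sum can be checked against a library implementation. The following sketch (a demonstration, not part of the article) sums the series for H_α(x) and compares it with SciPy's built-in scipy.special.struve; the number of terms is an arbitrary choice.

```python
from scipy.special import struve, gamma

def struve_series(alpha, x, terms=40):
    """Partial sum of the power series for the Struve function H_alpha(x)."""
    total = 0.0
    for m in range(terms):
        total += ((-1) ** m / (gamma(m + 1.5) * gamma(m + alpha + 1.5))
                  * (x / 2) ** (2 * m + alpha + 1))
    return total

# Compare the truncated series with SciPy's implementation.
for alpha, x in [(0, 1.0), (1, 2.5), (0.5, 3.0)]:
    print(alpha, x, struve_series(alpha, x), struve(alpha, x))
# The two columns agree closely for these small-to-moderate arguments.
```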
Struve function
[ "Mathematics" ]
416
[ "Special functions", "Combinatorics" ]
16,070,200
https://en.wikipedia.org/wiki/Intelligent%20decision%20support%20system
An intelligent decision support system (IDSS) is a decision support system that makes extensive use of artificial intelligence (AI) techniques. Use of AI techniques in management information systems has a long history – indeed terms such as "Knowledge-based systems" (KBS) and "intelligent systems" have been used since the early 1980s to describe components of management systems, but the term "Intelligent decision support system" is thought to originate with Clyde Holsapple and Andrew Whinston in the late 1970s. Examples of specialized intelligent decision support systems include flexible manufacturing systems (FMS), intelligent marketing decision support systems and medical diagnosis systems. Ideally, an intelligent decision support system should behave like a human consultant: supporting decision makers by gathering and analysing evidence, identifying and diagnosing problems, proposing possible courses of action and evaluating such proposed actions. The aim of the AI techniques embedded in an intelligent decision support system is to enable these tasks to be performed by a computer, while emulating human capabilities as closely as possible. Many IDSS implementations are based on expert systems, a well established type of KBS that encode knowledge and emulate the cognitive behaviours of human experts using predicate logic rules, and have been shown to perform better than the original human experts in some circumstances. Expert systems emerged as practical applications in the 1980s based on research in artificial intelligence performed during the late 1960s and early 1970s. They typically combine knowledge of a particular application domain with an inference capability to enable the system to propose decisions or diagnoses. Accuracy and consistency can be comparable to (or even exceed) that of human experts when the decision parameters are well known (e.g. if a common disease is being diagnosed), but performance can be poor when novel or uncertain circumstances arise. Research in AI focused on enabling systems to respond to novelty and uncertainty in more flexible ways is starting to be used in IDSS. For example, intelligent agents that perform complex cognitive tasks without any need for human intervention have been used in a range of decision support applications. Capabilities of these intelligent agents include knowledge sharing, machine learning, data mining, and automated inference. A range of AI techniques such as case based reasoning, rough sets and fuzzy logic have also been used to enable decision support systems to perform better in uncertain conditions. A 2009 study proposed a multi-intelligence system named IILS to automate problem-solving processes within the logistics industry. The system integrates intelligence modules based on case-based reasoning, multi-agent systems, fuzzy logic, and artificial neural networks, aiming to offer advanced logistics solutions and to support well-informed, high-quality decisions addressing a wide range of customer needs and challenges. References Further reading Turban, E., Aronson J., & Liang T.: Decision support systems and Intelligent systems (2004) Pearson External links A brief history of DSS Artificial intelligence Information systems Decision support systems Knowledge engineering
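As a toy illustration of the rule-based reasoning described above, the sketch below implements a minimal forward-chaining inference loop of the kind used in classic expert systems. The rules and facts are invented purely for the example and are not taken from any cited system.

```python
# Minimal forward-chaining rule engine: each rule is (premises, conclusion).
# Rules and facts here are invented purely for illustration.
rules = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection", "chest_pain"}, "suspect_pneumonia"),
    ({"suspect_pneumonia"}, "recommend_chest_xray"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises are satisfied, until a fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "chest_pain"}, rules))
# -> includes 'recommend_chest_xray', derived in three chained inference steps
```

Real expert-system shells add features this sketch omits (conflict resolution, certainty factors, explanation traces), but the fire-rules-until-fixpoint loop is the core inference mechanism.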
Intelligent decision support system
[ "Technology", "Engineering" ]
591
[ "Systems engineering", "Knowledge engineering", "Decision support systems", "Information systems", "Information technology" ]
16,070,384
https://en.wikipedia.org/wiki/Pignistic%20probability
In decision theory, a pignistic probability is a probability that a rational person will assign to an option when required to make a decision. A person may have, at one level, certain beliefs or a lack of knowledge, or uncertainty, about the options and their actual likelihoods. However, when it is necessary to make a decision (such as deciding whether to place a bet), the behaviour of the rational person would suggest that the person has assigned a set of regular probabilities to the options. These are the pignistic probabilities. The term was coined by Philippe Smets, and stems from the Latin pignus, a bet. He contrasts the pignistic level, where one might take action, with the credal level, where one interprets the state of the world: The transferable belief model is based on the assumption that beliefs manifest themselves at two mental levels: the ‘credal’ level where beliefs are entertained and the ‘pignistic’ level where beliefs are used to make decisions (from ‘credo’ I believe and ‘pignus’ a bet, both in Latin). Usually these two levels are not distinguished and probability functions are used to quantify beliefs at both levels. The justification for the use of probability functions is usually linked to “rational” behavior to be held by an ideal agent involved in some decision contexts. A pignistic probability transform will calculate these pignistic probabilities from a structure that describes belief structures. Notes Further reading P. Smets and R. Kennes, “The Transferable Belief Model", Artificial Intelligence (v.66, 1994) pp. 191–243 Decision theory Probability interpretations
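Smets's standard pignistic transform distributes the mass m(A) of each focal set A equally among its elements, after conditioning away any mass assigned to the empty set: BetP(x) = Σ over focal sets A containing x of m(A) / (|A| · (1 − m(∅))). The sketch below is a minimal illustration of that formula; the example mass function is invented.

```python
def pignistic(masses):
    """Pignistic transform BetP of a belief mass function.

    masses: dict mapping frozenset (focal set) -> mass, masses summing to 1.
    BetP(x) = sum over focal sets A containing x of m(A) / (|A| * (1 - m(empty))).
    """
    m_empty = masses.get(frozenset(), 0.0)
    betp = {}
    for focal, m in masses.items():
        if not focal:
            continue  # mass on the empty set is conditioned away
        share = m / (len(focal) * (1.0 - m_empty))
        for x in focal:
            betp[x] = betp.get(x, 0.0) + share
    return betp

# Invented example: 50% belief in {a}, 30% in {a, b}, 20% in {a, b, c}.
m = {frozenset({"a"}): 0.5,
     frozenset({"a", "b"}): 0.3,
     frozenset({"a", "b", "c"}): 0.2}
print(pignistic(m))
# -> roughly {'a': 0.717, 'b': 0.217, 'c': 0.067}; the values sum to 1,
#    so BetP is an ordinary probability distribution suitable for betting.
```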
Pignistic probability
[ "Mathematics" ]
341
[ "Probability interpretations" ]
16,071,200
https://en.wikipedia.org/wiki/Husimi%20Q%20representation
The Husimi Q representation, introduced by Kôdi Husimi in 1940, is a quasiprobability distribution commonly used in quantum mechanics to represent the phase space distribution of a quantum state such as light in the phase space formulation. It is used in the field of quantum optics and particularly for tomographic purposes. It is also applied in the study of quantum effects in superconductors. Definition and properties The Husimi Q distribution (called the Q-function in the context of quantum optics) is one of the simplest distributions of quasiprobability in phase space. It is constructed in such a way that observables written in anti-normal order follow the optical equivalence theorem. This means that it is essentially the density matrix put into normal order. This makes it relatively easy to calculate compared to other quasiprobability distributions, through the formula Q(α) = (1/π) ⟨α|ρ|α⟩, which is proportional to the trace of the operator ρ|α⟩⟨α| involving the projection onto the coherent state |α⟩. It produces a pictorial representation of the state ρ to illustrate several of its mathematical properties. Its relative ease of calculation is related to its smoothness compared to other quasiprobability distributions. In fact, it can be understood as the Weierstrass transform of the Wigner quasiprobability distribution W, i.e. a smoothing by a Gaussian filter: Q(α) = (2/π) ∫ W(β) e^(−2|α−β|²) d²β. Such Gauss transforms being essentially invertible in the Fourier domain via the convolution theorem, Q provides an equivalent description of quantum mechanics in phase space to that furnished by the Wigner distribution. Alternatively, one can compute the Husimi Q distribution by taking the Segal–Bargmann transform of the wave function and then computing the associated probability density. Q is normalized to unity, ∫ Q(α) d²α = 1, and is non-negative definite and bounded: 0 ≤ Q(α) ≤ 1/π. Despite the fact that Q is non-negative definite and bounded like a standard joint probability distribution, this similarity may be misleading, because different coherent states are not orthogonal. Two different points α do not represent disjoint physical contingencies; thus, Q(α) does not represent the probability of mutually exclusive states, as needed in the third axiom of probability theory. Q(α) may also be obtained by a different Weierstrass transform of the Glauber–Sudarshan P representation, Q(α) = (1/π) ∫ P(β) e^(−|α−β|²) d²β, given ρ = ∫ P(β) |β⟩⟨β| d²β and the standard inner product of coherent states, |⟨α|β⟩|² = e^(−|α−β|²). See also Nonclassical light Glauber–Sudarshan P-representation Wehrl entropy References Quantum optics Particle statistics
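For a pure state with Fock-basis coefficients c_n, the formula Q(α) = |⟨α|ψ⟩|²/π can be evaluated directly, since ⟨α|ψ⟩ = e^(−|α|²/2) Σ_n (α*)ⁿ c_n/√(n!). The sketch below computes Q on a grid for a single-photon Fock state; the basis truncation and grid are arbitrary choices for the example, not part of any standard API.

```python
import numpy as np
from math import factorial

def husimi_q(coeffs, alphas):
    """Husimi Q(alpha) = |<alpha|psi>|^2 / pi for a pure state.

    coeffs: truncated Fock-basis amplitudes c_n; alphas: complex array.
    Uses <alpha|psi> = exp(-|alpha|^2 / 2) * sum_n conj(alpha)^n c_n / sqrt(n!).
    """
    overlap = np.zeros_like(alphas, dtype=complex)
    for n, c in enumerate(coeffs):
        overlap += np.conj(alphas) ** n * c / np.sqrt(factorial(n))
    overlap *= np.exp(-np.abs(alphas) ** 2 / 2)
    return np.abs(overlap) ** 2 / np.pi

# Single-photon Fock state |1>: Q(alpha) = |alpha|^2 exp(-|alpha|^2) / pi,
# a smooth ring in phase space peaking at |alpha| = 1.
x = np.linspace(-3, 3, 61)
alphas = x[None, :] + 1j * x[:, None]
q = husimi_q([0.0, 1.0], alphas)
print(q.max() <= 1 / np.pi)   # True: Q never exceeds the 1/pi bound
```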
Husimi Q representation
[ "Physics" ]
489
[ "Particle statistics", "Statistical mechanics", "Quantum optics", "Quantum mechanics" ]
16,071,220
https://en.wikipedia.org/wiki/Verticordia%20halophila
Verticordia halophila, commonly known as salt-loving featherflower, or salt-loving verticordia, is a flowering plant in the myrtle family, Myrtaceae and is endemic to the south-west of Western Australia. It is an erect, bushy shrub with small, crowded, thick leaves and spikes of red and pink flowers in spring. Description Verticordia halophila is a shrub which grows to high and wide and which has a few main stems with many short, leafy side-branches. The leaves on the side branches are crowded, oblong to egg-shaped, thick with a rounded end but with a short point and covered with soft hairs less than long. The leaves on the flowering stems are broadly egg-shaped to almost round. The flowers are scented and arranged in spike-like groups near the ends of the long flowering stems, each flower on a stalk, long. The floral cup is top-shaped, long, smooth and glabrous with 5 ribs and small bent green appendages. The sepals are pink with a white fringe, long, with 5 or 6 hairy lobes and two ear-shaped, hairy appendages on the sides. The petals are mauve-pink, erect, , with short, coarse teeth along their top edge. The style is long, curved with short hairs near its purple tip. Flowering time is from September to December. It is distinguished from similar verticordias by its thick, crowded leaves, the serrations on the top edge of the petals, the purple-tipped style and by the saline environment in which it is found. Taxonomy and naming Verticordia halophila was first formally described by Alex George in 1991 and the description was published in Nuytsia. The type collection was made by Alex and Elizabeth George south of Coorow in 1985. The specific epithet (halophila) is "named from the Greek hals (salt) and -philus (loving), in reference to the habitat which is unusual in the genus". When Alex George reviewed the genus in 1991, he placed this species in subgenus Eperephes, section Verticordella along with V. pennigera, V. blepharophylla, V. lindleyi, V. carinata, V. attenuata, V. drummondii, V. wonganensis, V. paludosa, V. luteola, V. bifimbriata, V. tumida, V. mitodes, V. centipeda, V. auriculata, V. pholidophylla, V. spicata and V. hughanii. An isolated population recognised as a variant of this species was redescribed as Verticordia elizabethiae in 2020. Distribution and habitat This verticordia usually grows in sand and clay on flats that are slightly saline and on the edges of salt lakes in woodland and shrubland. It is found in and near areas around Coorow, Marchagee and Lake Seabrook in the Avon Wheatbelt and Geraldton Sandplains biogeographic regions. The distribution range included the Coolgardie bioregion, a population around 200 km inland, until a 2020 revision recognised it as a separate species, Verticordia elizabethiae. Conservation Verticordia halophila is classified as "not threatened" by the Western Australian Government Department of Parks and Wildlife. Use in horticulture Some forms of this species are being grown in cultivation and are performing well. Some are bushy shrubs which are producing honey-scented flowers from October to March, sometimes in other months. References halophila Halophytes Endemic flora of Western Australia Myrtales of Australia Rosids of Western Australia Plants described in 1844
Verticordia halophila
[ "Chemistry" ]
776
[ "Halophytes", "Salts" ]
16,071,696
https://en.wikipedia.org/wiki/Aquastat
An aquastat is a device used in hydronic heating systems for controlling water temperature. To prevent the boiler from firing too frequently, aquastats have a high limit temperature and a low limit. If the thermostat is calling for heat, the boiler will fire until the high limit is reached, then shut off (even if the thermostat is still calling for heat). The boiler will re-fire if the boiler water temperature drops below a range around the high limit. The high limit exists for the sake of efficiency and safety. The boiler will also fire (regardless of thermostat state) when the boiler water temperature goes below a range around the low limit, ensuring that the boiler water temperature remains above a certain point. The low limit is intended for tankless domestic hot water; it ensures that boiler water is always warm enough to heat the domestic hot water. Many aquastats also have a differential (diff) control which determines the size of the range around the low and/or high controls. References Heating, ventilation, and air conditioning Plumbing
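The high-limit/low-limit behaviour described above is a simple hysteresis (bang-bang) control loop. The following sketch models one control cycle in code; the temperature setpoints and the differential are illustrative values, not figures from any particular aquastat.

```python
def aquastat(burner_on, water_temp, call_for_heat,
             high_limit=82.0, low_limit=60.0, diff=5.0):
    """Decide the burner state for one control cycle (temperatures in Celsius).

    Setpoints and differential are illustrative assumptions.
    """
    if water_temp >= high_limit:
        return False                     # the high limit always shuts the burner off
    if water_temp < low_limit:
        return True                      # keep water warm for tankless domestic hot water
    if call_for_heat:
        if burner_on:
            return True                  # keep firing until the high limit is hit
        return water_temp <= high_limit - diff   # re-fire only below the differential band
    # No call for heat: only the low limit (plus its band) keeps the burner on.
    return burner_on and water_temp < low_limit + diff

# Trace a heating cycle: the burner fires up to the high limit,
# shuts off, and re-fires once the water cools through the differential.
state = False
for t in [55, 58, 63, 70, 80, 83, 79, 76, 70]:
    state = aquastat(state, t, call_for_heat=True)
    print(t, "burner on" if state else "burner off")
```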
Aquastat
[ "Engineering" ]
220
[ "Construction", "Plumbing" ]
16,071,836
https://en.wikipedia.org/wiki/Operation%20Giant%20Lance
Operation Giant Lance was a secret U.S. nuclear alert operation that the Strategic Air Command carried out in late October 1969. Giant Lance was one component of a multi-pronged military exercise, the Joint Chiefs of Staff Readiness Test, that the Joint Chiefs developed and carried out during October 1969 in response to White House orders. On 10 October 1969, on the advice of National Security Advisor Henry Kissinger, U.S. President Richard Nixon issued the order for the readiness test that led to Giant Lance. Preparations were made to send a squadron of 18 B-52s of the 92nd Strategic Aerospace Wing, flying in sorties of 6 bombers at a time and loaded with nuclear weapons, over northern Alaska in the direction of the Soviet Union. The squadron took off on 27 October and flew towards the Soviet Union. The actions were designed to be detectable by the Soviets. Nixon cancelled the operation on October 30. According to the U.S. Department of State, there are two main "after-the-fact explanations" regarding the purpose of the Joint Chiefs of Staff Readiness Test: one was to convince the Soviets that Nixon was willing to resort to nuclear war in order to win the Vietnam War; the other was to deter a possible Soviet nuclear attack against the People's Republic of China. In the first explanation, the Readiness Test was part of Nixon's madman theory, a concept based on game theory, and its details remained unknown to the public until Freedom of Information Act requests in the 2000s revealed documents about the operation. The second interpretation, on the other hand, is consistent with U.S. intelligence reports which indicated that the Soviet leadership was considering a preemptive strike against Chinese nuclear facilities; in October 1969 the Soviets indeed abandoned their planned attack against China. Researchers have also called the second interpretation logically the most likely one. Background The Soviet nuclear threat towards China After the Zhenbao Island incident in March 1969, the Soviet Union planned to launch a massive nuclear strike on the People's Republic of China. Soviet diplomat Arkady Shevchenko mentioned in his memoir that "the Soviet leadership had come close to using nuclear arms on China"; he further mentioned that Andrei Grechko, the Soviet Minister of Defence at the time, called for "unrestricted use of the multimegaton bomb known in the West as the 'blockbuster'", in order "once and for all to get rid of the Chinese threat". A turning point during the Cold War, this crisis almost led to a nuclear war, seven years after the Cuban missile crisis. State of the Vietnam War Vietnam War tensions were high and were a major driver of Nixon's decision to initiate the operation. The war was one of the primary challenges Nixon sought to address on becoming president, and led to him devising a plan to both end the war and gain international and domestic credibility for the United States as a result. By launching Operation Giant Lance, Nixon aimed to increase war tensions by raising the United States' nuclear threat through a "show of force" alert. These operations acted as a prequel to Nixon's eventual Operation Duck Hook, which was declassified in 2005. The primary goal of these operations was to pressure the Soviets to get their North Vietnamese ally to agree to peace terms favorable to the United States.
Preparation Chairman of the Joint Chiefs of Staff Earle Wheeler ordered the operation as a part of the Joint Chiefs of Staff Readiness Test. On 27 October 1969, eighteen B-52 bomber aircraft began the operation, accompanied by KC-135 tankers to refuel and support the extended patrol of the squadron. The bombers flew in sorties of 6 bombers at a time. The U.S. Strategic Air Command (SAC) deployed the aircraft in secrecy from air bases in both California and Washington State. The bombers were checked throughout the day, standing by for immediate deployment. Purpose Even the most senior U.S. military leaders were not informed of the purpose of the Joint Chiefs of Staff Readiness Test, during or after the alert. The operation was intended to be a precautionary measure, demonstrating operational readiness in case of military retaliation from either East Asia or Russia. According to the U.S. Department of State, there are two main "after-the-fact explanations" regarding the purpose of the Joint Chiefs of Staff Readiness Test. In the first interpretation, the operation's intended goal was to directly support Operation Duck Hook as a part of the "show of force" alert; Nixon believed that this would coerce Moscow and Hanoi into a peace treaty through the Paris peace talks with the Soviets, on terms that were advantageous to the United States. In the second interpretation, the operation was thought to have deterred a Soviet attack against China and promoted the credibility of United States intervention in the Sino-Soviet conflict. Researchers have also called the second interpretation logically the most likely one. Deterrence against a Soviet strike on China According to a number of sources, U.S. President Richard Nixon decided to intervene in the nuclear crisis between the Soviet Union and the People's Republic of China, and on October 15, 1969, the Soviet side was informed that the United States would launch a nuclear attack on approximately 130 cities in the Soviet Union once the latter began to attack China. The U.S. government confirmed that "the U.S. military, including its nuclear forces, secretly went on alert" in October 1969 (the Joint Chiefs of Staff Readiness Test), and that Nixon indeed once considered using nuclear weapons. Eventually, the Soviet Union abandoned its attack on China. Henry Kissinger later wrote in his memoirs that the United States "raised our profile somewhat to make clear that we were not indifferent to these Soviet threats." On July 29, 1985, Time magazine published its interview with Nixon, who recalled that "Henry said, 'Can the U.S. allow the Soviet Union to jump the Chinese?'—that is, to take out their nuclear capability. We had to let the Soviets know we would not tolerate that." Researchers and scholars have also speculated that the U.S. authorities might have ordered a nuclear alert in October 1969 in order to deter a Soviet nuclear or conventional attack on China. Vietnam War According to one interpretation, the purpose of Operation Giant Lance and the Joint Chiefs of Staff Readiness Test of which it was a component was to intimidate the foreign contenders in the Vietnam War, primarily the Soviets, through a world-wide alert of U.S. nuclear and non-nuclear forces. By using seemingly irrational actions as a part of his madman diplomacy, Nixon aimed to push both the Soviets and the Vietnamese to end the war on favourable terms.
The squadron of eighteen B-52 bomber aircraft was to patrol the northern polar ice cap, surveying the frozen terrain whilst armed with nuclear weaponry. The patrols consisted of eighteen-hour-long vigils, which were intended to appear as suspicious movements by the U.S. These movements were kept secret from the public, whilst also remaining intentionally detectable to the Soviet Union's intelligence systems. Madman theory President Richard Nixon was infamous for radical measures as part of his diplomacy. The radicality of sending eighteen armed bombers on patrol was designed to pressure foreign powers by displaying extreme military aggression. Nixon told Henry Kissinger, the national security advisor, that he was willing to use nuclear weapons in order to end the war. Following so-called madman theory, Nixon would often take diplomatic options that seemed irrational even to the United States' own authorities. The idea was to make it impossible for foreign powers to determine Nixon's motives or predict his actions, giving him a unique strategic advantage. This diplomacy, coupled with Nixon's decision to raise the nuclear alert, served as an indirect threat, as the Soviets would not be able to understand his actions. Nixon used this unpredictable diplomacy in a failed attempt to end the war in Vietnam, creating the impression that he was willing to take desperate measures, including using the United States' nuclear weapons. These actions would also enhance Nixon's reputation as a tough and "mad" leader. The intention was to cause the North Vietnamese and the Soviets to believe that he was an irrational leader, capable of escalating the nuclear threat. The policy failed to produce the concessions desired by the United States. Nixon's "madman" diplomacy was in effect briefly during the Vietnam War, amplified by the numerous "show of force" operations. Although this diplomacy could have been seen by opposing states as a bluff, the risk of uncertainty to them was much larger than the risk to the United States. Ultimately, Nixon possessed an advantage, as the US could gauge the effectiveness of its threats based on the reactions of the Soviets and the Vietnamese. Implications Effects of the operation Because of its cancellation, the operation did not directly cause any obvious, significant change; the impact it may have had on the Soviets or the Vietnamese cannot be accurately measured. The operation was terminated suddenly on October 30 without any stated reason. The abrupt halt to the operation may have been due to the fact that the Soviets did not show any significant changes in their actions, which may mean that the Soviets suspected Nixon of bluffing. However, some historians have argued that the sudden withdrawal of the SAC squadron was an intentional effort to display the maneuverability and freedom the US possessed when it came to nuclear warfare. Operation Giant Lance was intended to jar foreign forces into favourable diplomatic agreements to end the war, so as to avoid Nixon ordering Operation Duck Hook. Despite ending as a bluff tactic, the operation served to add credibility both to Nixon's madman threats and to the proactiveness of the U.S. However, this may not have been successful due to the large anti-war movement at the time, which tended to discourage nuclear operations. Seymour Hersh believed that the operation also served as an adjunct to Operation Duck Hook, a proposed mining and bombing operation against North Vietnam.
The Soviets showed no clear reaction in response to the Giant Lance patrols. Whilst there may not have been a direct response to the operation, there was a reaction from Soviet intelligence: a sudden heightened nuclear alert. This was the goal of the operation: to make it visible to Soviet intelligence whilst hiding it from the American public. The Soviets may have seen Nixon's move simply as a bluff. In October 1973, a Soviet official remarked that "Mr. Nixon used to exaggerate his intentions regularly. He used alerts and leaks to do this", which may have caused the U.S. operational threat to be ignored. Perception of the U.S. nuclear threat Although neither Moscow nor Hanoi showed any reaction to Operation Giant Lance and the Joint Chiefs of Staff Readiness Test, the uncertainty of Nixon's nuclear power posed a significant threat. Nixon's continuous nuclear threat towards Hanoi was undermined by the anti-war sentiment on U.S. home soil, which implied to Hanoi that the U.S. did not wish for further war, or to risk nuclear warfare. The heightened fear of nuclear warfare produced a shared interest in nuclear avoidance across all participants in the war: neither side wanted a military confrontation that would escalate to that level. There also existed the danger that excessive reliance on the nuclear threat in times of war would cause other governments to begin to accept this as the norm, and nuclear fear might bring the possibility of increased nuclear use. Continual development of nuclear technology, and reliance thereon, would inevitably lead to increasing paranoia. Military escalation could lead to “the threat that leaves something to chance”. References 1969 in international relations 1969 in military history October 1969 events in the United States 1969 in the Soviet Union Giant Lance Nuclear history of the United States Nuclear warfare Soviet Union–United States relations Military operations of the Cold War United States nuclear command and control Presidency of Richard Nixon
Operation Giant Lance
[ "Chemistry" ]
2,398
[ "Radioactivity", "Nuclear warfare" ]
16,072,343
https://en.wikipedia.org/wiki/Linnaeus%27s%20flower%20clock
Linnaeus's flower clock was a garden plan hypothesized by Carl Linnaeus that would take advantage of several plants that open or close their flowers at particular times of the day to accurately indicate the time. According to Linnaeus's autobiographical notes, he discovered and developed the floral clock in 1748. It builds on the fact that there are species of plants that open or close their flowers at set times of day. He proposed the concept in his 1751 publication Philosophia Botanica, calling it the horologium florae (flower clock). His observations of how plants changed over time are summarised in several publications. Calendarium florae (the Flower Almanack) describes the seasonal changes in nature and the botanic garden during the year 1755. In Somnus plantarum (the Sleep of Plants), he describes how different plants prepare for sleep during the night, and in Vernatio arborum he gives an account of the timing of leaf-bud burst in different trees and bushes. He may never have planted such a garden, but the idea was attempted by several botanical gardens in the early 19th century, with mixed success. Many plants exhibit a strong circadian rhythm (see also Chronobiology), and a few have been observed to open at quite a regular time, but the accuracy of such a clock is diminished because flowering time is affected by weather and seasonal effects. The flowering times recorded by Linnaeus are also subject to differences in daylight due to latitude: his measurements are based on flowering times in Uppsala, where he taught and had received his university education. The plants suggested for use by Linnaeus are given in the table below, ordered by recorded opening time; "-" signifies that data are missing. Cultural references to the concept Some 30 years before Linnaeus's birth, such a floral clock may have been described by Andrew Marvell, in his poem "The Garden" (1678): How well the skilful gardener drew Of flow'rs and herbs this dial new; Where from above the milder sun Does through a fragrant zodiac run; And, as it works, th' industrious bee Computes its time as well as we. How could such sweet and wholesome hours Be reckoned but with herbs and flow'rs! In Terry Pratchett's novel Thief of Time, a floral clock with the same premise is described. It features fictional flowers that open at night "for the moths", so it runs all day and night. Horologium Florae, released in 2023, is the name of an album by the Japanese singer and virtual YouTuber Kyo Hanabasami. See also Floral clock References External links Online text of Philosophia Botanica Botany
Linnaeus's flower clock
[ "Biology" ]
545
[ "Plants", "Botany" ]
16,073,214
https://en.wikipedia.org/wiki/Tarski%27s%20exponential%20function%20problem
In model theory, Tarski's exponential function problem asks whether the theory of the real numbers together with the exponential function is decidable. Alfred Tarski had previously shown that the theory of the real numbers (without the exponential function) is decidable. The problem The ordered real field is a structure over the language of ordered rings, with the usual interpretation given to each symbol. It was proved by Tarski that the theory of the real field is decidable. That is, given any sentence in the language of ordered rings, there is an effective procedure for determining whether that sentence is true of the real field. He then asked whether this was still the case if one added to the language a unary function that was interpreted as the exponential function on the real numbers, to get the structure of the real exponential field. Conditional and equivalent results The problem can be reduced to finding an effective procedure for determining whether any given exponential polynomial in several variables with integer coefficients has a real solution. Macintyre and Wilkie showed that Schanuel's conjecture implies such a procedure exists, and hence gave a conditional solution to Tarski's problem. Schanuel's conjecture deals with all complex numbers, so it would be expected to be a stronger result than the decidability of the real exponential field; and indeed, Macintyre and Wilkie proved that only a real version of Schanuel's conjecture is required to imply the decidability of this theory. Even the real version of Schanuel's conjecture is not a necessary condition for the decidability of the theory. In their paper, Macintyre and Wilkie showed that a result equivalent to the decidability of the theory is what they dubbed the weak Schanuel's conjecture. This conjecture states that there is an effective procedure that, given exponential polynomials f₁, …, fₙ, g in n variables with integer coefficients, produces a positive integer η, depending on these data, such that if α is a non-singular real solution of the system f₁ = ⋯ = fₙ = 0, then either g(α) = 0 or |g(α)| > η⁻¹. References Model theory Unsolved problems in mathematics
Tarski's exponential function problem
[ "Mathematics" ]
381
[ "Unsolved problems in mathematics", "Mathematical problems", "Mathematical logic", "Model theory" ]
16,073,360
https://en.wikipedia.org/wiki/Support%20of%20a%20module
In commutative algebra, the support of a module M over a commutative ring R is the set of all prime ideals 𝔭 of R such that M𝔭 ≠ 0 (that is, the localization of M at 𝔭 is not equal to zero). It is denoted by Supp(M). The support is, by definition, a subset of the spectrum of R. Properties M = 0 if and only if its support is empty. Let 0 → M′ → M → M″ → 0 be a short exact sequence of R-modules. Then Supp(M) = Supp(M′) ∪ Supp(M″). Note that this union may not be a disjoint union. If M is a sum of submodules Mλ, then Supp(M) = ⋃λ Supp(Mλ). If M is a finitely generated R-module, then Supp(M) is the set of all prime ideals containing the annihilator of M. In particular, it is closed in the Zariski topology on Spec R. If M, N are finitely generated R-modules, then Supp(M ⊗R N) = Supp(M) ∩ Supp(N). If M is a finitely generated R-module and I is an ideal of R, then Supp(M/IM) is the set of all prime ideals containing I + Ann(M). This is V(I + Ann(M)). Support of a quasicoherent sheaf If F is a quasicoherent sheaf on a scheme X, the support of F is the set of all points x in X such that the stalk Fx is nonzero. This definition is similar to the definition of the support of a function on a space X, and this is the motivation for using the word "support". Most properties of the support generalize from modules to quasicoherent sheaves word for word. For example, the support of a coherent sheaf (or more generally, a finite type sheaf) is a closed subspace of X. If M is a module over a ring R, then the support of M as a module coincides with the support of the associated quasicoherent sheaf on the affine scheme Spec R. Moreover, if {Spec Rα} is an affine cover of a scheme X, then the support of a quasicoherent sheaf F is equal to the union of the supports of the associated modules Mα over each Rα. Examples As noted above, a prime ideal 𝔭 is in the support if and only if it contains the annihilator of M. For example, over a polynomial ring R, the annihilator of the module M = R/(f) is the ideal (f). This implies that Supp(M) = V(f), the vanishing locus of the polynomial f. Looking at the short exact sequence 0 → (f) → R → R/(f) → 0, we might mistakenly conjecture that the support of I = (f) is Spec(R_f), which is the complement of the vanishing locus of the polynomial f. In fact, since R is an integral domain, the ideal I = (f) = Rf is isomorphic to R as a module, so its support is the entire space: Supp(I) = Spec(R). The support of a finite module over a Noetherian ring is always closed under specialization. Now, if we take two polynomials f₁, f₂ in an integral domain which form a complete intersection ideal (f₁, f₂), the tensor property shows us that Supp(R/(f₁) ⊗R R/(f₂)) = Supp(R/(f₁)) ∩ Supp(R/(f₂)) = V(f₁) ∩ V(f₂) = V(f₁, f₂). See also Annihilator (ring theory) Associated prime Support (mathematics) References Atiyah, M. F., and I. G. Macdonald, Introduction to Commutative Algebra, Perseus Books, 1969, Module theory
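As a small worked example (not from the article), the finitely generated ℤ-module M = ℤ/6ℤ illustrates the annihilator description of the support:

```latex
% Worked example: the support of M = Z/6Z as a Z-module.
\[
  \operatorname{Ann}_{\mathbb{Z}}(\mathbb{Z}/6\mathbb{Z}) = 6\mathbb{Z},
  \qquad
  \operatorname{Supp}(\mathbb{Z}/6\mathbb{Z})
    = V(6\mathbb{Z})
    = \{\, \mathfrak{p} \in \operatorname{Spec}\mathbb{Z} : 6\mathbb{Z} \subseteq \mathfrak{p} \,\}
    = \{ (2), (3) \}.
\]
% Consistency check via localization: (Z/6Z) localized at (5) vanishes,
% since 6 is invertible away from 2 and 3, while localizing at (2) and (3)
% gives the nonzero modules Z/2Z and Z/3Z respectively.
```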
Support of a module
[ "Mathematics" ]
641
[ "Fields of abstract algebra", "Module theory" ]
16,073,576
https://en.wikipedia.org/wiki/Astronomical%20Netherlands%20Satellite
The Astronomical Netherlands Satellite (ANS; also known as Astronomische Nederlandse Satelliet) was a space-based X-ray and ultraviolet telescope. It was launched into Earth orbit on 30 August 1974 at 14:07:39 UTC on a Scout rocket from Vandenberg Air Force Base, United States. The mission ran for 20 months until June 1976, and was jointly funded by the Netherlands Institute for Space Research (NIVR) and NASA. ANS was the first Dutch satellite, and the main-belt asteroid 9996 ANS was named after it. ANS reentered Earth's atmosphere on 14 June 1977. The telescope had an initial orbit with a periapsis of , an apoapsis of , with inclination 98.0° and eccentricity 0.064048, giving it a period of 99.2 minutes. The orbit was Sun-synchronous, and the attitude of the spacecraft could be controlled through reaction wheels. The momentum stored in the reaction wheels throughout the orbit was regularly dumped via magnetic coils that interacted with the Earth's magnetic field. The satellite also had two masses that were released shortly after orbit injection, to remove most of the satellite's angular momentum induced by the launcher. The attitude could be measured by a variety of techniques, including solar sensors, horizon sensors, star sensors and a magnetometer. ANS could measure X-ray photons in the energy range 2 to 30 keV, with a 60 cm² detector, and was used to find the positions of galactic and extragalactic X-ray sources. It also measured their spectra, and looked at their variations over time. It discovered X-ray bursts, and also detected X-rays from Capella. ANS also observed in the ultraviolet part of the spectrum, with a 22 cm (260 cm²) Cassegrain telescope. The wavelengths of the observed photons were between 150 and 330 nm, with the detector split into five channels with central wavelengths of 155, 180, 220, 250 and 330 nm. At these frequencies it took over 18,000 measurements of around 400 objects. See also Ultraviolet astronomy X-ray astronomy Timeline of artificial satellites and space probes List of X-ray space telescopes References Further reading 1974 in spaceflight Space telescopes Ultraviolet telescopes X-ray telescopes First artificial satellites of a country Spacecraft launched in 1974 Astronomy in the Netherlands Satellites of the Netherlands Netherlands–United States relations
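The quoted 99.2-minute period follows from Kepler's third law applied to the orbit's semi-major axis. The altitudes in the sketch below are assumed, illustrative figures for an ANS-like low Earth orbit (the article's own altitude figures are not restated here); with a roughly 266 × 1176 km orbit the computed period lands close to the quoted value.

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6371.0e3          # mean Earth radius, m

def orbital_period_minutes(perigee_alt_km, apogee_alt_km):
    """Keplerian period from perigee/apogee altitudes above the mean radius."""
    # Semi-major axis is the mean of the two orbital radii.
    a = R_EARTH + (perigee_alt_km + apogee_alt_km) / 2 * 1e3
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60

# Assumed, illustrative altitudes for an ANS-like orbit:
print(f"{orbital_period_minutes(266, 1176):.1f} min")   # ~99 min
```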
Astronomical Netherlands Satellite
[ "Astronomy" ]
495
[ "Space telescopes" ]
16,074,940
https://en.wikipedia.org/wiki/ABRIXAS
A Broadband Imaging X-ray All-sky Survey, or ABRIXAS, was a space-based German X-ray telescope. It was launched on 28 April 1999 in a Kosmos-3M launch vehicle from Kapustin Yar, Russia, into Earth orbit. The orbit had a periapsis of , an apoapsis of , an inclination of 48.0° and an eccentricity of 0.00352, giving it a period of 96 minutes. The telescope's battery was accidentally overcharged and destroyed three days after the mission started. When attempts to communicate with the satellite – while its solar panels were illuminated by sunlight – failed, the $20 million project was abandoned. ABRIXAS decayed from orbit on 31 October 2017. The eROSITA telescope was based on the design of the ABRIXAS observatory. eROSITA was launched on board the Spektr-RG space observatory on 13 July 2019 from Baikonur to be deployed at the second Lagrange point (L2). See also German space programme References Further reading Gamma-ray telescopes Space telescopes 1999 in spaceflight Satellites of Germany Spacecraft launched in 1999 Spacecraft which reentered in 2017
ABRIXAS
[ "Astronomy" ]
245
[ "Space telescopes" ]
16,075,100
https://en.wikipedia.org/wiki/Pleuran
Pleuran is an insoluble polysaccharide (β-(1,3/1,6)-D-glucan) isolated from Pleurotus ostreatus. Pleuran belongs to a group of glucose polymers commonly called beta-glucans, which demonstrate biological response modifier properties. These immunomodulating properties render the host more resistant to infections and neoplasms. In a study published in December 2010, pleuran was shown to have a protective effect against exercise-induced suppression of immune cell activity (NK cells) in subjects taking 100 mg per day. In another study, published in 2011, pleuran reduced the incidence of upper respiratory tract infections and increased the number of circulating NK cells. Pleuran is also being studied as a potential immunologic adjuvant. References Polysaccharides
Pleuran
[ "Chemistry", "Biology" ]
179
[ "Carbohydrates", "Biotechnology stubs", "Biochemistry stubs", "Biochemistry", "Polysaccharides" ]
16,075,933
https://en.wikipedia.org/wiki/Lagrange%20number
In mathematics, the Lagrange numbers are a sequence of numbers that appear in bounds relating to the approximation of irrational numbers by rational numbers. They are linked to Hurwitz's theorem. Definition Hurwitz improved Peter Gustav Lejeune Dirichlet's criterion on irrationality to the statement that a real number α is irrational if and only if there are infinitely many rational numbers p/q, written in lowest terms, such that |α − p/q| < 1/(√5 q²). This was an improvement on Dirichlet's result, which had 1/q² on the right-hand side. The above result is best possible since the golden ratio φ is irrational, but if we replace √5 by any larger number in the above expression then we will only be able to find finitely many rational numbers that satisfy the inequality for α = φ. However, Hurwitz also showed that if we omit the number φ, and numbers derived from it, then we can increase the number √5: in fact he showed we may replace it with 2√2. Again this new bound is best possible in the new setting, but this time the number √2 is the problem. If we don't allow √2 then we can increase the number on the right-hand side of the inequality from 2√2 to √221/5. Repeating this process we get an infinite sequence of numbers √5, 2√2, √221/5, ... which converge to 3. These numbers are called the Lagrange numbers, and are named after Joseph Louis Lagrange. Relation to Markov numbers The nth Lagrange number Ln is given by Ln = √(9 − 4/mn²), where mn is the nth Markov number, that is, the nth smallest integer m such that the equation m² + x² + y² = 3mxy has a solution in positive integers x and y. References External links Lagrange number. From MathWorld at Wolfram Research. Introduction to Diophantine methods irrationality and transcendence - Online lecture notes by Michel Waldschmidt, Lagrange Numbers on pp. 24–26. Diophantine approximation
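Because every Markov number arises from the Markov tree rooted at the triple (1, 1, 1), with each triple (x, y, z) branching by replacing one coordinate c with 3ab − c (where a, b are the other two), the Lagrange numbers can be generated mechanically. The sketch below enumerates small Markov numbers this way and applies the formula above; it is a demonstration of the definitions, not a reconstruction of any cited source.

```python
import math

def markov_numbers(limit):
    """Collect Markov numbers up to `limit` by exploring the Markov tree.

    Triples solve x^2 + y^2 + z^2 = 3xyz; the two children of a sorted
    triple (x, y, z) replace the smallest coordinates via c -> 3ab - c.
    """
    seen, numbers = set(), set()
    stack = [(1, 1, 1)]
    while stack:
        x, y, z = sorted(stack.pop())
        if (x, y, z) in seen or z > limit:
            continue
        seen.add((x, y, z))
        numbers.update((x, y, z))
        stack += [(y, z, 3 * y * z - x), (x, z, 3 * x * z - y)]
    return sorted(n for n in numbers if n <= limit)

for m in markov_numbers(200):            # 1, 2, 5, 13, 29, 34, 89, 169, 194
    print(m, math.sqrt(9 - 4 / m**2))    # Lagrange numbers: sqrt(5), 2*sqrt(2), ...
```

The first two outputs are √5 ≈ 2.236 and 2√2 ≈ 2.828, matching the sequence in the definition, and the values visibly converge toward 3.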
Lagrange number
[ "Mathematics" ]
398
[ "Diophantine approximation", "Mathematical relations", "Approximations", "Number theory" ]
16,076,506
https://en.wikipedia.org/wiki/Particle%20Astrophysics%20Magnet%20Facility
The Particle Astrophysics Magnet Facility (commonly known as ASTROMAG) is a NASA project that was designed to investigate anti-matter. It consisted of a series of experiments which would culminate in an experiment, planned for launch in 1995, to be externally attached to the Freedom Space Station. History Experiments and theoretical work conducted during the 1970s and 1980s revealed a higher number of anti-protons than had been expected; to verify this and investigate further, a series of experiments was designed, culminating in an experiment to be launched for attachment to the Space Station. ALICE and LEAP In preparation for the building of the detectors and superconducting magnets to be used in the experiment, some smaller experiments were conducted in the upper atmosphere, mounted underneath high-altitude balloons: ALICE (A Large Isotropic Composition Experiment) and LEAP (Low Energy Antiproton Experiment) being the most notable. ALICE was launched from Prince Albert Airport, Canada on 15 August 1987. It was designed to measure the isotopic composition of the cosmic rays entering Earth's atmosphere and so identify the types of particles which ASTROMAG would study in more detail. LEAP was launched twice, also from Prince Albert, in July and August 1987, and measured the ratios between protons and anti-protons to try to verify earlier experiments that reported higher than expected numbers of anti-protons. ASTROMAG The original proposal was made in 1987 and announced in 1988 for implementation on the Freedom space station. The experiment was tested and accepted in 1989 and was due for launch in 1995, but after various problems with other flights it was demoted from first to fifth place on the schedule. The experiment, called the Particle Astrophysics Magnet Facility, was given the name ASTROMAG (NASA designated ASTRMAG) as it used a large superconducting magnet to deflect particles into its detectors. The magnet was made superconducting by being cooled to 2 kelvins. The hope was that the detectors would discover the oppositely charged anti-protons and so help physicists to use matter–antimatter reactions to develop new propulsion systems based on the resulting release of energy. The experiment was to be mounted on the outside of the Space Station, and projected costs were estimated at $30 million. This was one of the first experiments aimed at capturing material and particle data to further understand the origins and evolution of matter in the composition of the Universe. The experiment was to collect data from collisions of very high velocity particles by measuring their spectrum and attempting to find negatively charged helium or heavier elements. Eventually, delays in NASA missions and problems with the Space Station programme led to ASTROMAG never being launched, and the mission was shelved in 1991. Free Flyer The free-flyer version was to be launched in 2005 into Earth orbit at a height of . It aimed to detect high-energy (>1 GeV per nucleon) cosmic ray nuclei, as well as electrons, to search for antimatter and dark matter candidates. BESS After the experiment was not launched, researchers continued experiments using BESS and the methods employed by ALICE and LEAP in 1987. The latest attempt was a new Nuclear Compton Telescope (NCT), which was successfully test-flown on 1 June 2005 from the Scientific Balloon Flight Facility, Fort Sumner, New Mexico.
Free Flyer

The free flyer version was to be launched in 2005 into Earth orbit at a height of . It aimed to detect high-energy (>1 GeV per nucleon) cosmic ray nuclei, as well as electrons, to search for antimatter and dark matter candidates.

BESS

After the experiment went unlaunched, researchers continued experiments using BESS and the methods employed by ALICE and LEAP in 1987. The latest attempt was a new Nuclear Compton Telescope (NCT), which was successfully test-flown on 1 June 2005 from the Scientific Balloon Flight Facility, Fort Sumner, New Mexico. Its subsequent missions went well and useful data were collected until a launch attempt failed in April 2010 at Alice Springs, Australia, when the balloon broke its tether to the crane in high winds.

Alpha Magnetic Spectrometer

The experiment was superseded by the Alpha Magnetic Spectrometer, which was approved by Congress. An earlier, smaller test version, AMS-01, was flown in 1998 on the Space Shuttle Discovery during a flight to the Russian space station Mir. AMS-02 was delivered to the International Space Station in 2011.

References

Space telescopes
Particle Astrophysics Magnet Facility
[ "Astronomy" ]
767
[ "Space telescopes" ]
16,076,786
https://en.wikipedia.org/wiki/Spektr-R
Spektr-R (part of the RadioAstron program) (Russian: Спектр-Р) was a Russian scientific satellite with a radio telescope on board. It was launched on 18 July 2011 on a Zenit-3F launcher from Baikonur Cosmodrome and was designed to perform research on the structure and dynamics of radio sources within and beyond the Milky Way. Together with some of the largest ground-based radio telescopes, Spektr-R formed interferometric baselines extending up to . On 11 January 2019, the spacecraft stopped responding to ground control, although its science payload was described as "operational". The mission never recovered from the January 2019 incident, and it was declared finished (and spacecraft operations ended) on 30 May 2019.

Overview

The Spektr-R project was funded by the Astro Space Center of Russia and was launched into Earth orbit on 18 July 2011, with a perigee of and an apogee of , about 700 times the orbital height of the Hubble Space Telescope at its highest point and 20 times at its lowest. In comparison, the average distance from Earth to the Moon is . As of 2018, the satellite had a much more stable orbit, with a perigee of and an apogee of ; its orbit no longer intersected the Moon's orbit and was expected to remain stable for possibly hundreds or even thousands of years. The main scientific goal of the mission was the study of astronomical objects with an angular resolution up to a few millionths of an arcsecond. This was accomplished by using the satellite in conjunction with ground-based observatories and interferometry techniques. Another purpose of the project was to develop an understanding of fundamental issues of astrophysics and cosmology, including star formation, the structure of galaxies, interstellar space, black holes and dark matter. Spektr-R was one of the instruments in the RadioAstron program, an international network of observatories led by the Astro Space Center of the Lebedev Physical Institute. The telescope was intended for radio-astrophysical observations of extragalactic objects with ultra-high resolution, as well as research into the characteristics of near-Earth and interplanetary plasma. The very high angular resolving power was achieved in conjunction with a ground-based system of radio telescopes and interferometric methods, operating at wavelengths of 1.35–6.0, 18.0 and 92.0 cm. Once in space, the flower-like main dish was to open its 27 'petals' within 30 minutes. There was a science payload of opportunity on board, PLASMA-F, which consisted of four instruments to observe the solar wind and the outer magnetosphere: the energetic particle spectrometer MEP-2, the magnetometer MMFF, the solar wind monitor BMSW, and the data collection and processing unit SSNI-2. At launch the mass of the spacecraft was . It was launched from the Baikonur Cosmodrome on 18 July 2011 at 02:31 UTC by a Zenit-3F launch vehicle, which is composed of a Zenit-2M with a Fregat-SB upper stage. On 11 January 2019, the spacecraft stopped responding to ground control. It was unknown whether the issue could be fixed, or whether the spacecraft's mission would be ended. With Spektr-R's status unknown and the problems hitting the Mikhailo Lomonosov satellite, the Russian space program had no operational space observatories as of 12 January 2019. This changed with the launch of the Spektr-RG satellite in July 2019. The mission was declared finished on 30 May 2019.
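The resolution figure quoted above follows from the usual diffraction limit of an interferometer, θ ≈ λ/B. As a rough check, assuming a space–ground baseline on the order of 350,000 km (an assumed round number of the same order as the apogee; the exact baseline figure is not preserved above) at the shortest observing wavelength of 1.35 cm:

\[ \theta \approx \frac{\lambda}{B} = \frac{0.0135~\mathrm{m}}{3.5\times10^{8}~\mathrm{m}} \approx 3.9\times10^{-11}~\mathrm{rad} \approx 8~\mu\mathrm{as}, \]

i.e. a few millionths of an arcsecond, consistent with the stated goal; the 18 cm and 92 cm bands give proportionally coarser resolution.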
The external tank of the Fregat upper stage that delivered the Spektr-R observatory into orbit exploded on 8 May 2020, generating at least 65 pieces of trackable debris in orbit around Earth.

History of the project

At the beginning of the 1980s, one of the USSR's leading developers of scientific space probes had completed a preliminary design for revolutionary, new-generation spacecraft, the 1F and 2F. The main purpose of Spektr was to develop a common platform that could be used for future deep-space missions. NPO Lavochkin hoped to use the design of the 1F as the standard design for space telescopes. In 1982, NPO Lavochkin completed technical blueprints for RadioAstron, a space-based radio telescope. The expectation was that the 1F and 2F spacecraft would meet the requirements of the RadioAstron mission (also known as Astron-2). Early on, many criticized the suitability of the 1F platform for astrophysics missions, even when compared to the older 4V spacecraft bus. Although the attitude control system of the 1F seemed to have few issues navigating planetary probes, its accuracy was well below the standard requirements for a high-precision telescope. Adding to the 1F's technical issues, the spacecraft lacked electrically driven flywheels, which critics believed would have improved its stability in space. The spacecraft also lacked a movable solar panel system that could track the position of the Sun without requiring the entire satellite to reposition, a shortcoming that would disrupt observations. It was one of three competing Spektr missions, the others being Spektr X-Gamma and Spektr-UV. On 1 August 1983, the VPK, the Soviet Military-Industrial Commission, issued an official decision (number 274) titled "On works for creation of automated interplanetary vehicles for the exploration of planets of the Solar System, the Moon and cosmic space". This document gave new impetus to the development of satellites. The new technical proposals submitted in mid-1984 included a gamma-ray telescope designed to register radio waves in the millimetre range. Both of these satellites incorporated rotating solar panels, a highly sensitive star-tracking operating system, and flywheels. By the end of the 1980s, NPO Lavochkin Designer General Vyacheslav Kovtunenko proposed basing all future astrophysics satellites on the current Oko-1 spacecraft model, designed originally to track incoming ballistic missiles. According to this plan, the Oko-1's missile-watching infrared telescope would eventually be replaced with scientific instruments, and the satellite would be pointed towards space rather than Earth.

Observing techniques

Using a technique called very-long-baseline interferometry, it was anticipated that ground telescopes in Australia, Chile, China, India, Japan, Korea, Mexico, Russia, South Africa, Ukraine and the United States would jointly make observations with the RadioAstron spacecraft. The RadioAstron satellite's main 10-metre radio telescope would observe in four different bands of radio waves in concert with the international ground telescopes. It could also observe sources at two frequencies simultaneously. Spektr-R was also planned to include a secondary BMSW within the PLASMA-F experiment, the goal of which was to measure the directions and intensity of the solar wind. In May 2011, the news agency RIA Novosti reported that the BMSW instrument would indeed be on board. It was also reported that the BMSW would carry a micrometeoroid counter made in Germany.
RadioAstron was expected to be placed into a highly elliptical orbit by the Fregat stage of the Zenit rocket's launch. Spektr-R's closest point (perigee) would be above the Earth's surface, with its apogee away. The operational orbit was expected to last at least nine years, with RadioAstron never being in the Earth's shadow for more than two hours. With its apogee as far out as the orbit of the Moon, Spektr-R could be considered a deep-space mission. In fact, the gravitational pull of the Moon was expected to perturb the satellite's orbit in three-year cycles, with its apogee travelling between 265,000 and from Earth and its perigee between . Each orbit would take RadioAstron around eight to nine days. This drift would vastly augment the telescope's range of vision: it was estimated that the satellite would have upwards of 80% of its potential targets within view at any one point in its orbit. The first 45 days of Spektr-R's orbit were scheduled to consist of engineering commissioning, that is, the deployment of the main antenna, various systems checks and communications tests. Spektr-R's tracking was to be handled by the RT-22 radio telescope in Pushchino, Russia. Flight control would be operated by ground stations near Moscow and in Ussuriysk in Russia's Far East. Other Spektr-R joint observations would be handled by ground telescopes in Arecibo, Badary, Effelsberg, Green Bank, Medicina, Noto, Svetloe, Zelenchukskaya and Westerbork. The Spektr-R project was led by the Russian Academy of Sciences's Astro Space Center of the Lebedev Physics Institute. The radio receivers on Spektr-R were to be built in India and Australia. In earlier plans, two additional receivers were to be provided by firms under contract with the European VLBI Consortium, the EVN. These additional payloads were eventually cancelled, with the project citing their age, and similar Russian-made instruments replaced the Indian and Australian ones.

See also

Spektr-RG
Spektr-UV
HALCA

References

External links

RadioAstron website

Spacecraft launched in 2011
Radio telescopes
Space telescopes
2011 in Russia
Satellites of Russia
Spektr-R
[ "Astronomy" ]
1,970
[ "Space telescopes" ]
16,076,816
https://en.wikipedia.org/wiki/MET%20Matrix
A MET (Materials, Energy, and Toxicity) Matrix is an analysis tool used to evaluate the environmental impacts of a product over its life cycle. The tool takes the form of a 3×3 matrix with descriptive text in each of its cells. One dimension of the matrix is a qualitative input–output model that examines environmental concerns related to the product's materials use, energy use, and toxicity. The other dimension follows the life cycle of the product through its production, use, and disposal phases. The text in each cell corresponds to the intersection of two particular aspects: by looking at a given cell, one can examine, for example, energy use during the production phase, or the toxicity concerns that may arise during the disposal phase.

References

Industrial ecology
Environmental impact assessment
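To make the structure described above concrete, here is a minimal sketch in Python; the product, field names and cell texts are illustrative assumptions, not part of any standard MET tooling.

ASPECTS = ("materials", "energy", "toxicity")   # rows of the MET matrix
PHASES = ("production", "use", "disposal")      # columns: life-cycle phases

# Hypothetical cell entries for an imagined electric kettle.
met_matrix = {
    ("materials", "production"): "steel body, copper wiring, plastic handle",
    ("energy", "production"): "electricity for forming and assembly",
    ("energy", "use"): "2 kW draw while heating water",
    ("toxicity", "disposal"): "solder and flame retardants in the circuit board",
}

def cell(aspect: str, phase: str) -> str:
    """Return the descriptive text for one aspect/phase intersection."""
    if aspect not in ASPECTS or phase not in PHASES:
        raise ValueError("unknown aspect or phase")
    return met_matrix.get((aspect, phase), "no concerns recorded")

# Example: energy use during the production phase.
print(cell("energy", "production"))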
MET Matrix
[ "Chemistry", "Engineering" ]
166
[ "Industrial ecology", "Industrial engineering", "Environmental engineering" ]
16,077,444
https://en.wikipedia.org/wiki/Animal%20source%20foods
Animal source foods (ASF) include many food items that come from an animal source, such as fish, meat, dairy, eggs and honey. Many individuals consume little ASF, or even none for long periods of time, either by personal choice or by necessity, as ASF may not be accessible or available to them.

Nutrients in animal source foods

Six micronutrients are richly found in ASF: vitamin A, vitamin B12, riboflavin (also called vitamin B2), calcium, iron and zinc. They play a critical role in the growth and development of children. Inadequate stores of these micronutrients, whether resulting from inadequate intake or poor absorption, are associated with poor growth, anemias (iron deficiency anemia and macrocytic anemia), rickets, night blindness, impaired cognitive functioning, neuromuscular deficits, diminished work capacity, psychiatric disorders and death. Some of these effects, such as impaired cognitive development from an iron deficiency, are irreversible. Micronutrient deficiency is associated with poor early cognitive development. Programs designed to address these micronutrient deficiencies should be targeted to infants, children, and pregnant women. To address these significant micronutrient deficiencies, some global health researchers and practitioners developed and piloted a snack program for Kenyan school children.

Animal source food production

According to a 2006 United Nations study, the livestock sector emerges as "one of the top two or three most significant contributors to the most serious environmental problems, at every scale from local to global." As such, plant-derived foods are typically considered better for the environment. Despite this, the raising of certain animals can be more environmentally sound than that of others. According to the Farallones Institute's report from 1976, raising rabbits and chickens for food (with a well-considered approach) can still be quite sustainable. As such, the production of meat and other produce, such as eggs, may still be considered environmentally friendly if it is done in an industrial, high-efficiency manner. In addition, raising goats (for goat milk and meat) can also be environmentally quite friendly and has been favored by certain environmental activists, such as Mahatma Gandhi. The planetary diet of the EAT-Lancet Commission has advised substantial reductions in the consumption of ASF on the basis that such diets threaten sustainability because of their environmental footprint and negative health impacts. This report was challenged by Adegbola T. Adesogan and colleagues in 2020, who stated that it "fail[ed] to adequately include the experience of marginalized women and children in low- and middle-income countries whose diets regularly lack the necessary nutrients" and that ASF offer the best source of nutrient-rich food for children aged 6–23 months. Between 1990 and 2018, global intakes (servings per week) increased for processed meat, unprocessed red meat, cheese, eggs, milk and seafood.

Health effects

Animal-source foods are a diverse group of foods that are rich in bioavailable nutrients, including calcium, iron, zinc, vitamin B12, vitamin D, choline, DHA, and EPA. Animal-source and plant-based foods have complementary nutrient profiles, and balanced diets containing both reduce the risk of nutritional deficiencies. Animal-source foods such as eggs, fish, red meat and shellfish increase circulating TMAO concentrations.
Excess consumption of processed meat, red meat, and saturated fat increases the risk of non-communicable disease. Animal-source foods have been described as a suitable complementary food to improve growth in 6-to-24-month-old children in low- and middle-income countries. A 2022 review of animal-source foods found that red meat, but not fish or eggs, increases the risk of type 2 diabetes. A 2023 review found that substituting plant-based foods for animal-source foods is associated with a lower risk of cardiovascular disease and all-cause mortality. A 2024 review found that plant-based meat alternatives have the potential to be healthier than animal-source foods and to have smaller environmental footprints.

See also

Animal product

References

Animal products
Animal source foods
[ "Chemistry" ]
851
[ "Animal products", "Natural products" ]
16,077,733
https://en.wikipedia.org/wiki/STSat-1
STSat-1 (Science and Technology Satellite-1), formerly known as KAISTSat-4 (Korea Advanced Institute of Science and Technology Satellite-4), is a South Korean astrophysical satellite carrying an ultraviolet telescope. Funded through the Korea Aerospace Research Institute (KARI), it was launched on 27 September 2003 at 06:11:44 UTC from Plesetsk Cosmodrome by a Kosmos-3M launch vehicle into an Earth orbit with a height of between 675 and 695 km. STSat-1 is a low-cost KAIST / KAIST Satellite Technology Research Center (SaTReC) satellite technology demonstration mission, funded by the Ministry of Science and Technology (MOST) of South Korea, and a follow-up mission in the KITSAT program. The 106 kg satellite carries a special UV imaging spectrograph to monitor gas clouds in the Galaxy. It was to complete a full mapping of the sky in about a year by scanning a one-degree strip every day. Additionally, it may also aim the telescope downward to image auroral displays.

References

External links

eoPortal STSat-1

Satellites of South Korea
Ultraviolet telescopes
Space telescopes
KAIST
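The one-year figure for the sky survey quoted above is simple arithmetic: covering the sky as adjacent one-degree strips, one strip per day, requires the strip position to sweep through a full circle,

\[ \frac{360^\circ}{1^\circ/\mathrm{day}} = 360~\mathrm{days} \approx 1~\mathrm{year}. \]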
STSat-1
[ "Astronomy" ]
270
[ "Space telescopes" ]
16,078,168
https://en.wikipedia.org/wiki/Dark%20Universe%20Observatory
The Dark Universe Observatory (DUO) is a planned NASA space-based telescope. It would observe galaxy clusters in the X-ray range with the intent of gathering data related to both dark matter and dark energy.

References

Space telescopes
Satellites of the United States
Proposed satellites
Dark Universe Observatory
[ "Astronomy" ]
55
[ "Space telescopes" ]
16,079,015
https://en.wikipedia.org/wiki/Small%20Astronomy%20Satellite%203
The Small Astronomy Satellite 3 (SAS 3, known as SAS-C before launch, and also designated Explorer 53 as part of NASA's Explorer program) was a NASA X-ray astronomy space telescope. It functioned from 7 May 1975 to 9 April 1979 and covered the X-ray range with four experiments on board. The satellite, built by the Johns Hopkins University Applied Physics Laboratory (APL), was proposed and operated by MIT's Center for Space Research (CSR). It was launched on a Scout vehicle from the Italian San Marco platform (Broglio Space Center) near Malindi, Kenya, into a low-Earth, nearly equatorial orbit. The spacecraft was 3-axis stabilized, with a momentum wheel used to establish stability about the nominal rotation, or Z, axis. The orientation of the Z-axis could be altered over a period of hours using magnetic torque coils that interacted with the Earth's magnetic field. Solar panels charged batteries during the daylight portion of each orbit, so that SAS 3 had essentially no expendables to limit its lifetime beyond the life of the tape recorders, the batteries, and orbital drag. The spacecraft typically operated in a rotating mode, spinning at one revolution per 95-minute orbit, so that the LEDs, tube and slat collimator experiments, which looked out along the Y-axis, could view and scan the sky almost continuously. The rotation could also be stopped, allowing extended (up to 30 minutes) pointed observations of selected sources by the Y-axis instruments. Data were recorded on board by magnetic tape recorders and played back during station passes each orbit. SAS 3 was commanded from the NASA Goddard Space Flight Center (GSFC) in Greenbelt, Maryland, but data were transmitted by modem to MIT for scientific analysis, where scientific and technical staff were on duty 24 hours a day. The data from each orbit were subjected to quick-look scientific analysis at MIT before the next orbital station pass, so the science operational plan could be altered by telephoned instruction from MIT to GSFC in order to study targets in near real-time.

Launch

The spacecraft was launched from the San Marco platform off the coast of Kenya, Africa, into a near-circular, near-equatorial orbit. It contained four instruments: the Extragalactic Experiment, the Galactic Monitor Experiment, the Scorpio Monitor Experiment and the Galactic Absorption Experiment. In the orbital configuration, the spacecraft was high and the tip-to-tip dimension was . Four solar paddles were used in conjunction with a 12-cell nickel–cadmium battery to provide power over the entire orbit. The spacecraft was stabilized along the Z-axis and rotated at about 0.1°/second. Changes to the spin-axis orientation were made by ground command, either delayed or in real time. The spacecraft could be made to move back and forth ±2.5° across a selected source along the X-axis at 0.01°/second. The experiments looked along the Z-axis of the spacecraft, perpendicular to it, and at an angle.

Objectives

The major scientific objectives of the mission were to: determine bright X-ray source locations to an accuracy of 15 arcseconds; study selected sources over the energy range 0.1–55 keV; and continuously search the sky for X-ray novae, flares, and other transient phenomena. Explorer 53 (SAS-C) was a small spacecraft whose objectives were to survey the celestial sphere for sources radiating in the X-ray, gamma-ray, ultraviolet and other spectral regions.
The primary missions of Explorer 53 were to measure the X-ray emission of discrete extragalactic sources, to monitor the intensity and spectra of galactic X-ray sources from 0.2 to 60 keV, and to monitor the X-ray intensity of Scorpio X-1.

Experiments

Extragalactic Experiment (EGE)

This experiment determined the positions of very weak extragalactic X-ray sources. The instrument viewed a 100-square-degree region of the sky around the direction of the spin axis of the satellite. The nominal targets for a 1-year study were: (1) the Virgo Cluster of galaxies for 4 months, (2) the galactic equator for 2 months, (3) the Andromeda Nebula for 3 months, and (4) the Magellanic Clouds for 3 months. The instrumentation consisted of one 2.5-arc-minute and one 4.5-arc-minute full-width-at-half-maximum (FWHM) modulation collimator, as well as proportional counters sensitive over the energy range from 1.5 to 10 keV. The effective area of each collimator was about 225 cm2. The aspect system provided information on the orientation of the collimators to an accuracy of 15 arc-seconds.

Galactic Absorption Experiment (GAE)

The density and distribution of interstellar matter were determined by measuring the variation in the intensity of the low-energy diffuse X-ray background as a function of galactic latitude. A 1-micrometer polypropylene-window proportional counter was used for the 0.1- to 0.4-keV and 0.4- to 1.0-keV energy ranges, while a 2-micrometer titanium-window counter covered the energy range from 0.3 to 0.5 keV. In addition, two 1-mm beryllium-window counters were used for the 1.0- to 10-keV energy range. The collimators in this experiment had fields of view of 3° for the 1-micrometer counter, 2° for the 2-micrometer counter, and 2° for the 1-mm counters.

Galactic Monitor Experiment (GME)

The objectives of this experiment were to locate galactic X-ray sources to 15 arc-seconds and to monitor these sources for intensity variations. The source positions were determined with the modulation collimators of the Extragalactic Experiment during the nominal 2-month observation of the galactic equator. The monitoring of the X-ray sky was accomplished with three slat collimators. One collimator, 1° by 70° FWHM, was oriented perpendicular to the equatorial plane of the satellite, while the other two, each 0.5° by 45° FWHM, were oriented 30° above and 30° below the first. The detector behind each collimator was a proportional counter, sensitive from 1.5 to 13 keV, with an effective area of about 100 cm2. The 1.0° collimator had an additional counter of the same area, sensitive from 8 to 50 keV. Three lines of position were obtained for any given source when the satellite was spun at a steady rotation of 4 arc-minutes/second about the Z-axis.

Scorpio Monitor Experiment (SME)

A 12° by 50° FWHM slat collimator was oriented with its long axis perpendicular to the satellite spin axis such that a given point in the sky could be monitored for about 25% of a rotation. This collimator was inclined by 31° with respect to the equatorial plane of the satellite so that Scorpio X-1 could be observed while the Z-axis was oriented towards the Virgo Cluster of galaxies. The detectors used in this experiment were proportional counters with 1-mm beryllium windows. The energy range was from 1.0 to 60 keV, and the total effective area was about 40 cm2.
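As a rough consistency check on the scanning geometry described above: at the quoted steady spin rate of 4 arc-minutes per second, a point source crosses the 1°-wide (60 arc-minute) slat collimator's field of view in

\[ t = \frac{60~\mathrm{arcmin}}{4~\mathrm{arcmin/s}} = 15~\mathrm{s}, \]

so each rotation gave roughly a 15-second dwell on a source in the 1° collimator, and about half that in the 0.5° collimators.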
Research results

SAS 3 was especially productive due to its flexibility and rapid responsiveness. Among its most important results were:

Shortly after the discovery of the first X-ray burster by the ANS, an intense period of burst-source discovery by SAS 3 quickly led to the discovery and characterization of about a dozen additional objects, including the famous Rapid Burster, MXB1730-335. These observations established the identification of bursting X-ray sources with neutron star binary systems;
The RMC was the first instrument to routinely provide X-ray positions that were sufficiently precise to allow follow-up by optical observatories to establish X-ray/optical counterparts, even in crowded regions near the galactic plane. Roughly 60 positions were obtained with accuracies on the order of 1 arcminute or less. The resulting source identifications helped to connect X-ray astronomy to the main body of stellar astrophysics;
Discovery of the 3.6-second pulsations of the transient neutron star/Be star binary 4U 0115+63, leading to the determination of its orbit and the observation of a cyclotron absorption line in its strong magnetic field. Many Be star/neutron star binaries were subsequently discovered as a class of X-ray emitters;
Discovery of X-ray emission from HZ 43 (an isolated white dwarf), from Algol, and from AM Her, the first highly magnetic white dwarf binary system seen in X-rays;
Establishing the frequent location of X-ray sources near the centers of globular clusters;
The first identification of a QSO through its X-ray emission;
The soft X-ray instrument's finding that the 0.1–0.28 keV diffuse intensity is generally inversely correlated with the neutral hydrogen column density, indicating absorption of external diffuse sources by the foreground galactic interstellar medium.

Lead investigators on SAS 3 were MIT professors George W. Clark, Hale V. Bradt, and Walter H. G. Lewin. Other major contributors were Profs. Claude R. Canizares and Saul Rappaport, and Drs. Jeffrey A. Hoffman, George Ricker, Jeff McClintock, Rodger Doxsey, Garrett Jernigan, Lynn Cominsky, John Doty, and many others, including numerous graduate students.

See also

Small Astronomy Satellite 1
Small Astronomy Satellite 2

References

Sources

SAS (Small Astronomy Satellite), The Internet Encyclopedia of Science

1975 in spaceflight
Satellites formerly orbiting Earth
Explorers Program
Space telescopes
Small Astronomy Satellite 3
[ "Astronomy" ]
2,035
[ "Space telescopes" ]
16,079,328
https://en.wikipedia.org/wiki/NGC%20559
NGC 559 (also known as Caldwell 8) is an open cluster and Caldwell object in the constellation Cassiopeia. It shines at magnitude +9.5. Its celestial coordinates are RA , dec . It is located near the open cluster NGC 637 and the bright magnitude +2.2 irregular variable star Gamma Cassiopeiae. The cluster is 7 arcminutes across. The object is also called the Ghost's Goblet, a name coined by astronomer Stephen J. O'Meara because the center of the star cluster, with a little imagination, is reminiscent of a still photograph of a jeweled goblet that is about to vanish in a ghostly manner. O'Meara attributes the impression of fading to the low brightness (about magnitude +12) of many stars in the center, as well as to the great age of the star cluster, which is about 1.8 billion years old.

References

External links

Open clusters
Cassiopeia (constellation)
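For readers unfamiliar with the magnitude scale, the numbers above can be turned into brightness ratios with the standard relation F1/F2 = 10^(0.4 Δm). The gap between the cluster's integrated magnitude of +9.5 and its roughly magnitude +12 central stars gives

\[ \frac{F_{\mathrm{cluster}}}{F_{\mathrm{star}}} = 10^{0.4\,(12-9.5)} = 10^{1} = 10, \]

so the cluster as a whole is only about ten times brighter than a single faint central member, in keeping with O'Meara's impression of an object on the verge of fading from view.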
NGC 559
[ "Astronomy" ]
210
[ "Cassiopeia (constellation)", "Constellations" ]
16,079,402
https://en.wikipedia.org/wiki/CXOU%20J164710.2%E2%88%92455216
CXOU J164710.2−455216 is an anomalous X-ray pulsar and magnetar in the massive galactic open cluster Westerlund 1. It is the brightest X-ray source in the cluster and was discovered in 2005 in observations made by the Chandra X-ray Observatory. The Westerlund 1 cluster is believed to have formed in a single burst of star formation, implying that the progenitor star must have had a mass in excess of 40 solar masses. The fact that a neutron star was formed instead of a black hole implies that more than 95% of the star's original mass must have been lost before or during the supernova that produced the magnetar. On 21 September 2006, the Swift satellite detected a 20 ms soft gamma-ray burst in Westerlund 1. Fortuitously, XMM-Newton observations had been made four days earlier, and repeat observations 1.5 days after the burst revealed the magnetar to be the source, with its X-ray luminosity having increased by a factor of 100 during the outburst.

References

Ara (constellation)
Anomalous X-ray pulsars
Magnetars
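The 95% figure is straightforward arithmetic once a neutron-star mass is assumed. Taking roughly 2 solar masses as a representative neutron-star mass (an assumption consistent with observed neutron stars, not a value given above):

\[ \frac{40\,M_{\odot} - 2\,M_{\odot}}{40\,M_{\odot}} = 0.95, \]

so a progenitor of at least 40 solar masses must shed more than 95% of its mass, before or during the supernova, to leave a neutron star rather than a black hole.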
CXOU J164710.2−455216
[ "Astronomy" ]
257
[ "Magnetars", "Magnetism in astronomy", "Constellations", "Ara (constellation)" ]
49,611
https://en.wikipedia.org/wiki/Menopause
Menopause, also known as the climacteric, is the time when menstrual periods permanently stop, marking the end of reproduction. It typically occurs between the ages of 45 and 55, although the exact timing can vary. Menopause is usually a natural change related to a decrease in circulating blood estrogen levels. It can occur earlier in those who smoke tobacco. Other causes include surgery that removes both ovaries, some types of chemotherapy, or anything that leads to a decrease in hormone levels. At the physiological level, menopause happens because of a decrease in the ovaries' production of the hormones estrogen and progesterone. While typically not needed, a diagnosis of menopause can be confirmed by measuring hormone levels in the blood or urine. Menopause is the opposite of menarche, the time when a girl's periods start. In the years before menopause, a woman's periods typically become irregular, which means that periods may be longer or shorter in duration or be lighter or heavier in the amount of flow. During this time, women often experience hot flashes; these typically last from 30 seconds to ten minutes and may be associated with shivering, night sweats, and reddening of the skin. Hot flashes can recur for four to five years. Other symptoms may include vaginal dryness, trouble sleeping, and mood changes. The severity of symptoms varies between women. Menopause before the age of 45 years is considered to be "early menopause"; when ovarian failure or surgical removal of the ovaries occurs before the age of 40 years, this is termed "premature ovarian insufficiency". In addition to symptoms (hot flushes/flashes, night sweats, mood changes, arthralgia and vaginal dryness), the physical consequences of menopause include bone loss, increased central abdominal fat, and adverse changes in a woman's cholesterol profile and vascular function. These changes predispose postmenopausal women to increased risks of osteoporosis and bone fracture, and of cardio-metabolic disease (diabetes and cardiovascular disease). Medical professionals often define menopause as having occurred when a woman has not had any menstrual bleeding for a year. It may also be defined by a decrease in hormone production by the ovaries. In those who have had surgery to remove their uterus but still have functioning ovaries, menopause is not considered to have yet occurred. Following the removal of the uterus, symptoms of menopause typically occur earlier. Iatrogenic menopause occurs when both ovaries are surgically removed (oophorectomy) along with the uterus for medical reasons. The primary indications for treatment of menopause are symptoms and prevention of bone loss. Mild symptoms may be improved with treatment. With respect to hot flashes, avoiding smoking, caffeine, and alcohol is often recommended; sleeping naked in a cool room and using a fan may help. The most effective treatment for menopausal symptoms is menopausal hormone therapy (MHT). Non-hormonal therapies for hot flashes include cognitive-behavioral therapy, clinical hypnosis, gabapentin, fezolinetant and selective serotonin reuptake inhibitors. These will not improve symptoms such as joint pain or vaginal dryness, which affect over 55% of women. Exercise may help with sleeping problems. Many of the concerns about the use of MHT raised by older studies are no longer considered barriers to MHT in healthy women. High-quality evidence for the effectiveness of alternative medicine has not been found.
Signs and symptoms

During the early menopause transition, the menstrual cycles remain regular, but the interval between cycles begins to lengthen. Hormone levels begin to fluctuate. Ovulation may not occur with each cycle. The term menopause refers to a point in time that follows one year after the last menstruation. During the menopausal transition and after menopause, women can experience a wide range of symptoms. However, for women who enter the menopause transition without having regular menstrual cycles (due to prior surgery, other medical conditions or ongoing hormonal contraception), the menopause cannot be identified by bleeding patterns and is defined as the permanent loss of ovarian function.

Vagina, uterus and bladder (urogenital tract)

During the transition to menopause, menstrual patterns can show shorter cycling (by 2–7 days); longer cycles remain possible. There may be irregular bleeding (lighter, heavier, spotting). Dysfunctional uterine bleeding is often experienced by women approaching menopause due to the hormonal changes that accompany the menopause transition. Spotting or bleeding may simply be related to vaginal atrophy, a benign sore (polyp or lesion), or may be a functional endometrial response. The European Menopause and Andropause Society has released guidelines for assessment of the endometrium, which is usually the main source of spotting or bleeding. In post-menopausal women, however, any unscheduled vaginal bleeding is of concern and requires an appropriate investigation to rule out the possibility of malignant disease. Urogenital symptoms may appear during menopause and continue through postmenopause; they include painful intercourse, vaginal dryness and atrophic vaginitis (thinning of the membranes of the vulva, the vagina, the cervix and the outer urinary tract). There may also be considerable shrinking and loss of elasticity in all of the outer and inner genital areas. Urinary urgency may also occur, and some women experience urinary incontinence.

Other physical effects

The most common physical symptoms of menopause are heavy night sweats and hot flashes (also known as vasomotor symptoms). Sleeping problems and insomnia are also common. Other physical symptoms may be reported that are not specific to menopause but may be exacerbated by it, such as lack of energy, joint soreness, stiffness, back pain, breast enlargement, breast pain, heart palpitations, headache, dizziness, dry or itchy skin, thinning or tingling skin, rosacea, and weight gain.

Mood and memory effects

Psychological symptoms are often reported, but they are not specific to menopause and can be caused by other factors. They include anxiety, poor memory, inability to concentrate, depressive mood, irritability, mood swings, and less interest in sexual activity. Menopause-related cognitive impairment can be confused with the mild cognitive impairment that precedes dementia. There is evidence of small decreases in verbal memory, on average, which may be caused by the effects of declining estrogen levels on the brain, or perhaps by reduced blood flow to the brain during hot flashes. However, these tend to resolve for most women during the postmenopause. Subjective reports of memory and concentration problems are associated with several factors, such as lack of sleep and stress.

Long-term effects

Cardiovascular health

Exposure to endogenous estrogen during the reproductive years provides women with protection against cardiovascular disease, which is lost around 10 years after the onset of menopause.
The menopausal transition is associated with an increase in fat mass (predominantly in visceral fat), an increase in insulin resistance, dyslipidaemia, and endothelial dysfunction. Women with vasomotor symptoms during menopause seem to have an especially unfavorable cardiometabolic profile, as do women with premature onset of menopause (before 45 years of age). These risks can be reduced by managing risk factors, such as tobacco smoking, hypertension, increased blood lipids and body weight.

Bone health

The annual rates of bone mineral density loss are highest starting one year before the final menstrual period and continuing through the two years after it. Thus, postmenopausal women are at increased risk of osteopenia, osteoporosis and fractures.

Causes

Menopause is a normal event in a woman's life and a natural part of aging. Menopause can also be induced early. Induced menopause occurs as a result of medical treatment such as chemotherapy, radiotherapy, oophorectomy, or complications of tubal ligation, hysterectomy, unilateral or bilateral salpingo-oophorectomy or leuprorelin usage.

Age

Menopause typically occurs at some point between 47 and 54 years of age. According to various data, more than 95% of women have their last period between the ages of 44 and 56 (median 49–50); 2% of women have their last bleeding under the age of 40, 5% between the ages of 40 and 45, and the same proportion between the ages of 55 and 58. The average age of the last period is 51 years in the United States, 50 years in Russia, 49 years in Greece, 47 years in Turkey, 47 years in Egypt and 46 years in India. Beyond the influence of genetics, these differences are also due to early-life environmental conditions and associated epigenetic effects. The menopausal transition, or perimenopause, leading up to menopause usually lasts 3–4 years (sometimes as long as 5–14 years). Undiagnosed and untreated coeliac disease is a risk factor for early menopause. Coeliac disease can present with several non-gastrointestinal symptoms, in the absence of gastrointestinal symptoms, and most cases escape timely recognition and go undiagnosed, leading to a risk of long-term complications. A strict gluten-free diet reduces the risk. Women with early diagnosis and treatment of coeliac disease have a normal duration of fertile life span. Women who have undergone hysterectomy with ovary conservation go through menopause on average 1.5 years earlier than the expected age.

Premature ovarian insufficiency

In rare cases, a woman's ovaries stop working at a very early age, ranging anywhere from the age of puberty to age 40. This is known as premature ovarian failure or premature ovarian insufficiency (POI) and affects 1 to 2% of women by age 40. It is diagnosed or confirmed by high blood levels of follicle-stimulating hormone (FSH) and luteinizing hormone (LH) on at least three occasions at least four weeks apart. Premature ovarian insufficiency may be related to an autoimmune disorder and therefore might co-occur with other autoimmune disorders such as thyroid disease, adrenal insufficiency, and diabetes mellitus. Other causes include chemotherapy, being a carrier of the fragile X syndrome gene, and radiotherapy. However, in about 50–80% of cases of premature ovarian insufficiency, the cause is unknown, i.e., it is generally idiopathic. Early menopause can be related to cigarette smoking, higher body mass index, racial and ethnic factors, illnesses, and the removal of the uterus.
Surgical menopause

Menopause can be surgically induced by bilateral oophorectomy (removal of the ovaries), which is often, but not always, done in conjunction with removal of the fallopian tubes (salpingo-oophorectomy) and uterus (hysterectomy). Cessation of menses as a result of removal of the ovaries is called "surgical menopause". The sudden and complete drop in hormone levels may produce extreme withdrawal symptoms, such as hot flashes, and the symptoms of early menopause may be more severe. Removal of the uterus without removal of the ovaries does not directly cause menopause, although pelvic surgery of this type can often precipitate a somewhat earlier menopause, perhaps because of a compromised blood supply to the ovaries; the delay between surgery and a possible early menopause reflects the fact that the ovaries are still producing hormones.

Mechanism

The menopausal transition, and postmenopause itself, is a natural change, not usually a disease state or a disorder. The main cause of this transition is the natural depletion and aging of the finite number of oocytes (the ovarian reserve). This process is sometimes accelerated by other conditions and is known to occur earlier after a wide range of gynecologic procedures such as hysterectomy (with and without ovariectomy), endometrial ablation and uterine artery embolisation. The depletion of the ovarian reserve causes an increase in circulating follicle-stimulating hormone (FSH) and luteinizing hormone (LH) levels because there are fewer oocytes and follicles responding to these hormones and producing estrogen. The transition has a variable degree of effects. The stages of the menopause transition have been classified according to a woman's reported bleeding pattern, supported by changes in pituitary follicle-stimulating hormone (FSH) levels. In younger women, during a normal menstrual cycle the ovaries produce estradiol, testosterone and progesterone in a cyclical pattern under the control of FSH and luteinizing hormone (LH), which are both produced by the pituitary gland. During perimenopause (approaching menopause), estradiol levels and patterns of production remain relatively unchanged or may increase compared to young women, but the cycles become frequently shorter or irregular. The often observed increase in estrogen is presumed to be a response to elevated FSH levels that, in turn, is hypothesized to be caused by decreased feedback by inhibin. Similarly, decreased inhibin feedback after hysterectomy is hypothesized to contribute to increased ovarian stimulation and earlier menopause. The menopausal transition is characterized by marked, and often dramatic, variations in FSH and estradiol levels. Because of this, measurements of these hormones are not considered to be reliable guides to a woman's exact menopausal status. Menopause occurs because of the sharp decrease of estradiol and progesterone production by the ovaries. After menopause, estrogen continues to be produced mostly by aromatase in fat tissues and is produced in small amounts in many other tissues such as ovaries, bone, blood vessels, and the brain, where it acts locally. The substantial fall in circulating estradiol levels at menopause impacts many tissues, from brain to skin.
In contrast to the sudden fall in estradiol during menopause, the levels of total and free testosterone, as well as dehydroepiandrosterone sulfate (DHEAS) and androstenedione, appear to decline more or less steadily with age. An effect of natural menopause on circulating androgen levels has not been observed. Thus, specific tissue effects of natural menopause cannot be attributed to loss of androgenic hormone production. Hot flashes and other vasomotor and body symptoms accompanying the menopausal transition are associated with estrogen insufficiency and changes that occur in the brain, primarily the hypothalamus, and involve a complex interplay between the neurotransmitters kisspeptin, neurokinin B, and dynorphin, which are found in KNDy neurons in the infundibular nucleus.

Ovarian aging

Decreased inhibin feedback after hysterectomy is hypothesized to contribute to increased ovarian stimulation and earlier menopause. Hastened ovarian aging has been observed after endometrial ablation. While it is difficult to prove that these surgeries are causative, it has been hypothesized that the endometrium may produce endocrine factors contributing to the endocrine feedback and regulation of ovarian stimulation; elimination of these factors would then contribute to faster depletion of the ovarian reserve. Reduced blood supply to the ovaries, which may occur as a consequence of hysterectomy and uterine artery embolisation, has been hypothesized to contribute to this effect. Impaired DNA repair mechanisms may contribute to earlier depletion of the ovarian reserve during aging. As women age, double-strand breaks accumulate in the DNA of their primordial follicles. Primordial follicles are immature primary oocytes surrounded by a single layer of granulosa cells. An enzyme system is present in oocytes that ordinarily accurately repairs DNA double-strand breaks. This repair system is called "homologous recombinational repair", and it is especially effective during meiosis. Meiosis is the general process by which germ cells are formed in all sexual eukaryotes; it appears to be an adaptation for efficiently removing damage in germ-line DNA. Human primary oocytes are present at an intermediate stage of meiosis, termed prophase I (see Oogenesis). Expression of four key DNA repair genes that are necessary for homologous recombinational repair during meiosis (BRCA1, MRE11, Rad51, and ATM) declines with age in oocytes. This age-related decline in the ability to repair DNA double-strand damage can account for the accumulation of such damage, which then likely contributes to the depletion of the ovarian reserve.

Diagnosis

Ways of assessing the impact of some of these menopause effects on women include the Greene climacteric scale questionnaire, the Cervantes scale and the Menopause rating scale.

Perimenopause

The term "perimenopause", which literally means "around the menopause", refers to the menopause transition years before the date of the final episode of flow. According to the North American Menopause Society, this transition can last for four to eight years. The Centre for Menstrual Cycle and Ovulation Research describes it as a six- to ten-year phase ending 12 months after the last menstrual period. During perimenopause, estrogen levels average about 20–30% higher than during premenopause, often with wide fluctuations. These fluctuations cause many of the physical changes during perimenopause as well as menopause, especially during the last one to two years of perimenopause (before menopause).
Some of these changes are hot flashes, night sweats, difficulty sleeping, mood swings, vaginal dryness or atrophy, incontinence, osteoporosis, and heart disease. Perimenopause is also associated with a higher likelihood of depression (affecting from 45 to 68 percent of perimenopausal women), which is twice as likely to affect those with a history of depression. During this period, fertility diminishes but is not considered to reach zero until the official date of menopause. The official date is determined retroactively, once 12 months have passed after the last appearance of menstrual blood. The menopause transition typically begins between 40 and 50 years of age (average 47.5). The duration of perimenopause may be up to eight years. Women will often, but not always, start these transitions (perimenopause and menopause) at about the same time as their mother did. Some research appears to show that melatonin supplementation in perimenopausal women can improve thyroid function and gonadotropin levels, as well as restore fertility and menstruation and prevent depression associated with menopause.

Postmenopause

The term "postmenopausal" describes women who have not experienced any menstrual flow for a minimum of 12 months, assuming that they have a uterus and are not pregnant or lactating. The reason for this delay in declaring postmenopause is that periods are usually erratic during menopause, so a reasonably long stretch of time is necessary to be sure that the cycling has ceased. At this point a woman is considered infertile; however, the possibility of becoming pregnant has usually been very low (but not quite zero) for a number of years before this point is reached. In women with or without a uterus, menopause or postmenopause can also be identified by a blood test showing a very high follicle-stimulating hormone level, greater than 25 IU/L in a random blood draw; it rises as the ovaries become inactive. FSH continues to rise, while its counterpart estradiol continues to drop, for about two years after the last menstrual period, after which the levels of both hormones stabilize. The stabilization period after the beginning of early postmenopause has been estimated to last three to six years, so early postmenopause lasts altogether about five to eight years, during which hormone withdrawal effects such as hot flashes disappear. Finally, late postmenopause has been defined as the remainder of a woman's lifespan, when reproductive hormones no longer change. A period-like flow during postmenopause, even spotting, may be a sign of endometrial cancer.

Management

Perimenopause is a natural stage of life. It is not a disease or a disorder, and therefore it does not automatically require any kind of medical treatment. However, in those cases where the physical, mental, and emotional effects of perimenopause are strong enough to significantly disrupt the life of the woman experiencing them, palliative medical therapy may sometimes be appropriate.

Menopausal hormone therapy

In the context of the menopause, menopausal hormone therapy (MHT) is the use of estrogen in women without a uterus, and estrogen plus a progestogen in women who have an intact uterus. MHT may be reasonable for the treatment of menopausal symptoms, such as hot flashes. It is the most effective treatment option, especially when delivered as a skin patch. Its use, however, appears to increase the risk of strokes and blood clots.
When used for menopausal symptoms, the global recommendation is that MHT should be prescribed for as long as there are defined treatment effects and goals for the individual woman. MHT is also effective for preventing bone loss and osteoporotic fracture, but it is generally recommended only for women at significant risk for whom other therapies are unsuitable. MHT may be unsuitable for some women, including those at increased risk of cardiovascular disease, increased risk of thromboembolic disease (such as those with obesity or a history of venous thrombosis) or increased risk of some types of cancer. There is some concern that this treatment increases the risk of breast cancer. Women at increased risk of cardiometabolic disease and VTE may be able to use transdermal estradiol, which does not appear to increase risks at low to moderate doses. Adding testosterone to hormone therapy has a positive effect on sexual function in postmenopausal women, although it may be accompanied by hair growth or acne if used in excess. Transdermal testosterone therapy in appropriate dosing is generally safe.

Selective estrogen receptor modulators

SERMs are a category of drugs, either synthetically produced or derived from a botanical source, that act selectively as agonists or antagonists on the estrogen receptors throughout the body. The most commonly prescribed SERMs are raloxifene and tamoxifen. Raloxifene exhibits oestrogen-agonist activity on bone and lipids, and antagonist activity on the breast and the endometrium. Tamoxifen is in widespread use for the treatment of hormone-sensitive breast cancer. Raloxifene prevents vertebral fractures in postmenopausal, osteoporotic women and reduces the risk of invasive breast cancer.

Other medications

Some of the SSRIs and SNRIs appear to provide some relief from vasomotor symptoms. The most effective SSRIs and SNRIs are paroxetine, escitalopram, citalopram, venlafaxine, and desvenlafaxine. They may, however, be associated with appetite and sleeping problems, constipation and nausea. Gabapentin or fezolinetant can also improve the frequency and severity of vasomotor symptoms; side effects of gabapentin include drowsiness and headaches.

Therapy

Cognitive behavioural therapy and clinical hypnosis can decrease the degree to which women are affected by hot flashes. Mindfulness is not yet proven to be effective in easing vasomotor symptoms.

Lifestyle and exercise

Exercise has been thought to reduce postmenopausal symptoms through the increase of endorphin levels, which decrease as estrogen production decreases. However, there is insufficient evidence to suggest that exercise helps with the symptoms of menopause. Similarly, yoga has not been shown to be useful as a treatment for vasomotor symptoms. However, a high BMI is a risk factor for vasomotor symptoms in particular, and weight loss may help with symptom management. There is no strong evidence that cooling techniques, such as specific clothing or environment-control tools (for example, fans), help with symptoms. Paced breathing and relaxation are not effective in easing symptoms.

Dietary supplements

There is no evidence of consistent benefit from taking any dietary supplements or herbal products for menopausal symptoms. These widely marketed but ineffective supplements include soy isoflavones, pollen extracts, black cohosh and omega-3, among many others.

Alternative medicine

There is no evidence of consistent benefit from alternative therapies for menopausal symptoms, despite their popularity.
As of 2023, there is no evidence to support the efficacy of acupuncture in the management of menopausal symptoms. A 2016 Cochrane review found insufficient evidence to show a difference between Chinese herbal medicine and placebo for vasomotor symptoms.

Other efforts

Lack of lubrication is a common problem during and after perimenopause. Vaginal moisturizers can help women with overall dryness, and lubricants can help with lubrication difficulties that may be present during intercourse. Moisturizers and lubricants are different products for different issues: some women complain that their genitalia are uncomfortably dry all the time, and they may do better with moisturizers; those who need only lubricants do well using them only during intercourse. Low-dose prescription vaginal estrogen products, such as estrogen creams, are generally a safe way to use estrogen topically to help with vaginal thinning and dryness problems (see vaginal atrophy) while only minimally increasing the level of estrogen in the bloodstream. Individual counseling or support groups can sometimes help women handle the sad, depressed, anxious or confused feelings they may be having as they pass through what can be, for some, a very challenging transition time. Osteoporosis can be minimized by smoking cessation, adequate vitamin D intake and regular weight-bearing exercise. The bisphosphonate drug alendronate may decrease the risk of a fracture in women who have both bone loss and a previous fracture, and less so in those with osteoporosis alone. A surgical procedure in which a part of one of the ovaries is removed earlier in life, frozen, and then over time thawed and returned to the body (ovarian tissue cryopreservation) has been tried. While at least 11 women have undergone the procedure and paid over £6,000, there is no evidence it is safe or effective.

Society and culture

Attitudes and experiences

The menopause transition is a process involving hormonal, menstrual, and typically vasomotor changes. However, the experience of the menopause as a whole is very much influenced by psychological and social factors, such as past experience, lifestyle, the social and cultural meanings of menopause, and a woman's social and material circumstances. Menopause has been described as a biopsychosocial experience, with social and cultural factors playing a prominent role in the way menopause is experienced and perceived. The paradigm within which a woman considers menopause influences the way she views it: women who understand menopause as a medical condition rate it significantly more negatively than those who view it as a life transition or a symbol of aging. There is some evidence that negative attitudes and expectations held before the menopause predict symptom experience during the menopause, and beliefs and attitudes toward menopause tend to be more positive in postmenopausal than in premenopausal women. Women with more negative attitudes towards the menopause report more symptoms during the transition. Menopause is a stage of life experienced in different ways. It can be characterized by personal challenges and changes in personal roles within the family and society. Women's approaches to changes during menopause are influenced by their personal, family and sociocultural backgrounds. Women from different regions and countries also have different attitudes: postmenopausal women have been found to hold more positive attitudes toward menopause than peri- or premenopausal women.
Other influencing factors on attitudes toward menopause include age, menopausal symptoms, psychological and socioeconomic status, and profession and ethnicity. Ethnicity and geography play roles in the experience of menopause. American women of different ethnicities report significantly different types of menopausal effects: one major study found Caucasian women most likely to report what are sometimes described as psychosomatic symptoms, while African-American women were more likely to report vasomotor symptoms. There may be variations in the experiences of women from different ethnic backgrounds regarding menopause and care. Immigrant women reported more vasomotor and other physical symptoms and poorer mental health than non-immigrant women, and were mostly dissatisfied with the care they had received. Self-management strategies for menopausal symptoms were also influenced by culture. Two multinational studies of Asian women found that hot flushes were not the most commonly reported symptoms; instead, body and joint aches, memory problems, sleeplessness, irritability and migraines were. In another study comparing experiences of menopause amongst White Australian women and women in Laos, Australian women reported higher rates of depression, as well as fears of aging, weight gain and cancer – fears not reported by Laotian women, who positioned menopause as a positive event. Japanese women experience menopause effects, or kōnenki (更年期), in a different way from American women. Japanese women report lower rates of hot flashes and night sweats; this can be attributed to a variety of factors, both biological and social. Historically, kōnenki was associated with wealthy middle-class housewives in Japan, i.e., it was a "luxury disease" that women from traditional, inter-generational rural households did not report. Menopause in Japan was viewed as a symptom of the inevitable process of aging, rather than a "revolutionary transition" or a "deficiency disease" in need of management. As of 2005, reporting of vasomotor symptoms in Japan had been on the increase, with one study finding hot flashes prevalent in 22.1% of 140 Japanese participants, almost double the rate of 20 years prior. While the exact cause for this is unknown, possible contributing factors include dietary changes, the increased medicalisation of middle-aged women and increased media attention on the subject. However, reporting of vasomotor symptoms is still "significantly" lower than in North America. Additionally, while most women in the United States apparently have a negative view of menopause as a time of deterioration or decline, some studies seem to indicate that women from some Asian cultures have an understanding of menopause that focuses on a sense of liberation and celebrates freedom from the risk of pregnancy. Diverging from these conclusions, one study appeared to show that many American women "experience this time as one of liberation and self-actualization". In some women, menopause may bring about a sense of loss related to the end of fertility. In addition, this change often aligns with other stressors, such as the responsibility of looking after elderly parents or dealing with the emotional challenges of "empty nest syndrome" when children move out of the family home. This situation can be accentuated in cultures where being older is negatively perceived.
Impact on work Midlife is typically a life stage when men and women may be dealing with demanding life events and responsibilities, such as work, health problems, and caring roles. For example, in 2018, women in the UK aged 45–54 reported more work-related stress than men or women of any other age group. Hot flashes are often reported to be particularly distressing at work and lead to embarrassment and worry about potential stigmatisation. A June 2023 study by the Mayo Clinic estimated an annual loss of $1.8 billion in the United States due to workdays missed as a result of menopause symptoms. This was one of the largest studies to date examining the impact of menopause symptoms on work outcomes. The research concluded there was a strong need to improve medical treatment for menopausal women and make the workplace environment more supportive in order to avoid such productivity losses. Etymology Menopause literally means the "end of monthly cycles" (the end of monthly periods or menstruation), from the Greek words mēn ("month") and pausis ("pause"). This is a medical coinage; the Greek word for menses is actually different: in Ancient Greek, the menses were described in the plural as "the monthlies", and the modern descendant of that term has been clipped to ta emmēna. The Modern Greek medical term is emmenopausis in Katharevousa or emmenopausi in Demotic Greek. The Ancient Greeks did not produce medical concepts about any symptoms associated with the end of menstruation and did not use a specific word to refer to this time of a woman's life. The word menopause was invented by French doctors at the beginning of the nineteenth century, when Greek etymology was being reconstructed; the Parisian student doctor Charles-Pierre-Louis de Gardanne coined a variation of the word in 1812, which was edited to its final French form in 1821. Some of these doctors noted that peasant women had no complaints about the end of menses, while urban middle-class women had many troubling symptoms. Doctors at this time considered the symptoms to be the result of urban lifestyles of sedentary behaviour, alcohol consumption, too much time indoors, and over-eating, with a lack of fresh fruit and vegetables. The word "menopause" was coined specifically for female humans, where the end of fertility is traditionally indicated by the permanent stopping of monthly menstruation. However, menopause exists in some other animals, many of which do not have monthly menstruation; in this case, the term means a natural end to fertility that occurs before the end of the natural lifespan. In popular culture, law and politics In the 21st century, celebrities have spoken out about their experiences of the menopause, which has boosted awareness of its debilitating symptoms and made the subject less of a taboo. Subsequently, TV shows have been running features on the menopause to help women experiencing symptoms. In the UK, Lorraine Kelly has been an advocate for getting women to speak about their experiences, sharing her own; this has led to an increase in women seeking treatment such as HRT. Davina McCall also led an awareness campaign based on a documentary on Channel 4. In the UK, Carolyn Harris sponsored the Menopause (Support and Services) Bill in June 2021.
The bill would have exempted hormone replacement therapy from National Health Service prescription charges, made provision for menopause support and services, including public education and communication to support perimenopausal and post-menopausal women, and raised awareness of menopause and its effects. The bill was withdrawn on 29 October 2021. In the US, David McKinley, a Republican from West Virginia, introduced the Menopause Research Act in September 2022, seeking $100 million in funding for 2023 and 2024, but the bill stalled. Other animals The majority of mammal species reach menopause when they cease the production of ovarian follicles, which contain eggs (oocytes), between one-third and two-thirds of their maximum possible lifespan. However, few live long enough in the wild to reach this point. Humans are joined by a limited number of other species in which females live substantially longer than their ability to reproduce. Examples include cetaceans: beluga whales, narwhals, orcas, false killer whales and short-finned pilot whales. Menopause has been reported in a variety of other vertebrate species, but these examples tend to be from captive individuals, and thus are not necessarily representative of what happens in natural populations in the wild. Menopause in captivity has been observed in several species of nonhuman primates, including rhesus monkeys and chimpanzees. Some research suggests that wild chimpanzees do not experience menopause, as their fertility declines are associated with declines in overall health. Menopause has been reported in elephants in captivity and in guppies. Dogs do not experience menopause; the canine estrus cycle simply becomes irregular and infrequent. Although older female dogs are not considered good candidates for breeding, offspring have been produced by older animals (see canine reproduction). Similar observations have been made in cats. Life histories show a varying degree of senescence; rapidly senescing organisms (e.g., Pacific salmon and annual plants) do not have a post-reproductive life-stage. Gradual senescence is exhibited by all placental mammalian life histories. Evolution There are various theories on the origin and process of the evolution of the menopause. These attempt to suggest evolutionary benefits to the human species stemming from the cessation of women's reproductive capability before the end of their natural lifespan. It is conjectured that in highly social groups natural selection favors females that stop reproducing and devote that post-reproductive life span to continuing to care for existing offspring, both their own and those of others to whom they are related, especially their granddaughters and grandsons. See also European Menopause and Andropause Society Menopause in the workplace Menopause in incarceration Pregnancy over age 50 Biological clock Evolution of menopause References External links Menopause: MedlinePlus What Is Menopause?, National Institute on Aging Menopause & Me, The North American Menopause Society Developmental stages Endocrinology Gynaecological endocrinology Menstrual cycle Middle age Senescence Wikipedia medicine articles ready to translate Human female endocrine system
Menopause
[ "Chemistry", "Biology" ]
8,177
[ "Senescence", "Metabolism", "Cellular processes" ]
49,627
https://en.wikipedia.org/wiki/Universal%20Disk%20Format
Universal Disk Format (UDF) is an open, vendor-neutral file system for computer data storage for a broad range of media. In practice, it has been most widely used for DVDs and newer optical disc formats, supplanting ISO 9660. Due to its design, it is very well suited to incremental updates on both write-once and re-writable optical media. UDF was developed and maintained by the Optical Storage Technology Association (OSTA). In engineering terms, Universal Disk Format is a profile of the specifications known as ISO/IEC 13346 and ECMA-167. Usage Normally, authoring software will master a UDF file system in a batch process and write it to optical media in a single pass. But when packet writing to rewritable media, such as CD-RW, UDF allows files to be created, deleted and changed on-disc just as a general-purpose filesystem would on removable media like floppy disks and flash drives. This is also possible on write-once media, such as CD-R, but in that case the space occupied by the deleted files cannot be reclaimed (and instead becomes inaccessible). Multi-session mastering is also possible in UDF, though some implementations may be unable to read discs with multiple sessions. History The Optical Storage Technology Association standardized the UDF file system to form a common file system for all optical media: both for read-only media and for re-writable optical media. When first standardized, the UDF file system aimed to replace ISO 9660, allowing support for both read-only and writable media. After the release of the first version of UDF, the DVD Consortium adopted it as the official file system for DVD-Video and DVD-Audio. UDF shares the basic volume descriptor format with ISO 9660. A "UDF Bridge" format has been defined since revision 1.50 so that a disc can also contain an ISO 9660 file system making references to files on the UDF part. Revisions Multiple revisions of UDF have been released: Revision 1.00 (24 October 1995). Original release. Revision 1.01 (3 November 1995). Added a DVD appendix and made a few minor changes. Revision 1.02 (30 August 1996). This format is used by DVD-Video discs. Revision 1.50 (4 February 1997). Added support for CD-R/W packet writing and (virtual) rewritability on CD-R/DVD-R media by introducing the Virtual Allocation Table (VAT) structure; added sparing tables for defect management on rewritable media such as CD-RW, DVD-RW and DVD+RW; added the UDF Bridge format. Revision 2.00 (3 April 1998). Added support for Stream Files, access control lists, power calibration, real-time files (for DVD recording) and simplified directory management; VAT support was extended. Revision 2.01 (15 March 2000). Mainly a bugfix release to UDF 2.00; many of the UDF standard's ambiguities were resolved in version 2.01. Revision 2.50 (30 April 2003). Added the Metadata Partition, facilitating metadata clustering, easier crash recovery and optional duplication of file system information: all metadata such as nodes and directory contents are written on a separate partition, which can optionally be mirrored. This format is used by some Blu-ray discs and most HD DVD discs. Revision 2.60 (1 March 2005). Added the pseudo-overwrite method for drives supporting pseudo-overwrite capability on sequentially recordable media; has read-only compatibility with UDF 2.50 implementations. (Some Blu-ray discs use this format.) UDF revisions are internally encoded as binary-coded decimals; Revision 2.60, for example, is represented as 0x0260.
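To make the binary-coded-decimal convention concrete, here is a minimal Python sketch (illustrative only; these helper functions are hypothetical and not part of the UDF specification or any particular library):

def decode_udf_revision(bcd: int) -> str:
    # Each hex digit of the 16-bit value is one decimal digit of the
    # revision, so 0x0260 decodes to "2.60" and 0x0150 to "1.50".
    major = (bcd >> 8) & 0xFF   # high byte holds the major revision
    minor = bcd & 0xFF          # low byte holds the two minor digits
    return f"{major:x}.{minor:02x}"

def encode_udf_revision(revision: str) -> int:
    # Inverse mapping: "2.60" -> 0x0260.
    major, minor = revision.split(".")
    return (int(major, 16) << 8) | int(minor, 16)

assert decode_udf_revision(0x0260) == "2.60"
assert encode_udf_revision("1.50") == 0x0150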
In addition to declaring its own revision, compatibility for each volume is defined by the minimum read and minimum write revisions, each signalling the requirements for these operations to be possible for every structure on this image. A "maximum write" revision additionally records the highest UDF support level of all the implementations that have written to this image. For example, a UDF 2.01 volume that does not use Stream Files (introduced in UDF 2.00) but uses the VAT (UDF 1.50), created by a UDF 2.60-capable implementation, may have the revision declared as 0x0201, the minimum read revision set to 0x0150, the minimum write to 0x0201, and the maximum write to 0x0260. Specifications The UDF standard defines three file system variations, called "builds". These are: Plain (random read/write access), the original format, supported in all UDF revisions; Virtual Allocation Table, also known as VAT (incremental writing), used specifically for writing to write-once media; and Spared (limited random write access), used specifically for writing to rewritable media. Plain build Introduced in the first version of the standard, this format can be used on any type of disk that allows random read/write access, such as hard disks, DVD+RW and DVD-RAM media. Metadata (up to v2.50) and file data are addressed more or less directly. In writing to such a disk in this format, any physical block on the disk may be chosen for allocation of new or updated files. Since this is the basic format, practically any operating system or file system driver claiming support for UDF should be able to read this format. VAT build Write-once media such as DVD-R and CD-R have limitations when being written to, in that each physical block can only be written to once, and the writing must happen incrementally. Thus the plain build of UDF can only be written to CD-Rs by pre-mastering the data and then writing all data in one piece to the media, similar to the way an ISO 9660 file system gets written to CD media. To enable a CD-R to be used virtually like a hard disk, whereby the user can add and modify files on a CD-R at will (so-called "drive letter access" on Windows), OSTA added the VAT build to the UDF standard in its revision 1.50. The VAT is an additional structure on the disc that allows packet writing; that is, remapping physical blocks when files or other data on the disc are modified or deleted. For write-once media, the entire disc is virtualized, making the write-once nature transparent for the user; the disc can be treated the same way one would treat a rewritable disc. The write-once nature of CD-R or DVD-R media means that when a file is deleted on the disc, the file's data still remains on the disc. It does not appear in the directory any more, but it still occupies the original space where it was stored. Eventually, after using this scheme for some time, the disc will be full, as free space cannot be recovered by deleting files. Special tools can be used to access the previous state of the disc (the state before the delete occurred), making recovery possible. Not all drives fully implement version 1.50 or higher of UDF, and some may therefore be unable to handle VAT builds.
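The remapping idea behind the VAT can be sketched in a few lines of Python (a toy model under simplifying assumptions, not an implementation of the on-disc VAT structure; the class and method names are hypothetical):

class VatDisc:
    # Toy model of packet writing on write-once media: physical blocks
    # can only be appended, never rewritten; the VAT maps each logical
    # block number to its most recent physical copy.
    def __init__(self):
        self.physical = []   # append-only storage (write-once)
        self.vat = {}        # logical block number -> physical index

    def write(self, logical, data):
        self.vat[logical] = len(self.physical)  # remap to the new copy
        self.physical.append(data)              # old copy stays, unreachable

    def read(self, logical):
        return self.physical[self.vat[logical]]

disc = VatDisc()
disc.write(0, b"version 1")
disc.write(0, b"version 2")         # "modifies" block 0 without rewriting it
assert disc.read(0) == b"version 2"
assert len(disc.physical) == 2      # space holding b"version 1" is not reclaimed

The last assertion mirrors the point made above: on write-once media, superseded blocks remain allocated, which is why such a disc eventually fills up even when files are only modified or deleted.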
Spared (RW) build Rewritable media such as DVD-RW and CD-RW have fewer limitations than DVD-R and CD-R media. Sectors can be rewritten at random (though in packets at a time). These media can be erased entirely at any time, making the disc blank again, ready for writing a new UDF or other file system (e.g., ISO 9660 or CD Audio) to it. However, sectors of -RW media may "wear out" after a while, meaning that their data becomes unreliable through having been rewritten too often (typically after a few hundred rewrites with CD-RW). The plain and VAT builds of the UDF format can be used on rewritable media, with some limitations. If the plain build is used on -RW media, file-system-level modification of the data must not be allowed, as this would quickly wear out often-used sectors on the disc (such as those for directory and block allocation data), which would then go unnoticed and lead to data loss. To allow modification of files on the disc, rewritable discs can be used like -R media using the VAT build. This ensures that all blocks are written only once (successively), so that no block is rewritten more often than others. This way, an RW disc can be erased and reused many times before it should become unreliable. However, it will eventually become unreliable, with no easy way of detecting it. When using the VAT build, CD-RW/DVD-RW media effectively appear as CD-R or DVD+/-R media to the computer. However, the media may be erased again at any time. The spared build was added in revision 1.50 to address the particularities of rewritable media. This build adds an extra sparing table in order to manage the defects that will eventually occur on parts of the disc that have been rewritten too many times. This table keeps track of worn-out sectors and remaps them to working ones. UDF defect management does not apply to systems that already implement another form of defect management, such as Mount Rainier (MRW) for optical discs, or a disk controller for a hard drive. Tools and drives that do not fully support revision 1.50 of UDF ignore the sparing table, which leads them to read the outdated worn-out sectors and thus retrieve corrupted data. The overhead, which is spread over the entire disc, reserves a portion of the data storage space, limiting the usable capacity of a CD-RW with, e.g., 650 MB of original capacity to around 500 MB. Character set The UDF specifications allow only one character set, OSTA CS0, which can store any Unicode code point excluding U+FEFF and U+FFFE. Additional character sets defined in ECMA-167 are not used. Since Errata DCN-5157, the range of code points was expanded to all code points from Unicode 4.0 (or any newer or older version), which includes Plane 1-16 characters such as emoji. DCN-5157 also recommends normalizing the strings to Normalization Form C. The OSTA CS0 character set stores a 16-bit Unicode string "compressed" into 8-bit or 16-bit units, preceded by a single-byte "compID" tag to indicate the compression type. The 8-bit storage is functionally equivalent to ISO-8859-1, and the 16-bit storage is big-endian UTF-16. 8-bit-per-character file names save space because they only require half the space per character, so they should be used if the file name contains no special characters that cannot be represented in 8 bits. The reference algorithm neither checks for forbidden code points nor interprets surrogate pairs, so, like NTFS, the string may be malformed. (No specific form of storage is specified by DCN-5157, but UTF-16BE is the only well-known method for storing all of Unicode while being mostly backward compatible with UCS-2.)
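Based on the compID convention just described, decoding can be illustrated with a short Python sketch (an illustrative reading of the scheme, not the reference algorithm from the specification; the function name is hypothetical, and real on-disc d-strings carry additional length bookkeeping that is omitted here):

def decode_osta_cs0(data: bytes) -> str:
    # The first byte is the compression ID: 8 means one byte per
    # character (equivalent to ISO-8859-1), 16 means two big-endian
    # bytes per character (UTF-16BE).
    if not data:
        return ""
    comp_id, payload = data[0], data[1:]
    if comp_id == 8:
        return payload.decode("latin-1")
    if comp_id == 16:
        return payload.decode("utf-16-be")
    raise ValueError(f"unsupported compression ID {comp_id}")

assert decode_osta_cs0(bytes([8]) + b"README.TXT") == "README.TXT"
assert decode_osta_cs0(bytes([16]) + "Ω".encode("utf-16-be")) == "Ω"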
Compatibility Many DVD players do not support any UDF revision other than version 1.02. Discs created with a newer revision may still work in these players if the ISO 9660 bridge format is used. Even if an operating system claims to be able to read UDF 1.50, it still may only support the plain build and not necessarily either the VAT or Spared UDF builds. Mac OS X 10.4.5 claims to support Revision 1.50 (see man mount_udf), yet it can only mount disks of the plain build properly and provides no virtualization support at all. It cannot mount UDF disks with a VAT, as seen with the Sony Mavica issue. Releases before 10.4.11 mount disks with a sparing table but do not read their files correctly; version 10.4.11 fixes this problem. Similarly, Windows XP Service Pack 2 (SP2) cannot read DVD-RW discs that use the UDF 2.00 sparing tables as a defect management system. This problem occurs if the UDF defect management system creates a sparing table that spans more than one sector on the DVD-RW disc. Windows XP SP2 can recognize that a DVD is using UDF, but Windows Explorer displays the contents of the DVD as an empty folder. A hotfix is available for this and is included in Service Pack 3. Due to the default UDF versions and options, a UDF partition formatted by Windows cannot be written under macOS. On the other hand, a partition formatted by macOS cannot be directly written by Windows, due to the requirement of an MBR partition table. In addition, Linux only supports writing to UDF 2.01. A script for Linux and macOS handles these incompatibilities by using UDF 2.01 and adding a fake MBR; on Windows the best solution is to format with a command-line tool. See also Comparison of file systems DVD authoring ISO/IEC 13490 Notes References Further reading ISO/IEC 13346 standard, also known as ECMA-167. External links Home page of the Optical Storage Technology Association (OSTA) UDF specifications: 1.02, 1.50, 2.00, 2.01, 2.50, 2.60 (March 1, 2005), SecureUDF UDF 1.01 (mirror, original file name: "UDF_101.PDF") - This specification was never published on the OSTA website and only existed in its early days on an FTP server that is long gone as of 2024 (linked from a 1996 article). ECMA 167/3: Volume and File Structure for Write-Once and Rewritable Media using Non-Sequential Recording for Information Interchange (June 1997) (referenced from UDF specification) Wenguang Wang's UDF Introduction Linux UDF support "CD-ROM Drive May Not Be Able to Read a UDF-Formatted Disc in Windows XP", Microsoft Support AIX - CD-ROM file system and UDFS OSTA Technology (mention of UDF 1.00 on the 1996 OSTA website) (The UDF 1.00 specification itself is a lost work as of 2024.) UDF - El profesional de la información (mention of UDF 1.01) Disk file systems ISO standards IEC standards Ecma standards Windows components
Universal Disk Format
[ "Technology" ]
3,068
[ "Computer standards", "Ecma standards", "IEC standards" ]
49,635
https://en.wikipedia.org/wiki/SEX%20%28computing%29
In computing, the SEX assembly language mnemonic has often been used for the "Sign EXtend" machine instruction found in the Motorola 6809. A computer's or CPU's "sex" can also mean the endianness of the computer architecture used: x86 computers do not have the same "byte sex" as HC11 computers, for example. Conversion functions are sometimes needed for computers of different endianness to communicate with each other over the internet, as protocols often use big-endian byte order by default. On the RCA 1802 series of microprocessors, the SEX instruction, for "SEt X", is used to designate which of the machine's sixteen 16-bit registers is to be the X (index) register. SEX in software: rarely used jargon The TLA SEX has humorously been said to stand for Software EXchange, meaning the copying of software. As file sharing has sometimes spread computer viruses, it has been stated that "illicit SEX can transmit viral diseases to your computer." The involvement of FTP servers' /pub directories in this process has led to the name being explained as a contraction of 'pubic'. References Machine code Computer jargon
SEX (computing)
[ "Technology" ]
244
[ "Natural language and computing", "Computer jargon", "Computing terminology" ]
49,658
https://en.wikipedia.org/wiki/Aurora
An aurora (plural aurorae or auroras), also commonly known as the northern lights (aurora borealis) or southern lights (aurora australis), is a natural light display in Earth's sky, predominantly seen in high-latitude regions (around the Arctic and Antarctic). Auroras display dynamic patterns of brilliant lights that appear as curtains, rays, spirals, or dynamic flickers covering the entire sky. Auroras are the result of disturbances in the Earth's magnetosphere caused by the solar wind. Major disturbances result from enhancements in the speed of the solar wind from coronal holes and coronal mass ejections. These disturbances alter the trajectories of charged particles in the magnetospheric plasma. These particles, mainly electrons and protons, precipitate into the upper atmosphere (thermosphere/exosphere). The resulting ionization and excitation of atmospheric constituents emit light of varying colour and complexity. The form of the aurora, occurring within bands around both polar regions, is also dependent on the amount of acceleration imparted to the precipitating particles. Planets in the Solar System, brown dwarfs, comets, and some natural satellites also host auroras. Etymology The term aurora borealis was coined by Galileo Galilei in 1619, from the Roman Aurora, goddess of the dawn, and the Greek Boreas, god of the cold north wind. The word aurora is derived from the name of the Roman goddess of the dawn, Aurora, who travelled from east to west announcing the coming of the Sun; aurora was first used in English in the 14th century. The words borealis and australis are derived from the names of the ancient gods of the north wind (Boreas) and the south wind (Auster) in Greco-Roman mythology. The term aurora borealis was used to describe the northern lights by the French philosopher Pierre Gassendi (also called Petrus Gassendus) in 1621, and entered English in 1828. Occurrence Auroras are most commonly observed in the "auroral zone", a band approximately 6° (~660 km) wide in latitude centered on 67° north and south. The region that currently displays an aurora is called the "auroral oval". The oval is displaced by the solar wind, pushing it about 15° away from the geomagnetic pole (not the geographic pole) in the noon direction and 23° away in the midnight direction. The peak equatorward extent of the oval is displaced slightly from geographic midnight: it is centered about 3–5° nightward of the magnetic pole, so that auroral arcs reach furthest toward the equator when the magnetic pole in question is between the observer and the Sun, a configuration called magnetic midnight. Early evidence for a geomagnetic connection comes from the statistics of auroral observations. Elias Loomis (1860), and later Hermann Fritz (1881) and Sophus Tromholt (1881) in more detail, established that the aurora appeared mainly in the auroral zone. In northern latitudes, the effect is known as the aurora borealis or the northern lights. The southern counterpart, the aurora australis or the southern lights, has features almost identical to the aurora borealis and changes simultaneously with changes in the northern auroral zone. The aurora australis is visible from high southern latitudes in Antarctica, the Southern Cone, South Africa, Australasia, the Falkland Islands, and under exceptional circumstances as far north as Uruguay. The aurora borealis is visible from areas around the Arctic such as Alaska, Canada, Iceland, Greenland, the Faroe Islands, Scandinavia, Finland, Scotland, and Russia.
A geomagnetic storm causes the auroral ovals (north and south) to expand, bringing the aurora to lower latitudes. On rare occasions, the aurora borealis can be seen as far south as the Mediterranean and the southern states of the US, while the aurora australis can be seen as far north as New Caledonia and the Pilbara region in Western Australia. During the Carrington Event, the greatest geomagnetic storm ever observed, auroras were seen even in the tropics. Auroras seen within the auroral oval may be directly overhead. From farther away, they illuminate the poleward horizon as a greenish glow, or sometimes a faint red, as if the Sun were rising from an unusual direction. Auroras also occur poleward of the auroral zone as either diffuse patches or arcs, which can be subvisual. Auroras are occasionally seen in latitudes below the auroral zone, when a geomagnetic storm temporarily enlarges the auroral oval. Large geomagnetic storms are most common during the peak of the 11-year sunspot cycle or during the three years after the peak. An electron spirals (gyrates) about a field line at an angle that is determined by its velocity vectors, parallel and perpendicular, respectively, to the local geomagnetic field vector B. This angle is known as the "pitch angle" of the particle. The distance, or radius, of the electron from the field line at any time is known as its Larmor radius. The pitch angle increases as the electron travels to a region of greater field strength nearer to the atmosphere. Thus, it is possible for some particles to return, or mirror, if the angle becomes 90° before entering the atmosphere to collide with the denser molecules there. Other particles that do not mirror enter the atmosphere and contribute to the auroral display over a range of altitudes.
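The two quantities just defined have compact standard forms (textbook plasma-physics relations, not taken from this article's sources). In LaTeX notation, the pitch angle and Larmor radius are

\alpha = \arctan\!\left(\frac{v_\perp}{v_\parallel}\right), \qquad r_L = \frac{m\,v_\perp}{|q|\,B},

where v_\perp and v_\parallel are the electron's velocity components perpendicular and parallel to the geomagnetic field of strength B, m is its mass, and q its charge.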
Other types of auroras have been observed from space; for example, "poleward arcs" stretching sunward across the polar cap, the related "theta aurora", and "dayside arcs" near noon. These are relatively infrequent and poorly understood. Other interesting effects occur, such as pulsating auroras, "black auroras" and their rarer companion "anti-black auroras", and subvisual red arcs. In addition to all these, a weak glow (often deep red) is observed around the two polar cusps, the field lines separating those that close through Earth from those that are swept into the tail and close remotely. Images Early work on the imaging of the auroras was done in 1949 by the University of Saskatchewan using the SCR-270 radar. The altitudes where auroral emissions occur were revealed by Carl Størmer and his colleagues, who used cameras to triangulate more than 12,000 auroras. They discovered that most of the light is produced between about 90 and 150 km above the ground, while extending at times to more than 1,000 km. Forms According to Clark (2007), there are five main forms that can be seen from the ground, from least to most visible: A mild glow, near the horizon. These can be close to the limit of visibility, but can be distinguished from moonlit clouds because stars can be seen undiminished through the glow. Patches or surfaces that look like clouds. Arcs curve across the sky. Rays are light and dark stripes across arcs, reaching upwards by various amounts. Coronas cover much of the sky and diverge from one point on it. Brekke (1994) also described some auroras as "curtains". The similarity to curtains is often enhanced by folds within the arcs. Arcs can fragment or break up into separate, at times rapidly changing, often rayed features that may fill the whole sky. These are also known as discrete auroras, which are at times bright enough to read a newspaper by at night. These forms are consistent with auroras being shaped by Earth's magnetic field. The appearances of arcs, rays, curtains, and coronas are determined by the shapes of the luminous parts of the atmosphere and a viewer's position. Colours and wavelengths of auroral light Red: At its highest altitudes, excited atomic oxygen emits at 630 nm (red); the low concentration of atoms and the lower sensitivity of eyes at this wavelength make this colour visible only under more intense solar activity. The low number of oxygen atoms and their gradually diminishing concentration is responsible for the faint appearance of the top parts of the "curtains". Scarlet, crimson, and carmine are the most often-seen hues of red for the auroras. Green: At lower altitudes, the more frequent collisions suppress the 630 nm (red) mode; instead, the 557.7 nm emission (green) dominates. A fairly high concentration of atomic oxygen and higher eye sensitivity in green make green auroras the most common. Excited molecular nitrogen (atomic nitrogen being rare due to the high stability of the N2 molecule) plays a role here, as it can transfer energy by collision to an oxygen atom, which then radiates it away at the green wavelength. (Red and green can also mix together to produce pink or yellow hues.) The rapid decrease of the concentration of atomic oxygen below about 100 km is responsible for the abrupt-looking end of the lower edges of the curtains. Both the 557.7 and 630.0 nm wavelengths correspond to forbidden transitions of atomic oxygen, a slow mechanism responsible for the gradual (0.7 s and 107 s, respectively) flaring and fading. Blue: At yet lower altitudes, atomic oxygen is uncommon, and molecular nitrogen and ionized molecular nitrogen take over in producing visible light emission, radiating at a large number of wavelengths in both red and blue parts of the spectrum, with 428 nm (blue) being dominant. Blue and purple emissions, typically at the lower edges of the "curtains", show up at the highest levels of solar activity. The molecular nitrogen transitions are much faster than the atomic oxygen ones. Ultraviolet: Ultraviolet radiation from auroras (within the optical window but invisible to virtually all humans) has been observed with the requisite equipment. Ultraviolet auroras have also been seen on Mars, Jupiter, and Saturn. Infrared: Infrared radiation, in wavelengths that are within the optical window, is also part of many auroras. Yellow and pink are a mix of red and green or blue. Other shades of red, as well as orange and gold, may be seen on rare occasions; yellow-green is moderately common. As red, green, and blue are linearly independent colours, additive synthesis could, in theory, produce most human-perceived colours, but the ones mentioned in this article comprise a virtually exhaustive list. Changes with time Auroras change with time. Over the night, they begin with glows and progress toward coronas, although they may not reach them. They tend to fade in the opposite order. Until about 1963, it was thought that these changes were due to the rotation of the Earth under a pattern fixed with respect to the Sun. Later, it was found by comparing all-sky films of auroras from different places (collected during the International Geophysical Year) that they often undergo global changes in a process called the auroral substorm.
They change in a few minutes from quiet arcs all along the auroral oval to active displays along the dark side, and after 1–3 hours they gradually change back. Changes in auroras over time are commonly visualized using keograms. At shorter time scales, auroras can change their appearances and intensity, sometimes so slowly as to be difficult to notice, and at other times rapidly, down to the sub-second scale. The phenomenon of pulsating auroras is an example of intensity variations over short timescales, typically with periods of 2–20 seconds. This type of aurora is generally accompanied by decreasing peak emission heights of about 8 km for blue and green emissions, and by above-average solar wind speeds. Other auroral radiation In addition, the aurora and associated currents produce a strong radio emission around 150 kHz known as auroral kilometric radiation (AKR), discovered in 1972. Ionospheric absorption makes AKR observable only from space. X-ray emissions, originating from the particles associated with auroras, have also been detected. Noise Aurora noise, similar to a crackling noise, begins about 70 m above Earth's surface and is caused by charged particles in an inversion layer of the atmosphere formed during a cold night. The charged particles discharge when particles from the Sun hit the inversion layer, creating the noise. Unusual types STEVE In 2016, more than fifty citizen science observations described what was to them an unknown type of aurora, which they named "STEVE", for "Strong Thermal Emission Velocity Enhancement". STEVE is not an aurora but is caused by a wide ribbon of hot plasma at an altitude of about 450 km, with a temperature of about 3,000 °C and flowing at about 6 km/s (compared with about 10 m/s outside the ribbon). Picket-fence aurora The processes that cause STEVE are also associated with a picket-fence aurora, although the latter can be seen without STEVE. It is an aurora because it is caused by precipitation of electrons in the atmosphere, but it appears outside the auroral oval, closer to the equator than typical auroras. When the picket-fence aurora appears with STEVE, it appears below the STEVE ribbon. Dune aurora First reported in 2020, and confirmed in 2021, the dune aurora phenomenon was discovered by Finnish citizen scientists. It consists of regularly spaced, parallel stripes of brighter emission in the green diffuse aurora which give the impression of sand dunes. The phenomenon is believed to be caused by the modulation of atomic oxygen density by a large-scale atmospheric wave travelling horizontally in a waveguide through an inversion layer in the mesosphere, in the presence of electron precipitation. Horse-collar aurora Horse-collar auroras (HCA) are auroral features in which the auroral ellipse shifts poleward during the dawn and dusk portions and the polar cap becomes teardrop-shaped. They form during periods when the interplanetary magnetic field (IMF) is persistently northward, when the IMF clock angle is small. Their formation is associated with the closure of magnetic flux at the top of the dayside magnetosphere by double-lobe reconnection (DLR). There are approximately 8 HCA events per month, with no seasonal dependence, and their formation requires the IMF to be within 30 degrees of northward. Conjugate auroras Conjugate auroras are nearly exact mirror-image auroras found at conjugate points in the northern and southern hemispheres on the same geomagnetic field lines. These generally happen at the time of the equinoxes, when there is little difference in the orientation of the north and south geomagnetic poles to the Sun.
Attempts were made to image conjugate auroras by aircraft from Alaska and New Zealand in 1967, 1968, 1970, and 1971, with some success. Causes A full understanding of the physical processes that lead to different types of auroras is still incomplete, but the basic cause involves the interaction of the solar wind with Earth's magnetosphere. The varying intensity of the solar wind produces effects of different magnitudes but includes one or more of the following physical scenarios. A quiescent solar wind flowing past Earth's magnetosphere steadily interacts with it and can both inject solar wind particles directly onto the geomagnetic field lines that are 'open', as opposed to being 'closed' in the opposite hemisphere, and provide diffusion through the bow shock. It can also cause particles already trapped in the radiation belts to precipitate into the atmosphere. Once particles are lost to the atmosphere from the radiation belts, under quiet conditions new ones replace them only slowly, and the loss cone becomes depleted. In the magnetotail, however, particle trajectories seem constantly to reshuffle, probably when the particles cross the very weak magnetic field near the equator. As a result, the flow of electrons in that region is nearly the same in all directions ("isotropic") and assures a steady supply of leaking electrons. The leakage of electrons does not leave the tail positively charged, because each leaked electron lost to the atmosphere is replaced by a low-energy electron drawn upward from the ionosphere. Such replacement of "hot" electrons by "cold" ones is in complete accord with the second law of thermodynamics. The complete process, which also generates an electric ring current around Earth, is not fully understood. Geomagnetic disturbance from an enhanced solar wind causes distortions of the magnetotail ("magnetic substorms"). These 'substorms' tend to occur after prolonged spells (on the order of hours) during which the interplanetary magnetic field has had an appreciable southward component. This leads to a higher rate of interconnection between its field lines and those of Earth. As a result, the solar wind moves magnetic flux (tubes of magnetic field lines, 'locked' together with their resident plasma) from the day side of Earth to the magnetotail, widening the obstacle it presents to the solar wind flow and constricting the tail on the night side. Ultimately some tail plasma can separate ("magnetic reconnection"); some blobs ("plasmoids") are squeezed downstream and are carried away with the solar wind; others are squeezed toward Earth, where their motion feeds strong outbursts of auroras, mainly around midnight (the "unloading process"). A geomagnetic storm resulting from greater interaction adds many more particles to the plasma trapped around Earth, also producing enhancement of the "ring current". Occasionally the resulting modification of Earth's magnetic field can be so strong that it produces auroras visible at middle latitudes, on field lines much closer to the equator than those of the auroral zone. Acceleration of auroral charged particles invariably accompanies a magnetospheric disturbance that causes an aurora. This mechanism, which is believed to arise predominantly from strong electric fields along the magnetic field or from wave-particle interactions, raises the velocity of a particle in the direction of the guiding magnetic field. The pitch angle is thereby decreased, which increases the chance of the particle being precipitated into the atmosphere.
Both electromagnetic and electrostatic waves, produced at the time of greater geomagnetic disturbances, make a significant contribution to the energizing processes that sustain an aurora. Particle acceleration provides a complex intermediate process for transferring energy from the solar wind indirectly into the atmosphere. The details of these phenomena are not fully understood. However, it is clear that the prime source of auroral particles is the solar wind feeding the magnetosphere, the reservoir containing the radiation zones and temporarily magnetically trapped particles confined by the geomagnetic field, coupled with particle acceleration processes. Auroral particles The immediate cause of the ionization and excitation of atmospheric constituents leading to auroral emissions was discovered in 1960, when a pioneering rocket flight from Fort Churchill in Canada revealed a flux of electrons entering the atmosphere from above. Since then, an extensive collection of measurements has been acquired painstakingly and with steadily improving resolution by many research teams using rockets and satellites to traverse the auroral zone. The main findings have been that auroral arcs and other bright forms are due to electrons that have been accelerated during the final 10,000 km or so of their plunge into the atmosphere. These electrons often, but not always, exhibit a peak in their energy distribution, and are preferentially aligned along the local direction of the magnetic field. Electrons mainly responsible for diffuse and pulsating auroras have, in contrast, a smoothly falling energy distribution and an angular (pitch-angle) distribution favouring directions perpendicular to the local magnetic field. Pulsations were discovered to originate at or close to the equatorial crossing point of auroral-zone magnetic field lines. Protons are also associated with auroras, both discrete and diffuse. Atmosphere Auroras result from emissions of photons in Earth's upper atmosphere, above about 90 km, from ionized nitrogen atoms regaining an electron, and from oxygen atoms and nitrogen-based molecules returning from an excited state to ground state. They are ionized or excited by the collision of particles precipitated into the atmosphere. Both incoming electrons and protons may be involved. Excitation energy is lost within the atmosphere by the emission of a photon, or by collision with another atom or molecule. Oxygen emissions: green or orange-red, depending on the amount of energy absorbed. Nitrogen emissions: blue, purple or red; blue and purple if the molecule regains an electron after it has been ionized, red if returning to ground state from an excited state. Oxygen is unusual in terms of its return to ground state: it can take 0.7 seconds to emit the 557.7 nm green light and up to two minutes for the red 630.0 nm emission. Collisions with other atoms or molecules absorb the excitation energy and prevent emission; this process is called collisional quenching. Because the highest parts of the atmosphere contain a higher percentage of oxygen and lower particle densities, such collisions are rare enough to allow time for oxygen to emit red light. Collisions become more frequent progressing down into the atmosphere due to increasing density, so that red emissions do not have time to happen, and eventually even green light emissions are prevented.
This is why there is a colour differential with altitude: at high altitudes oxygen red dominates, then oxygen green and nitrogen blue/purple/red, then finally nitrogen blue/purple/red when collisions prevent oxygen from emitting anything. Green is the most common colour. Then comes pink, a mixture of light green and red, followed by pure red, then yellow (a mixture of red and green), and finally, pure blue. Precipitating protons generally produce optical emissions as incident hydrogen atoms after gaining electrons from the atmosphere. Proton auroras are usually observed at lower latitudes. Ionosphere Bright auroras are generally associated with Birkeland currents (Schield et al., 1969; Zmuda and Armstrong, 1973), which flow down into the ionosphere on one side of the pole and out on the other. In between, some of the current connects directly through the ionospheric E layer (125 km); the rest ("region 2") detours, leaving again through field lines closer to the equator and closing through the "partial ring current" carried by magnetically trapped plasma. The ionosphere is an ohmic conductor, so some consider that such currents require a driving voltage, which an as-yet-unspecified dynamo mechanism can supply. Electric field probes in orbit above the polar cap suggest voltages of the order of 40,000 volts, rising up to more than 200,000 volts during intense magnetic storms. In another interpretation, the currents are the direct result of electron acceleration into the atmosphere by wave/particle interactions. Ionospheric resistance has a complex nature and leads to a secondary Hall current flow. By a strange twist of physics, the magnetic disturbance on the ground due to the main current almost cancels out, so most of the observed effect of auroras is due to a secondary current, the auroral electrojet. An auroral electrojet index (measured in nanotesla) is regularly derived from ground data and serves as a general measure of auroral activity. Kristian Birkeland deduced that the currents flowed in the east–west directions along the auroral arc, and such currents, flowing from the dayside toward (approximately) midnight, were later named "auroral electrojets" (see also Birkeland currents). The ionosphere can also contribute to the formation of auroral arcs via the feedback instability under conditions of high ionospheric resistance, observed at night and in the dark winter hemisphere. Interaction of the solar wind with Earth Earth is constantly immersed in the solar wind, a flow of magnetized hot plasma (a gas of free electrons and positive ions) emitted by the Sun in all directions, a result of the two-million-degree temperature of the Sun's outermost layer, the corona. The solar wind reaches Earth with a velocity typically around 400 km/s, a density of around 5 ions/cm3 and a magnetic field intensity of around 2–5 nT (for comparison, Earth's surface field is typically 30,000–50,000 nT). During magnetic storms, in particular, flows can be several times faster; the interplanetary magnetic field (IMF) may also be much stronger. Joan Feynman deduced in the 1970s that the long-term averages of solar wind speed correlated with geomagnetic activity. Her work resulted from data collected by the Explorer 33 spacecraft. The solar wind and magnetosphere consist of plasma (ionized gas), which conducts electricity.
It is well known (since Michael Faraday's work around 1830) that when an electrical conductor is placed within a magnetic field while relative motion occurs in a direction in which the conductor cuts across (or is cut by), rather than along, the lines of the magnetic field, an electric current is induced within the conductor. The strength of the current depends on a) the rate of relative motion, b) the strength of the magnetic field, c) the number of conductors ganged together and d) the distance between the conductor and the magnetic field, while the direction of flow is dependent upon the direction of relative motion. Dynamos make use of this basic process ("the dynamo effect"); any and all conductors, solid or otherwise, are so affected, including plasmas and other fluids. The IMF originates on the Sun, linked to the sunspots, and its field lines (lines of force) are dragged out by the solar wind. That alone would tend to line them up in the Sun-Earth direction, but the rotation of the Sun angles them at Earth by about 45 degrees, forming a spiral in the ecliptic plane known as the Parker spiral. The field lines passing Earth are therefore usually linked to those near the western edge ("limb") of the visible Sun at any time.
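The roughly 45-degree angle quoted above can be checked against the standard Parker-spiral relation (a textbook formula; the numbers below are typical values, not figures from this article's sources). In LaTeX notation,

\tan\psi = \frac{\Omega\, r}{v_{sw}} \approx \frac{(2.7\times10^{-6}\,\mathrm{rad/s})\,(1.5\times10^{11}\,\mathrm{m})}{4\times10^{5}\,\mathrm{m/s}} \approx 1.0,

where \Omega is the Sun's angular rotation rate, r the heliocentric distance (1 AU), and v_{sw} the solar wind speed; \tan\psi \approx 1 gives a spiral angle \psi of about 45 degrees at Earth's orbit.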
The solar wind and the magnetosphere, being two electrically conducting fluids in relative motion, should be able in principle to generate electric currents by dynamo action and impart energy from the flow of the solar wind. However, this process is hampered by the fact that plasmas conduct readily along magnetic field lines, but less readily perpendicular to them. Energy is more effectively transferred by the temporary magnetic connection between the field lines of the solar wind and those of the magnetosphere. Unsurprisingly, this process is known as magnetic reconnection. As already mentioned, it happens most readily when the interplanetary field is directed southward, in a similar direction to the geomagnetic field in the inner regions of both the north magnetic pole and south magnetic pole. Auroras are more frequent and brighter during the intense phase of the solar cycle, when coronal mass ejections increase the intensity of the solar wind. Magnetosphere Earth's magnetosphere is shaped by the impact of the solar wind on Earth's magnetic field. This forms an obstacle to the flow, diverting it at an average distance of about 70,000 km (11 Earth radii or Re) and producing a bow shock 12,000 km to 15,000 km (1.9 to 2.4 Re) further upstream. The width of the magnetosphere abreast of Earth is typically 190,000 km (30 Re), and on the night side a long "magnetotail" of stretched field lines extends to great distances (> 200 Re). The high-latitude magnetosphere is filled with plasma as the solar wind passes Earth. The flow of plasma into the magnetosphere increases with additional turbulence, density, and speed in the solar wind. This flow is favoured by a southward component of the IMF, which can then directly connect to the high-latitude geomagnetic field lines. The flow pattern of magnetospheric plasma is mainly from the magnetotail toward Earth, around Earth and back into the solar wind through the magnetopause on the day side. In addition to moving perpendicular to Earth's magnetic field, some magnetospheric plasma travels down along Earth's magnetic field lines, gains additional energy and loses it to the atmosphere in the auroral zones. The cusps of the magnetosphere, separating geomagnetic field lines that close through Earth from those that close remotely, allow a small amount of solar wind to directly reach the top of the atmosphere, producing an auroral glow. On 26 February 2008, THEMIS probes were able to determine, for the first time, the triggering event for the onset of magnetospheric substorms. Two of the five probes, positioned approximately one third of the distance to the Moon, measured events suggesting a magnetic reconnection event 96 seconds prior to auroral intensification. Geomagnetic storms that ignite auroras may occur more often during the months around the equinoxes. This is not well understood, but geomagnetic storms may vary with Earth's seasons. Two factors to consider are the tilt of the solar axis and the tilt of Earth's axis to the ecliptic plane. As Earth orbits throughout a year, it experiences an interplanetary magnetic field (IMF) from different latitudes of the Sun, whose axis is tilted at 8 degrees. Similarly, the 23-degree tilt of Earth's axis, about which the geomagnetic pole rotates with a diurnal variation, changes the daily average angle that the geomagnetic field presents to the incident IMF throughout a year. These factors combined can lead to minor cyclical changes in the detailed way that the IMF links to the magnetosphere. In turn, this affects the average probability of opening a door through which energy from the solar wind can reach Earth's inner magnetosphere and thereby enhance auroras. Recent evidence in 2021 has shown that individual separate substorms may in fact be correlated networked communities. Auroral particle acceleration Just as there are many types of aurora, there are many different mechanisms that accelerate auroral particles into the atmosphere. Electron aurora in Earth's auroral zone (i.e. commonly visible aurora) can be split into two main categories with different immediate causes: diffuse and discrete aurora. Diffuse auroras appear relatively structureless to an observer on the ground, with indistinct edges and amorphous forms. Discrete auroras are structured into distinct features with well-defined edges, such as arcs, rays and coronas; they also tend to be much brighter than the diffuse aurora. In both cases, the electrons that eventually cause the aurora start out as electrons trapped by the magnetic field in Earth's magnetosphere. These trapped particles bounce back and forth along magnetic field lines and are prevented from hitting the atmosphere by the magnetic mirror formed by the increasing magnetic field strength closer to Earth. The magnetic mirror's ability to trap a particle depends on the particle's pitch angle: the angle between its direction of motion and the local magnetic field. An aurora is created by processes that decrease the pitch angle of many individual electrons, freeing them from the magnetic trap and causing them to hit the atmosphere. In the case of diffuse auroras, the electron pitch angles are altered by their interaction with various plasma waves. Each interaction is essentially wave-particle scattering; the electron energy after interacting with the wave is similar to its energy before interaction, but the direction of motion is altered. If the final direction of motion after scattering is close to the field line (specifically, if it falls within the loss cone) then the electron will hit the atmosphere. Diffuse auroras are caused by the collective effect of many such scattered electrons hitting the atmosphere.
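The mirror-trapping argument above can be made quantitative with the standard loss-cone condition (a textbook result following from conservation of the first adiabatic invariant, not taken from this article's sources). In LaTeX notation, since \sin^2\alpha / B is conserved along a field line, an electron crossing the equator with pitch angle \alpha_{eq} reaches the atmosphere, rather than mirroring above it, when

\sin^2\alpha_{eq} < \frac{B_{eq}}{B_{atm}},

where B_{eq} is the field strength at the equatorial crossing and B_{atm} the (much larger) field strength at the top of the atmosphere. Wave scattering or field-aligned acceleration that lowers \alpha_{eq} into this "loss cone" is what sends the electron into the atmosphere.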
This scattering process is mediated by the plasma waves, which become stronger during periods of high geomagnetic activity, leading to increased diffuse aurora at those times. In the case of discrete auroras, the trapped electrons are accelerated toward Earth by electric fields that form at an altitude of about 4,000–12,000 km in the "auroral acceleration region". The electric fields point away from Earth (i.e. upward) along the magnetic field line. Electrons moving downward through these fields gain a substantial amount of energy (on the order of a few keV) in the direction along the magnetic field line toward Earth. This field-aligned acceleration decreases the pitch angle for all of the electrons passing through the region, causing many of them to hit the upper atmosphere. In contrast to the scattering process leading to diffuse auroras, the electric field increases the kinetic energy of all of the electrons transiting downward through the acceleration region by the same amount. This accelerates electrons starting from the magnetosphere with initially low energies (tens of eV or less) to the energies required to create an aurora (hundreds of eV or greater), allowing that large source of particles to contribute to creating auroral light. The accelerated electrons carry an electric current along the magnetic field lines (a Birkeland current). Since the electric field points in the same direction as the current, there is a net conversion of electromagnetic energy into particle energy in the auroral acceleration region (an electric load). The energy to power this load is eventually supplied by the magnetized solar wind flowing around the obstacle of Earth's magnetic field, although exactly how that power flows through the magnetosphere is still an active area of research. While the energy to power the aurora is ultimately derived from the solar wind, the electrons themselves do not travel directly from the solar wind into Earth's auroral zone; magnetic field lines from these regions do not connect to the solar wind, so there is no direct access for solar wind electrons. Some auroral features are also created by electrons accelerated by dispersive Alfvén waves. At small wavelengths transverse to the background magnetic field (comparable to the electron inertial length or ion gyroradius), Alfvén waves develop a significant electric field parallel to the background magnetic field. This electric field can accelerate electrons to keV energies, sufficient to produce auroral arcs. If the electrons have a speed close to the wave's phase velocity, they are accelerated in a manner analogous to a surfer catching an ocean wave. This constantly changing wave electric field can accelerate electrons along the field line, causing some of them to hit the atmosphere. Electrons accelerated by this mechanism tend to have a broad energy spectrum, in contrast to the sharply peaked energy spectrum typical of electrons accelerated by quasi-static electric fields. In addition to the discrete and diffuse electron auroras, proton auroras are caused when magnetospheric protons collide with the upper atmosphere. The proton gains an electron in the interaction, and the resulting neutral hydrogen atom emits photons. The resulting light is too dim to be seen with the naked eye. Other auroras not covered by the above discussion include transpolar arcs (formed poleward of the auroral zone), cusp auroras (formed in two small high-latitude areas on the dayside) and some non-terrestrial auroras.
Historically significant events The 2017 discovery of a 1770 Japanese diary depicting auroras above the ancient Japanese capital of Kyoto suggested that the 1770 storm may have been 7% larger than the Carrington event, which affected telegraph networks. The auroras that resulted from the Carrington event on both 28 August and 2 September 1859 are thought to be the most spectacular in recent history. In a paper to the Royal Society on 21 November 1861, Balfour Stewart described both auroral events as documented by a self-recording magnetograph at the Kew Observatory and established the connection between the 2 September 1859 auroral storm and the Carrington–Hodgson flare event when he observed that "It is not impossible to suppose that in this case our luminary was taken in the act." The second auroral event, which occurred on 2 September 1859, was a result of the (unseen) coronal mass ejection associated with the exceptionally intense Carrington–Hodgson white-light solar flare on 1 September 1859. This event produced auroras so widespread and extraordinarily bright that they were seen and reported in published scientific measurements, ship logs, and newspapers throughout the United States, Europe, Japan, and Australia. The New York Times reported that in Boston on Friday 2 September 1859 the aurora was "so brilliant that at about one o'clock ordinary print could be read by the light". One o'clock EST on Friday 2 September would have been 6:00 GMT; the self-recording magnetograph at the Kew Observatory was recording the geomagnetic storm, which was then one hour old, at its full intensity. Between 1859 and 1862, Elias Loomis published a series of nine papers on the Great Auroral Exhibition of 1859 in the American Journal of Science, in which he collected worldwide reports of the auroral event. That aurora is thought to have been produced by one of the most intense coronal mass ejections in history. It is also notable as the first time the phenomena of auroral activity and electricity were unambiguously linked. This insight was made possible not only by scientific magnetometer measurements of the era, but also because a significant portion of the telegraph lines then in service were significantly disrupted for many hours throughout the storm. Some telegraph lines, however, seem to have been of the appropriate length and orientation to produce a sufficient geomagnetically induced current from the electromagnetic field to allow for continued communication with the telegraph operators' power supplies switched off. A conversation between two operators of the American Telegraph Line between Boston and Portland, Maine, on the night of 2 September 1859, reported in the Boston Traveller, was carried on for around two hours using no battery power at all and working solely with the current induced by the aurora, and it was said that this was the first time on record that more than a word or two was transmitted in such manner. Such events reinforced the general conclusion that auroral activity and electric currents were linked. In May 2024, a series of solar storms caused the aurora borealis to be observed from as far south as Ferdows, Iran. Historical views and folklore The earliest datable record of an aurora appears in the Bamboo Annals, a historical chronicle of ancient China, in 977 or 957 BC. An aurora was described by the Greek explorer Pytheas in the 4th century BC.
Seneca wrote about auroras in the first book of his Naturales Quaestiones, classifying them by shape, for instance as 'barrel-like', 'chasm', 'bearded', and 'like cypress trees', and describing their manifold colours. He wrote about whether they were above or below the clouds, and recalled that under Tiberius, an aurora formed above the port city of Ostia that was so intense and red that a cohort of the army, stationed nearby for fire duty, galloped to the rescue. It has been suggested that Pliny the Elder depicted the aurora borealis in his Natural History, when he refers to "falling red flames" and "daylight in the night". The earliest depiction of the aurora may have been in Cro-Magnon cave paintings of northern Spain dating to 30,000 BC. The oldest known written record of the aurora was in a Chinese legend written around 2600 BC. One autumn around 2000 BC, according to a legend, a young woman named Fubao was sitting alone in the wilderness by a bay, when suddenly a "magical band of light" appeared like "moving clouds and flowing water", turning into a bright halo around the Big Dipper, which cascaded a pale silver brilliance, illuminating the earth and making shapes and shadows seem alive. Moved by this sight, Fubao became pregnant and gave birth to a son, the Emperor Xuanyuan, known legendarily as the initiator of Chinese culture and the ancestor of all Chinese people. One Chinese classic describes a creature resembling a red dragon shining in the night sky, with a body a thousand miles long. In ancient times, the Chinese did not have a fixed word for the aurora, so it was named according to its different shapes, such as "Sky Dog", "Sword/Knife Star", "Chiyou banner", "Sky's Open Eyes", and "Stars like Rain". In Japanese folklore, pheasants were considered messengers from heaven. However, researchers from Japan's Graduate University for Advanced Studies and National Institute of Polar Research claimed in March 2020 that the red pheasant tails witnessed across the night sky over Japan in 620 A.D. might have been a red aurora produced during a magnetic storm. In the traditions of Aboriginal Australians, the Aurora Australis is commonly associated with fire. For example, the Gunditjmara people of western Victoria called auroras by a name meaning 'ashes', while the Gunai people of eastern Victoria perceived auroras as bushfires in the spirit world. The Dieri people of South Australia say that an auroral display is an evil spirit creating a large fire. Similarly, the Ngarrindjeri people of South Australia refer to auroras seen over Kangaroo Island as the campfires of spirits in the 'Land of the Dead'. Aboriginal people in southwest Queensland believe the auroras to be the fires of the Oola Pikka, ghostly spirits who spoke to the people through auroras. Sacred law forbade anyone except male elders from watching or interpreting the messages of ancestors they believed were transmitted through an aurora. Among the Māori people of New Zealand, the aurora australis, or "great torches in the sky", was lit by ancestors who sailed south to a "land of ice" (or their descendants); these people were said to be Ui-te-Rangiora's expedition party, who had reached the Southern Ocean around the 7th century. In Scandinavia, the first mention of the northern lights is found in a Norwegian chronicle from AD 1230.
The chronicler had heard about this phenomenon from compatriots returning from Greenland, and he gave three possible explanations: that the ocean was surrounded by vast fires; that sun flares could reach around the world to its night side; or that glaciers could store energy so that they eventually became fluorescent. Walter William Bryant wrote in his book Kepler (1920) that Tycho Brahe "seems to have been something of a homoeopathist, for he recommends sulfur to cure infectious diseases 'brought on by the sulfurous vapours of the Aurora Borealis'". In 1778, Benjamin Franklin theorized in his paper Aurora Borealis, Suppositions and Conjectures towards forming an Hypothesis for its Explanation that an aurora was caused by a concentration of electrical charge in the polar regions intensified by the snow and moisture in the air. Observations of the rhythmic movement of compass needles due to the influence of an aurora were confirmed in the Swedish city of Uppsala by Anders Celsius and Olof Hiorter. In 1741, Hiorter was able to link large magnetic fluctuations to the observation of an aurora overhead. This evidence helped to support their theory that 'magnetic storms' are responsible for such compass fluctuations. A variety of Native American myths surround the spectacle. The European explorer Samuel Hearne travelled with Chipewyan Dene in 1771 and recorded their views on the aurora, which they associated with the caribou. According to Hearne, the Dene people saw a resemblance between an aurora and the sparks produced when caribou fur is stroked. They believed that the lights were the spirits of their departed friends dancing in the sky, and when they shone brightly it meant that their deceased friends were very happy. During the night after the Battle of Fredericksburg, an aurora was seen from the battlefield. The Confederate Army took this as a sign that God was on their side, as the lights were rarely seen so far south. The painting Aurora Borealis by Frederic Edwin Church is widely interpreted to represent the conflict of the American Civil War. A mid-19th-century British source says auroras were a rare occurrence before the 18th century. It quotes Halley as saying that before the aurora of 1716, no such phenomenon had been recorded for more than 80 years, and none of any consequence since 1574. It says no appearance is recorded in the Transactions of the French Academy of Sciences between 1666 and 1716, and that one aurora recorded in the Berlin Miscellany for 1797 was called a very rare event. One observed in 1723 at Bologna was stated to be the first ever seen there. Celsius (1733) states that the oldest residents of Uppsala thought the phenomenon a great rarity before 1716. The period between approximately 1645 and 1715 corresponds to the Maunder minimum in sunspot activity. In Robert W. Service's satirical poem "The Ballad of the Northern Lights" (1908), a Yukon prospector discovers that the aurora is the glow from a radium mine. He stakes his claim, then goes to town looking for investors. In the early 1900s, the Norwegian scientist Kristian Birkeland laid the foundation for the current understanding of geomagnetism and polar auroras. In Sami mythology, the northern lights are caused by the deceased who bled to death cutting themselves, their blood spilling on the sky.
Many aboriginal peoples of northern Eurasia and North America share similar beliefs of northern lights being the blood of the deceased, some believing they are caused by dead warriors' blood spraying on the sky as they engage in playing games, riding horses or having fun in some other way. Extraterrestrial aurorae Both Jupiter and Saturn have magnetic fields that are stronger than Earth's (Jupiter's equatorial field strength is 4.3 gauss, compared to 0.3 gauss for Earth), and both have extensive radiation belts. Auroras have been observed on both gas planets, most clearly using the Hubble Space Telescope, and the Cassini and Galileo spacecraft, as well as on Uranus and Neptune. The aurorae on Saturn seem, like Earth's, to be powered by the solar wind. However, Jupiter's aurorae are more complex. Jupiter's main auroral oval is associated with the plasma produced by the volcanic moon Io, and the transport of this plasma within the planet's magnetosphere. An uncertain fraction of Jupiter's aurorae are powered by the solar wind. In addition, the moons, especially Io, are also powerful sources of aurora. These arise from electric currents along field lines ("field aligned currents"), generated by a dynamo mechanism due to the relative motion between the rotating planet and the moving moon. Io, which has active volcanism and an ionosphere, is a particularly strong source, and its currents also generate radio emissions, which have been studied since 1955. Using the Hubble Space Telescope, auroras over Io, Europa and Ganymede have all been observed. Auroras have also been observed on Venus and Mars. Venus has no magnetic field and so Venusian auroras appear as bright and diffuse patches of varying shape and intensity, sometimes distributed over the full disc of the planet. A Venusian aurora originates when electrons from the solar wind collide with the night-side atmosphere. An aurora was detected on Mars, on 14 August 2004, by the SPICAM instrument aboard Mars Express. The aurora was located at Terra Cimmeria, in the region of 177° east, 52° south. The total size of the emission region was about 30 km across, and possibly about 8 km high. By analysing a map of crustal magnetic anomalies compiled with data from Mars Global Surveyor, scientists observed that the region of the emissions corresponded to an area where the strongest magnetic field is localized. This correlation indicated that the origin of the light emission was a flux of electrons moving along the crust magnetic lines and exciting the upper atmosphere of Mars. Between 2014 and 2016, cometary auroras were observed on comet 67P/Churyumov–Gerasimenko by multiple instruments on the Rosetta spacecraft. The auroras were observed at far-ultraviolet wavelengths. Coma observations revealed atomic emissions of hydrogen and oxygen caused by the photodissociation (not photoionization, like in terrestrial auroras) of water molecules in the comet's coma. The interaction of accelerated electrons from the solar wind with gas particles in the coma is responsible for the aurora. Since comet 67P has no magnetic field, the aurora is diffusely spread around the comet. Exoplanets, such as hot Jupiters, have been suggested to experience ionization in their upper atmospheres and generate an aurora modified by weather in their turbulent tropospheres. However, there is no current detection of an exoplanet aurora. The first ever extra-solar auroras were discovered in July 2015 over the brown dwarf star LSR J1835+3259. 
The mainly red aurora was found to be a million times brighter than the northern lights, a result of the charged particles interacting with hydrogen in the atmosphere. It has been speculated that stellar winds may be stripping off material from the surface of the brown dwarf to produce their own electrons. Another possible explanation for the auroras is that an as-yet-undetected body around the dwarf star is throwing off material, as is the case with Jupiter and its moon Io. See also Airglow Aurora (heraldry) Heliophysics List of solar storms Paschen's law Space tornado Space weather Explanatory notes References Further reading These two both include detailed descriptions of historical observations and descriptions. Alt URL External links Aurora forecast – Will there be northern lights? Current global map showing the probability of visible aurora Aurora – Forecasting (archived 24 November 2016) Official MET aurora forecasting in Iceland Aurora Borealis – Predicting Solar Terrestrial Data – Online Converter – Northern Lights Latitude Aurora Service Europe – Aurora forecasts for Europe (archived 11 March 2019) Live Northern Lights webstream World's Best Aurora – The Northwest Territories is the world's Northern Lights mecca. Multimedia Amazing time-lapse video of Aurora Borealis – Shot in Iceland over the winter of 2013/2014 Popular video of Aurora Borealis – Taken in Norway in 2011 Aurora Photo Gallery – Views taken 2009–2011 (archived 4 October 2011) Aurora Photo Gallery – "Full-Sky Aurora" over Eastern Norway. December 2011 Videos and Photos – Auroras at Night (archived 2 September 2010) Video (04:49) – Aurora Borealis – How The Northern Lights Are Created (video on YouTube) Video (47:40) – Northern Lights – Documentary Video (5:00) – Northern lights video in real time Video (01:42) – Northern Light – Story of Geomagnetic Storm (Terschelling Island – 6/7 April 2000) (archived 17 August 2011) Video (01:56) (time-lapse) − Auroras – Ground-Level View from Finnish Lapland 2011 (video on YouTube) Video (02:43) (time-lapse) − Auroras – Ground-Level View from Tromsø, Norway, 24 November 2010 (video on YouTube) Video (00:27) (time-lapse) – Earth and Auroras – Viewed from the International Space Station (video on YouTube) Terrestrial plasmas Space plasmas Polar regions of the Earth Atmospheric optical phenomena Earth phenomena Electrical phenomena Light sources Planetary science Sources of electromagnetic interference Articles containing video clips Space weather Geomagnetism
Aurora
[ "Physics", "Astronomy" ]
10,461
[ "Space plasmas", "Physical phenomena", "Earth phenomena", "Astrophysics", "Optical phenomena", "Electrical phenomena", "Planetary science", "Atmospheric optical phenomena", "Astronomical sub-disciplines" ]
49,681
https://en.wikipedia.org/wiki/Ontology%20%28information%20science%29
In information science, an ontology encompasses a representation, formal naming, and definitions of the categories, properties, and relations between the concepts, data, or entities that pertain to one, many, or all domains of discourse. More simply, an ontology is a way of showing the properties of a subject area and how they are related, by defining a set of terms and relational expressions that represent the entities in that subject area. The field which studies ontologies so conceived is sometimes referred to as applied ontology. Every academic discipline or field, in creating its terminology, thereby lays the groundwork for an ontology. Each uses ontological assumptions to frame explicit theories, research and applications. Improved ontologies can enhance problem solving within that domain, interoperability of data systems, and discoverability of data. Translating research papers within every field is a problem made easier when experts from different countries maintain a controlled vocabulary of jargon between each of their languages. For instance, the definition and ontology of economics is a primary concern in Marxist economics, but also in other subfields of economics. An example of economics relying on information science occurs in cases where a simulation or model is intended to enable economic decisions, such as determining what capital assets are at risk and by how much (see risk management). What ontologies in both information science and philosophy have in common is the attempt to represent entities, including both objects and events, with all their interdependent properties and relations, according to a system of categories. In both fields, there is considerable work on problems of ontology engineering (e.g., Quine and Kripke in philosophy, Sowa and Guarino in information science), and debates concerning to what extent normative ontology is possible (e.g., foundationalism and coherentism in philosophy, BFO and Cyc in artificial intelligence). Applied ontology is considered by some as a successor to prior work in philosophy. However, many current efforts are more concerned with establishing controlled vocabularies of narrow domains than with philosophical first principles, or with questions such as the mode of existence of fixed essences or whether enduring objects (e.g., perdurantism and endurantism) may be ontologically more primary than processes. Applied ontology has received considerable attention in artificial intelligence, in subfields such as natural language processing (within machine translation) and knowledge representation, but ontology editors are now used in a wide range of fields, including biomedical informatics and industry. Such efforts often use ontology editing tools such as Protégé. Ontology in Philosophy Ontology is a branch of philosophy and intersects areas such as metaphysics, epistemology, and philosophy of language, as it considers how knowledge, language, and perception relate to the nature of reality. Metaphysics deals with questions like "what exists?" and "what is the nature of reality?". One of five traditional branches of philosophy, metaphysics is concerned with exploring existence through properties, entities and relations such as those between particulars and universals, intrinsic and extrinsic properties, or essence and existence. Metaphysics has been an ongoing topic of discussion since recorded history. Etymology The compound word ontology combines onto-, from the Greek ὄν, on (gen. ὄντος, ontos), i.e.
"being; that which is", which is the present participle of the verb εἰμί, eimí, i.e. "to be, I am", and -λογία, -logia, i.e. "logical discourse", see classical compounds for this type of word formation. While the etymology is Greek, the oldest extant record of the word itself, the Neo-Latin form ontologia, appeared in 1606 in the work Ogdoas Scholastica by Jacob Lorhard (Lorhardus) and in 1613 in the Lexicon philosophicum by Rudolf Göckel (Goclenius). The first occurrence in English of ontology as recorded by the OED (Oxford English Dictionary, online edition, 2008) came in Archeologia Philosophica Nova or New Principles of Philosophy by Gideon Harvey. Formal Ontology Since the mid-1970s, researchers in the field of artificial intelligence (AI) have recognized that knowledge engineering is the key to building large and powerful AI systems. AI researchers argued that they could create new ontologies as computational models that enable certain kinds of automated reasoning, which was only marginally successful. In the 1980s, the AI community began to use the term ontology to refer to both a theory of a modeled world and a component of knowledge-based systems. In particular, David Powers introduced the word ontology to AI to refer to real world or robotic grounding, publishing in 1990 literature reviews emphasizing grounded ontology in association with the call for papers for a AAAI Summer Symposium Machine Learning of Natural Language and Ontology, with an expanded version published in SIGART Bulletin and included as a preface to the proceedings. Some researchers, drawing inspiration from philosophical ontologies, viewed computational ontology as a kind of applied philosophy. In 1993, the widely cited web page and paper "Toward Principles for the Design of Ontologies Used for Knowledge Sharing" by Tom Gruber used ontology as a technical term in computer science closely related to earlier idea of semantic networks and taxonomies. Gruber introduced the term as a specification of a conceptualization: An ontology is a description (like a formal specification of a program) of the concepts and relationships that can formally exist for an agent or a community of agents. This definition is consistent with the usage of ontology as set of concept definitions, but more general. And it is a different sense of the word than its use in philosophy. Attempting to distance ontologies from taxonomies and similar efforts in knowledge modeling that rely on classes and inheritance, Gruber stated (1993): Ontologies are often equated with taxonomic hierarchies of classes, class definitions, and the subsumption relation, but ontologies need not be limited to these forms. Ontologies are also not limited to conservative definitions – that is, definitions in the traditional logic sense that only introduce terminology and do not add any knowledge about the world. To specify a conceptualization, one needs to state axioms that do constrain the possible interpretations for the defined terms. As refinement of Gruber's definition Feilmayr and Wöß (2016) stated: "An ontology is a formal, explicit specification of a shared conceptualization that is characterized by high semantic expressiveness required for increased complexity." Formal Ontology Components Contemporary ontologies share many structural similarities, regardless of the language in which they are expressed. Most ontologies describe individuals (instances), classes (concepts), attributes and relations. 
Types Domain ontology A domain ontology (or domain-specific ontology) represents concepts which belong to a realm of the world, such as biology or politics. Each domain ontology typically models domain-specific definitions of terms. For example, the word card has many different meanings. An ontology about the domain of poker would model the "playing card" meaning of the word, while an ontology about the domain of computer hardware would model the "punched card" and "video card" meanings. Since domain ontologies are written by different people, they represent concepts in very specific and unique ways, and are often incompatible within the same project. As systems that rely on domain ontologies expand, they often need to merge domain ontologies by hand-tuning each entity or by using a combination of software merging and hand-tuning. This presents a challenge to the ontology designer. Different ontologies in the same domain arise due to different languages, different intended usage of the ontologies, and different perceptions of the domain (based on cultural background, education, ideology, etc.). At present, merging ontologies that are not developed from a common upper ontology is a largely manual process and therefore time-consuming and expensive. Domain ontologies that use the same upper ontology to provide a set of basic elements with which to specify the meanings of the domain ontology entities can be merged with less effort. There are studies on generalized techniques for merging ontologies, but this area of research is still ongoing, and only recently have projects sidestepped the issue by having multiple domain ontologies share the same upper ontology, as in the OBO Foundry. Upper ontology An upper ontology (or foundation ontology) is a model of the commonly shared relations and objects that are generally applicable across a wide range of domain ontologies. It usually employs a core glossary that overarches the terms and associated object descriptions as they are used in various relevant domain ontologies. Standardized upper ontologies available for use include BFO, BORO method, Dublin Core, GFO, Cyc, SUMO, UMBEL, and DOLCE. WordNet has been considered an upper ontology by some and has been used as a linguistic tool for learning domain ontologies. Hybrid ontology The Gellish ontology is an example of a combination of an upper and a domain ontology. Visualization A survey of ontology visualization methods is presented by Katifori et al. An updated survey of ontology visualization methods and tools was published by Dudás et al. The most established ontology visualization methods, namely indented tree and graph visualization, are evaluated by Fu et al. A visual language for ontologies represented in OWL is specified by the Visual Notation for OWL Ontologies (VOWL). Engineering Ontology engineering (also called ontology building) is a set of tasks related to the development of ontologies for a particular domain. It is a subfield of knowledge engineering that studies the ontology development process, the ontology life cycle, the methods and methodologies for building ontologies, and the tools and languages that support them. Ontology engineering aims to make explicit the knowledge contained in software applications and organizational procedures for a particular domain. Ontology engineering offers a direction for overcoming semantic obstacles, such as those related to the definitions of business terms and software classes; a small illustration of one such obstacle follows below.
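One of the semantic obstacles just mentioned, the polysemy of a term across domain ontologies, can be sketched in a few lines of Python. The term tables and the merge policy here are invented purely for illustration: each domain binds the same term "card" to a different concept anchored in a shared upper ontology, and the merge keeps both senses apart instead of conflating them:

poker_domain    = {"card": ("PlayingCard", "GameEquipment")}
hardware_domain = {"card": ("VideoCard",   "HardwareComponent")}

def merge(*domains):
    """Union of domain term tables; identical (concept, anchor) pairs
    collapse, while differing senses of a term are kept side by side."""
    merged = {}
    for domain in domains:
        for term, sense in domain.items():
            senses = merged.setdefault(term, [])
            if sense not in senses:
                senses.append(sense)
    return merged

print(merge(poker_domain, hardware_domain))
# {'card': [('PlayingCard', 'GameEquipment'), ('VideoCard', 'HardwareComponent')]}

Anchoring both senses in the same upper ontology is what lets such a merge stay mechanical rather than requiring the entity-by-entity hand-tuning described above.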
Known challenges with ontology engineering include: Ensuring the ontology is current with domain knowledge and term use Providing sufficient specificity and concept coverage for the domain of interest, thus minimizing the content completeness problem Ensuring the ontology can support its use cases Editors Ontology editors are applications designed to assist in the creation or manipulation of ontologies. It is common for ontology editors to use one or more ontology languages. Aspects of ontology editors include: visual navigation possibilities within the knowledge model, inference engines and information extraction; support for modules; the import and export of foreign knowledge representation languages for ontology matching; and the support of meta-ontologies such as OWL-S, Dublin Core, etc. Learning Ontology learning is the automatic or semi-automatic creation of ontologies, including extracting a domain's terms from natural language text. As building ontologies manually is extremely labor-intensive and time-consuming, there is great motivation to automate the process. Information extraction and text mining have been explored to automatically link ontologies to documents, for example in the context of the BioCreative challenges. Research Epistemological assumptions, which in research ask "What do you know?" or "How do you know it?", create the foundation researchers use when approaching a certain topic or area for potential research. As epistemology is directly linked to knowledge and how we come about accepting certain truths, individuals conducting academic research must understand what allows them to begin theory building. Put simply, epistemological assumptions force researchers to question how they arrive at the knowledge they have. Languages An ontology language is a formal language used to encode an ontology. There are a number of such languages for ontologies, both proprietary and standards-based: Common Algebraic Specification Language is a general logic-based specification language developed within the IFIP working group 1.3 "Foundations of System Specifications" and is a de facto standard language for software specifications. It is now being applied to ontology specifications in order to provide modularity and structuring mechanisms. Common logic is ISO standard 24707, a specification of a family of ontology languages that can be accurately translated into each other. The Cyc project has its own ontology language called CycL, based on first-order predicate calculus with some higher-order extensions. DOGMA (Developing Ontology-Grounded Methods and Applications) adopts the fact-oriented modeling approach to provide a higher level of semantic stability. The Gellish language includes rules for its own extension and thus integrates an ontology with an ontology language. IDEF5 is a software engineering method to develop and maintain usable, accurate, domain ontologies. KIF is a syntax for first-order logic that is based on S-expressions. SUO-KIF is a derivative version supporting the Suggested Upper Merged Ontology. MOF and UML are standards of the OMG. Olog is a category-theoretic approach to ontologies, emphasizing translations between ontologies using functors. OBO, a language used for biological and biomedical ontologies. OntoUML is an ontologically well-founded profile of UML for conceptual modeling of domain ontologies.
OWL is a language for making ontological statements, developed as a follow-on from RDF and RDFS, as well as earlier ontology language projects including OIL, DAML, and DAML+OIL. OWL is intended to be used over the World Wide Web, and all its elements (classes, properties and individuals) are defined as RDF resources, and identified by URIs. Rule Interchange Format (RIF) and F-Logic combine ontologies and rules. Semantic Application Design Language (SADL) captures a subset of the expressiveness of OWL, using an English-like language entered via an Eclipse Plug-in. SBVR (Semantics of Business Vocabularies and Rules) is an OMG standard adopted in industry to build ontologies. TOVE Project, TOronto Virtual Enterprise project Published examples Arabic Ontology, a linguistic ontology for Arabic, which can be used as an Arabic Wordnet but with ontologically-clean content. AURUM – Information Security Ontology, An ontology for information security knowledge sharing, enabling users to collaboratively understand and extend the domain knowledge body. It may serve as a basis for automated information security risk and compliance management. BabelNet, a very large multilingual semantic network and ontology, lexicalized in many languages Basic Formal Ontology, a formal upper ontology designed to support scientific research BioPAX, an ontology for the exchange and interoperability of biological pathway (cellular processes) data BMO, an e-Business Model Ontology based on a review of enterprise ontologies and business model literature SSBMO, a Strongly Sustainable Business Model Ontology based on a review of the systems based natural and social science literature (including business). Includes critique of and significant extensions to the Business Model Ontology (BMO). CCO and GexKB, Application Ontologies (APO) that integrate diverse types of knowledge with the Cell Cycle Ontology (CCO) and the Gene Expression Knowledge Base (GexKB) CContology (Customer Complaint Ontology), an e-business ontology to support online customer complaint management CIDOC Conceptual Reference Model, an ontology for cultural heritage COSMO, a Foundation Ontology (current version in OWL) that is designed to contain representations of all of the primitive concepts needed to logically specify the meanings of any domain entity. It is intended to serve as a basic ontology that can be used to translate among the representations in other ontologies or databases. It started as a merger of the basic elements of the OpenCyc and SUMO ontologies, and has been supplemented with other ontology elements (types, relations) so as to include representations of all of the words in the Longman dictionary defining vocabulary. 
Computer Science Ontology, an automatically generated ontology of research topics in the field of computer science Cyc, a large Foundation Ontology for formal representation of the universe of discourse Disease Ontology, designed to facilitate the mapping of diseases and associated conditions to particular medical codes DOLCE, a Descriptive Ontology for Linguistic and Cognitive Engineering Drammar, ontology of drama Dublin Core, a simple ontology for documents and publishing Financial Industry Business Ontology (FIBO), a business conceptual ontology for the financial industry Foundational, Core and Linguistic Ontologies Foundational Model of Anatomy, an ontology for human anatomy Friend of a Friend, an ontology for describing persons, their activities and their relations to other people and objects Gene Ontology for genomics Gellish English dictionary, an ontology that includes a dictionary and taxonomy that includes an upper ontology and a lower ontology that focuses on industrial and business applications in engineering, technology and procurement. Geopolitical ontology, an ontology describing geopolitical information created by Food and Agriculture Organization(FAO). The geopolitical ontology includes names in multiple languages (English, French, Spanish, Arabic, Chinese, Russian and Italian); maps standard coding systems (UN, ISO, FAOSTAT, AGROVOC, etc.); provides relations among territories (land borders, group membership, etc.); and tracks historical changes. In addition, FAO provides web services of geopolitical ontology and a module maker to download modules of the geopolitical ontology into different formats (RDF, XML, and EXCEL). See more information at FAO Country Profiles. GAO (General Automotive Ontology) – an ontology for the automotive industry that includes 'car' extensions GOLD, General Ontology for Linguistic Description GUM (Generalized Upper Model), a linguistically motivated ontology for mediating between clients systems and natural language technology IDEAS Group, a formal ontology for enterprise architecture being developed by the Australian, Canadian, UK and U.S. Defence Depts. Linkbase, a formal representation of the biomedical domain, founded upon Basic Formal Ontology. LPL, Landmark Pattern Language NCBO Bioportal, biological and biomedical ontologies and associated tools to search, browse and visualise NIFSTD Ontologies from the Neuroscience Information Framework: a modular set of ontologies for the neuroscience domain. OBO-Edit, an ontology browser for most of the Open Biological and Biomedical Ontologies OBO Foundry, a suite of interoperable reference ontologies in biology and biomedicine OMNIBUS Ontology, an ontology of learning, instruction, and instructional design Ontology for Biomedical Investigations, an open-access, integrated ontology of biological and clinical investigations ONSTR, Ontology for Newborn Screening Follow-up and Translational Research, Newborn Screening Follow-up Data Integration Collaborative, Emory University, Atlanta. Plant Ontology for plant structures and growth/development stages, etc. POPE, Purdue Ontology for Pharmaceutical Engineering PRO, the Protein Ontology of the Protein Information Resource, Georgetown University ProbOnto, knowledge base and ontology of probability distributions. 
Program abstraction taxonomy Protein Ontology for proteomics RXNO Ontology, for name reactions in chemistry SCDO, the Sickle Cell Disease Ontology, facilitates data sharing and collaborations within the SDC community, amongst other applications (see list on SCDO website). Schema.org, for embedding structured data into web pages, primarily for the benefit of search engines Sequence Ontology, for representing genomic feature types found on biological sequences SNOMED CT (Systematized Nomenclature of Medicine – Clinical Terms) Suggested Upper Merged Ontology, a formal upper ontology Systems Biology Ontology (SBO), for computational models in biology SWEET, Semantic Web for Earth and Environmental Terminology SSN/SOSA, The Semantic Sensor Network Ontology (SSN) and Sensor, Observation, Sample, and Actuator Ontology (SOSA) are W3C Recommendation and OGC Standards for describing sensors and their observations. ThoughtTreasure ontology TIME-ITEM, Topics for Indexing Medical Education Uberon, representing animal anatomical structures UMBEL, a lightweight reference structure of 20,000 subject concept classes and their relationships derived from OpenCyc WordNet, a lexical reference system YAMATO, Yet Another More Advanced Top-level Ontology YSO – General Finnish Ontology The W3C Linking Open Data community project coordinates attempts to converge different ontologies into worldwide Semantic Web. Libraries The development of ontologies has led to the emergence of services providing lists or directories of ontologies called ontology libraries. The following are libraries of human-selected ontologies. COLORE is an open repository of first-order ontologies in Common Logic with formal links between ontologies in the repository. DAML Ontology Library maintains a legacy of ontologies in DAML. Ontology Design Patterns portal is a wiki repository of reusable components and practices for ontology design, and also maintains a list of exemplary ontologies. Protégé Ontology Library contains a set of OWL, Frame-based and other format ontologies. SchemaWeb is a directory of RDF schemata expressed in RDFS, OWL and DAML+OIL. The following are both directories and search engines. OBO Foundry is a suite of interoperable reference ontologies in biology and biomedicine. Bioportal (ontology repository of NCBO) Linked Open Vocabularies OntoSelect Ontology Library offers similar services for RDF/S, DAML and OWL ontologies. Ontaria is a "searchable and browsable directory of semantic web data" with a focus on RDF vocabularies with OWL ontologies. (NB Project "on hold" since 2004). Swoogle is a directory and search engine for all RDF resources available on the Web, including ontologies. Open Ontology Repository initiative ROMULUS is a foundational ontology repository aimed at improving semantic interoperability. Currently there are three foundational ontologies in the repository: DOLCE, BFO and GFO. Examples of applications In general, ontologies can be used beneficially in several fields. Enterprise applications. A more concrete example is SAPPHIRE (Health care) or Situational Awareness and Preparedness for Public Health Incidences and Reasoning Engines which is a semantics-based health information system capable of tracking and evaluating situations and occurrences that may affect public health. Geographic information systems bring together data from different sources and benefit therefore from ontological metadata which helps to connect the semantics of the data. 
Domain-specific ontologies are extremely important in biomedical research, which requires named entity disambiguation of various biomedical terms and abbreviations that have the same string of characters but represent different biomedical concepts. For example, CSF can represent Colony Stimulating Factor or Cerebral Spinal Fluid, both of which are represented by the same term, CSF, in biomedical literature. This is why a large number of public ontologies are related to the life sciences. Life science data science tools that fail to implement these types of biomedical ontologies will not be able to accurately determine causal relationships between concepts. See also Commonsense knowledge bases Concept map Controlled vocabulary Classification scheme (information science) Folksonomy Formal concept analysis Formal ontology General Concept Lattice Knowledge graph Lattice Ontology Ontology alignment Ontology chart Open Semantic Framework Semantic technology Soft ontology Terminology extraction Weak ontology Web Ontology Language Related philosophical concepts Alphabet of human thought Characteristica universalis Interoperability Level of measurement Metalanguage Natural semantic metalanguage References Further reading External links Knowledge Representation at Open Directory Project Library of ontologies (Archive, Unmaintained) GoPubMed using Ontologies for searching ONTOLOG (a.k.a. "Ontolog Forum") - an Open, International, Virtual Community of Practice on Ontology, Ontological Engineering and Semantic Technology Use of Ontologies in Natural Language Processing Ontology Summit - an annual series of events (first started in 2006) that involves the ontology community and communities related to each year's theme chosen for the summit. Standardization of Ontologies Knowledge engineering Technical communication Information science Semantic Web Knowledge representation Knowledge bases Ontology editors
Ontology (information science)
[ "Engineering" ]
4,951
[ "Systems engineering", "Knowledge engineering" ]
49,685
https://en.wikipedia.org/wiki/Numeral
A numeral is a figure (symbol), word, or group of figures (symbols) or words denoting a number. It may refer to: Numeral system used in mathematics Numeral (linguistics), a part of speech denoting numbers (e.g. one and first in English) Numerical digit, the glyphs used to represent numerals See also Numerology, belief in a divine relationship between numbers and coinciding events
Numeral
[ "Mathematics" ]
94
[ "Numeral systems", "Numerals" ]
49,693
https://en.wikipedia.org/wiki/Gain%20%28antenna%29
In electromagnetics, an antenna's gain is a key performance parameter which combines the antenna's directivity and radiation efficiency. The term power gain has been deprecated by IEEE. In a transmitting antenna, the gain describes how well the antenna converts input power into radio waves headed in a specified direction. In a receiving antenna, the gain describes how well the antenna converts radio waves arriving from a specified direction into electrical power. When no direction is specified, gain is understood to refer to the peak value of the gain, the gain in the direction of the antenna's main lobe. A plot of the gain as a function of direction is called the antenna pattern or radiation pattern. It is not to be confused with directivity, which does not take an antenna's radiation efficiency into account. Gain or 'absolute gain' is defined as "The ratio of the radiation intensity in a given direction to the radiation intensity that would be produced if the power accepted by the antenna were isotropically radiated". Usually this ratio is expressed in decibels with respect to an isotropic radiator (dBi). An alternative definition compares the received power to the power received by a lossless half-wave dipole antenna, in which case the units are written as dBd. Since a lossless dipole antenna has a gain of 2.15 dBi, the relation between these units is G(dBd) = G(dBi) − 2.15. For a given frequency, the antenna's effective area is proportional to the gain. An antenna's effective length is proportional to the square root of the antenna's gain for a particular frequency and radiation resistance. Due to reciprocity, the gain of any antenna when receiving is equal to its gain when transmitting. Gain Gain is a unitless measure equal to the product of an antenna's radiation efficiency e and directivity D: G = e · D. Radiation efficiency The radiation efficiency of an antenna is "The ratio of the total power radiated by an antenna to the net power accepted by the antenna from the connected transmitter." A transmitting antenna is supplied with power by a transmission line connecting the antenna to a radio transmitter. The power accepted by the antenna is the power supplied to the antenna's terminals. Losses prior to the antenna terminals are accounted for by separate impedance mismatch factors which are therefore not included in the calculation of radiation efficiency. Gain in decibels Published numbers for antenna gain are almost always expressed in decibels (dB), a logarithmic scale. From the gain factor G, one finds the gain in decibels as G(dBi) = 10 · log10(G). Therefore, an antenna with a peak power gain of 5 would be said to have a gain of 7 dBi. dBi is used rather than just dB to emphasize that this is the gain according to the basic definition, in which the antenna is compared to an isotropic radiator. When actual measurements of an antenna's gain are made by a laboratory, the field strength of the test antenna is measured when supplied with, say, 1 watt of transmitter power, at a certain distance. That field strength is compared to the field strength found using a so-called reference antenna at the same distance receiving the same power in order to determine the gain of the antenna under test. That ratio would be equal to G if the reference antenna were an isotropic radiator. However, a true isotropic radiator cannot be built, so in practice a different antenna is used. This will often be a half-wave dipole, a very well understood and repeatable antenna that can be easily built for any frequency.
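The decibel relations just described are easy to check numerically. A minimal Python sketch follows; the function names are invented for illustration, while the 1.64 dipole factor and the ~2.15 dB offset come from the text:

import math

DIPOLE_GAIN = 1.64   # directive gain of a lossless half-wave dipole

def factor_to_dbi(g):
    return 10.0 * math.log10(g)

def factor_to_dbd(g):
    return 10.0 * math.log10(g / DIPOLE_GAIN)

# The article's running example, a peak power gain of 5:
print(round(factor_to_dbi(5), 2))            # 6.99 -> quoted as ~7 dBi
print(round(factor_to_dbd(5), 2))            # 4.84 dBd
print(round(factor_to_dbi(DIPOLE_GAIN), 2))  # 2.15 dB offset between the two scales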
The directive gain of a half-wave dipole with respect to the isotropic radiator is known to be 1.64, and it can be made nearly 100% efficient. Since the gain has been measured with respect to this reference antenna, the gain of the test antenna is often quoted relative to that of the dipole. The gain relative to a dipole is denoted using dBd instead of dBi to avoid confusion. In terms of the true gain (relative to an isotropic radiator) G, this figure for the gain is given by G(dBd) = 10 · log10(G/1.64). For instance, the above antenna with a gain G = 5 would have a gain with respect to a dipole of 5/1.64 ≈ 3.05, or in decibels one would call this 10 log(3.05) ≈ 4.84 dBd. In general, a gain expressed in dBd is 2.15 dB smaller than the same gain expressed in dBi. Both dBi and dBd are in common use. When an antenna's maximum gain is specified in decibels (for instance, by a manufacturer) one must be certain as to whether this means the gain relative to an isotropic radiator or with respect to a dipole. If it specifies dBi or dBd then there is no ambiguity, but if only dB is specified then the fine print must be consulted. Either figure can be easily converted into the other using the above relationship. When considering an antenna's directional pattern, gain with respect to a dipole does not imply a comparison of that antenna's gain in each direction to a dipole's gain in that direction. Rather, it is a comparison between the antenna's gain in each direction to the peak gain of the dipole (1.64). In any direction, therefore, such numbers are 2.15 dB smaller than the gain expressed in dBi. Partial gain Partial gain is calculated in the same way as power gain, but for a particular polarization. It is defined as the part of the radiation intensity corresponding to a given polarization, divided by the total radiation intensity of an isotropic antenna. The partial gains in the θ and φ components are expressed as Gθ and Gφ, where Uθ and Uφ represent the radiation intensity in a given direction contained in the respective field component. As a result of this definition, the total gain of an antenna is the sum of the partial gains for any two orthogonal polarizations. Examples First example Suppose a lossless antenna has a given radiation pattern. To find the gain of such an antenna, we first find its peak radiation intensity, then find the total radiated power by integrating over all directions. Since the antenna is specified as being lossless, the radiation efficiency is 1, and the maximum gain is the peak intensity divided by the average intensity over all directions. The figure relative to a half-wave dipole then follows by dividing by 1.64. Second example As an example, consider an antenna that radiates an electromagnetic wave whose electric field has an amplitude E at a distance r. That amplitude is proportional to the current I (in amperes) fed to the antenna, with a proportionality constant (in ohms) that is characteristic of each antenna. At a large distance r, the radiated wave can be considered locally as a plane wave, whose intensity is proportional to the square of the field amplitude divided by the vacuum impedance, a universal constant.
If the resistive part of the series impedance of the antenna is R, the power fed to the antenna is P = R · I²/2 (for a sinusoidal current of amplitude I). The intensity of an isotropic antenna is the power so fed divided by the surface area of the sphere of radius r, 4πr². The directive gain is the ratio of the radiated intensity to that isotropic intensity. For the commonly utilized half-wave dipole, the particular formulation works out to a directive gain of 1.64, or equivalently 2.15 dBi. (In most cases the radiation resistance value 73.130 Ω is adequate, and likewise 1.64 and 2.15 dBi are usually the cited values.) Sometimes, the half-wave dipole is taken as a reference instead of the isotropic radiator. The gain is then given in dBd (decibels over dipole): 0 dBd = 2.15 dBi. Realized gain Realized gain differs from gain in that it is "reduced by its impedance mismatch factor." This mismatch induces losses beyond the dissipative losses described above; therefore, realized gain will always be less than gain. Gain may be expressed as absolute gain if further clarification is required to differentiate it from realized gain. Total radiated power Total radiated power (TRP) is the sum of all RF power radiated by the antenna when the source power is included in the measurement. TRP is expressed in watts or the corresponding logarithmic expressions, often dBm or dBW. When testing mobile devices, TRP can be measured while in close proximity to power-absorbing losses such as the body and hand of the user. The TRP can be used to determine body loss (BoL). The body loss is considered as the ratio of TRP measured in the presence of losses to TRP measured in free space. See also Antenna Antenna boresight Antenna effective area Antenna measurement Cardioid References Bibliography Antenna Theory (3rd edition), by C. Balanis, Wiley, 2005, Antenna for all applications (3rd edition), by John D. Kraus, Ronald J. Marhefka, 2002, Directive Gain Telecommunications engineering Engineering ratios
Gain (antenna)
[ "Mathematics", "Engineering" ]
1,827
[ "Telecommunications engineering", "Metrics", "Engineering ratios", "Quantity", "Electrical engineering" ]
49,718
https://en.wikipedia.org/wiki/Poynting%20vector
In physics, the Poynting vector (or Umov–Poynting vector) represents the directional energy flux (the energy transfer per unit area, per unit time) or power flow of an electromagnetic field. The SI unit of the Poynting vector is the watt per square metre (W/m2); kg/s3 in base SI units. It is named after its discoverer John Henry Poynting who first derived it in 1884. Nikolay Umov is also credited with formulating the concept. Oliver Heaviside also discovered it independently in the more general form that recognises the freedom of adding the curl of an arbitrary vector field to the definition. The Poynting vector is used throughout electromagnetics in conjunction with Poynting's theorem, the continuity equation expressing conservation of electromagnetic energy, to calculate the power flow in electromagnetic fields. Definition In Poynting's original paper and in most textbooks, the Poynting vector is defined as the cross product where bold letters represent vectors and E is the electric field vector; H is the magnetic field's auxiliary field vector or magnetizing field. This expression is often called the Abraham form and is the most widely used. The Poynting vector is usually denoted by S or N. In simple terms, the Poynting vector S depicts the direction and rate of transfer of energy, that is power, due to electromagnetic fields in a region of space that may or may not be empty. More rigorously, it is the quantity that must be used to make Poynting's theorem valid. Poynting's theorem essentially says that the difference between the electromagnetic energy entering a region and the electromagnetic energy leaving a region must equal the energy converted or dissipated in that region, that is, turned into a different form of energy (often heat). So if one accepts the validity of the Poynting vector description of electromagnetic energy transfer, then Poynting's theorem is simply a statement of the conservation of energy. If electromagnetic energy is not gained from or lost to other forms of energy within some region (e.g., mechanical energy, or heat), then electromagnetic energy is locally conserved within that region, yielding a continuity equation as a special case of Poynting's theorem: where is the energy density of the electromagnetic field. This frequent condition holds in the following simple example in which the Poynting vector is calculated and seen to be consistent with the usual computation of power in an electric circuit. Example: Power flow in a coaxial cable Although problems in electromagnetics with arbitrary geometries are notoriously difficult to solve, we can find a relatively simple solution in the case of power transmission through a section of coaxial cable analyzed in cylindrical coordinates as depicted in the accompanying diagram. We can take advantage of the model's symmetry: no dependence on θ (circular symmetry) nor on Z (position along the cable). The model (and solution) can be considered simply as a DC circuit with no time dependence, but the following solution applies equally well to the transmission of radio frequency power, as long as we are considering an instant of time (during which the voltage and current don't change), and over a sufficiently short segment of cable (much smaller than a wavelength, so that these quantities are not dependent on Z). The coaxial cable is specified as having an inner conductor of radius R1 and an outer conductor whose inner radius is R2 (its thickness beyond R2 doesn't affect the following analysis). 
In between R1 and R2 the cable contains an ideal dielectric material of relative permittivity εr, and we assume conductors that are non-magnetic (so μ = μ0) and lossless (perfect conductors), all of which are good approximations to real-world coaxial cable in typical situations. The center conductor is held at voltage V and draws a current I toward the right, so we expect a total power flow of P = V · I according to basic laws of electricity. By evaluating the Poynting vector, however, we are able to identify the profile of power flow in terms of the electric and magnetic fields inside the coaxial cable. The electric fields are of course zero inside of each conductor, but in between the conductors (R1 < r < R2) symmetry dictates that they are strictly in the radial direction, and it can be shown (using Gauss's law) that they must fall off in proportion to 1/r, with a constant of proportionality W. That constant can be evaluated by integrating the electric field from R1 to R2, which must equal the negative of the voltage V; this fixes W in terms of V. The magnetic field, again by symmetry, can only be non-zero in the θ direction, that is, a vector field looping around the center conductor at every radius between R1 and R2. Inside the conductors themselves the magnetic field may or may not be zero, but this is of no concern since the Poynting vector in these regions is zero due to the electric field's being zero. Outside the entire coaxial cable, the magnetic field is identically zero since paths in this region enclose a net current of zero (+I in the center conductor and −I in the outer conductor), and again the electric field is zero there anyway. Using Ampère's law in the region from R1 to R2, which encloses the current +I in the center conductor but with no contribution from the current in the outer conductor, we find at radius r a field magnitude of I/(2πr). Now, from an electric field in the radial direction and a tangential magnetic field, the Poynting vector, given by the cross-product of these, is only non-zero in the Z direction, along the direction of the coaxial cable itself, as we would expect. Again a function of r only, S(r) can be evaluated, with W given above in terms of the center conductor voltage V. The total power flowing down the coaxial cable can be computed by integrating over the entire cross section A of the cable in between the conductors. Substituting the earlier solution for the constant W, we find that the power given by integrating the Poynting vector over a cross section of the coaxial cable is exactly equal to the product of voltage and current, as one would have computed for the power delivered using basic laws of electricity. Other similar examples in which the P = V · I result can be analytically calculated are: the parallel-plate transmission line, using Cartesian coordinates, and the two-wire transmission line, using bipolar cylindrical coordinates. Other forms In the "microscopic" version of Maxwell's equations, this definition must be replaced by a definition in terms of the electric field E and the magnetic flux density B (described later in the article). It is also possible to combine the electric displacement field D with the magnetic flux B to get the Minkowski form of the Poynting vector, or use D and H to construct yet another version. The choice has been controversial: Pfeifer et al. summarize and to a certain extent resolve the century-long dispute between proponents of the Abraham and Minkowski forms (see Abraham–Minkowski controversy). The Poynting vector represents the particular case of an energy flux vector for electromagnetic energy.
However, any type of energy has its direction of movement in space, as well as its density, so energy flux vectors can be defined for other types of energy as well, e.g., for mechanical energy. The Umov–Poynting vector discovered by Nikolay Umov in 1874 describes energy flux in liquid and elastic media in a completely generalized view. Interpretation The Poynting vector appears in Poynting's theorem (see that article for the derivation), an energy-conservation law: where Jf is the current density of free charges and u is the electromagnetic energy density for linear, nondispersive materials, given by where E is the electric field; D is the electric displacement field; B is the magnetic flux density; H is the magnetizing field. The first term in the right-hand side represents the electromagnetic energy flow into a small volume, while the second term subtracts the work done by the field on free electrical currents, which thereby exits from electromagnetic energy as dissipation, heat, etc. In this definition, bound electrical currents are not included in this term and instead contribute to S and u. For light in free space, the linear momentum density is For linear, nondispersive and isotropic (for simplicity) materials, the constitutive relations can be written as where ε is the permittivity of the material; μ is the permeability of the material. Here ε and μ are scalar, real-valued constants independent of position, direction, and frequency. In principle, this limits Poynting's theorem in this form to fields in vacuum and nondispersive linear materials. A generalization to dispersive materials is possible under certain circumstances at the cost of additional terms. One consequence of the Poynting formula is that for the electromagnetic field to do work, both magnetic and electric fields must be present. The magnetic field alone or the electric field alone cannot do any work. Plane waves In a propagating electromagnetic plane wave in an isotropic lossless medium, the instantaneous Poynting vector always points in the direction of propagation while rapidly oscillating in magnitude. This can be simply seen given that in a plane wave, the magnitude of the magnetic field H(r,t) is given by the magnitude of the electric field vector E(r,t) divided by η, the intrinsic impedance of the transmission medium: where |A| represents the vector norm of A. Since E and H are at right angles to each other, the magnitude of their cross product is the product of their magnitudes. Without loss of generality let us take X to be the direction of the electric field and Y to be the direction of the magnetic field. The instantaneous Poynting vector, given by the cross product of E and H will then be in the positive Z direction: Finding the time-averaged power in the plane wave then requires averaging over the wave period (the inverse frequency of the wave): where Erms is the root mean square (RMS) electric field amplitude. In the important case that E(t) is sinusoidally varying at some frequency with peak amplitude Epeak, Erms is , with the average Poynting vector then given by: This is the most common form for the energy flux of a plane wave, since sinusoidal field amplitudes are most often expressed in terms of their peak values, and complicated problems are typically solved considering only one frequency at a time. However, the expression using Erms is totally general, applying, for instance, in the case of noise whose RMS amplitude can be measured but where the "peak" amplitude is meaningless. 
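A quick numeric check of the plane-wave relations above, as a minimal Python sketch (the 1 kV/m peak field is an arbitrary illustrative value):

import math

ETA_0 = 376.73   # intrinsic impedance of free space (ohms)

def avg_poynting(e_peak, eta=ETA_0):
    """Time-averaged Poynting flux (W/m^2) of a sinusoidal plane wave:
    S_avg = E_rms^2 / eta, with E_rms = E_peak / sqrt(2)."""
    e_rms = e_peak / math.sqrt(2.0)
    return e_rms ** 2 / eta

print(f"{avg_poynting(1000.0):.1f} W/m^2")             # ~1327.2 W/m^2
# Same number from peak values, showing where the factor 1/2 enters:
print(f"{0.5 * 1000.0 * (1000.0 / ETA_0):.1f} W/m^2")  # ~1327.2 W/m^2

The second line makes explicit why the RMS form needs no extra factor of 1/2: the 1/2 is already absorbed into E_rms².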
In free space the intrinsic impedance η is simply given by the impedance of free space η0 ≈ 377 Ω. In non-magnetic dielectrics (such as all transparent materials at optical frequencies) with a specified dielectric constant εr, or in optics with a material whose refractive index is n = √εr, the intrinsic impedance is found as: η = η0 / √εr = η0 / n. In optics, the value of radiated flux crossing a surface, thus the average Poynting vector component in the direction normal to that surface, is technically known as the irradiance, more often simply referred to as the intensity (a somewhat ambiguous term). Formulation in terms of microscopic fields The "microscopic" (differential) version of Maxwell's equations admits only the fundamental fields E and B, without a built-in model of material media. Only the vacuum permittivity and permeability are used, and there is no D or H. When this model is used, the Poynting vector is defined as S = (1/μ0) E × B, where μ0 is the vacuum permeability; E is the electric field vector; B is the magnetic flux density. This is actually the general expression of the Poynting vector. The corresponding form of Poynting's theorem is ∂u/∂t = −∇ · S − J · E, where J is the total current density and the energy density u is given by u = ½(ε0 |E|² + |B|²/μ0), where ε0 is the vacuum permittivity. It can be derived directly from Maxwell's equations in terms of total charge and current and the Lorentz force law only. The two alternative definitions of the Poynting vector are equal in vacuum or in non-magnetic materials, where B = μ0 H. In all other cases, they differ in that S = (1/μ0) E × B and the corresponding u are purely radiative, since the dissipation term −J · E covers the total current, while the E × H definition has contributions from bound currents which are then excluded from the dissipation term. Since only the microscopic fields E and B occur in the derivation of S = (1/μ0) E × B and the energy density, assumptions about any material present are avoided. The Poynting vector and theorem and expression for energy density are universally valid in vacuum and all materials. Time-averaged Poynting vector The above form for the Poynting vector represents the instantaneous power flow due to instantaneous electric and magnetic fields. More commonly, problems in electromagnetics are solved in terms of sinusoidally varying fields at a specified frequency. The results can then be applied more generally, for instance, by representing incoherent radiation as a superposition of such waves at different frequencies and with fluctuating amplitudes. We would thus not be considering the instantaneous E(t) and H(t) used above, but rather a complex (vector) amplitude for each, Em and Hm, which describes a coherent wave's phase (as well as amplitude) using phasor notation. These complex amplitude vectors are not functions of time, as they are understood to refer to oscillations over all time. A phasor such as Em is understood to signify a sinusoidally varying field whose instantaneous amplitude follows the real part of Em e^(jωt), where ω is the (radian) frequency of the sinusoidal wave being considered. In the time domain, it will be seen that the instantaneous power flow will be fluctuating at a frequency of 2ω. But what is normally of interest is the average power flow in which those fluctuations are not considered. In the math below, this is accomplished by integrating over a full cycle T = 2π/ω. The following quantity, still referred to as a "Poynting vector", is expressed directly in terms of the phasors as: Sm = ½ Em × Hm∗, where ∗ denotes the complex conjugate. The time-averaged power flow (according to the instantaneous Poynting vector averaged over a full cycle, for instance) is then given by the real part of Sm. The imaginary part is usually ignored; however, it signifies "reactive power" such as the interference due to a standing wave or the near field of an antenna. In a single electromagnetic plane wave (rather than a standing wave, which can be described as two such waves travelling in opposite directions), E and H are exactly in phase, so Sm is simply a real number according to the above definition. The equivalence of Re(Sm) to the time-average of the instantaneous Poynting vector can be shown as follows. The average of the instantaneous Poynting vector S over time is given by: ⟨S⟩ = ⟨Re(Em e^(jωt)) × Re(Hm e^(jωt))⟩ = ½ Re(Em × Hm∗) + ½ ⟨Re(Em × Hm e^(2jωt))⟩. The second term is the double-frequency component having an average value of zero, so we find: ⟨S⟩ = ½ Re(Em × Hm∗) = Re(Sm). According to some conventions, the factor of 1/2 in the above definition may be left out. Multiplication by 1/2 is required to properly describe the power flow since the magnitudes of Em and Hm refer to the peak fields of the oscillating quantities. If rather the fields are described in terms of their root mean square (RMS) values (which are each smaller by the factor √2), then the correct average power flow is obtained without multiplication by 1/2.
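The equivalence just derived is easy to confirm numerically; in this sketch scalar complex amplitudes (with arbitrary magnitudes and phases) stand in for the x component of E and the y component of H, so S points along z:

import numpy as np

# Check that Re(Sm), with Sm = 0.5 * Em * conj(Hm), matches the time
# average of the instantaneous Poynting vector.
E_m = 3.0 * np.exp(1j * 0.4)   # arbitrary peak amplitude and phase
H_m = 2.0 * np.exp(1j * 1.1)   # deliberately out of phase with E_m

omega = 1.0
T = 2 * np.pi / omega
t = np.linspace(0.0, T, 100_000, endpoint=False)
S_inst = np.real(E_m * np.exp(1j * omega * t)) * np.real(H_m * np.exp(1j * omega * t))

print(np.mean(S_inst))                     # time average of instantaneous S
print(0.5 * np.real(E_m * np.conj(H_m)))   # Re(Sm); the two values agree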
Resistive dissipation If a conductor has significant resistance, then, near the surface of that conductor, the Poynting vector would be tilted toward and impinge upon the conductor. Once the Poynting vector enters the conductor, it is bent to a direction that is almost perpendicular to the surface. This is a consequence of Snell's law and the very slow speed of light inside a conductor. Inside the conductor, the Poynting vector represents energy flow from the electromagnetic field into the wire, producing resistive Joule heating in the wire. For a derivation that starts with Snell's law see Reitz page 454. Radiation pressure The density of the linear momentum of the electromagnetic field is S/c², where S is the magnitude of the Poynting vector and c is the speed of light in free space. The radiation pressure exerted by an electromagnetic wave on the surface of an absorbing target is given by: Prad = ⟨S⟩ / c. Uniqueness of the Poynting vector The Poynting vector occurs in Poynting's theorem only through its divergence ∇ · S, that is, it is only required that the surface integral of the Poynting vector around a closed surface describe the net flow of electromagnetic energy into or out of the enclosed volume. This means that adding a solenoidal vector field (one with zero divergence) to S will result in another field that satisfies this required property of a Poynting vector field according to Poynting's theorem. Since the divergence of any curl is zero, one can add the curl of any vector field to the Poynting vector and the resulting vector field S′ will still satisfy Poynting's theorem. However, even though the Poynting vector was originally formulated only for the sake of Poynting's theorem in which only its divergence appears, it turns out that the above choice of its form is unique. The following section gives an example which illustrates why it is not acceptable to add an arbitrary solenoidal field to E × H. Static fields The consideration of the Poynting vector in static fields shows the relativistic nature of the Maxwell equations and allows a better understanding of the magnetic component of the Lorentz force, qv × B.
To illustrate, consider a cylindrical capacitor located in an H field (pointing into the page) generated by a permanent magnet. Although there are only static electric and magnetic fields, the calculation of the Poynting vector produces a clockwise circular flow of electromagnetic energy, with no beginning or end. While the circulating energy flow may seem unphysical, its existence is necessary to maintain conservation of angular momentum. The momentum of an electromagnetic wave in free space is equal to its power divided by c, the speed of light. Therefore, the circular flow of electromagnetic energy implies an angular momentum. If one were to connect a wire between the two plates of the charged capacitor, then there would be a Lorentz force on that wire while the capacitor is discharging, due to the discharge current and the crossed magnetic field; that force would be tangential to the central axis and thus add angular momentum to the system. That angular momentum would match the "hidden" angular momentum, revealed by the Poynting vector, circulating before the capacitor was discharged. See also Wave vector References Further reading Electromagnetic radiation Optical quantities Vectors (mathematics and physics)
Poynting vector
[ "Physics", "Mathematics" ]
3,921
[ "Physical phenomena", "Physical quantities", "Electromagnetic radiation", "Quantity", "Radiation", "Optical quantities" ]
49,721
https://en.wikipedia.org/wiki/Core%20dump
In computing, a core dump, memory dump, crash dump, storage dump, system dump, or ABEND dump consists of the recorded state of the working memory of a computer program at a specific time, generally when the program has crashed or otherwise terminated abnormally. In practice, other key pieces of program state are usually dumped at the same time, including the processor registers, which may include the program counter and stack pointer, memory management information, and other processor and operating system flags and information. A snapshot dump (or snap dump) is a memory dump requested by the computer operator or by the running program, after which the program is able to continue. Core dumps are often used to assist in diagnosing and debugging errors in computer programs. On many operating systems, a fatal exception in a program automatically triggers a core dump. By extension, the phrase "to dump core" has come to mean in many cases, any fatal error, regardless of whether a record of the program memory exists. The term "core dump", "memory dump", or just "dump" has also become jargon to indicate any output of a large amount of raw data for further examination or other purposes. Background The name comes from magnetic-core memory, the principal form of random-access memory from the 1950s to the 1970s. The name has remained long after magnetic-core technology became obsolete. Earliest core dumps were paper printouts of the contents of memory, typically arranged in columns of octal or hexadecimal numbers (a "hex dump"), sometimes accompanied by their interpretations as machine language instructions, text strings, or decimal or floating-point numbers (cf. disassembler). As memory sizes increased and post-mortem analysis utilities were developed, dumps were written to magnetic media like tape or disk. Instead of only displaying the contents of the applicable memory, modern operating systems typically generate a file containing an image of the memory belonging to the crashed process, or the memory images of parts of the address space related to that process, along with other information such as the values of processor registers, program counter, system flags, and other information useful in determining the root cause of the crash. These files can be viewed as text, printed, or analysed with specialised tools such as elfdump on Unix and Unix-like systems, objdump and kdump on Linux, IPCS (Interactive Problem Control System) on IBM z/OS, DVF (Dump Viewing Facility) on IBM z/VM, WinDbg on Microsoft Windows, Valgrind, or other debuggers. In some operating systems an application or operator may request a snapshot of selected storage blocks, rather than all of the storage used by the application or operating system. Uses Core dumps can serve as useful debugging aids in several situations. On early standalone or batch-processing systems, core dumps allowed a user to debug a program without monopolizing the (very expensive) computing facility for debugging; a printout could also be more convenient than debugging using front panel switches and lights. On shared computers, whether time-sharing, batch processing, or server systems, core dumps allow off-line debugging of the operating system, so that the system can go back into operation immediately. Core dumps allow a user to save a crash for later or off-site analysis, or comparison with other crashes. For embedded computers, it may be impractical to support debugging on the computer itself, so analysis of a dump may take place on a different computer. 
Some operating systems such as early versions of Unix did not support attaching debuggers to running processes, so core dumps were necessary to run a debugger on a process's memory contents. Core dumps can be used to capture data freed during dynamic memory allocation and may thus be used to retrieve information from a program that is no longer running. In the absence of an interactive debugger, the core dump may be used by an assiduous programmer to determine the error from direct examination. Snap dumps are sometimes a convenient way for applications to record quick and dirty debugging output. Analysis A core dump generally represents the complete contents of the dumped regions of the address space of the dumped process. Depending on the operating system, the dump may contain few or no data structures to aid interpretation of the memory regions. In these systems, successful interpretation requires that the program or user trying to interpret the dump understand the structure of the program's memory use. A debugger can use a symbol table, if one exists, to help the programmer interpret dumps, identifying variables symbolically and displaying source code; if the symbol table is not available, less interpretation of the dump is possible, but there might still be enough possible to determine the cause of the problem. There are also special-purpose tools called dump analyzers to analyze dumps. One popular tool, available on many operating systems, is the GNU binutils' objdump. On modern Unix-like operating systems, administrators and programmers can read core dump files using the GNU Binutils Binary File Descriptor library (BFD), and the GNU Debugger (gdb) and objdump that use this library. This library will supply the raw data for a given address in a memory region from a core dump; it does not know anything about variables or data structures in that memory region, so the application using the library to read the core dump will have to determine the addresses of variables and determine the layout of data structures itself, for example by using the symbol table for the program undergoing debugging. Analysts of crash dumps from Linux systems can use kdump or the Linux Kernel Crash Dump (LKCD). Core dumps can save the context (state) of a process at a given point for returning to it later. Systems can be made highly available by transferring core between processors, sometimes via core dump files themselves. Core can also be dumped onto a remote host over a network (which is a security risk). Users of IBM mainframes running z/OS can browse SVC and transaction dumps using Interactive Problem Control System (IPCS), a full-screen dump reader which was originally introduced in OS/VS2 (MVS), supports user-written scripts in REXX, and supports point-and-shoot browsing of dumps. Core-dump files Format In older and simpler operating systems, each process had a contiguous address space, so a dump file was sometimes simply a file with the sequence of bytes, digits, characters or words. On other early machines a dump file contained discrete records, each containing a storage address and the associated contents. On early machines, the dump was often written by a stand-alone dump program rather than by the application or the operating system. The IBSYS monitor for the IBM 7090 included a System Core-Storage Dump Program that supported post-mortem and snap dumps.
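The classic hex-dump rendering of such a raw byte sequence (an offset column, the bytes in hexadecimal, and their ASCII interpretation) is simple to reproduce; here is a minimal Python sketch of that layout:

# Render a byte string in the conventional hex-dump layout.
def hex_dump(data: bytes, width: int = 16) -> str:
    lines = []
    for offset in range(0, len(data), width):
        chunk = data[offset:offset + width]
        hexed = " ".join(f"{b:02x}" for b in chunk)
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{offset:08x}  {hexed:<{width * 3}} {text}")
    return "\n".join(lines)

print(hex_dump(b"Hello, core dump!\x00\x01\x02"))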
On the IBM System/360, the standard operating systems wrote formatted ABEND and SNAP dumps, with the addresses, registers, storage contents, etc., all converted into printable forms. Later releases added the ability to write unformatted dumps, called at that time core image dumps (also known as SVC dumps). In modern operating systems, a process address space may contain gaps, and it may share pages with other processes or files, so more elaborate representations are used; they may also include other information about the state of the program at the time of the dump. In Unix-like systems, core dumps generally use the standard executable image format: a.out in older versions of Unix, ELF in modern Linux, System V, Solaris, and BSD systems, Mach-O in macOS, etc. Naming OS/360 and successors In OS/360 and successors, a job may assign arbitrary data set names (DSNs) to the ddnames SYSABEND and SYSUDUMP for a formatted ABEND dump and to arbitrary ddnames for SNAP dumps, or define those ddnames as SYSOUT. The Damage Assessment and Repair (DAR) facility added an automatic unformatted storage dump to the dataset SYS1.DUMP at the time of failure, as well as a console dump requested by the operator. The newer transaction dump is very similar to the older SVC dump. The Interactive Problem Control System (IPCS), added to OS/VS2 by Selectable Unit (SU) 57 and part of every subsequent MVS release, can be used to interactively analyze storage dumps on DASD. IPCS understands the format and relationships of system control blocks, and can produce a formatted display for analysis. The current versions of IPCS allow inspection of active address spaces without first taking a storage dump. Unix-like Since Solaris 8, the system utility coreadm allows the name and location of core files to be configured. Dumps of user processes are traditionally created as core. On Linux (since versions 2.4.21 and 2.6 of the Linux kernel mainline), a different name can be specified via procfs using the /proc/sys/kernel/core_pattern configuration file; the specified name can also be a template that contains tags substituted by, for example, the executable filename, the process ID, or the reason for the dump. System-wide dumps on modern Unix-like systems often appear as vmcore or vmcore.incomplete.
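A minimal, Linux-specific sketch of inspecting this template follows; the %e, %p and %t specifiers mentioned in the comment are the kernel's documented substitutions for executable name, process ID and timestamp, and the whole example assumes a system where /proc is available:

from pathlib import Path

# Report how core files will be named on the current Linux system by
# reading the core_pattern file described above.
pattern_file = Path("/proc/sys/kernel/core_pattern")
try:
    pattern = pattern_file.read_text().strip()
    print("core files follow the template:", pattern)  # e.g. "core" or "core.%e.%p.%t"
except OSError:
    print("not a Linux system, or /proc is unavailable")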
Others Systems such as Microsoft Windows, which use filename extensions, may use the extension .dmp; for example, core dumps may be named memory.dmp or \Minidump\Mini051509-01.dmp. Windows memory dumps Microsoft Windows supports two memory dump formats, described below. Kernel-mode dumps There are five types of kernel-mode dumps: Complete memory dump contains full physical memory for the target system. Kernel memory dump contains all the memory in use by the kernel at the time of the crash. Small memory dump contains various info such as the stop code, parameters, list of loaded device drivers, etc. Automatic Memory Dump (Windows 8 and later) same as Kernel memory dump, but if the paging file is both System Managed and too small to capture the Kernel memory dump, it will automatically increase the paging file to at least the size of RAM for four weeks, then reduce it to the smaller size. Active memory dump (Windows 10 and later) contains most of the memory in use by the kernel and user-mode applications. To analyze Windows kernel-mode dumps, the Debugging Tools for Windows are used, a set that includes tools like WinDbg and DumpChk. User-mode memory dumps User-mode memory dump, also known as minidump, is a memory dump of a single process. It contains selected data records: full or partial (filtered) process memory; a list of the threads with their call stacks and state (such as registers or TEB); information about handles to the kernel objects; a list of loaded and unloaded libraries. The full list of options is available in the MINIDUMP_TYPE enumeration. Space missions The NASA Voyager program was probably the first craft to routinely utilize the core dump feature in the Deep Space segment. The core dump feature is a mandatory telemetry feature for the Deep Space segment, as it has been proven to minimize system diagnostic costs. The Voyager craft uses routine core dumps to spot memory damage from cosmic ray events. Space mission core dump systems are mostly based on existing toolkits for the target CPU or subsystem. However, over the duration of a mission the core dump subsystem may be substantially modified or enhanced for the specific needs of the mission. See also Database dump Hex dump Stack trace References Notes External links Descriptions of the file format Minidump files Kernel core dumps: Apple Technical Note TN2118: Kernel Core Dumps Debugging Computer errors
Core dump
[ "Technology" ]
2,484
[ "Computer errors" ]
49,723
https://en.wikipedia.org/wiki/Reserved%20word
In a programming language, a reserved word (sometimes known as a reserved identifier) is a word that cannot be used by a programmer as an identifier, such as the name of a variable, function, or label – it is "reserved from use". In brief, an identifier starts with a letter, which is followed by any sequence of letters and digits (in some languages the underscore '_' is also treated as a letter). In an imperative programming language and in many object-oriented programming languages, apart from assignments and subroutine calls, keywords are often used to identify a particular statement, e.g. if, while, do, for, etc. Many languages treat keywords as reserved words, including Ada, C, C++, COBOL, Java, and Pascal. The number of reserved words varies widely from one language to another: C has about 30 while COBOL has about 400. A few languages do not have any reserved words: Fortran and PL/I identify keywords by context, while Algol 60 and Algol 68 generally use stropping to distinguish keywords from programmer-defined identifiers, e.g. .if or 'if' is a keyword distinct from the identifier if. Most programming languages have a standard library (or libraries), e.g. mathematical functions sin, cos, etc. The names provided by a library are not reserved, and can be redefined by a programmer if the library functionality is not required. Distinction When using an integrated development environment (IDE) to develop a program, the IDE will generally highlight reserved words by displaying them in a different colour. In some IDEs, comments may also be highlighted (in yet another colour). This makes it easy for a programmer to notice unexpected use of a reserved word and/or failure to terminate a comment correctly. There may be reserved words which are not keywords. For example, in Java, true and false are reserved words used as Boolean (logical) literals. As another example, in Pascal, div and mod are reserved words used as operators (integer division and remainder). There may also be reserved words which have no defined meaning. For example, in Java, goto and const are listed as reserved words, but are not otherwise mentioned in the Java syntax rules. A keyword such as if or while is used during syntax analysis to determine what sort of statement is being considered. Such analysis is much simpler if keywords are either reserved or stropped. Consider the complexity of using contextual analysis in Fortran 77 to distinguish: IF (B) l1,l2 ! two-way branch, where B is a boolean/logical expression IF (N) l1,l2,l3 ! three-way branch, where N is a numeric expression IF (B) THEN ! start conditional block IF (B) THEN = 3.1 ! conditional assignment to variable THEN IF (B) X = 10 ! single conditional statement IF (B) GOTO l4 ! conditional jump IF (N) = 2 ! assignment to a subscripted variable named IF PL/I can also allow some apparently confusing constructions: IF IF = THEN THEN ... (the second IF and the first THEN are variables). Advantages Programs may be more readable as compared with a programming language which uses stropping. It is easier for an IDE to highlight keywords, without having to do a contextual analysis to determine which words are actually keywords (see the Fortran examples in the previous section). A compiler can be faster, since it can quickly determine if a word is a keyword, without having to do a contextual analysis over the whole of an arbitrarily long statement.
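Some languages expose their reserved-word list programmatically, which makes the quick membership test described above a one-liner. Python, for instance, ships a standard-library keyword module; the softkwlist attribute (available on recent Python versions) additionally lists the language's context-dependent "soft" keywords:

import keyword

print(keyword.kwlist)             # the full list of Python's reserved words
print(keyword.iskeyword("if"))    # True:  "if" cannot be used as an identifier
print(keyword.iskeyword("sin"))   # False: library names are not reserved
print(keyword.softkwlist)         # contextual keywords such as "match"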
Disadvantages A programming language may be difficult for new users to learn because of a (possibly long) list of reserved words to memorize which can't be used as identifiers. It may be difficult to extend the language, because the addition of reserved words for new features might invalidate existing programs or, conversely, "overloading" of existing reserved words with new meanings can be confusing. Porting programs can be problematic because a word not reserved by one system or compiler might be reserved by another. Further reservation Beyond reserving specific lists of words, some languages reserve entire ranges of words, for use as private spaces for a future language version, different dialects, compiler vendor-specific extensions, or for internal use by a compiler, notably in name mangling. This is most often done by using a prefix, often one or more underscores. C and C++ are notable in this respect: C99 reserves identifiers that start with two underscores or an underscore followed by an uppercase letter, and further reserves identifiers that start with a single underscore (in the ordinary and tag spaces) for use in file scope; C++03 further reserves identifiers that contain a double underscore anywhere – this allows the use of a double underscore as a separator (to connect user identifiers), for instance. The frequent use of double underscores in internal identifiers in Python gave rise to the abbreviation dunder; this was coined by Mark Jackson and independently by Tim Hochberg, within minutes of each other, both in reply to the same question in 2002.
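Python itself offers a small observable instance of the compiler-driven name mangling just mentioned: a class attribute written with two leading underscores is rewritten by the compiler to include the class name, as this sketch shows:

# Double leading underscores trigger Python's name mangling: the compiler
# rewrites __balance to _Account__balance, carving out a per-class name
# space by prefix rather than by reserving whole words.
class Account:
    def __init__(self, balance):
        self.__balance = balance    # stored under the name _Account__balance

acct = Account(100)
print(acct._Account__balance)       # 100: the mangled name is still reachable
try:
    print(acct.__balance)           # the unmangled name does not exist here
except AttributeError as exc:
    print(exc)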
Specification The list of reserved words and keywords in a language is defined when the language is developed, and both form part of the language's formal specification. Generally one wishes to minimize the number of reserved words, to avoid restricting valid identifier names. Further, introducing new reserved words breaks existing programs that use those words (it is not backwards compatible), so this is avoided. To prevent this and provide forward compatibility, sometimes words are reserved without having a current use (a reserved word that is not a keyword), as this allows the word to be used in future without breaking existing programs. Alternatively, new language features can be implemented as predefineds, which can be overridden, thus not breaking existing programs. Reasons for flexibility include allowing compiler vendors to extend the specification by including non-standard features, different standard dialects of the language to extend it, or future versions of the language to include additional features. For example, a procedural language may anticipate adding object-oriented capabilities in a future version or some dialect, at which point one might add keywords like class or object. To accommodate this possibility, the current specification may make these reserved words, even if they are not currently used. A notable example is in Java, where const and goto are reserved words — they have no meaning in Java but they also cannot be used as identifiers. By reserving the terms, they can be implemented in future versions of Java, if desired, without breaking older Java source code. For example, there was a proposal in 1999 to add C++-like const to the language, which was possible using the const word, since it was reserved but currently unused; however, this proposal was rejected – notably because even though adding the feature would not break any existing programs, using it in the standard library (notably in collections) would break compatibility. JavaScript also contains a number of reserved words without special functionality; the exact list varies by version and mode. Languages differ significantly in how frequently they introduce new reserved words or keywords and how they name them, with some languages being very conservative and introducing new keywords rarely or never, to avoid breaking existing programs, while other languages introduce new keywords more freely, requiring existing programs to change existing identifiers that conflict. A case study is given by the new keywords in C11 compared with C++11, both from 2011 – recall that in C and C++, identifiers that begin with an underscore followed by an uppercase letter are reserved. C11 introduced the keyword _Thread_local within an existing set of reserved words (those with a certain prefix), and then used a separate facility (macro processing) to allow its use as if it were a new keyword without any prefixing, while C++11 introduced the keyword thread_local despite this not being an existing reserved word, breaking any programs that used this, but without requiring macro processing. Reserved words and language independence Microsoft's .NET Common Language Infrastructure (CLI) specification allows code written in 40+ different programming languages to be combined into a final product. Because of this, identifier/reserved word collisions can occur when code implemented in one language tries to execute code written in another language. For example, a Visual Basic (.NET) library may contain a class definition such as: ' Class Definition of This in Visual Basic.NET: Public Class this ' This class does something... End Class If this is compiled and distributed as part of a toolbox, a C# programmer, wishing to define a variable of type "this", would encounter a problem: 'this' is a reserved word in C#. Thus, the following will not compile in C#: // Using This Class in C#: this x = new this(); // Won't compile! A similar issue arises when accessing members, overriding virtual methods, and identifying namespaces. This is resolved by stropping. To work around this issue, the specification allows placing (in C#) the at-sign before the identifier, which forces it to be considered an identifier rather than a reserved word by the compiler: // Using This Class in C#: @this x = new @this(); // Will compile! For consistency, this use is also permitted in non-public settings such as local variables, parameter names, and private members. See also C reserved keywords List of Java keywords List of SQL reserved words Symbol (programming) References Programming constructs Programming language topics
Reserved word
[ "Engineering" ]
2,056
[ "Software engineering", "Programming language topics" ]
49,726
https://en.wikipedia.org/wiki/Alternating%20group
In mathematics, an alternating group is the group of even permutations of a finite set. The alternating group on a set of n elements is called the alternating group of degree n, or the alternating group on n letters, and is denoted by An or Alt(n). Basic properties For n > 1, the group An is the commutator subgroup of the symmetric group Sn with index 2 and has therefore n!/2 elements. It is the kernel of the signature group homomorphism sgn : Sn → {1, −1} explained under symmetric group. The group An is abelian if and only if n ≤ 3 and simple if and only if n = 3 or n ≥ 5. A5 is the smallest non-abelian simple group, having order 60, and thus the smallest non-solvable group. The group A4 has the Klein four-group V as a proper normal subgroup, namely the identity and the double transpositions {(), (12)(34), (13)(24), (14)(23)}, that is the kernel of the surjection of A4 onto A3 = Z3. We have the exact sequence V → A4 → A3 = Z3. In Galois theory, this map, or rather the corresponding map S4 → S3, corresponds to associating the Lagrange resolvent cubic to a quartic, which allows the quartic polynomial to be solved by radicals, as established by Lodovico Ferrari. Conjugacy classes As in the symmetric group, any two elements of An that are conjugate by an element of An must have the same cycle shape. The converse is not necessarily true, however. If the cycle shape consists only of cycles of odd length with no two cycles the same length, where cycles of length one are included in the cycle type, then there are exactly two conjugacy classes for this cycle shape. Examples: The two permutations (123) and (132) are not conjugate in A3, although they have the same cycle shape, and they are therefore conjugate in S3. The permutation (123)(45678) is not conjugate to its inverse (132)(48765) in A8, although the two permutations have the same cycle shape, so they are conjugate in S8. Relation with symmetric group See Symmetric group. As finite symmetric groups are the groups of all permutations of a set with finitely many elements, and the alternating groups are groups of even permutations, alternating groups are subgroups of finite symmetric groups. Generators and relations For n ≥ 3, An is generated by 3-cycles, since 3-cycles can be obtained by combining pairs of transpositions. This generating set is often used to prove that An is simple for n ≥ 5. Automorphism group For n > 3, except for n = 6, the automorphism group of An is the symmetric group Sn, with inner automorphism group An and outer automorphism group Z2; the outer automorphism comes from conjugation by an odd permutation. For n = 1 and 2, the automorphism group is trivial. For n = 3 the automorphism group is Z2, with trivial inner automorphism group and outer automorphism group Z2. The outer automorphism group of A6 is the Klein four-group V = Z2 × Z2, and is related to the outer automorphism of S6. The extra outer automorphism in A6 swaps the 3-cycles (like (123)) with elements of shape 3² (like (123)(456)). Exceptional isomorphisms There are some exceptional isomorphisms between some of the small alternating groups and small groups of Lie type, particularly projective special linear groups. These are: A4 is isomorphic to PSL2(3) and the symmetry group of chiral tetrahedral symmetry. A5 is isomorphic to PSL2(4), PSL2(5), and the symmetry group of chiral icosahedral symmetry (an indirect isomorphism of PSL2(4) with A5 follows from a classification of simple groups of order 60; a direct proof is also possible). A6 is isomorphic to PSL2(9) and PSp4(2)'. A8 is isomorphic to PSL4(2). More obviously, A3 is isomorphic to the cyclic group Z3, and A0, A1, and A2 are isomorphic to the trivial group (which is also SL1(q) = PSL1(q) for any q).
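For small n these basic facts are easy to spot-check by brute force; the following Python sketch computes parities by counting inversions, confirms that |An| = n!/2, and spot-checks closure under composition:

from itertools import permutations
from math import factorial

def sign(p):
    # parity via the inversion count of the one-line form p
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inv % 2 else 1

for n in range(2, 7):
    A_n = [p for p in permutations(range(n)) if sign(p) == 1]
    # spot-check closure: (p o q)(i) = p[q[i]] stays even for even p, q
    closed = all(sign(tuple(p[i] for i in q)) == 1
                 for p in A_n[:15] for q in A_n[:15])
    print(n, len(A_n) == factorial(n) // 2, closed)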
Examples S4 and A4 Example A5 as a subgroup of 3-space rotations A5 is the group of isometries of a dodecahedron in 3-space, so there is a representation A5 → SO(3). In this picture the vertices of the polyhedra represent the elements of the group, with the center of the sphere representing the identity element. Each vertex represents a rotation about the axis pointing from the center to that vertex, by an angle equal to the distance from the origin, in radians. Vertices in the same polyhedron are in the same conjugacy class. Since the conjugacy class equation for A5 is 1 + 15 + 20 + 12 + 12 = 60, we obtain four distinct (nontrivial) polyhedra. The vertices of each polyhedron are in bijective correspondence with the elements of its conjugacy class, with the exception of the conjugacy class of (2,2)-cycles, which is represented by an icosidodecahedron on the outer surface, with its antipodal vertices identified with each other. The reason for this redundancy is that the corresponding rotations are by π radians, and so can be represented by a vector of length π in either of two directions. Thus the class of (2,2)-cycles contains 15 elements, while the icosidodecahedron has 30 vertices. The two conjugacy classes of twelve 5-cycles in A5 are represented by two icosahedra, of radii 2π/5 and 4π/5, respectively. The nontrivial outer automorphism of A5 interchanges these two classes and the corresponding icosahedra. Example: the 15 puzzle It can be proved that the 15 puzzle, a famous example of the sliding puzzle, can be represented by the alternating group A15, because the combinations of the 15 puzzle can be generated by 3-cycles. In fact, any sliding puzzle with square tiles of equal size can be represented by A2k−1. Subgroups A4 is the smallest group demonstrating that the converse of Lagrange's theorem is not true in general: given a finite group G and a divisor d of |G|, there does not necessarily exist a subgroup of G with order d: the group A4, of order 12, has no subgroup of order 6 (as verified in the sketch below). A subgroup of three elements (generated by a cyclic rotation of three objects) together with any distinct nontrivial element generates the whole group. For all n ≥ 5, An has no nontrivial (that is, proper) normal subgroups. Thus, An is a simple group for all n ≥ 5. A5 is the smallest non-solvable group.
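The missing order-6 subgroup can be verified exhaustively; this short Python sketch enumerates all 6-element subsets of A4 and checks each for closure (on a finite set, closure plus the identity already guarantees a subgroup):

from itertools import permutations, combinations

def compose(p, q):    # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def sign(p):
    inv = sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4))
    return -1 if inv % 2 else 1

A4 = [p for p in permutations(range(4)) if sign(p) == 1]
identity = tuple(range(4))

def is_subgroup(S):
    return identity in S and all(compose(a, b) in S for a in S for b in S)

# No 6-element subset of the 12-element group A4 is closed: prints False
print(any(is_subgroup(set(c)) for c in combinations(A4, 6)))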
Group homology The group homology of the alternating groups exhibits stabilization, as in stable homotopy theory: for sufficiently large n, it is constant. However, there is some low-dimensional exceptional homology. Note that the homology of the symmetric group exhibits similar stabilization, but without the low-dimensional exceptions (additional homology elements). H1: Abelianization The first homology group coincides with the abelianization, and (since An is perfect, except for the cited exceptions) is thus: H1(An, Z) = Z1 for n = 0, 1, 2; H1(A3, Z) = A3^ab = A3 = Z3; H1(A4, Z) = A4^ab = Z3; H1(An, Z) = Z1 for n ≥ 5. This is easily seen directly, as follows. An is generated by 3-cycles – so the only non-trivial abelianization maps are An → Z3, since order-3 elements must map to order-3 elements – and for n ≥ 5 all 3-cycles are conjugate, so they must map to the same element in the abelianization, since conjugation is trivial in abelian groups. Thus a 3-cycle like (123) must map to the same element as its inverse (321), and thus must map to the identity, as it must then have order dividing 2 and 3, so the abelianization is trivial. For n ≤ 2, An is trivial, and thus has trivial abelianization. For A3 and A4 one can compute the abelianization directly, noting that the 3-cycles form two conjugacy classes (rather than all being conjugate) and there are non-trivial maps A3 → Z3 (in fact an isomorphism) and A4 → Z3. H2: Schur multipliers The Schur multipliers of the alternating groups An (in the case where n is at least 5) are the cyclic groups of order 2, except in the case where n is either 6 or 7, in which case there is also a triple cover. In these cases, then, the Schur multiplier is (the cyclic group) of order 6. These were first computed by Schur (1911). H2(An, Z) = Z1 for n = 1, 2, 3; H2(An, Z) = Z2 for n = 4, 5; H2(An, Z) = Z6 for n = 6, 7; H2(An, Z) = Z2 for n ≥ 8. Notes References External links Finite groups Permutation groups
Alternating group
[ "Mathematics" ]
1,909
[ "Mathematical structures", "Algebraic structures", "Finite groups" ]
49,727
https://en.wikipedia.org/wiki/Parity%20of%20a%20permutation
In mathematics, when X is a finite set with at least two elements, the permutations of X (i.e. the bijective functions from X to X) fall into two classes of equal size: the even permutations and the odd permutations. If any total ordering of X is fixed, the parity (oddness or evenness) of a permutation σ of X can be defined as the parity of the number of inversions for σ, i.e., of pairs of elements x, y of X such that x < y and σ(x) > σ(y). The sign, signature, or signum of a permutation σ is denoted sgn(σ) and defined as +1 if σ is even and −1 if σ is odd. The signature defines the alternating character of the symmetric group Sn. Another notation for the sign of a permutation is given by the more general Levi-Civita symbol (εσ), which is defined for all maps from X to X, and has value zero for non-bijective maps. The sign of a permutation can be explicitly expressed as sgn(σ) = (−1)^N(σ), where N(σ) is the number of inversions in σ. Alternatively, the sign of a permutation σ can be defined from its decomposition into the product of transpositions as sgn(σ) = (−1)^m, where m is the number of transpositions in the decomposition. Although such a decomposition is not unique, the parity of the number of transpositions in all decompositions is the same, implying that the sign of a permutation is well-defined. Example Consider the permutation σ of the set {1, 2, 3, 4, 5} defined by σ(1) = 3, σ(2) = 4, σ(3) = 5, σ(4) = 2, and σ(5) = 1. In one-line notation, this permutation is denoted 34521. It can be obtained from the identity permutation 12345 by three transpositions: first exchange the numbers 2 and 4, then exchange 3 and 5, and finally exchange 1 and 3. This shows that the given permutation σ is odd. Following the method of the cycle notation article, this could be written, composing from right to left, as σ = (1 3)(3 5)(2 4). There are many other ways of writing σ as a composition of transpositions, for instance σ = (1 3)(3 5)(2 4)(1 2)(1 2), but it is impossible to write it as a product of an even number of transpositions.
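The worked example can be checked mechanically; this Python sketch counts the inversions of 34521 and then rebuilds σ from the three exchanges described above:

sigma = (3, 4, 5, 2, 1)   # one-line notation: sigma(1)=3, ..., sigma(5)=1

inversions = sum(sigma[i] > sigma[j]
                 for i in range(5) for j in range(i + 1, 5))
print(inversions, (-1) ** inversions)   # 7 inversions, so sign -1: odd

def transpose(p, a, b):
    # exchange the values a and b wherever they appear in p
    return tuple(b if x == a else a if x == b else x for x in p)

p = (1, 2, 3, 4, 5)                      # the identity permutation 12345
for a, b in [(2, 4), (3, 5), (1, 3)]:    # the three exchanges from the text
    p = transpose(p, a, b)
print(p == sigma)   # True: three (an odd number of) transpositions suffice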
Properties The identity permutation is an even permutation. An even permutation can be obtained as the composition of an even number (and only an even number) of exchanges (called transpositions) of two elements, while an odd permutation can be obtained by (only) an odd number of transpositions. The following rules follow directly from the corresponding rules about addition of integers: the composition of two even permutations is even; the composition of two odd permutations is even; the composition of an odd and an even permutation is odd. From these it follows that the inverse of every even permutation is even, and the inverse of every odd permutation is odd. Considering the symmetric group Sn of all permutations of the set {1, ..., n}, we can conclude that the map sgn : Sn → {1, −1} that assigns to every permutation its signature is a group homomorphism. Furthermore, we see that the even permutations form a subgroup of Sn. This is the alternating group on n letters, denoted by An. It is the kernel of the homomorphism sgn. The odd permutations cannot form a subgroup, since the composite of two odd permutations is even, but they form a coset of An (in Sn). If n ≥ 2, then there are just as many even permutations in Sn as there are odd ones; consequently, An contains n!/2 permutations. (The reason is that if σ is even then (1 2)σ is odd, and if σ is odd then (1 2)σ is even, and these two maps are inverse to each other.) A cycle is even if and only if its length is odd. This follows from formulas like (1 2 3 4 5) = (1 2)(2 3)(3 4)(4 5), in which a cycle of odd length 5 is written as an even number (four) of transpositions. In practice, in order to determine whether a given permutation is even or odd, one writes the permutation as a product of disjoint cycles. The permutation is odd if and only if this factorization contains an odd number of even-length cycles. Another method for determining whether a given permutation is even or odd is to construct the corresponding permutation matrix and compute its determinant. The value of the determinant is the same as the sign of the permutation. Every permutation of odd order must be even. The permutation (1 2)(3 4) in A4 shows that the converse is not true in general. Equivalence of the two definitions This section presents proofs that the parity of a permutation σ can be defined in two equivalent ways: as the parity of the number of inversions in σ (under any ordering); or as the parity of the number of transpositions that σ can be decomposed to (however we choose to decompose it). Other definitions and proofs The parity of a permutation of n points is also encoded in its cycle structure. Let σ = (i1 i2 ... ir+1)(j1 j2 ... js+1)...(ℓ1 ℓ2 ... ℓu+1) be the unique decomposition of σ into disjoint cycles, which can be composed in any order because they commute. A cycle involving k + 1 points can always be obtained by composing k transpositions (2-cycles): (a1 a2 ... ak+1) = (a1 a2)(a2 a3)...(ak ak+1), so call k the size of the cycle, and observe that, under this definition, transpositions are cycles of size 1. From a decomposition into m disjoint cycles we can obtain a decomposition of σ into k1 + k2 + ... + km transpositions, where ki is the size of the ith cycle. The number N(σ) = k1 + k2 + ... + km is called the discriminant of σ, and can also be computed as n minus the number of disjoint cycles, if we take care to include the fixed points of σ as 1-cycles. Suppose a transposition (a b) is applied after a permutation σ. When a and b are in different cycles of σ then N((a b)σ) = N(σ) + 1, and if a and b are in the same cycle of σ then N((a b)σ) = N(σ) − 1. In either case, it can be seen that N((a b)σ) = N(σ) ± 1, so the parity of N((a b)σ) will be different from the parity of N(σ). If σ = t1 t2 ... tr is an arbitrary decomposition of a permutation σ into transpositions, by applying the r transpositions t1 after t2 after ... after tr after the identity (whose N is zero) we observe that N(σ) and r have the same parity. By defining the parity of σ as the parity of N(σ), a permutation that has an even-length decomposition is an even permutation and a permutation that has an odd-length decomposition is an odd permutation. Remarks A careful examination of the above argument shows that r ≥ N(σ), and since any decomposition of σ into cycles whose sizes sum to r can be expressed as a composition of r transpositions, the number N(σ) is the minimum possible sum of the sizes of the cycles in a decomposition of σ, including the cases in which all cycles are transpositions. This proof does not introduce a (possibly arbitrary) order into the set of points on which σ acts. Generalizations Parity can be generalized to Coxeter groups: one defines a length function ℓ(v), which depends on a choice of generators (for the symmetric group, adjacent transpositions), and then the function v ↦ (−1)^ℓ(v) gives a generalized sign map. See also The fifteen puzzle is a classic application Zolotarev's lemma Notes References Group theory Permutations Parity (mathematics) Articles containing proofs Sign (mathematics)
Parity of a permutation
[ "Mathematics" ]
1,617
[ "Functions and mappings", "Permutations", "Sign (mathematics)", "Mathematical objects", "Combinatorics", "Group theory", "Fields of abstract algebra", "Mathematical relations", "Articles containing proofs", "Numbers" ]
49,761
https://en.wikipedia.org/wiki/Punched%20tape
Punched tape or perforated paper tape is a form of data storage device that consists of a long strip of paper through which small holes are punched. It was developed from, and was subsequently used alongside, punched cards, the difference being that the tape is continuous. Punched cards, and chains of punched cards, were used for control of looms in the 18th century. Use for telegraphy systems started in 1842. Punched tapes were used throughout the 19th century and much of the 20th for programmable looms, teleprinter communication, for input to computers of the 1950s and 1960s, and later as a storage medium for minicomputers and CNC machine tools. During the Second World War, high-speed punched tape systems using optical readout methods were used in code breaking. Punched tape was used to transmit data for manufacture of read-only memory chips. History Perforated paper tapes were first used by Basile Bouchon in 1725 to control looms. However, the paper tapes were expensive to create, fragile, and difficult to repair. By 1801, Joseph Marie Jacquard had developed machines to create paper tapes by tying punched cards in a sequence for Jacquard looms. The resulting paper tape, also called a "chain of cards", was stronger and simpler both to create and to repair. This led to the concept of communicating data not as a stream of individual cards, but as one "continuous card" (or tape). Paper tapes constructed from punched cards were widely used throughout the 19th century for controlling looms. Many professional embroidery operations still refer to those individuals who create the designs and machine patterns as punchers, even though punched cards and paper tape were eventually phased out in the 1990s. In 1842, a French patent by Claude Seytre described a piano playing device that read data from perforated paper rolls. By 1900, wide perforated music rolls for player pianos were used to distribute popular music to mass markets. In 1846, Alexander Bain used punched tape to send telegrams. This technology was adopted by Charles Wheatstone in 1857 for the Wheatstone system used for the automated preparation, storage and transmission of data in telegraphy. In the 1880s, Tolbert Lanston invented the Monotype typesetting system, which consisted of a keyboard and a composition caster. The tape, punched with the keyboard, was later read by the caster, which produced lead type according to the combinations of holes in up to 31 positions. The tape reader used compressed air, which passed through the holes and was directed into certain mechanisms of the caster. The system went into commercial use in 1897 and was in production well into the 1970s, undergoing several changes along the way. Modern use In the 21st century, punched tape is obsolete except among hobbyists. In computer numerical control (CNC) machining applications, though paper tape has been superseded by digital memory, some modern systems still measure the size of stored CNC programs in feet or meters, corresponding to the equivalent length if the data were actually punched on paper tape. Formats Data was represented by the presence or absence of a hole at a particular location. Tapes originally had five rows of holes for data across the width of the tape. Later tapes had more rows. A 1944 electro-mechanical programmable calculating machine, the Automatic Sequence Controlled Calculator or Harvard Mark I, used paper tape with 24 rows; the IBM Selective Sequence Electronic Calculator (SSEC) used paper tape with 74 rows.
Australia's 1951 electronic computer, CSIRAC, used wide paper tape with twelve rows. A row of smaller sprocket holes was always punched, to be used to synchronize tape movement. Originally, this was done using a wheel with radial teeth called a sprocket wheel. Later, optical readers made use of the sprocket holes to generate timing pulses. The sprocket holes were slightly closer to one edge of the tape, dividing the tape into unequal widths, to make it unambiguous which way to orient the tape in the reader. The bits on the narrower width of the tape were generally the least significant bits when the code was represented as numbers in a digital system. Materials Many early machines used oiled paper tape, which was pre-impregnated with a light machine oil, to lubricate the reader and punch mechanisms. The oil impregnation usually made the paper somewhat translucent and slippery, and excess oil could transfer to clothing or any surfaces it contacted. Later optical tape readers often specified non-oiled opaque paper tape, which was less prone to depositing oily debris on the optical sensors and causing read errors. Another innovation was fanfold paper tape, which was easier to store compactly and less prone to tangling, as compared to rolled paper tape. For heavy-duty or repetitive use, polyester Mylar tape was often used. This tough, durable plastic film was usually thinner than paper tapes, but could still be used in many devices originally designed for paper media. The plastic tape was sometimes transparent, but usually was aluminized to make it opaque enough for use in high-speed optical readers. Dimensions Tape for punching was usually about 0.1 mm (0.004 in) thick. The two most common widths were 11/16 inch (17.46 mm) for five-bit codes, and 1 inch (25.4 mm) for tapes with six or more bits. Hole spacing was 0.1 inch (2.54 mm) in both directions. Data holes were 0.072 inches (1.83 mm) in diameter; sprocket feed holes were 0.046 inches (1.17 mm). Chadless tape Most tape-punching equipment used solid circular punches to create holes in the tape. This process created "chad", or small circular pieces of paper. Managing the disposal of chad was an annoying and complex problem, as the tiny paper pieces had a tendency to escape containment and to interfere with the other electromechanical parts of the teleprinter equipment. Chad from oiled paper tape was particularly problematic, as it tended to clump and build up, rather than flowing freely into a collection container. A variation on the tape punch was a device called a Chadless Printing Reperforator. This machine would punch a received teleprinter signal into tape and print the message on it at the same time, using a printing mechanism similar to that of an ordinary page printer. The tape punch, rather than punching out the usual round holes, would instead punch little U-shaped cuts in the paper, so that no chad would be produced; the "hole" was still filled with a little paper trap-door. By not fully punching out the hole, the printing on the paper remained intact and legible. This enabled operators to read the tape without having to decipher the holes, which would facilitate relaying the message on to another station in the network. Also, there was no "chad box" to empty from time to time. A disadvantage to this technology was that, once punched, chadless tape did not roll up well for storage, because the protruding flaps of paper would catch on the next layer of tape, so it could not be coiled up tightly. Another disadvantage that emerged in time was that there was no reliable way to read chadless tape in later high-speed readers, which used optical sensing.
However, the mechanical tape readers used in most standard-speed equipment had no problem with chadless tape, because they sensed the holes by means of blunt spring-loaded mechanical sensing pins, which easily pushed the paper flaps out of the way. Encoding Text was encoded in several ways. The earliest standard character encoding was Baudot, which dates back to the 19th century and had five holes. The Baudot code was superseded by modified five-hole codes such as the Murray code (which added carriage return and line feed) which was developed into the Western Union code which was further developed into the International Telegraph Alphabet No. 2 (ITA 2), and a variant called the American Teletypewriter code (USTTY). Other standards, such as Teletypesetter (TTS), FIELDATA and Flexowriter, had six holes. In the early 1960s, the American Standards Association led a project to develop a universal code for data processing, which became the American Standard Code for Information Interchange (ASCII). This seven-level code was adopted by some teleprinter users, including AT&T (Teletype). Others, such as Telex, stayed with the earlier codes. Applications Communications Punched tape was used as a way of storing messages for teletypewriters. Operators typed in the message to the paper tape, and then sent the message at the maximum line speed from the tape. This permitted the operator to prepare the message "off-line" at the operator's best typing speed, and permitted the operator to correct any error prior to transmission. An experienced operator could prepare a message at 135 words per minute (WPM) or more for short periods. The line typically operated at 75 WPM, but it operated continuously. By preparing the tape "off-line" and then sending the message with a tape reader, the line could operate continuously rather than depending on continuous "on-line" typing by a single operator. Typically, a single 75 WPM line supported three or more teletype operators working offline. Tapes punched at the receiving end could be used to relay messages to another station. Large store and forward networks were developed using these techniques. Paper tape could be read into computers at up to 1,000 characters per second. In 1963, a Danish company called Regnecentralen introduced a paper tape reader called RC 2000 that could read 2,000 characters per second; later they increased the speed further, up to 2,500 cps. As early as World War II, the Heath Robinson tape reader, used by Allied codebreakers, was capable of 2,000 cps while Colossus could run at 5,000 cps using an optical tape reader designed by Arnold Lynch. Minicomputers When the first minicomputers were being released, most manufacturers turned to the existing mass-produced ASCII teleprinters (primarily the Teletype Model 33, capable of ten ASCII characters per second throughput) as a low-cost solution for keyboard input and printer output. The commonly specified Model 33 ASR included a paper tape punch/reader, where ASR stands for "Automatic Send/Receive" as opposed to the punchless/readerless KSR – Keyboard Send/Receive and RO – Receive Only models. As a side effect, punched tape became a popular medium for low-cost minicomputer data and program storage, and it was common to find a selection of tapes containing useful programs in most minicomputer installations. Faster optical readers were also common. 
Binary data transfer to or from these minicomputers was often accomplished using a doubly encoded technique to compensate for the relatively high error rate of punches and readers. The low-level encoding was typically ASCII, further encoded and framed in various schemes such as Intel Hex, in which a binary value of "01011010" would be represented by the ASCII characters "5A". Framing, addressing and checksum (primarily in ASCII hex characters) information helped with error detection. Efficiencies of such an encoding scheme are on the order of 35–40% (e.g., 36% from 44 8-bit ASCII characters being needed to represent sixteen bytes of binary data per frame). Computer-aided manufacturing In the 1970s, computer-aided manufacturing equipment often used paper tape. A paper tape reader was smaller and less expensive than Hollerith card or magnetic tape readers, and the medium was reasonably reliable in a manufacturing environment. Paper tape was an important storage medium for computer-controlled wire-wrap machines, for example. Premium black waxed and lubricated long-fiber papers, and Mylar film tape, were developed so that heavily used production tapes would last longer. Data transfer for ROM and EPROM programming In the 1970s through the early 1980s, paper tape was commonly used to transfer binary data for incorporation in either mask-programmable read-only memory (ROM) chips or their erasable counterparts, EPROMs. A significant variety of encoding formats were developed for use in computer and ROM/EPROM data transfer. Encoding formats commonly used were primarily driven by those formats that EPROM programming devices supported, and included various ASCII hex variants as well as a number of proprietary formats. A much more primitive, and much more verbose, high-level encoding scheme was also used: BNPF (Begin-Negative-Positive-Finish), also written as BPNF (Begin-Positive-Negative-Finish). In BNPF encoding, a single byte (8 bits) would be represented by a highly redundant character framing sequence starting with a single uppercase ASCII "B", eight ASCII characters where a "0" would be represented by an "N" and a "1" would be represented by a "P", followed by an ending ASCII "F". These ten-character ASCII sequences were separated by one or more whitespace characters, therefore using at least eleven ASCII characters for each byte stored (9% efficiency). The ASCII "N" and "P" characters differ in four bit positions, providing excellent protection from single punch errors. Alternative schemes named BHLF (Begin-High-Low-Finish) and B10F (Begin-One-Zero-Finish) were also available, where either "L" and "H" or "0" and "1" were used to represent data bits, but in both of these encoding schemes the two data-bearing ASCII characters differ in only one bit position, providing very poor single punch error detection.
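A minimal Python sketch of a BNPF encoder and decoder as just described follows; the most-significant-bit-first ordering within each frame is an assumption for illustration, not something the text above specifies:

# Each byte becomes "B" + eight "P"/"N" characters (P = 1, N = 0) + "F".
def bnpf_encode(data: bytes) -> str:
    return " ".join(
        "B" + "".join("P" if (byte >> bit) & 1 else "N"
                      for bit in range(7, -1, -1)) + "F"
        for byte in data)

def bnpf_decode(text: str) -> bytes:
    out = bytearray()
    for word in text.split():
        assert len(word) == 10 and word[0] == "B" and word[-1] == "F"
        out.append(int(word[1:-1].replace("P", "1").replace("N", "0"), 2))
    return bytes(out)

frame = bnpf_encode(b"\x5a")        # the same 01011010 value used above
print(frame)                        # BNPNPPNPNF
print(bnpf_decode(frame) == b"\x5a")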
With the wire services coming in on a device that punched paper tape, the tape could be put into a paper tape reader on the Linotype, which would then create the lead slugs without the operator having to retype all the incoming stories. This also allowed newspapers to use devices such as the Friden Flexowriter to convert typing to lead type via tape. Even after the demise of Linotype and hot lead typesetting, many early phototypesetter devices used paper tape readers. If an error was found at one position on the six-level tape, that character could be turned into a null character, to be skipped, by punching out the remaining unpunched positions with what was known as a "chicken plucker". It looked like a strawberry stem remover that, pressed with thumb and forefinger, could punch out the remaining positions, one hole at a time. Cryptography Vernam ciphers were invented in 1917 to encrypt teleprinter communications using a key stored on paper tape. During the last third of the 20th century, the National Security Agency (NSA) used punched paper tape to distribute cryptographic keys. The eight-level paper tapes were distributed under strict accounting controls and read by a fill device, such as the hand-held KOI-18, that was temporarily connected to each security device that needed new keys. The NSA has been trying to replace this method with a more secure electronic key management system (EKMS), but paper tape was apparently still being employed. The paper tape canister is a tamper-resistant container with features to prevent undetected alteration of the contents. Advantages and limitations Acid-free paper or Mylar tapes can be read many decades after manufacture, in contrast with magnetic tape, which can deteriorate and become unreadable with time. The hole patterns of punched tape can be decoded by eye if necessary, and even editing of a tape is possible by manual cutting and splicing. Unlike magnetic tape, the punched data cannot be altered by magnetic fields such as those produced by electric motors. In cryptography applications, a punched tape used to distribute a key can be rapidly and completely destroyed by burning, preventing the key from falling into the hands of an enemy. The reliability of paper tape punching operations was a concern, so for critical applications a newly punched tape could be read back after punching to verify its contents. Rewinding a tape required a takeup reel or other measures to avoid tearing or tangling the tape. In some uses, "fan fold" tape simplified handling, as the tape would refold into a "takeup tank" ready to be re-read. The information density of punched tape was low compared with magnetic tape, making large datasets clumsy to handle in punched tape form. Gallery See also Bit bucket Book music Friden Flexowriter Key punch Tape library Zygalski sheets References External links A song mentioning paper tape Various punched media Olympia Flexowriter Detailed description of two paper tape code systems, Baudot code and the system used by the ILLIAC computer Working paper tape punch/reader GNT 3601, Musée Bolo, YouTube Computer storage tape media History of computing Paper data storage Telegraphy
Punched tape
[ "Technology" ]
3,573
[ "Computers", "History of computing" ]
49,827
https://en.wikipedia.org/wiki/Arcade%20video%20game
An arcade video game is an arcade game that takes player input from its controls, processes it through electrical or computerized components, and displays output to an electronic monitor or similar display. All arcade video games are coin-operated or accept other means of payment, are housed in an arcade cabinet, and are located in amusement arcades alongside other kinds of arcade games. Until the early 2000s, arcade video games were the largest and most technologically advanced segment of the video game industry. The early prototypical entries Galaxy Game and Computer Space in 1971 established the basic operating principles of arcade video games, and Atari's Pong in 1972 is recognized as the first successful commercial arcade video game. Improvements in computer technology and gameplay design led to a golden age of arcade video games, the exact dates of which are debated but range from the late 1970s to the early 1980s. This golden age includes Space Invaders, Pac-Man, and Donkey Kong. The arcade industry had a resurgence from the early 1990s to the mid-2000s, including Street Fighter II, Mortal Kombat, and Dance Dance Revolution, but ultimately declined in the Western world as competing home video game consoles such as the Sony PlayStation and Microsoft Xbox increased in their graphics and gameplay capability and decreased in cost. Nevertheless, Japan, China, and South Korea retain a strong arcade industry in the present day. History Games of skill were popular amusement-park midway attractions from the 19th century on. With the introduction of electricity and coin-operated machines, games of skill became a viable business. When pinball machines with electric lights and displays were introduced in 1933 (but without the user-controlled flippers, which would not be invented until 1947), these machines were seen as games of luck. Numerous states and cities treated them as amoral playthings for rebellious young people, and bans on them persisted into the 1960s and 1970s. Electro-mechanical games (EM games) appeared in arcades in the mid-20th century. Following Sega's EM game Periscope (1966), the arcade industry experienced a "technological renaissance" driven by "audio-visual" EM novelty games, establishing the arcades as a suitable environment for the introduction of commercial video games in the early 1970s. In the late 1960s, college student Nolan Bushnell had a part-time job at an arcade where he became familiar with EM games such as Chicago Coin's racing game Speedway (1969), watching customers play and helping to maintain the machinery while learning the game business. The early mainframe game Spacewar! (1962) inspired the first commercial arcade video game, Computer Space (1971), created by Nolan Bushnell and Ted Dabney and released by Nutting Associates. It was demonstrated at the Amusement & Music Operators Association (AMOA) show in October 1971. Another Spacewar-inspired coin-operated video game, Galaxy Game, was demonstrated at Stanford University in November 1971. Bushnell and Dabney followed their Computer Space success by creating, with the help of Allan Alcorn, a table-tennis game, Pong, released in 1972. Pong became a commercial success, leading numerous other coin-op manufacturers to enter the market. The video game industry transitioned from discrete integrated circuitry to programmable microprocessors in the mid-1970s, starting with Gun Fight in 1975.
The arcade industry entered a "Golden Age" in 1978 with the release of Taito's Space Invaders, which introduced many novel gameplay features, including a scoreboard. From 1978 to 1982, several other major arcade games from Namco, Atari, Williams Electronics, Stern Electronics, and Nintendo were considered blockbusters, particularly Namco's Pac-Man (1980), which became a fixture in popular culture. Across North America and Japan, dedicated video game arcades appeared and arcade game cabinets appeared in many smaller storefronts. By 1981, the arcade video game industry was worth in the US. The novelty of arcade games waned sharply after 1982 due to several factors, including market saturation of arcades and arcade games, a moral panic over video games (similar to fears raised over pinball machines in the decades prior), and the 1983 video game crash as the home-console market impacted arcades. The arcade market had recovered by 1986, with the help of software conversion kits, the arrival of popular beat 'em up games (such as Kung-Fu Master (1984) and Renegade (1986–1987)), and advanced motion simulator games (such as Sega's "taikan" games, including Hang-On (1985), Space Harrier (1985), and Out Run (1986)). However, the growth of home video game systems such as the Nintendo Entertainment System led to another brief arcade decline toward the end of the 1980s. Arcade games continued to improve with the development of technology and of gameplay. In the early 1990s, the release of Capcom's Street Fighter II established the modern style of fighting games and led to a number of similar games such as Mortal Kombat, Fatal Fury, Killer Instinct, Virtua Fighter, and Tekken, creating a new renaissance in the arcades. Another factor was realism, including the "3D Revolution" from 2D and pseudo-3D graphics to "true" real-time 3D polygon graphics. This was largely driven by a technological arms race between Sega and Namco. During the early 1990s, games such as Sega's Virtua Racing and Virtua Fighter popularized 3D polygon technology in arcades. 3D graphics later became popular in console and computer games by the mid-1990s, though arcade systems such as the Sega Model 3 remained considerably more advanced than home systems in the late 1990s. Until about 1996, arcade video games had remained the largest segment of the global video game industry. Arcades declined in the late 1990s, surpassed by the console market for the first time around 1997–1998. Since the 2000s, arcade games have taken different routes globally. In the United States, arcades have become niche markets as they compete with the home-console market, and they have adapted other business models, such as providing other entertainment options or adding prize redemptions. In Japan, where arcades continue to flourish, games like Dance Dance Revolution and The House of the Dead aim to deliver tailored experiences that players cannot easily have at home. Technology Virtually all modern arcade games (other than very traditional fair midway games) make extensive use of solid state electronics, integrated circuits, and monitor screens, all installed inside an arcade cabinet. With the exception of Galaxy Game and Computer Space, which were built around small form-factor mainframe computers, the first arcade games were based on combinations of multiple discrete logic chips, such as transistor–transistor logic (TTL) chips.
Designing an arcade game was largely a matter of combining these TTL chips and other electronic components to achieve the desired effect on screen. More complex gameplay required significantly more TTL components. By the mid-1970s, the first inexpensive programmable microprocessors had arrived on the market. The first microprocessor-based video game was Midway's Gun Fight in 1975 (a conversion of Taito's Western Gun), and with the advent of Space Invaders and the golden era, microprocessor-based games became typical. Early arcade games were also designed around raster graphics displayed on a cathode-ray tube (CRT) display. Many games of the late 1970s and early 1980s used special displays that rendered vector graphics, though these waned by the mid-1980s as display technology on CRTs improved. Prior to the availability of color CRT or vector displays, some arcade cabinets used a combination of angled monitor positioning, one-way mirrors, and clear overlays to simulate colors and other graphics on the gameplay field. Coin-operated arcade video games from the 1990s to the 2000s generally used custom hardware, often with multiple CPUs, highly specialized sound and graphics chips, and the latest in expensive computer graphics display technology. This allowed more complex graphics and sound than contemporary video game consoles or personal computers could produce. Many arcade games since the 2000s run on modified video game console hardware (such as the Sega NAOMI or Triforce) or gaming PC components (such as the Taito Type X). Many arcade games have more immersive and realistic game controls than PC or console games. This includes specialized ambiance or control accessories such as fully enclosed dynamic cabinets with force feedback controls, dedicated lightguns, rear-projection displays, reproductions of automobile or airplane cockpits, motorcycle- or horse-shaped controllers, or highly dedicated controllers such as dancing mats and fishing rods. These accessories are usually too bulky, expensive, and specialized to be used with typical home PCs and consoles. Arcade makers have also experimented with virtual reality technology. Arcades have progressed from using coins as credits to smart cards that hold a virtual currency of credits. Modern arcade cabinets use flat panel displays instead of cathode-ray tubes. Internet services such as ALL.Net, NESiCAxLive, e-Amusement and NESYS allow the cabinets to download updates or new games, support online multiplayer gameplay, save progress, unlock content, or earn credits. Genres Many arcade games have short levels, simple and intuitive control schemes, and rapidly increasing difficulty. The classic formula for a successful arcade video game is "easy to learn, difficult to master", along with a "multiple life, progressively difficult level" paradigm. This is due to the environment of the arcade, where the player is essentially renting the game for as long as their in-game avatar can stay alive or until they run out of tokens. Games on consoles or PCs can be referred to as "arcade games" if they share these qualities or are direct ports of arcade games. Arcade racing games often have sophisticated motion simulator arcade cabinets, a simplified physics engine, and a short learning time when compared with more realistic racing simulations. Cars can turn sharply without braking or understeer, and the AI rivals are sometimes programmed so that they are always near the player, an effect known as rubberbanding (sketched below).
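The rubberband effect mentioned above is easy to sketch. The constants and the linear catch-up rule below are illustrative assumptions made for the sake of the example, not a reproduction of any particular arcade title's code.

```python
# Illustrative "rubberband" catch-up logic for an AI rival in a racing game.
# BASE_SPEED and ELASTICITY are made-up tuning constants for this sketch.
BASE_SPEED = 100.0   # AI speed when level with the player (units per second)
ELASTICITY = 0.05    # speed correction per unit of distance gap

def ai_target_speed(player_pos, ai_pos, max_adjust=30.0):
    """Return the speed the AI aims for: faster when behind the player,
    slower when ahead, clamped so the effect stays subtle."""
    gap = player_pos - ai_pos                        # positive if the AI trails
    adjust = max(-max_adjust, min(max_adjust, ELASTICITY * gap))
    return BASE_SPEED + adjust

print(ai_target_speed(player_pos=500.0, ai_pos=300.0))  # trailing -> 110.0
print(ai_target_speed(player_pos=500.0, ai_pos=900.0))  # leading  -> 80.0
```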
Other types of arcade-style games include music games (particularly rhythm games), and mobile and casual games with intuitive controls and short sessions. Action The term "arcade game" can refer to an action video game designed to play similarly to an arcade game, with frantic, addictive gameplay. The focus of arcade action games is on the user's reflexes, and many feature very little puzzle-solving, complex thinking, or strategy. These include fighting games often played with an arcade controller, beat 'em up games including fast-paced hack and slash games, and light gun rail shooters and "bullet hell" shooters with intuitive controls and rapidly increasing difficulty. Many arcade combat flight simulation games have sophisticated hydraulic motion simulator cabinets and simplified physics and handling. Arcade flight games are meant to have an easy learning curve, in order to preserve their action component. Increasing numbers of console flight video games, such as Crimson Skies, Ace Combat, and Secret Weapons Over Normandy, indicate the declining popularity of manual-heavy flight simulators in favor of instant arcade flight action. A modern subgenre of action games called "hack and slash" or "character action games" represents an evolution of traditional arcade action games and is sometimes considered a subgenre of beat 'em up brawlers. This subgenre was largely defined by Hideki Kamiya, creator of the Devil May Cry and Bayonetta franchises. Industry Arcade games are found in restaurants, bowling alleys, college campuses, video rental shops, dormitories, laundromats, movie theaters, supermarkets, shopping malls, airports, and other retail environments. They are popular in public places where people are likely to have free time. Their profitability is expanded by the popularity of conversions of arcade games for home-based platforms. In 1997, WMS Industries (parent company of Midway Games) reported that if more than 5,000 arcade units of a game were sold, at least 100,000 home version units would be sold. The American Amusement Machine Association (AAMA) is a trade association established in 1981 that represents the American coin-operated amusement machine industry, including 120 arcade game distributors and manufacturers. The Japan Amusement Machine and Marketing Association (JAMMA) represents the Japanese arcade industry. Arcade machines may have standardized connectors or interfaces, such as JAMMA or JVS, that allow quick replacement of game systems or boards in arcade cabinets. The game boards or arcade boards may themselves allow for games to be replaced via game cartridges or discs. Conversions, emulators, and recreations Prior to the 2000s, successful video games were often converted to a home video game console or home computer. Many of the initial Atari VCS games, for example, were conversions of Atari's successful arcade games. Arcade game manufacturers that were not in the home console or computer business found licensing of their games to console manufacturers to be a successful business model, as competing console manufacturers would vie for the rights to more popular games. Coleco famously bested Atari to secure the rights to convert Nintendo's Donkey Kong, which it subsequently included as a pack-in game for the ColecoVision to challenge the VCS. Arcade conversions typically had to make concessions for the lower computational power and capabilities of the home console, such as limited graphics or alterations in gameplay. Such conversions had mixed results.
The Atari VCS conversion of Space Invaders was considered the VCS's killer application, helping to quadruple VCS sales in 1980. In contrast, the VCS conversion of Pac-Man in 1982 was highly criticized for technical flaws caused by the VCS's limitations, such as flickering ghosts and simplified gameplay. Though Pac-Man was the best-selling game on the VCS, it eroded consumer confidence in Atari's games and partially contributed to the 1983 crash. The need for arcade conversions began to wane as arcade game manufacturers like Nintendo, Sega, and SNK entered the home console market and used technology in their home consoles similar to that found in the arcade, negating the need to simplify games. Concessions still might be made for a home release; notably, the Super Nintendo Entertainment System conversion of Mortal Kombat removed much of the gore from the arcade version to meet Nintendo's quality control standards. Exact copies of arcade video games can be run through emulators such as MAME on modern devices. An emulator is an application that translates foreign software onto a modern system in real time. Emulated games appeared legally and commercially on the Macintosh in 1994 with Williams floppy disks, on the Sony PlayStation in 1996 and the Sega Saturn in 1997 with CD-ROM compilations such as Williams Arcade's Greatest Hits and Arcade's Greatest Hits: The Atari Collection 1, and on the PlayStation 2 and GameCube with DVD-ROM compilations such as Midway Arcade Treasures. Arcade games also began to be downloaded and emulated through the Nintendo Wii Virtual Console service in 2009. Using emulation, companies like Arcade1Up have produced at-scale or reduced-scale recreations of arcade cabinets using modern technology, such as LCD monitors and lightweight construction. These cabinets are typically designed to resemble the original arcade game cabinets, but may also support multiple related games. They can be offered in diverse and miniaturized styles, such as table-mounted and wall-mounted versions. Highest-grossing For arcade games, success is usually judged either by the number of arcade hardware units sold to operators or by the amount of revenue generated. The revenue can include the coin drop earnings from coins (such as quarters, dollars, or 100 yen coins) inserted into machines, and/or the earnings from hardware sales, with each unit costing thousands of dollars. Most of the revenue figures listed below are incomplete, as they only include hardware sales revenue, due to a lack of available data for coin drop earnings, which typically account for the majority of a hit arcade game's gross revenue. This list only includes arcade games that either sold more than 10,000 hardware units or generated a revenue of more than . Most of the games listed were released between the golden age of arcade video games (1978–1984) and the 1990s. Franchises These are the combined hardware sales of at least two or more arcade games that are part of the same franchise. This list only includes franchises that have sold at least 5,000 hardware units or grossed at least $10 million in revenue. See also Claw crane JAMMA List of arcade video games Medal game Money booth Neo Geo Winners Don't Use Drugs Notes References External links The Video Arcade Preservation Society Online collection of Automatic Age trade journals, 1925–1945 Collection of Cocktail Arcade Machines Arcade History (Coin-Op Database) The Museum of Soviet Arcade Games (blog article) Children's entertainment Video game platforms Video game terminology
Arcade video game
[ "Technology" ]
3,392
[ "Computing terminology", "Computing platforms", "Video game terminology", "Video game platforms" ]
49,831
https://en.wikipedia.org/wiki/Light%20pen
A light pen is a computer input device in the form of a light-sensitive wand used in conjunction with a computer's cathode-ray tube (CRT) display. It allows the user to point to displayed objects or draw on the screen in a similar way to a touchscreen, but with greater positional accuracy. A light pen can work with any CRT-based display, but its ability to be used with LCDs was unclear (though Toshiba and Hitachi displayed a similar idea at the "Display 2006" show in Japan). A light pen detects the change in brightness of nearby screen pixels as they are scanned by the cathode-ray tube's electron beam, and communicates the timing of this event to the computer. Since a CRT scans the entire screen one pixel at a time, the computer can keep track of the expected time at which the beam scans each location on screen, and can infer the pen's position from the timestamp of the detected pulse. History The first light pen, at that time still called a "light gun", was created around 1951–1955 as part of the Whirlwind I project at MIT, where it was used to select discrete symbols on the screen, and later at the SAGE project, where it was used for tactical real-time control of a radar-networked airspace. One of the first more widely deployed uses was in the Situation Display consoles of the AN/FSQ-7 for military airspace surveillance, which is not surprising given the system's relationship with the Whirlwind projects. See Semi-Automatic Ground Environment for more details. During the 1960s, light pens were common on graphics terminals such as the IBM 2250, and were also available for the IBM 3270 text-only terminal. The first nonlinear editing system, the CMX 600, was controlled by a light pen: the operator clicked on symbols superimposed on the footage being edited. Light pen usage expanded in the early 1980s to music workstations such as the Fairlight CMI and personal computers such as the BBC Micro and Holborn 9100. IBM PC-compatible MDA (only early versions), CGA, HGC (including HGC+ and InColor) and some EGA graphics cards also featured a connector compatible with a light pen, as did early Tandy 1000 computers, the Thomson MO5 computer family, the Amiga, Atari 8-bit and Commodore 8-bit computers, some MSX computers and Amstrad PCW home computers. For the MSX computers, Sanyo produced a light pen interface cartridge. Because the user was required to hold their arm in front of the screen for long periods of time (potentially causing "gorilla arm") or to use a desk that tilts the monitor, the light pen fell out of use as a general-purpose input device. The light pen was also perceived as working well only on displays with low persistence, which tend to flicker. See also Bit banging CueCat Digital pen Light gun Pen computing Stylus (computing) Notes References External links Computing input devices History of human–computer interaction Pointing devices
Light pen
[ "Technology" ]
622
[ "History of human–computer interaction", "History of computing" ]
49,887
https://en.wikipedia.org/wiki/Standard%20enthalpy%20of%20formation
In chemistry and thermodynamics, the standard enthalpy of formation or standard heat of formation of a compound is the change of enthalpy during the formation of 1 mole of the substance from its constituent elements in their reference state, with all substances in their standard states. The standard pressure value of 100 kPa (1 bar) is recommended by IUPAC, although prior to 1982 the value 1.00 atm (101.325 kPa) was used. There is no standard temperature. Its symbol is ΔfH⦵. The superscript Plimsoll on this symbol indicates that the process has occurred under standard conditions at the specified temperature (usually 25 °C or 298.15 K). Standard states are defined for various types of substances. For a gas, it is the hypothetical state the gas would assume if it obeyed the ideal gas equation at a pressure of 1 bar. For a gaseous or solid solute present in a diluted ideal solution, the standard state is the hypothetical state of a concentration of the solute of exactly one mole per liter (1 M) at a pressure of 1 bar, extrapolated from infinite dilution. For a pure substance or a solvent in a condensed state (a liquid or a solid), the standard state is the pure liquid or solid under a pressure of 1 bar. For elements that have multiple allotropes, the reference state is usually chosen to be the form in which the element is most stable under 1 bar of pressure. One exception is phosphorus, for which the most stable form at 1 bar is black phosphorus, but white phosphorus is chosen as the standard reference state for zero enthalpy of formation. For example, the standard enthalpy of formation of carbon dioxide is the enthalpy of the following reaction under the above conditions: C(s, graphite) + O2(g) -> CO2(g) All elements are written in their standard states, and one mole of product is formed. This is true for all enthalpies of formation. The standard enthalpy of formation is measured in units of energy per amount of substance, usually stated in kilojoules per mole (kJ mol−1), but also in kilocalories per mole, joules per mole or kilocalories per gram (any combination of these units conforming to the energy per mass or amount guideline). All elements in their reference states (oxygen gas, solid carbon in the form of graphite, etc.) have a standard enthalpy of formation of zero, as there is no change involved in their formation. The formation reaction is a constant pressure and constant temperature process. Since the pressure of the standard formation reaction is fixed at 1 bar, the standard formation enthalpy or reaction heat is a function of temperature. For tabulation purposes, standard formation enthalpies are all given at a single temperature: 298 K, represented by the symbol ΔfH⦵298. Hess's law For many substances, the formation reaction may be considered as the sum of a number of simpler reactions, either real or fictitious. The enthalpy of reaction can then be analyzed by applying Hess's law, which states that the sum of the enthalpy changes for a number of individual reaction steps equals the enthalpy change of the overall reaction. This is true because enthalpy is a state function, whose value for an overall process depends only on the initial and final states and not on any intermediate states. Examples are given in the following sections. Ionic compounds: Born–Haber cycle For ionic compounds, the standard enthalpy of formation is equivalent to the sum of several terms included in the Born–Haber cycle.
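Written out schematically for a generic alkali halide MX, the cycle that the lithium fluoride example below walks through sums to the enthalpy of formation. This is a sketch in an all-enthalpy sign convention, in which each term carries its own sign (so the electron-gain and lattice-formation terms are negative); the symbols are generic choices rather than notation taken from a specific source.

```latex
\Delta_{f}H^{\ominus}(\mathrm{MX}) =
    \Delta_{\mathrm{sub}}H^{\ominus}(\mathrm{M})   % atomization of the solid metal
  + \mathrm{IE}_{1}(\mathrm{M})                    % first ionization energy of M(g)
  + \tfrac{1}{2}\,D(\mathrm{X-X})                  % half the X2 bond-dissociation enthalpy
  + \Delta_{\mathrm{ea}}H^{\ominus}(\mathrm{X})    % electron-gain enthalpy of X(g), negative
  + \Delta_{\mathrm{lat}}H^{\ominus}(\mathrm{MX})  % lattice-formation enthalpy, negative
```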
For example, the formation of lithium fluoride, Li(s) + 1/2 F2(g) -> LiF(s), may be considered as the sum of several steps, each with its own enthalpy (or energy, approximately): ΔsubH⦵(Li), the standard enthalpy of atomization (or sublimation) of solid lithium. IE1(Li), the first ionization energy of gaseous lithium. B(F–F), the standard enthalpy of atomization (or bond energy) of fluorine gas, of which the step 1/2 F2(g) -> F(g) contributes one half. ΔeaH⦵(F), the electron-gain enthalpy of a fluorine atom (the negative of its electron affinity). ΔlatH⦵(LiF), the lattice energy of lithium fluoride (negative for lattice formation). The sum of these enthalpies gives the standard enthalpy of formation (ΔfH⦵) of lithium fluoride: ΔfH⦵(LiF) = ΔsubH⦵(Li) + IE1(Li) + 1/2 B(F–F) + ΔeaH⦵(F) + ΔlatH⦵(LiF). In practice, the enthalpy of formation of lithium fluoride can be determined experimentally, but the lattice energy cannot be measured directly. The equation is therefore rearranged to evaluate the lattice energy: ΔlatH⦵(LiF) = ΔfH⦵(LiF) − ΔsubH⦵(Li) − IE1(Li) − 1/2 B(F–F) − ΔeaH⦵(F). Organic compounds The formation reactions for most organic compounds are hypothetical. For instance, carbon and hydrogen will not directly react to form methane (CH4), so the standard enthalpy of formation cannot be measured directly. However, the standard enthalpy of combustion is readily measurable using bomb calorimetry. The standard enthalpy of formation is then determined using Hess's law. The combustion of methane, CH4 + 2 O2 -> CO2 + 2 H2O, is equivalent to the sum of the hypothetical decomposition into elements followed by the combustion of the elements to form carbon dioxide (CO2) and water (H2O): CH4 -> C + 2H2 C + O2 -> CO2 2H2 + O2 -> 2H2O Applying Hess's law, ΔcombH⦵ = −ΔfH⦵(CH4) + ΔfH⦵(CO2) + 2 ΔfH⦵(H2O). Solving for the standard enthalpy of formation, ΔfH⦵(CH4) = ΔfH⦵(CO2) + 2 ΔfH⦵(H2O) − ΔcombH⦵. The value of ΔfH⦵(CH4) is determined to be −74.8 kJ/mol. The negative sign shows that the reaction, if it were to proceed, would be exothermic; that is, methane is enthalpically more stable than hydrogen gas and carbon. It is possible to predict heats of formation for simple unstrained organic compounds with the heat of formation group additivity method. Use in calculation for other reactions The standard enthalpy change of any reaction can be calculated from the standard enthalpies of formation of reactants and products using Hess's law. A given reaction is considered as the decomposition of all reactants into elements in their standard states, followed by the formation of all products. The heat of reaction is then minus the sum of the standard enthalpies of formation of the reactants (each being multiplied by its respective stoichiometric coefficient, ν) plus the sum of the standard enthalpies of formation of the products (each also multiplied by its respective stoichiometric coefficient), as shown in the equation below: ΔreactH⦵ = Σ νp ΔfH⦵(products) − Σ νr ΔfH⦵(reactants). If the standard enthalpy of the products is less than the standard enthalpy of the reactants, the standard enthalpy of reaction is negative. This implies that the reaction is exothermic. The converse is also true; the standard enthalpy of reaction is positive for an endothermic reaction. This calculation has a tacit assumption of ideal solution between reactants and products, where the enthalpy of mixing is zero. For example, for the combustion of methane, CH4 + 2O2 -> CO2 + 2H2O: ΔreactH⦵ = [ΔfH⦵(CO2) + 2 ΔfH⦵(H2O)] − [ΔfH⦵(CH4) + 2 ΔfH⦵(O2)]. However, O2 is an element in its standard state, so that ΔfH⦵(O2) = 0, and the heat of reaction simplifies to ΔreactH⦵ = ΔfH⦵(CO2) + 2 ΔfH⦵(H2O) − ΔfH⦵(CH4), which is the equation in the previous section for the enthalpy of combustion ΔcombH⦵. Key concepts for enthalpy calculations When a reaction is reversed, the magnitude of ΔH stays the same, but the sign changes. When the balanced equation for a reaction is multiplied by an integer, the corresponding value of ΔH must be multiplied by that integer as well.
The change in enthalpy for a reaction can be calculated from the enthalpies of formation of the reactants and the products. Elements in their standard states make no contribution to the enthalpy calculations for the reaction, since the enthalpy of an element in its standard state is zero. Allotropes of an element other than the standard state generally have non-zero standard enthalpies of formation. Examples: standard enthalpies of formation at 25 °C Thermochemical properties of selected substances at 298.15 K and 1 atm Inorganic substances Aliphatic hydrocarbons Other organic compounds See also Calorimetry Thermochemistry References External links NIST Chemistry WebBook Enthalpy Thermochemistry
Standard enthalpy of formation
[ "Physics", "Chemistry", "Mathematics" ]
1,688
[ "Thermodynamic properties", "Thermochemistry", "Physical quantities", "Quantity", "Enthalpy" ]
49,895
https://en.wikipedia.org/wiki/Piety
Piety is a virtue which may include religious devotion or spirituality. A common element in most conceptions of piety is a duty of respect. In a religious context, piety may be expressed through pious activities or devotions, which may vary among countries and cultures. Etymology The word piety comes from the Latin word pietas, the noun form of the adjective pius (which means "devout" or "dutiful"). English literature scholar Alan Jacobs has written about the origins and early meaning of the term: Classical interpretation Pietas in traditional Latin usage expressed a complex, highly valued Roman virtue; a man with pietas respected his responsibilities to gods, country, parents, and kin. In its strictest sense it was the sort of love a son ought to have for his father. Aeneas's consistent epithet in Virgil and other Latin authors is pius, a term which connotes reverence toward the gods and familial dutifulness. At the fall of Troy, Aeneas carries to safety his father, the lame Anchises, and the Lares and Penates, the statues of the household gods. In addressing whether children have an obligation to provide support for their parents, Aquinas quotes Cicero: "...piety gives both duty and homage", "duty" referring to service, and "homage" to reverence or honor. Filial piety is central to Confucian ethics; reverence for parents is considered in Chinese ethics the prime virtue and the basis of all right human relations. As a virtue In Catholicism, Eastern Orthodoxy, Lutheranism, and Anglicanism, piety is one of the seven gifts of the Holy Spirit. "It engenders in the soul a filial respect for God, a generous love toward him, and an affectionate obedience that wants to do what he commands because it loves the one who commands." Pope Gregory I, in demonstrating the interrelationship among the gifts, said "Through the fear of the Lord, we rise to piety, from piety then to knowledge..." Aquinas spoke of piety in the context of one's parents and country, and given the obligation to accord each what is rightfully due them, related it to the cardinal virtue of justice. (By analogy, rendering to God what is due him, Aquinas identified as the virtue of religion, also related to justice.) Professor Richard McBrien said piety "is a gift of the Holy Spirit by which we are motivated and enabled to be faithful and respectful to those—ultimately, God—who have had a positive, formative influence on our lives and to whom we owe a debt of gratitude," and requires one to acknowledge, to the extent possible, the sources of those many blessings through words and gestures great and small. Piety belongs to the virtue of religion, which theologians place among the moral virtues as a part of the cardinal virtue of justice, since by it one renders to God what is due to him. The gift of piety perfects the virtue of justice, enabling the individual to fulfill his obligations to God and neighbor, and to do so willingly and joyfully. By inspiring a person with a tender and filial confidence in God, the gift of piety makes them joyfully embrace all that pertains to His service. John Calvin said, "I call 'piety' that reverence joined with love of God which the knowledge of his benefits induces. For until [people] recognize that they owe everything to God, that they are nourished by his fatherly care, that he is the Author of their every good, that they should seek nothing beyond him—they will never yield him willing service." Bishop Pierre Whalon says that "Piety, therefore, is the pursuit of an ever-greater sense of being in the presence of God."
The Gift of Piety is synonymous with filial trust in God. Through piety, a person shows reverence for God as a loving Father, and respect for others as children of God. Pope John Paul II defined piety as "the gift of reverence for what comes from God," and related it to his earlier lectures on the Theology of the Body. In a General Audience in June 2014, Pope Francis said, "When the Holy Spirit helps us sense the presence of the Lord and all of his love for us, it warms our heart and drives us almost naturally to prayer and celebration." "Piety", said Pope Francis, points up "our friendship with God." It is a gift that enables people to serve their neighbor "with gentleness and with a smile." Piety and devotion Expressions of piety vary according to country and local tradition. "Feast days", with their preparations for various religious celebrations and activities, have forged traditions peculiar to communities. Many pious exercises are part of the cultic patrimony of particular Churches or religious families. Devotions help incorporate faith into daily life. While acknowledging that Anglican piety took the forms of more frequent communion and liturgical observances and customs, Bishop Ronald Williams argued for increased reading of the Bible. In the Methodist Church, works of piety are a means of grace. They can be personal, such as reading, prayer, and meditation; or communal, such as sharing in the sacraments or Bible study. For Presbyterians, piety refers to a whole realm of practices—such as worship, prayer, singing, and service—that help shape and guide the way one's reverence and love for God are expressed, and to "the duty of the Christian to live a life of piety in accordance with God's moral law". The veneration of sacred images belongs to the nature of Catholic piety, with the understanding that "the honour rendered to the image is directed to the person represented". See also Affective piety Plato's dialogue Euthyphro, in which Socrates seeks a definition of piety Hasid (the Hebrew term) Islamic views on piety Pietism in the Lutheran Church References External links Ethical principles Personality traits Religious practices Spirituality Virtue
Piety
[ "Biology" ]
1,252
[ "Behavior", "Religious practices", "Human behavior", "Spirituality" ]
50,047
https://en.wikipedia.org/wiki/Rainforest
Rainforests are forests characterized by a closed and continuous tree canopy, moisture-dependent vegetation, the presence of epiphytes and lianas, and the absence of wildfire. Rainforests can be generally classified as tropical rainforests or temperate rainforests, but other types have been described. Estimates of the proportion of all biotic species indigenous to the rainforests vary from 40% to 75%. There may be many millions of species of plants, insects and microorganisms still undiscovered in tropical rainforests. Tropical rainforests have been called the "jewels of the Earth" and the "world's largest pharmacy", because over one quarter of natural medicines have been discovered there. Rainforests, as well as endemic rainforest species, are rapidly disappearing due to deforestation, the resulting habitat loss and pollution of the atmosphere. Definition Rainforests are characterized by a closed and continuous tree canopy, high humidity, the presence of moisture-dependent vegetation, a moist layer of leaf litter, the presence of epiphytes and lianas, and the absence of wildfire. The largest areas of rainforest are tropical or temperate rainforests, but other vegetation associations, including subtropical rainforest, littoral rainforest, cloud forest, vine thicket and even dry rainforest, have been described. Tropical rainforest Tropical rainforests are characterized by a warm and wet climate with no substantial dry season: they are typically found within 10 degrees north and south of the equator. Mean monthly temperatures exceed during all months of the year. Average annual rainfall is no less than and can exceed although it typically lies between and . Many of the world's tropical forests are associated with the location of the monsoon trough, also known as the Intertropical Convergence Zone. The broader category of tropical moist forests is located in the equatorial zone between the Tropic of Cancer and Tropic of Capricorn. Tropical rainforests exist in Southeast Asia (from Myanmar (Burma) to the Philippines, Malaysia, Indonesia, Papua New Guinea and Sri Lanka); in Sub-Saharan Africa from Cameroon to the Congo (the Congo Rainforest); in South America (e.g. the Amazon rainforest); in Central America (e.g. Bosawás, the southern Yucatán Peninsula-El Peten-Belize-Calakmul); in Australia; and on Pacific Islands (such as Hawaii). Tropical forests have been called the "Earth's lungs", although it is now known that rainforests contribute little net oxygen addition to the atmosphere through photosynthesis. Temperate rainforest Tropical forests cover a large part of the globe, but temperate rainforests only occur in a few regions around the world. Temperate rainforests are rainforests in temperate regions. They occur in North America (in the Pacific Northwest in Alaska, British Columbia, Washington, Oregon and California), in Europe (parts of the British Isles such as the coastal areas of Ireland and Scotland, southern Norway, parts of the western Balkans along the Adriatic coast, as well as in Galicia and coastal areas of the eastern Black Sea, including Georgia and coastal Turkey), in East Asia (in southern China, the highlands of Taiwan, much of Japan and Korea, and on Sakhalin Island and the adjacent Russian Far East coast), in South America (southern Chile) and also in Australia and New Zealand. Dry rainforest Dry rainforests have a more open canopy layer than other rainforests, and are found in areas of lower rainfall. They generally have two layers of trees.
Layers A tropical rainforest typically has a number of layers, each with different plants and animals adapted for life in that particular area. Examples include the emergent, canopy, understory and forest floor layers. Emergent layer The emergent layer contains a small number of very large trees called emergents, which grow above the general canopy, reaching heights of 45–55 m, although on occasion a few species will grow to 70–80 m tall. They need to be able to withstand the hot temperatures and strong winds that occur above the canopy in some areas. Eagles, butterflies, bats and certain monkeys inhabit this layer. Canopy layer The canopy layer contains the majority of the largest trees, typically to tall. The densest areas of biodiversity are found in the forest canopy, a more or less continuous cover of foliage formed by adjacent treetops. The canopy, by some estimates, is home to 50 percent of all plant species. Epiphytic plants attach to trunks and branches, and obtain water and minerals from rain and debris that collects on the supporting plants. The fauna is similar to that found in the emergent layer but more diverse. A quarter of all insect species are believed to exist in the rainforest canopy. Scientists have long suspected the richness of the canopy as a habitat, but have only recently developed practical methods of exploring it. As long ago as 1917, naturalist William Beebe declared that "another continent of life remains to be discovered, not upon the Earth, but one to two hundred feet above it, extending over thousands of square miles." A true exploration of this habitat only began in the 1980s, when scientists developed methods to reach the canopy, such as firing ropes into the trees using crossbows. Exploration of the canopy is still in its infancy, but other methods include the use of balloons and airships to float above the highest branches and the building of cranes and walkways planted on the forest floor. The science of accessing tropical forest canopy using airships or similar aerial platforms is called dendronautics. Understory layer The understory or understorey layer lies between the canopy and the forest floor. It is home to a number of birds, snakes and lizards, as well as predators such as jaguars, boa constrictors and leopards. The leaves are much larger at this level and insect life is abundant. Many seedlings that will grow to the canopy level are present in the understory. Only about 5% of the sunlight shining on the rainforest canopy reaches the understory. This layer can be called a shrub layer, although the shrub layer may also be considered a separate layer. Forest floor The forest floor, the bottom-most layer, receives only 2% of the sunlight. Only plants adapted to low light can grow in this region. Away from riverbanks, swamps and clearings, where dense undergrowth is found, the forest floor is relatively clear of vegetation because of the low sunlight penetration. It also contains decaying plant and animal matter, which disappears quickly, because the warm, humid conditions promote rapid decay. Many forms of fungi growing here help decay the animal and plant waste. Flora and fauna More than half of the world's species of plants and animals are found in rainforests. Rainforests support a very broad array of fauna, including mammals, reptiles, amphibians, birds and invertebrates. Mammals may include primates, felids and other families. 
Reptiles include snakes, turtles, chameleons and other families, while birds include such families as Vangidae and Cuculidae. Dozens of families of invertebrates are found in rainforests. Fungi are also very common in rainforest areas, as they can feed on the decomposing remains of plants and animals. The great diversity in rainforest species is in large part the result of diverse and numerous physical refuges, i.e. places in which plants are inaccessible to many herbivores, or in which animals can hide from predators. Having numerous refuges available also results in much higher total biomass than would otherwise be possible. Some species of fauna show a trend towards declining populations in rainforests, for example reptiles that feed on amphibians and other reptiles. This trend requires close monitoring. The seasonality of rainforests affects the reproductive patterns of amphibians, and this in turn can directly affect the species of reptiles that feed on these groups, particularly species with specialized feeding, since these are less likely to use alternative resources. Soils Despite the lush growth of vegetation in a tropical rainforest, soil quality is often quite poor. Rapid bacterial decay prevents the accumulation of humus. The concentration of iron and aluminium oxides by the laterization process gives the oxisols a bright red colour and sometimes produces mineral deposits such as bauxite. Most trees have roots near the surface because there are insufficient nutrients below the surface; most of the trees' minerals come from the top layer of decomposing leaves and animals. On younger substrates, especially of volcanic origin, tropical soils may be quite fertile. If rainforest trees are cleared, rain can accumulate on the exposed soil surfaces, creating run-off and beginning a process of soil erosion. Eventually, streams and rivers form and flooding becomes possible. There are several reasons for the poor soil quality. The first is that the soil is highly acidic. The roots of plants rely on an acidity difference between the roots and the soil in order to absorb nutrients. When the soil is acidic, there is little difference, and therefore little absorption of nutrients from the soil. Second, the type of clay particles present in tropical rainforest soil has a poor ability to trap nutrients and stop them from washing away. Even if humans artificially add nutrients to the soil, the nutrients mostly wash away and are not absorbed by the plants. Finally, these soils are poor because the high volume of rain in tropical rainforests washes nutrients out of the soil more quickly than in other climates. Effect on global climate A natural rainforest emits and absorbs vast quantities of carbon dioxide. On a global scale, long-term fluxes are approximately in balance, so that an undisturbed rainforest would have a small net impact on atmospheric carbon dioxide levels, though it may have other climatic effects (on cloud formation, for example, by recycling water vapour). No rainforest today can be considered to be undisturbed. Human-induced deforestation plays a significant role in causing rainforests to release carbon dioxide, as do other factors, whether human-induced or natural, which result in tree death, such as burning and drought. Some climate models operating with interactive vegetation predict a large loss of Amazonian rainforest around 2050 due to drought, forest dieback and the subsequent release of more carbon dioxide.
Human uses Tropical rainforests provide timber as well as animal products such as meat and hides. Rainforests also have value as tourism destinations and for the ecosystem services they provide. Many foods originally came from tropical forests, and are still mostly grown on plantations in regions that were formerly primary forest. Also, plant-derived medicines are commonly used for fever, fungal infections, burns, gastrointestinal problems, pain, respiratory problems, and wound treatment. At the same time, rainforests are usually not used sustainably by non-native peoples but are being exploited or removed for agricultural purposes. Native people On 18 January 2007, FUNAI also reported that it had confirmed the presence of 67 different uncontacted tribes in Brazil, up from 40 in 2005. With this addition, Brazil has now overtaken the island of New Guinea as the country with the largest number of uncontacted tribes. The province of Irian Jaya or West Papua on the island of New Guinea is home to an estimated 44 uncontacted tribal groups. The tribes are endangered by deforestation, especially in Brazil. The Central African rainforest is home to the Mbuti pygmies, one of the hunter-gatherer peoples living in equatorial rainforests, characterised by their short height (below one and a half metres, or 59 inches, on average). They were the subject of a study by Colin Turnbull, The Forest People, in 1962. Pygmies who live in Southeast Asia are, amongst others, referred to as "Negrito". There are many tribes in the rainforests of the Malaysian state of Sarawak. Sarawak is part of Borneo, the third largest island in the world. Some of the other tribes in Sarawak are: the Kayan, Kenyah, Kejaman, Kelabit, Punan Bah, Tanjong, Sekapan, and the Lahanan. Collectively, they are referred to as Dayaks or Orang Ulu, which means "people of the interior". About half of Sarawak's 1.5 million people are Dayaks. Most Dayaks, it is believed by anthropologists, came originally from the Southeast Asian mainland. Their mythologies support this. Deforestation Tropical and temperate rainforests have been subjected to heavy legal and illegal logging for their valuable hardwoods and to agricultural clearance (slash-and-burn, clearcutting) throughout the 20th century, and the area covered by rainforests around the world is shrinking. Biologists have estimated that large numbers of species are being driven to extinction (possibly more than 50,000 a year; at that rate, says E. O. Wilson of Harvard University, a quarter or more of all species on Earth could be exterminated within 50 years) due to the removal of habitat with destruction of the rainforests. Another factor causing the loss of rainforest is expanding urban areas. Littoral rainforest growing along coastal areas of eastern Australia is now rare due to ribbon development to accommodate the demand for seachange lifestyles. Forests are being destroyed at a rapid pace. Almost 90% of West Africa's rainforest has been destroyed. Since the arrival of humans, Madagascar has lost two thirds of its original rainforest. At present rates, tropical rainforests in Indonesia would be logged out in 10 years, and those in Papua New Guinea in 13 to 16 years. According to Rainforest Rescue, an important reason for the increasing deforestation rate, especially in Indonesia, is the expansion of oil palm plantations to meet growing demand for cheap vegetable fats and biofuels.
In Indonesia, palm oil is already cultivated on nine million hectares and, together with Malaysia, the country produces about 85 percent of the world's palm oil. Several countries, notably Brazil, have declared their deforestation a national emergency. Amazon deforestation jumped by 69% in 2008 compared with the twelve months of 2007, according to official government data. However, a 30 January 2009 New York Times article stated, "By one estimate, for every acre of rainforest cut down each year, more than 50 acres of new forest are growing in the tropics." The new forest includes secondary forest on former farmland and so-called degraded forest. See also Cloud forest Ecology Inland rainforest Intact forest landscape Jungle Stratification (vegetation) References Further reading Butler, R. A. (2005). A Place Out of Time: Tropical Rainforests and the Perils They Face. Published online: Rainforests.mongabay.com Richards, P. W. (1996). The tropical rain forest. 2nd ed. Cambridge University Press Whitmore, T. C. (1998). An introduction to tropical rain forests. 2nd ed. Oxford University Press. External links Animals in a rainforest Rainforest Action Network EIA forest reports: Investigations into illegal logging. EIA in the USA Reports and info. The Coalition for Rainforest Nations United Nations Forum on Forests Dave Kimble's Rainforest Photo Catalog (Wet Tropics, Australia) Rainforest Plants Tropical rainforest for children What is a rainforest National Geographic: Rain forest Tropical rainforests Biodiversity Forest ecology
Rainforest
[ "Biology" ]
3,067
[ "Biodiversity" ]
50,049
https://en.wikipedia.org/wiki/Subarctic%20climate
The subarctic climate (also called subpolar climate, or boreal climate) is a continental climate with long, cold (often very cold) winters, and short, warm to cool summers. It is found on large landmasses, often away from the moderating effects of an ocean, generally at latitudes from 50°N to 70°N, poleward of the humid continental climates. Like other Class D climates, subarctic climates are rare in the Southern Hemisphere, found only at some isolated highland elevations. Subarctic or boreal climates are the source regions for the cold air that affects temperate latitudes to the south in winter. These climates represent the Köppen climate classifications Dfc, Dwc, Dsc, Dfd, Dwd and Dsd. Description This type of climate offers some of the most extreme seasonal temperature variations found on the planet: in winter, temperatures can drop to below and in summer, the temperature may exceed . However, the summers are short; no more than three months of the year (but at least one month) must have a 24-hour average temperature of at least 10 °C to fall into this category of climate, and the coldest month should average below 0 °C (or −3 °C). Record low temperatures can approach . With 5–7 consecutive months when the average temperature is below freezing, all moisture in the soil and subsoil freezes solidly to depths of many feet. Summer warmth is insufficient to thaw more than a few surface feet, so permafrost prevails under most areas not near the southern boundary of this climate zone. Seasonal thaw penetrates from , depending on latitude, aspect, and type of ground. Some northern areas with subarctic climates located near oceans (southern Alaska, northern Norway, Sakhalin Oblast and Kamchatka Oblast) have milder winters and no permafrost, and are more suited for farming unless precipitation is excessive. The frost-free season is very short, varying from about 45 to 100 days at most, and a freeze can occur anytime outside the summer months in many areas. Köppen classification The first letter, D, indicates continentality, with the coldest month below 0 °C (or −3 °C). The second letter denotes precipitation patterns: s: A dry summer—the driest month in the high-sun half of the year (April to September in the Northern Hemisphere, October to March in the Southern Hemisphere) has less than of rainfall and has exactly or less than one-third the precipitation of the wettest month in the low-sun half of the year (October to March in the Northern Hemisphere, April to September in the Southern Hemisphere), w: A dry winter—the driest month in the low-sun half of the year has exactly or less than one-tenth of the precipitation found in the wettest month in the summer half of the year, f: No dry season—does not meet either of the alternative specifications above; precipitation and humidity are often high year-round. The third letter denotes temperature: c: Regular subarctic, only one–three months above 10 °C, coldest month between −38 °C and 0 °C (or −3 °C). d: Severely cold subarctic, only one–three months above 10 °C, coldest month at or below −38 °C. Precipitation Most subarctic climates have little precipitation, typically no more than over an entire year due to the low temperatures and evapotranspiration. Away from the coasts, precipitation occurs mostly in the summer months, while in coastal areas with subarctic climates the heaviest precipitation is usually during the autumn months, when the relative warmth of sea vis-à-vis land is greatest.
Low precipitation, by the standards of more temperate regions with longer summers and warmer winters, is typically sufficient in view of the very low evapotranspiration to allow water-logged terrain in many areas of subarctic climate and to permit snow cover during winter, which is generally persistent for an extended period. A notable exception to this pattern is that subarctic climates occurring at high elevations in otherwise temperate regions have extremely high precipitation due to orographic lift. Mount Washington, with temperatures typical of a subarctic climate, receives an average rain-equivalent of of precipitation per year. Coastal areas of Khabarovsk Krai also have much higher precipitation in summer due to orographic influences (up to in July in some areas), whilst the mountainous Kamchatka peninsula and Sakhalin island are even wetter, since orographic moisture is not confined to the warmer months and creates large glaciers in Kamchatka. Labrador, in eastern Canada, is similarly wet throughout the year due to the semi-permanent Icelandic Low and can receive up to of rainfall equivalent per year, creating a snow cover of up to that does not melt until June. Vegetation and land use Vegetation in regions with subarctic climates is generally of low diversity, as only hardy tree species can survive the long winters and make use of the short summers. Trees are mostly limited to conifers, as few broadleaved trees are able to survive the very low temperatures in winter. This type of forest is also known as taiga, a term which is sometimes applied to the climate found therein as well. Even though the diversity may be low, the area and numbers are high, and the taiga (boreal) forest is the largest forest biome on the planet, with most of the forests located in Russia and Canada. The process by which plants become acclimated to cold temperatures is called hardening. Agricultural potential is generally poor, due to the natural infertility of soils and the prevalence of swamps and lakes left by departing ice sheets, and short growing seasons prohibit all but the hardiest of crops. Despite the short season, the long summer days at such latitudes do permit some agriculture. In some areas, ice has scoured rock surfaces bare, entirely stripping off the overburden. Elsewhere, rock basins have been formed and stream courses dammed, creating countless lakes. Neighboring regions Going northward or toward a polar sea, one finds that the warmest month has an average temperature of less than 10 °C, and the subarctic climate grades into a tundra climate not at all suitable for trees. Southward, this climate grades into the humid continental climates with longer summers (and usually less severe winters) that allow broadleaf trees; in a few locations close to a temperate sea (as in northern Norway and southern Alaska), this climate can grade into a short-summer version of an oceanic climate, the subpolar oceanic climate, in which winter temperatures average near or above freezing despite the short, cool summers. In China and Mongolia, as one moves southwestwards or towards lower elevations, temperatures increase but precipitation is so low that the subarctic climate grades into a cold semi-arid climate.
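The Köppen letter codes defined earlier in the article can be restated compactly as a classifier over monthly climate normals. The sketch below assumes a Northern Hemisphere site, uses the 0 °C cold-month threshold (some authors use −3 °C), and applies simplified dry-season tests, so it is an approximation of the full Köppen rules rather than a reference implementation; the sample values are illustrative, not measured normals.

```python
# Sketch: derive the subarctic Köppen code (D?c / D?d) from 12 monthly normals.
# temps in degrees C, precip in mm, index 0 = January (Northern Hemisphere).
def subarctic_code(temps, precip):
    warm_months = sum(1 for t in temps if t >= 10.0)
    if not (1 <= warm_months <= 3 and min(temps) < 0.0):
        return None  # not subarctic under these simplified thresholds

    summer = precip[3:9]                 # April-September (high-sun half)
    winter = precip[9:] + precip[:3]     # October-March (low-sun half)
    if min(summer) < 30.0 and min(summer) <= max(winter) / 3.0:
        second = 's'                     # dry summer
    elif min(winter) <= max(summer) / 10.0:
        second = 'w'                     # dry winter
    else:
        second = 'f'                     # no dry season

    third = 'd' if min(temps) <= -38.0 else 'c'
    return 'D' + second + third

# Roughly Yakutsk-like normals (illustrative values only):
temps  = [-38, -33, -20, -5, 7, 16, 19, 15, 6, -8, -27, -37]
precip = [9, 8, 10, 13, 23, 38, 40, 37, 30, 25, 16, 12]
print(subarctic_code(temps, precip))  # -> 'Dfd'
```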
Distribution Dfc and Dfd distribution The Dfc climate, by far the most common subarctic type, is found in the following areas: Northern Eurasia The majority of Siberia (notable cities: Yakutsk, Surgut, Norilsk, Magadan) The Kamchatka Peninsula and the northern and central parts of the Kuril Islands and Sakhalin Island (notable cities: Petropavlovsk-Kamchatsky) The northern half of Fennoscandia and European Russia (milder winters in coastal areas) and higher elevations further south (notable cities: Oulu, Umeå, Tromsø, Murmansk, Arkhangelsk) The Western Alps and the Eastern Alps at high elevations Central Romania Some parts of central Germany and Poland The Tatra Mountains in Poland and Slovakia at high elevations The Pyrenees at high elevations The Northeastern Anatolia Region and the Pontic Alps at high elevations Mountain summits in Scotland, most notably in the Cairngorms and the Nevis Range The far northeast of Turkey Further north and east in Siberia, continentality increases so much that winters can be exceptionally severe, averaging below −38 °C (−36.4 °F), even though the hottest month still averages more than 10 °C (50 °F). This creates Dfd climates, which are mostly found in the Sakha Republic: Northeast Siberian taiga Central Yakutian Lowland Oymyakon Verkhoyansk North America Most of Interior, Western and Southcentral Alaska (notable cities and towns: Anchorage, Wasilla, Nome, Fort Yukon) The high Rocky Mountains in Colorado, Wyoming, Idaho, Utah, Montana and the White Mountains of New Hampshire (notable cities: Fraser, Brian Head) Much of Canada from about 53–55°N to the tree line, including: Southern Labrador (notable cities: Labrador City) Certain areas within the Newfoundland interior and along its northern coast Quebec: Jamésie, Côte-Nord and far southern Nunavik Far northern Ontario The northern Prairie Provinces Near, but not including, the city of Edmonton The Rocky Mountain Foothills in Alberta and British Columbia Most of the Yukon (notable cities: Whitehorse, Dawson City) Most of the Northwest Territories (notable cities: Yellowknife, Inuvik) Southwestern Nunavut In the Southern Hemisphere, the Dfc climate is found only in small, isolated pockets in the Snowy Mountains of Australia, the Southern Alps of New Zealand, and the Lesotho Highlands. In South America, this climate occurs on the western slope of the central Andes in Chile and Argentina, where climatic conditions are notably more humid than on the eastern slope. The presence of the Andes mountain range contributes to a wetter climate on the western slope by capturing moisture from the Pacific Ocean, resulting in increased precipitation, especially during the winter months. This climate zone supports the presence of temperate rainforests, mostly in the highest areas of the Valdivian rainforest in Chile and the subantarctic forest in Argentina. Dsc and Dsd distribution Climates classified as Dsc or Dsd, with a dry summer, are rare, occurring in very small areas at high elevation around the Mediterranean Basin, Iran, Kyrgyzstan, Tajikistan, Alaska and other parts of the northwestern United States (Eastern Washington, Eastern Oregon, Southern Idaho, California's Eastern Sierra), the Russian Far East, Akureyri, Iceland, Seneca, Oregon, and Atlin, British Columbia. Turkey and Afghanistan are exceptions; Dsc climates are common in Northeast Anatolia, in the Taurus and Köroğlu Mountains, and the Central Afghan highlands.
In the Southern Hemisphere, the Dsc climate is present in South America as a subarctic climate influenced by Mediterranean characteristics, often considered a high-altitude variant of the Mediterranean climate. It is located on the eastern slopes of the central Argentine Andes and in some sections on the Chilean side. While there are no major settlements exhibiting this climate, several localities in the vicinity experience it, such as San Carlos de Bariloche, Villa La Angostura, San Martín de los Andes, Balmaceda, Punta de Vacas, and Termas del Flaco. Dwc and Dwd distribution Climates classified as Dwc or Dwd, with a dry winter, are found in parts of East Asia, like China, where the Siberian High makes the winters colder than in places like Scandinavia or interior Alaska but extremely dry (typically with only a few millimetres of rainfall equivalent per month), meaning that winter snow cover is very limited. The Dwc climate can be found in: Much of northern Mongolia Russia: Most of Khabarovsk Krai except the south Southeastern Sakha Republic Southern Magadan Oblast Northern Amur Oblast Northern Buryatia Zabaykalsky Krai Irkutsk Oblast China: Tahe County and Mohe County in Heilongjiang Northern Hulunbuir in Inner Mongolia Gannan in Gansu (due to extreme elevation) Huangnan, eastern Hainan and eastern Guoluo in Qinghai (due to extreme elevation) Most of Garzê and Ngawa Autonomous Prefectures (due to extreme elevation) in Sichuan Most of Qamdo Prefecture (due to extreme elevation) in the Tibet Autonomous Region Parts of the Ladakh (including Siachen Glacier) and Spiti regions of India Middle reaches of the Himalayas in Nepal, Bhutan, Myanmar, and Northeast India. Parts of the Kaema Plateau (including Mount Baekdu, Samjiyon, Musan) in North Korea Southeast Fairbanks Census Area in Alaska In the Southern Hemisphere, small pockets of the Lesotho Highlands and the Drakensberg Mountains have a Dwc classification. See also Boreal ecology Köppen climate classification Subantarctic Taiga References Köppen climate types Climate of North America Climate of Europe Climate of Asia Ecology Subarctic
Subarctic climate
[ "Biology" ]
2,535
[ "Ecology" ]
50,063
https://en.wikipedia.org/wiki/Beefalo
Beefalo is a hybrid offspring of domestic cattle (Bos taurus), usually a male in managed breeding programs, and the American bison (Bison bison), usually a female in managed breeding programs. The breed was created to combine the characteristics of both animals for beef production. Beefalo are primarily cattle in genetics and appearance, with the breed association defining a full Beefalo as one with three-eighths (37.5%) bison genetics, while animals with higher percentages of bison genetics are called "bison hybrids". However, genomic analysis has found that the vast majority of "Beefalo", even those considered pedigree by the breed association, have no detectable bison ancestry; no sampled Beefalo had more than 18% bison ancestry, and most "Beefalo" consist of a mixture of taurine cattle and zebu cattle ancestry. History and genomics Accidental crosses were noticed as long ago as 1749 in the Southern states of North America, during British colonization. Cattle and bison were first intentionally crossbred during the mid-19th century. One of the first efforts to cross-breed bison and domestic cattle was in 1815 by Robert Wickliffe of Lexington, Kentucky. Wickliffe's experiments continued for up to 30 years. Another early deliberate attempt to cross-breed bison with cattle was made by Colonel Samuel Bedson, warden of Stoney Mountain Penitentiary, Winnipeg, in 1880. Bedson bought eight bison from a captive herd of James McKay and inter-bred them with Durham cattle. The hybrids raised by Bedson were described by naturalist Ernest Thompson Seton. After seeing thousands of cattle die in a blizzard in 1886, Charles "Buffalo" Jones, a co-founder of Garden City, Kansas, also worked to cross bison and cattle at a ranch near the future Grand Canyon National Park, with the hope the animals could survive the harsh winters. He called the result "cattalo" in 1888. Mossom Martin Boyd of Bobcaygeon, Ontario first started the practice in Canada, publishing about some of his outcomes in the Journal of Heredity. After his death in 1914, the Canadian government continued experiments in crossbreeding up to 1964, with little success. For example, in 1936 the Canadian government had successfully cross-bred only 30 cattalos. It was found early on that crossing a male bison with a domestic cow would produce few offspring, but that crossing a domestic bull with a bison cow apparently solved the problem. The female offspring proved fertile, but rarely so for the males. Although the cattalo performed well, the mating problems meant the breeder had to maintain a herd of wild and difficult-to-handle bison cows. In 1965, Jim Burnett of Montana produced a hybrid bull that was fertile. Californian cattle rancher D.C. "Bud" Basolo developed the Beefalo breed in the 1970s. He did not reveal the precise pedigree of the breed. The breed is defined by the American Beefalo Association as being genetically at least five-eighths Bos taurus and at most three-eighths Bison bison. In 2024, a genetic study, including historical samples from Basolo's foundational herd, found that the majority of "Beefalo" cattle that were genomically sequenced (39 out of 47 sampled), including those from Basolo's original herd, had no detectable bison ancestry. Of the eight that did have some bison ancestry, the proportion was no higher than 18% (and as low as 2% in some individuals), much lower than the supposed pedigree fraction.
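The three-eighths figure in the breed definition can be reproduced with simple expected-ancestry arithmetic: each parent transmits half its genome, so an offspring's expected bison fraction is the mean of its parents' fractions. The short Python sketch below is purely illustrative (the function name and the particular cross sequence are assumptions; real genomic fractions vary around these expectations, as the 2024 study's results underline).

```python
def offspring_fraction(parent_a: float, parent_b: float) -> float:
    """Expected bison ancestry of an offspring: the mean of the parents'
    fractions, since each parent contributes half of the genome."""
    return (parent_a + parent_b) / 2.0

f1 = offspring_fraction(1.0, 0.0)                # bison x cattle -> 1/2
backcross = offspring_fraction(f1, 0.0)          # F1 x cattle    -> 1/4
full_beefalo = offspring_fraction(f1, backcross) # 1/2 x 1/4      -> 3/8
print(full_beefalo)  # 0.375, the three-eighths of the breed standard
```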
Most "Beefalo" were instead found to be either entirely of taurine cattle ancestry or, more commonly, mixed with varying levels of zebu ancestry in proportions of 2% to 38%. Nutrition characteristics A United States Department of Agriculture study found Beefalo meat, like bison meat, to be lower in fat and cholesterol than standard beef cattle. Registration In 1983, the three main Beefalo registration groups reorganized under the American Beefalo World Registry. Until November 2008, there were two Beefalo associations, the American Beefalo World Registry and American Beefalo International. These organizations jointly formed the American Beefalo Association, Inc., which currently operates as the registering body for Beefalo in the United States. Effect on bison conservation Most current bison herds are "genetically polluted", meaning that they are partly crossbred with cattle. There are only four genetically unmixed American bison herds left, and only two that are also free of brucellosis: the Wind Cave bison herd that roams Wind Cave National Park, South Dakota; and the Henry Mountains herd in the Henry Mountains of Utah. Dr. Dirk Van Vuren, formerly of the University of Kansas, however, points out that "The bison today that carry cattle DNA look exactly like bison, function exactly like bison and in fact are bison. For conservation groups, the interest is that they are not totally pure." Environmental impacts Although popular with tourists and hunters, escaped beefalo have been destroying parts of the ecosystem, as well as ancient stone ruins, in the Grand Canyon and threatening native species. By 2015, numbers were growing by 50% per year and there were at least 600 animals roaming the park. Grand Canyon National Park was reporting an accident a day due to tourist interactions with beefalo. In 2018, the park began trapping the animals and giving them to Native American tribes outside the state. In addition, volunteer hunters were enlisted to cull the herds, with a goal of reducing the population to 200 animals. As of 2022, the herd was down to 216 individuals, with only four having been taken by hunters. Cattalo The term "cattalo", a portmanteau of cattle and buffalo, is defined by United States law as a cross of bison and cattle which has a bison appearance. In some American states, cattalo are regulated as "exotic animals", along with pure bison and deer. However, in most states, bison and hybrids that are raised solely for livestock purposes, similar to cattle, are considered domestic animals like cattle and do not require special permits. See also American Breed Bovid hybrid Buddy the Beefalo Dzo Haldane's rule Yakalo Żubroń References External links Kansas State Historical Society The Story of Cattalo. Canadian Geographic Beef Cattle breeds originating in Canada Cattle breeds originating in the United States Cattle crossbreeds Intergeneric hybrids American bison
Beefalo
[ "Biology" ]
1,314
[ "Intergeneric hybrids", "Hybrid organisms" ]
50,073
https://en.wikipedia.org/wiki/Chemical%20warfare
Chemical warfare (CW) involves using the toxic properties of chemical substances as weapons. This type of warfare is distinct from nuclear warfare, biological warfare and radiological warfare, which together make up CBRN, the military acronym for chemical, biological, radiological, and nuclear (warfare or weapons), all of which are considered "weapons of mass destruction" (WMDs), a term that contrasts with conventional weapons. The use of chemical weapons in international armed conflicts is prohibited under international humanitarian law by the 1925 Geneva Protocol and the Hague Conventions of 1899 and 1907. The 1993 Chemical Weapons Convention prohibits signatories from acquiring, stockpiling, developing, and using chemical weapons in all circumstances except for very limited purposes (research, medical, pharmaceutical or protective). Definition Chemical warfare is different from the use of conventional weapons or nuclear weapons because the destructive effects of chemical weapons are not primarily due to any explosive force. The offensive use of living organisms (such as anthrax) is considered biological warfare rather than chemical warfare; however, the use of nonliving toxic products produced by living organisms (e.g. toxins such as botulinum toxin, ricin, and saxitoxin) is considered chemical warfare under the provisions of the Chemical Weapons Convention (CWC). Under this convention, any toxic chemical, regardless of its origin, is considered a chemical weapon unless it is used for purposes that are not prohibited (an important legal definition known as the General Purpose Criterion). About 70 different chemicals have been used or were stockpiled as chemical warfare agents during the 20th century. The entire class, known as Lethal Unitary Chemical Agents and Munitions, has been scheduled for elimination by the CWC. Under the convention, chemicals that are toxic enough to be used as chemical weapons, or that may be used to manufacture such chemicals, are divided into three groups according to their purpose and treatment: Schedule 1 – Have few, if any, legitimate uses. These may only be produced or used for research, medical, pharmaceutical or protective purposes (i.e. testing of chemical weapons sensors and protective clothing). Examples include nerve agents, ricin, lewisite and mustard gas. Any production over 100 grams (3.5 oz) per year must be reported to the Organisation for the Prohibition of Chemical Weapons (OPCW) and a country can have a stockpile of no more than one tonne of these chemicals. Schedule 2 – Have no large-scale industrial uses, but may have legitimate small-scale uses. Examples include dimethyl methylphosphonate, a precursor to sarin also used as a flame retardant, and thiodiglycol, a precursor chemical used in the manufacture of mustard gas but also widely used as a solvent in inks. Schedule 3 – Have legitimate large-scale industrial uses. Examples include phosgene and chloropicrin. Both have been used as chemical weapons but phosgene is an important precursor in the manufacture of plastics, and chloropicrin is used as a fumigant. The OPCW must be notified of, and may inspect, any plant producing more than 30 tons per year. Chemical weapons are divided into three categories: Category 1 – based on Schedule 1 substances Category 2 – based on non-Schedule 1 substances Category 3 – devices and equipment designed to use chemical weapons, without the substances themselves History Simple chemical weapons were used sporadically throughout antiquity and into the Industrial Age.
It was not until the 19th century that the modern conception of chemical warfare emerged, as various scientists and nations proposed the use of asphyxiating or poisonous gases. Alarm among nations and scientists led to multiple international treaties banning chemical weapons. This, however, did not prevent the extensive use of chemical weapons in World War I. Newly developed agents such as chlorine gas were used by both sides to try to break the stalemate of trench warfare. Though largely ineffective over the long run, gas decidedly changed the nature of the war. In many cases the gases used did not kill, but instead horribly maimed, injured, or disfigured casualties. Some 1.3 million gas casualties were recorded, which may have included up to 260,000 civilian casualties. The interwar years saw the occasional use of chemical weapons, mainly to put down rebellions. In Nazi Germany, much research went into developing new chemical weapons, such as potent nerve agents. However, chemical weapons saw little battlefield use in World War II. Both sides were prepared to use such weapons, but the Allied Powers never did, and the Axis used them only very sparingly. The reason for the lack of use by the Nazis, despite the considerable efforts that had gone into developing new varieties, might have been a lack of technical ability or fears that the Allies would retaliate with their own chemical weapons. Those fears were not unfounded: the Allies made comprehensive plans for defensive and retaliatory use of chemical weapons, and stockpiled large quantities. Japanese forces, as part of the Axis, used them more widely, though only against their Asian enemies, as they also feared that using them on Western powers would result in retaliation. Chemical weapons were frequently used against the Kuomintang's troops and the Chinese communist forces (the later People's Liberation Army). However, the Nazis did use poison gas extensively against civilians, above all in the genocide of European Jews in the Holocaust. Vast quantities of Zyklon B gas and carbon monoxide were used in the gas chambers of Nazi extermination camps, accounting for the overwhelming majority of some three million deaths there. This remains the deadliest use of poison gas in history. The post-war era has seen limited, though devastating, use of chemical weapons. Some 100,000 Iranian troops were casualties of Iraqi chemical weapons during the Iran–Iraq War. Iraq used mustard gas and nerve agents against its own civilians in the 1988 Halabja chemical attack. The Cuban intervention in Angola saw limited use of organophosphates. Terrorist groups have also used chemical weapons, notably in the Tokyo subway sarin attack and the Matsumoto incident. See also chemical terrorism. In the 21st century, the Ba'athist regime in Syria has used chemical weapons against civilian populations, resulting in numerous deadly chemical attacks during the Syrian civil war, in which the Syrian government has deployed sarin, chlorine, and mustard gas, mostly against civilians. Russia has used chemical weapons during its invasion of Ukraine, mainly by dropping K-51 aerosol grenades filled with CS gas from unmanned drones. As of 13 December 2024, the Ukrainian military claimed that since the full-scale invasion over 2,000 of its soldiers had been hospitalised due to Russian gas attacks and three had died.
The use of gas was often concealed by intense Russian artillery, rocket, and bomb attacks; Ukrainian soldiers forced out of their dugouts and trenches by the gas were then exposed to that fire. The gas grenades were often dropped by drones, and cold weather reduced the gas's effectiveness. A recent US aid package included "nuclear, chemical and radiological protective equipment". Technology Although crude chemical warfare has been employed in many parts of the world for thousands of years, "modern" chemical warfare began during World War I – see Chemical weapons in World War I. Initially, only well-known commercially available chemicals and their variants were used. These included chlorine and phosgene gas. The methods used to disperse these agents during battle were relatively unrefined and inefficient. Even so, casualties could be heavy, due to the mainly static troop positions which were characteristic features of trench warfare. Germany, the first side to employ chemical warfare on the battlefield, simply opened canisters of chlorine upwind of the opposing side and let the prevailing winds do the dissemination. Soon after, the French modified artillery munitions to contain phosgene – a much more effective method that became the principal means of delivery. Since the development of modern chemical warfare in World War I, nations have pursued research and development on chemical weapons that falls into four major categories: new and more deadly agents; more efficient methods of delivering agents to the target (dissemination); more reliable means of defense against chemical weapons; and more sensitive and accurate means of detecting chemical agents. Chemical warfare agents The chemical used in warfare is called a chemical warfare agent (CWA). About 70 different chemicals have been used or stockpiled as chemical warfare agents during the 20th and 21st centuries. These agents may be in liquid, gas or solid form. Liquid agents that evaporate quickly are said to be volatile or have a high vapor pressure. Many chemical agents are volatile organic compounds so they can be dispersed over a large region quickly. The earliest target of chemical warfare agent research was not toxicity, but development of agents that can affect a target through the skin and clothing, rendering protective gas masks useless. In July 1917, the Germans employed sulfur mustard. Mustard agents easily penetrate leather and fabric to inflict painful burns on the skin. Chemical warfare agents are divided into lethal and incapacitating categories. A substance is classified as incapacitating if less than 1/100 of the lethal dose causes incapacitation, e.g., through nausea or visual problems. The distinction between lethal and incapacitating substances is not fixed, but relies on a statistical average called the median lethal dose, or LD50. Persistency Chemical warfare agents can be classified according to their persistency, a measure of the length of time that a chemical agent remains effective after dissemination. Chemical agents are classified as persistent or nonpersistent. Agents classified as nonpersistent lose effectiveness after only a few minutes or hours or even only a few seconds. Purely gaseous agents such as chlorine are nonpersistent, as are highly volatile agents such as sarin. Tactically, nonpersistent agents are very useful against targets that are to be taken over and controlled very quickly. Apart from the agent used, the delivery mode is very important.
To achieve a nonpersistent deployment, the agent is dispersed into very small droplets comparable with the mist produced by an aerosol can. In this form not only the gaseous part of the agent (around 50%) but also the fine aerosol can be inhaled or absorbed through pores in the skin. Modern doctrine requires very high concentrations almost instantly in order to be effective (one breath should contain a lethal dose of the agent). To achieve this, the primary weapons used would be rocket artillery or bombs and large ballistic missiles with cluster warheads. Contamination in the target area is low or nonexistent, and after four hours sarin or similar agents are no longer detectable. By contrast, persistent agents tend to remain in the environment for as long as several weeks, complicating decontamination. Defense against persistent agents requires shielding for extended periods of time. Nonvolatile liquid agents, such as blister agents and the oily VX nerve agent, do not easily evaporate into a gas, and therefore present primarily a contact hazard. The droplet size used for persistent delivery goes up to 1 mm, increasing the falling speed, so that about 80% of the deployed agent reaches the ground, resulting in heavy contamination. Deployment of persistent agents is intended to constrain enemy operations by denying access to contaminated areas. Possible targets include enemy flank positions (averting possible counterattacks), artillery regiments, command posts or supply lines. Because it is not necessary to deliver large quantities of the agent in a short period of time, a wide variety of weapons systems can be used. A special form of persistent agent is the thickened agent. These comprise a common agent mixed with thickeners to provide gelatinous, sticky agents. Primary targets for this kind of use include airfields, due to the increased persistency and difficulty of decontaminating affected areas. Classes Chemical weapons are agents that come in four categories: choking, blister, blood and nerve. The agents are organized into several categories according to the manner in which they affect the human body. The names and number of categories vary slightly from source to source, but in general, types of chemical warfare agents are as follows: There are other chemicals used militarily that are not scheduled by the CWC, and thus are not controlled under the CWC treaties. These include: Defoliants and herbicides that destroy vegetation, but are not immediately toxic or poisonous to human beings. Their use is classified as herbicidal warfare. Some batches of Agent Orange, for instance, used by the British during the Malayan Emergency and the United States during the Vietnam War, contained dioxins as manufacturing impurities. Dioxins, rather than Agent Orange itself, are responsible for long-term cancer effects and for genetic damage leading to serious birth defects. Incendiary or explosive chemicals (such as napalm, extensively used by the United States during the Korean War and the Vietnam War, or dynamite), because their destructive effects are primarily due to fire or explosive force, and not direct chemical action. Their use is classified as conventional warfare. Viruses, bacteria, or other organisms. Their use is classified as biological warfare. Toxins produced by living organisms are considered chemical weapons, although the boundary is blurry. Toxins are covered by the Biological Weapons Convention.
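The dependence of falling speed on droplet size noted under Persistency above can be illustrated with Stokes' law for small spheres settling in still air. The Python sketch below is a rough physical illustration only, assuming a water-like agent density and standard air properties; Stokes' law ceases to be valid for millimetre-scale drops, so the second figure overstates the true speed, but the qualitative contrast with aerosol-sized droplets is the point.

```python
G = 9.81           # gravitational acceleration, m/s^2
RHO_DROP = 1000.0  # droplet density, kg/m^3 (water-like; an assumption)
RHO_AIR = 1.2      # air density, kg/m^3
MU_AIR = 1.8e-5    # dynamic viscosity of air, Pa*s

def stokes_settling_speed(radius_m: float) -> float:
    """Terminal settling speed v = 2 r^2 g (rho_p - rho_f) / (9 mu),
    valid only at low Reynolds number, i.e. for small droplets."""
    return 2.0 * radius_m**2 * G * (RHO_DROP - RHO_AIR) / (9.0 * MU_AIR)

print(stokes_settling_speed(5e-6))  # ~0.003 m/s: a 10-micrometre aerosol
                                    # droplet drifts and can be inhaled
print(stokes_settling_speed(5e-4))  # ~30 m/s nominal for a 1 mm drop; the
                                    # real value is a few m/s, but such
                                    # drops clearly fall out onto the ground
```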
Designations Most chemical weapons are assigned a one- to three-letter "NATO weapon designation" in addition to, or in place of, a common name. Binary munitions, in which precursors for chemical warfare agents are automatically mixed in the shell to produce the agent just prior to its use, are indicated by a "-2" following the agent's designation (for example, GB-2 and VX-2). Some examples are given below: Delivery The most important factor in the effectiveness of chemical weapons is the efficiency of its delivery, or dissemination, to a target. The most common techniques include munitions (such as bombs, projectiles, warheads) that allow dissemination at a distance and spray tanks which disseminate from low-flying aircraft. Developments in the techniques of filling and storage of munitions have also been important. Although there have been many advances in chemical weapon delivery since World War I, it is still difficult to achieve effective dispersion. The dissemination is highly dependent on atmospheric conditions because many chemical agents act in gaseous form. Thus, weather observations and forecasting are essential to optimize weapon delivery and reduce the risk of injuring friendly forces. Dispersion Dispersion is placing the chemical agent upon or adjacent to a target immediately before dissemination, so that the material is most efficiently used. Dispersion is the simplest technique of delivering an agent to its target. The most common techniques are munitions, bombs, projectiles, spray tanks and warheads. World War I saw the earliest implementation of this technique. The first actual chemical ammunition was the French 26 mm cartouche suffocante rifle grenade, fired from a flare carbine. It contained a small charge of the tear-producer ethyl bromoacetate, and was used in autumn 1914 – with little effect on the Germans. The German military, by contrast, tried to increase the effect of shrapnel shells by adding an irritant – dianisidine chlorosulfonate. Its use against the British at Neuve Chapelle in October 1914 went unnoticed by them. Hans Tappen, a chemist in the Heavy Artillery Department of the War Ministry, suggested to his brother, the Chief of the Operations Branch at German General Headquarters, the use of the tear-gases benzyl bromide or xylyl bromide. Shells were tested successfully at the Wahn artillery range near Cologne on January 9, 1915, and an order was placed for howitzer shells, designated 'T-shells' after Tappen. A shortage of shells limited the first use against the Russians at the Battle of Bolimów on January 31, 1915; the liquid failed to vaporize in the cold weather, and again the experiment went unnoticed by the Allies. The first effective use came when German forces at the Second Battle of Ypres simply opened cylinders of chlorine and allowed the wind to carry the gas across enemy lines. While simple, this technique had numerous disadvantages. Moving large numbers of heavy gas cylinders to the front-line positions from where the gas would be released was a lengthy and difficult logistical task. Stockpiles of cylinders had to be stored at the front line, posing a great risk if hit by artillery shells. Gas delivery depended greatly on wind speed and direction. If the wind was fickle, as at the Battle of Loos, the gas could blow back, causing friendly casualties. Gas clouds gave plenty of warning, allowing the enemy time to protect themselves, though many soldiers found the sight of a creeping gas cloud unnerving.
This made the gas doubly effective, as, in addition to damaging the enemy physically, it also had a psychological effect on the intended victims. Another disadvantage was that gas clouds had limited penetration, capable only of affecting the front-line trenches before dissipating. Although it produced limited results in World War I, this technique shows how simple chemical weapon dissemination can be. Shortly after this "open canister" dissemination, French forces developed a technique for delivery of phosgene in a non-explosive artillery shell. This technique overcame many of the risks of dealing with gas in cylinders. First, gas shells were independent of the wind and increased the effective range of gas, making any target within reach of guns vulnerable. Second, gas shells could be delivered without warning, especially the clear, nearly odorless phosgene; there are numerous accounts of gas shells, landing with a "plop" rather than exploding, being initially dismissed as dud high-explosive or shrapnel shells, giving the gas time to work before the soldiers were alerted and took precautions. The major drawback of artillery delivery was the difficulty of achieving a killing concentration. Each shell had a small gas payload and an area would have to be subjected to saturation bombardment to produce a cloud to match cylinder delivery. A British solution to the problem was the Livens Projector. This was effectively a large-bore mortar dug into the ground, which used the gas cylinders themselves as projectiles – firing a cylinder well over a kilometre. This combined the gas volume of cylinders with the range of artillery. Over the years, there were some refinements in this technique. In the 1950s and early 1960s, chemical artillery rockets and cluster bombs contained a multitude of submunitions, so that a large number of small clouds of the chemical agent would form directly on the target. Thermal dissemination Thermal dissemination is the use of explosives or pyrotechnics to deliver chemical agents. This technique, developed in the 1920s, was a major improvement over earlier dispersal techniques, in that it allowed significant quantities of an agent to be disseminated over a considerable distance. Thermal dissemination remains the principal method of disseminating chemical agents today. Most thermal dissemination devices consist of a bomb or projectile shell that contains a chemical agent and a central "burster" charge; when the burster detonates, the agent is expelled laterally. Thermal dissemination devices, though common, are not particularly efficient. First, a percentage of the agent is lost by incineration in the initial blast and by being forced onto the ground. Second, the sizes of the particles vary greatly because explosive dissemination produces a mixture of liquid droplets of variable and difficult to control sizes. The efficacy of thermal detonation is greatly limited by the flammability of some agents. For flammable aerosols, the cloud is sometimes totally or partially ignited by the disseminating explosion in a phenomenon called flashing. Explosively disseminated VX will ignite roughly one third of the time. Despite a great deal of study, flashing is still not fully understood, and a solution to the problem would be a major technological advance. Despite the limitations of central bursters, most nations use this method in the early stages of chemical weapon development, in part because standard munitions can be adapted to carry the agents.
Aerodynamic dissemination Aerodynamic dissemination is the non-explosive delivery of a chemical agent from an aircraft, allowing aerodynamic stress to disseminate the agent. This technique is the most recent major development in chemical agent dissemination, originating in the mid-1960s. This technique eliminates many of the limitations of thermal dissemination by eliminating the flashing effect and theoretically allowing precise control of particle size. In actuality, the altitude of dissemination, wind direction and velocity, and the direction and velocity of the aircraft greatly influence particle size. There are other drawbacks as well; ideal deployment requires precise knowledge of aerodynamics and fluid dynamics, and because the agent must usually be dispersed within the boundary layer (at very low altitude above the ground), it puts pilots at risk. Significant research is still being applied toward this technique. For example, by modifying the properties of the liquid, its breakup when subjected to aerodynamic stress can be controlled and an idealized particle distribution achieved, even at supersonic speed. Additionally, advances in fluid dynamics, computer modeling, and weather forecasting allow an ideal direction, speed, and altitude to be calculated, such that warfare agent of a predetermined particle size can predictably and reliably hit a target. Protection against chemical warfare Ideal protection begins with nonproliferation treaties such as the CWC, and detecting, very early, the signatures of someone building a chemical weapons capability. These include a wide range of intelligence disciplines, such as economic analysis of exports of dual-use chemicals and equipment, human intelligence (HUMINT) such as diplomatic, refugee, and agent reports; photography from satellites, aircraft and drones (IMINT); examination of captured equipment (TECHINT); communications intercepts (COMINT); and detection of chemical manufacturing and chemical agents themselves (MASINT). If all the preventive measures fail and there is a clear and present danger, then there is a need for detection of chemical attacks, collective protection, and decontamination. Since industrial accidents can cause dangerous chemical releases (e.g., the Bhopal disaster), these activities are things that civilian, as well as military, organizations must be prepared to carry out. In civilian situations in developed countries, these are duties of HAZMAT organizations, which most commonly are part of fire departments. Detection has been referred to above, as a technical MASINT discipline; specific military procedures, which are usually the model for civilian procedures, depend on the equipment, expertise, and personnel available. When chemical agents are detected, an alarm needs to sound, with specific warnings over emergency broadcasts and the like. There may be a warning to expect an attack. If, for example, the captain of a US Navy ship believes there is a serious threat of chemical, biological, or radiological attack, the crew may be ordered to set Circle William, which means closing all openings to outside air, running breathing air through filters, and possibly starting a system that continually washes down the exterior surfaces. Civilian authorities dealing with an attack or a toxic chemical accident will invoke the Incident Command System, or local equivalent, to coordinate defensive measures.
Individual protection starts with a gas mask and, depending on the nature of the threat, through various levels of protective clothing up to a complete chemical-resistant suit with a self-contained air supply. The US military defines various levels of MOPP (mission-oriented protective posture) from mask to full chemical resistant suits; Hazmat suits are the civilian equivalent, but go farther to include a fully independent air supply, rather than the filters of a gas mask. Collective protection allows continued functioning of groups of people in buildings or shelters, the latter of which may be fixed, mobile, or improvised. With ordinary buildings, this may be as basic as plastic sheeting and tape, although if the protection needs to be continued for any appreciable length of time, there will need to be an air supply, typically an enhanced gas mask. Decontamination Decontamination varies with the particular chemical agent used. Some nonpersistent agents, including most pulmonary agents (chlorine, phosgene, and so on), blood gases, and nonpersistent nerve gases (e.g., GB), will dissipate from open areas, although powerful exhaust fans may be needed to clear out buildings where they have accumulated. In some cases, it might be necessary to neutralize them chemically, as with ammonia as a neutralizer for hydrogen cyanide or chlorine. Riot control agents such as CS will dissipate in an open area, but things contaminated with CS powder need to be aired out, washed by people wearing protective gear, or safely discarded. Mass decontamination is a less common requirement for people than equipment, since people may be immediately affected and treatment is the action required. It is a requirement when people have been contaminated with persistent agents. Treatment and decontamination may need to be simultaneous, with the medical personnel protecting themselves so they can function. There may need to be immediate intervention to prevent death, such as injection of atropine for nerve agents. Decontamination is especially important for people contaminated with persistent agents; many of the fatalities after the explosion of a WWII US ammunition ship carrying sulfur mustard, in the harbor of Bari, Italy, after a German bombing on December 2, 1943, came when rescue workers, not knowing of the contamination, bundled cold, wet seamen in tight-fitting blankets. For decontaminating equipment and buildings exposed to persistent agents, such as blister agents, VX or other agents made persistent by mixing with a thickener, special equipment and materials might be needed. Some type of neutralizing agent will be needed, e.g. in the form of a spraying device with neutralizing agents such as chlorine, Fichlor, strong alkaline solutions, or enzymes. In other cases, a specific chemical decontaminant will be required. Sociopolitical climate There are many instances of the use of chemical weapons in battles documented in Greek and Roman historical texts; the earliest example was the deliberate poisoning of Kirrha's water supply with hellebore in the First Sacred War, Greece, about 590 BC. One of the earliest reactions to the use of chemical agents was from Rome. Struggling to defend themselves from the Roman legions, Germanic tribes poisoned the wells of their enemies, and Roman jurists are recorded as declaring "armis bella non venenis geri", meaning "war is fought with weapons, not with poisons." Yet the Romans themselves resorted to poisoning wells of besieged cities in Anatolia in the 2nd century BC.
Before 1915 the use of poisonous chemicals in battle was typically the result of local initiative, and not the result of an active government chemical weapons program. There are many reports of the isolated use of chemical agents in individual battles or sieges, but there was no true tradition of their use outside of incendiaries and smoke. Despite this tendency, there have been several attempts to initiate large-scale implementation of poison gas in several wars, but with the notable exception of World War I, the responsible authorities generally rejected the proposals for ethical reasons or fears of retaliation. For example, in 1854 Lyon Playfair (later 1st Baron Playfair, GCB, PC, FRS; 1818–1898), a British chemist, proposed using a cacodyl cyanide-filled artillery shell against enemy ships during the Crimean War. The British Ordnance Department rejected the proposal as "as bad a mode of warfare as poisoning the wells of the enemy." Efforts to eradicate chemical weapons August 27, 1874: The Brussels Declaration Concerning the Laws and Customs of War is signed, specifically forbidding the "employment of poison or poisoned weapons", although the treaty was not adopted by any nation and never went into effect. September 4, 1900: The First Hague Convention, which includes a declaration banning the "use of projectiles the object of which is the diffusion of asphyxiating or deleterious gases," enters into force. January 26, 1910: The Second Hague Convention enters into force, prohibiting the use of "poison or poisoned weapons" in warfare. February 6, 1922: After World War I, the Washington Arms Conference Treaty prohibited the use of asphyxiating, poisonous or other gases. It was signed by the United States, Britain, Japan, France, and Italy, but France objected to other provisions in the treaty and it never went into effect. February 8, 1928: The Geneva Protocol enters into force, prohibiting the use of "asphyxiating, poisonous or other gases, and of all analogous liquids, materials or devices" and "bacteriological methods of warfare". Chemical weapon proliferation Despite numerous efforts to reduce or eliminate them, some nations continue to research and/or stockpile chemical warfare agents. In 1997, future US Vice President Dick Cheney opposed the ratification of a treaty banning the use of chemical weapons, a recently unearthed letter shows. In a letter dated April 8, 1997, then Halliburton-CEO Cheney told Sen. Jesse Helms, the chairman of the Senate Foreign Relations Committee, that it would be a mistake for America to join the convention. "Those nations most likely to comply with the Chemical Weapons Convention are not likely to ever constitute a military threat to the United States. The governments we should be concerned about are likely to cheat on the CWC, even if they do participate," reads the letter, published by the Federation of American Scientists. The CWC was ratified by the Senate that same month. In the following years, Albania, Libya, Russia, the United States, and India declared over 71,000 metric tons of chemical weapon stockpiles, and destroyed a third of them. Under the terms of the agreement, the United States and Russia agreed to eliminate the rest of their supplies of chemical weapons by 2012, but ended up taking far longer to do so, as shown in the following section of this article. Chemical weapons destruction India In June 1997, India declared that it had a stockpile of 1,044 tons of sulphur mustard in its possession.
India's declaration of its stockpile came after its entry into the Chemical Weapons Convention, which created the Organisation for the Prohibition of Chemical Weapons; on January 14, 1993, India became one of the original signatories to the Chemical Weapons Convention. By 2005, from among six nations that had declared their possession of chemical weapons, India was the only country to meet its deadline for chemical weapons destruction and for inspection of its facilities by the Organisation for the Prohibition of Chemical Weapons. By 2006, India had destroyed more than 75 percent of its chemical weapons and material stockpile and was granted an extension to complete 100 percent destruction of its stocks by April 2009. On May 14, 2009, India informed the United Nations that it had completely destroyed its stockpile of chemical weapons. Iraq The Director-General of the Organisation for the Prohibition of Chemical Weapons, Ambassador Rogelio Pfirter, welcomed Iraq's decision to join the OPCW as a significant step to strengthening global and regional efforts to prevent the spread and use of chemical weapons. The OPCW announced "The government of Iraq has deposited its instrument of accession to the Chemical Weapons Convention with the Secretary General of the United Nations and within 30 days, on 12 February 2009, will become the 186th State Party to the Convention". Iraq has also declared stockpiles of chemical weapons, and because of its recent accession is the only State Party exempted from the destruction time-line. Japan During the Second Sino-Japanese War (1937–1945) Japan stored chemical weapons on the territory of mainland China. The stockpile mostly contained a sulfur mustard-lewisite mixture. The weapons are classified as abandoned chemical weapons under the Chemical Weapons Convention, and in September 2010 Japan began their destruction in Nanjing using mobile destruction facilities. Russia Russia signed the Chemical Weapons Convention on January 13, 1993, and ratified it on November 5, 1995. In 1997 it declared an arsenal of 39,967 tons of chemical weapons, by far the largest declared arsenal, consisting of blister agents (lewisite, sulfur mustard and lewisite-mustard mix) and nerve agents (sarin, soman and VX). Russia met its treaty obligations by destroying 1 percent of its chemical agents by the 2002 deadline set out by the Chemical Weapons Convention, but requested an extension of the deadlines of 2004 and 2007 due to the technical, financial, and environmental challenges of chemical disposal. Since then, Russia has received help from other countries, such as Canada, which donated C$100,000, plus a further C$100,000 already donated, to the Russian Chemical Weapons Destruction Program. This money will be used to complete work at Shchuch'ye and support the construction of a chemical weapons destruction facility at Kizner (Russia), where the destruction of nearly 5,700 tons of nerve agent, stored in approximately 2 million artillery shells and munitions, will be undertaken. Canadian funds are also being used for the operation of a Green Cross Public Outreach Office, to keep the civilian population informed on the progress made in chemical weapons destruction activities.
As of July 2011, Russia had destroyed 48 percent (18,241 tons) of its stockpile at destruction facilities located in Gorny (Saratov Oblast) and Kambarka (Udmurt Republic) – where operations have finished – and at Shchuch'ye (Kurgan Oblast), Maradykovsky (Kirov Oblast) and Leonidovka (Penza Oblast), whilst installations were under construction in Pochep (Bryansk Oblast) and Kizner (Udmurt Republic). As of August 2013, 76 percent (30,500 tons) had been destroyed, and Russia left the Cooperative Threat Reduction (CTR) Program, which had partially funded chemical weapons destruction. In September 2017, OPCW announced that Russia had destroyed its entire chemical weapons stockpile. United States On November 25, 1969, President Richard Nixon unilaterally renounced the offensive use of biological and toxin weapons, but the U.S. continued to maintain an offensive chemical weapons program. From May 1964 to the early 1970s the U.S. participated in Operation CHASE, a United States Department of Defense program that aimed to dispose of chemical weapons by sinking ships laden with the weapons in the deep Atlantic. After the Marine Protection, Research, and Sanctuaries Act of 1972, Operation Chase was scrapped and safer disposal methods for chemical weapons were researched, with the U.S. destroying several thousand tons of sulfur mustard by incineration at the Rocky Mountain Arsenal, and nearly 4,200 tons of nerve agent by chemical neutralization at Tooele Army Depot. The U.S. began stockpile reductions in the 1980s with the removal of outdated munitions and destroying its entire stock of 3-Quinuclidinyl benzilate (BZ or Agent 15) at the beginning of 1988. In June 1990 the Johnston Atoll Chemical Agent Disposal System began destruction of chemical agents stored on the Johnston Atoll in the Pacific, seven years before the Chemical Weapons Treaty came into effect. In 1986 President Ronald Reagan made an agreement with German Chancellor Helmut Kohl to remove the U.S. stockpile of chemical weapons from Germany. In 1990, as part of Operation Steel Box, two ships were loaded with over 100,000 shells containing sarin and VX, taken from U.S. Army weapons storage depots such as Miesau and then-classified FSTS (Forward Storage / Transportation Sites), and transported from Bremerhaven, Germany to Johnston Atoll in the Pacific, a 46-day nonstop journey. In the 1980s, at the urging of the Reagan administration, Congress provided funding for the manufacture of binary chemical weapons (sarin artillery shells) from 1987 until 1990, but this was halted after the U.S. and the Soviet Union entered into a bilateral agreement in June 1990. In the 1990 agreement, the U.S. and Soviet Union agreed to begin destroying their chemical weapons stockpiles before 1993 and to reduce them to no more than 5,000 agent tons each by the end of 2002. The agreement also provided for exchanges of data and inspections of sites to verify destruction. Following the collapse of the Soviet Union, the U.S.'s Nunn–Lugar Cooperative Threat Reduction program helped eliminate some of the chemical, biological and nuclear stockpiles of the former Soviet Union. The United Nations Conference on Disarmament in Geneva in 1980 led to the development of the Chemical Weapons Convention (CWC), a multilateral treaty that prohibited the development, production, stockpiling, and use of chemical weapons, and required the elimination of existing stockpiles. The treaty expressly prohibited state parties from making reservations (unilateral caveats).
During the Reagan administration and the George H. W. Bush administration, the U.S. participated in the negotiations toward the CWC. The CWC was concluded on September 3, 1992, and opened for signature on January 13, 1993. The U.S. became one of 87 original state parties to the CWC. President Bill Clinton submitted it to the U.S. Senate for ratification on November 23, 1993. Ratification was blocked in the Senate for years, largely as a result of opposition from Senator Jesse Helms, the chairman of the Senate Foreign Relations Committee. On April 24, 1997, the Senate gave its consent to ratification of the CWC by a 74–26 vote (satisfying the required two-thirds majority). The U.S. deposited its instrument of ratification at the United Nations on April 25, 1997, a few days before the CWC entered into force. The U.S. ratification allowed the U.S. to participate in the Organisation for the Prohibition of Chemical Weapons, the organization based in The Hague that oversees implementation of the CWC. Upon U.S. ratification of the CWC, the U.S. declared a total of 29,918 tons of chemical weapons, and committed to destroying all of the U.S.'s chemical weapons and bulk agent. The U.S. was one of eight states to declare a stockpile of chemical weapons and to commit to their safe elimination. The U.S. committed in the CWC to destroy its entire chemical arsenal within 10 years of the entry into force (i.e., by April 29, 2007). However, at a 2012 conference, the parties to the CWC agreed to extend the U.S. deadline to 2023. By 2012, stockpiles had been eliminated at seven of the U.S.'s nine chemical weapons depots, and 89.75% of the 1997 stockpile had been destroyed. The depots were the Aberdeen Chemical Agent Disposal Facility, Anniston Chemical Disposal Facility, Johnston Atoll, Newport Chemical Agent Disposal Facility, Pine Bluff Chemical Disposal Facility, Tooele Chemical Disposal Facility, Umatilla Chemical Disposal Facility, and Deseret Chemical Depot. The U.S. closed each site after the completion of stockpile destruction. In 2019, the U.S. began to eliminate its chemical-weapon stockpile at the last of the nine U.S. chemical weapons storage facilities: the Blue Grass Army Depot in Kentucky. By May 2021, the U.S. had destroyed all of its Category 2 and Category 3 chemical weapons and 96.52% of its Category 1 chemical weapons. The U.S. was scheduled to complete the elimination of all its chemical weapons by the September 2023 deadline. In July 2023, the OPCW confirmed the destruction of the last U.S. chemical munition, with which the last chemical weapon from the stockpiles declared by all States Parties to the Chemical Weapons Convention was verified as destroyed. The U.S. has maintained a "calculated ambiguity" policy that warns potential adversaries that a chemical or biological attack against the U.S. or its allies will prompt an "overwhelming and devastating" response. The policy deliberately leaves open the question of whether the U.S. would respond to a chemical attack with nuclear retaliation. Commentators have noted that this policy gives policymakers more flexibility, at the possible cost of decreased strategic clarity. Anti-agriculture Herbicidal warfare Although herbicidal warfare uses chemical substances, its main purpose is to disrupt agricultural food production and/or to destroy plants which provide cover or concealment to the enemy. The use of herbicides by the U.S. military during the Vietnam War has left tangible, long-term impacts upon the Vietnamese people and U.S. veterans of the war.
The government of Vietnam says that around 24% of the forests of Southern Vietnam were defoliated and up to four million people in Vietnam were exposed to Agent Orange. They state that as many as three million people have developed illness because of Agent Orange, while the Red Cross of Vietnam estimates that up to one million people were disabled or have health problems associated with Agent Orange. The United States government has described these figures as unreliable. During the war, the U.S. fought the North Vietnamese and their allies in Laos and Cambodia, dropping large quantities of Agent Orange in each of those countries; according to one estimate, significant quantities were dropped in both Laos and Cambodia. Because Laos and Cambodia were officially neutral during the Vietnam War, the U.S. attempted to keep secret its military involvement in these countries. The U.S. has stated that Agent Orange was not widely used there and therefore has not offered assistance to affected Cambodians or Laotians, and limits benefits to American veterans and CIA personnel who were stationed there. Anti-livestock During the Mau Mau Uprising in 1952, the poisonous latex of the African milk bush was used to kill cattle. See also 1990 Chemical Weapons Accord Ali Hassan al-Majid Area denial weapon Chemical weapon designation Chemical weapons and the United Kingdom Gas chamber List of CBRN warfare forces List of chemical warfare agents List of highly toxic gases Ronald Maddison Psychochemical weapon Saint Julien Memorial Sardasht, West Azerbaijan, a town attacked with chemical weapons during the Iran–Iraq War Stink bomb United States Army Medical Research Institute of Chemical Defense Notes References CBWInfo.com (2001). A Brief History of Chemical and Biological Weapons: Ancient Times to the 19th Century. Retrieved November 24, 2004. Chomsky, Noam (March 4, 2001). Prospects for Peace in the Middle East, page 2. Lecture. Cordette, Jessica, MPH(c) (2003). Chemical Weapons of Mass Destruction. Retrieved November 29, 2004. Smart, Jeffery K., M.A. (1997). History of Biological and Chemical Warfare. Retrieved November 24, 2004. United States Senate, 103d Congress, 2d Session. (May 25, 1994). The Riegle Report. Retrieved November 6, 2004. Gerard J Fitzgerald. American Journal of Public Health. Washington: Apr 2008. Vol. 98, Iss. 4; p. 611 Further reading Leo P. Brophy and George J. B. Fisher; The Chemical Warfare Service: Organizing for War Office of the Chief of Military History, 1959; L. P. Brophy, W. D. Miles and C. C. Cochrane, The Chemical Warfare Service: From Laboratory to Field (1959); and B. E. Kleber and D. Birdsell, The Chemical Warfare Service in Combat (1966). official US history; Glenn Cross, Dirty War: Rhodesia and Chemical Biological Warfare, 1975–1980, Helion & Company, 2017 Gordon M. Burck and Charles C. Flowerree; International Handbook on Chemical Weapons Proliferation 1991 L. F. Haber. The Poisonous Cloud: Chemical Warfare in the First World War Oxford University Press: 1986 James W. Hammond Jr; Poison Gas: The Myths Versus Reality Greenwood Press, 1999 Jiri Janata, Role of Analytical Chemistry in Defense Strategies Against Chemical and Biological Attack, Annual Review of Analytical Chemistry, 2009 Ishmael Jones, The Human Factor: Inside the CIA's Dysfunctional Intelligence Culture, Encounter Books, New York 2008, revised 2010. WMD espionage.
Benoit Morel and Kyle Olson; Shadows and Substance: The Chemical Weapons Convention, Westview Press, 1993 Adrienne Mayor, "Greek Fire, Poison Arrows & Scorpion Bombs: Biological and Chemical Warfare in the Ancient World", Overlook-Duckworth, 2003; rev. ed. with new Introduction, 2008 Geoff Plunkett, Chemical Warfare in Australia: Australia's Involvement In Chemical Warfare 1914 – Today, (2nd Edition), 2013. Leech Cup Books. A volume in the Army Military History Series published in association with the Army History Unit. Jonathan B. Tucker. Chemical Warfare from World War I to Al-Qaeda (2006) External links Official website of the Organisation for the Prohibition of Chemical Weapons (OPCW) Rule 74. The use of chemical weapons is prohibited. – section on chemical weapons from Customary IHL Database, an "updated version of the Study on customary international humanitarian law conducted by the International Committee of the Red Cross (ICRC) and originally published by Cambridge University Press." Chemical Warfare information page, from the Disaster Information Management Research Center of the U.S. Department of Health and Human Services, including links to relevant sources in the U.S. National Library of Medicine Warfare by type
Chemical warfare
[ "Chemistry" ]
9,245
[ "nan" ]
50,089
https://en.wikipedia.org/wiki/Otto%20Neurath
Otto Karl Wilhelm Neurath (10 December 1882 – 22 December 1945) was an Austrian-born philosopher of science, sociologist, and political economist. He was also the inventor of the ISOTYPE method of pictorial statistics and an innovator in museum practice. Before he fled his native country in 1934, Neurath was one of the leading figures of the Vienna Circle. Early life Neurath was born in Vienna, the son of Wilhelm Neurath (1840–1901), a well-known Jewish political economist at the time. Otto's mother was a Protestant, and he would also become one. Helene Migerka was his cousin. He studied mathematics and physics at the University of Vienna (he formally enrolled for classes only for two semesters in 1902–3). In 1906, he gained his PhD in the department of Political Science and Statistics at the University of Berlin with a thesis entitled Zur Anschauung der Antike über Handel, Gewerbe und Landwirtschaft (On the Conceptions in Antiquity of Trade, Commerce and Agriculture). He married Anna Schapire in 1907; she died in 1911 giving birth to their son, Paul. He then married a close friend, the mathematician and philosopher Olga Hahn. Perhaps because of his second wife's blindness, and then because of the outbreak of war, Paul was sent to a children's home outside Vienna, where Neurath's mother lived, and returned to live with both of his parents when he was nine years old. Career in Vienna Neurath taught political economy at the New Vienna Commercial Academy in Vienna until war broke out. Subsequently, he directed the Department of War Economy in the War Ministry. In 1917, he completed his habilitation thesis Die Kriegswirtschaftslehre und ihre Bedeutung für die Zukunft (War Economics and Their Importance for the Future) at Heidelberg University. In 1918, he became director of the Deutsches Kriegswirtschaftsmuseum (German Museum of War Economy, later the "Deutsches Wirtschaftsmuseum") at Leipzig. Here he worked with Wolfgang Schumann, known from the Dürerbund, for which Neurath had written many articles. During the political crisis which led to the armistice, Schumann urged him to work out a plan for socialization in Saxony. Along with Schumann and Hermann Kranold, he developed the Programm Kranold-Neurath-Schumann. Neurath then joined the German Social Democratic Party in 1918–19 and ran an office for central economic planning in Munich. When the Bavarian Soviet Republic was defeated, Neurath was imprisoned but returned to Austria after intervention from the Austrian government. While in prison, he wrote Anti-Spengler, a critical attack on Oswald Spengler's Decline of the West. In Red Vienna, he joined the Social Democrats and became secretary of the Austrian Association for Settlements and Small Gardens (Verband für Siedlungs- und Kleingartenwesen), a collection of self-help groups that set out to provide housing and garden plots to their members. In 1923, he founded a new museum for housing and city planning called the Siedlungsmuseum. In 1925 he renamed it the Gesellschafts- und Wirtschaftsmuseum in Wien (Museum of Society and Economy in Vienna) and founded an association for it, in which the Vienna city administration, the trade unions, the Chamber of Workers and the Bank of Workers became members. Then-mayor Karl Seitz acted as first proponent of the association. Julius Tandler, city councillor for welfare and health, served on the first board of the museum together with other prominent social democratic politicians. 
The museum was provided with exhibition rooms in buildings of the city administration, the most prominent being the People's Hall at the Vienna City Hall. Neurath was a contributor to the Social Democrat magazine Der Kampf. To make the museum understandable to visitors from all around the polyglot lands of the former Austro-Hungarian Empire, Neurath worked on graphic design and visual education, believing that "Words divide, pictures unite," a coinage of his own that he displayed on the wall of his office there. In the late 1920s, graphic designer and communications theorist Rudolf Modley served as an assistant to Neurath, contributing to a new means of communication: a visual "language." With the illustrator Gerd Arntz and with Marie Reidemeister (whom he would marry in 1941), Neurath developed novel ways of representing quantitative information via easily interpretable icons. The forerunner of contemporary infographics, he initially called this the Vienna Method of Pictorial Statistics. As his ambitions for the project expanded beyond social and economic data related to Vienna, he renamed the project "Isotype", an acronymic nickname for the project's full title: International System of Typographic Picture Education. At international conventions of city planners, Neurath presented and promoted his communication tools. During the 1930s, he also began promoting Isotype as an International Picture Language, connecting it both with the adult education movement and with the Internationalist passion for new and artificial languages like Esperanto, although he stressed in talks and correspondence that Isotype was not intended to be a stand-alone language and was limited in what it could communicate. In the 1920s, Neurath also became an ardent logical positivist, and was the main author of the Vienna Circle manifesto. He was the driving force behind the Unity of Science movement and the International Encyclopedia of Unified Science. Neurath was a proponent of Esperanto, and attended the 1924 World Esperanto Congress in Vienna, where he met Rudolf Carnap for the first time. In 1927 he became Secretary of the Ernst Mach Society. Exile Netherlands During the Austrian Civil War in 1934, Neurath had been working in Moscow. Anticipating problems, he had asked to be sent a coded message in case it would be dangerous for him to return to Austria. As Marie Reidemeister reported later, after receiving the telegram "Carnap is waiting for you," Neurath chose to travel to The Hague, in the Netherlands, instead of Vienna, so as to be able to continue his international work. He was joined by Arntz after affairs in Vienna had been sorted out as best they could be. His wife also fled to the Netherlands, where she died in 1937. British Isles After the Luftwaffe had bombed Rotterdam, he and Marie Reidemeister fled to Britain, crossing to England with other refugees in an open boat. He and Reidemeister married in 1941, after a period of being interned on the Isle of Man (Neurath was in Onchan Camp). In Britain, he and his wife set up the Isotype Institute in Oxford, and he was asked to advise on, and design Isotype charts for, the intended redevelopment of the slums of Bilston, near Wolverhampton. Neurath died of a stroke, suddenly and unexpectedly, in December 1945. After his death, Marie Neurath continued the work of the Isotype Institute, publishing Neurath's writings posthumously, completing projects he had started and writing many children's books using the Isotype system, until her death in the 1980s. 
Ideas Philosophy of science and language Neurath's work on protocol statements tried to reconcile an empiricist concern for the grounding of knowledge in experience with the essential publicity of science. Neurath suggested that reports of experience should be understood to have a third-person, and hence public and impersonal, character, rather than as being first-person subjective pronouncements. Bertrand Russell took issue with Neurath's account of protocol statements in his book An Inquiry Into Meaning and Truth (p. 139ff), on the grounds that it severed the connection to experience that is essential to an empiricist account of truth, facts and knowledge. One of Neurath's later and most important works, Physicalism, completely transformed the nature of the logical positivist discussion of the program of unifying the sciences. Neurath delineates and explains his points of agreement with the general principles of the positivist program and its conceptual bases: the construction of a universal system which would comprehend all of the knowledge furnished by the various sciences, and the absolute rejection of metaphysics, in the sense of any propositions not translatable into verifiable scientific sentences. He then rejects the positivist treatment of language in general and, in particular, some of Wittgenstein's early fundamental ideas. First, Neurath rejects isomorphism between language and reality as useless metaphysical speculation, which would call for explaining how words and sentences could represent things in the external world. Instead, Neurath proposed that language and reality coincide: reality consists simply in the totality of previously verified sentences in the language, and the "truth" of a sentence is a matter of its relationship to the totality of already verified sentences. If a sentence fails to "concord" (or cohere) with the totality of already verified sentences, then either it should be considered false, or some of that totality's propositions must be modified somehow. He thus views truth as internal coherence of linguistic assertions, rather than anything to do with facts or other entities in the world. Moreover, the criterion of verification is to be applied to the system as a whole (see semantic holism) and not to single sentences. Such ideas profoundly shaped the holistic verificationism of Willard Van Orman Quine. Quine's book Word and Object (p. 3f) made famous Neurath's analogy comparing the holistic nature of language, and consequently of scientific verification, with the construction of a boat which is already at sea (cf. Ship of Theseus): scientists are like sailors who must rebuild their ship on the open sea, never able to put in at dock and reconstruct it from the keel up. Keith Stanovich discusses this metaphor in the context of memes and memeplexes and refers to it as a "Neurathian bootstrap". Neurath also rejected the notion that science should be reconstructed in terms of sense data, because perceptual experiences are too subjective to constitute a valid foundation for the formal reconstruction of science. Thus, the phenomenological language that most positivists were still emphasizing was to be replaced by the language of mathematical physics. This would allow for the required objective formulations because it is based on spatio-temporal coordinates. Such a physicalistic approach to the sciences would facilitate the elimination of every residual element of metaphysics, because it would permit them to be reduced to a system of assertions relative to physical facts. 
"Finally, Neurath suggested that since language itself is a physical system, because it is made up of an ordered succession of sounds or symbols, it is capable of describing its own structure without contradiction." These ideas helped form the foundation of the sort of physicalism which remains the dominant position in metaphysics and especially the philosophy of mind. Economics In economics, Neurath was notable for his advocacy of ideas like "in-kind" economic accounting in place of monetary accounting. In the 1920s, he also advocated Vollsozialisierung, that is "complete" rather than merely partial "socialization". Thus, he advocated changes to the economic system that were more radical than those of the mainstream Social-Democratic parties of Germany and Austria. In the 1920s, Neurath debated these matters with leading Social Democratic theoreticians (such as Karl Kautsky, who insisted that money is necessary in a socialist economy). While serving as a government economist during the war, Neurath had observed that "As a result of the war, in-kind calculus was applied more often and more systematically than before ... war was fought with ammunition and with the supply of food, not with money" i.e. that goods were incommensurable. This convinced Neurath of the feasibility of economic planning in terms of amounts of goods and services, without use of money. In response to these ideas, Ludwig von Mises wrote his famous essay of 1920, "Economic Calculation in the Socialist Commonwealth". Otto Neurath believed it was 'war socialism' that would come into effect after capitalism. For Neurath, war economies showed advantages in speed of decision and execution, optimal distribution of means relative to (military) goals, and no-nonsense evaluation and utilization of inventiveness. Two disadvantages which he perceived as resulting from centralized decision-making were a reduction in productivity and a loss of the benefits of simple economic exchanges; but he thought that the reduction in productivity could be mitigated by means of "scientific" techniques based on analysis of work-flows etc. as advocated by Frederick Winslow Taylor. Neurath believed that socio-economic theory and scientific methods could be applied together in contemporary practice. Neurath's view on socioeconomic development was similar to the materialist conception of history first elaborated in classical Marxism, in which technology and the state of epistemology come into conflict with social organization. In particular, Neurath, influenced also by James George Frazer, associated the rise of scientific thinking and empiricism / positivism with the rise of socialism, both of which were coming into conflict with older modes of epistemology such as theology (which was allied with idealist philosophy), the latter of which served reactionary purposes. However, Neurath followed Frazer in claiming that primitive magic closely resembled modern technology, implying an instrumentalist interpretation of both. Neurath claimed that magic was unfalsifiable and therefore disenchantment could never be complete in a scientific age. Adherents of the scientific view of the world recognize no authority other than science and reject all forms of metaphysics. Under the socialist phase of history, Neurath predicted that the scientific worldview would become the dominant mode of thought. Selected publications Most publications by and about Neurath are still available only in German. However he also wrote in English, using Ogden's Basic English. 
His scientific papers are held at the Noord-Hollands Archief in Haarlem; the Otto and Marie Neurath Isotype Collection is held in the Department of Typography & Graphic Communication at the University of Reading in England. Books 1913. Serbiens Erfolge im Balkankriege: Eine wirtschaftliche und soziale Studie. Wien: Manz. 1921. Anti-Spengler. München, Callwey Verlag. 1926. Antike Wirtschaftsgeschichte. Leipzig, Berlin: B. G. Teubner. 1928. Lebensgestaltung und Klassenkampf. Berlin: E. Laub. 1933. Einheitswissenschaft und Psychologie. Wien. 1936. International Picture Language; the First Rules of Isotype. London: K. Paul, Trench, Trubner & co., ltd., 1936 1937. Basic by Isotype. London, K. Paul, Trench, Trubner & co., ltd. 1939. Modern Man in the Making. Alfred A. Knopf 1944. Foundations of the Social Sciences. University of Chicago Press 1944. International Encyclopedia of Unified Science. With Rudolf Carnap and Charles W. Morris (eds.). University of Chicago Press. 1946. Philosophical Papers, 1913–1946: With a Bibliography of Neurath in English. Marie Neurath and Robert S. Cohen, with Carolyn R. Fawcett, eds. 1983 1973. Empiricism and Sociology. Marie Neurath and Robert Cohen, eds. With a selection of biographical and autobiographical sketches by Popper and Carnap. Includes an abridged translation of Anti-Spengler. Articles 1912. The problem of the pleasure maximum. In: Cohen and Neurath (eds.) 1983 1913. The lost wanderers of Descartes and the auxiliary motive. In: Cohen and Neurath 1983 1916. On the classification of systems of hypotheses. In: Cohen and Neurath 1983 1919. Through war economy to economy in kind. In: Neurath 1973 (a short fragment only) 1920a. Total socialisation. In: Cohen and Uebel 2004 1920b. A system of socialisation. In: Cohen and Uebel 2004 1928. Personal life and class struggle. In: Neurath 1973 1930. Ways of the scientific world-conception. In: Cohen and Neurath 1983 1931a. The current growth in global productive capacity. In: Cohen and Uebel 2004 1931b. Empirical sociology. In: Neurath 1973 1931c. Physikalismus. In: Scientia: rivista internazionale di sintesi scientifica, 50, 1931, pp. 297–303 1932. Protokollsätze (Protocol statements). In: Erkenntnis, Vol. 3. Repr.: Cohen and Neurath 1983 1935a. Pseudorationalism of falsification. In: Cohen and Neurath 1983 1935b. The unity of science as a task. In: Cohen and Neurath 1983 1937. Die neue Enzyklopaedie des wissenschaftlichen Empirismus. In: Scientia: rivista internazionale di sintesi scientifica, 62, 1937, pp. 309–320 1938. 'The Departmentalization of Unified Science', Erkenntnis VII, pp. 240–46 1940. Argumentation and action. The Otto Neurath Nachlass in Haarlem 198 K.41 1941. The danger of careless terminology. In: The New Era 22: 145–50 1942. International planning for freedom. In: Neurath 1973 1943. Planning or managerial revolution. (Review of J. Burnham, The Managerial Revolution). The New Commonwealth 148–54 1943–5. Neurath–Carnap correspondence, 1943–1945. The Otto Neurath Nachlass in Haarlem, 223 1944b. Ways of life in a world community. The London Quarterly of World Affairs, 29–32 1945a. Physicalism, planning and the social sciences: bricks prepared for a discussion v. Hayek. 26 July 1945. The Otto Neurath Nachlass in Haarlem 202 K.56 1945b. Neurath–Hayek correspondence, 1945. The Otto Neurath Nachlass in Haarlem 243 1945c. Alternatives to market competition. (Review of F. Hayek, The Road to Serfdom). The London Quarterly of World Affairs 121–2 1946a. 
The orchestration of the sciences by the encyclopedism of logical empiricism. In: Cohen and. Neurath 1983 1946b. After six years. In: Synthese 5:77–82 1946c. The orchestration of the sciences by the encyclopedism of logical empiricism. In: Cohen and. Neurath 1983 1946. From Hieroglyphics to Isotypes. Nicholson and Watson. Excerpts. Rotha (1946) claims that this is in part Neurath's autobiography. References Further reading Arnswald, Ulrich, 2023, "Otto Neurath's Distorted Reception of Weber's Protestant Ethic and the Spirit of Capitalism," Max Weber Studies (MWS), Vol. 23, No. 2, 218-237, ISSN 2056-4074. Cartwright, Nancy, J. Cat, L. Fleck, and T. Uebel, 1996. Otto Neurath: Philosophy between Science and Politics. Cambridge University Press Cohen R. S. and M. Neurath (eds.) 1983. Otto Neurath: Philosophical Papers. Reidel Cohen, R. S. and T. Uebel (eds.) 2004. Otto Neurath: Economic Writings 1904–1945. Kluwer Dale, Gareth, The Technocratic Socialism of Otto Neurath, Jacobin Magazine. Dutto, Andrea Alberto, 2017, "The Pyramid and the Mosaic. Otto Neurath’s encyclopedism as a critical model," Footprint. Delft Architecture Theory Journal, #20. Matthew Eve and Christopher Burke: Otto Neurath: From Hieroglyphics to Isotype. A Visual Autobiography, Hyphen Press, London 2010 Sophie Hochhäusl: Otto Neurath – City Planning: Proposing a Socio-Political Map for Modern Urbanism, Innsbruck University Press, 2011 . Holt, Jim, "Positive Thinking" (review of Karl Sigmund, Exact Thinking in Demented Times: The Vienna Circle and the Epic Quest for the Foundations of Science, Basic Books, 449 pp.), The New York Review of Books, vol. LXIV, no. 20 (21 December 2017), pp. 74–76. Kraeutler, Hadwig. 2008. Otto Neurath. Museum and Exhibition Work – Spaces (Designed) for Communication. Frankfurt, Berlin, Bern, Bruxelles, New York, Oxford, Vienna, Peter Lang Internationaler Verlag der Wissenschaften. Nemeth, E., and Stadler, F., eds., "Encyclopedia and Utopia: The Life and Work of Otto Neurath (1882–1945)." Vienna Circle Institute Yearbook, vol. 4. O'Neill, John, 2003, "Unified science as political philosophy: positivism, pluralism and liberalism," Studies in History and Philosophy of Science. O'Neill, John, 2002, "Socialist Calculation and Environmental Valuation: Money, Markets and Ecology," Science & Society, LXVI/1. Neurath, Otto, 1946, "From Hieroglyphs to Isotypes". Symons, John – Pombo, Olga – Torres, Juan Manuel (eds.): Otto Neurath and the Unity of Science. (Logic, Epistemology, and the Unity of Science, 18.) Dordrecht: Springer, 2011. Vossoughian, Nader. 2008. Otto Neurath: The Language of the Global Polis. NAi Publishers. Sandner, Günther, 2014, Otto Neurath. Eine politische Biographie. Zsolnay, Vienna. . (German) Danilo Zolo, 1990, Reflexive Epistemology and Social Complexity. The Philosophical Legacy of Otto Neurath, Dordrecht: Kluwer External links Shalizi, C R, "Otto Neurath: 1882–1945". Includes references and links. 
Gerd Arntz Web Archive with more than 500 Isotypes Bibliography Pictorial Statistics Mundaneum in the Netherlands Article discussing Gödel's incompleteness theorems as a refutation of Neurath and the Vienna Circle's logical positivism Austrian Museum for Social and Economic Affairs (Österreichisches Gesellschafts- und Wirtschaftsmuseum) Guide to the Unity of Science Movement Records 1934–1968 at the University of Chicago Special Collections Research Center 1882 births 1945 deaths 20th-century Austrian philosophers Analytic philosophers Austrian emigrants to England Expatriates from Austria-Hungary in Germany Austrian expatriates in the Netherlands Austrian Jews Austrian refugees Austrian Esperantists Austrian socialists Austrian sociologists Sociologists from Austria-Hungary Encyclopedists Information graphic designers Jewish philosophers Jewish socialists Jews who immigrated to the United Kingdom to escape Nazism Logical positivism Marxian economists Marxist theorists People associated with the University of Reading People interned in the Isle of Man during World War II Philosophers of science Philosophers of social science Socialist economists Vienna Circle Data and information visualization experts Writers from Vienna
Otto Neurath
[ "Mathematics" ]
4,840
[ "Mathematical logic", "Logical positivism" ]
50,149
https://en.wikipedia.org/wiki/Longitude%20rewards
The longitude rewards were the system of inducement prizes offered by the British government for a simple and practical method for the precise determination of a ship's longitude at sea. The prizes, established through an act of Parliament, the Longitude Act 1714 (13 Ann. c. 14), were administered by the Board of Longitude. This was by no means the first reward to be offered to solve this problem. Philip II of Spain offered one in 1567, Philip III in 1598 offered 6,000 ducats and a pension, whilst the States General of the Netherlands offered 10,000 florins shortly after. In 1675 Robert Hooke wanted to apply for a £1,000 reward in England for his invention of a spring-regulated watch. However, these large sums were never won, though several people were awarded smaller amounts for significant achievements. Background: the longitude problem The measurement of longitude was a problem that came into sharp focus as people began making transoceanic voyages. Determining latitude was relatively easy in that it could be found from the altitude of the sun at noon with the aid of a table giving the sun's declination for the day. For longitude, early ocean navigators had to rely on dead reckoning, based on calculations of the vessel's heading and speed for a given time (much of it resting on the intuition of the master or navigator). This was inaccurate on long voyages out of sight of land, and such voyages sometimes ended in tragedy. An accurate determination of longitude was also necessary to determine the proper "magnetic declination", that is, the difference between indicated magnetic north and true north, which can differ by up to 10 degrees in the important trade latitudes of the Atlantic and Indian Oceans. Finding an adequate solution to determining longitude at sea was therefore of paramount importance. The Longitude Act 1714 (13 Ann. c. 14) only addressed the determination of longitude at sea. Determining longitude reasonably accurately on land was possible, from the 17th century onwards, using the Galilean moons of Jupiter as an astronomical 'clock'. The moons were easily observable on land, but numerous attempts to reliably observe them from the deck of a ship resulted in failure. The need for better navigational accuracy for increasingly longer oceanic voyages had been an issue explored by many European nations for centuries before the passing of the Longitude Act 1714 in England. Portugal, Spain, and the Netherlands offered financial incentives for solutions to the problem of longitude as early as 1598. Approaches to the problem of longitude fell primarily into three categories: terrestrial, celestial, and mechanical, encompassing detailed atlases, lunar charts, and timekeeping mechanisms for use at sea. Scholars have postulated that the economic gains and political power to be had in oceanic exploration, rather than scientific and technological curiosity, are what resulted in the swift passing of the Longitude Act 1714 (13 Ann. c. 14) and the offering of its largest and most famous reward, the Longitude Prize. Establishing the prizes In the early 1700s, a series of maritime disasters occurred, including the wrecking of a squadron of naval vessels on the Isles of Scilly in 1707. Around the same time, mathematician Thomas Axe decreed in his will that a £1,000 prize be awarded for promising research into finding "true longitude" and that annual sums be paid to scholars involved in making corrected world maps. 
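The contrast drawn above between easy latitude and hard longitude can be made concrete with a small sketch. The function names, sign conventions, and example figures below are illustrative assumptions rather than anything from the historical record: latitude follows from a single noon sight of the sun plus a declination table, while longitude reduces to comparing local solar time with the time kept at a reference meridian, which is exactly what an accurate sea clock would supply.

```python
def latitude_from_noon_sight(sun_altitude_deg: float,
                             sun_declination_deg: float) -> float:
    """Latitude from the sun's altitude at local noon.

    Assumes the sun bears due south of the observer (the common
    mid-northern-latitude case); north is positive throughout.
    """
    zenith_distance_deg = 90.0 - sun_altitude_deg
    return zenith_distance_deg + sun_declination_deg

def longitude_from_clock(clock_hours_at_local_noon: float) -> float:
    """Longitude (degrees, east positive) from a clock keeping
    reference-meridian time, read at the moment of local noon.

    The Earth turns 15 degrees per hour, so each hour the clock is
    ahead of local noon places the ship 15 degrees farther west.
    """
    return (12.0 - clock_hours_at_local_noon) * 15.0

# Noon sun 50 degrees high with declination +10 degrees: about 50 deg N.
print(latitude_from_noon_sight(50.0, 10.0))      # 50.0
# Reference clock reads 15:20 at local noon: 50 degrees west.
print(longitude_from_clock(15.0 + 20.0 / 60.0))  # -50.0
```

At 15 degrees per hour, one degree of longitude corresponds to four minutes of clock time; the half-degree accuracy of the top prize tier detailed below therefore amounts to a total clock error of only two minutes over a roughly six-week trial voyage, on the order of three seconds of drift per day.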
In 1713, when the longitude proposal of William Whiston and Humphrey Ditton was presented at the opening of the session of Parliament, a general understanding of the longitude problem prompted the formation of a parliamentary committee and the swift passing of the Longitude Act on July 8, 1714. Within this act are detailed three prizes based on levels of accuracy; these are the same accuracy requirements used for the Axe prize, set by Whiston and Ditton in their petition, and recommended by Sir Isaac Newton and Edmund Halley to the parliamentary committee. £10,000 for a method that could determine longitude within 1 degree (equivalent to 60 nautical miles, about 111 km, at the equator). £15,000 for a method that could determine longitude within 40 minutes of arc. £20,000 for a method that could determine longitude within 30 minutes of arc. In addition, rewards were on offer for those who could produce a method that worked within 80 geographical miles of the coast (where ships would be in most danger), and for those with promising ideas who needed financial help to bring them to trial. Proposed methods would be tested by sailing across the ocean, from Britain to any port in the West Indies (a voyage of about six weeks), without the ship's calculated longitude straying beyond the limits listed above. Also, the contender would be required to demonstrate the accuracy of their method by determining the longitude of a specific land-based feature whose longitude was already accurately known. The parliamentary committee also established the Board of Longitude. This panel of adjudicators would review proposed solutions and was also given authority to grant up to £2,000 in advances for promising projects that did not entirely fulfill the terms of the prize levels, but that were still found worthy of encouragement. The exact terms of the requirements for the prizes would later be contested by several recipients, including John Harrison. Ultimately, the £20,000 reward was not awarded to anyone in a lump sum, although John Harrison did receive a series of payments totaling £23,065. The Board of Longitude remained in existence for more than 100 years. When it was officially disbanded in 1828, more than £100,000 had been disbursed. Notable recipients The Longitude Act offered a very large incentive for solutions to the longitude problem. Some later recipients of rewards, such as Euler and Mayer, made clear publicly that the money was not the incentive, but instead the important improvements to navigation and cartography. Other recipients, such as Kendall and Harrison, had to appeal to the Board of Longitude and other governmental officials for adequate compensation for their work. Still others submitted radical and impractical theories, some of which can be seen in a collection at Harvard's Houghton Library. Schemes and ideas for improvements to instruments and astronomy, both practical and impractical, can be seen among the digitised archives of the Board of Longitude. Though the Board of Longitude did not award £20,000 at one time, it did offer sums to various individuals in recognition of their work for improvements in instrumentation or in published atlases and star charts. List of awardees by amount John Harrison – £23,065 awarded overall after many years of contention with the Board, ending in 1773. Thomas Mudge – £500 advance in 1777 for developing his marine timekeeper and a £3,000 award approved by a special committee in 1793 in recognition of his accomplishments. 
Tobias Mayer – £3,000 awarded to his widow for lunar distance tables, which were published in the Nautical Almanac in 1766 and used by James Cook in his voyages. Thomas Earnshaw – £3,000 awarded for years of design and improvements made to chronometers. Charles Mason – £1,317 awarded for various contributions and improvements on Mayer's lunar tables. Larcum Kendall – £800 total for his copy of, and improvements and simplifications to, Harrison's sea watch (£500 for K1, Kendall's copy of Harrison's H4; £200 for the modified K2; and £100 for the last modification, model K3). Jesse Ramsden – £615 awarded for his engine-divided sextant, with the requirement that he share his methods and the design with other instrument makers. John Arnold – £300 awarded in increments to improve his timekeeping design and experiments, though the accuracy required for the prize was never met. Leonhard Euler – £300 awarded for contributions to the lunar distance method in aid of Mayer. Nathaniel Davies – £300 awarded for the design of a lunar telescope for Mayer. A full list of awards made by the Commissioners and Board of Longitude was drawn up by Derek Howse, in an appendix to his article on the finances of the Board of Longitude. Other submissions Only two women are known to have submitted proposals to the Longitude Commissioners: Elizabeth Johnson and Jane Squire. Incoming submissions can be found among the correspondence of the digitised papers of the Board of Longitude. John Harrison's contested reward The winner of the most reward money under the Longitude Act was John Harrison, for sea timekeepers including his H4 sea watch. Harrison was 21 years old when the Longitude Act was passed. He spent the next 45 years perfecting the design of his timekeepers. He first received a reward from the Commissioners of Longitude in 1737 and did not receive his final payment until he was 80. Harrison was first awarded £250 in 1737, in order to improve on his promising H1 sea clock, leading to the construction of H2. £2,000 was awarded over the span of 1741–1755 for continued construction and completion of H2 and H3. From 1760 to 1765, Harrison received £2,865 for various expenses related to the construction, ocean trials, and eventual award for the performance of his sea watch H4. Despite the performance of the H4 exceeding the accuracy requirement of the highest reward possible under the original Longitude Act, Harrison was rewarded £7,500 (that is, £10,000 minus payments he had received in 1762 and 1764) once he had revealed the method of making his device, and was told that he must show that his single machine could be replicated before the final £10,000 could be paid. Harrison made one rather than the requested two further copies of H4, and he and his family members eventually appealed to King George III after petitions for further rewards were not answered by the Board of Longitude. A reward of £8,750 was granted by Parliament in 1773, for a total payment of £23,065 spanning thirty-six years. In popular culture Rupert T. Gould's 1923 The Marine Chronometer is a thorough reference work on the marine chronometer. It covers the chronometer's history from the earliest attempts to measure longitude, while including detailed discussions and illustrations of the various mechanisms and their inventors. Dava Sobel's 1996 bestseller Longitude recounts Harrison's story. A film adaptation of Longitude was released by Granada Productions and A&E in 2000, starring Michael Gambon as Harrison and Jeremy Irons as Rupert Gould. 
The Island of the Day Before, by Umberto Eco. Gulliver’s Travels, by Jonathan Swift. See also History of longitude Nevil Maskelyne Lunar distance (navigation) James Cook Celatone Longitude Prize List of engineering awards References External links Royal Observatory Greenwich: John Harrison and the Longitude Problem Nova Online: Lost at Sea, the Search for Longitude Board of Longitude Collection, Cambridge Digital Library History of navigation Challenge awards Horology 1714 establishments in Great Britain Crowdsourcing
Longitude rewards
[ "Physics" ]
2,213
[ "Spacetime", "Horology", "Physical quantities", "Time" ]
50,165
https://en.wikipedia.org/wiki/Louis%20de%20Broglie
Louis Victor Pierre Raymond, 7th Duc de Broglie (15 August 1892 – 19 March 1987) was a French physicist and aristocrat who made groundbreaking contributions to quantum theory. In his 1924 PhD thesis, he postulated the wave nature of electrons and suggested that all matter has wave properties. This concept is known as the de Broglie hypothesis, an example of wave–particle duality, and forms a central part of the theory of quantum mechanics. De Broglie won the Nobel Prize in Physics in 1929, after the wave-like behaviour of matter was first experimentally demonstrated in 1927. The wave-like behaviour of particles discovered by de Broglie was used by Erwin Schrödinger in his formulation of wave mechanics. De Broglie's pilot-wave concept was presented at the 1927 Solvay Conference and then abandoned in favor of standard quantum mechanics until 1952, when it was rediscovered and enhanced by David Bohm. Louis de Broglie was the sixteenth member elected to occupy seat 1 of the Académie française in 1944, and served as Perpetual Secretary of the French Academy of Sciences. De Broglie became the first high-level scientist to call for the establishment of a multi-national laboratory, a proposal that led to the establishment of the European Organization for Nuclear Research (CERN). Biography Family and education Louis de Broglie belonged to the famous aristocratic family of Broglie, whose representatives for several centuries occupied important military and political posts in France. The father of the future physicist, Louis-Alphonse-Victor, 5th duc de Broglie, was married to Pauline d'Armaillé, the granddaughter of the Napoleonic General Philippe Paul, comte de Ségur, and his wife, the biographer Marie Célestine Amélie d'Armaillé. They had five children; in addition to Louis, these were: Albertina (1872–1946), subsequently the Marquise de Luppé; Maurice (1875–1960), subsequently a famous experimental physicist; Philip (1881–1890), who died two years before the birth of Louis; and Pauline, Comtesse de Pange (1888–1972), subsequently a famous writer. Louis was born in Dieppe, Seine-Maritime. As the youngest child in the family, Louis grew up in relative loneliness, read a lot, and was fond of history, especially political history. From early childhood, he had a good memory and could accurately recite an excerpt from a theatrical production or give a complete list of the ministers of the Third Republic of France. Because of this, it was predicted that he would become a great statesman. De Broglie had intended a career in the humanities, and received his first degree (licence ès lettres) in history. Afterwards he turned his attention toward mathematics and physics and received a degree (licence ès sciences) in physics. With the outbreak of the First World War in 1914, he offered his services to the army in the development of radio communications. Military service After graduation, Louis de Broglie joined the engineering forces to undergo compulsory service. It began at Fort Mont Valérien, but soon, on the initiative of his brother, he was seconded to the Wireless Communications Service and worked on the Eiffel Tower, where the radio transmitter was located. Louis de Broglie remained in military service throughout the First World War, dealing with purely technical issues. In particular, together with Léon Brillouin and his brother Maurice, he participated in establishing wireless communications with submarines. Louis de Broglie was demobilized in August 1919 with the rank of adjudant. 
Later, the scientist regretted that he had had to spend about six years away from the fundamental problems of science that interested him. Scientific and pedagogical career His 1924 thesis Recherches sur la théorie des quanta (Research on the Theory of the Quanta) introduced his theory of electron waves. This included the wave–particle duality theory of matter, based on the work of Max Planck and Albert Einstein on light. This research culminated in the de Broglie hypothesis stating that any moving particle or object has an associated wave. De Broglie thus created a new field in physics, the mécanique ondulatoire, or wave mechanics, uniting the physics of energy (wave) and matter (particle). He won the Nobel Prize in Physics in 1929 "for his discovery of the wave nature of electrons". In his later career, de Broglie worked to develop a causal explanation of wave mechanics, in opposition to the wholly probabilistic models which dominate quantum mechanical theory; this approach was refined by David Bohm in the 1950s. The theory has since been known as the de Broglie–Bohm theory. In addition to strictly scientific work, de Broglie thought and wrote about the philosophy of science, including the value of modern scientific discoveries. In 1930 he founded the book series Actualités scientifiques et industrielles, published by Éditions Hermann. De Broglie became a member of the Académie des sciences in 1933, and was the academy's perpetual secretary from 1942. He was asked to join Le Conseil de l'Union Catholique des Scientifiques Français, but declined because he was non-religious. In 1941, he was made a member of the National Council of Vichy France. On 12 October 1944, he was elected to the Académie Française, replacing the mathematician Émile Picard. Because of the deaths and imprisonments of Académie members during the occupation and other effects of the war, the Académie was unable to meet the quorum of twenty members for his election; due to the exceptional circumstances, however, his unanimous election by the seventeen members present was accepted. In an event unique in the history of the Académie, he was received as a member by his own brother Maurice, who had been elected in 1934. UNESCO awarded him the first Kalinga Prize in 1952 for his work in popularizing scientific knowledge, and he was elected a Foreign Member of the Royal Society on 23 April 1953. Louis became the 7th duc de Broglie in 1960 upon the death without heir of his elder brother, Maurice, 6th duc de Broglie, also a physicist. In 1961, he received the title of Knight of the Grand Cross in the Légion d'honneur. De Broglie was awarded a post as counselor to the French High Commission of Atomic Energy in 1945 for his efforts to bring industry and science closer together. He established a center for applied mechanics at the Henri Poincaré Institute, where research into optics, cybernetics, and atomic energy was carried out. He inspired the formation of the International Academy of Quantum Molecular Science and was an early member. Louis never married. When he died on 19 March 1987 in Louveciennes at the age of 94, he was succeeded as duke by a distant cousin, Victor-François, 8th duc de Broglie. His funeral was held on 23 March 1987 at the Church of Saint-Pierre-de-Neuilly. Scientific activity Physics of X-rays and the photoelectric effect The first works of Louis de Broglie (early 1920s) were performed in the laboratory of his older brother Maurice and dealt with the features of the photoelectric effect and the properties of X-rays. 
These publications examined the absorption of X-rays and described this phenomenon using the Bohr theory, applied quantum principles to the interpretation of photoelectron spectra, and gave a systematic classification of X-ray spectra. The studies of X-ray spectra were important for elucidating the structure of the internal electron shells of atoms (optical spectra are determined by the outer shells). Thus, the results of experiments conducted together with Alexandre Dauvillier revealed the shortcomings of the existing schemes for the distribution of electrons in atoms; these difficulties were eliminated by Edmund Stoner. Another result was the elucidation of the insufficiency of the Sommerfeld formula for determining the position of lines in X-ray spectra; this discrepancy was eliminated after the discovery of the electron spin. In 1925 and 1926, Leningrad physicist Orest Khvolson nominated the de Broglie brothers for the Nobel Prize for their work in the field of X-rays. Matter and wave–particle duality Studying the nature of X-ray radiation and discussing its properties with his brother Maurice, who considered these rays to be some kind of combination of waves and particles, contributed to Louis de Broglie's awareness of the need to build a theory linking particle and wave representations. In addition, he was familiar with the works (1919–1922) of Marcel Brillouin, which proposed a hydrodynamic model of the atom and attempted to relate it to the results of Bohr's theory. The starting point in the work of Louis de Broglie was the idea of Einstein about the quanta of light. In his first article on this subject, published in 1922, the French scientist considered blackbody radiation as a gas of light quanta and, using classical statistical mechanics, derived the Wien radiation law in the framework of such a representation. In his next publication, he tried to reconcile the concept of light quanta with the phenomena of interference and diffraction and came to the conclusion that it was necessary to associate a certain periodicity with quanta. In this case, light quanta were interpreted by him as relativistic particles of very small mass. It remained to extend the wave considerations to any massive particles, and in the summer of 1923 a decisive breakthrough occurred. De Broglie outlined his ideas in a short note "Waves and quanta" (French: "Ondes et quanta", presented at a meeting of the Paris Academy of Sciences on September 10, 1923), which marked the beginning of the creation of wave mechanics. In this paper and his subsequent PhD thesis, the scientist suggested that a moving particle with energy $E$ and velocity $v$ is characterized by some internal periodic process with a frequency $\nu = E/h$ (later known as the Compton frequency), where $h$ is the Planck constant. To reconcile these considerations, based on the quantum principle, with the ideas of special relativity, de Broglie associated with a moving body a wave he called a "phase wave", which propagates with the phase velocity $V = c^2/v$. Such a wave, later given the name matter wave, or de Broglie wave, remains in phase with the internal periodic process as the body moves. Having then examined the motion of an electron in a closed orbit, the scientist showed that the requirement for phase matching directly leads to the quantum Bohr–Sommerfeld condition, that is, to the quantization of angular momentum. 
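Combining the two relations above, $\nu = E/h$ and $V = c^2/v$, gives the matter-wave wavelength $\lambda = V/\nu = h/p$, which reduces to $\lambda = h/(mv)$ in the non-relativistic limit. The short numerical sketch below (the scenario and figures are illustrative choices, not taken from de Broglie's papers) shows why wave behaviour is conspicuous for electrons yet invisible for everyday objects.

```python
# Illustrative sketch: non-relativistic de Broglie wavelength, lambda = h / (m v).
PLANCK_H = 6.626e-34       # Planck constant, J*s
ELECTRON_MASS = 9.109e-31  # electron mass, kg

def de_broglie_wavelength(mass_kg: float, speed_m_s: float) -> float:
    """Matter-wave wavelength in metres (non-relativistic approximation)."""
    return PLANCK_H / (mass_kg * speed_m_s)

# An electron at 1% of light speed: about 2.4e-10 m, comparable to atomic
# spacings in a crystal, which is why electron diffraction is observable.
print(de_broglie_wavelength(ELECTRON_MASS, 3.0e6))

# A 145 g ball at 40 m/s: about 1e-34 m, far below anything measurable.
print(de_broglie_wavelength(0.145, 40.0))
```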
In the next two notes (reported at the meetings on September 24 and October 8, respectively), de Broglie came to the conclusion that the particle velocity is equal to the group velocity of the phase waves, and that the particle moves along the normal to surfaces of equal phase. In the general case, the trajectory of a particle can be determined using Fermat's principle (for waves) or the principle of least action (for particles), which indicates a connection between geometric optics and classical mechanics. This theory set the basis of wave mechanics. It was supported by Einstein, confirmed by the electron diffraction experiments of G. P. Thomson and of Davisson and Germer, and generalized by the work of Erwin Schrödinger. From a philosophical viewpoint, this theory of matter waves has contributed greatly to the ruin of the atomism of the past. Originally, de Broglie thought that a real wave (i.e., one having a direct physical interpretation) was associated with particles. In fact, the wave aspect of matter was formalized by a wavefunction defined by the Schrödinger equation, which is a pure mathematical entity having a probabilistic interpretation, without the support of real physical elements. This wavefunction gives an appearance of wave behavior to matter, without making real physical waves appear. However, until the end of his life de Broglie returned to a direct and real physical interpretation of matter waves, following the work of David Bohm. Conjecture of an internal clock of the electron In his 1924 thesis, de Broglie conjectured that the electron has an internal clock that constitutes part of the mechanism by which a pilot wave guides a particle. Subsequently, David Hestenes proposed a link to the zitterbewegung that was suggested by Schrödinger. While attempts at verifying the internal clock hypothesis and measuring the clock frequency have so far not been conclusive, recent experimental data is at least compatible with de Broglie's conjecture. Non-nullity and variability of mass According to de Broglie, the neutrino and the photon have rest masses that are non-zero, though very low. That a photon is not quite massless is imposed by the coherence of his theory. Incidentally, this rejection of the hypothesis of a massless photon enabled him to doubt the hypothesis of the expansion of the universe. In addition, he believed that the true mass of particles is not constant, but variable, and that each particle can be represented as a thermodynamic machine equivalent to a cyclic integral of action. Generalization of the principle of least action In the second part of his 1924 thesis, de Broglie used the equivalence of the mechanical principle of least action with Fermat's optical principle: "Fermat's principle applied to phase waves is identical to Maupertuis' principle applied to the moving body; the possible dynamic trajectories of the moving body are identical to the possible rays of the wave." This latter equivalence had been pointed out by William Rowan Hamilton a century earlier, and published by him around 1830, for the case of light. Duality of the laws of nature Far from claiming to make "the contradiction disappear" which Max Born thought could be achieved with a statistical approach, de Broglie extended wave–particle duality to all particles (and to crystals, which revealed the effects of diffraction) and extended the principle of duality to the laws of nature. 
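The group-velocity claim in the notes above can be checked symbolically. The following is a small verification sketch, not from de Broglie's original notes, built on the standard relativistic relations $E = \hbar\omega$, $p = \hbar k$, and $E = \sqrt{(pc)^2 + (mc^2)^2}$ (the variable names are arbitrary choices): the group velocity of the phase waves equals the relativistic particle velocity $v = pc^2/E$, and the product of phase and group velocities is $c^2$, in agreement with the phase velocity $V = c^2/v$ given earlier.

```python
# Symbolic check of the phase/group velocity relations stated above,
# using E = hbar*omega, p = hbar*k, and the relativistic energy formula.
import sympy as sp

p, m, c = sp.symbols("p m c", positive=True)
E = sp.sqrt((p * c) ** 2 + (m * c**2) ** 2)  # relativistic energy E(p)

v_group = sp.diff(E, p)  # group velocity: d(omega)/dk = dE/dp
v_phase = E / p          # phase velocity: omega/k = E/p

# Group velocity equals the relativistic particle velocity v = p*c**2/E ...
print(sp.simplify(v_group - p * c**2 / E))  # -> 0
# ... and phase velocity times group velocity equals c**2, so the phase
# wave outruns light (carrying no signal) whenever the particle has v < c.
print(sp.simplify(v_phase * v_group))       # -> c**2
```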
His last work made a single system of laws from the two large systems of thermodynamics and of mechanics. That idea seems to match the continuous–discontinuous duality, since its dynamics could be the limit of its thermodynamics when transitions to continuous limits are postulated. It is also close to the thought of Gottfried Wilhelm Leibniz, who posited the necessity of "architectonic principles" to complete the system of mechanical laws. However, according to him, there is less duality, in the sense of opposition, than synthesis (one is the limit of the other), and the effort of synthesis is constant, as in his first formula, in which the first member pertains to mechanics and the second to optics: $mc^2 = h\nu$. Neutrino theory of light This theory, which dates from 1934, introduces the idea that the photon is equivalent to the fusion of two Dirac neutrinos. In 1938, the concept was challenged as not rotationally invariant, and work on it was largely discontinued. Hidden thermodynamics De Broglie's final idea was the hidden thermodynamics of isolated particles. It is an attempt to bring together three far-reaching principles of physics: the principles of Fermat, Maupertuis, and Carnot. In this work, action becomes a sort of opposite to entropy, through an equation that relates the only two universal dimensions, of the form $\frac{\text{action}}{h} = -\frac{\text{entropy}}{k}$. As a consequence, this theory brings the uncertainty principle back to distances around extrema of action, distances corresponding to reductions in entropy. Honors and awards 1929 Nobel Prize in Physics 1929 Henri Poincaré Medal 1932 Albert I of Monaco Prize 1938 Max Planck Medal 1938 Fellow, Royal Swedish Academy of Sciences 1939 International Member, American Philosophical Society 1944 Fellow, Académie française 1948 International Member, United States National Academy of Sciences 1952 Kalinga Prize 1953 Fellow, Royal Society 1958 International Honorary Member of the American Academy of Arts and Sciences Publications Recherches sur la théorie des quanta (Researches on the quantum theory), Thesis, Paris, 1924, Ann. de Physique (10) 3, 22 (1925). Introduction à la physique des rayons X et gamma (Introduction to physics of X-rays and Gamma-rays), with Maurice de Broglie, Gauthier-Villars, 1928. Rapport au 5ème Conseil de Physique Solvay (Report for the 5th Solvay Physics Congress), Brussels, 1927. Matière et lumière (Matter and Light), Paris: Albin Michel, 1937. La Physique nouvelle et les quanta (New Physics and Quanta), Flammarion, 1937. Continu et discontinu en physique moderne (Continuous and discontinuous in Modern Physics), Paris: Albin Michel, 1941. Ondes, corpuscules, mécanique ondulatoire (Waves, Corpuscles, Wave Mechanics), Paris: Albin Michel, 1945. Physique et microphysique (Physics and Microphysics), Albin Michel, 1947. Vie et œuvre de Paul Langevin (The life and works of Paul Langevin), French Academy of Sciences, 1947. Optique électronique et corpusculaire (Electronic and Corpuscular Optics), Herman, 1950. Savants et découvertes (Scientists and discoveries), Paris, Albin Michel, 1951. Une tentative d'interprétation causale et non linéaire de la mécanique ondulatoire: la théorie de la double solution. Paris: Gauthier-Villars, 1956. English translation: Non-linear Wave Mechanics: A Causal Interpretation. Amsterdam: Elsevier, 1960. Nouvelles perspectives en microphysique (New prospects in Microphysics), Albin Michel, 1956. Sur les sentiers de la science (On the Paths of Science), Paris: Albin Michel, 1960. 
Introduction à la nouvelle théorie des particules de M. Jean-Pierre Vigier et de ses collaborateurs, Paris: Gauthier-Villars, 1961. English translation: Introduction to the Vigier Theory of elementary particles, Amsterdam: Elsevier, 1963. Étude critique des bases de l'interprétation actuelle de la mécanique ondulatoire, Paris: Gauthier-Villars, 1963. English translation: The Current Interpretation of Wave Mechanics: A Critical Study, Amsterdam: Elsevier, 1964. Certitudes et incertitudes de la science (Certitudes and Incertitudes of Science). Paris: Albin Michel, 1966. Albert Einstein, with Louis Armand, Pierre Henri Simon and others. Paris: Hachette, 1966. English translation: Einstein. Peebles Press, 1979. Recherches d'un demi-siècle (Research of a half-century), Albin Michel, 1976. Les incertitudes d'Heisenberg et l'interprétation probabiliste de la mécanique ondulatoire (Heisenberg uncertainty and wave mechanics probabilistic interpretation), Gauthier-Villars, 1982. References External links "Les Immortels: Louis de Broglie", Académie française Fondation Louis de Broglie English translation of his book on hidden thermodynamics by D. H. Delphenich The Theory of measurement in wave mechanics (English translation of his book on the subject) "A new conception of light" (English translation) Louis de Broglie Interview, on Ina.fr 1892 births 1987 deaths People from Dieppe, Seine-Maritime Members of the National Council of Vichy France 20th-century French physicists Quantum physicists French theoretical physicists People associated with CERN Presidents of the Société Française de Physique Members of the American Philosophical Society University of Paris alumni Academic staff of the University of Paris Members of the Académie Française Members of the French Academy of Sciences Officers of the French Academy of Sciences Members of the Royal Swedish Academy of Sciences Foreign associates of the National Academy of Sciences Foreign members of the USSR Academy of Sciences Foreign fellows of the Indian National Science Academy Foreign members of the Royal Society Members of the International Academy of Quantum Molecular Science French military personnel of World War I Nobel laureates in Physics French Nobel laureates Kalinga Prize recipients Grand Cross of the Legion of Honour Winners of the Max Planck Medal
Louis de Broglie
[ "Physics" ]
4,203
[ "Quantum physicists", "Quantum mechanics" ]
50,225
https://en.wikipedia.org/wiki/Prime%20meridian
A prime meridian is an arbitrarily chosen meridian (a line of longitude) in a geographic coordinate system at which longitude is defined to be 0°. On a spheroid, a prime meridian and its anti-meridian (the 180th meridian in a 360°-system) form a great ellipse. This divides the body (e.g. Earth) into two hemispheres: the Eastern Hemisphere and the Western Hemisphere (for an east-west notational system). For Earth's prime meridian, various conventions have been used or advocated in different regions throughout history. Earth's current international standard prime meridian is the IERS Reference Meridian. It is derived, but differs slightly, from the Greenwich Meridian, the previous standard. A prime meridian for a planetary body not tidally locked (or at least not in synchronous rotation) is entirely arbitrary, unlike an equator, which is determined by the axis of rotation. However, for celestial objects that are tidally locked (more specifically, synchronous), their prime meridians are determined by the face that always points inward along the orbit (a planet facing its star, or a moon facing its planet), just as equators are determined by rotation. Longitudes for the Earth and Moon are measured from their prime meridian (at 0°) to 180° east and west. For all other Solar System bodies, longitude is measured from 0° (their prime meridian) to 360°. West longitudes are used if the rotation of the body is prograde (or 'direct', like Earth), meaning that its direction of rotation is the same as that of its orbit. East longitudes are used if the rotation is retrograde. History The notion of longitude was developed by the Greeks Eratosthenes (c. 276–195 BCE) in Alexandria and Hipparchus (c. 190–120 BCE) in Rhodes, and applied to a large number of cities by the geographer Strabo (64/63 BCE – c. 24 CE). But it was Ptolemy (c. 90–168 CE) who first used a consistent meridian for a world map in his Geographia. Ptolemy used as his basis the "Fortunate Isles", a group of islands in the Atlantic which are usually associated with the Canary Islands (13° to 18° W), although his maps correspond more closely to the Cape Verde islands (22° to 25° W). The main point is to be comfortably west of the western tip of Africa (17.5° W), as negative numbers were not yet in use. His prime meridian corresponds to 18° 40' west of Winchester (about 20° W) today. At that time the chief method of determining longitude was by using the reported times of lunar eclipses in different countries. One of the earliest known descriptions of standard time in India appeared in the 4th-century CE astronomical treatise Surya Siddhanta. Postulating a spherical Earth, the book described the long-established custom of the prime meridian, or zero longitude, as passing through Avanti, the ancient name for the historic city of Ujjain, and Rohitaka, the ancient name for Rohtak, a city near Kurukshetra. Ptolemy's Geographia was first printed with maps at Bologna in 1477, and many early globes in the 16th century followed his lead. But there was still a hope that a "natural" basis for a prime meridian existed. Christopher Columbus reported (1493) that the compass pointed due north somewhere in mid-Atlantic, and this fact was used in the important Treaty of Tordesillas of 1494, which settled the territorial dispute between Spain and Portugal over newly discovered lands. The Tordesillas line was eventually settled at 370 leagues (2,193 kilometers, 1,362 statute miles, or 1,184 nautical miles) west of Cape Verde. 
This is shown in the copies of Spain's Padron Real made by Diogo Ribeiro in 1527 and 1529. São Miguel Island (25.5°W) in the Azores was still used for the same reason as late as 1594 by Christopher Saxton, although by then it had been shown that the zero magnetic declination line did not follow a line of longitude. In 1541, Mercator produced his famous 41 cm terrestrial globe and drew his prime meridian precisely through Fuerteventura (14°1'W) in the Canaries. His later maps used the Azores, following the magnetic hypothesis. But by the time that Ortelius produced the first modern atlas in 1570, other islands such as Cape Verde were coming into use. In his atlas, longitudes were counted from 0° to 360°, not 180°W to 180°E as is usual today. This practice was followed by navigators well into the 18th century. In 1634, Cardinal Richelieu used the westernmost island of the Canaries, El Hierro, 19° 55' west of Paris, as the choice of meridian. The geographer Delisle decided to round this off to 20°, so that it simply became the meridian of Paris disguised. In the early 18th century the battle was on to improve the determination of longitude at sea, leading to the development of the marine chronometer by John Harrison. But it was the development of accurate star charts, principally by the first British Astronomer Royal, John Flamsteed, between 1680 and 1719 and disseminated by his successor Edmond Halley, that enabled navigators to use the lunar method of determining longitude more accurately using the octant developed by Thomas Godfrey and John Hadley. In the 18th century most countries in Europe adopted their own prime meridian, usually through their capital; hence in France the Paris meridian was prime, in Prussia it was the Berlin meridian, in Denmark the Copenhagen meridian, and in the United Kingdom the Greenwich meridian. Between 1765 and 1811, Nevil Maskelyne published 49 issues of the Nautical Almanac based on the meridian of the Royal Observatory, Greenwich. "Maskelyne's tables not only made the lunar method practicable, they also made the Greenwich meridian the universal reference point. Even the French translations of the Nautical Almanac retained Maskelyne's calculations from Greenwich – in spite of the fact that every other table in the Connaissance des Temps considered the Paris meridian as the prime." In 1884, at the International Meridian Conference in Washington, D.C., 22 countries voted to adopt the Greenwich meridian as the prime meridian of the world. The French argued for a neutral line, mentioning the Azores and the Bering Strait, but eventually abstained and continued to use the Paris meridian until 1911. The current international standard prime meridian is the IERS Reference Meridian. The International Hydrographic Organization adopted an early version of the IRM in 1983 for all nautical charts. It was adopted for air navigation by the International Civil Aviation Organization on 3 March 1989. International prime meridian Since 1984, the international standard for the Earth's prime meridian is the IERS Reference Meridian. Between 1884 and 1984, the meridian of Greenwich was the world standard. These meridians are very close to each other. Prime meridian at Greenwich In October 1884 the Greenwich Meridian was selected by the forty-one delegates (representing twenty-five nations) to the International Meridian Conference, held in Washington, D.C., United States, to be the common zero of longitude and standard of time reckoning throughout the world. 
The position of the historic prime meridian, based at the Royal Observatory, Greenwich, was established by Sir George Airy in 1851. From the first observation Airy took with it, the meridian was defined by the location of the Airy Transit Circle. Prior to that, it was defined by a succession of earlier transit instruments, the first of which was acquired by the second Astronomer Royal, Edmond Halley, in 1721. It was set up in the extreme north-west corner of the Observatory between Flamsteed House and the Western Summer House. This spot, now subsumed into Flamsteed House, is roughly 43 metres (47 yards) to the west of the Airy Transit Circle, a distance equivalent to roughly 2 seconds of longitude. It was Airy's transit circle that was adopted in principle (with the French delegates, who pressed for adoption of the Paris meridian, abstaining) as the Prime Meridian of the world at the 1884 International Meridian Conference. All of these Greenwich meridians were located via an astronomic observation from the surface of the Earth, oriented via a plumb line along the direction of gravity at the surface. This astronomic Greenwich meridian was disseminated around the world, first via the lunar distance method, then by chronometers carried on ships, then via telegraph lines carried by submarine communications cables, then via radio time signals. One remote longitude ultimately based on the Greenwich meridian using these methods was that of the North American Datum 1927 or NAD27, an ellipsoid whose surface best matches mean sea level under the United States. IERS Reference Meridian Beginning in 1973, the International Time Bureau and later the International Earth Rotation and Reference Systems Service changed from reliance on optical instruments like the Airy Transit Circle to techniques such as lunar laser ranging, satellite laser ranging, and very-long-baseline interferometry. The new techniques resulted in the IERS Reference Meridian, the plane of which passes through the centre of mass of the Earth. This differs from the plane established by the Airy transit, which is affected by vertical deflection (the local vertical is affected by influences such as nearby mountains). The change from relying on the local vertical to using a meridian based on the centre of the Earth caused the modern prime meridian to be 5.3 arcseconds east of the astronomic Greenwich prime meridian through the Airy Transit Circle. At the latitude of Greenwich, this amounts to 102 metres (112 yards). This was officially accepted by the Bureau International de l'Heure (BIH) in 1984 via its BTS84 (BIH Terrestrial System) that later became WGS84 (World Geodetic System 1984) and the various International Terrestrial Reference Frames (ITRFs). Due to the movement of Earth's tectonic plates, the line of 0° longitude along the surface of the Earth has slowly moved toward the west from this shifted position by a few centimetres (inches); that is, towards the Airy Transit Circle (or the Airy Transit Circle has moved toward the east, depending on one's point of view) since 1984 (or the 1960s). With the introduction of satellite technology, it became possible to create a more accurate and detailed global map. With these advances there also arose the necessity to define a reference meridian that, whilst being derived from the Airy Transit Circle, would also take into account the effects of plate movement and variations in the way that the Earth was spinning. 
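The 102-metre figure quoted above follows from simple spherical geometry: an east-west angular offset corresponds to an arc length equal to the offset (in radians) times the Earth's radius times the cosine of the latitude. A rough check in Python, assuming a spherical Earth with the WGS84 equatorial radius and the Royal Observatory at latitude 51.4779° N (both assumed values, not taken from the text):

```python
import math

OFFSET_ARCSEC = 5.3           # eastward offset of the IERS meridian, per the text
EARTH_RADIUS_M = 6_378_137    # assumed: WGS84 equatorial radius, in metres
GREENWICH_LAT_DEG = 51.4779   # assumed: latitude of the Royal Observatory

# Convert arcseconds to radians: 1 degree = 3600 arcseconds.
offset_rad = math.radians(OFFSET_ARCSEC / 3600.0)

# Arc length along a parallel shrinks with the cosine of the latitude.
arc_m = offset_rad * EARTH_RADIUS_M * math.cos(math.radians(GREENWICH_LAT_DEG))
print(f"{arc_m:.0f} m")  # about 102 m, matching the figure in the text
```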
As a result, the IERS Reference Meridian was established and is commonly used to denote the Earth's prime meridian (0° longitude) by the International Earth Rotation and Reference Systems Service, which defines and maintains the link between longitude and time. Based on observations of satellites and compact celestial radio sources (quasars) from various coordinated stations around the globe, Airy's transit circle drifts northeast about 2.5 centimetres (1 inch) per year relative to this Earth-centred 0° longitude. It is also the reference meridian of the Global Positioning System operated by the United States Department of Defense, and of WGS84 and its two formal versions, the ideal International Terrestrial Reference System (ITRS) and its realization, the International Terrestrial Reference Frame (ITRF). A current convention on the Earth uses the line of longitude 180° opposite the IRM as the basis for the International Date Line. List of places On Earth, starting at the North Pole and heading south to the South Pole, the IERS Reference Meridian (as of 2016) passes through 8 countries, 4 seas, 3 oceans and 1 channel. Prime meridian on other celestial bodies As on the Earth, prime meridians must be arbitrarily defined. Often a landmark such as a crater is used; other times a prime meridian is defined by reference to another celestial object, or by magnetic fields. The prime meridians of the following planetographic systems have been defined: Two different heliographic coordinate systems are used on the Sun. The first is the Carrington heliographic coordinate system. In this system, the prime meridian passes through the center of the solar disk as seen from the Earth on 9 November 1853, which is when the English astronomer Richard Christopher Carrington started his observations of sunspots. The second is the Stonyhurst heliographic coordinate system, which originated at Stonyhurst Observatory in Lancashire, England. In 1975 the prime meridian of Mercury was defined to be 20° east of the crater Hun Kal. This meridian was chosen because it runs through the point on Mercury's equator where the average temperature is highest (due to the planet's rotation and orbit, the sun briefly retrogrades at noon at this point during perihelion, giving it more sunlight). Defined in 1992, the prime meridian of Venus passes through the central peak in the crater Ariadne, chosen arbitrarily. The prime meridian of the Moon lies directly in the middle of the face of the Moon visible from Earth and passes near the crater Bruce. The prime meridian of Mars was established in 1971 and passes through the center of the crater Airy-0, although it is fixed by the longitude of the Viking 1 lander, which is defined to be 47.95137°W. The prime meridian on Ceres runs through the Kait crater, which was arbitrarily chosen because it is near the equator (about 2° south). The prime meridian on 4 Vesta is 4 degrees east of the crater Claudia, chosen because it is sharply defined. Jupiter has several coordinate systems because its cloud tops—the only part of the planet visible from space—rotate at different rates depending on latitude. It is unknown whether Jupiter has any internal solid surface that would enable a more Earth-like coordinate system. System I and System II coordinates are based on atmospheric rotation, and System III coordinates use Jupiter's magnetic field. The prime meridians of Jupiter's four Galilean moons were established in 1979. Europa's prime meridian is defined such that the crater Cilix is at 182° W. 
The 0° longitude runs through the middle of the face that is always turned towards Jupiter. Io's prime meridian, like that of Earth's moon, is defined so that it runs through the middle of the face that is always turned towards Jupiter (the near side, known as the subjovian hemisphere). Ganymede's prime meridian is defined such that the crater Anat is at 128° W, and the 0° longitude runs through the middle of the subjovian hemisphere. Callisto's prime meridian is defined such that the crater Saga is at 326° W. Titan is the largest moon of Saturn and, like the Earth's moon, always has the same face towards Saturn, and so the middle of that face is 0° longitude. Like Jupiter, Neptune is a gas giant, so any surface is obscured by clouds. The prime meridian of its largest moon, Triton, was established in 1991. Pluto's prime meridian is defined as the meridian passing through the center of the face that is always towards Charon, its largest moon, as the two are tidally locked to each other. Charon's prime meridian is similarly defined as the meridian always facing directly toward Pluto. List of historic prime meridians on Earth See also Notes References Works cited External links "Where the Earth's surface begins—and ends", Popular Mechanics, December 1930 scanned TIFFs of the conference proceedings Prime meridians in use in the 1880s, by country Geodesy Meridians (geography) Cardinal Richelieu
Prime meridian
[ "Mathematics" ]
3,263
[ "Applied mathematics", "Geodesy" ]
50,237
https://en.wikipedia.org/wiki/Robert%20Boyle
Robert Boyle (25 January 1627 – 31 December 1691) was an Anglo-Irish natural philosopher, chemist, physicist, alchemist and inventor. Boyle is largely regarded today as the first modern chemist, and therefore one of the founders of modern chemistry, and one of the pioneers of modern experimental scientific method. He is best known for Boyle's law, which describes the inversely proportional relationship between the absolute pressure and volume of a gas, if the temperature is kept constant within a closed system. Among his works, The Sceptical Chymist is seen as a cornerstone book in the field of chemistry. He was a devout and pious Anglican and is noted for his works in theology. Biography Early years Boyle was born at Lismore Castle, in County Waterford, Ireland, the seventh son and fourteenth child of The 1st Earl of Cork ('the Great Earl of Cork') and Catherine Fenton. Lord Cork, then known simply as Richard Boyle, had arrived in Dublin from England in 1588 during the Tudor plantations of Ireland and obtained an appointment as a deputy escheator. He had amassed enormous wealth and landholdings by the time Robert was born and had been made Earl of Cork in October 1620. Catherine Fenton, Countess of Cork, was the daughter of Sir Geoffrey Fenton, the former Secretary of State for Ireland, who was born in Dublin in 1539, and Alice Weston, the daughter of Robert Weston, who was born in Lismore in 1541. As a child, Boyle was raised by a wet nurse, as were his elder brothers. Boyle received private tutoring in Latin, Greek, and French, and when he was eight years old, following the death of his mother, he and his brother Francis were sent to Eton College in England. His father's friend, Sir Henry Wotton, was then the provost of the college. During this time, his father hired Robert Carew, who had knowledge of Irish, to act as private tutor to his sons at Eton. However, "only Mr. Robert sometimes desires it [Irish] and is a little entered in it", but despite the "many reasons" given by Carew to draw their attention to it, "they practise the French and Latin but they affect not the Irish". After spending over three years at Eton, Robert travelled abroad with a French tutor. They visited Italy in 1641 and remained in Florence during the winter of that year studying the "paradoxes of the great star-gazer", the elderly Galileo Galilei. Middle years Robert returned to England from continental Europe in mid-1644 with a keen interest in scientific research. His father, Lord Cork, had died the previous year and had left him the manor of Stalbridge in Dorset as well as substantial estates in County Limerick in Ireland that he had acquired. Robert then made his residence at Stalbridge House between 1644 and 1652 and set up a laboratory where he conducted many experiments. From that time, Robert devoted his life to scientific research and soon took a prominent place in the band of enquirers, known as the "Invisible College", who devoted themselves to the cultivation of the "new philosophy". They met frequently in London, often at Gresham College, and some of the members also had meetings at Oxford. Having made several visits to his Irish estates beginning in 1647, Robert moved to Ireland in 1652 but became frustrated at his inability to make progress in his chemical work. In one letter, he described Ireland as "a barbarous country where chemical spirits were so misunderstood and chemical instruments so unprocurable that it was hard to have any Hermetic thoughts in it." 
All Souls, Oxford University, shows the arms of Boyle's family in the colonnade of the Great Quadrangle, opposite the arms of the Hill family of Shropshire, close by a sundial designed by Boyle's friend Christopher Wren. In 1654, Boyle left Ireland for Oxford to pursue his work more successfully. An inscription can be found on the wall of University College, Oxford, on the High Street (now the location of the Shelley Memorial), marking the spot where Cross Hall stood until the early 19th century. It was here that Boyle rented rooms from the wealthy apothecary who owned the Hall. Reading in 1657 of Otto von Guericke's air pump, he set himself, with the assistance of Robert Hooke, to devise improvements in its construction. Guericke's air pump was large and required "the continual labour of two strong men for divers hours", and Boyle constructed one that could be operated conveniently on a desktop. With the result, the "machina Boyleana" or "Pneumatical Engine", finished in 1659, he began a series of experiments on the properties of air and coined the term factitious airs. An account of Boyle's work with the air pump was published in 1660 under the title New Experiments Physico-Mechanical, Touching the Spring of the Air, and its Effects. Among the critics of the views put forward in this book was a Jesuit, Francis Line (1595–1675), and it was while answering his objections that Boyle made his first mention of the law that the volume of a gas varies inversely with the pressure of the gas, which among English-speaking people is usually called Boyle's law. The person who originally formulated the hypothesis was Henry Power in 1661. Boyle in 1662 included a reference to a paper written by Power, but mistakenly attributed it to Richard Towneley. In continental Europe, the hypothesis is sometimes attributed to Edme Mariotte, although he did not publish it until 1676 and was probably aware of Boyle's work at the time. In 1663 the Invisible College became The Royal Society of London for Improving Natural Knowledge, and the charter of incorporation granted by Charles II of England named Boyle a member of the council. In 1680 he was elected president of the society, but declined the honour from a scruple about oaths. He made a "wish list" of 24 possible inventions which included "the prolongation of life", the "art of flying", "perpetual light", "making armour light and extremely hard", "a ship to sail with all winds, and a ship not to be sunk", "practicable and certain way of finding longitudes", "potent drugs to alter or exalt imagination, waking, memory and other functions and appease pain, procure innocent sleep, harmless dreams, etc.". All but a few of the 24 have come true. In 1668 he left Oxford for London where he resided at the house of his elder sister Katherine Jones, Lady Ranelagh, in Pall Mall. He experimented in the laboratory she had in her home and attended her salon of intellectuals interested in the sciences. The siblings maintained "a lifelong intellectual partnership, where brother and sister shared medical remedies, promoted each other's scientific ideas, and edited each other's manuscripts." His contemporaries widely acknowledged Katherine's influence on his work, but later historiographers dropped discussion of her accomplishments and relationship to her brother from their histories. 
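Boyle's law, discussed above, says that for a fixed amount of gas at constant temperature the product of pressure and volume stays constant, so p1·v1 = p2·v2. A one-function illustration in Python; the numbers in the example are made up:

```python
def boyle_final_pressure(p1: float, v1: float, v2: float) -> float:
    """Final pressure of a fixed amount of gas held at constant
    temperature, from Boyle's law: p1 * v1 == p2 * v2."""
    return p1 * v1 / v2


# Halving the volume of a gas initially at 100 kPa doubles its
# pressure to 200 kPa.
assert boyle_final_pressure(100.0, 2.0, 1.0) == 200.0
```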
Later years In 1669 his health, never very strong, began to fail seriously and he gradually withdrew from his public engagements, ceasing his communications to the Royal Society, and advertising his desire to be excused from receiving guests, "unless upon occasions very extraordinary", on Tuesday and Friday forenoon, and Wednesday and Saturday afternoon. In the leisure thus gained he wished to "recruit his spirits, range his papers", and prepare some important chemical investigations which he proposed to leave "as a kind of Hermetic legacy to the studious disciples of that art", but of which he did not make known the nature. His health became still worse in 1691, and he died on 31 December that year, just a week after the death of his sister, Katherine, in whose home he had lived and with whom he had shared scientific pursuits for more than twenty years. Boyle died from paralysis. He was buried in the churchyard of St Martin-in-the-Fields, his funeral sermon being preached by his friend, Bishop Gilbert Burnet. In his will, Boyle endowed a series of lectures that came to be known as the Boyle Lectures. Scientific investigator Boyle's great merit as a scientific investigator is that he carried out the principles which Francis Bacon espoused in the Novum Organum. Yet he would not avow himself a follower of Bacon, or indeed of any other teacher. On several occasions he mentions that, to keep his judgment as unprepossessed as might be, he avoided committing himself to any of the modern theories of philosophy until he was "provided of experiments" to help him judge of them. He refrained from any study of the atomical and the Cartesian systems, and even of the Novum Organum itself, though he admits to "transiently consulting" them about a few particulars. Nothing was more alien to his mental temperament than the spinning of hypotheses. He regarded the acquisition of knowledge as an end in itself, and in consequence, he gained a wider outlook on the aims of scientific inquiry than had been enjoyed by his predecessors for many centuries. This, however, did not mean that he paid no attention to the practical application of science nor that he despised knowledge which tended to practical use. Robert Boyle was an alchemist; and believing the transmutation of metals to be a possibility, he carried out experiments in the hope of achieving it; and he was instrumental in obtaining the repeal, by the Royal Mines Act 1688 (1 Will. & Mar. c. 30), of the statute of Henry IV against multiplying gold and silver, the Gold and Silver Act 1403 (5 Hen. 4. c. 4). With all the important work he accomplished in physics – the enunciation of Boyle's law, the discovery of the part taken by air in the propagation of sound, and investigations on the expansive force of freezing water, on specific gravities and refractive powers, on crystals, on electricity, on colour, on hydrostatics, etc. – chemistry was his peculiar and favourite study. His first book on the subject was The Sceptical Chymist, published in 1661, in which he criticised the "experiments whereby vulgar Spagyrists are wont to endeavour to evince their Salt, Sulphur and Mercury to be the true Principles of Things." For him chemistry was the science of the composition of substances, not merely an adjunct to the arts of the alchemist or the physician. He endorsed the view of elements as the undecomposable constituents of material bodies; and made the distinction between mixtures and compounds. 
He made considerable progress in the technique of detecting the ingredients of mixtures and compounds, a process which he designated by the term "analysis". He further supposed that the elements were ultimately composed of particles of various sorts and sizes, into which, however, they were not to be resolved in any known way. He studied the chemistry of combustion and of respiration, and conducted experiments in physiology, where, however, he was hampered by the "tenderness of his nature" which kept him from anatomical dissections, especially vivisections, though he knew them to be "most instructing". Theological interests In addition to philosophy, Boyle devoted much time to theology, showing a very decided leaning to the practical side and an indifference to controversial polemics. At the Restoration of Charles II of England in 1660, he was favourably received at court and in 1665 would have received the provostship of Eton College had he agreed to take holy orders, but this he refused to do on the ground that his writings on religious subjects would have greater weight coming from a layman than a paid minister of the Church. Moreover, Boyle incorporated his scientific interests into his theology, believing that natural philosophy could provide powerful evidence for the existence of God. In works such as Disquisition about the Final Causes of Natural Things (1688), for instance, he criticised contemporary philosophers – such as René Descartes – who denied that the study of nature could reveal much about God. Instead, Boyle argued that natural philosophers could use the design apparently on display in some parts of nature to demonstrate God's involvement with the world. He also attempted to tackle complex theological questions using methods derived from his scientific practices. In Some Physico-Theological Considerations about the Possibility of the Resurrection (1675), he used a chemical experiment known as the reduction to the pristine state as part of an attempt to demonstrate the physical possibility of the resurrection of the body. Throughout his career, Boyle tried to show that science could lend support to Christianity. As a director of the East India Company he spent large sums in promoting the spread of Christianity in the East, contributing liberally to missionary societies and to the expenses of translating the Bible or portions of it into various languages. Boyle supported the policy that the Bible should be available in the vernacular language of the people. An Irish language version of the New Testament was published in 1602 but was rare in Boyle's adult life. In 1680–85 Boyle personally financed the printing of the Bible, both Old and New Testaments, in Irish. In this respect, Boyle's attitude to the Irish language differed from the Protestant Ascendancy class in Ireland at the time, which was generally hostile to the language and largely opposed the use of Irish (not only as a language of religious worship). Boyle also had a monogenist perspective on the origin of races. He was a pioneer in studying races, and he believed that all human beings, no matter how diverse their physical differences, came from the same source: Adam and Eve. He studied reported stories of parents giving birth to differently coloured albinos, and concluded that Adam and Eve were originally white and that Caucasians could give birth to different coloured races. 
Boyle also extended the theories of Robert Hooke and Isaac Newton about colour and light via optical projection (in physics) into discourses of polygenesis, speculating that these differences were due to "seminal impressions". In hindsight, this can be read as a reasonable explanation of complexion for his time, since skin colour is now known to be determined by genes, which are carried in the gametes. Boyle's writings mention that at his time, for "European Eyes", beauty was not measured so much in colour of skin, but in "stature, comely symmetry of the parts of the body, and good features in the face". Various members of the scientific community rejected his views and described them as "disturbing" or "amusing". In his will, Boyle provided money for a series of lectures to defend the Christian religion against those he considered "notorious infidels, namely atheists, deists, pagans, Jews and Muslims", with the provision that controversies between Christians were not to be mentioned (see Boyle Lectures). Awards and honours As a founder of the Royal Society, he was elected a Fellow of the Royal Society (FRS) in 1663. Boyle's law is named in his honour. The Royal Society of Chemistry issues a Robert Boyle Prize for Analytical Science, named in his honour. The Boyle Medal for Scientific Excellence in Ireland, inaugurated in 1899, is awarded jointly by the Royal Dublin Society and The Irish Times. Launched in 2012, the Robert Boyle Summer School, organized by the Waterford Institute of Technology with support from Lismore Castle, is held annually to honor the heritage of Robert Boyle. Important works The following are some of the more important of his works: 1660 – New Experiments Physico-Mechanical: Touching the Spring of the Air and their Effects 1661 – The Sceptical Chymist 1662 – Whereunto is Added a Defence of the Authors Explication of the Experiments, Against the Obiections of Franciscus Linus and Thomas Hobbes (a book-length addendum to the second edition of New Experiments Physico-Mechanical) 1663 – Considerations touching the Usefulness of Experimental Natural Philosophy (followed by a second part in 1671) 1664 – Experiments and Considerations Touching Colours, with Observations on a Diamond that Shines in the Dark 1665 – New Experiments and Observations upon Cold 1666 – Hydrostatical Paradoxes 1666 – Origin of Forms and Qualities according to the Corpuscular Philosophy. (A continuation of his work on the spring of air demonstrated that a reduction in ambient pressure could lead to bubble formation in living tissue. This description of a viper in a vacuum was the first recorded description of decompression sickness.) 1669 – A Continuation of New Experiments Physico-mechanical, Touching the Spring and Weight of the Air, and Their Effects 1670 – Tracts about the Cosmical Qualities of Things, the Temperature of the Subterraneal and Submarine Regions, the Bottom of the Sea, &tc. with an Introduction to the History of Particular Qualities 1672 – Origin and Virtues of Gems 1673 – Essays of the Strange Subtilty, Great Efficacy, Determinate Nature of Effluviums 1674 – Two volumes of tracts on the Saltiness of the Sea, Suspicions about the Hidden Realities of the Air, Cold, Celestial Magnets 1674 – Animadversions upon Mr. 
Hobbes's Problemata de Vacuo 1676 – Experiments and Notes about the Mechanical Origin or Production of Particular Qualities, including some notes on electricity and magnetism 1678 – Observations upon an artificial Substance that Shines without any Preceding Illustration 1680 – The Aerial Noctiluca 1682 – New Experiments and Observations upon the Icy Noctiluca (a further continuation of his work on the air) 1684 – Memoirs for the Natural History of the Human Blood 1685 – Short Memoirs for the Natural Experimental History of Mineral Waters 1686 – A Free Enquiry into the Vulgarly Received Notion of Nature 1690 – Medicina Hydrostatica 1691 – Experimenta et Observationes Physicae Among his religious and philosophical writings were: 1648 (1659) – Some Motives and Incentives to the Love of God, often known by its running head Seraphic Love, written in 1648, but not published until 1659 1663 – Some Considerations Touching the Style of the H[oly] Scriptures 1664 – Excellence of Theology compared with Natural Philosophy 1665 – Occasional Reflections upon Several Subjects, which was ridiculed by Swift in Meditation Upon a Broomstick, and by Butler in An Occasional Reflection on Dr Charlton's Feeling a Dog's Pulse at Gresham College 1675 – Some Considerations about the Reconcileableness of Reason and Religion, with a Discourse about the Possibility of the Resurrection 1687 – The Martyrdom of Theodora, and of Didymus, major source for Handel's Oratorio Theodora 1690 – The Christian Virtuoso See also , phosphorus manufacturer who started as Boyle's assistant , a painting of a demonstration of one of Boyle's experiments , thermodynamic quantity named after Boyle References Further reading M. A. Stewart (ed.), Selected Philosophical Papers of Robert Boyle, Indianapolis: Hackett, 1991. Fulton, John F., A Bibliography of the Honourable Robert Boyle, Fellow of the Royal Society. Second edition. Oxford: At the Clarendon Press, 1961. Hunter, Michael, Boyle : Between God and Science, New Haven : Yale University Press, 2009. Hunter, Michael, Robert Boyle, 1627–91: Scrupulosity and Science, The Boydell Press, 2000 Principe, Lawrence, The Aspiring Adept: Robert Boyle and His Alchemical Quest, Princeton University Press, 1998 Shapin, Stephen; Schaffer, Simon, Leviathan and the Air-Pump. Ben-Zaken, Avner, "Exploring the Self, Experimenting Nature", in Reading Hayy Ibn-Yaqzan: A Cross-Cultural History of Autodidacticism (Johns Hopkins University Press, 2011), pp. 101–126. 
Boyle's published works online The Sceptical Chymist – Project Gutenberg Essay on the Virtue of Gems – Gem and Diamond Foundation Experiments and Considerations Touching Colours – Gem and Diamond Foundation Experiments and Considerations Touching Colours – Project Gutenberg Boyle Papers University of London Hydrostatical Paradoxes – Google Books External links Robert Boyle, Internet Encyclopedia of Philosophy Readable versions of Excellence of the mechanical hypothesis, Excellence of theology, and Origin of forms and qualities Robert Boyle Project, Birkbeck, University of London Summary juxtaposition of Boyle's The Sceptical Chymist and his The Christian Virtuoso The Relationship between Science and Scripture in the Thought of Robert Boyle Robert Boyle and His Alchemical Quest : Including Boyle's "Lost" Dialogue on the Transmutation of Metals, Princeton University Press, 1998, Robert Boyle's (1690) Experimenta et considerationes de coloribus – digital facsimile from the Linda Hall Library 1627 births 1691 deaths 17th-century Anglo-Irish people 17th-century English chemists 17th-century English writers 17th-century English male writers 17th-century Irish philosophers 17th-century English philosophers 17th-century alchemists 17th-century Irish scientists Irish Anglicans Discoverers of chemical elements English alchemists English physicists Founder fellows of the Royal Society Independent scientists Irish alchemists Irish chemists Irish physicists People educated at Eton College People from Lismore, County Waterford Philosophers of science Robert Younger sons of earls Fluid dynamicists Writers about religion and science Scientists from County Waterford Directors of the British East India Company
Robert Boyle
[ "Chemistry" ]
4,394
[ "Fluid dynamicists", "Fluid dynamics" ]
50,245
https://en.wikipedia.org/wiki/Sugar%20beet
A sugar beet is a plant whose root contains a high concentration of sucrose and that is grown commercially for sugar production. In plant breeding, it is known as the Altissima cultivar group of the common beet (Beta vulgaris). Together with other beet cultivars, such as beetroot and chard, it belongs to the subspecies Beta vulgaris subsp. vulgaris but is classified as var. saccharifera. Its closest wild relative is the sea beet (Beta vulgaris subsp. maritima). Sugar beets are grown in climates that are too cold for sugarcane. In 2020, Russia, the United States, Germany, France and Turkey were the world's five largest sugar beet producers. In 2010–2011, Europe and North America (excluding Arctic territories) failed to supply the overall domestic demand for sugar and were all net importers of sugar. The US harvested of sugar beets in 2008. Sugar beets accounted for 20% of the world's sugar production in 2009 and nearly 30% by 2013. Sugarcane accounts for most of the rest of sugar produced globally. In February 2015, a USDA factsheet reported that sugar beets generally account for about 55 percent of domestically produced sugar, and sugar cane for about 45 percent. Description The sugar beet has a conical, white, fleshy root (a taproot) with a flat crown. The plant consists of the root and a rosette of leaves. Sugar is formed by photosynthesis in the leaves and is then stored in the root. The root of the beet contains 75% water, about 20% sugar, and 5% pulp. The exact sugar content can vary between 12% and 21%, depending on the cultivar and growing conditions. Sugar is the primary value of sugar beet as a cash crop. The pulp, insoluble in water and mainly composed of cellulose, hemicellulose, lignin, and pectin, is used in animal feed. The byproducts of the sugar beet crop, such as pulp and molasses, add another 10% to the value of the harvest. Sugar beets grow exclusively in the temperate zone, in contrast to sugarcane, which grows exclusively in the tropical and subtropical zones. The average weight of a sugar beet ranges between . Sugar beet foliage has a rich, brilliant green color and grows to a height of about . The leaves are numerous and broad and grow in a tuft from the crown of the beet, which is usually level with or just above the ground surface. History of the sugar beet Discovery of beet sugar The species beet consists of several cultivar groups. The 16th-century French scientist Olivier de Serres discovered a process for preparing sugar syrup from (red) beetroot. He wrote: "The beet-root, when being boiled, yields a juice similar to syrup of sugar, which is beautiful to look at on account of its vermilion colour" (1575). Because crystallized cane sugar was already available and had a better taste, this process did not become popular. Modern sugar beets date to mid-18th-century Silesia, where Frederick the Great, king of Prussia, subsidized experiments to develop processes for sugar extraction. In 1747, Andreas Sigismund Marggraf, professor of physics in the Academy of Science of Berlin, isolated sugar from beetroots and found it at concentrations of 1.3–1.6%. He also demonstrated that the sugar that could be extracted from beets was identical to that produced from cane. He found the best of these vegetable sources for sugar was the white beet. Despite Marggraf's success in isolating sugar from beets, it did not lead to commercial sugar production. 
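The composition figures quoted above (about 75% water, roughly 20% sugar, with the sugar fraction varying between 12% and 21%) lend themselves to a quick back-of-the-envelope calculation of how much sucrose a load of beets carries. A sketch in Python; the 17% default is an assumed mid-range value, not a number from the text:

```python
def sugar_in_beet(beet_mass_kg: float, sugar_fraction: float = 0.17) -> float:
    """Sucrose mass carried in a load of beet, for a sugar mass fraction
    within the 12%-21% range quoted in the text."""
    if not 0.12 <= sugar_fraction <= 0.21:
        raise ValueError("sugar fraction outside the quoted 12%-21% range")
    return beet_mass_kg * sugar_fraction


# One tonne of beet at an assumed 17% sugar content carries 170 kg of sucrose
# (actual factory recovery is lower, since some sucrose ends up in molasses).
print(sugar_in_beet(1000.0))  # 170.0
```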
Development of the sugar beet Marggraf's student and successor Franz Karl Achard began plant breeding sugar beet in Kaulsdorf near Berlin in 1786. Achard started his plant breeding by evaluating 23 varieties of beet for sugar content. In the end he selected a local strain from Halberstadt in modern-day Saxony-Anhalt, Germany. Moritz Baron von Koppy and his son further selected white, conical tubers from this strain. The selection was named weiße schlesische Zuckerrübe, meaning white Silesian sugar beet. In about 1800, this cultivar boasted about 5–6% sucrose by (dry) weight. It would go on to be the progenitor of all modern sugar beets. The plant breeding process has continued since then, leading to a sucrose content of around 18% in modern varieties. History of the beet sugar industry Franz Karl Achard opened the world's first beet sugar factory in 1801, at Kunern, Silesia (now Konary, Poland). The idea to produce sugar from beet was soon introduced to France, whence the European sugar beet industry rapidly expanded. By 1840, about 5% of the world's sugar was derived from sugar beets, and by 1880, this number had risen more than tenfold to over 50%. In North America, the first commercial production started in 1879 at a farm in Alvarado, California. The sugar beet was introduced to Chile by German settlers around 1850. Culture The sugar beet, like sugarcane, needs a particular soil and a proper climate for its successful cultivation. The most important requirements are that the soil must contain a large supply of nutrients, be rich in humus, and be able to contain a great deal of moisture. A certain amount of alkali is not necessarily detrimental, as sugar beets are not especially susceptible to injury by some alkali. The ground should be fairly level and well-drained, especially where irrigation is practiced. Generous crops can be grown in both sandy soil and heavy loams, but the ideal soil is a sandy loam, i.e., a mixture of organic matter, clay and sand. A subsoil of gravel, or the presence of hardpan, is not desirable, as cultivation to a depth of from is necessary to produce the best results. Climatic conditions, temperature, sunshine, rainfall and winds have an important bearing upon the success of sugar beet agriculture. A temperature ranging from during the growing months is most favorable. In the absence of adequate irrigation, of rainfall are necessary to raise an average crop. High winds are harmful, as they generally crust the land and prevent the young beets from coming through the ground. The best results are obtained along the coast of southern California, where warm, sunny days succeeded by cool, foggy nights seem to meet sugar beet's favored growth conditions. Sunshine of long duration but not of great intensity is the most important factor in the successful cultivation of sugar beets. Near the equator, the shorter days and the greater heat of the sun sharply reduce the sugar content in the beet. In high elevation regions such as those of Idaho, Colorado and Utah, where the temperature is high during the daytime, but where the nights are cool, the quality of the sugar beet is excellent. In Michigan, the long summer days from the relatively high latitude (the Lower Peninsula, where production is concentrated, lies between the 41st and 46th parallels North) and the influence of the Great Lakes result in satisfactory climatic conditions for sugar beet culture. Sebewaing, Michigan, lies in the Thumb region of Michigan; both the region and state are major sugar beet producers. 
Sebewaing is home to one of four Michigan Sugar Company factories. The town sponsors an annual Michigan Sugar Festival. To cultivate beets successfully, the land must be properly prepared. Deep ploughing is the first principle of beet culture. It allows the roots to penetrate the subsoil without much obstruction, thereby preventing the beet from growing out of the ground, besides enabling it to extract considerable nourishment and moisture from the lower soil. If the latter is too hard, the roots will not penetrate it readily and, as a result, the plant will be pushed up and out of the earth during the process of growth. A hard subsoil is impervious to water and prevents proper drainage. It should not be too loose, however, as this allows the water to pass through more freely than is desirable. Ideally, the soil should be deep, fairly fine and easily penetrable by the roots. It should also be capable of retaining moisture and at the same time admit of a free circulation of air and good drainage. Sugar beet crops exhaust the soil rapidly. Crop rotation is recommended and necessary. Normally, beets are grown in the same ground every third year, peas, beans or grain being raised the other two years. In most temperate climates, beets are planted in the spring and harvested in the autumn. At the northern end of its range, growing seasons as short as 100 days can produce commercially viable sugar beet crops. In warmer climates, such as in California's Imperial Valley, sugar beets are a winter crop, planted in the autumn and harvested in the spring. In recent years, Syngenta has developed the so-called tropical sugar beet. It allows the plant to grow in tropical and subtropical regions. Beets are planted from a small seed; of beet seed comprises 100,000 seeds and will plant over of ground. Until the latter half of the 20th century, sugar beet production was highly labor-intensive, as weed control was managed by densely planting the crop, which then had to be manually thinned two or three times with a hoe during the growing season. Harvesting also required many workers. Although the roots could be lifted by a plough-like device that could be pulled by a horse team, the rest of the preparation was by hand. One laborer grabbed the beets by their leaves, knocked them together to shake free loose soil, and then laid them in a row, root to one side, greens to the other. A second worker equipped with a beet hook (a short-handled tool between a billhook and a sickle) followed behind, and would lift the beet and swiftly chop the crown and leaves from the root with a single action. Working this way, he would leave a row of beets that could be forked into the back of a cart. Today, mechanical sowing, herbicide application for weed control, and mechanical harvesting have displaced this reliance on manual farm work. A root beater uses a series of blades to chop the leaf and crown (which is high in nonsugar impurities) from the root. The beet harvester lifts the root, and removes excess soil from the root in a single pass over the field. A modern harvester is typically able to cover six rows at the same time. The beets are dumped into trucks as the harvester rolls down the field, and then delivered to the factory. The conveyor then removes more soil. If the beets are to be left for later delivery, they are formed into clamps. Straw bales are used to shield the beets from the weather. 
Provided the clamp is well built with the right amount of ventilation, the beets do not significantly deteriorate. Beets that freeze and then defrost produce complex carbohydrates that cause severe production problems in the factory. In the UK, loads may be hand examined at the factory gate before being accepted. In the US, the fall harvest begins with the first hard frost, which arrests photosynthesis and the further growth of the root. Depending on the local climate, it may be carried out over the course of a few weeks or be prolonged throughout the winter months. The harvest and processing of the beet is referred to as "the campaign", reflecting the organization required to deliver the crop at a steady rate to processing factories that run 24 hours a day for the duration of the harvest and processing (for the UK, the campaign lasts about five months). In the Netherlands, this period is known as , a time to be careful when driving on local roads in the area while the beets are being harvested and transported, because the naturally high clay content of the soil tends to cause slippery roads when soil falls from the trailers during transport. Production statistics The world harvested of sugar beets in 2022. The world's largest producer was Russia, with a harvest. The average yield of sugar beet crops worldwide was 60.8 tonnes per hectare. The most productive sugar beet farms in the world, in 2022, were in Chile, with a nationwide average yield of 106.2 tonnes per hectare. Imperial Valley (California) farmers have achieved yields of about 160 tonnes per hectare and over 26 tonnes of sugar per hectare. Imperial Valley farms benefit from high intensities of incident sunlight and intensive use of irrigation and fertilizers. From sugar beet to white sugar Most sugar beets are used to create white sugar. This is done in a beet sugar factory, often abbreviated to sugar factory. Nowadays these usually also act as a sugar refinery, but historically the beet sugar factory produced raw sugar and the sugar refinery refined raw sugar to create white sugar. Sugar factory In the 1960s, beet sugar processing was described as consisting of these steps. Harvesting and storage in a way that preserves the beet while they wait to be processed Washing and scrubbing to remove soil and debris Slicing the beet in small pieces called cossettes or chips Removing the sugar from the beet in a diffusion process, resulting in raw juice and beet pulp. Nowadays, most sugar factories then refine the raw juice themselves, without moving it to a sugar refinery. The beet pulp is processed on site to become cattle fodder. Sugar refinery The next steps to produce white sugar are not specific for producing sugar from sugar beet. They also apply to producing white sugar from sugar cane. As such, they belong to the sugar refining process, not to the beet sugar production process per se. Purification, the raw juice undergoes a chemical process to remove impurities and create thin juice. Evaporation, the thin juice is concentrated by evaporation to make a "thick juice", roughly 60% sucrose by weight. Crystallization, by boiling under reduced pressure, the sugar liquor is turned into crystals and a remaining liquor. Centrifugation, in a centrifuge the white sugar crystals are separated from the remaining sugar liquor. The remaining liquor is then boiled and centrifuged, producing a lower grade of crystallised sugar (which is redissolved to feed the white sugar pans) and molasses. 
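The evaporation step above concentrates thin juice to a thick juice of roughly 60% sucrose by weight, and the required water removal follows from a simple sucrose mass balance, since the sucrose passes through unchanged. A sketch in Python; the 15% sucrose content assumed for thin juice is an illustrative figure, not one given in the text:

```python
def water_to_evaporate(thin_juice_kg: float,
                       thin_frac: float = 0.15,
                       thick_frac: float = 0.60) -> float:
    """Mass of water removed when concentrating thin juice into thick juice.
    thick_frac = 0.60 follows the text; thin_frac = 0.15 is an assumed,
    illustrative value. Sucrose is conserved across the evaporators."""
    sucrose_kg = thin_juice_kg * thin_frac
    thick_juice_kg = sucrose_kg / thick_frac
    return thin_juice_kg - thick_juice_kg


# Concentrating 1000 kg of 15% thin juice to 60% thick juice boils
# off 750 kg of water, leaving 250 kg of thick juice.
print(water_to_evaporate(1000.0))  # 750.0
```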
Further sugar can be recovered from the molasses by methods such as the Steffen Process. Ethanol and alcohol From molasses There are two obvious methods to produce alcohol (ethanol) from sugar beet. The first method produces alcohol as a byproduct of manufacturing sugar: it ferments the sugar beet molasses that are left after the second centrifugation. This strongly resembles the manufacture of rum from sugar cane molasses. In a number of countries, notably the Czech Republic and Slovakia, this analogy led to making a rum-like distilled spirit called Tuzemak. On the Åland Islands, a similar drink is made under the brand name Kobba Libre. From sugar beet The second method to produce alcohol from sugar beet is to ferment the sugar beets themselves, without attempting to produce sugar at all. The idea of distilling alcohol from the beet came up soon after the first beet sugar factory was established. Between 1852 and 1854 Champonnois devised a good system to distill alcohol from sugar beet. Within a few years a large beet-distilling industry was created in France. The current process to produce alcohol by fermenting and distilling sugar beet consists of these steps: Adding starch milk Liquefaction and saccharification Fermentation in fermentation vats Distillation Dehydration, which results in bioethanol Rectification Refining, the result being a highly pure alcohol Large sugar beet distilleries remain limited to Europe. In 2023 Tereos had 8 beet sugar distilleries, located in France, Czechia and Romania. In many European countries rectified spirit from sugar beet is used to make liquor, e.g. vodka, gin, etc. Other uses Sugary syrup An unrefined sugary syrup can be produced directly from sugar beet. This thick, dark syrup is produced by cooking shredded sugar beet for several hours, then pressing the resulting mash and concentrating the juice produced until it has a consistency similar to that of honey. No other ingredients are used. In Germany, particularly the Rhineland area, and in the Netherlands, this sugar beet syrup (called Zuckerrüben-Sirup or Zapp in German, or Suikerstroop in Dutch) is used as a spread for sandwiches, as well as for sweetening sauces, cakes and desserts. Dutch people generally top their pancakes with stroop. Suikerstroop made according to the Dutch tradition is a Traditional Speciality Guaranteed under EU and UK law. Commercially, if the syrup has a dextrose equivalency (DE) above 30, the product has to be hydrolyzed and converted to a high-fructose syrup, much like high-fructose corn syrup, or isoglucose syrup in the EU. Uridine Uridine can be isolated from sugar beet. Alternative fuel BP and Associated British Foods plan to use agricultural surpluses of sugar beet to produce biobutanol in East Anglia in the United Kingdom. The feedstock-to-yield ratio for sugar beet is 56:9. Therefore, it takes 6.22 kg of sugar beet to produce 1 kg of ethanol (approximately 1.27 L at room temperature). In 2006 it was found that producing ethanol from sugar beet or cane became profitable when market prices for ethanol were close to $4 per gallon. According to Atlantic Biomass president Robert Kozak, a study at the University of Maryland Eastern Shore indicates sugar beets appear capable of producing 860 to 900 gallons (3,256 to 3,407 liters) of ethanol per acre. Cattle feed In New Zealand, sugar beet is widely grown and harvested as feed for dairy cattle. It is regarded as superior to fodder beet, because it has a lower water content (resulting in better storage properties). 
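The ethanol figures quoted above are mutually consistent, which is easy to verify: 56/9 is about 6.22 kg of beet per kg of ethanol, and 1 kg of ethanol occupies about 1.27 L given a density of roughly 0.789 kg/L near room temperature. A quick check in Python; the density is the only value assumed here rather than taken from the text:

```python
ETHANOL_DENSITY_KG_PER_L = 0.789  # assumed: ethanol density near 20 degrees C

beet_per_ethanol_kg = 56 / 9                   # kg of beet per kg of ethanol
litres_per_kg = 1 / ETHANOL_DENSITY_KG_PER_L   # volume occupied by 1 kg of ethanol

print(f"{beet_per_ethanol_kg:.2f} kg beet per kg ethanol")  # 6.22
print(f"{litres_per_kg:.2f} L per kg ethanol")              # 1.27
```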
Both the beet bulb and the leaves (with 25% protein) are fed to cattle. Although long considered toxic to cattle, harvested beet bulbs can be fed to cattle if they are appropriately transitioned to their new diet. Dairy cattle in New Zealand can thrive on just pasture and beets, without silage or other supplementary feed. The crop is also now grown in some parts of Australia as cattle feed. Monosodium glutamate Molasses can also be used to produce monosodium glutamate (MSG). Agriculture Sugar beets are an important part of a crop rotation cycle. Sugar beet plants are susceptible to Rhizomania ("root madness"), which turns the bulbous tap root into many small roots, making the crop economically unprocessable. Strict controls are enforced in European countries to prevent its spread, but it is already present in some areas. It is also susceptible to the beet leaf curl virus, which causes crinkling and stunting of the leaves, and to beet yellows virus. Continual research looks for varieties with resistance, as well as increased sugar yield. Sugar beet breeding research in the United States is most prominently conducted at various USDA Agricultural Research Stations, including one in Fort Collins, Colorado, headed by Linda Hanson and Leonard Panella; one in Fargo, North Dakota, headed by John Wieland; and one at Michigan State University in East Lansing, Michigan, headed by Rachel Naegele. Other economically important members of the subfamily Chenopodioideae: Beetroot Chard Mangelwurzel or fodder beet Genetic modification In the United States, genetically modified sugar beets, engineered for resistance to glyphosate, a herbicide marketed as Roundup, were developed by Monsanto. In 2005, the US Department of Agriculture-Animal and Plant Health Inspection Service (USDA-APHIS) deregulated glyphosate-resistant sugar beets after it conducted an environmental assessment and determined glyphosate-resistant sugar beets were highly unlikely to become a plant pest. Sugar from glyphosate-resistant sugar beets has been approved for human and animal consumption in multiple countries, but commercial production of biotech beets has been approved only in the United States and Canada. Studies have concluded the sugar from glyphosate-resistant sugar beets has the same nutritional value as sugar from conventional sugar beets. After deregulation in 2005, glyphosate-resistant sugar beets were extensively adopted in the United States. About 95% of sugar beet acres in the US were planted with glyphosate-resistant seed in 2011. Weeds may be chemically controlled using glyphosate without harming the crop. After planting sugar beet seed, weeds emerge in fields and growers apply glyphosate to control them. Glyphosate is commonly used in field crops because it controls a broad spectrum of weed species and has a low toxicity. A study from the UK suggests yields of genetically modified beet were greater than conventional, while another from the North Dakota State University extension service found lower yields. The introduction of glyphosate-resistant sugar beets may contribute to the growing number of glyphosate-resistant weeds, so Monsanto has developed a program to encourage growers to use different herbicide modes of action to control their weeds. In 2008, the Center for Food Safety, the Sierra Club, the Organic Seed Alliance and High Mowing Seeds filed a lawsuit against USDA-APHIS regarding their decision to deregulate glyphosate-resistant sugar beets in 2005. 
The organizations expressed concerns regarding the potential of glyphosate-resistant sugar beets to cross-pollinate with conventional sugar beets. U.S. District Judge Jeffrey S. White, US District Court for the Northern District of California, revoked the deregulation of glyphosate-resistant sugar beets and declared it unlawful for growers to plant glyphosate-resistant sugar beets in the spring of 2011. Believing a sugar shortage would occur, USDA-APHIS developed three options in the environmental assessment to address the concerns of environmentalists. In 2011, a federal appeals court for the Northern District of California in San Francisco overturned the ruling. In July 2012, after completing an environmental impact assessment and a plant pest risk assessment, the USDA deregulated Monsanto's Roundup Ready sugar beets. Genome and genetics The sugar beet genome shares a triplication event that occurred above the level of the Caryophyllales and at or below that of the eudicots. It has been sequenced and two reference genome sequences have already been generated. The genome size of the sugar beet is approximately 731 (714–758) megabases, and sugar beet DNA is packaged in 18 metacentric chromosomes (2n=2x=18). All sugar beet centromeres are made up of a single satellite DNA family and centromere-specific LTR retrotransposons. More than 60% of sugar beet's DNA is repetitive, mostly distributed in a dispersed way along the chromosomes. Crop wild relative beet populations (B. vulgaris ssp. maritima) have been sequenced as well, allowing for identification of the resistance gene Rz2 in the wild progenitor. Rz2 confers resistance to rhizomania, commonly known as the sugar beet root madness disease. Breeding Sugar beets have been bred for increased sugar content, from 8% to 18% over the past 200 years, resistance to viral and fungal diseases, increased taproot size, monogermy, and less bolting. Breeding has been eased by the discovery of a cytoplasmic male sterility line – this has been especially useful in yield breeding. References External links How Beet Sugar is Made Guardian (UK) article on how sugar beet can be used for fuel Sugar beet culture in the northern Great Plains area hosted by the University of North Texas Government Documents Department US court bans GM sugar beet: Cultivation to take place under controlled conditions? "Sugar From Beets" Popular Science Monthly, March 1935 Beta vulgaris Crops Phytoremediation plants Root vegetables Sugar
Sugar beet
[ "Chemistry", "Biology" ]
5,107
[ "Carbohydrates", "Phytoremediation plants", "Sugar", "Bioremediation" ]
50,263
https://en.wikipedia.org/wiki/Domain%20of%20a%20function
In mathematics, the domain of a function is the set of inputs accepted by the function. It is sometimes denoted by dom(f) or dom f, where f is the function. In layman's terms, the domain of a function can generally be thought of as "what x can be". More precisely, given a function f : X → Y, the domain of f is X. In modern mathematical language, the domain is part of the definition of a function rather than a property of it. In the special case that X and Y are both sets of real numbers, the function f can be graphed in the Cartesian coordinate system. In this case, the domain is represented on the x-axis of the graph, as the projection of the graph of the function onto the x-axis. For a function f : X → Y, the set Y is called the codomain: the set to which all outputs must belong. The set of specific outputs the function assigns to elements of X is called its range or image. The image of f is a subset of the codomain Y. Any function can be restricted to a subset of its domain. The restriction of f to A, where A ⊆ X, is written as f|A.

Natural domain
If a real function f is given by a formula, it may be not defined for some values of the variable. In this case, it is a partial function, and the set of real numbers on which the formula can be evaluated to a real number is called the natural domain or domain of definition of f. In many contexts, a partial function is called simply a function, and its natural domain is called simply its domain.

Examples
The function f defined by f(x) = 1/x cannot be evaluated at 0. Therefore, the natural domain of f is the set of real numbers excluding 0, which can be denoted by ℝ ∖ {0} or by (−∞, 0) ∪ (0, ∞).
The piecewise function f defined by f(x) = 1/x for x ≠ 0 and f(0) = 0 has as its natural domain the set of real numbers.
The square root function f(x) = √x has as its natural domain the set of non-negative real numbers, which can be denoted by ℝ≥0, the interval [0, ∞), or {x ∈ ℝ : x ≥ 0}.
The tangent function, denoted tan, has as its natural domain the set of all real numbers which are not of the form π/2 + kπ for some integer k, which can be written as ℝ ∖ {π/2 + kπ : k ∈ ℤ} (a compact restatement of these domains appears below).

Other uses
The term domain is also commonly used in a different sense in mathematical analysis: a domain is a non-empty connected open set in a topological space. In particular, in real and complex analysis, a domain is a non-empty connected open subset of the real coordinate space ℝⁿ or the complex coordinate space ℂⁿ. Sometimes such a domain is used as the domain of a function, although functions may be defined on more general sets. The two concepts are sometimes conflated as in, for example, the study of partial differential equations: in that case, a domain is the open connected subset of ℝⁿ where a problem is posed, making it both an analysis-style domain and also the domain of the unknown function(s) sought.

Set theoretical notions
For example, it is sometimes convenient in set theory to permit the domain of a function to be a proper class X, in which case there is formally no such thing as a triple (X, Y, G). With such a definition, functions do not have a domain, although some authors still use the term informally after introducing a function in the form f : X → Y.

See also
Argument of a function
Attribute domain
Bijection, injection and surjection
Codomain
Domain decomposition
Effective domain
Endofunction
Image (mathematics)
Lipschitz domain
Naive set theory
Range of a function
Support (mathematics)

Notes

References

Functions and mappings
Basic concepts in set theory
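The natural domains in the examples above can be restated compactly. The following LaTeX fragment is an editorial sketch of those reconstructed formulas, not material from the original article:

\begin{align*}
f(x) &= \frac{1}{x}, & \operatorname{dom} f &= \mathbb{R} \setminus \{0\}, \\
f(x) &= \sqrt{x}, & \operatorname{dom} f &= [0, \infty), \\
f(x) &= \tan x, & \operatorname{dom} f &= \mathbb{R} \setminus \left\{ \tfrac{\pi}{2} + k\pi : k \in \mathbb{Z} \right\}.
\end{align*}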
Domain of a function
[ "Mathematics" ]
694
[ "Mathematical analysis", "Functions and mappings", "Mathematical objects", "Basic concepts in set theory", "Mathematical relations" ]
50,264
https://en.wikipedia.org/wiki/Codomain
In mathematics, a codomain or set of destination of a function is a set into which all of the output of the function is constrained to fall. It is the set Y in the notation f : X → Y. The term range is sometimes ambiguously used to refer to either the codomain or the image of a function. A codomain is part of a function f if f is defined as a triple (X, Y, G) where X is called the domain of f, Y its codomain, and G its graph. The set of all elements of the form f(x), where x ranges over the elements of the domain X, is called the image of f. The image of a function is a subset of its codomain, so it might not coincide with it. Namely, a function that is not surjective has elements y in its codomain for which the equation f(x) = y does not have a solution. A codomain is not part of a function f if f is defined as just a graph. For example, in set theory it is desirable to permit the domain of a function to be a proper class X, in which case there is formally no such thing as a triple (X, Y, G). With such a definition, functions do not have a codomain, although some authors still use the term informally after introducing a function in the form f : X → Y.

Examples
For a function f : ℝ → ℝ defined by f(x) = x², or equivalently f : x ↦ x², the codomain of f is ℝ, but f does not map to any negative number. Thus the image of f is the set ℝ≥0, i.e., the interval [0, ∞). An alternative function g is defined thus:
g : ℝ → ℝ≥0, g : x ↦ x².
While f and g map a given x to the same number, they are not, in this view, the same function because they have different codomains. A third function h can be defined to demonstrate why:
h : x ↦ √x.
The domain of h cannot be ℝ but can be defined to be ℝ≥0:
h : ℝ≥0 → ℝ.
The compositions are denoted h ∘ f and h ∘ g. On inspection, h ∘ f is not useful. It is true, unless defined otherwise, that the image of f is not known; it is only known that it is a subset of ℝ. For this reason, it is possible that h, when composed with f, might receive an argument for which no output is defined – negative numbers are not elements of the domain of h, which is the square root function. Function composition therefore is a useful notion only when the codomain of the function on the right side of a composition (not its image, which is a consequence of the function and could be unknown at the level of the composition) is a subset of the domain of the function on the left side. The codomain affects whether a function is a surjection, in that the function is surjective if and only if its codomain equals its image. In the example, g is a surjection while f is not. The codomain does not affect whether a function is an injection. A second example of the difference between codomain and image is demonstrated by the linear transformations between two vector spaces – in particular, all the linear transformations from ℝ² to itself, which can be represented by the 2×2 matrices with real coefficients. Each matrix represents a map with the domain ℝ² and codomain ℝ². However, the image is uncertain. Some transformations may have image equal to the whole codomain (in this case the matrices with rank 2), but many do not, instead mapping into some smaller subspace (the matrices with rank 1 or 0). Take for example the matrix T given by
T = ( 1 0 ; 1 0 ),
which represents a linear transformation that maps the point (x, y) to (x, x). The point (2, 3) is not in the image of T, but is still in the codomain, since linear transformations from ℝ² to ℝ² are of explicit relevance. Just like all 2×2 matrices, T represents a member of that set. Examining the differences between the image and codomain can often be useful for discovering properties of the function in question. For example, it can be concluded that T does not have full rank since its image is smaller than the whole codomain.
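To make the matrix example concrete, here is a small Python sketch; the matrix T and the point (2, 3) follow the reconstruction above, and numpy's rank routine stands in for the rank claim (an illustration, not part of the original article):

import numpy as np

# The example matrix T, which maps (x, y) to (x, x).
T = np.array([[1, 0],
              [1, 0]])

# T has rank 1, so its image is a proper subspace of the codomain R^2.
print(np.linalg.matrix_rank(T))  # 1

# Every output of T has equal coordinates, so (2, 3) is in the codomain
# R^2 but not in the image of T.
print(T @ np.array([2, 3]))      # [2 2], never [2 3]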
See also

Notes

References

Functions and mappings
Basic concepts in set theory
Codomain
[ "Mathematics" ]
770
[ "Mathematical analysis", "Functions and mappings", "Mathematical objects", "Basic concepts in set theory", "Mathematical relations" ]
50,318
https://en.wikipedia.org/wiki/Symmetric%20multiprocessing
Symmetric multiprocessing or shared-memory multiprocessing (SMP) involves a multiprocessor computer hardware and software architecture where two or more identical processors are connected to a single, shared main memory, have full access to all input and output devices, and are controlled by a single operating system instance that treats all processors equally, reserving none for special purposes. Most multiprocessor systems today use an SMP architecture. In the case of multi-core processors, the SMP architecture applies to the cores, treating them as separate processors. Professor John D. Kubiatowicz considers that, traditionally, SMP systems contain processors without caches. Culler and Pal-Singh, in their 1998 book "Parallel Computer Architecture: A Hardware/Software Approach", mention: "The term SMP is widely used but causes a bit of confusion. [...] The more precise description of what is intended by SMP is a shared memory multiprocessor where the cost of accessing a memory location is the same for all processors; that is, it has uniform access costs when the access actually is to memory. If the location is cached, the access will be faster, but cache access times and memory access times are the same on all processors." SMP systems are tightly coupled multiprocessor systems with a pool of homogeneous processors running independently of each other. Each processor, executing different programs and working on different sets of data, has the capability of sharing common resources (memory, I/O device, interrupt system and so on) that are connected using a system bus or a crossbar.

Design
SMP systems have centralized shared memory called main memory (MM) operating under a single operating system with two or more homogeneous processors. Usually each processor has an associated private high-speed memory known as cache memory (or cache) to speed up main memory data access and to reduce system bus traffic. Processors may be interconnected using buses, crossbar switches or on-chip mesh networks. The bottleneck in the scalability of SMP using buses or crossbar switches is the bandwidth and power consumption of the interconnect among the various processors, the memory, and the disk arrays. Mesh architectures avoid these bottlenecks, and provide nearly linear scalability to much higher processor counts, at the sacrifice of programmability: serious programming challenges remain with this kind of architecture because it requires two distinct modes of programming, one for the CPUs themselves and one for the interconnect between the CPUs. A single programming language would have to be able to not only partition the workload, but also comprehend memory locality, which is a severe issue in a mesh-based architecture. SMP systems allow any processor to work on any task no matter where the data for that task is located in memory, provided that each task in the system is not in execution on two or more processors at the same time. With proper operating system support, SMP systems can easily move tasks between processors to balance the workload efficiently.

History
The earliest production system with multiple identical processors was the Burroughs B5000, which was functional around 1961. However, at run-time this was asymmetric, with one processor restricted to application programs while the other processor mainly handled the operating system and hardware interrupts. The Burroughs D825 first implemented SMP in 1962.
IBM offered dual-processor computer systems based on its System/360 Model 65 and the closely related Model 67 and 67-2. The operating systems that ran on these machines were OS/360 M65MP and TSS/360. Other software developed at universities, notably the Michigan Terminal System (MTS), used both CPUs. Both processors could access data channels and initiate I/O. In OS/360 M65MP, peripherals could generally be attached to either processor, since the operating system kernel ran on both processors (though with a "big lock" around the I/O handler). The MTS supervisor (UMMPS) had the ability to run on both CPUs of the IBM System/360 Model 67-2. Supervisor locks were small and were used to protect individual common data structures that might be accessed simultaneously from either CPU. Other mainframes that supported SMP included the UNIVAC 1108 II, released in 1965, which supported up to three CPUs, and the GE-635 and GE-645, although GECOS on multiprocessor GE-635 systems ran in a master-slave asymmetric fashion, unlike Multics on multiprocessor GE-645 systems, which ran in a symmetric fashion. Starting with its version 7.0 (1972), Digital Equipment Corporation's operating system TOPS-10 implemented the SMP feature; the earliest system running SMP was the DECSystem 1077, a dual KI10 processor system. Later KL10 systems could aggregate up to 8 CPUs in an SMP manner. In contrast, DEC's first multi-processor VAX system, the VAX-11/782, was asymmetric, but later VAX multiprocessor systems were SMP. Early commercial Unix SMP implementations included the Sequent Computer Systems Balance 8000 (released in 1984) and Balance 21000 (released in 1986). Both models were based on 10 MHz National Semiconductor NS32032 processors, each with a small write-through cache connected to a common memory to form a shared memory system. Another early commercial Unix SMP implementation was the NUMA-based Honeywell Information Systems Italy XPS-100, designed by Dan Gielan of VAST Corporation in 1985. Its design supported up to 14 processors, but due to electrical limitations, the largest marketed version was a dual-processor system. The operating system was derived and ported by VAST Corporation from AT&T 3B20 Unix SysVr3 code used internally within AT&T. Earlier non-commercial multiprocessing UNIX ports existed, including a port named MUNIX created at the Naval Postgraduate School by 1975.

Uses
Time-sharing and server systems can often use SMP without changes to applications, as they may have multiple processes running in parallel, and a system with more than one process running can run different processes on different processors. On personal computers, SMP is less useful for applications that have not been modified. If the system rarely runs more than one process at a time, SMP is useful only for applications that have been modified for multithreaded (multitasked) processing. Custom-programmed software can be written or modified to use multiple threads, so that it can make use of multiple processors. Multithreaded programs can also be used in time-sharing and server systems that support multithreading, allowing them to make more use of multiple processors.

Advantages/disadvantages
In current SMP systems, all of the processors are tightly coupled inside the same box with a bus or switch; on earlier SMP systems, a single CPU took an entire cabinet. Some of the components that are shared are global memory, disks, and I/O devices.
Only one copy of an OS runs on all the processors, and the OS must be designed to take advantage of this architecture. Some of the basic advantages involve cost-effective ways to increase throughput. To solve different problems and tasks, SMP applies multiple processors to a single problem, an approach known as parallel programming. However, there are a few limits on the scalability of SMP due to cache coherence and shared objects.

Programming
Uniprocessor and SMP systems require different programming methods to achieve maximum performance. Programs running on SMP systems may experience an increase in performance even when they have been written for uniprocessor systems. This is because hardware interrupts that would usually suspend program execution can instead be handled by the kernel on an idle processor. The effect in most applications (e.g. games) is not so much a performance increase as the appearance that the program is running much more smoothly. Some applications, particularly building software and some distributed computing projects, run faster by a factor of (nearly) the number of additional processors, as the sketch below illustrates. (Compilers by themselves are single-threaded, but, when building a software project with multiple compilation units, if each compilation unit is handled independently, this creates an embarrassingly parallel situation across the entire multi-compilation-unit project, allowing near-linear scaling of compilation time. Distributed computing projects are inherently parallel by design.) Systems programmers must build support for SMP into the operating system; otherwise, the additional processors remain idle and the system functions as a uniprocessor system. SMP systems can also lead to more complexity regarding instruction sets. A homogeneous processor system typically requires extra registers for "special instructions" such as SIMD (MMX, SSE, etc.), while a heterogeneous system can implement different types of hardware for different instructions/uses.

Performance
When more than one program executes at the same time, an SMP system has considerably better performance than a uniprocessor system, because different programs can run on different CPUs simultaneously. Conversely, asymmetric multiprocessing (AMP) usually allows only one processor to run a program or task at a time. For example, AMP can be used in assigning specific tasks to CPUs based on the priority and importance of task completion. AMP predates SMP as an approach to handling multiple CPUs, which helps explain its comparatively poor performance in this respect. In cases where an SMP environment processes many jobs, administrators often experience a loss of hardware efficiency. Software programs have been developed to schedule jobs and other functions of the computer so that the processor utilization reaches its maximum potential. Good software packages can achieve this maximum potential by scheduling each CPU separately, as well as being able to integrate multiple SMP machines and clusters. Access to RAM is serialized; this and cache coherency issues cause performance to lag slightly behind the number of additional processors in the system.
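As a rough illustration of the point about embarrassingly parallel builds, here is a minimal Python sketch using the standard multiprocessing module; the "compilation unit" workload is invented for the example, and the speedup observed depends on how many processors the OS can schedule:

import multiprocessing as mp
import time

def compile_unit(unit):
    # Stand-in for one independent compilation unit: CPU-bound busy work.
    total = 0
    for i in range(2_000_000):
        total += i * unit
    return total

if __name__ == "__main__":
    units = list(range(16))  # 16 independent "compilation units"

    start = time.perf_counter()
    serial = [compile_unit(u) for u in units]
    t1 = time.perf_counter() - start

    # One worker process per CPU: the OS schedules the independent tasks
    # across the identical processors of an SMP machine.
    start = time.perf_counter()
    with mp.Pool(processes=mp.cpu_count()) as pool:
        parallel = pool.map(compile_unit, units)
    t2 = time.perf_counter() - start

    assert serial == parallel
    print(f"serial: {t1:.2f}s  parallel: {t2:.2f}s  CPUs: {mp.cpu_count()}")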
Alternatives
SMP uses a single shared system bus that represents one of the earliest styles of multiprocessor machine architectures, typically used for building smaller computers with up to 8 processors. Larger computer systems might use newer architectures such as NUMA (Non-Uniform Memory Access), which dedicates different memory banks to different processors. In a NUMA architecture, processors may access local memory quickly and remote memory more slowly. This can dramatically improve memory throughput as long as the data are localized to specific processes (and thus processors). On the downside, NUMA makes the cost of moving data from one processor to another, as in workload balancing, more expensive. The benefits of NUMA are limited to particular workloads, notably on servers where the data are often associated strongly with certain tasks or users. Finally, there is computer clustered multiprocessing (such as Beowulf), in which not all memory is available to all processors. Clustering techniques are used fairly extensively to build very large supercomputers.

Variable SMP
Variable Symmetric Multiprocessing (vSMP) is a specific mobile use case technology initiated by NVIDIA. This technology includes an extra fifth core in a quad-core device, called the Companion core, built specifically for executing tasks at a lower frequency during mobile active standby mode, video playback, and music playback. Project Kal-El (Tegra 3), patented by NVIDIA, was the first SoC (System on Chip) to implement this new vSMP technology. This technology not only reduces mobile power consumption during the active standby state, but also maximizes quad-core performance during active usage for intensive mobile applications. Overall this technology addresses the need for increased battery life during active and standby usage by reducing the power consumption in mobile processors. Unlike current SMP architectures, the vSMP Companion core is OS transparent, meaning that the operating system and the running applications are totally unaware of this extra core but are still able to take advantage of it. Some of the advantages of the vSMP architecture include cache coherency, OS efficiency, and power optimization, as explained below:
Cache coherency: There are no consequences for synchronizing caches between cores running at different frequencies, since vSMP does not allow the Companion core and the main cores to run simultaneously.
OS efficiency: Running multiple CPU cores at different, asynchronous frequencies can lead to scheduling issues. With vSMP, the active CPU cores run at similar frequencies to optimize OS scheduling.
Power optimization: In asynchronous-clocking-based architectures, each core is on a different power plane to handle voltage adjustments for different operating frequencies, which can impact performance. vSMP technology is able to dynamically enable and disable certain cores for active and standby usage, reducing overall power consumption.
These advantages give the vSMP architecture a considerable benefit over architectures using asynchronous clocking technologies.

See also
Asymmetric multiprocessing
Binary Modular Dataflow Machine
Cellular multiprocessing
Locale (computer hardware)
Massively parallel
Partitioned global address space
Simultaneous multithreading – where functional elements of a CPU core are allocated across multiple threads of execution
Software lockout
Xeon Phi

References

External links
History of Multi-Processing
Linux and Multiprocessing
AMD

Classes of computers
Flynn's taxonomy
Parallel computing
Symmetric multiprocessing
[ "Technology" ]
2,733
[ "Classes of computers", "Computers", "Computer systems" ]
50,329
https://en.wikipedia.org/wiki/Communication%20complexity
In theoretical computer science, communication complexity studies the amount of communication required to solve a problem when the input to the problem is distributed among two or more parties. The study of communication complexity was first introduced by Andrew Yao in 1979, while studying the problem of computation distributed among several machines. The problem is usually stated as follows: two parties (traditionally called Alice and Bob) each receive a (potentially different) n-bit string, x and y respectively. The goal is for Alice to compute the value of a certain function, f(x, y), that depends on both x and y, with the least amount of communication between them. While Alice and Bob can always succeed by having Bob send his whole n-bit string to Alice (who then computes the function f), the idea here is to find clever ways of calculating f with fewer than n bits of communication. Note that, unlike in computational complexity theory, communication complexity is not concerned with the amount of computation performed by Alice or Bob, or the size of the memory used, as we generally assume nothing about the computational power of either Alice or Bob. This abstract problem with two parties (called two-party communication complexity), and its general form with more than two parties, is relevant in many contexts. In VLSI circuit design, for example, one seeks to minimize energy used by decreasing the amount of electric signals passed between the different components during a distributed computation. The problem is also relevant in the study of data structures and in the optimization of computer networks. For surveys of the field, see the textbooks by Kushilevitz and Nisan and by Rao and Yehudayoff.

Formal definition
Let f : X × Y → Z, where we assume in the typical case that X = Y = {0, 1}ⁿ and Z = {0, 1}. Alice holds an n-bit string x ∈ X while Bob holds an n-bit string y ∈ Y. By communicating to each other one bit at a time (adopting some communication protocol which is agreed upon in advance), Alice and Bob wish to compute the value of f(x, y) such that at least one party knows the value at the end of the communication. At this point the answer can be communicated back, so that at the cost of one extra bit, both parties will know the answer. The worst-case communication complexity of this communication problem of computing f, denoted as D(f), is then defined to be the minimum number of bits exchanged between Alice and Bob in the worst case. As observed above, for any function f we have D(f) ≤ n + 1. Using the above definition, it is useful to think of the function f as a matrix A (called the input matrix or communication matrix), where the rows are indexed by x ∈ X and the columns by y ∈ Y. The entries of the matrix are A(x, y) = f(x, y). Initially both Alice and Bob have a copy of the entire matrix A (assuming the function f is known to both parties). Then, the problem of computing the function value can be rephrased as "zeroing-in" on the corresponding matrix entry. This problem can be solved if either Alice or Bob knows both x and y. At the start of communication, the number of choices for the value of the function on the inputs is the size of the matrix, i.e. 2^{2n}. Then, as and when each party communicates a bit to the other, the number of choices for the answer reduces, as this eliminates a set of rows/columns, resulting in a submatrix of A. More formally, a set R ⊆ X × Y is called a (combinatorial) rectangle if whenever (x₁, y₁) ∈ R and (x₂, y₂) ∈ R, then (x₁, y₂) ∈ R. Equivalently, R is a combinatorial rectangle if it can be expressed as R = M × N for some M ⊆ X and N ⊆ Y. Consider the case when k bits are already exchanged between the parties. Now, for a particular h ∈ {0, 1}ᵏ, let us define the set T_h = {(x, y) : the k bits exchanged on input (x, y) are h}. Then T_h ⊆ X × Y, and it is not hard to show that T_h is a combinatorial rectangle in X × Y, as illustrated in the sketch below.
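To make the rectangle condition concrete, the following Python sketch checks the defining closure property on an explicit set of input pairs (the representation as a set of string pairs is an assumption made for illustration):

from itertools import product

def is_combinatorial_rectangle(R):
    # R is a rectangle iff for all (x1, y1), (x2, y2) in R,
    # the mixed pair (x1, y2) is also in R.
    R = set(R)
    return all((x1, y2) in R
               for (x1, _), (_, y2) in product(R, repeat=2))

# {00, 01} x {10, 11} is a rectangle:
print(is_combinatorial_rectangle(
    {("00", "10"), ("00", "11"), ("01", "10"), ("01", "11")}))  # True

# A diagonal set is not closed under mixing:
print(is_combinatorial_rectangle({("00", "00"), ("01", "01")}))  # False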
Example: EQ
We consider the case where Alice and Bob try to determine whether or not their input strings are equal. Formally, define the Equality function, denoted EQ : {0, 1}ⁿ × {0, 1}ⁿ → {0, 1}, by EQ(x, y) = 1 if x = y, and 0 otherwise. As we demonstrate below, any deterministic communication protocol solving EQ requires n bits of communication in the worst case. As a warm-up example, consider the simple case of n = 3. The equality function in this case can be represented by a 2³ × 2³ matrix whose rows represent all the possibilities of x and whose columns represent those of y. The function only evaluates to 1 when x equals y (i.e., on the diagonal). It is also fairly easy to see how communicating a single bit divides someone's possibilities in half: when the first bit of x is 1, only half of the rows need to be considered (those where x can equal 100, 101, 110, or 111).

Theorem: D(EQ) = n
Proof. Assume that D(EQ) ≤ n − 1. Then, since there are fewer than 2ⁿ possible transcripts but 2ⁿ diagonal inputs, there exist distinct strings x ≠ x′ such that (x, x) and (x′, x′) have the same communication transcript h. Since this transcript defines a rectangle, EQ(x, x′) must also be 1. But x ≠ x′ by assumption, and equality is only true when the two inputs coincide. This yields a contradiction.
This technique of proving deterministic communication lower bounds is called the fooling set technique.

Randomized communication complexity
In the above definition, we are concerned with the number of bits that must be deterministically transmitted between two parties. If both the parties are given access to a random number generator, can they determine the value of f with much less information exchanged? Yao, in his seminal paper, answers this question by defining randomized communication complexity. A randomized protocol R for a function f has two-sided error: on every input (x, y), it outputs the correct value f(x, y) with probability at least 2/3. A randomized protocol is a deterministic protocol that uses an extra random string in addition to its normal input. There are two models for this: a public string is a random string that is known by both parties beforehand, while a private string is generated by one party and must be communicated to the other party. A theorem presented below shows that any public string protocol can be simulated by a private string protocol that uses O(log n) additional bits compared to the original. Note that in the probability requirement above, the outcome of the protocol is understood to depend only on the random string; both strings x and y remain fixed. In other words, if R(x, y) yields g(x, y, r) when using random string r, then g(x, y, r) = f(x, y) for at least 2/3 of all choices for the string r. The randomized complexity is simply defined as the number of bits exchanged in such a protocol. Note that it is also possible to define a randomized protocol with one-sided error, and the complexity is defined similarly.

Example: EQ
Returning to the previous example of EQ, if certainty is not required, Alice and Bob can check for equality using only O(log n) messages. Consider the following protocol: Assume that Alice and Bob both have access to the same random string z ∈ {0, 1}ⁿ. Alice computes z · x and sends this bit (call it b) to Bob. (Here · is the dot product in GF(2).) Then Bob compares b to z · y. If they are the same, then Bob accepts, saying x equals y. Otherwise, he rejects. Clearly, if x = y, then z · x = z · y, so Bob accepts with probability 1. If x does not equal y, it is still possible that z · x = z · y, which would give Bob the wrong answer. How does this happen? If x and y are not equal, they must differ in some locations. Where x and y agree, the corresponding terms affect the dot products equally, so we can safely ignore those terms and look only at the locations where x and y differ. Furthermore, we can swap the bits xᵢ and yᵢ without changing whether or not the dot products are equal. This means we can swap bits so that, on the differing locations, x contains only zeros and y contains only ones. Note that then z · x = 0 and z · y = Σᵢ zᵢ (mod 2), where the sum runs over the differing locations. Now, the question becomes: for a random string z, what is the probability that this sum is 0 (mod 2)? Since each zᵢ is equally likely to be 0 or 1, this probability is just 1/2. Thus, when x does not equal y, Bob wrongly accepts with probability 1/2. The algorithm can be repeated many times to increase its accuracy, as in the simulation sketched below. This fits the requirements for a randomized communication algorithm. This shows that if Alice and Bob share a random string of length n, they can send one bit to each other to compute EQ(x, y). In the next section, it is shown that Alice and Bob can exchange only O(log n) bits that are as good as sharing a random string of length n. Once that is shown, it follows that EQ can be computed in O(log n) messages.
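A minimal simulation of the shared-string EQ protocol just described; the variable names and the number of repetitions are illustrative choices, not part of the article:

import random

def gf2_dot(a, b):
    # Dot product over GF(2): the parity of the bitwise AND.
    return sum(ai & bi for ai, bi in zip(a, b)) % 2

def randomized_eq(x, y, rounds=20):
    # Each round: draw a shared random string z; Alice sends the single
    # bit z.x, and Bob compares it with z.y. A mismatch proves x != y;
    # if x != y, each round catches this with probability 1/2.
    n = len(x)
    for _ in range(rounds):
        z = [random.randint(0, 1) for _ in range(n)]
        if gf2_dot(z, x) != gf2_dot(z, y):
            return False  # certainly unequal
    return True  # equal, or unequal with probability <= 2**-rounds

x = [1, 0, 1, 1]
print(randomized_eq(x, x))             # True
print(randomized_eq(x, [1, 0, 1, 0]))  # almost certainly False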
Example: GH
For yet another example of randomized communication complexity, we turn to an example known as the gap-Hamming problem (abbreviated GH). Formally, Alice and Bob both hold n-bit binary strings, x and y, and would like to determine whether the strings are very similar or very dissimilar. In particular, they would like to find a communication protocol requiring the transmission of as few bits as possible to compute the following partial Boolean function: GH(x, y) is defined when the Hamming distance between x and y is at least n/2 + √n or at most n/2 − √n, and indicates which of the two cases holds; on inputs in between, the function is undefined. Clearly, they must communicate all their bits if the protocol is to be deterministic: if a deterministic protocol relayed only a strict subset of the indices, one could take a pair of strings that, on that set, disagree in a borderline number of positions; a single additional disagreement in a position that is not relayed can then flip the value of GH, so the protocol would be incorrect on one of the inputs. A natural question one then asks is: if we are permitted to err 1/3 of the time (over random instances drawn uniformly at random from {0, 1}ⁿ × {0, 1}ⁿ), can we get away with a protocol with fewer bits? It turns out that the answer, somewhat surprisingly, is no, due to a result of Chakrabarti and Regev in 2012: they show that for random instances, any procedure which is correct at least 2/3 of the time must send Ω(n) bits worth of communication, which is to say essentially all of them.

Public coins versus private coins
Creating random protocols becomes easier when both parties have access to the same random string, known as a shared string protocol. However, even in cases where the two parties do not share a random string, it is still possible to use private string protocols with only a small communication cost. Any shared string random protocol using any number of random bits can be simulated by a private string protocol that uses an extra O(log n) bits. Intuitively, we can find some set of strings that has enough randomness in it to run the random protocol with only a small increase in error. This set can be shared beforehand, and instead of drawing a random string, Alice and Bob need only agree on which string to choose from the shared set. This set is small enough that the choice can be communicated efficiently. A formal proof follows. Consider some random protocol P with a maximum error rate of 0.1. Let R = (r₁, …, r₁₀₀ₙ) be a sequence of 100n random strings. Given such an R, define a new protocol P′_R which randomly picks some rᵢ and then runs P using rᵢ as the shared random string. It takes O(log 100n) = O(log n) bits to communicate the choice of rᵢ. Let us define p(x, y) and p′_R(x, y) to be the probabilities that P and P′_R compute the correct value for the input (x, y). For a fixed (x, y), we can use Hoeffding's inequality to get the following bound:

Pr_R[ |p′_R(x, y) − p(x, y)| ≥ 0.1 ] ≤ 2 exp(−2 (0.1)² · 100n) = 2e^{−2n}.

Thus when we don't have (x, y) fixed:

Pr_R[ ∃(x, y): |p′_R(x, y) − p(x, y)| ≥ 0.1 ] ≤ Σ_{(x, y)} Pr_R[ |p′_R(x, y) − p(x, y)| ≥ 0.1 ] ≤ 2^{2n} · 2e^{−2n} < 1 (for n ≥ 2).

The last step holds because there are 2^{2n} different pairs (x, y). Since this probability does not equal 1, there is some fixed choice R₀ = (r₁, …, r₁₀₀ₙ) so that for all (x, y): |p′_{R₀}(x, y) − p(x, y)| < 0.1. Since P has at most 0.1 error probability, P′_{R₀} can have at most 0.2 error probability.
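The simulation in the proof can also be sketched in code. Here the public-coin EQ protocol from above is run with a pre-agreed pool of 100n strings in place of fresh shared randomness; only the index into the pool, O(log n) bits, would need to be sent. The pool size follows the proof; the trial count and workload are illustrative:

import random

def gf2_dot(a, b):
    return sum(ai & bi for ai, bi in zip(a, b)) % 2

n = 16
# Agreed on in advance: a pool of 100n candidate random strings.
pool = [[random.randint(0, 1) for _ in range(n)] for _ in range(100 * n)]

def private_coin_eq(x, y):
    # Alice picks an index privately and sends it (O(log(100n)) bits)
    # instead of sharing a fresh n-bit public random string.
    z = pool[random.randrange(len(pool))]
    return gf2_dot(z, x) == gf2_dot(z, y)

# On a fixed unequal pair, the acceptance (error) rate should stay
# within the +-0.1 slack of the public-coin protocol's 1/2.
x, y = [1] * n, [0] * n
trials = 10_000
errors = sum(private_coin_eq(x, y) for _ in range(trials))
print(errors / trials)  # roughly 0.5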
Collapse of Randomized Communication Complexity
Suppose additionally that Alice and Bob are allowed to share some resource, for example a pair of entangled particles. Using that resource, Alice and Bob can correlate their information and thus try to "collapse" (or "trivialize") communication complexity in the following sense.
Definition. A resource R is said to be "collapsing" if, using that resource R, only one bit of classical communication is enough for Alice to know the evaluation f(x, y) in the worst-case scenario for any Boolean function f.
The surprising aspect of a collapse of communication complexity is that the function f can have arbitrarily large input size, but the number of communicated bits remains constant at a single one. Some resources are shown to be non-collapsing, such as quantum correlations or, more generally, almost-quantum correlations, whereas on the contrary some other resources are shown to collapse randomized communication complexity, such as the PR-box, or some noisy PR-boxes satisfying certain conditions.

Distributional Complexity
One approach to studying randomized communication complexity is through distributional complexity. Given a joint distribution μ on the inputs of both players, the corresponding distributional complexity of a function f is the minimum cost of a deterministic protocol P such that Pr_{(x, y) ∼ μ}[P(x, y) = f(x, y)] ≥ 2/3, where the inputs are sampled according to μ. Yao's minimax principle (a special case of von Neumann's minimax theorem) states that the randomized communication complexity of a function equals its maximum distributional complexity, where the maximum is taken over all joint distributions of the inputs (not necessarily product distributions!). Yao's principle can be used to prove lower bounds on the randomized communication complexity of a function: design the appropriate joint distribution, and prove a lower bound on the distributional complexity. Since distributional complexity concerns deterministic protocols, this could be easier than proving a lower bound on randomized protocols directly. As an example, let us consider the disjointness function DISJ: each of the inputs is interpreted as a subset of {1, …, n}, and DISJ(x, y) = 1 if the two sets are disjoint. Razborov proved an Ω(n) lower bound on the randomized communication complexity by considering the following distribution: with probability 3/4, sample two random disjoint sets of size n/4, and with probability 1/4, sample two random sets of size n/4 with a unique intersection.

Information Complexity
A powerful approach to the study of distributional complexity is information complexity. Initiated by Bar-Yossef, Jayram, Kumar and Sivakumar, the approach was codified in work of Barak, Braverman, Chen and Rao and by Braverman and Rao. The (internal) information complexity of a (possibly randomized) protocol π with respect to a distribution μ is defined as follows. Let (X, Y) be random inputs sampled according to μ, and let Π be the transcript of π when run on the inputs (X, Y). The information complexity of the protocol is

IC_μ(π) = I(Π; Y | X) + I(Π; X | Y),

where I denotes conditional mutual information. The first summand measures the amount of information that Alice learns about Bob's input from the transcript, and the second measures the amount of information that Bob learns about Alice's input.
The ε-error information complexity of a function f with respect to a distribution μ is the infimal information complexity of a protocol for f whose error (with respect to μ) is at most ε. Braverman and Rao proved that information equals amortized communication. This means that the cost of solving n independent copies of f is roughly n times the information complexity of f. This is analogous to the well-known interpretation of Shannon entropy as the amortized bit-length required to transmit data from a given information source. Braverman and Rao's proof uses a technique known as "protocol compression", in which an information-efficient protocol is "compressed" into a communication-efficient protocol. The techniques of information complexity enable the computation of the exact (up to first order) communication complexity of set disjointness, which is roughly 0.4827n. Information complexity techniques have also been used to analyze extended formulations, proving an essentially optimal lower bound on the complexity of algorithms based on linear programming which approximately solve the maximum clique problem. Omri Weinstein's 2015 survey covers the subject.

Quantum communication complexity
Quantum communication complexity tries to quantify the communication reduction possible by using quantum effects during a distributed computation. At least three quantum generalizations of communication complexity have been proposed; for a survey see the suggested text by G. Brassard. The first one is the qubit-communication model, where the parties can use quantum communication instead of classical communication, for example by exchanging photons through an optical fiber. In a second model the communication is still performed with classical bits, but the parties are allowed to manipulate an unlimited supply of quantum entangled states as part of their protocols. By doing measurements on their entangled states, the parties can save on classical communication during a distributed computation. The third model involves access to previously shared entanglement in addition to qubit communication, and is the least explored of the three quantum models.

Nondeterministic communication complexity
In nondeterministic communication complexity, Alice and Bob have access to an oracle. After receiving the oracle's word, the parties communicate to deduce f(x, y). The nondeterministic communication complexity is then the maximum over all pairs (x, y) of the sum of the number of bits exchanged and the coding length of the oracle word. Viewed differently, this amounts to covering all 1-entries of the 0/1-matrix by combinatorial 1-rectangles (i.e., non-contiguous, non-convex submatrices whose entries are all one; see Kushilevitz and Nisan or Dietzfelbinger et al.). The nondeterministic communication complexity is the binary logarithm of the rectangle covering number of the matrix: the minimum number of combinatorial 1-rectangles required to cover all 1-entries of the matrix, without covering any 0-entries. Nondeterministic communication complexity occurs as a means to obtaining lower bounds for deterministic communication complexity (see Dietzfelbinger et al.), but also in the theory of nonnegative matrices, where it gives a lower bound on the nonnegative rank of a nonnegative matrix.

Unbounded-error communication complexity
In the unbounded-error setting, Alice and Bob have access to a private coin and their own inputs (x, y). In this setting, Alice succeeds if she responds with the correct value of f(x, y) with probability strictly greater than 1/2.
In other words, if Alice's responses have any non-zero correlation to the true value of f(x, y), then the protocol is considered valid. Note that the requirement that the coin is private is essential. In particular, if the number of public bits shared between Alice and Bob is not counted against the communication complexity, it is easy to argue that computing any function has constant communication complexity. On the other hand, both models are equivalent if the number of public bits used by Alice and Bob is counted against the protocol's total communication. Though subtle, lower bounds on this model are extremely strong. More specifically, it is clear that any bound on problems of this class immediately implies equivalent bounds on problems in the deterministic model and the private and public coin models, but such bounds also hold immediately for nondeterministic communication models and quantum communication models. Forster was the first to prove explicit lower bounds for this class, showing that computing the inner product function requires at least Ω(n) bits of communication, though an earlier result of Alon, Frankl, and Rödl proved that the communication complexity for almost all Boolean functions is Ω(n).

Lifting
Lifting is a general technique in complexity theory in which a lower bound on a simple measure of complexity is "lifted" to a lower bound on a more difficult measure. This technique was pioneered in the context of communication complexity by Raz and McKenzie, who proved the first query-to-communication lifting theorem, and used the result to separate the monotone NC hierarchy. Given a function f on n bits and a gadget g, their composition f ∘ gⁿ is defined as follows:

(f ∘ gⁿ)(x, y) = f(g(x₁, y₁), …, g(xₙ, yₙ))

In words, x is partitioned into n blocks x₁, …, xₙ of length b, and y is partitioned into n blocks y₁, …, yₙ of length c. The gadget g is applied n times on the blocks, and the outputs are fed into f. (Diagram omitted: each of the inputs xᵢ is b bits long, and each of the inputs yᵢ is c bits long.) A decision tree of depth d for f can be translated into a communication protocol whose cost is d times the communication cost of g: each time the tree queries a bit, the corresponding value of g is computed using an optimal protocol for g. Raz and McKenzie showed that this is optimal up to a constant factor when g is the so-called "indexing gadget", in which xᵢ has length C log n (for a large enough constant C), yᵢ has length n^C, and g(xᵢ, yᵢ) is the xᵢ-th bit of yᵢ. The proof of the Raz–McKenzie lifting theorem uses the method of simulation, in which a protocol for the composed function is used to generate a decision tree for f. Göös, Pitassi and Watson gave an exposition of the original proof. Since then, several works have proved similar theorems with different gadgets, such as inner product. The smallest gadget which can be handled is an indexing gadget of reduced size. Göös, Pitassi and Watson extended the Raz–McKenzie technique to randomized protocols. A simple modification of the Raz–McKenzie lifting theorem gives a lower bound, in terms of the depth d of the optimal decision tree for f, on the logarithm of the size of a protocol tree for computing f ∘ gⁿ. Garg, Göös, Kamath and Sokolov extended this to the DAG-like setting, and used their result to obtain monotone circuit lower bounds. The same technique has also yielded applications to proof complexity. A different type of lifting is exemplified by Sherstov's pattern matrix method, which gives a lower bound on the quantum communication complexity of f ∘ gⁿ, where g is a modified indexing gadget, in terms of the approximate degree of f.
The approximate degree of a Boolean function is the minimal degree of a polynomial which approximates the function on all Boolean points up to an additive error of 1/3. In contrast to the Raz–McKenzie proof, which uses the method of simulation, Sherstov's proof takes a dual witness to the approximate degree of f and gives a lower bound on the quantum communication complexity using the generalized discrepancy method. The dual witness for the approximate degree of f is a lower-bound witness obtained via LP duality. This dual witness is massaged into other objects constituting data for the generalized discrepancy method. Another example of this approach is the work of Pitassi and Robere, in which an algebraic gap is lifted to a lower bound on Razborov's rank measure. The result is a strongly exponential lower bound on the monotone circuit complexity of an explicit function, obtained via the Karchmer–Wigderson characterization of monotone circuit size in terms of communication complexity.

Open problems
Considering a 0 or 1 input matrix M_f (rows indexed by x, columns by y, entries f(x, y)), the minimum number of bits exchanged to compute f deterministically in the worst case, D(f), is known to be bounded from below by the logarithm of the rank of the matrix M_f. The log rank conjecture proposes that the communication complexity, D(f), is bounded from above by a constant power of the logarithm of the rank of M_f. Since D(f) would then be bounded from above and below by polynomials of log rank(M_f), we could say D(f) is polynomially related to log rank(M_f). Since the rank of a matrix is polynomial-time computable in the size of the matrix, such an upper bound would allow the matrix's communication complexity to be approximated in polynomial time. Note, however, that the size of the matrix itself is exponential in the size of the input. For a randomized protocol, the number of bits exchanged in the worst case, R(f), was conjectured to be polynomially related to the logarithm of the approximate rank of M_f. Such log rank conjectures are valuable because they reduce the question of a matrix's communication complexity to a question of linearly independent rows (columns) of the matrix. This particular version, called the Log-Approximate-Rank Conjecture, was recently refuted by Chattopadhyay, Mande and Sherif (2019) using a surprisingly simple counter-example. This reveals that the essence of the communication complexity problem, for example in the EQ case above, is figuring out where in the matrix the inputs are, in order to find out if they're equivalent.

Applications
Lower bounds in communication complexity can be used to prove lower bounds in decision tree complexity, VLSI circuits, data structures, streaming algorithms, space–time tradeoffs for Turing machines and more. Conitzer and Sandholm studied the communication complexity of some common voting rules, which are essential in political and non-political organizations. Compilation complexity is a closely related notion, which can be seen as a single-round communication complexity.

See also
Gap-Hamming problem

Notes

References
Brassard, G. Quantum communication complexity: a survey. https://arxiv.org/abs/quant-ph/0101005
Dietzfelbinger, M., Hromkovic, J., and Schnitger, G., "A comparison of two lower-bound methods for communication complexity", Theoret. Comput. Sci. 168, 1996, pp. 39–51.
Raz, Ran. "Circuit and Communication Complexity." In Computational Complexity Theory. Steven Rudich and Avi Wigderson, eds. American Mathematical Society Institute for Advanced Study, 2004, pp. 129–137.
Yao, "Some Complexity Questions Related to Distributed Computing", Proc. of 11th STOC, pp. 209–213, 1979. 14 I. Newman, Private vs. Common Random Bits in Communication Complexity, Information Processing Letters 39, 1991, pp. 67–71. Information theory Computational complexity theory Quantum complexity theory
Communication complexity
[ "Mathematics", "Technology", "Engineering" ]
5,079
[ "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory" ]
50,335
https://en.wikipedia.org/wiki/John%20Polkinghorne
John Charlton Polkinghorne (16 October 1930 – 9 March 2021) was an English theoretical physicist, theologian, and Anglican priest. A leading voice in explaining the relationship between science and religion, he was professor of mathematical physics at the University of Cambridge from 1968 to 1979, when he resigned his chair to study for the priesthood, becoming an ordained Anglican priest in 1982. He served as the president of Queens' College, Cambridge, from 1988 until 1996. Polkinghorne was the author of five books on physics and twenty-six on the relationship between science and religion; his publications include The Quantum World (1989), Quantum Physics and Theology: An Unexpected Kinship (2005), Exploring Reality: The Intertwining of Science and Religion (2007), and Questions of Truth (2009). The Polkinghorne Reader (edited by Thomas Jay Oord) provides key excerpts from Polkinghorne's most influential books. He was knighted in 1997 and in 2002 received the £1-million Templeton Prize, awarded for exceptional contributions to affirming life's spiritual dimension.

Early life and education
Polkinghorne was born in Weston-super-Mare in Somerset on 16 October 1930 to Dorothy Charlton, the daughter of a groom, and George Polkinghorne, who worked for the post office. John was the couple's third child. He had a brother, Peter, and a sister, Ann, who died when she was six, one month before John's birth. Peter died in 1942 while flying for the Royal Air Force during the Second World War. He was educated at the local primary school in Street, Somerset, then was taught by a friend of the family at home, and later at a Quaker school. When he was 11 he went to Elmhurst Grammar School in Street, and when his father was promoted to head postmaster in Ely in 1945, Polkinghorne was transferred to The Perse School, Cambridge. Following National Service in the Royal Army Educational Corps from 1948 to 1949, he read mathematics at Trinity College, Cambridge, graduating in 1952 as Senior Wrangler, then earned his PhD in physics in 1955, supervised by the Nobel laureate Abdus Salam in the group led by Paul Dirac.

Career

Physics
Polkinghorne joined the Christian Union of UCCF while at Cambridge and met his future wife, Ruth Martin, another member of the union and also a mathematics student. They married on 26 March 1955, and at the end of that year sailed from Liverpool to New York. Polkinghorne accepted a postdoctoral Harkness Fellowship with the California Institute of Technology, where he worked with Murray Gell-Mann. Toward the end of the fellowship he was offered a position as lecturer at the University of Edinburgh, which he took up in 1956. After two years in Scotland, he returned to teach at Cambridge in 1958. He was promoted to reader in 1965, and in 1968 was offered a professorship in mathematical physics, a position he held until 1979; his students included Brian Josephson and Martin Rees. For 25 years, he worked on theories about elementary particles, played a role in the discovery of the quark, and researched the analytic and high-energy properties of Feynman integrals and the foundations of S-matrix theory. While employed by Cambridge, he also spent time at Princeton, Berkeley, Stanford, and at CERN in Geneva. He was elected a Fellow of the Royal Society in 1974.

Priesthood and Queens' College
Polkinghorne decided to train for the priesthood in 1977.
He said in an interview that he felt he had done his bit for science after 25 years, and that his best mathematical work was probably behind him; Christianity had always been central to his life, so ordination offered an attractive second career. He resigned his chair in 1979 to study at Westcott House, Cambridge, an Anglican theological college, becoming an ordained priest on 6 June 1982 (Trinity Sunday). The ceremony was held at Trinity College, Cambridge, and presided over by Bishop John A. T. Robinson. He worked for five years as a curate in south Bristol, then as vicar in Blean, Kent, before returning to Cambridge in 1986 as dean of chapel at Trinity Hall. He became the president of Queens' College that year, a position he held until his retirement in 1996. He served as canon theologian of Liverpool Cathedral from 1994 to 2005. Polkinghorne died on 9 March 2021 at the age of 90.

Awards
In 1997 Polkinghorne was made a Knight Commander of the Order of the British Empire (KBE), although as an ordained priest in the Church of England, he was not styled as "Sir John Polkinghorne". He was an honorary fellow of St Chad's College, Durham, and was awarded an honorary doctorate by the University of Durham in 1998; in 2002 he was awarded the Templeton Prize for his contributions to research at the interface between science and religion. He spoke on "The Universe as Creation" at the Trotter Prize ceremony in 2003. He was a member of the BMA Medical Ethics Committee, the General Synod of the Church of England, the Doctrine Commission, and the Human Genetics Commission. He served as chairman of the governors of The Perse School from 1972 to 1981. He was a fellow of Queens' College, Cambridge, and was for 10 years a canon theologian of Liverpool Cathedral. He was a founding member of the Society of Ordained Scientists and also of the International Society for Science and Religion, of which he was the first president. He was selected to give the prestigious Gifford Lectures in 1993–1994, which he later published as The Faith of a Physicist. In 2006 he was awarded an honorary doctorate by the Hong Kong Baptist University as part of their 50-year celebrations. This included giving a public lecture on "The Dialogue between Science and Religion and Its Significance for the Academy" and an "East–West Dialogue" with Yang Chen-Ning, a Nobel laureate in physics. He was a member of staff of the Psychology and Religion Research Group at Cambridge University. He was an honorary fellow of St Edmund's College, Cambridge.

Ideas
Polkinghorne said in an interview that he believes his move from science to religion has given him binocular vision, though he understands that it has aroused the kind of suspicion "that might follow the claim to be a vegetarian butcher." He describes his position as critical realism and believes that science and religion address aspects of the same reality. It is a consistent theme of his work that when he "turned his collar around" he did not stop seeking truth. He argues there are five points of comparison between the ways in which science and theology pursue truth: moments of enforced radical revision, a period of unresolved confusion, new synthesis and understanding, continued wrestling with unresolved problems, and deeper implications. He suggests that the mechanistic explanations of the world that have continued from Laplace to Richard Dawkins should be replaced by an understanding that most of nature is cloud-like rather than clock-like.
He regards the mind, soul and body as different aspects of the same underlying reality — "dual aspect monism" — writing that "there is only one stuff in the world (not two — the material and the mental), but it can occur in two contrasting states (material and mental phases, a physicist might say) which explain our perception of the difference between mind and matter." He believes that standard physical causation cannot adequately describe the manifold ways in which things and people interact, and uses the phrase "active information" to describe how, when several outcomes are possible, there may be higher levels of causation that choose which one occurs. Sometimes Christianity seems to him to be just too good to be true, but when this sort of doubt arises he says to himself, "All right then, deny it", and writes that he knows this is something he could never do.

On the existence of God
Polkinghorne considers that "the question of the existence of God is the single most important question we face about the nature of reality" and quotes, with approval, Sir Anthony Kenny: "After all, if there is no God, then God is incalculably the greatest single creation of the human imagination." He addresses the questions: "Does the concept of God make sense? If so, do we have reason for believing in such a thing?" He is "cautious about our powers to assess coherence", pointing out that in 1900 a "competent… undergraduate could have demonstrated the 'incoherence'" of quantum ideas. He suggests that "the nearest analogy in the physical world [to God] would be… the Quantum Vacuum." He suggests that God is the ultimate answer to Leibniz's great question "why is there something rather than nothing?" The atheist's "plain assertion of the world's existence" is a "grossly impoverished view of reality", he argues, and "theism explains more than a reductionist atheism can ever address." He is very doubtful of St Anselm's Ontological Argument. Referring to Gödel's incompleteness theorem, he said: "If we cannot prove the consistency of arithmetic it seems a bit much to hope that God's existence is easier to deal with," concluding that God is "ontologically necessary, but not logically necessary." He "does not assert that God's existence can be demonstrated in a logically coercive way (any more than God's non-existence can) but that theism makes more sense of the world, and of human experience, than does atheism." He cites in particular:
The intelligibility of the universe: One would anticipate that evolutionary selection would produce hominid minds apt for coping with everyday experience, but that these minds should also be able to understand the subatomic world and general relativity goes far beyond anything of relevance to survival fitness. The mystery deepens when one recognises the proven fruitfulness of mathematical beauty as a guide to successful theory choice.
The anthropic fine tuning of the universe: He quotes with approval Freeman Dyson, who said "the more I examine the universe and the details of its architecture, the more evidence I find that the universe in some sense must have known we were coming", and suggests there is a wide consensus amongst physicists that either there are a very large number of other universes in the Multiverse or that "there is just one universe which is the way it is in its anthropic fruitfulness because it is the expression of the purposive design of a Creator, who has endowed it with the finely tuned potentiality for life."
A wider humane reality: He considers that theism offers a more persuasive account of ethical and aesthetic perceptions. He argues that it is difficult, within an atheistic or naturalistic world view, to accommodate the idea that "we have real moral knowledge" and that statements such as 'torturing children is wrong' are more than "simply social conventions of the societies within which they are uttered". He also believes such a world view finds it hard to explain how "Something of lasting significance is glimpsed in the beauty of the natural world and the beauty of the fruits of human creativity." On free will Polkinghorne defends the reality of human free will, connecting it to the openness of physical processes, and invokes the related "free-will defence" and "free-process defence" in response to the problem of evil. On creationism Following the resignation of Michael Reiss, the director of education at the Royal Society, who had controversially argued that science teachers should treat pupils' creationist beliefs as a starting point for discussion rather than rejecting them outright, Polkinghorne argued in The Times that "As a Christian believer I am, of course, a creationist in the proper sense of the term, for I believe that the mind and the purpose of a divine Creator lie behind the fruitful history and remarkable order of the universe which science explores. But I am certainly not a creationist in that curious North American sense, which implies interpreting Genesis 1 in a flat-footed literal way and supposing that evolution is wrong." Critical reception Nancy Frankenberry, Professor of Religion at Dartmouth College, has described Polkinghorne as the finest British theologian/scientist of our time, citing his work on the possible relationship between chaos theory and natural theology. Owen Gingerich, an astronomer and former Harvard professor, has called him a leading voice on the relationship between science and religion. The British philosopher Simon Blackburn has criticized Polkinghorne for using primitive thinking and rhetorical devices instead of engaging in philosophy. When Polkinghorne argues that the minute adjustment of cosmological constants for life points towards an explanation beyond the scientific realm, Blackburn counters that this relies on a natural preference for explanation in terms of agency. Blackburn writes that he finished Polkinghorne's books in "despair at humanity's capacity for self-deception." Against this, Freeman Dyson called Polkinghorne's arguments on theology and natural science "polished and logically coherent." The novelist Simon Ings, writing in the New Scientist, said Polkinghorne's argument for the proposition that God is real is cogent and his evidence elegant. Richard Dawkins, formerly Professor for Public Understanding of Science at Oxford, writes that the same three names of British scientists who are also sincerely religious crop up with the "likable familiarity of senior partners in a firm of Dickensian lawyers": Arthur Peacocke, Russell Stannard, and John Polkinghorne, all of whom have either won the Templeton Prize or are on its board of trustees. Dawkins writes that he is not so much bewildered by their belief in a cosmic lawgiver as by their belief in the minutiae of Christianity, such as the resurrection and the forgiveness of sins, and that such scientists, in Britain and in the US, are the subject of bemused bafflement among their peers. Polkinghorne responded that "debating with Dawkins is hopeless, because there's no give and take. He doesn't give you an inch. He just says no when you say yes."
Nicholas Beale writes in Questions of Truth, which he co-authored with Polkinghorne, that he hopes Dawkins will be a bit less baffled once he reads it. A. C. Grayling criticized the Royal Society for allowing its premises to be used in connection with the launch of Questions of Truth, describing it as a scandal and suggesting that Polkinghorne had exploited his fellowship there to publicize a "weak, casuistical and tendentious pamphlet." After implying that the book's publisher, Westminster John Knox, was effectively a self-publisher, Grayling went on to write that Polkinghorne and others were eager to see the credibility accorded to scientific research extended to religious perspectives through association. In contrast to Grayling, science historian Edward B. Davis praises Questions of Truth, saying the book provides "the kind of technical information… that scientifically trained readers will appreciate—yet they can be read profitably by anyone interested in science and Christianity." Davis concludes, "It hasn't been easy to steer a middle course between fundamentalism and modernism, particularly on issues involving science. Polkinghorne has done that very successfully for a generation, and for this he ought to be both appreciated and emulated." Published works Polkinghorne wrote 34 books, translated into 18 languages; 26 concern science and religion, often for a popular audience. Science and religion
The Polkinghorne Reader: Science, Faith, and the Search for Meaning, edited by Thomas Jay Oord (SPCK/Templeton Foundation Press, 2010)
The Way the World Is: The Christian Perspective of a Scientist (1984; revised 1992)
One World (SPCK/Princeton University Press, 1987; Templeton Foundation Press, 2007)
Science and Creation (SPCK/New Science Library, 1989; Templeton Foundation Press, 2006)
Science and Providence (SPCK/New Science Library, 1989; Templeton Foundation Press, 2006)
Reason and Reality: The Relationship Between Science and Theology (SPCK/Trinity Press International, 1991)
Quarks, Chaos and Christianity (1994; second edition SPCK/Crossroad, 2005)
The Faith of a Physicist (1994) – published in the UK as Science and Christian Belief
Serious Talk: Science and Religion in Dialogue (Trinity Press International/SCM Press, 1996)
Scientists as Theologians (1996)
Beyond Science: The Wider Human Context (CUP, 1996)
Searching for Truth (Bible Reading Fellowship/Crossroad, 1996)
Belief in God in an Age of Science (Yale University Press, 1998)
Science and Theology (SPCK/Fortress, 1998)
The End of the World and the Ends of God (Trinity Press International, 2000), with Michael Welker
Traffic in Truth: Exchanges Between Sciences and Theology (Canterbury Press/Fortress, 2000)
Faith, Science and Understanding (SPCK/Yale University Press, 2000)
The Work of Love: Creation as Kenosis, editor, with contributors including Ian Barbour, Sarah Coakley, George Ellis, Jürgen Moltmann and Keith Ward (SPCK/Eerdmans, 2001)
The God of Hope and the End of the World (Yale University Press, 2002)
The Archbishop's School of Christianity and Science (York Courses, 2003)
Science and Christian Faith (conversation on CD with Canon John Young, York Courses)
Living with Hope (SPCK/Westminster John Knox Press, 2003)
Science and the Trinity: The Christian Encounter With Reality (2004) (a particularly accessible summary of his thought)
Exploring Reality: The Intertwining of Science and Religion (SPCK, 2005)
Quantum Physics and Theology: An Unexpected Kinship (SPCK, 2007)
From Physicist to Priest: An Autobiography (SPCK, 2007)
Theology in the Context of Science (SPCK, 2008)
Questions of Truth: Fifty-One Responses to Questions about God, Science and Belief, with Nicholas Beale; foreword by Antony Hewish (Westminster John Knox, 2009)
Reason and Reality: The Relationship Between Science and Theology (SPCK, 2011)
Science and Religion in Quest of Truth (SPCK, 2011)
Hawking, Dawkins and GOD (conversation on CD with Canon John Young, York Courses, 2012)
What Can We Hope For? (Sam&Sam, 2019), with Patrick Miles
Science
The Analytic S-Matrix (CUP, 1966), jointly with R. J. Eden, P. V. Landshoff and D. I. Olive
The Particle Play (W. H. Freeman, 1979)
Models of High Energy Processes (CUP, 1980)
The Quantum World (Longman/Princeton University Press, 1985; Penguin, 1986; Templeton Foundation Press, 2007)
Rochester Roundabout: The Story of High Energy Physics (Longman, 1989)
Quantum Theory: A Very Short Introduction (OUP, 2002)
Meaning in Mathematics (OUP, 2011), edited, with contributions from Timothy Gowers, Roger Penrose, Marcus du Sautoy and others
Chapters
"The Trinity and Scientific Theology" in The Blackwell Companion to Science and Christianity, J. B. Stump and Alan G. Padgett (eds.) (Wiley-Blackwell, 2012)
On Space and Time (CUP, 2008), with Andrew Taylor, Shahn Majid, Roger Penrose, Alain Connes and Michael Heller
Spiritual Information: 100 Perspectives on Science and Religion (Templeton Foundation Press, 2005), ed. Charles Harper
Creation, Law and Probability (Fortress Press, 2008), ed. Fraser Watts, with Peter Harrison, George Ellis, Philip Clayton, Michael Ruse, Nancey Murphy, John Bowker and others
"Physical Processes, Quantum Events, and Divine Agency" in Quantum Mechanics: Scientific Perspectives on Divine Action, R. J. Russell, P. Clayton, K. Wegter-McNelly and J. Polkinghorne (eds.) (Vatican Observatory, 2001)
See also
Double-aspect theory
List of Christians in science and technology
List of scholars on the relationship between religion and science
References
Footnotes
Bibliography
Further reading
Google Scholar – list of papers by John Polkinghorne
John Polkinghorne on the "consequences of quantum theory" (for theology), accessed 9 July 2012.
Video interview with Polkinghorne, accessed 25 March 2010.
Interview by Alan Macfarlane, 10 November 2008 (video)
Polkinghorne, John. "Reductionism", Interdisciplinary Encyclopedia of Religion and Science, accessed 25 March 2010.
Sample, Ian (2009). "From physicist to priest: A quantum leap of faith", The Guardian, 9 April 2009; interview with Polkinghorne.
Smedes, Taede A. Chaos, Complexity, and God: Divine Action and Scientism. Louvain: Peeters, 2004. A theological investigation of Polkinghorne's (and Arthur Peacocke's) model of divine action.
Runehov, Anne L. C. "Chaos, Complexity, and God: Divine Action and Scientism by Taede A. Smedes", Ars Disputandi, Volume 6, 2006.
Southgate, Christopher, ed. (1999). God, Humanity and the Cosmos: A Textbook in Science and Religion, T&T Clark. Relevant extracts.
Steinke, Johannes Maria (2006). John Polkinghorne – Konsonanz von Naturwissenschaft und Theologie, Vandenhoeck & Ruprecht.
Investigates Polkinghorne's theory of consonance and analyses its philosophical background.
Wright, Robert. Video interview, Slate, accessed 25 March 2010.
External links
Website about Polkinghorne
1930 births
2021 deaths
20th-century English Anglican priests
Academics of the University of Edinburgh
Alumni of Trinity College, Cambridge
Alumni of Westcott House, Cambridge
British physicists
Christian scholars
Deans of Trinity Hall, Cambridge
Evangelical Anglican clergy
Evangelical Anglican theologians
Fellows of the Royal Society
Knights Commander of the Order of the British Empire
Members of the International Society for Science and Religion
Particle physicists
People associated with CERN
People educated at The Perse School
People from Weston-super-Mare
Presidents of Queens' College, Cambridge
Templeton Prize laureates
Writers about religion and science
John Polkinghorne
[ "Physics" ]
4,512
[ "Particle physicists", "Particle physics" ]
50,345
https://en.wikipedia.org/wiki/Urban%20design
Urban design is an approach to the design of buildings and the spaces between them that focuses on specific design processes and outcomes. In addition to designing and shaping the physical features of towns, cities, and regional spaces, urban design considers 'bigger picture' issues of economic, social and environmental value and social design. The scope of a project can range from a local street or public space to an entire city and surrounding areas. Urban designers connect the fields of architecture, landscape architecture and urban planning to better organize physical space and community environments. Important focuses of urban design include its historical impact, paradigm shifts, its interdisciplinary nature, and contemporary issues in the field. Theory Urban design deals with the larger scale of groups of buildings, infrastructure, streets, and public spaces, entire neighbourhoods and districts, and entire cities, with the goal of making urban environments that are equitable, beautiful, performative, and sustainable. Urban design is an interdisciplinary field that utilizes the procedures and the elements of architecture and other related professions, including landscape design, urban planning, civil engineering, and municipal engineering. It borrows substantive and procedural knowledge from public administration, sociology, law, urban geography, urban economics and other related disciplines from the social and behavioral sciences, as well as from the natural sciences. In more recent times different subfields of urban design have emerged, such as strategic urban design, landscape urbanism, water-sensitive urban design, and sustainable urbanism. Urban design demands an understanding of a wide range of subjects from physical geography to social science, and an appreciation for disciplines such as real estate development, urban economics, political economy, and social theory. Urban design theory deals primarily with the design and management of public space (i.e. the 'public environment', 'public realm' or 'public domain') and the way public places are used and experienced. Public space includes the totality of spaces used freely on a day-to-day basis by the general public, such as streets, plazas, parks, and public infrastructure. Some aspects of privately owned spaces, such as building facades or domestic gardens, also contribute to public space and are therefore also considered by urban design theory. Important writers on urban design theory include Christopher Alexander, Peter Calthorpe, Gordon Cullen, Andrés Duany, Jane Jacobs, Jan Gehl, Allan B. Jacobs, Kevin Lynch, Aldo Rossi, Colin Rowe, Robert Venturi, William H. Whyte, Camillo Sitte, Bill Hillier (space syntax), and Elizabeth Plater-Zyberk. History Although contemporary professional use of the term 'urban design' dates from the mid-20th century, urban design as such has been practiced throughout history. Ancient examples of carefully planned and designed cities exist in Asia, Africa, Europe, and the Americas, and are particularly well known within Classical Chinese, Roman, and Greek cultures. Specifically, Hippodamus of Miletus was a famous ancient Greek architect, urban planner, and all-round scholar who is often considered a "father of European urban planning" and the namesake of the "Hippodamian plan", also known as the grid plan of city layout. European medieval cities are often, though frequently erroneously, regarded as exemplars of undesigned or 'organic' city development.
There are many examples of considered urban design in the Middle Ages. In England, many of the towns listed in the 9th-century Burghal Hidage were designed on a grid, examples including Southampton, Wareham, Dorset and Wallingford, Oxfordshire, having been rapidly created to provide a defensive network against Danish invaders. In 12th-century western Europe there was a renewed focus on urbanisation as a means of stimulating economic growth and generating revenue. The burgage system dating from that time, and its associated burgage plots, brought a form of self-organising design to medieval towns. Throughout history, the design of streets and the deliberate configuration of public spaces with buildings have reflected contemporaneous social norms or philosophical and religious beliefs. Yet the link between designed urban space and the human mind appears to be bidirectional. Indeed, the reverse impact of urban structure upon human behaviour and upon thought is evidenced by both observational study and historical records. There are clear indications that Renaissance urban design influenced the thought of Johannes Kepler and Galileo Galilei. René Descartes, in his Discourse on the Method, had already attested to the impact Renaissance planned new towns had upon his own thought, and much evidence exists that the Renaissance streetscape was also the perceptual stimulus leading to the development of coordinate geometry. Early modern era The beginnings of modern urban design in Europe are associated with the Renaissance but, especially, with the Age of Enlightenment. Spanish colonial cities were often planned, as were some towns settled by other imperial cultures. These sometimes embodied utopian ambitions as well as aims for functionality and good governance, as with James Oglethorpe's plan for Savannah, Georgia. In the Baroque period the design approaches developed in French formal gardens such as Versailles were extended into urban development and redevelopment. In this period, when modern professional specializations did not exist, urban design was undertaken by people with skills in areas as diverse as sculpture, architecture, garden design, surveying, astronomy, and military engineering. In the 18th and 19th centuries, urban design was perhaps most closely linked with surveyors, engineers, and architects. The increase in urban populations brought with it problems of epidemic disease, the response to which was a focus on public health, the rise in the UK of municipal engineering, and the inclusion in British legislation of provisions such as minimum street widths in relation to the heights of buildings in order to ensure adequate light and ventilation. Much of Frederick Law Olmsted's work was concerned with urban design, and the newly formed profession of landscape architecture also began to play a significant role in the late 19th century. Modern urban design In the 19th century, cities were industrializing and expanding at a tremendous rate. Private businesses largely dictated the pace and style of this development. The expansion created many hardships for the working poor, and concern for public health increased. However, the laissez-faire style of government, in fashion for most of the Victorian era, was starting to give way to a New Liberalism. This gave more power to the public. The public wanted the government to provide citizens, especially factory workers, with healthier environments.
Around 1900, modern urban design emerged from developing theories on how to mitigate the consequences of the industrial age. The first modern urban planning theorist was Sir Ebenezer Howard. His ideas, although utopian, were adopted around the world because they were highly practical. He initiated the garden city movement in 1898. His garden cities were intended to be planned, self-contained communities surrounded by parks. Howard wanted the cities to be proportional, with separate areas of residences, industry, and agriculture. Inspired by the utopian novel Looking Backward and Henry George's work Progress and Poverty, Howard published his book Garden Cities of To-morrow in 1898. His work is an important reference in the history of urban planning. He envisioned the self-sufficient garden city as housing 32,000 people. He planned on a concentric pattern with open spaces, public parks, and six wide radial boulevards extending from the center. When it reached full population, Howard wanted another garden city to be developed nearby. He envisaged a cluster of several garden cities as satellites of a central city of 50,000 people, linked by road and rail. His model for a garden city was first realized at Letchworth and Welwyn Garden City in Hertfordshire. Howard's movement was extended by Sir Frederic Osborn to regional planning. 20th century In the early 1900s, urban planning became professionalized. With input from utopian visionaries, civil engineers, and local councilors, new approaches to city design were developed for consideration by decision-makers such as elected officials. In 1899, the Town and Country Planning Association was founded. In 1909, the first academic course on urban planning was offered by the University of Liverpool. Urban planning was first officially embodied in the Housing and Town Planning Act of 1909, which, influenced by Howard's 'garden city', compelled local authorities to introduce systems in which all housing construction conformed to specific building standards. In the United Kingdom following this Act, surveyors, civil engineers, architects, and lawyers began working together within local authorities. In 1910, Thomas Adams became the first Town Planning Inspector at the Local Government Board and began meeting with practitioners. In 1914, the Town Planning Institute was established. The first urban planning course in America was not established until 1924, at Harvard University. Professionals developed schemes for the development of land, transforming town planning into a new area of expertise. In the 20th century, urban planning was changed by the automobile industry. Car-oriented design shaped the rise of 'urban design': city layouts now revolved around roadways and traffic patterns. In June 1928, the International Congresses of Modern Architecture (CIAM) was founded at the Chateau de la Sarraz in Switzerland by a group of 28 European architects organized by Le Corbusier, Hélène de Mandrot, and Sigfried Giedion. CIAM produced one of many 20th-century manifestos meant to advance the cause of "architecture as a social art". Postwar Team X was a group of architects and other invited participants who assembled starting in July 1953 at the 9th Congress of CIAM and created a schism within CIAM by challenging its doctrinaire approach to urbanism. In 1956, the term "urban design" was first used at a series of conferences hosted by Harvard University. The event provided a platform for Harvard's Urban Design program.
The program also utilized the writings of famous urban planning thinkers: Gordon Cullen, Jane Jacobs, Kevin Lynch, and Christopher Alexander. In 1961, Gordon Cullen published The Concise Townscape. He examined the traditional artistic approach to city design of theorists including Camillo Sitte, Barry Parker, and Raymond Unwin. Cullen also created the concept of 'serial vision', which defined the urban landscape as a series of related spaces. Also in 1961, Jane Jacobs published The Death and Life of Great American Cities. She critiqued the modernism of CIAM and claimed that crime rates in publicly owned spaces were rising because of the Modernist approach of the 'city in the park'. She argued instead for an 'eyes on the street' approach to town planning through the resurrection of main public space precedents (e.g. streets, squares). In the same year, Kevin Lynch published The Image of the City. He was seminal to urban design, particularly with regard to the concept of legibility. He reduced urban design theory to five basic elements: paths, edges, districts, nodes, and landmarks. He also popularized the use of mental maps for understanding the city, in place of the two-dimensional physical master plans of the previous 50 years. Other notable works:
The Architecture of the City by Aldo Rossi (1966)
Learning from Las Vegas by Robert Venturi and Denise Scott Brown (1972)
Collage City by Colin Rowe (1978)
The Social Logic of Space by Bill Hillier and Julienne Hanson (1984)
The Next American Metropolis by Peter Calthorpe (1993)
The popularity of these works resulted in terms that became everyday language in the field of urban planning. Aldo Rossi introduced 'historicism' and 'collective memory' to urban design, and Colin Rowe proposed a 'collage metaphor' for understanding the collection of new and old forms within the same urban space. Peter Calthorpe developed a manifesto for sustainable urban living via medium-density living, and designed a manual for building new settlements around his concept of Transit Oriented Development (TOD). Bill Hillier and Julienne Hanson introduced space syntax to predict how movement patterns in cities contribute to urban vitality, anti-social behaviour, and economic success. 'Sustainability', 'livability', and 'high quality of urban components' also became commonplace in the field. Current trends Today, urban design seeks to create sustainable urban environments with long-lasting structures, buildings, and overall livability. Walkable urbanism is another approach to practice, defined within the Charter of New Urbanism. It aims to reduce environmental impacts by altering the built environment to create smart cities that support sustainable transport. Compact urban neighborhoods encourage residents to drive less. These neighborhoods have significantly lower environmental impacts when compared to sprawling suburbs. To prevent urban sprawl, circular flow land use management was introduced in Europe to promote sustainable land use patterns. As a result of the recent New Classical Architecture movement, sustainable construction aims to develop smart growth, walkability, architectural tradition, and classical design. It contrasts with modernist and globally uniform architecture. In the 1980s, urban design began to oppose the spread of solitary housing estates and suburban sprawl.
Managed urbanisation aims to make the urbanising process culturally, economically, and environmentally sustainable. As a possible solution to urban sprawl, Frank Reale has proposed the concept of Expanding Nodular Development (END), which integrates urban design and ecological principles to build smaller rural hubs linked by high-grade connecting freeways, rather than adding ever more expensive infrastructure, and the resulting congestion, to existing big cities. Paradigm shifts Throughout the young existence of the urban design discipline, many paradigm shifts have occurred that have affected the trajectory of the field in both theory and practice. These paradigm shifts cover multiple subject areas outside the traditional design disciplines.
Team 10 - The first major paradigm shift was the formation of Team 10 out of CIAM, the Congrès Internationaux d'Architecture Moderne. Its members believed that urban design should introduce ideas of 'human association', pivoting the design focus from the individual patron to the collective urban population.
The Brundtland Report and Silent Spring - Another paradigm shift followed the publication of the Brundtland Report and of Rachel Carson's book Silent Spring. These writings introduced the idea that human settlements could have detrimental impacts on ecological processes, as well as on human health, and spurred a new era of environmental awareness in the field.
The Planner's Triangle - The Planner's Triangle, created by Scott Campbell, emphasized three main conflicts in the planning process. The diagram exposed the complex relationships between economic development, environmental protection, and equity and social justice. For the first time, equity and social justice were considered as important as economic development and environmental protection within the design process.
Death of Modernism (demolition of Pruitt–Igoe) - Pruitt–Igoe was a spatial symbol and representation of Modernist theory regarding social housing. In its failure and demolition, those theories were put into question, and many within the design field considered the era of Modernism to be dead.
Neoliberalism and the election of Reagan - The election of President Reagan and the rise of neoliberalism affected the urban design discipline by shifting the planning process toward capitalistic gains and spatial privatization. Inspired by the trickle-down approach of Reaganomics, it was believed that the benefits of a capitalist emphasis within design would positively impact everyone. In practice, this led to exclusionary design practices and to what many consider "the death of public space".
Right to the City - The spatial and political battle over citizens' rights to the city has been an ongoing one. David Harvey, along with Don Mitchell and Edward Soja, discussed the right to the city as a matter of critically rethinking how spatial matters are determined. This change of thinking occurred in three forms: ontologically, sociologically, and in the combination of the two, the socio-spatial dialectic. Together, the aim shifted toward being able to measure what matters in a socio-spatial context.
Black Lives Matter (Ferguson) - The Black Lives Matter movement challenged design thinking by emphasizing the injustices and inequities suffered by people of color in urban space, as well as their right to public space without discrimination and brutality.
It claims that minority groups lack certain spatial privileges and that this deficiency can be a matter of life and death. To reach an equitable state of urbanism, the socio-economic lives of all groups need to be recognized equally within our urbanscapes. New approaches There have been many different theories and approaches applied to the practice of urban design. New Urbanism is an approach that began in the 1980s as a place-making initiative to combat suburban sprawl. Its goal is to increase density by creating compact and complete towns and neighborhoods. The 10 principles of New Urbanism are walkability, connectivity, mixed-use and diversity, mixed housing, quality architecture and urban design, traditional neighborhood structure, increased density, smart transportation, sustainability, and quality of life. New Urbanism and the developments it has created are a source of debate within the discipline, primarily with the landscape urbanist approach, but also because of its reproduction of idyllic architectural tropes that do not respond to their context. Andrés Duany, Elizabeth Plater-Zyberk, Peter Calthorpe, and Jeff Speck are all strongly associated with New Urbanism and its evolution over the years. Landscape Urbanism is a theory that first surfaced in the 1990s, arguing that the city is constructed of interconnected and ecologically rich horizontal field conditions, rather than an arrangement of objects and buildings. Charles Waldheim, Mohsen Mostafavi, James Corner, and Richard Weller are closely associated with this theory. Landscape urbanism theorises sites, territories, ecosystems, networks, and infrastructures through landscape practice, according to Corner, while applying a dynamic concept of cities as ecosystems that grow, shrink, or change phases of development, according to Waldheim. Everyday Urbanism is a concept introduced by Margaret Crawford, influenced by Henri Lefebvre, that describes the everyday lived experience shared by urban residents, including commuting, working, relaxing, moving through city streets and sidewalks, shopping, buying and eating food, and running errands. Everyday urbanism is not concerned with aesthetic value. Instead, it introduces the idea of eliminating the distance between experts and ordinary users and forces designers and planners to contemplate a 'shift of power' and address social life from a direct and ordinary perspective. Tactical Urbanism (also known as DIY Urbanism, Planning-by-Doing, Urban Acupuncture, or Urban Prototyping) is a city-, organization-, or citizen-led approach to neighborhood-building that uses short-term, low-cost, and scalable interventions and policies to catalyze long-term change. Top-up Urbanism is the theory and implementation of two techniques in urban design: top-down and bottom-up. Top-down urbanism is design implemented from the top of the hierarchy, normally the government or planning department. Bottom-up or grassroots urbanism begins with the people, at the bottom of the hierarchy. Top-up means that both methods are used together to create a more participatory design, one that is comprehensive and well regarded and therefore as successful as possible. Infrastructural Urbanism is the study of how the major investments that go into making infrastructural systems can be leveraged to be more sustainable for communities.
Instead of such systems being solely about efficiency in cost and production, infrastructural urbanism strives to utilize these investments to be more equitable with respect to social and environmental issues as well. Linda Samuels is a designer investigating how to accomplish this change in infrastructure through what she calls "next-generation infrastructure", which is "multifunctional; public; visible; socially productive; locally specific, flexible, and adaptable; sensitive to the eco-economy; composed of design prototypes or demonstration projects; symbiotic; technologically smart; and developed collaboratively across disciplines and agencies". Sustainable Urbanism, dating from the 1990s, is the study of how a community can be beneficial for the ecosystem, the people, and the economy with which it is associated. It is based on Scott Campbell's planner's triangle, which tries to find the balance between economy, equity, and the environment. Its main concept is to make cities as self-sufficient as possible while not damaging the surrounding ecosystem, today with an increased focus on climate stability. A key designer working with sustainable urbanism is Douglas Farr. Feminist Urbanism is the study and critique of how the built environment affects genders differently because of patriarchal social and political structures in society. Typically, the people at the table making design decisions are men, so their conception of public space and the built environment reflects their own life perspectives and experiences, which do not capture the experiences of women or children. Dolores Hayden is a scholar who has researched this topic from 1980 to the present day. Hayden writes, "when women, men, and children of all classes and races can identify the public domain as the place where they feel most comfortable as citizens, Americans will finally have homelike urban space." Educational Urbanism is an emerging discipline at the crossroads of urban planning, educational planning, and pedagogy. It is an approach that links economic activities, the need for new skills in the workplace, and the spatial configuration of the workplace to the design of educational spaces and the urban dimension of educational planning. Black Urbanism is an approach in which black communities are active creators, innovators, and authors of the process of designing and creating the neighborhoods and spaces of the metropolitan areas they have done so much to help revive over the past half-century. The goal is not to build black cities for black people but to explore and develop the creative energy that exists in so-called black areas, which has the potential to contribute to the sustainable development of the whole city. Debates in urbanism Underlying the practice of urban design are many theories about how best to design the city. Each theory makes a unique claim about how to effectively design thriving, sustainable urban environments. Debates over the efficacy of these approaches fill the urban design discourse. Landscape Urbanism and New Urbanism are commonly debated as distinct approaches to addressing suburban sprawl. While Landscape Urbanism proposes landscape as the basic building block of the city and embraces horizontality, flexibility, and adaptability, New Urbanism offers the neighborhood as the basic building block of the city and argues for increased density, mixed uses, and walkability.
Opponents of Landscape Urbanism point out that most of its projects are urban parks, and as such its application is limited. Opponents of New Urbanism claim that its preoccupation with traditional neighborhood structures is nostalgic, unimaginative, and culturally problematic. Everyday Urbanism argues for grassroots neighborhood improvements rather than master-planned, top-down interventions. Each theory elevates the roles of certain professions in the urban design process, further fueling the debate. In practice, urban designers often apply principles from many urban design theories. Emerging from the conversation is a universal acknowledgement of the importance of increased interdisciplinary collaboration in designing the modern city. Urban design as an integrative profession Urban designers work with architects, landscape architects, transportation engineers, urban planners, and industrial designers to reshape the city. Cooperation with public agencies and authorities, and with the interests of nearby property owners, is necessary to manage public spaces. Users often compete over the spaces and negotiate across a variety of spheres. Input is frequently needed from a wide range of stakeholders, which can lead to different levels of participation as defined in Arnstein's Ladder of Citizen Participation. While some professionals identify themselves specifically as urban designers, the majority have backgrounds in urban planning, architecture, or landscape architecture. Many collegiate programs incorporate urban design theory and design subjects into their curricula, and there is an increasing number of university programs offering degrees in urban design at the post-graduate level. Urban design considers:
Pedestrian zones
Incorporation of nature within a city
Aesthetics
Urban structure - the arrangement and relation of business and people
Urban typology, density, and sustainability - spatial types and morphologies related to the intensity of use, consumption of resources, and the production and maintenance of viable communities
Accessibility - safe and easy transportation
Legibility and wayfinding - accessible information about travel and destinations
Animation - designing places to stimulate public activity
Function and fit - places that support their varied intended uses
Complementary mixed uses - locating activities to allow constructive interaction between them
Character and meaning - recognizing differences between places
Order and incident - balancing consistency and variety in the urban environment
Continuity and change - locating people in time and place, respecting heritage and contemporary culture
Civil society - people are free to interact as civic equals, which is important for building social capital
Participation/engagement - including people in the decision-making process, which can be done at many different scales
Relationships with other related disciplines Urban design was originally thought of as separate from architecture and urban planning; the discipline has developed over time, partly out of the foundations of engineering. In Anglo-Saxon countries it is often considered a branch of architecture, urban planning, and landscape architecture, and limited to the construction of the urban physical environment. However, urban design is increasingly integrated with cultural, economic, political, and other social-science perspectives.
Rather than focusing only on spaces and groups of buildings, it looks at the whole city from a broader, more holistic perspective in order to shape a better living environment. Compared to architecture, urban design operates at a much larger spatial and temporal scale: it deals with neighborhoods, communities, and even the entire city. Urban design education The University of Liverpool's Department of Civic Design, founded in 1909, was the first urban design school in the world. Following the 1956 urban design conference, Harvard University established the first graduate program with urban design in its title, the Master of Architecture in Urban Design, although as a subject taught in universities its history in Europe is far older. Urban design programs explore the built environment from diverse disciplinary backgrounds and points of view. The pedagogically innovative combination of interdisciplinary studios, lecture courses, seminars, and independent study creates an intimate and engaging educational atmosphere in which students thrive and learn. Soon after, in 1961, Washington University in St. Louis founded its Master of Urban Design program. Today, some twenty urban design programs exist in the United States:
Andrews University - Berrien Springs, MI
Clemson University - Charleston, SC
Columbia Graduate School of Architecture, Planning and Preservation - New York, NY
City College of New York - New York, NY
Estopinal College of Architecture and Planning at Ball State University - Muncie, IN
Georgia Institute of Technology College of Design - Atlanta, GA
Harvard Graduate School of Design - Cambridge, MA
Iowa State University - Ames, IA
New York Institute of Technology - New York, NY
Notre Dame School of Architecture - Notre Dame, IN
Pratt Institute - Brooklyn, NY
Sam Fox School of Design & Visual Arts at Washington University in St. Louis - St. Louis, MO
Savannah College of Art and Design - Savannah, GA
Stuart Weitzman School of Design at University of Pennsylvania - Philadelphia, PA
Taubman College of Architecture and Urban Planning at University of Michigan - Ann Arbor, MI
University of California, Berkeley - Berkeley, CA
University of Colorado Denver - Denver, CO
University of Maryland - College Park, MD
University of Miami - Miami, FL
University of Texas at Austin School of Architecture - Austin, TX
University of North Carolina at Charlotte - Charlotte, NC
In the United Kingdom, Master's programmes in urban design are offered at the University of Manchester, the University of Sheffield, Cardiff University, London South Bank University, and Queen's University Belfast, and in City Design at the Royal College of Art. Issues The field of urban design holds enormous potential for helping address today's biggest challenges: an expanding population, mass urbanization, rising inequality, and climate change. In its practice as well as its theories, urban design attempts to tackle these pressing issues. As climate change progresses, urban design can mitigate the results of flooding, temperature changes, and increasingly detrimental storm impacts through a mindset of sustainability and resilience. In doing so, the discipline attempts to create environments that are constructed with longevity in mind, such as zero-carbon cities. Cities today must be designed to minimize resource consumption, waste generation, and pollution while also withstanding the unprecedented impacts of climate change.
To be truly resilient, our cities need to be able not just to bounce back from a catastrophic climate event but to bounce forward to an improved state. Another issue in this field is the common assumption that there were no mothers of planning and urban design. This is not the case: many women made proactive contributions to the field, among them Mary Kingsbury Simkhovitch, Florence Kelley, and Lillian Wald, who were prominent leaders in the City Social movement. The City Social was a movement that emerged between the better-known City Practical and City Beautiful movements, and it was mainly concerned with economic and social inequalities in urban life. Justice is, and will always be, a key issue in urban design. As previously mentioned, past urban strategies have caused injustices within communities that cannot be remedied by simple means. As urban designers tackle the issue of justice, they often are required to look at the injustices of the past and must be careful not to overlook the nuances of race, place, and socioeconomic status in their design efforts. This includes ensuring reasonable access to basic services and transportation, and fighting against gentrification and the commodification of space for economic gain. Organizations such as the Divided Cities Initiatives at Washington University in St. Louis and the Just City Lab at Harvard work on promoting justice in urban design. Until the 1970s, the design of towns and cities took little account of the needs of people with disabilities. At that time, disabled people began to form movements demanding recognition of their potential contribution if social obstacles were removed. Disabled people challenged the 'medical model' of disability, which saw physical and mental problems as an individual 'tragedy' and people with disabilities as 'brave' for enduring them. They proposed instead a 'social model', which said that barriers to disabled people result from the design of the built environment and the attitudes of able-bodied people. 'Access groups' were established, composed of people with disabilities who audited their local areas, checked planning applications, and made representations for improvements. The new profession of 'access officer' was established around that time to produce guidelines based on the recommendations of access groups and to oversee adaptations to existing buildings as well as to check on the accessibility of new proposals. Many local authorities now employ access officers, who are regulated by the Access Association. A new chapter of the Building Regulations (Part M) was introduced in 1992. Although it was beneficial to have legislation on this issue, the requirements were fairly minimal, and they continue to be improved with ongoing amendments. The Disability Discrimination Act 1995 continues to raise awareness and enforce action on disability issues in the urban environment. The issue of walkability has gained prominence in recent years, in connection not only with climate change but also with the health outcomes of residents. Car-centric urban design has an invariably negative effect on such outcomes. With proximity to internal combustion engines, residents tend to suffer from dangerous levels of air pollution, which lead to cardiovascular complications ranging from the acute, such as hypertension and alterations in heart rate, to the chronic, such as the outright development of atherosclerosis.
More people die from air pollution each year than from car accidents. This issue has been used to fuel movements for alternative forms of long- to mid-range transportation, such as trains and bicycles, with walking as the primary means of short-range travel. Such a shift would bring benefits on two simultaneous fronts: the physical activity from walking, and reduced exposure to air pollutants such as particulate matter, sulfur dioxide, and nitrogen dioxide, which has been shown to alleviate and lower the risk of many maladies, including diabetes, hypertension, and cardiovascular disease. Physical activity levels from walking are closely related to the abundance of open public spaces, commercial shops, and greenery, among other attributes. These attributes have also been found to contribute to stronger social and emotional health, as open public spaces facilitate more social interaction within communities. This issue is most prevalent in the United States, where the rise of neoliberalism is argued to have directly and intentionally produced car-centric infrastructure. See also
Blue space
Complete streets
Continuous productive urban landscape
Crime prevention through environmental design
Cyclability
Neighbourhood character
New Urbanism
Permeability (spatial and transport planning)
Sustainable urbanism
Urban density
Urban forest
Urban heat island
Urban green space
Urban planning
Urban vitality
Urbanism
Walkability
References
Further reading
Carmona, Matthew. Public Places Urban Spaces: The Dimensions of Urban Design. Routledge, London and New York.
Carmona, Matthew, and Tiesdell, Steve, editors. Urban Design Reader. Architectural Press of Elsevier Press, Amsterdam and Boston, 2007.
Larice, Michael, and MacDonald, Elizabeth, editors. The Urban Design Reader. Routledge, New York and London, 2007.
External links
Cities of the Future: overview of important urban design elements
Landscape
Landscape architecture
Urban design
[ "Engineering" ]
6,767
[ "Landscape architecture", "Architecture" ]
50,363
https://en.wikipedia.org/wiki/Indigo%20dye
Indigo dye is an organic compound with a distinctive blue color. Indigo is a natural dye extracted from the leaves of some plants of the Indigofera genus, in particular Indigofera tinctoria. Dye-bearing Indigofera plants were commonly grown and used throughout the world, particularly in Asia, with the production of indigo dyestuff economically important due to the historical rarity of other blue dyestuffs. Most indigo dye produced today is synthetic, constituting around 80,000 tonnes each year, as of 2023. It is most commonly associated with the production of denim cloth and blue jeans, where its properties allow for effects such as stone washing and acid washing to be applied quickly. Uses The primary use for indigo is as a dye for cotton yarn, mainly in the production of denim cloth suitable for blue jeans; on average, a pair of blue jeans requires only a few grams of dye. Smaller quantities are used in the dyeing of wool and silk. Indigo carmine, also known as indigotine, is an indigo derivative which is also used as a colorant. About 20,000 tonnes are produced annually, again mainly for the production of blue jeans. It is also used as a food colorant, and is listed in the United States as FD&C Blue No. 2. Sources Natural sources A variety of plants have provided indigo throughout history, but most natural indigo was obtained from those in the genus Indigofera, which are native to the tropics, notably the Indian subcontinent. The primary commercial indigo species in Asia was true indigo (Indigofera tinctoria, also known as I. sumatrana). A common alternative used in relatively colder subtropical locations, such as Japan's Ryukyu Islands and Taiwan, is Strobilanthes cusia. Until the introduction of Indigofera species from the south, Persicaria tinctoria (dyer's knotweed) was the most important blue dyestuff in East Asia; however, the crop produced less dyestuff than the average crop of indigo and was quickly surpassed in favour of the more economical Indigofera tinctoria plant. In Central and South America, the species grown is Indigofera suffruticosa, also known as anil, and in India an important species was Indigofera arrecta, Natal indigo. In Europe, Isatis tinctoria, commonly known as woad, was used for dyeing fabrics blue; it contains the same dyeing compound as indigo, and its colorant was also referred to as indigo. Several other plants contain the precursors of indigo which, when exposed to an oxidizing source such as atmospheric oxygen, react to produce indigo dye; however, the relatively low concentrations of these compounds make such plants difficult to work with, and the color is more easily tainted by other dye substances also present, typically leading to a greenish tinge. The precursor to indigo is indican, a colorless, water-soluble derivative of the amino acid tryptophan, and Indigofera leaves contain as much as 0.2–0.8% of this compound. Pressing cut leaves into a vat and soaking them hydrolyzes the indican, releasing β-D-glucose and indoxyl. The indoxyl dimerizes in the mixture, and after 12–15 hours of fermentation yields the yellow, water-soluble leucoindigo. Subsequent exposure to air forms the blue, water-insoluble indigo dye. The dye precipitates from the fermented leaf solution upon oxidation, but may also be precipitated when mixed with a strong base such as lye. The solids are filtered, pressed into cakes, dried, and powdered. The powder is then mixed with various other substances to produce different shades of blue and purple.
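The extraction chemistry just described can be summarised in a simplified overall scheme (a sketch only: the real vat process is enzymatic and involves several intermediates, and the hydrogen released in the dimerization step, written here as 2 [H], is taken up by other species in the fermenting liquor):

\mathrm{indican} + \mathrm{H_2O} \longrightarrow \text{indoxyl} + \beta\text{-D-glucose} \quad (\text{hydrolysis})

2\,\text{indoxyl} \longrightarrow \text{leucoindigo} + 2\,[\mathrm{H}] \quad (\text{dimerization; yellow, water-soluble})

\text{leucoindigo} + \tfrac{1}{2}\,\mathrm{O_2} \longrightarrow \text{indigo} + \mathrm{H_2O} \quad (\text{air oxidation; blue, water-insoluble})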
Natural sources of indigo also include mollusks: the Murex genus of sea snails produces a mixture of indigo and 6,6'-dibromoindigo (red), which together produce a range of purple hues known as Tyrian purple. Light exposure during part of the dyeing process can convert the dibromoindigo into indigo, resulting in blue hues known as royal blue, hyacinth purple, or tekhelet. Chemical synthesis Given its economic importance, indigo has been prepared by many methods. The Baeyer–Drewsen indigo synthesis dates back to 1882. It involves an aldol condensation of o-nitrobenzaldehyde with acetone, followed by cyclization and oxidative dimerization to indigo. This route was highly useful for obtaining indigo and many of its derivatives on the laboratory scale, but proved impractical for industrial-scale synthesis. Johannes Pfleger and Karl Heumann eventually came up with an industrial mass-production synthesis from aniline, using mercury as a catalyst; the method was discovered by accident by Karl Heumann in Zurich, in an incident involving a broken thermometer. The first commercially practical route of producing indigo is credited to Pfleger in 1901. In this process, N-phenylglycine is treated with a molten mixture of sodium hydroxide, potassium hydroxide, and sodamide. This highly sensitive melt produces indoxyl, which is subsequently oxidized in air to form indigo. Variations of this method are still in use today. An alternative and also viable route to indigo is credited to Heumann in 1897. It involves heating N-(2-carboxyphenyl)glycine in an inert atmosphere with sodium hydroxide. The process is easier than the Pfleger method, but the precursors are more expensive. Indoxyl-2-carboxylic acid is generated; this material readily decarboxylates to give indoxyl, which oxidizes in air to form indigo. The preparation of indigo dye is practised in college laboratory classes according to the original Baeyer–Drewsen route. History The oldest known fabric dyed indigo, dated to 6,000 years ago, was discovered in Huaca Prieta, Peru. Many Asian countries, such as India, China, Japan, and Southeast Asian nations, have used indigo as a dye (particularly for silk) for centuries. The dye was also known to ancient civilizations in Mesopotamia, Egypt, Britain, Mesoamerica, Peru, Iran, and West Africa. Indigo was cultivated in India, which was also the earliest major center for its production and processing, and the Indigofera tinctoria species was domesticated there. Indigo, used as a dye, made its way to the Greeks and the Romans, where it was valued as a luxury product. In Mesopotamia, a neo-Babylonian cuneiform tablet of the seventh century BC gives a recipe for the dyeing of wool, where lapis-colored wool (uqnatu) is produced by repeated immersion and airing of the cloth; the indigo was most probably imported from India. The Romans used indigo as a pigment for painting and for medicinal and cosmetic purposes. It was a luxury item imported to the Mediterranean from India by Arab merchants. India was a primary supplier of indigo to Europe as early as the Greco-Roman era. The association of India with indigo is reflected in the Greek word for the dye, indikón ('Indian'). The Romans latinized the term to indicum, which passed into Italian dialect and eventually into English as the word indigo. In Bengal, indigo cultivators revolted against exploitative working conditions created by European merchants and planters in what became known as the Indigo revolt of 1859.
The Bengali play Nil Darpan by Indian playwright Dinabandhu Mitra was a fictionalized retelling of the revolt. The demand for indigo in the 19th century is indicated by the fact that in 1897 an area larger than Luxembourg was dedicated to the cultivation of indican-producing plants, mainly in India. In Europe, indigo remained a rare commodity throughout the Middle Ages; a chemically identical dye derived from the woad plant (Isatis tinctoria) was used instead. In the late 15th century, the Portuguese explorer Vasco da Gama discovered a sea route to India. This led to the establishment of direct trade with India, the Spice Islands, China, and Japan. Importers could now avoid the heavy duties imposed by Persian, Levantine, and Greek middlemen and the lengthy and dangerous land routes which had previously been used. Consequently, the importation and use of indigo in Europe rose significantly. Much European indigo from Asia arrived through ports in Portugal, the Netherlands, and England. Many indigo plantations were established by European powers in tropical climates. Spain imported the dye from its colonies in Central and South America, and it was a major crop in Haiti and Jamaica, with much or all of the labor performed by enslaved Africans and African Americans. In the Spanish colonial era, intensive production of indigo for the world market in the region of modern El Salvador entailed such unhealthy conditions that the local indigenous population, forced to labor in pestilential conditions, was decimated. Indigo plantations also thrived in the Virgin Islands. However, France and Germany outlawed imported indigo in the 16th century to protect the local woad dye industry. In central Europe, indigo resist dyeing is a centuries-old skill that has received UNESCO Intangible Cultural Heritage of Humanity recognition. Newton used "indigo" to describe one of the two new primary colors he added to the five he had originally named, in his revised account of the rainbow in Lectiones Opticae of 1675. Because of its high value as a trading commodity, indigo was often referred to as blue gold. Throughout West Africa, indigo was the foundation of centuries-old textile traditions. From the Tuareg nomads of the Sahara to Cameroon, clothes dyed with indigo signified wealth. Women dyed the cloth in most areas, with the Yoruba of Nigeria and the Mandinka of Mali particularly well known for their expertise. Among the Hausa male dyers, working at communal dye pits was the basis of the wealth of the ancient city of Kano, and they can still be seen plying their trade today at the same pits. The Tuareg are sometimes called the "Blue People" because the indigo pigment in the cloth of their traditional robes and turbans stained their skin dark blue. In Japan, indigo became especially important during the Edo period. This was due to a growing textiles industry, and because commoners had been banned from wearing silk, leading to the increasing cultivation of cotton and, consequently, indigo – one of the few substances that could dye it. In North America, indigo was introduced into colonial South Carolina by Eliza Lucas, where it became the colony's second-most important cash crop (after rice). As a major export crop, indigo supported plantation slavery there.
In the May and June 1755 issues of The Gentleman's Magazine, there appeared a detailed account of the cultivation of indigo, accompanied by drawings of necessary equipment and a prospective budget for starting such an operation, authored by South Carolina planter Charles Woodmason. It later appeared as a book. By 1775, indigo production in South Carolina exceeded 1,222,000 pounds. When Benjamin Franklin sailed to France in November 1776 to enlist France's support for the American Revolutionary War, 35 barrels of indigo were on board the Reprisal, the sale of which would help fund the war effort. In colonial North America, three commercially important species were grown: the native I. caroliniana, and the introduced I. tinctoria and I. suffruticosa. Synthetic development In 1865 the German chemist Adolf von Baeyer began working on the synthesis of indigo. He described his first synthesis of indigo in 1878 (from isatin) and a second synthesis in 1880 (from 2-nitrobenzaldehyde). (It was not until 1883 that Baeyer finally determined the structure of indigo.) The synthesis of indigo remained impractical, so the search for alternative starting materials at Badische Anilin- und Soda-Fabrik (BASF) and Hoechst continued. Johannes Pfleger and Karl Heumann eventually came up with an industrial mass-production synthesis. The synthesis of N-(2-carboxyphenyl)glycine from the easily obtained aniline provided a new and economically attractive route. BASF developed a commercially feasible manufacturing process that was in use by 1897, at which time 19,000 tons of indigo were being produced from plant sources. This had dropped to 1,000 tons by 1914 and continued to contract. By 2011, 50,000 tons of synthetic indigo were being produced worldwide. Dyeing technology Indigo white Indigo is a challenging dye because it is not soluble in water. To be dissolved, it must undergo a chemical change (reduction). Reduction converts indigo into "white indigo" (leuco-indigo). When a submerged fabric is removed from the dyebath, the white indigo quickly combines with oxygen in the air and reverts to the insoluble, intensely colored indigo. When it first became widely available in Europe in the 16th century, European dyers and printers struggled with indigo because of this distinctive property. It also required several chemical manipulations, some involving toxic materials, and presented many opportunities to injure workers. In the 19th century, English poet William Wordsworth referred to the plight of indigo dye workers of his hometown of Cockermouth in his autobiographical poem The Prelude, writing of their dire working conditions and the empathy that he felt for them. A pre-industrial process for production of indigo white, used in Europe, was to dissolve the indigo in stale urine, which contains ammonia. A more convenient reductive agent is zinc. Another pre-industrial method, used in Japan, was to dissolve the indigo in a heated vat in which a culture of thermophilic, anaerobic bacteria was maintained. Some species of such bacteria generate hydrogen as a metabolic product, which converts insoluble indigo into soluble indigo white. Cloth dyed in such a vat was decorated with the techniques of shibori (tie-dye), kasuri, katazome, and tsutsugaki. Examples of clothing and banners dyed with these techniques can be seen in the works of Hokusai and other artists.
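The vat chemistry just described can be summarized as a redox cycle. As a minimal sketch (ours, not from the source; the reducing agent is left generic, since stale urine, zinc, glucose, or bacterial hydrogen can all supply the reducing equivalents), reduction adds two hydrogens to indigo, and air re-oxidizes the leuco form:

\[
\mathrm{C_{16}H_{10}N_2O_2}\ (\text{indigo, insoluble}) + 2\,[\mathrm{H}] \rightleftharpoons \mathrm{C_{16}H_{12}N_2O_2}\ (\text{leuco-indigo, soluble in the alkaline vat})
\]
\[
2\,\mathrm{C_{16}H_{12}N_2O_2} + \mathrm{O_2} \longrightarrow 2\,\mathrm{C_{16}H_{10}N_2O_2} + 2\,\mathrm{H_2O}
\]

The second equation is why the cloth develops its blue color only after it is lifted out of the vat and exposed to the air.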
Direct printing Two different methods for the direct application of indigo were developed in England in the 18th century and remained in use well into the 19th century. The first method, known as 'pencil blue' because it was most often applied by pencil or brush, could be used to achieve dark hues. Arsenic trisulfide and a thickener were added to the indigo vat. The arsenic compound delayed the oxidation of the indigo long enough to paint the dye onto fabrics. The second method was known as 'China blue' due to its resemblance to Chinese blue-and-white porcelain. Instead of using an indigo solution directly, the process involved printing the insoluble form of indigo onto the fabric. The indigo was then reduced in a sequence of baths of iron(II) sulfate, with air oxidation between each immersion. The China blue process could make sharp designs, but it could not produce the dark hues possible with the pencil blue method. Around 1880, the 'glucose process' was developed. It finally enabled the direct printing of indigo onto fabric and could produce inexpensive dark indigo prints unattainable with the China blue method. Since 2004, freeze-dried indigo, or instant indigo, has become available. In this method, the indigo has already been reduced, and then freeze-dried into a crystal. The crystals are added to warm water to create the dye pot. As in a standard indigo dye pot, care has to be taken to avoid mixing in oxygen. Freeze-dried indigo is simple to use, and the crystals can be stored indefinitely as long as they are not exposed to moisture. Chemical properties Indigo dye is a dark blue crystalline powder that sublimes on heating. It is insoluble in water, alcohol, or ether, but soluble in DMSO, chloroform, nitrobenzene, and concentrated sulfuric acid. The chemical formula of indigo is C16H10N2O2. The molecule absorbs light in the orange part of the spectrum (λmax = 613 nm). The compound owes its deep color to the conjugation of the double bonds, i.e. the double bonds within the molecule are adjacent and the molecule is planar. In indigo white, the conjugation is interrupted because the molecule is non-planar. Indigo derivatives The benzene rings in indigo can be modified to give a variety of related dyestuffs. Thioindigo, where the two NH groups are replaced by S atoms, is deep red. Tyrian purple is a dull purple dye that is secreted by a common Mediterranean snail. It was highly prized in antiquity. In 1909, its structure was shown to be 6,6'-dibromoindigo (red). 6-bromoindigo (purple) is a component as well. It has never been produced on a commercial basis. The related Ciba blue (5,7,5',7'-tetrabromoindigo) is, however, of commercial value. Indigo and its derivatives featuring intra- and intermolecular hydrogen bonding have very low solubility in organic solvents. They can be made soluble using transient protecting groups such as the tBOC group, which suppresses intermolecular bonding. Heating of the tBOC indigo results in efficient thermal deprotection and regeneration of the parent H-bonded pigment. Treatment with sulfuric acid converts indigo into a blue-green derivative called indigo carmine (sulfonated indigo). It became available in the mid-18th century. It is used as a colorant for food, pharmaceuticals, and cosmetics. Indigo as an organic semiconductor Indigo and some of its derivatives are known to be ambipolar organic semiconductors when deposited as thin films by vacuum evaporation.
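As a quick arithmetic check (our calculation, not a figure from the source), the quoted absorption maximum can be converted into a photon energy:

\[
E = \frac{hc}{\lambda} \approx \frac{1240\ \mathrm{eV \cdot nm}}{613\ \mathrm{nm}} \approx 2.0\ \mathrm{eV}
\]

An absorbed photon of roughly 2 eV lies in the orange band of the visible spectrum, consistent with the statement above that indigo absorbs orange light and therefore appears blue; an optical gap of this order is also typical of the organic semiconductors just mentioned.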
Safety and the environment Indigo has low oral toxicity, with an LD50 of 5 g/kg (0.5% of body mass) in mammals. In 2009, large spills of blue dyes were reported downstream of a blue jeans manufacturer in Lesotho. The compound has been found to act as an agonist of the aryl hydrocarbon receptor. See also Champaran Satyagraha Haint blue Indigo revolt Red, White, and Black Make Blue References Further reading Paul, Jenny Balfour. 2020. "Indigo and Blue: A Marriage Made in Heaven." Textile Museum Journal 47 (January): 160–85. External links Plant Cultures: botany, history and uses of indigo FD&C regulation on indigotine Enones Indigo structure dyes Indolines Organic pigments Organic semiconductors
An internet relationship is a relationship between people who have met online, and in many cases know each other only via the Internet. Online relationships are similar in many ways to pen pal relationships. This relationship can be romantic, platonic, or even based on business affairs. An internet relationship (or online relationship) is generally sustained for a certain amount of time before being called a relationship, just as with in-person relationships. The major difference here is that an internet relationship is sustained via computer or online service, and the individuals in the relationship may or may not ever meet each other in person. Otherwise, the term is quite broad and can include relationships based upon text, video, audio, or even virtual characters. This relationship can be between people in different regions, different countries, different sides of the world, or even people who reside in the same area but do not communicate in person. Technological advances According to J. Michael Jaffe, author of Gender, Pseudonyms, and CMC: Masking Identities and Baring Souls, "the Internet was originally established to expedite communication between governmental scientists and defense experts, and was not at all intended to be the popular 'interpersonal mass medium' it has become", yet new and revolutionary devices enabling the mass public to communicate online are constantly being developed and released. Rather than requiring many devices for different uses and ways of interacting, online communication has become more accessible and cheaper through Internet functionality being built into a single device, such as a mobile phone, tablet, laptop, or smartphone. Other ways of communicating online with these devices are via services and applications such as Email, Skype, iChat, instant messaging programs, social networking services, asynchronous discussion groups, online games, virtual worlds and the World Wide Web. Some of these ways of communicating online are asynchronous (meaning not in real time), such as YouTube, and some are synchronous (immediate communication), such as Twitter. Synchronous communication occurs when two or more participants are interacting in real time via voice or text chat. Types of relationships Many types of internet relationships are possible in today's world of technology. Internet dating Internet dating is very relevant in the lives of many individuals worldwide. A major benefit of the rise of Internet dating is the decrease in prostitution. People no longer need to search on the streets to find casual relationships. They can find them online if that is what they desire. Internet dating websites offer matchmaking services for people to find love or whatever else they may be looking for. The creation of the internet and its progressive innovations have opened up doors for people to meet other people whom they might otherwise never have met. Dating website innovations Although the availability of uploading videos to the internet is not a new innovation, it has been made easier since 2008 thanks to YouTube. YouTube began the surge of video streaming sites in 2005 and within three years, smaller web developers started implementing video sharing on their sites. Internet dating sites have benefited greatly from the surge in the ease and accessibility of picture and video uploading. Videos and pictures are equally important for most personal profiles. These profiles can be found on sites used for interpersonal relationships other than dating as well.
"The body, although graphically absent, does not have to be any less present." Older and less advanced sites usually still allow, and often require, each user to upload a picture. Newer and more advanced sites offer the possibility of streaming media live via the user's profile for the site. The inclusion of videos and pictures has become almost a necessity for sexual social networking sites to maintain the loyalty of their members. It is appealing to internet users to be able to view and share videos, especially when forming relationships or friendships. Users According to an article in the New York Times, mediated matchmaking has been around since the mid-1800s. Online dating was made available in the mid-1990s, with the creation of the first dating sites. These dating sites create a space for liberation of sexuality. According to Sam Yagan of OkCupid, "the period between New Year's Day and Valentine's Day is [our] busiest six weeks of the year". Changes that online dating companies have created include not only the increase of pickiness in singles, but the rise in interracial marriages and spread the acceptance of homosexual individuals. Dating sites "are a place where sexual minorities, inter-sexed people and gay people are enjoying a newly found freedom". Several studies have shown the availability of online dating to produce a greater closeness and intimacy between individuals because it circumvents barriers that face-to-face interactions might have. "Participating in personal relationships online allow for almost full freedom from power relations in the offline/real world." A plethora of virtual sexual identities are represented in online profiles. The amount of personal information users are being asked to provide is constantly increasing. More and more online users are starting to explore and experiment with aspects of their sexual identities, whereas before, they may have felt uncomfortable due to social constraints or fear of possible repercussions. Most internet sites containing personal profiles require individuals to fill in "personal information" sections. Often these sections include a series of multiple choice questions. Due to the anonymity of these virtual profiles, individuals are more frequent to 'role'-play at being one of the predefined 'types', although offline, reservations may inhibit the individual from sharing true answers. There have also been many studies done to observe online daters and their reason for turning to the internet to look for romantic partners. According to Robert J. Brym and Rhonda L. Lenton, users of online games, websites, and other virtual communities are encouraged to conceal their identities and learn things about themselves that they never knew before. With a concealed identity, an online user can be whoever they want to be at that exact moment. They have the ability to venture outside of their comfort zone and act as someone completely different. The Journal of Computer-Mediated Communication reports the results of a study conducted by Robert J. Stephure, Susan D. Boon, Stacy L. MacKinnon, and Vicki L. Deveau on types of relationships online participants were seeking. They concluded that "when asked what they were looking for in an online relationship, the considerable majority of participants expressed interest in seeking fun, companionship, and someone to talk to. Most also reported interests in developing casual friendships and dating relationships with online partners. 
Substantially fewer reported using the Internet for the specific purposes of identifying potential sexual or marital partners." However, a study published in the journal Proceedings of the National Academy of Sciences in 2012 looked at about 19,000 married people; those who met their spouse online said their marriage was more satisfying than those who met their spouse offline. Plus, marriages that began online were less likely to end in separation or divorce. Faye Mishna, Alan McLuckie, and Michael Saini, co-authors of the Social Work Research article Real-World Dangers in an Online Reality: A Qualitative Study Examining Online Relationships and Cyber Abuse, reported the results of their research and observation of over 35,000 individuals between the ages of 6 and 24 who had been or currently were a part of an internet relationship about which they had concerns, and consequently contacted an organization that provided online support. Of the final 346 posts chosen to be included in the study, the average age of online users sharing information about their online relationship(s) was 14 years old. The overwhelming result was that children and youth consider their online relationships to be just as "real" as their offline relationships. The study also showed that the internet plays a crucial role in sexual and romantic experiences of this population of adolescent users. Success of dating websites and social networks Canaan Partners have reported that the dating industry brings in an estimated 3–4 billion dollars yearly from membership fees and advertisements. The range of dating sites has expanded vastly over the past two decades. There are dating websites that focus on the matchmaking of certain groups of people based on religion, sexual preference, race, etc. The average life expectancy has been on the rise, leaving many young singles feeling as if they have plenty of time to find a life partner. This opens up time to travel and experience things without the burden of a relationship. As of 1996, more than 20% of Canadians "were not living in the same census subdivision as they were five years earlier" and as of 1998, more than half of employed Canadians worried "they [did] not have enough time to spend with their family and friends". Due to an increase in many businesses requiring their employees to travel, singles, often young professionals, find online dating websites to be the perfect answer to their "problem", state Brym and Lenton. Erik Shipmon, author of "Why Do People Date Online?", exclaims, "the Internet is the ultimate singles' bar—without the noise, the drunks, and the high cost of all those not-so-happy hours. Nor, thanks to online dating membership sites, do you have to depend on your friends and family to hook you up with people they think would be perfect for you—and who wouldn't be perfect for, well, anyone, which is why they are still unattached". Cybersex Some people who are in an online relationship also participate in cybersex, which is a virtual sex encounter in which two or more individuals who are connected remotely via computer network send each other sexually explicit messages describing a sexual experience. This can also include individuals communicating sexually via video or audio. Some websites offer a cybersex service, where a patron pays the website owner in exchange for an online sexual experience with another person. Cybersex sometimes includes real life masturbation.
The quality of a cybersex encounter typically depends upon the participants' abilities to evoke a vivid, visceral mental picture in the minds of their partners. Imagination and suspension of disbelief are also critically important. Cybersex can occur either within the context of existing or intimate relationships, e.g. among lovers who are geographically separated, or among individuals who have no prior knowledge of one another and meet in virtual spaces or cyberspaces and may even remain anonymous to one another. In some contexts cybersex is enhanced by the use of a webcam to transmit real-time video of the partners. Social networking relationships Social networking has enabled people to connect with each other via the internet. Sometimes, members of a social networking service do know all, or many of their "friends" (Facebook) or "connections" (LinkedIn) etc. in person. However, sometimes internet relationships are formed through these services, including but not limited to: Facebook, Myspace, Google Plus, LinkedIn, Twitter, Instagram, DeviantArt, Xanga and Discord. "Social networking service" is a very broad term, branching out to websites based on many different aspects. One feature common to all social networking sites is the possibility of an internet relationship. These sites enable users to search for new connections based on location, education, experiences, hobbies, age, gender, and more. This allows individuals meeting each other to already have some characteristic in common. These sites usually allow for people who do not know each other to "add" each other as a connection or friend and to send each other messages. This connection can lead to more communication between two individuals. An immense amount of information about the individuals can be found directly on their social network profile. Provided those individuals include plentiful and accurate information about themselves, people in online relationships can find out much about each other by viewing profiles and "about me's". Communication between individuals can become more frequent, thus forming some type of relationship via the internet. This relationship can turn into an acquaintance, a friendship, a romantic relationship, or even a business partnership. Online gaming Online gaming brings many different types of people together in one interface. A common type of online game where individuals form relationships is the MMORPG, or a massively multiplayer online role-playing game. Some examples of MMORPGs are World of Warcraft, EverQuest, SecondLife, Final Fantasy Online, and Minecraft (see List of massively multiplayer online role-playing games.) These games enable individuals to create a character that represents them and interact with other characters played by real individuals, while at the same time carrying out the tasks and goals of the actual game. Online games other than MMORPGs can elicit internet relationships as well. Card games such as poker and board games like Pictionary have been transformed into virtual interfaces that allow an individual to play against people across the internet, as well as chatting with them. Virtual pet sites such as Webkinz and Neopets are another type of popular online game that allow individuals to socialize with other players. Games create social spaces for people of various ages, with userbases often crossing age brackets. Most of these games enable individuals to chat with each other, as well as form groups and clans.
This interaction can lead to further communication, turning into a friendship or romantic relationship. Digital anthropologist Bonnie Nardi emphasizes the significance of online relationships in the video game "World of Warcraft". Based on participant observation, she describes players who met on the internet and ended up developing a relationship through the process of playing the game. People from all across the world can meet on a virtual platform and even start a relationship; such technologies have brought people closer together and created new social environments. Nardi describes one of her guild members, Zeke, who was engaged to Malore, whom he had met in a dungeon run. "I had not seen that there might be anything other than emoting going on, and told him I was married. Zeke then revealed that he was engaged to Malore (whom he had met in World of Warcraft) but that the relationship was not going well." (Nardi, p. 165) Zeke's relationship with Malore developed because Zeke had several accounts in the game and was apparently able to flirt with Malore while using different characters to run the dungeon with her. Online forums and chatrooms An Internet forum is a website that includes conversations in the form of posted messages. Forums can be for general chatting or can be broken down into categories and topics. They can be used to ask questions, post opinions, or debate topics. Forums include their own jargon, for example a conversation is a "thread". Different forums also have different lingo and styles of communicating. There are religion forums, music forums, car forums, and countless other topics. These forums elicit communication between individuals no matter the location, gender, ethnicity, etc. although some do include age restrictions. Through these forums people may comment on each other's topics or threads, and with further communication form a friendship, partnership, or romantic relationship. Professional relationships Even in work settings, the introduction of the internet has established easier and sometimes more practical forms of communicating. The internet is often referred to as a vehicle for investor relations or the "electronic highway" for business transactions in the United States. The Internet has increased organizational involvement by facilitating the flow of information between face-to-face meetings and allowing for people to arrange meetings at virtually any given time. Socially, it has stimulated positive change in people's lives by creating new forms of online interaction and enhancing offline relationships worldwide, allowing for better and more efficient business communication. Advantages For more intimate relationships, research has shown that personal disclosures create a greater sense of intimacy. This gives a sense of trust and equality, which people search for in a relationship, and this is often easier to achieve online than face to face, although not all disclosures are responded to positively. Individuals are able to engage in more self-disclosure than in an average interaction, because a person can share their inner thoughts, feelings and beliefs and be met with less disapproval and fewer sanctions online than is the case in face-to-face encounters. Researcher Cooper termed this type of relationship a "Triple A Engine", implying that internet relationships are accessible, affordable, and anonymous.
Online, barriers that might stand in the way of a potential relationship, such as physical attractiveness, social anxiety and stuttering, do not exist. Whereas those could hinder an individual in face-to-face encounters, an Internet interaction negates this and allows the individual freedom. Research has shown that stigmas such as these can make a large impact on first impressions in face-to-face meetings, and this does not apply with an online relationship. Furthermore, as the internet has become a worldwide phenomenon, many people can interact with others around the world, or find someone who fits their radar or their type, if there is no one who they find physically or emotionally attractive in their own area. The internet allows for interaction of many different people, so there is a greater chance of finding someone more attractive. The Internet "enhances face-to-face and telephone communication as network members become more aware of each others' needs and stimulate their relationships through more frequent contact". According to Joseph Walther's social information processing theory, computer-mediated communications can work for people. While online interactions take roughly four times longer than face-to-face interactions, this gives users the opportunity to evaluate and the time to think, helping them compose the ideal response. Thus, chronemics is the only nonverbal cue available in digital communication. With the focus on conversation and not appearance, digital interactions over time will develop higher levels of intimacy than face-to-face interactions. In The Forms of Capital, Pierre Bourdieu defines social capital as "the aggregate of the actual or potential resources which are linked to possession of a durable network of more or less institutionalized relationships of mutual acquaintance and recognition." Social capital researchers have found that "various forms of social capital, including ties with friends and neighbors, are related to indices of psychological well-being, such as self-esteem and satisfaction with life". Thus, the use of a social networking service could help to improve social capital. Beyond helping to improve social capital, the use of a social networking service could help to retain it. For instance, Cummings, Lee and Kraut have shown that communication services like instant messaging "help college students to remain close to their high school friends after they leave home for college". Disadvantages The Internet provides the opportunity for misrepresentation, particularly in the early stages of a relationship when commitment is low, and self-presentation and enhancement agendas are paramount. After receiving many complaints about his social networking site Ashley Madison, founder Noel Biderman responded to accusations that his and other similar cyber-dating sites are at fault for the "rising divorce rates and growth in casual dating". Biderman argued that the idea for Ashleymadison.com came to him when he realized the growing number of people on "mainstream dating sites" were married or in a relationship but posing as singles in order to start an affair. In an empirical study of commitment and misrepresentation on the Internet, Cornwell and Lundgren (2001) surveyed 80 chat-room users, half about their 'realspace' relationships and half about their cyberspace relationships. They found that 'realspace' relationships were considered to be more serious, with greater feelings of commitment, than the cyberspace relationships.
Both groups, however, reported similar levels of satisfaction and potential for 'emotional growth' with regard to romantic relationships. Cornwell and Lundgren went on to ask about whether the participants had misrepresented themselves to their partner in a number of areas: their interests (e.g. hobbies, musical tastes); their age; their background; their appearance and 'mis-presentation of yourself in any other way' (p. 203). Participants responded using either yes or no to each question, and their score was summed into a misrepresentation measure. Dangers of internet relationships An oft-forgotten aspect of online interactions is the possible dangers involved. The option for an individual to conceal their identity may be harmless in many cases, but it can also lead to extremely dangerous situations. Hidden identities are often used in cases of cyberbullying and cyberstalking. Concealing one's true identity is also a technique that can be used to convince a new online friend or lover that one is someone completely different. This is something most online predators do in order to prey on victims. Despite the awareness of dangers, Mishna et al. found children and youth to still partake in online relationships with little care or concern for negative effects. Brym and Lenton also claim that "although [their] true identities are usually concealed, they sometimes decide to meet and interact in real life". Engaging in internet relationships is also risky because the information placed online about an individual does not have to be accurate. An individual can formulate an entirely different persona and pose as this person as long as they desire. This can be hurtful to individuals who are honest about their identities and believe that they are in a positive relationship or friendship with the individual. This concept has been most recently illustrated on the television show Catfish: The TV Show. Internet affairs Internet affairs offer a new perspective on the definition of an affair. Some people consider internet relationships to be classified as an affair while others claim contact affairs are much more serious. Trent Parker and Karen Wampler conducted a qualitative study to discover the different perceptions of internet relationships based on gender differences. Through their study they found internet affairs were considered less of an affair than a physical relationship. From the results of the same study, Parker and Wampler also concluded that women considered sexual internet activities, such as internet pornography, to be much more serious than the men did. Internet affairs and physical contact affairs are similar because they both involve another partner. "The primary difference between an internet affair and an affair is that in an affair, the couple meet to engage in the relationship. With internet affairs, on the other hand, the couple rarely meet. This offers a unique advantage to internet affairs." Effects on face-to-face interactions Since the creation of the Internet, communication has become one of its prime uses. It has become a ubiquitous force in people's everyday lives due to the increase in the regularity and quality of interaction. The internet has also created a new approach to human relationships, and it has changed the way people connect to one another in their social worlds.
Online relationships have also changed which strategies are effective for maintaining relationships, depending on how exclusively the relationship is conducted over the internet. In the past, postal services made communication possible without the necessity of physical presence, and the invention of the telephone allowed synchronous communication between people across long distances. The internet combined the advantages of both mail and telephone, unifying the speed of the telephone with the written character of the mail service. The evolution of communication within the Internet has arguably changed the nature of individuals' relationships with one another. Some see a major negative impact of the increased use of internet communication in its diversion from true community, because online interaction via computers is often regarded as a more impersonal communication medium than face-to-face communication. Others consider that the incorporation of the internet allows online activities to be "viewed as an extension of offline activities". The multiple techniques that humans use to communicate, such as taking turns or nodding in agreement, are absent in these settings. Without the body language cues present in a face-to-face conversation, such as pauses or gestures, participants in instant messaging may type over one another's messages without necessarily waiting for a cue to talk. Also, with or without the correct grammar, tone and context can be misunderstood. Recently, some people who have adopted internet-based communication have come to miss face-to-face interaction, because this traditional way of communicating can advance relationships in ways online contact cannot. Early positive view In 1991, Stone argued that when virtual communities began forming, this process generated a new type of social space where people could still apparently meet face-to-face, but this required a redefinition of the terms "meet" and "face-to-face." These virtual communities allowed people to effortlessly access others, and in many ways to feel better connected, feel that they receive greater support from others, and to obtain emotional satisfaction from their families, communities and society. However, this way of communicating has several obvious problems. Its main limitation is that it cannot fully convey people's diverse emotions, so it can cause misunderstandings between people. Pseudocommunity theory In 1987, this understanding of social spaces was challenged by scholars such as James R. Beniger. Beniger questioned whether these virtual communities were "real" or were pseudo communities, "a pattern of relating that, while looking highly interpersonal, is essentially impersonal." He put forward the idea that in a society within the virtual world, participants lack the necessary honesty it would take to create a "real" virtual community. Weakening of social ties In many cases the introduction of the Internet as a social instigator may cause a repercussion leading to a weakening of social ties. In a study conducted in 1998, Robert Kraut et al. discovered that Internet users were becoming less socially involved. They linked this to an increase in loneliness and depression in relation to use of the Internet. Though these findings may have been sound, in a later study, Kraut et al. revisited the original study with the idea of expanding the initial sample and correlating it with subsequently collected longitudinal data.
This synthesis produced a different outcome than the one that Kraut had originally presented. Studies such as a 2000 report in Sexual Addiction and Compulsivity indicate that people who constantly practice virtual sexual stimulation experience problems, including social stigma and loss of approval. In this newer paper, Kraut stated that there were fewer negative effects than he had originally found, and in some cases the negative effect had vanished. In the second study he saw that small positive effects began to appear in social involvement and psychological well-being. Assessing the effect of the Internet over a period of time, he saw people's use of the Internet increase in sophistication. During the Kraut et al. study, the researchers asked reclusive people whether they used the Internet to counteract the loss of social skills that are needed in face-to-face encounters. They also asked people with strong social skills whether they used the Internet to amplify their abilities to network amongst people. The study discovered that the people who already possessed strong social skills were the ones who received the most beneficial outcome from using the Internet. The concluding analysis was that, rather than helping to decrease the difference between those who already had social skills and those lacking in social skills, internet use had actually exacerbated the differences in the skill level needed for social interaction. Assisting reclusive people This theory was later challenged in a study by McKenna et al., which indicated that people who are more socially inept use the internet to create an initial contact which allows them to explore their "true self" within these interactions. These social interactions within cyberspace tend to lead to closer and higher-quality relationships, which influence face-to-face encounters. In essence, these findings meant that although it is not clear whether the internet helps reclusive people develop better social skills, it does allow reclusive people to form relationships that may not have existed otherwise because of their lack of comfort with interpersonal situations in general. When these relationships develop into face-to-face relationships, it is hard to distinguish them from those that started as face-to-face interactions. Future studies on this topic may allow scholars to define whether or not society is becoming too dependent on the Internet as a social tool. Such relationships are also found among people suffering from depression, suicidal ideation and other mental health problems. For example, suicidal people were more likely to go online in search of new interpersonal relationships and to seek interpersonal help. Similar findings were reported for suicidal LGBT individuals. These studies show that people who have trouble meeting similar others, not only the 'socially inept', are using the internet to create stronger and more extensive interpersonal relationships. See also Social networking service Robert Kraut Harry Reis References Further reading Clift, Pamala (2013) Virgin's Handbook on Virtual Relationships (CreateSpace) Dwyer, Diana (2000) Interpersonal Relationships (Routledge Modular Psychology) Englehardt, E.E. (2001), Ethical Issues in Interpersonal Communication: Friends, Intimates, Sexuality, Marriage, and Family, Hartcourt College Publishers, Fort Worth, TX Aboujaoude, E. (2011). Virtually you: The dangerous powers of the e-personality. New York: W. W. Norton.
Friendship Intimate relationships Interpersonal relationships Internet culture
A kiss is the touching or pressing of one's lips against another person, animal or object. Cultural connotations of kissing vary widely; depending on the culture and context, a kiss can express sentiments of love, passion, romance, sexual attraction, sexual activity, sexual intercourse, sexual arousal, affection, respect, greeting, peace, or good luck, among many others. In some situations, a kiss is a ritual, formal or symbolic gesture indicating devotion, respect, or a sacramental. The word comes from Old English cyssan ('to kiss'), in turn from coss ('a kiss'). History Anthropologists disagree on whether kissing is an instinctual or learned behaviour. Those who believe kissing to be an instinctual behaviour cite similar behaviours in other animals such as bonobos, which are known to kiss after fighting, possibly to restore peace. Others believe that it is a learned behaviour, having evolved from activities such as suckling or premastication in early human cultures passed on to modern humans. Another theory posits that the practice originated with males during the Paleolithic era tasting the saliva of females to test their health and determine whether they would make a good partner for procreation. The fact that not all human cultures kiss is used as an argument against kissing being an instinctual behaviour in humans; only around 90% of the human population is believed to practice kissing. The earliest reference to kissing-like behavior comes from the Vedas, Sanskrit scriptures that informed Hinduism, Buddhism, and Jainism, around 3,500 years ago, according to Vaughn Bryant, an anthropologist at Texas A&M University who specialized in the history of the kiss. However, recent studies challenge the belief that kissing originated in South Asia around 1500 BCE, arguing that there is no single point of origin in historical times. Figurines have been found that indicate kissing may have been practiced in prehistory. It has been suggested that Neanderthals and humans kissed. Evidence from ancient Mesopotamia and Egypt suggests that kissing was documented as early as 2500 BCE. Kissing was present in both romantic and familial contexts in ancient Mesopotamia, but it was subject to social regulation, and public display of the sexual aspect of kissing was discouraged. Kissing also had a role in rituals. The act of kissing may have unintentionally facilitated the transmission of orally transmitted microorganisms, potentially leading to disease. Advances in ancient DNA extraction have revealed pathogen genomes in human remains, including those transmitted through saliva. The shift in dominant lineages of the herpes simplex virus 1 (HSV-1) during the Bronze Age implies that cultural practices like romantic-sexual kissing could have contributed to its transmission. Ancient Mesopotamian medical texts mention a disease called bu'shanu, which may have been related to HSV-1 infection. While kissing itself was not directly associated with disease transmission in Mesopotamia, certain cultural and religious factors governed its practice.
Both lip and tongue kissing are mentioned in Sumerian poetry. Kissing is also described in the surviving ancient Egyptian love poetry from the New Kingdom, found on papyri excavated at Deir el-Medina. The earliest reference to kissing in the Old Testament is in Genesis 27, when Jacob deceives his father to obtain his blessing. Genesis 29 features the first man-woman kiss in the Bible, when Jacob flees from Esau and goes to the house of his uncle Laban. Much later, there is the oft-quoted verse from the Song of Songs. In Cyropaedia (370 BC), Xenophon wrote about the Persian custom of kissing on the lips upon departure while narrating the departure of Cyrus the Great as a boy from his Median kinsmen. According to Herodotus (5th century BC), when two Persians meet, the greeting formula expresses their equal or unequal status. They do not speak; rather, equals kiss each other on the mouth, and in the case where one is a little inferior to the other, the kiss is given on the cheek. During the later Classical period, affectionate mouth-to-mouth kissing was first described in the Hindu epic the Mahabharata. Anthropologist Vaughn Bryant argues kissing spread from India to Europe after Alexander the Great conquered parts of Punjab in northern India in 326 BCE. The Romans were passionate about kissing and talked about several types of kissing. Kissing the hand or cheek was called an osculum. Kissing on the lips with mouth closed was called a basium, which was used between relatives. A kiss of passion was called a savium. Kissing was not always an indication of eros, or love, but also could show respect and rank, as it was used in Medieval Europe. The study of kissing started sometime in the nineteenth century and is called philematology; it has been studied by researchers including Cesare Lombroso, Ernest Crawley, Charles Darwin, Edward Burnett Tylor and modern scholars such as Elaine Hatfield. Types Kristoffer Nyrop identified a number of types of kisses, including kisses of love, affection, peace, respect, and friendship. He notes, however, that the categories are somewhat contrived and overlapping, and some cultures have more kinds, including the French with twenty and the Germans with thirty. Expression of affection Kissing another person's lips has become a common expression of affection or warm greeting in many cultures worldwide. Yet in certain cultures, kissing was introduced only through European settlement, before which it was not a routine occurrence. Such cultures include certain indigenous peoples of Australia, the Tahitians, and many tribes in Africa. A kiss can also be used to express feelings without an erotic element but can be nonetheless "far deeper and more lasting", writes Nyrop. He adds that such kisses can be an expression of love "in the widest and most comprehensive meaning of the word, bringing a message of loyal affection, gratitude, compassion, sympathy, intense joy, and profound sorrow." Nyrop writes that the most common example is the "intense feeling which knits parents to their offspring", but he adds that kisses of affection are not only common between parents and children, but also between other members of the same family, which can include those outside the immediate family circle, "everywhere where deep affection unites people." The tradition is written of in the Bible, as when Esau met Jacob after a long separation, he ran towards him, fell on his neck, and kissed him (Genesis 33:4), Moses greeted his father-in-law and kissed him (Exodus 18:7), and Orpah kissed her mother-in-law before leaving her (Ruth 1:14).
The family kiss was traditional with the Romans, and kisses of affection are often mentioned by the early Greeks, as when Odysseus, on reaching his home, meets his faithful shepherds. Affection can be a cause of kissing "in all ages in grave and solemn moments," notes Nyrop, "not only among those who love each other, but also as an expression of profound gratitude. When the Apostle Paul took leave of the elders of the congregation at Ephesus, "they all wept sore, and fell on Paul's neck and kissed him" (Acts 20:37)." Kisses can also be exchanged between total strangers, as when there is a profound sympathy with or the warmest interest in another person. Folk poetry has been the source of affectionate kisses where they sometimes played an important part, as when they had the power to cast off spells or to break bonds of witchcraft and sorcery, often restoring a man to his original shape. Nyrop notes that poetical stories of the "redeeming power of the kiss are to be found in the literature of many countries, especially, for example, in the Old French Arthurian romances (Lancelot, Guiglain) in which the princess is changed by evil arts into a dreadful dragon, and can only resume her human shape in the case of a knight being brave enough to kiss her." In the reverse situation, in the tale of "Beauty and the Beast", a transformed prince told the girl that he had been bewitched by a wicked fairy, and could not regain his human form unless a maid fell in love with him and kissed him, despite his ugliness. A kiss of affection can also take place after death. In Genesis 50:1, it is written that when Jacob was dead, "Joseph fell upon his father's face and wept upon him and kissed him." And it is told of Abu Bakr, Muhammad's first disciple, father-in-law, and successor, that, when the prophet was dead, he went into the latter's tent, uncovered his face, and kissed his forehead. Nyrop writes that "the kiss is the last tender proof of love bestowed on one we have loved, and was believed, in ancient times, to follow mankind to the nether world." Kissing on the lips can be a physical expression of affection or love between two people in which the sensations of touch, taste, and smell are involved. According to the psychologist Menachem Brayer, although many "mammals, birds, and insects exchange caresses" which appear to be kisses of affection, they are not kisses in the human sense. Surveys indicate that kissing is the second most common form of physical intimacy among United States adolescents (after holding hands), and that about 85% of 15 to 16-year-old adolescents in the US have experienced it. Kiss on the lips The kiss on the lips can be performed between two friends or family members. This gesture aims to express affection for a friend. Unlike kissing for love, a friendly kiss has no sexual connotation. The kiss on the lips is a practice that can be found in the time of the patriarchs (Bible). In Ancient Greece, the kiss on the mouth was used to express a concept of equality between people of the same rank. In the Middle Ages, the kiss of peace was recommended by the Catholic Church. The kiss on the lips was also common among knights. The gesture has again become popular with young people, particularly in England. Romantic kiss In many cultures, it is considered a harmless custom for teenagers to kiss on a date or to engage in kissing games with friends. These games serve as icebreakers at parties and may be some participants' first exposure to sexuality.
There are many such games, including truth or dare, seven minutes in heaven (or the variation "two minutes in the closet"), spin the bottle, post office, and wink. The psychologist William Cane notes that kissing in Western society is often a romantic act and describes a few of its attributes. Romantic kissing in Western cultures is a fairly recent development and is rarely mentioned even in ancient Greek literature. In the Middle Ages it became a social gesture and was considered a sign of refinement of the upper classes. Other cultures have different definitions and uses of kissing, notes Brayer. In China, for example, a similar expression of affection consists of rubbing one's nose against the cheek of another person. In other Eastern cultures kissing is not common. In South East Asian countries the "sniff kiss" is the most common form of affection and Western mouth to mouth kissing is often reserved for sexual foreplay. In some tribal cultures the "equivalent to 'kiss me' is 'smell me.'" The kiss can be an important expression of love and erotic emotions. In his book The Kiss and its History, Kristoffer Nyrop describes the kiss of love as an "exultant message of the longing of love, love eternally young, the burning prayer of hot desire, which is born on the lovers' lips, and 'rises,' as Charles Fuster has said, 'up to the blue sky from the green plains,' like a tender, trembling thank-offering." Nyrop adds that the love kiss, "rich in promise, bestows an intoxicating feeling of infinite happiness, courage, and youth, and therefore surpasses all other earthly joys in sublimity." He also compares it to achievements in life: "Thus even the highest work of art, yet, the loftiest reputation, is nothing in comparison with the passionate kiss of a woman one loves." The power of a kiss is not minimized when he writes that "we all yearn for kisses and we all seek them; it is idle to struggle against this passion. No one can evade the omnipotence of the kiss ..." Kissing, he implies, can lead one to maturity: "It is through kisses that a knowledge of life and happiness first comes to us. Runeberg says that the angels rejoice over the first kiss exchanged by lovers," and can keep one feeling young: "It carries life with it; it even bestows the gift of eternal youth." The importance of the lover's kiss can also be significant, he notes: "In the case of lovers a kiss is everything; that is the reason why a man stakes his all for a kiss," and "man craves for it as his noblest reward." As a result, kissing as an expression of love is contained in much of literature, old and new. Nyrop gives a vivid example in the classic love story of Daphnis and Chloe, in which, as a reward, "Chloe has bestowed a kiss on Daphnis—an innocent young-maid's kiss, but it has on him the effect of an electrical shock". Romantic kissing "requires more than simple proximity," notes Cane. It also needs "some degree of intimacy or privacy, ... which is why you'll see lovers stepping to the side of a busy street or sidewalk." Psychologist Wilhelm Reich "lashed out at society" for not giving young lovers enough privacy and making it difficult to be alone. However, Cane describes how many lovers manage to attain romantic privacy despite being in a public setting, as they "lock their minds together" and thereby create an invisible sense of "psychological privacy." He adds, "In this way they can kiss in public even in a crowded plaza and keep it romantic."
Nonetheless, when Cane asked people to describe the most romantic places they ever kissed, "their answers almost always referred to this ends-of-the-earth isolation, ... they mentioned an apple orchard, a beach, out in a field looking at the stars, or at a pond in a secluded area ..." French kiss A French kiss, also known as cataglottism or a tongue kiss, is an amorous kiss in which the participants' tongues extend to touch each other's lips or tongue. A kiss with the tongue stimulates the partner's lips, tongue and mouth, which are sensitive to touch and induce sexual arousal. The sensation when two tongues touch—also known as tongue touching—has been found to stimulate endorphin release and reduce acute stress levels. Extended French kissing may be part of making out. The term originated at the beginning of the 20th century, in America and Great Britain, as the French had acquired a reputation for more adventurous and passionate sex practices. French kissing may be a mode for disease transmission, particularly if there are open wounds. Kiss as ritual Throughout history, a kiss has been a ritual, formal, symbolic or social gesture indicating devotion, respect or greeting. It appears as a ritual or symbol of religious devotion, for example in the case of kissing a temple floor, or a religious book or icon. Besides devotion, a kiss has also indicated subordination or, nowadays, respect. In modern times the practice continues, as in the case of a bride and groom kissing at the conclusion of a wedding ceremony or national leaders kissing each other in greeting, and in many other situations. Religion A kiss in a religious context is common. In earlier periods of Christianity or Islam, kissing became a ritual gesture, and is still treated as such in certain customs, as when "kissing... relics, or a bishop's ring." In Judaism, the kissing of the Torah scroll, a prayer book, and a prayer shawl is also common. Crawley notes that it was "very significant of the affectionate element in religion" to give so important a part to the kiss as part of its ritual. In the early Church the baptized were kissed by the celebrant after the ceremony, and its use was even extended as a salute to saints and religious heroes, with Crawley adding, "Thus Joseph kissed Jacob, and his disciples kissed Paul. Joseph kissed his dead father, and the custom was retained in our civilization", as the farewell kiss on dead relatives, although certain sects prohibit this today. A distinctive element in the Christian liturgy was noted by Justin in the 2nd century, now referred to as the "kiss of peace," and once part of the rite in the primitive Mass. Conybeare has stated that this act originated within the ancient Hebrew synagogue, and Philo, the ancient Jewish philosopher, called it a "kiss of harmony", where, as Crawley explains, "the Word of God brings hostile things together in concord and the kiss of love." Saint Cyril also writes, "this kiss is the sign that our souls are united, and that we banish all remembrance of injury." Kiss of peace Nyrop notes that the kiss of peace was used as an expression of deep, spiritual devotion in the early Christian Church. Christ said, for instance, "Peace be with you, my peace I give you," and the members of Christ's Church gave each other peace symbolically through a kiss.
St Paul repeatedly speaks of the "holy kiss"; in his Epistle to the Romans, he writes: "Salute one another with an holy kiss", and in his first Epistle to the Thessalonians (1 Thessalonians 5:26), he says: "Greet all the brethren with an holy kiss." The kiss of peace was also used in secular festivities. During the Middle Ages, for example, Nyrop points out that it was the custom to "seal the reconciliation and pacification of enemies by a kiss." Even knights gave each other the kiss of peace before proceeding to the combat, and forgave one another all real or imaginary wrongs. The holy kiss was also found in the ritual of the Church on solemn occasions, such as baptism, marriage, confession, ordination, or obsequies. However, toward the end of the Middle Ages the kiss of peace disappeared as the official token of reconciliation. Kiss of respect The kiss of respect is of ancient origin, notes Nyrop. He writes that "from the remotest times we find it applied to all that is holy, noble, and worshipful—to the gods, their statues, temples, and altars, as well as to kings and emperors; out of reverence, people even kissed the ground, and both sun and moon were greeted with kisses." He notes some examples, as "when the prophet Hosea laments over the idolatry of the children of Israel, he says that they make molten images of calves and kiss them" (Hosea 13:2). In classical times similar homage was often paid to the gods, and people were known to kiss the hands, knees, feet, and the mouths of their idols. Cicero writes that the lips and beard of the famous statue of Hercules at Agrigentum were worn away by the kisses of devotees. People kissed the cross with the image of Jesus, and such kissing of the cross is always considered a holy act. In many countries it is required, on taking an oath, as the highest assertion that the witness would be speaking the truth. Nyrop notes that "as a last act of charity, the image of the Redeemer is handed to the dying or death-condemned to be kissed." Kissing the cross brings blessing and happiness; people kiss the image of Mary and the pictures and statues of saints—not only their pictures, "but even their relics are kissed," notes Nyrop. "They make both soul and body whole." There are legends innumerable of sick people regaining their health by kissing relics, he points out. The kiss of respect has also represented a mark of fealty, humility and reverence. Its use in ancient times was widespread, and Nyrop gives examples: "people threw themselves down on the ground before their rulers, kissed their footprints, literally 'licked the dust,' as it is termed." "Nearly everywhere, wheresoever an inferior meets a superior, we observe the kiss of respect. The Roman slaves kissed the hands of their masters; pupils and soldiers those of their teachers and captains respectively." People also kissed the earth for joy on returning to their native land after a lengthened absence, as when Agamemnon returned from the Trojan War. Kiss of friendship The kiss is also commonly used in American and European culture as a salutation between friends or acquaintances. The friendly kiss until recent times usually occurred only between ladies, but today it is also common between men and women, especially if there is a great difference in age.
According to Nyrop, up until the 20th century, "it seldom or never takes place between men, with the exception, however, of royal personages," although he notes that in former times the "friendly kiss was very common with us between man and man as well as between persons of opposite sexes." In guilds, for example, it was customary for the members to greet each other "with hearty handshakes and smacking kisses," and, on the conclusion of a meal, people thanked and kissed both their hosts and hostesses. Cultural significance Kissing does not take place among approximately 10% of the world's population, for a variety of reasons, including that some find it dirty or hold superstitious beliefs against it. For example, in parts of Sudan it is believed that the mouth is the portal to the soul, so people do not want to invite death or have their spirit taken. Psychology professor Elaine Hatfield noted that "kissing was far from universal and even seen as improper by many societies." Despite kissing being widespread, in some parts of the world it is still taboo to kiss publicly, and kissing is often banned in films or in other media. As a theme in art South Asia On-screen lip-kissing was not a regular occurrence in Bollywood until the 1990s, although it has been present from the time of the inception of Bollywood. This can appear contradictory, since the culture of kissing is believed to have originated and spread from India. Middle East There are also taboos as to whom one can kiss in some Muslim-majority societies governed by religious law. In the Islamic Republic of Iran, a man who kisses or touches a woman who is not his wife or relative can be punished with up to 100 lashes or even imprisonment. Research from May 2023 found texts from ancient Mesopotamia indicating that kissing was a well-established practice 4,500 years ago. According to Dr Troels Pank Arbøll, one of the authors of this study: "In ancient Mesopotamia, which is the name for the early human cultures that existed between the Euphrates and Tigris rivers in present-day Iraq and Syria, people wrote in cuneiform script on clay tablets. Many thousands of these clay tablets have survived to this day, and they contain clear examples that kissing was considered a part of romantic intimacy in ancient times, just as kissing could be part of friendships and family members' relations." East Asia Donald Richie comments that in Japan, as in China, although kissing took place in erotic situations, in public "the kiss was invisible", and the "touching of the lips never became the culturally encoded action it has for so long been in Europe and America." The early Edison film, The Widow Jones – the May Irwin-John Rice Kiss (1896), created a sensation when it was shown in Tokyo, and people crowded to view the enormity. Likewise, Rodin's sculpture The Kiss was not displayed in Japan until after the Pacific War. Also, in the 1900s, Manchu tribes along the Amur River regarded public kissing between adults with revulsion. In a similar situation in Chinese tradition, when Chinese men saw Western women kissing men in public, they thought the women were prostitutes. Contemporary practices In modern Western culture, kissing on the lips is commonly an expression of romantic affection or a warm greeting. When lips are pressed together for an extended period, usually accompanied with an embrace, it is an expression of romantic and sexual desire. 
The practice of kissing with an open mouth, to allow the other to suck their lips or move their tongue into their mouth, is called French kissing. "Making out" is often an adolescent's first experience of their sexuality, and games which involve kissing, such as spin the bottle, facilitate the experience. People may kiss children on the forehead to comfort them or on the cheek or lips to show affection. In modern Eastern culture, etiquette varies depending on the region. In West Asia, kissing on the lips between both men and women is a common form of greeting. In South and Eastern Asia, it might often be a greeting between women; between men, however, it is unusual. Kissing a baby on the cheeks is a common form of affection. Most kisses between men and women are on the cheeks and not on the lips unless they are romantically involved. Sexual forms of kissing between lovers encompass the whole range of global practices. Kissing in films The first romantic kiss on screen was in American silent films in 1896, beginning with the film The Kiss. The kiss lasted 18 seconds and caused many to rail against decadence in the new medium of silent film. Writer Louis Black writes that "it was the United States that brought kissing out of the Dark Ages." However, it met with severe disapproval by defenders of public morality, especially in New York. One critic proclaimed that "it is absolutely disgusting. Such things call for police interference." Young moviegoers began emulating romantic stars on the screen, such as Ronald Colman and Rudolph Valentino, the latter known for ending his passionate scenes with a kiss. Valentino also began his romantic scenes with women by kissing the woman's hand, traveling up her arm, and then kissing her on the back of her neck. Actresses were often turned into stars based on their screen portrayals of passion. Actresses like Nazimova, Pola Negri, Vilma Bánky and Greta Garbo became screen idols as a result. Eventually, the film industry began to adopt the dictates of the Production Code established in 1934, overseen by Will Hays and influenced by Christian religious leaders in America. According to the new code, "Excessive and lustful kissing, lustful embraces, suggestive postures and gestures, are not to be shown." As a result, kissing scenes were shortened, with scenes cut away, leaving the imagination of the viewer to take over. Under the code, actors kissing had to keep their feet on the ground and had to be either standing or sitting. The heyday of romantic kissing on the screen took place in the early sound era, during the Golden Age of Hollywood in the 1930s and 1940s. Body language began to be used to supplement romantic scenes, especially with the eyes, a talent that added to Greta Garbo's fame. Author Lana Citron writes that "men were perceived as the kissers and women the receivers. Should the roles ever be reversed, women were regarded as vamps..." According to Citron, Mae West and Anna May Wong were the only Hollywood actresses never to have been kissed on screen. Among the films rated for having the most romantic kisses are Gone with the Wind, From Here to Eternity, Casablanca, and To Have and Have Not. Sociologist Eva Illouz notes that surveys taken in 1935 showed that "love was the most important theme represented in movies." Similar surveys during the 1930s found that 95% of films had romance as one of their plot lines, what film critics called "the romantic formula." In early Japanese films, kissing and sexual expression were controversial. 
In 1931, a director slipped a kissing scene past the censor (who was a friend), but when the film opened in a downtown Tokyo theater, the screening was stopped and the film confiscated. During the American occupation of Japan, in 1946, an American censor required a film to include a kissing scene. One scholar says that the censor suggested "we believe that even Japanese do something like kissing when they love each other. Why don't you include that in your films?" Americans encouraged such scenes to force the Japanese to express publicly actions and feelings that had been considered strictly private. Since Pearl Harbor, Americans had felt that the Japanese were "sneaky", claiming that "if Japanese kissed in private, they should do it in public too." Non-sexual kisses In some Western cultures, it is considered good luck to kiss someone on Christmas or on New Year's Eve, especially beneath a sprig of mistletoe. Newlyweds usually kiss at the end of a wedding ceremony. Female friends and relations and close acquaintances commonly offer reciprocal kisses on the cheek as a greeting or farewell. Where cheek kissing is used, in some countries a single kiss is the custom, while in others a kiss on each cheek is the norm, or even three or four kisses on alternating cheeks. In the United States, an air kiss is becoming more common. This involves kissing in the air near the cheek, with the cheeks touching or not. After a first date, it is common for the couple to give each other a quick kiss on the cheek (or lips where that is the norm) on parting, to indicate that a good time was had and perhaps to indicate an interest in another meeting. A symbolic kiss is frequent in Western cultures. A kiss can be "blown" to another by kissing the fingertips and then blowing the fingertips, pointing them in the direction of the recipient. This is used to convey affection, usually when parting or when the partners are physically distant but can view each other. Blown kisses are also used when a person wishes to convey affection to a large crowd or audience. The term flying kiss is used in India to describe a blown kiss. In written correspondence a kiss has been represented by the letter "X" since at least 1763. A stage or screen kiss may be performed by actually kissing, or faked by using the thumbs as a barrier for the lips and turning so the audience is unable to fully see the act. Some literature suggests that a significant percentage of humanity does not kiss. It has been claimed that in Sub-Saharan African, Asiatic, Polynesian and possibly in some Native American cultures, kissing was relatively unimportant until European colonization. Historically however, the culture of kissing is thought to have begun and spread from the Eastern World, specifically India. With the Andamanese, kissing was only used as a sign of affection towards children and had no sexual undertones. In traditional Islamic cultures, kissing is not permitted between a man and woman who are not married or closely related by blood or marriage. A kiss on the cheek is a very common form of greeting among members of the same sex in most Islamic countries, much like the Southern European pattern. Legality of public kissing In 2007, two people were fined and jailed for a month after kissing and hugging in public in Dubai. In India, public display of affection is a criminal offence under Section 294 of the Indian Penal Code, 1860 with a punishment of imprisonment of up to three months, or a fine, or both. 
This law was used by police to prosecute couples engaging in intimate acts, such as kissing in public. However, in a number of landmark cases, the higher courts dismissed assertions that kissing in public is obscene. Legality of unwanted kissing In New York in the United States, an unwanted kiss constitutes the sex offense of forcible touching. In Italy, the Supreme Court of Cassation has upheld sexual violence convictions for forced kisses. In Australia, unwanted kissing is sexual assault. In the Netherlands, forced tongue-kissing was prosecuted as rape from 1998 until 2017, when the Dutch Supreme Court ruled that it should instead (while still deemed illegal) be viewed as a potential form of sexual assault, carrying a maximum eight-year prison sentence. In religion Kissing was a custom during the Biblical period mentioned in the Book of Genesis, when Isaac kissed his son Jacob. The kiss is used in numerous other contexts in the Bible: the kiss of homage, in Esther 5:2; of subjection, in 1 Samuel 10:1; of reconciliation, in 2 Samuel 14:33; of valediction, in Ruth 1:14; of approbation, in Psalms 2:12; of humble gratitude, in Luke 7:38; of welcome, in Exodus 18:7; of love and joy, in Genesis 29:11. There are also spiritual kisses, as in Song of Songs 1:2; sensual kisses, as in Proverbs 7:13; and hypocritical kisses, as in 2 Samuel 15:5. It was customary to kiss the mouth in biblical times, and also the beard, which is still practiced in Arab culture. Kissing the hand is not biblical, according to Tabor. The kiss of peace was an apostolic custom, and continues to be one of the rites in the Eucharistic services of Roman Catholics. In the Roman Catholic Order of Mass, the bishop or priest celebrant bows and kisses the altar, reverencing it, upon arriving at the altar during the entrance procession before Mass and upon leaving at the recessional at the closing of Mass; if a deacon is assisting, he bows low before the altar but does not kiss it. Among primitive cultures, it was usual to throw kisses to the sun and to the moon, as well as to the images of the gods. Kissing the hand is first heard of among the Persians. According to Tabor, the kiss of homage—the character of which is not indicated in the Bible—was probably upon the forehead, and was expressive of high respect. In Ancient Rome and some modern Pagan beliefs, worshipers, when passing the statue or image of a god or goddess, will kiss their hand and wave it towards the deity (adoration). The holy kiss or kiss of peace is a traditional part of most Christian liturgies, though often replaced with an embrace or handshake today in Western cultures. In the gospels of Matthew and Mark (Luke and John omit this), Judas betrayed Jesus with a kiss: an instance of a kiss tainted with betrayal. This is the basis of the term "the kiss of Judas". Catholics will kiss rosary beads as a part of prayer, or kiss their hand after making the sign of the cross. It is also common to kiss the wounds on a crucifix, or any other image of Christ's Passion. Pope John Paul II would kiss the ground on arrival in a new country. Visitors to the pope traditionally kiss his foot. Catholics traditionally kiss the ring of a cardinal or bishop. Catholics traditionally kiss the hand of a priest. Eastern Orthodox and Eastern Catholic Christians often kiss the icons around the church on entering; they will also kiss the cross and/or the priest's hand in certain other customs in the church, such as confession or receiving a blessing. 
Local lore in Ireland suggests that kissing the Blarney Stone will bring the gift of the gab. Jews will kiss the Western Wall of the Holy Temple in Jerusalem, and other religious articles during prayer such as the Torah, usually by touching their hand, Tallis, or Siddur (prayerbook) to the Torah and then kissing it. Jewish law prohibits kissing members of the opposite sex, except for spouses and certain close relatives. See Negiah. Muslims may kiss the Black Stone during Hajj (pilgrimage to Mecca). Many Muslims also kiss shrines of Ahlulbayt and Sufis. Biology and evolution Within the natural world of other animals, there are numerous analogies to kissing, notes Crawley, such as "the billing of birds, the cataglottism of pigeons and the antennal play of some insects." Even among mammals such as the dog, cat and bear, similar behavior is noted. Anthropologists have not reached a conclusion as to whether kissing is learned or a behavior from instinct. It may be related to grooming behavior also seen between other animals, or arising as a result of mothers premasticating food for their children. Non-human primates also exhibit kissing behavior. Dogs, cats, birds and other animals display licking, nuzzling, and grooming behavior among themselves, and also towards humans or other species. This is sometimes interpreted by observers as a type of kissing. Kissing in humans was argued by ethologist Eibl-Eibesfeldt to have evolved from the direct mouth-to-mouth regurgitation of food (kiss-feeding) from parent to offspring or male to female (courtship feeding) and has been observed in numerous mammals. The similarity in the methods between kiss-feeding and deep human kisses (e.g. the French kiss) is quite pronounced; in the former, the tongue is used to push food from the mouth of the mother to the child, with the child receiving both the mother's food and tongue in sucking movements, and the latter is the same but forgoes the premasticated food. Observations across various species and cultures suggest that kissing most likely evolved from such relationship-based feeding behaviours. Physiology Kissing is a complex behavior that requires significant muscular coordination, involving a total of 34 facial muscles and 112 postural muscles. The most important muscle involved is the orbicularis oris muscle, which is used to pucker the lips and is informally known as the kissing muscle. In the case of the French kiss, the tongue is also an important component. Lips have many nerve endings which make them sensitive to touch and bite. Health benefits Kissing stimulates the production of hormones responsible for a good mood: oxytocin, which releases the feeling of love and strengthens the bond with the partner; endorphins, the hormones responsible for the feeling of happiness; and dopamine, which stimulates the pleasure center in the brain. Affection in general has stress-reducing effects. Kissing in particular has been studied in a controlled experiment, and it was found that increasing the frequency of kissing in marital and cohabiting relationships results in a reduction of perceived stress, an increase in relationship satisfaction, and a lowering of cholesterol levels. Disease transmission Kissing on the lips can result in the transmission of some diseases, including infectious mononucleosis (known as the "kissing disease") and herpes simplex when the infectious viruses are present in saliva. 
Research indicates that contraction of HIV via kissing is extremely unlikely, although there was a documented case in 1997 of an HIV infection by kissing. Both the woman and the infected man had gum disease, so transmission was through the man's blood, not through saliva. See also Eskimo kissing Hand-kissing Hugs and kisses International Kissing Day Kissing games Kissing traditions Kissing booth References Further reading Beadnell, C. M. (1942). The Origin of the Kiss. Thinkers Library No. 89, Watts & Co, London. External links Kissing in Strange Places. — slideshow by Life magazine. Put your sweet lips... (a history of the kiss), Keith Thomas, The Times, June 11, 2005. The Kiss of Life, Joshua Foer, The New York Times, February 14, 2006. Why do humans kiss each other when most animals don't?, Melissa Hogenboom, BBC Earth, July 2015. How Kissing Works, History and Anatomy of the Kiss, Tracy V. Wilson, HowStuffWorks. Greetings Interpersonal relationships Sexual acts Gestures
Kiss
[ "Biology" ]
8,125
[ "Behavior", "Sexual acts", "Sexuality", "Gestures", "Interpersonal relationships", "Human behavior", "Mating" ]
50,400
https://en.wikipedia.org/wiki/Dream%20interpretation
Dream interpretation is the process of assigning meaning to dreams. In many ancient societies, such as those of Egypt and Greece, dreaming was considered a supernatural communication or a means of divine intervention, whose message could be interpreted by people with these associated spiritual powers. In the modern era, various schools of psychology and neurobiology have offered theories about the meaning and purpose of dreams. History Early civilizations The ancient Sumerians in Mesopotamia left evidence of dream interpretation dating back to at least 3100 BC. Throughout Mesopotamian history, dreams were always held to be extremely important for divination, and Mesopotamian kings paid close attention to them. Gudea, the king of the Sumerian city-state of Lagash (reigned 2144–2124 BC), rebuilt the temple of Ningirsu as the result of a dream in which he was told to do so. The standard Akkadian Epic of Gilgamesh contains numerous accounts of the prophetic power of dreams. First, Gilgamesh himself has two dreams foretelling the arrival of Enkidu. In one of these dreams, Gilgamesh sees an axe fall from the sky. The people gather around it in admiration and worship. Gilgamesh throws the axe in front of his mother Ninsun and then embraces it like a wife. Ninsun interprets the dream to mean that someone powerful will soon appear. Gilgamesh will struggle with him and try to overpower him, but he will not succeed. Eventually, they will become close friends and accomplish great things. She concludes, "That you embraced him like a wife means he will never forsake you. Thus your dream is solved." Later in the epic, Enkidu dreams about the heroes' encounter with the giant Humbaba. Dreams were also sometimes seen as a means of seeing into other worlds, and it was thought that the soul, or some part of it, moved out of the body of the sleeping person and actually visited the places and persons the dreamer saw in his or her sleep. In Tablet VII of the epic, Enkidu recounts to Gilgamesh a dream in which he saw the gods Anu, Enlil, and Shamash condemn him to death. He also has a dream in which he visits the Underworld. The Assyrian king Ashurnasirpal II (reigned 883–859 BC) built a temple to Mamu, possibly the god of dreams, at Imgur-Enlil, near Kalhu. The later Assyrian king Ashurbanipal (reigned 668–627 BC) had a dream during a desperate military situation in which his divine patron, the goddess Ishtar, appeared to him and promised that she would lead him to victory. The Babylonians and Assyrians divided dreams into "good," which were sent by the gods, and "bad," sent by demons. A surviving collection of dream omens entitled Iškar Zaqīqu records various dream scenarios as well as prognostications of what will happen to the person who experiences each dream, apparently based on previous cases. Some list different possible outcomes, based on occasions in which people experienced similar dreams with different results. Dream scenarios mentioned include a variety of daily work events, journeys to different locations, family matters, sex acts, and encounters with human individuals, animals, and deities. In ancient Egypt, priests acted as dream interpreters. Hieroglyphics depicting dreams and their interpretations are evident. Dreams have been held in considerable importance throughout history by most cultures. Classical Antiquity The ancient Greeks constructed temples they called Asclepieions, where sick people were sent to be cured. 
It was believed that cures would be effected through divine grace by incubating dreams within the confines of the temple. Dreams were also considered prophetic or omens of particular significance. Artemidorus of Daldis, who lived in the 2nd century AD, wrote a comprehensive text Oneirocritica (The Interpretation of Dreams). Although Artemidorus believed that dreams can predict the future, he presaged many contemporary approaches to dreams. He thought that the meaning of a dream image could involve puns and could be understood by decoding the image into its component words. For example, Alexander, while waging war against the Tyrians, dreamt that a satyr was dancing on his shield. Artemidorus reports that this dream was interpreted as follows: satyr = sa tyros ("Tyre will be thine"), predicting that Alexander would be triumphant. Freud acknowledged this example of Artemidorus when he proposed that dreams be interpreted like a rebus. Middle Ages In medieval Islamic psychology, certain hadiths indicate that dreams consist of three parts, and early Muslim scholars recognized three kinds of dreams: false, pathogenic, and true. Ibn Sirin (654–728) was renowned for his Ta'bir al-Ru'ya and Muntakhab al-Kalam fi Tabir al-Ahlam, a book on dreams. The work is divided into 25 sections on dream interpretation, from the etiquette of interpreting dreams to the interpretation of reciting certain Surahs of the Qur'an in one's dream. He writes that it is important for a layperson to seek assistance from an alim (Muslim scholar) who could guide in the interpretation of dreams with a proper understanding of the cultural context and other such causes and interpretations. Al-Kindi (Alkindus) (801–873) also wrote a treatise on dream interpretation: On Sleep and Dreams. In consciousness studies, Al-Farabi (872–951) wrote the On the Cause of Dreams, which appeared as chapter 24 of his Book of Opinions of the people of the Ideal City. It was a treatise on dreams, in which he was the first to distinguish between dream interpretation and the nature and causes of dreams. In The Canon of Medicine, Avicenna extended the theory of temperaments to encompass "emotional aspects, mental capacity, moral attitudes, self-awareness, movements and dreams." Ibn Khaldun's Muqaddimah (1377) states that "confused dreams" are "pictures of the imagination that are stored inside by perception and to which the ability to think is applied, after (man) has retired from sense perception." Ibn Shaheen states: "Interpretations change their foundations according to the different conditions of the seer (of the vision), so seeing handcuffs during sleep is disliked but if a righteous person sees them it can mean stopping the hands from evil". Ibn Sirin said about a man who saw himself giving a sermon from the mimbar: "He will achieve authority and if he is not from the people who have any kind of authority it means that he will be crucified". China A standard traditional Chinese book on dream-interpretation is the Lofty Principles of Dream Interpretation (夢占逸旨) compiled in the 16th century by Chen Shiyuan (particularly the "Inner Chapters" of that opus). Chinese thinkers also raised profound ideas about dream interpretation, such as the question of how we know we are dreaming and how we know we are awake. It is written in the Chuang-tzu: "Once Chuang Chou dreamed that he was a butterfly. He fluttered about happily, quite pleased with the state that he was in, and knew nothing about Chuang Chou. 
Presently he awoke and found that he was very much Chuang Chou again. Now, did Chou dream that he was a butterfly or was the butterfly now dreaming that he was Chou?" This raises the question of reality monitoring in dreams, a topic of intense interest in modern cognitive neuroscience. Modern Europe In the 17th century, the English physician and writer Sir Thomas Browne wrote a short tract upon the interpretation of dreams. Dream interpretation became an important part of psychoanalysis at the end of the 19th century with Sigmund Freud's seminal work The Interpretation of Dreams (Die Traumdeutung; literally "dream-interpretation"). Psychology Freud In The Interpretation of Dreams, Sigmund Freud argued that all dream content is disguised wish-fulfillment (later in Beyond the Pleasure Principle, Freud would discuss dreams which do not appear to be wish-fulfillment). According to Freud, the instigation of a dream is often to be found in the events of the day preceding the dream, which he called the "day residue." In very young children, this can be easily seen, as they dream quite straightforwardly of the fulfillment of wishes that were aroused in them the previous day (the "dream day"). In adults the situation is more complicated since, in Freud's analysis, the dreams of adults have been subjected to distortion, with the dream's so-called "manifest content" being a heavily disguised derivative of the "latent dream-thoughts" present in the unconscious. The dream's real significance is thus concealed: dreamers are no more capable of recognizing the actual meaning of their dreams than hysterics are able to understand the connection and significance of their neurotic symptoms. In Freud's original formulation, the latent dream-thought was described as having been subject to an intra-psychic force referred to as "the censor"; in the terminology of his later years, however, discussion was in terms of the super-ego and the work of the ego's defence mechanisms. In waking life, he asserted, these "resistances" prevented the repressed wishes of the unconscious from entering consciousness, and though these wishes were to some extent able to emerge due to the lowered vigilance of the sleep state, the resistances were still strong enough to force them to take on a disguised or distorted form. Freud's view was that dreams are compromises which ensure that sleep is not interrupted: as "a disguised fulfilment of repressed wishes," they succeed in representing wishes as fulfilled which might otherwise disturb and waken the sleeper. One of Freud's early dream analyses is "Irma's injection", a dream he himself had. In the dream a former patient of his, Irma, complains of pains and Freud's colleague gives her an unsterile injection. Freud provides pages of associations to the elements in his dream, using it to demonstrate his technique of decoding the latent dream thoughts from the manifest content of the dream. Freud suggests that the true meaning of a dream must be "weeded out" from the dream as recalled. Freud listed the distorting operations that he claimed were applied to repressed wishes in forming the dream as recollected: it is because of these distortions (the so-called "dream-work") that the manifest content of the dream differs so greatly from the latent dream thought reached through analysis—and it is by reversing these distortions that the latent content is approached. 
The operations included: Condensation – one dream object stands for several associations and ideas; thus "dreams are brief, meagre and laconic in comparison with the range and wealth of the dream-thoughts." Displacement – a dream object's emotional significance is separated from its real object or content and attached to an entirely different one that does not raise the censor's suspicions. Visualization – a thought is translated to visual images. Symbolism – a symbol replaces an action, person, or idea. To these might be added "secondary elaboration"—the outcome of the dreamer's natural tendency to make some sort of "sense" or "story" out of the various elements of the manifest content as recollected. Freud stressed that it was not merely futile but actually misleading to attempt to explain one part of the manifest content with reference to another part, as if the manifest dream somehow constituted some unified or coherent conception. Freud considered that the experience of anxiety dreams and nightmares was the result of failures in the dream-work: rather than contradicting the "wish-fulfillment" theory, such phenomena demonstrated how the ego reacted to the awareness of repressed wishes that were too powerful and insufficiently disguised. Traumatic dreams (where the dream merely repeats the traumatic experience) were eventually admitted as exceptions to the theory. Freud famously described psychoanalytic dream-interpretation as "the royal road to a knowledge of the unconscious activities of the mind". However, he expressed regret and dissatisfaction at the way his ideas on the subject were misrepresented or simply not understood. Jung Although not dismissing Freud's model of dream interpretation wholesale, Carl Jung believed Freud's notion of dreams as representations of unfulfilled wishes to be limited. Jung argued that Freud's procedure of collecting associations to a dream would bring insights into the dreamer's mental complex—a person's associations to anything will reveal the mental complexes, as Jung had shown experimentally—but not necessarily closer to the meaning of the dream. Jung was convinced that the scope of dream interpretation was larger, reflecting the richness and complexity of the entire unconscious, both personal and collective. Jung believed the psyche to be a self-regulating organism in which conscious attitudes were likely to be compensated for unconsciously (within the dream) by their opposites. And so the role of dreams is to lead a person to wholeness through what Jung calls "a dialogue between ego and the self". The self aspires to tell the ego what it does not know, but should. This dialogue involves fresh memories, existing obstacles, and future solutions. Jung proposed two basic approaches to analyzing dream material: the objective and the subjective. In the objective approach, every person in the dream refers to the person they are: mother is mother, girlfriend is girlfriend, etc. In the subjective approach, every person in the dream represents an aspect of the dreamer. Jung argued that the subjective approach is much more difficult for the dreamer to accept, but that in most good dream-work, the dreamer will come to recognize that the dream characters can represent an unacknowledged aspect of the dreamer. Thus, if the dreamer is being chased by a crazed killer, the dreamer may come eventually to recognize his own homicidal impulses. 
Gestalt therapists extended the subjective approach, claiming that even the inanimate objects in a dream can represent aspects of the dreamer. Jung believed that archetypes such as the animus, the anima, the shadow, and others manifested themselves in dreams, as dream symbols or figures. Such figures could take the form of an old man, a young maiden, or a giant spider, as the case may be. Each represents an unconscious attitude that is largely hidden to the conscious mind. Although an integral part of the dreamer's psyche, these manifestations were largely autonomous and were perceived by the dreamer to be external personages. Acquaintance with the archetypes as manifested by these symbols serves to increase one's awareness of unconscious attitudes, integrating seemingly disparate parts of the psyche and contributing to the process of holistic self-understanding he considered paramount. Jung believed that material repressed by the conscious mind, postulated by Freud to comprise the unconscious, was similar to his own concept of the shadow, which in itself is only a small part of the unconscious. Jung cautioned against blindly ascribing meaning to dream symbols without a clear understanding of the client's personal situation. He described two approaches to dream symbols: the causal approach and the final approach. In the causal approach, the symbol is reduced to certain fundamental tendencies. Thus, a sword may symbolize a penis, as may a snake. In the final approach, the dream interpreter asks, "Why this symbol and not another?" Thus, a sword representing a penis is hard, sharp, inanimate, and destructive. A snake representing a penis is alive, dangerous, perhaps poisonous, and slimy. The final approach will tell additional things about the dreamer's attitudes. Technically, Jung recommended stripping the dream of its details and presenting the gist of the dream to the dreamer. This was an adaptation of a procedure described by Wilhelm Stekel, who recommended thinking of the dream as a newspaper article and writing a headline for it. Harry Stack Sullivan also described a similar process of "dream distillation." Although Jung acknowledged the universality of archetypal symbols, he contrasted this with the concept of a sign—images having a one-to-one connotation with their meaning. His approach was to recognize the dynamism and fluidity that existed between symbols and their ascribed meaning. Symbols must be explored for their personal significance to the patient, instead of having the dream conform to some predetermined idea. This prevents dream analysis from devolving into a theoretical and dogmatic exercise that is far removed from the patient's own psychological state. In the service of this idea, he stressed the importance of "sticking to the image"—exploring in depth a client's association with a particular image. This may be contrasted with Freud's free association, which Jung believed was a deviation from the salience of the image. He describes, for example, the image "deal table." One would expect the dreamer to have some associations with this image, and the professed lack of any perceived significance or familiarity whatsoever should make one suspicious. Jung would ask a patient to imagine the image as vividly as possible and to explain it to him as if he had no idea as to what a "deal table" was. Jung stressed the importance of context in dream analysis. 
Jung stressed that the dream was not merely a devious puzzle invented by the unconscious to be deciphered, so that the true causal factors behind it may be elicited. Dreams were not to serve as lie detectors, with which to reveal the insincerity behind conscious thought processes. Dreams, like the unconscious, had their own language. As representations of the unconscious, dream images have their own primacy and mechanics. Jung believed that dreams may contain ineluctable truths, philosophical pronouncements, illusions, wild fantasies, memories, plans, irrational experiences, and even telepathic visions. Just as the psyche has a diurnal side which we experience as conscious life, it has an unconscious nocturnal side which we apprehend as dreamlike fantasy. Jung would argue that just as we do not doubt the importance of our conscious experience, so we ought not to second-guess the value of our unconscious lives. Hall In 1953, Calvin S. Hall developed a theory of dreams in which dreaming is considered to be a cognitive process. Hall argued that a dream was simply a thought or sequence of thoughts that occurred during sleep, and that dream images are visual representations of personal conceptions. For example, if one dreams of being attacked by friends, this may be a manifestation of fear of friendship; a more complicated example, which requires a cultural metaphor, is that a cat within a dream symbolizes a need to use one's intuition. For English speakers, it may suggest that the dreamer must recognize that there is "more than one way to skin a cat," or in other words, more than one way to do something. He was also critical of Sigmund Freud's psychoanalytic theory of dream interpretation, particularly Freud's notion that the dream of being attacked represented a fear of castration. Hall argued that this dream did not necessarily stem from castration anxiety, but rather represented the dreamer's perception of themselves as weak, passive, and helpless in the face of danger. In support of his argument, Hall pointed out that women have this dream more frequently than men, yet women do not typically experience castration anxiety. Additionally, he noted that there were no significant differences in the form or content of the dream of being attacked between men and women, suggesting that the dream likely has the same meaning for both genders. Hall's work in dream research also provided evidence to support one of Sigmund Freud's theories, the Oedipus Complex. Hall studied the dreams of males and females ages two through twenty-six. He found that young boys frequently dreamed of aggression towards their fathers and older male siblings, while girls dreamed of hostility towards their mothers and older female siblings. These dreams often involved themes of conflict and competition for the affection of the opposite-sex parent, providing empirical support for Freud's theory of the Oedipus Complex. Faraday, Clift, et al. In the 1970s, Ann Faraday and others helped bring dream interpretation into the mainstream by publishing books on do-it-yourself dream interpretation and forming groups to share and analyze dreams. Faraday focused on the application of dreams to situations occurring in one's life. For instance, some dreams are warnings of something about to happen—e.g. a dream of failing an examination, if one is a student, may be a literal warning of unpreparedness. Outside of such context, it could relate to failing some other kind of test. Or it could even have a "punny" nature, e.g. 
that one has failed to examine some aspect of his life adequately. Faraday noted that "one finding has emerged pretty firmly from modern research, namely that the majority of dreams seem in some way to reflect things that have preoccupied our minds during the previous day or two." In the 1980s and 1990s, Wallace Clift and Jean Dalby Clift further explored the relationship between images produced in dreams and the dreamer's waking life. Their books identified patterns in dreaming, and ways of analyzing dreams to explore life changes, with particular emphasis on moving toward healing and wholeness. Neurobiological theory Allan Hobson and colleagues developed what they called the activation-synthesis hypothesis, which proposes that dreams are simply the side effects of neural activity in the brain that produces beta brain waves during REM sleep, waves that are otherwise associated with wakefulness. According to this hypothesis, neurons fire periodically during sleep in the lower brain levels and thus send random signals to the cortex. The cortex then synthesizes a dream in reaction to these signals in order to try to make sense of why the brain is sending them. Although the hypothesis downplays the role that emotional factors play in determining dreams, it does not state that dreams are meaningless. Present-day popular attitudes According to one study conducted in the United States, India, and South Korea, most people in those countries currently appear to interpret dream content according to Freudian psychoanalysis. People appear to believe dreams are particularly meaningful: they assign more meaning to dreams than to similar waking thoughts. For example, people report they would be more likely to cancel a trip they had planned that involved a plane flight if they dreamt of their plane crashing the night before than if the Department of Homeland Security issued a federal warning. However, people do not attribute equal importance to all dreams. People appear to use motivated reasoning when interpreting their dreams. They are more likely to view dreams confirming their waking beliefs and desires to be more meaningful than dreams that contradict their waking beliefs and desires. A paper in 2009 by Carey Morewedge and Michael Norton in the Journal of Personality and Social Psychology found that most people believe that "their dreams reveal meaningful hidden truths." In one study they found that 74% of Indians, 65% of South Koreans and 56% of Americans believed their dream content provided them with meaningful insight into their unconscious beliefs and desires. This Freudian view of dreaming was endorsed significantly more than theories of dreaming that attribute dream content to memory consolidation, problem solving, or random brain activity. This belief appears to lead people to attribute more importance to dream content than to similar thought content that occurs while they are awake. People were more likely to view a positive dream about a friend to be meaningful than a positive dream about someone they disliked, for example, and were more likely to view a negative dream about a person they disliked as meaningful than a negative dream about a person they liked. Layne Dalfen, a contemporary dream analyst and educator, has contributed significantly to the modern understanding of dream interpretation. She developed the Six Points of Entry method, which provides practical tools for analyzing dreams. 
This method helps individuals uncover the emotional significance and potential solutions that dreams may offer, emphasizing their role in personal growth and problem-solving. Through her Dream Interpretation Center, media appearances, online courses and books, Dalfen has made dream analysis accessible to a broader audience. Spiritual dream interpretation Spiritual dream interpretation is a practice that involves understanding dreams through a spiritual or religious lens. It is based on the belief that dreams can offer insights into one's spiritual journey, inner self, and connection to the divine. This approach to dream analysis often draws upon symbolism, archetypes, and metaphors found in various spiritual traditions and teachings. See also Dream dictionary Dream journal Dream sharing Dreams in analytical psychology DreamsID (Dreams Interpreted and Drawn) Lucid dreaming Oneiromancy Oneironautics Personality test Psychoanalytic dream interpretation Recurring dream Layne Dalfen References Works cited Further reading Sechrist, Elsie, with foreword by Cayce, Hugh Lynn (1974). Dreams, Your Magic Mirror. Warner Books. External links Dream Psychology: Psychoanalysis for Beginners – Full text of Sigmund Freud's revisitation of The Interpretation of Dreams Divination Interpretation Freudian psychology Analytical psychology
Dream interpretation
[ "Biology" ]
5,168
[ "Dream", "Behavior", "Sleep" ]
50,408
https://en.wikipedia.org/wiki/Computer%20engineering
Computer engineering (CoE or CpE) is a branch of electrical engineering that integrates several fields of electrical engineering, electronics engineering and computer science required to develop computer hardware and software. Computer engineering is referred to as electrical and computer engineering or computer science and engineering at some universities. Computer engineers require training in electrical engineering, electronic engineering, physics, computer science, hardware-software integration, software design, and software engineering. It uses the techniques and principles of electrical engineering and computer science, and can encompass areas such as electromagnetism, artificial intelligence (AI), robotics, computer networks, computer architecture and operating systems. Computer engineers are involved in many hardware and software aspects of computing, from the design of individual microcontrollers, microprocessors, personal computers, and supercomputers, to circuit design. This field of engineering not only focuses on how computer systems themselves work, but also on how to integrate them into the larger picture. Robotics is one of the applications of computer engineering. Computer engineering usually deals with areas including writing software and firmware for embedded microcontrollers, designing VLSI chips, analog sensors, mixed-signal circuit boards, thermodynamics, and control systems. Computer engineers are also suited for robotics research, which relies heavily on using digital systems to control and monitor electrical systems like motors, communications, and sensors. In many institutions of higher learning, computer engineering students are allowed to choose areas of in-depth study in their junior and senior years because the full breadth of knowledge used in the design and application of computers is beyond the scope of an undergraduate degree. Other institutions may require engineering students to complete one or two years of general engineering before declaring computer engineering as their primary focus. History Computer engineering began in 1939 when John Vincent Atanasoff and Clifford Berry began developing the world's first electronic digital computer through physics, mathematics, and electrical engineering. John Vincent Atanasoff was once a physics and mathematics teacher at Iowa State University, and Clifford Berry was a graduate student in electrical engineering and physics. Together, they created the Atanasoff-Berry computer, also known as the ABC, which took five years to complete. While the original ABC was dismantled and discarded in the 1940s, a tribute was made to the late inventors; a replica of the ABC was made in 1997, where it took a team of researchers and engineers four years and $350,000 to build. The modern personal computer emerged in the 1970s, after several breakthroughs in semiconductor technology. These include the first working transistor, by William Shockley, John Bardeen and Walter Brattain at Bell Labs in 1947; silicon dioxide surface passivation by Carl Frosch and Lincoln Derick in 1955; the first planar silicon dioxide transistors by Frosch and Derick in 1957; the planar process by Jean Hoerni; the monolithic integrated circuit chip by Robert Noyce at Fairchild Semiconductor in 1959; the metal–oxide–semiconductor field-effect transistor (MOSFET, or MOS transistor) demonstrated by a team at Bell Labs in 1960; and the single-chip microprocessor (Intel 4004) by Federico Faggin, Marcian Hoff, Masatoshi Shima and Stanley Mazor at Intel in 1971. 
History of computer engineering education The first computer engineering degree program in the United States was established in 1971 at Case Western Reserve University in Cleveland, Ohio. At last count, there were 250 ABET-accredited computer engineering programs in the U.S. In Europe, accreditation of computer engineering schools is done by a variety of agencies as part of the EQANIE network. Due to increasing job requirements for engineers who can concurrently design hardware, software, firmware, and manage all forms of computer systems used in industry, some tertiary institutions around the world offer a bachelor's degree generally called computer engineering. Both computer engineering and electronic engineering programs include analog and digital circuit design in their curriculum. As with most engineering disciplines, having a sound knowledge of mathematics and science is necessary for computer engineers. Education Computer engineering is referred to as computer science and engineering at some universities. Most entry-level computer engineering jobs require at least a bachelor's degree in computer engineering, electrical engineering or computer science. Typically one must learn an array of mathematics such as calculus, linear algebra and differential equations, along with computer science. Degrees in electronic or electric engineering also suffice due to the similarity of the two fields. Because hardware engineers commonly work with computer software systems, a strong background in computer programming is necessary. According to the BLS, "a computer engineering major is similar to electrical engineering but with some computer science courses added to the curriculum". Some large firms or specialized jobs require a master's degree. It is also important for computer engineers to keep up with rapid advances in technology. Therefore, many continue learning throughout their careers. This can be helpful, especially when it comes to learning new skills or improving existing ones. For example, because the relative cost of fixing a bug increases the further along it is found in the software development cycle, there can be greater cost savings attributed to developing and testing for quality code as soon as possible in the process, particularly before release. Professions A person with a profession in computer engineering is called a computer engineer. Applications and practice There are two major focuses in computer engineering: hardware and software. Computer hardware engineering According to the BLS Job Outlook employment for computer hardware engineers, the expected ten-year growth from 2019 to 2029 for computer hardware engineering was an estimated 2% and a total of 71,100 jobs ("slower than average" in their own words when compared to other occupations). This is a decrease from the 2014 to 2024 BLS computer hardware engineering estimate of 3% and a total of 77,700 jobs, and is down from 7% for the 2012 to 2022 BLS estimate and further down from 9% in the BLS 2010 to 2020 estimate. Today, computer hardware engineering largely overlaps with electronic and computer engineering (ECE) and has been divided into many subcategories, the most significant being embedded system design. Computer software engineering According to the U.S. 
Bureau of Labor Statistics (BLS), "computer applications software engineers and computer systems software engineers are projected to be among the faster than average growing occupations". The expected ten-year growth for computer software engineering was an estimated seventeen percent, and there was a total of 1,114,000 jobs that same year. This is down from the 2012 to 2022 BLS estimate of 22% for software developers, and further down from the 30% 2010 to 2020 BLS estimate. In addition, growing concerns over cybersecurity push computer software engineering well above the average rate of increase for all fields. However, some of the work will be outsourced to foreign countries. Due to this, job growth will not be as fast as during the last decade, as jobs that would have gone to computer software engineers in the United States would instead go to computer software engineers in countries such as India. In addition, the BLS Job Outlook for computer programmers showed an 8% decline for 2014–24, a 9% decline for 2019–29, a 10% decline for 2021–31, and now an 11% decline for 2022–32 for those who program computers (i.e. embedded systems) but who are not computer application developers. Furthermore, the share of women in software fields has been declining over the years even faster than in other engineering fields. Computer engineering licensing and practice Computer engineering is generally practiced within larger product development firms, and such practice may not be subject to licensing. However, independent consultants who advertise computer engineering, just like any form of engineering, may be subject to state laws which restrict professional engineer practice to only those who have received the appropriate license. The National Council of Examiners for Engineering and Surveying (NCEES) first offered a Principles and Practice of Engineering Examination for computer engineering in 2003. Specialty areas There are many specialty areas in the field of computer engineering. Processor design The processor design process involves choosing an instruction set and a certain execution paradigm (e.g. VLIW or RISC) and results in a microarchitecture, which might be described in e.g. VHDL or Verilog. CPU design is divided into the design of the following components: datapaths (such as ALUs and pipelines); the control unit, the logic which controls the datapaths; memory components such as register files and caches; clock circuitry such as clock drivers, PLLs and clock distribution networks; pad transceiver circuitry; and the logic gate cell library used to implement the logic (a toy sketch of the datapath/control-unit split appears below, after the integrated circuits section). Coding, cryptography, and information protection Computer engineers work in coding, applied cryptography, and information protection to develop new methods for protecting various information, such as digital images and music, from fragmentation, copyright infringement and other forms of tampering, by means of, for example, digital watermarking. Communications and wireless networks Those focusing on communications and wireless networks work on advancements in telecommunications systems and networks (especially wireless networks), modulation and error-control coding, and information theory. High-speed network design, interference suppression and modulation, design and analysis of fault-tolerant systems, and storage and transmission schemes are all a part of this specialty. Compilers and operating systems This specialty focuses on compilers and operating systems design and development. 
Engineers in this field develop new operating system architecture, program analysis techniques, and new techniques to assure quality. Examples of work in this field include post-link-time code transformation algorithm development and new operating system development. Computational science and engineering Computational science and engineering is a relatively new discipline. According to the Sloan Career Cornerstone Center, for individuals working in this area, "computational methods are applied to formulate and solve complex mathematical problems in engineering and the physical and the social sciences. Examples include aircraft design, the plasma processing of nanometer features on semiconductor wafers, VLSI circuit design, radar detection systems, ion transport through biological channels, and much more". Computer networks, mobile computing, and distributed systems In this specialty, engineers build integrated environments for computing, communications, and information access. Examples include shared-channel wireless networks, adaptive resource management in various systems, and improving the quality of service in mobile and ATM environments. Some other examples include work on wireless network systems and fast Ethernet cluster wired systems. Computer systems: architecture, parallel processing, and dependability Engineers working in computer systems work on research projects that allow for reliable, secure, and high-performance computer systems. Projects such as designing processors for multithreading and parallel processing are included in this field. Other examples of work in this field include the development of new theories, algorithms, and other tools that add performance to computer systems. Computer architecture includes CPU design, cache hierarchy layout, memory organization, and load balancing. Computer vision and robotics In this specialty, computer engineers focus on developing visual sensing technology to sense an environment, representation of an environment, and manipulation of the environment. The gathered three-dimensional information is then used to perform a variety of tasks. These include improved human modeling, image communication, and human-computer interfaces, as well as devices such as special-purpose cameras with versatile vision sensors. Embedded systems Individuals working in this area design technology for enhancing the speed, reliability, and performance of systems. Embedded systems are found in many devices, from a small FM radio to the space shuttle. According to the Sloan Cornerstone Career Center, ongoing developments in embedded systems include "automated vehicles and equipment to conduct search and rescue, automated transportation systems, and human-robot coordination to repair equipment in space." More recently, embedded systems specializations have come to include system-on-chip design, the architecture of edge computing and the Internet of things. Integrated circuits, VLSI design, testing and CAD This specialty of computer engineering requires adequate knowledge of electronics and electrical systems. Engineers working in this area work on enhancing the speed, reliability, and energy efficiency of next-generation very-large-scale integrated (VLSI) circuits and microsystems. An example of this specialty is work done on reducing the power consumption of VLSI algorithms and architecture. 
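To make the datapath/control-unit split mentioned under processor design concrete, here is a minimal, hypothetical sketch in Python; the opcode encoding, register count and word width are invented for illustration and do not correspond to any real instruction set.

```python
# Toy model of two core CPU components named above:
# a datapath (ALU + register file) and a control unit that steers it.

def alu(a: int, b: int, op: str) -> int:
    """Datapath element: a two-function ALU with 16-bit wrap-around."""
    if op == "ADD":
        return (a + b) & 0xFFFF
    if op == "SUB":
        return (a - b) & 0xFFFF
    raise ValueError(f"unknown ALU operation {op!r}")

def control_unit(opcode: int) -> str:
    """Control logic: decode a (made-up) opcode into an ALU operation."""
    return {0b0000: "ADD", 0b0001: "SUB"}[opcode]

# Register file: another datapath component (four 16-bit registers).
registers = [7, 3, 0, 0]

# Execute one invented instruction: opcode 0b0001 (SUB) on r0, r1 -> r2.
registers[2] = alu(registers[0], registers[1], control_unit(0b0001))
print(registers[2])  # 4
```

In a real design, the same separation would be expressed in a hardware description language such as VHDL or Verilog, with the clock circuitry, caches and pad transceivers listed above built around it.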
Signal, image and speech processing Computer engineers in this area develop improvements in human–computer interaction, including speech recognition and synthesis, medical and scientific imaging, and communications systems. Other work in this area includes computer vision development such as recognition of human facial features. Quantum computing This area combines the quantum behaviour of small systems, such as superposition, interference and entanglement, with classical computers to solve complex problems and formulate algorithms much more efficiently. Individuals focus on fields like quantum cryptography, physical simulations and quantum algorithms. Benefits of Engineering in Society Digital platforms give people, especially young students, an accessible avenue for obtaining information and opportunities in technology, enabling learning, exploration, and potential income generation at minimal cost and in regional languages; none of this would be possible without engineers. Computer engineering is central to the changes involved in Industry 4.0, with engineers responsible for designing and optimizing the technology that surrounds our lives, from big data to AI. Their work not only facilitates global connections and knowledge access, but also plays a pivotal role in shaping our future: as technology continues to evolve rapidly, demand for skilled computer engineers keeps growing. Engineering contributes to improving society by creating devices and structures that affect many aspects of our lives, from technology to infrastructure. Engineers also address challenges such as environmental protection and sustainable development, while developing medical treatments. As of 2016, the median annual wage across all BLS engineering categories was over $91,000. Some were much higher, with engineers working for petroleum companies at the top (over $128,000). Other top jobs include: Computer Hardware Engineer – $115,080, Aerospace Engineer – $109,650, Nuclear Engineer – $102,220. See also Related fields Associations IEEE Computer Society Association for Computing Machinery References External links Electrical and computer engineering Engineering disciplines
Computer engineering
[ "Technology", "Engineering" ]
2,856
[ "Electrical engineering", "Electrical and computer engineering", "Computer engineering", "nan" ]
50,416
https://en.wikipedia.org/wiki/Differential%20calculus
In mathematics, differential calculus is a subfield of calculus that studies the rates at which quantities change. It is one of the two traditional divisions of calculus, the other being integral calculus—the study of the area beneath a curve. The primary objects of study in differential calculus are the derivative of a function, related notions such as the differential, and their applications. The derivative of a function at a chosen input value describes the rate of change of the function near that input value. The process of finding a derivative is called differentiation. Geometrically, the derivative at a point is the slope of the tangent line to the graph of the function at that point, provided that the derivative exists and is defined at that point. For a real-valued function of a single real variable, the derivative of a function at a point generally determines the best linear approximation to the function at that point. Differential calculus and integral calculus are connected by the fundamental theorem of calculus. This states that differentiation is the reverse process to integration. Differentiation has applications in nearly all quantitative disciplines. In physics, the derivative of the displacement of a moving body with respect to time is the velocity of the body, and the derivative of the velocity with respect to time is acceleration. The derivative of the momentum of a body with respect to time equals the force applied to the body; rearranging this derivative statement leads to the famous equation associated with Newton's second law of motion. The reaction rate of a chemical reaction is a derivative. In operations research, derivatives determine the most efficient ways to transport materials and design factories. Derivatives are frequently used to find the maxima and minima of a function. Equations involving derivatives are called differential equations and are fundamental in describing natural phenomena. Derivatives and their generalizations appear in many fields of mathematics, such as complex analysis, functional analysis, differential geometry, measure theory, and abstract algebra. Derivative The derivative of f(x) at the point x = a is the slope of the tangent to (a, f(a)). In order to gain an intuition for this, one must first be familiar with finding the slope of a linear equation, written in the form y = mx + b. The slope of an equation is its steepness. It can be found by picking any two points and dividing the change in y by the change in x, meaning that slope = change in y / change in x. For example, the graph of y = 2x has a slope of 2. For brevity, "change in y / change in x" is often written as Δy/Δx, with Δ being the Greek letter delta, meaning 'change in'. The slope of a linear equation is constant, meaning that the steepness is the same everywhere. However, many graphs, such as that of y = x², vary in their steepness. This means that you can no longer pick any two arbitrary points and compute the slope. Instead, the slope of the graph can be computed by considering the tangent line—a line that 'just touches' a particular point. The slope of a curve at a particular point is equal to the slope of the tangent to that point. For example, y = x² has a slope of 4 at x = 2 because the slope of the tangent line to that point is equal to 4. The derivative of a function is then simply the slope of this tangent line. Even though the tangent line only touches a single point at the point of tangency, it can be approximated by a line that goes through two points. This is known as a secant line.
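As a numeric aside not present in the original article, the convergence of secant slopes to the tangent slope can be checked directly; the following minimal Python sketch (the names secant_slope and f are illustrative choices) uses the example curve y = x² from above:

```python
def secant_slope(f, x, dx):
    """Slope of the secant line through (x, f(x)) and (x + dx, f(x + dx))."""
    return (f(x + dx) - f(x)) / dx

f = lambda x: x ** 2  # the example curve y = x^2

# As dx shrinks, the secant slope at x = 2 approaches the tangent slope 4.
for dx in (1.0, 0.1, 0.001, 1e-6):
    print(dx, secant_slope(f, 2.0, dx))
```

Each shrinking of dx moves the printed slope closer to the tangent value 4, which is exactly the limiting process formalized next.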
If the two points that the secant line goes through are close together, then the secant line closely resembles the tangent line, and, as a result, its slope is also very similar. The advantage of using a secant line is that its slope can be calculated directly. Consider the two points on the graph (x, f(x)) and (x + Δx, f(x + Δx)), where Δx is a small number. As before, the slope of the line passing through these two points can be calculated with the formula slope = Δy/Δx. This gives slope = (f(x + Δx) − f(x))/Δx. As Δx gets closer and closer to 0, the slope of the secant line gets closer and closer to the slope of the tangent line. This is formally written as lim_{Δx→0} (f(x + Δx) − f(x))/Δx. The above expression means 'as Δx gets closer and closer to 0, the slope of the secant line gets closer and closer to a certain value'. The value that is being approached is the derivative of f(x); this can be written as f′(x). If y = f(x), the derivative can also be written as dy/dx, with d representing an infinitesimal change. For example, dx represents an infinitesimal change in x. In summary, if y = f(x), then the derivative of f(x) is f′(x) = dy/dx = lim_{Δx→0} (f(x + Δx) − f(x))/Δx, provided such a limit exists. We have thus succeeded in properly defining the derivative of a function, meaning that the 'slope of the tangent line' now has a precise mathematical meaning. Differentiating a function using the above definition is known as differentiation from first principles. Here is a proof, using differentiation from first principles, that the derivative of y = x² is 2x: f′(x) = lim_{Δx→0} ((x + Δx)² − x²)/Δx = lim_{Δx→0} (x² + 2xΔx + (Δx)² − x²)/Δx = lim_{Δx→0} (2x + Δx). As Δx approaches 0, 2x + Δx approaches 2x. Therefore, f′(x) = 2x. This proof can be generalised to show that the derivative of axⁿ is anx^(n−1) if a and n are constants. This is known as the power rule. For example, the derivative of 5x⁴ is 20x³. However, many other functions cannot be differentiated as easily as polynomial functions, meaning that sometimes further techniques are needed to find the derivative of a function. These techniques include the chain rule, product rule, and quotient rule. Other functions cannot be differentiated at all, giving rise to the concept of differentiability. A closely related concept to the derivative of a function is its differential. When x and y are real variables, the derivative of f at x is the slope of the tangent line to the graph of f at x. Because the source and target of f are one-dimensional, the derivative of f is a real number. If x and y are vectors, then the best linear approximation to the graph of f depends on how f changes in several directions at once. Taking the best linear approximation in a single direction determines a partial derivative, which is usually denoted ∂y/∂x. The linearization of f in all directions at once is called the total derivative. History of differentiation The concept of a derivative in the sense of a tangent line is a very old one, familiar to ancient Greek mathematicians such as Euclid (c. 300 BC), Archimedes (c. 287–212 BC), and Apollonius of Perga (c. 262–190 BC). Archimedes also made use of indivisibles, although these were primarily used to study areas and volumes rather than derivatives and tangents (see The Method of Mechanical Theorems). The use of infinitesimals to compute rates of change was developed significantly by Bhāskara II (1114–1185); indeed, it has been argued that many of the key notions of differential calculus can be found in his work, such as "Rolle's theorem". The mathematician Sharaf al-Dīn al-Tūsī (1135–1213), in his Treatise on Equations, established conditions for some cubic equations to have solutions, by finding the maxima of appropriate cubic polynomials.
He obtained, for example, that the maximum (for positive x) of the cubic ax² − x³ occurs when x = 2a/3, and concluded therefrom that the equation ax² − x³ = c has exactly one positive solution when c = 4a³/27, and two positive solutions whenever 0 < c < 4a³/27. The historian of science, Roshdi Rashed, has argued that al-Tūsī must have used the derivative of the cubic to obtain this result. Rashed's conclusion has been contested by other scholars, however, who argue that he could have obtained the result by other methods which do not require the derivative of the function to be known. The modern development of calculus is usually credited to Isaac Newton (1643–1727) and Gottfried Wilhelm Leibniz (1646–1716), who provided independent and unified approaches to differentiation and derivatives. The key insight, however, that earned them this credit, was the fundamental theorem of calculus relating differentiation and integration: this rendered obsolete most previous methods for computing areas and volumes. For their ideas on derivatives, both Newton and Leibniz built on significant earlier work by mathematicians such as Pierre de Fermat (1607–1665), Isaac Barrow (1630–1677), René Descartes (1596–1650), Christiaan Huygens (1629–1695), Blaise Pascal (1623–1662) and John Wallis (1616–1703). Regarding Fermat's influence, Newton once wrote in a letter that "I had the hint of this method [of fluxions] from Fermat's way of drawing tangents, and by applying it to abstract equations, directly and invertedly, I made it general." Isaac Barrow is generally given credit for the early development of the derivative. Nevertheless, Newton and Leibniz remain key figures in the history of differentiation, not least because Newton was the first to apply differentiation to theoretical physics, while Leibniz systematically developed much of the notation still used today. Since the 17th century many mathematicians have contributed to the theory of differentiation. In the 19th century, calculus was put on a much more rigorous footing by mathematicians such as Augustin Louis Cauchy (1789–1857), Bernhard Riemann (1826–1866), and Karl Weierstrass (1815–1897). It was also during this period that differentiation was generalized to Euclidean space and the complex plane. The 20th century brought two major steps towards our present understanding and practice of derivation: Lebesgue integration, besides extending integral calculus to many more functions, clarified the relation between derivation and integration with the notion of absolute continuity. Later the theory of distributions (after Laurent Schwartz) extended derivation to generalized functions (e.g., the Dirac delta function previously introduced in quantum mechanics) and became fundamental to modern applied analysis, especially through the use of weak solutions to partial differential equations. Applications of derivatives Optimization If f is a differentiable function on ℝ (or an open interval) and x is a local maximum or a local minimum of f, then the derivative of f at x is zero. Points where f′(x) = 0 are called critical points or stationary points (and the value of f at x is called a critical value). If f is not assumed to be everywhere differentiable, then points at which it fails to be differentiable are also designated critical points. If f is twice differentiable, then conversely, a critical point x of f can be analysed by considering the second derivative of f at x: if it is positive, x is a local minimum; if it is negative, x is a local maximum; if it is zero, then x could be a local minimum, a local maximum, or neither.
(For example, f(x) = x³ has a critical point at x = 0, but it has neither a maximum nor a minimum there, whereas f(x) = ±x⁴ has a critical point at x = 0 and a minimum and a maximum, respectively, there.) This is called the second derivative test. An alternative approach, called the first derivative test, involves considering the sign of f′(x) on each side of the critical point. Taking derivatives and solving for critical points is therefore often a simple way to find local minima or maxima, which can be useful in optimization. By the extreme value theorem, a continuous function on a closed interval must attain its minimum and maximum values at least once. If the function is differentiable, the minima and maxima can only occur at critical points or endpoints. This also has applications in graph sketching: once the local minima and maxima of a differentiable function have been found, a rough plot of the graph can be obtained from the observation that it will be either increasing or decreasing between critical points. In higher dimensions, a critical point of a scalar valued function is a point at which the gradient is zero. The second derivative test can still be used to analyse critical points by considering the eigenvalues of the Hessian matrix of second partial derivatives of the function at the critical point. If all of the eigenvalues are positive, then the point is a local minimum; if all are negative, it is a local maximum. If there are some positive and some negative eigenvalues, then the critical point is called a "saddle point", and if none of these cases hold (i.e., some of the eigenvalues are zero) then the test is considered to be inconclusive. Calculus of variations One example of an optimization problem is: Find the shortest curve between two points on a surface, assuming that the curve must also lie on the surface. If the surface is a plane, then the shortest curve is a line. But if the surface is, for example, egg-shaped, then the shortest path is not immediately clear. These paths are called geodesics, and one of the most fundamental problems in the calculus of variations is finding geodesics. Another example is: Find the smallest area surface filling in a closed curve in space. This surface is called a minimal surface and it, too, can be found using the calculus of variations. Physics Calculus is of vital importance in physics: many physical processes are described by equations involving derivatives, called differential equations. Physics is particularly concerned with the way quantities change and develop over time, and the concept of the "time derivative" — the rate of change over time — is essential for the precise definition of several important concepts. In particular, the time derivatives of an object's position are significant in Newtonian physics: velocity is the derivative (with respect to time) of an object's displacement (distance from the original position); acceleration is the derivative (with respect to time) of an object's velocity, that is, the second derivative (with respect to time) of an object's position. For example, if an object's position on a line is given by x(t) = −16t² + 16t + 32, then the object's velocity is ẋ(t) = −32t + 16 and the object's acceleration is ẍ(t) = −32, which is constant (a symbolic check follows below). Differential equations A differential equation is a relation between a collection of functions and their derivatives. An ordinary differential equation is a differential equation that relates functions of one variable to their derivatives with respect to that variable.
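As the symbolic check referenced above, here is a small Python sketch, assuming the sympy library, that reproduces the position/velocity/acceleration chain from the physics passage; the position function is the illustrative one used there:

```python
import sympy as sp

t = sp.symbols('t')
x = -16 * t**2 + 16 * t + 32   # illustrative position function from the passage above

v = sp.diff(x, t)              # velocity: -32*t + 16
a = sp.diff(v, t)              # acceleration: -32, a constant
print(v, '|', a)
```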
A partial differential equation is a differential equation that relates functions of more than one variable to their partial derivatives. Differential equations arise naturally in the physical sciences, in mathematical modelling, and within mathematics itself. For example, Newton's second law, which describes the relationship between acceleration and force, can be stated as the ordinary differential equation F(t) = m d²x/dt². The heat equation in one space variable, which describes how heat diffuses through a straight rod, is the partial differential equation ∂u/∂t = α ∂²u/∂x². Here u(x, t) is the temperature of the rod at position x and time t, and α is a constant that depends on how fast heat diffuses through the rod. Mean value theorem The mean value theorem gives a relationship between values of the derivative and values of the original function. If f(x) is a real-valued function and a and b are numbers with a < b, then the mean value theorem says that under mild hypotheses, the slope between the two points (a, f(a)) and (b, f(b)) is equal to the slope of the tangent line to f at some point c between a and b. In other words, f′(c) = (f(b) − f(a))/(b − a). In practice, what the mean value theorem does is control a function in terms of its derivative. For instance, suppose that f has derivative equal to zero at each point. This means that its tangent line is horizontal at every point, so the function should also be horizontal. The mean value theorem proves that this must be true: The slope between any two points on the graph of f must equal the slope of one of the tangent lines of f. All of those slopes are zero, so any line from one point on the graph to another point will also have slope zero. But that says that the function does not move up or down, so it must be a horizontal line. More complicated conditions on the derivative lead to less precise but still highly useful information about the original function. Taylor polynomials and Taylor series The derivative gives the best possible linear approximation of a function at a given point, but this can be very different from the original function. One way of improving the approximation is to take a quadratic approximation. That is to say, the linearization of a real-valued function f(x) at the point x₀ is a linear polynomial a + b(x − x₀), and it may be possible to get a better approximation by considering a quadratic polynomial a + b(x − x₀) + c(x − x₀)². Still better might be a cubic polynomial a + b(x − x₀) + c(x − x₀)² + d(x − x₀)³, and this idea can be extended to arbitrarily high degree polynomials. For each one of these polynomials, there should be a best possible choice of coefficients a, b, c, and d that makes the approximation as good as possible. In the neighbourhood of x₀, for a the best possible choice is always f(x₀), and for b the best possible choice is always f′(x₀). For c, d, and higher-degree coefficients, these coefficients are determined by higher derivatives of f. c should always be f″(x₀)/2, and d should always be f‴(x₀)/3!. Using these coefficients gives the Taylor polynomial of f. The Taylor polynomial of degree d is the polynomial of degree d which best approximates f, and its coefficients can be found by a generalization of the above formulas. Taylor's theorem gives a precise bound on how good the approximation is. If f is a polynomial of degree less than or equal to d, then the Taylor polynomial of degree d equals f. The limit of the Taylor polynomials is an infinite series called the Taylor series. The Taylor series is frequently a very good approximation to the original function. Functions which are equal to their Taylor series are called analytic functions.
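To make the coefficient rule above concrete, here is a short sketch, assuming the sympy library and an illustrative function f(x) = sin(x) that is not from the article, which builds the degree-3 Taylor polynomial about x₀ = 0:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)          # illustrative function, not from the article
x0, degree = 0, 3

# The k-th coefficient is f^(k)(x0) / k!, exactly as described above.
taylor = sum(sp.diff(f, x, k).subs(x, x0) / sp.factorial(k) * (x - x0)**k
             for k in range(degree + 1))
print(sp.expand(taylor))                                 # x - x**3/6
print(sp.N(f.subs(x, 0.2)), sp.N(taylor.subs(x, 0.2)))   # nearly equal near x0
```

Near x₀ the polynomial and the function agree closely, as the last line shows; farther from x₀ the approximation degrades, which is what Taylor's theorem quantifies.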
It is impossible for functions with discontinuities or sharp corners to be analytic; moreover, there exist smooth functions which are also not analytic. Implicit function theorem Some natural geometric shapes, such as circles, cannot be drawn as the graph of a function. For instance, if f(x, y) = x² + y² − 1, then the circle is the set of all pairs (x, y) such that f(x, y) = 0. This set is called the zero set of f, and is not the same as the graph of f, which is a paraboloid. The implicit function theorem converts relations such as f(x, y) = 0 into functions. It states that if f is continuously differentiable, then around most points, the zero set of f looks like graphs of functions pasted together. The points where this is not true are determined by a condition on the derivative of f. The circle, for instance, can be pasted together from the graphs of the two functions ±√(1 − x²). In a neighborhood of every point on the circle except (−1, 0) and (1, 0), one of these two functions has a graph that looks like the circle. (These two functions also happen to meet (−1, 0) and (1, 0), but this is not guaranteed by the implicit function theorem.) The implicit function theorem is closely related to the inverse function theorem, which states when a function looks like graphs of invertible functions pasted together. See also Differential (calculus) Numerical differentiation Techniques for differentiation List of calculus topics Notation for differentiation Notes References Citations Works cited Other sources Boman, Eugene, and Robert Rogers. Differential Calculus: From Practice to Theory. 2022, personal.psu.edu/ecb5/DiffCalc.pdf. Calculus
Differential calculus
[ "Mathematics" ]
3,730
[ "Differential calculus", "Calculus" ]
50,425
https://en.wikipedia.org/wiki/Quantum%20Hall%20effect
The quantum Hall effect (or integer quantum Hall effect) is a quantized version of the Hall effect which is observed in two-dimensional electron systems subjected to low temperatures and strong magnetic fields, in which the Hall resistance R_xy exhibits steps that take on the quantized values R_xy = V_Hall/I_channel = h/(e²ν), where V_Hall is the Hall voltage, I_channel is the channel current, e is the elementary charge and h is the Planck constant. The divisor ν can take on either integer (ν = 1, 2, 3, ...) or fractional (ν = 1/3, 2/5, 3/7, ...) values. Here, ν is roughly but not exactly equal to the filling factor of Landau levels. The quantum Hall effect is referred to as the integer or fractional quantum Hall effect depending on whether ν is an integer or fraction, respectively. The striking feature of the integer quantum Hall effect is the persistence of the quantization (i.e. the Hall plateau) as the electron density is varied. Since the electron density remains constant when the Fermi level is in a clean spectral gap, this situation corresponds to one where the Fermi level is an energy with a finite density of states, though these states are localized (see Anderson localization). The fractional quantum Hall effect is more complicated and still considered an open research problem. Its existence relies fundamentally on electron–electron interactions. In 1988, it was proposed that there was a quantum Hall effect without Landau levels. This quantum Hall effect is referred to as the quantum anomalous Hall (QAH) effect. There is also a new concept of the quantum spin Hall effect, which is an analogue of the quantum Hall effect, where spin currents flow instead of charge currents. Applications Electrical resistance standards The quantization of the Hall conductance (G_xy = 1/R_xy) has the important property of being exceedingly precise. Actual measurements of the Hall conductance have been found to be integer or fractional multiples of e²/h to better than one part in a billion. It has allowed for the definition of a new practical standard for electrical resistance, based on the resistance quantum given by the von Klitzing constant R_K = h/e² ≈ 25812.807 Ω. This is named after Klaus von Klitzing, the discoverer of exact quantization. The quantum Hall effect also provides an extremely precise independent determination of the fine-structure constant, a quantity of fundamental importance in quantum electrodynamics. In 1990, a fixed conventional value R_K-90 was defined for use in resistance calibrations worldwide. On 16 November 2018, the 26th meeting of the General Conference on Weights and Measures decided to fix exact values of h (the Planck constant) and e (the elementary charge), superseding the 1990 conventional value with an exact permanent value (intrinsic standard) R_K = h/e². Research status The fractional quantum Hall effect is considered part of exact quantization. Exact quantization in full generality is not completely understood, but it has been explained as a very subtle manifestation of the combination of the principle of gauge invariance together with another symmetry (see Anomalies). The integer quantum Hall effect instead is considered a solved research problem, understood in the scope of the TKNN formula and Chern–Simons Lagrangians. The fractional quantum Hall effect can also be understood as an integer quantum Hall effect, although not of electrons but of charge–flux composites known as composite fermions. Other models to explain the fractional quantum Hall effect also exist.
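As an illustrative aside (not part of the original article), the plateau values R_xy = h/(e²ν) can be computed directly from CODATA constants; this sketch assumes the scipy.constants module:

```python
from fractions import Fraction
from scipy.constants import h, e   # CODATA Planck constant and elementary charge

def hall_resistance(nu):
    """Plateau value R_xy = h / (e**2 * nu) in ohms."""
    return h / (e**2 * float(nu))

# nu = 1 gives the von Klitzing constant, about 25812.807 ohms.
for nu in (1, 2, 3, Fraction(1, 3)):
    print(nu, hall_resistance(nu))
```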
The fractional quantum Hall effect is currently considered an open research problem because no single, confirmed and agreed list of fractional quantum numbers exists, nor a single agreed model to explain all of them, although there are such claims in the scope of composite fermions and non-Abelian Chern–Simons Lagrangians. History In 1957, Carl Frosch and Lincoln Derick were able to manufacture the first silicon dioxide field effect transistors at Bell Labs, the first transistors in which drain and source were adjacent at the surface. Subsequently, a team demonstrated a working MOSFET at Bell Labs in 1960. This enabled physicists to study electron behavior in a nearly ideal two-dimensional gas. In a MOSFET, conduction electrons travel in a thin surface layer, and a "gate" voltage controls the number of charge carriers in this layer. This allows researchers to explore quantum effects by operating high-purity MOSFETs at liquid helium temperatures. The integer quantization of the Hall conductance was originally predicted by University of Tokyo researchers Tsuneya Ando, Yukio Matsumoto and Yasutada Uemura in 1975, on the basis of an approximate calculation which they themselves did not believe to be true. In 1978, the Gakushuin University researchers Jun-ichi Wakabayashi and Shinji Kawaji subsequently observed the effect in experiments carried out on the inversion layer of MOSFETs. In 1980, Klaus von Klitzing, working at the high magnetic field laboratory in Grenoble with silicon-based MOSFET samples developed by Michael Pepper and Gerhard Dorda, made the unexpected discovery that the Hall resistance was exactly quantized. For this finding, von Klitzing was awarded the 1985 Nobel Prize in Physics. A link between exact quantization and gauge invariance was subsequently proposed by Robert Laughlin, who connected the quantized conductivity to the quantized charge transport in a Thouless charge pump. Most integer quantum Hall experiments are now performed on gallium arsenide heterostructures, although many other semiconductor materials can be used. In 2007, the integer quantum Hall effect was reported in graphene at temperatures as high as room temperature, and in the magnesium zinc oxide heterostructure ZnO–MgxZn1−xO. Integer quantum Hall effect Landau levels In two dimensions, when classical electrons are subjected to a magnetic field they follow circular cyclotron orbits. When the system is treated quantum mechanically, these orbits are quantized. To determine the values of the energy levels the Schrödinger equation must be solved. Since the system is subjected to a magnetic field, it has to be introduced as an electromagnetic vector potential in the Schrödinger equation. The system considered is an electron gas that is free to move in the x and y directions, but is tightly confined in the z direction. Then, a magnetic field B is applied in the z direction and, according to the Landau gauge, the electromagnetic vector potential is A = (0, Bx, 0) and the scalar potential is φ = 0. Thus the Schrödinger equation for a particle of charge q and effective mass m* in this system is (1/(2m*)) (p − qA)² ψ = ε ψ, where p is the canonical momentum, which is replaced by the operator −iħ∇, and ε is the total energy. To solve this equation it is possible to separate it into two equations, since the magnetic field just affects the movement along the x and y axes. The total energy then becomes the sum of two contributions, ε = ε_z + ε_xy. The corresponding equation in the z axis is −(ħ²/(2m*)) ∂²u(z)/∂z² + V(z) u(z) = ε_z u(z). To simplify things, the confining potential V(z) is modeled as an infinite well of width L. Thus the solutions for the z direction are the energies ε_z = n_z²π²ħ²/(2m*L²), n_z = 1, 2, 3, ..., and the wavefunctions are sinusoidal.
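A quick numeric sketch of the infinite-well energies ε_z just derived, assuming scipy.constants and an illustrative 10 nm well with a GaAs-like effective mass (the parameters are not from the article):

```python
from scipy.constants import hbar, pi, m_e, e

def well_energies(L, n_max=3, m_eff=0.067 * m_e):
    """epsilon_z = n^2 pi^2 hbar^2 / (2 m* L^2) for an infinite well of width L."""
    return [(n * pi * hbar)**2 / (2 * m_eff * L**2) for n in range(1, n_max + 1)]

# Illustrative parameters: 10 nm well, GaAs-like effective mass 0.067 m_e.
for n, E in enumerate(well_energies(L=10e-9), start=1):
    print(n, E / e * 1e3, 'meV')   # tens of meV, so low-n levels dominate at low T
```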
For the x and y directions, the solution of the Schrödinger equation can be chosen to be the product of a plane wave in the y-direction with some unknown function of x, i.e., ψ(x, y) = e^(ik_y y) u(x). This is because the vector potential does not depend on y and the momentum operator p_y therefore commutes with the Hamiltonian. By substituting this Ansatz into the Schrödinger equation one gets the one-dimensional harmonic oscillator equation centered at x_0 = ħk_y/(eB), where ω_c = eB/m* is defined as the cyclotron frequency and l_B² = ħ/(eB) the magnetic length. The energies are ε_n = ħω_c(n + 1/2), n = 0, 1, 2, ..., and the wavefunctions for the motion in the plane are given by the product of a plane wave in y and Hermite polynomials attenuated by the gaussian function in x, which are the wavefunctions of a harmonic oscillator. From the expression for the Landau levels one notices that the energy depends only on n, not on k_y. States with the same n but different k_y are degenerate. Density of states At zero field, the density of states per unit surface for the two-dimensional electron gas, taking into account degeneration due to spin, is independent of the energy: n_2D = m*/(πħ²). As the field is turned on, the density of states collapses from the constant to a Dirac comb, a series of Dirac δ functions, corresponding to the Landau levels separated by Δε = ħω_c. At finite temperature, however, the Landau levels acquire a width Γ = ħ/τ_i, τ_i being the time between scattering events. Commonly it is assumed that the precise shape of Landau levels is a Gaussian or Lorentzian profile. Another feature is that the wave functions form parallel strips in the y-direction spaced equally along the x-axis, along the lines of the vector potential A. Since there is nothing special about any direction in the xy-plane, if the vector potential was differently chosen one should find circular symmetry. Given a sample of dimensions L_x × L_y and applying the periodic boundary conditions in the y-direction, k_y = 2πj/L_y with j an integer, one gets that each parabolic potential is placed at a value x_j = l_B² k_y. The number of states for each Landau level and k_y can be calculated from the ratio between the total magnetic flux that passes through the sample and the magnetic flux corresponding to a state: N_B = Φ/Φ_0 = BL_xL_y/(h/e). Thus the density of states per unit surface is n_B = eB/h. Note the dependency of the density of states on the magnetic field. The larger the magnetic field is, the more states there are in each Landau level. As a consequence, there is more confinement in the system, since fewer energy levels are occupied. Rewriting the last expression as n_B = (m*/(πħ²))(ħω_c/2), it is clear that each Landau level contains as many states as a 2DEG does in an energy interval ħω_c/2. Given the fact that electrons are fermions, each state available in the Landau levels corresponds to two electrons, one electron with each value for the spin s = ±1/2. However, if a large magnetic field is applied, the energies split into two levels due to the magnetic moment associated with the alignment of the spin with the magnetic field. The difference in the energies is ΔE = ±(1/2)gμ_B B, g being a factor which depends on the material (g = 2 for free electrons) and μ_B the Bohr magneton. The sign + is taken when the spin is parallel to the field and − when it is antiparallel. This fact, called spin splitting, implies that the density of states for each level is reduced by a half. Note that ΔE is proportional to the magnetic field, so the larger the magnetic field is, the more relevant the splitting is. In order to get the number of occupied Landau levels, one defines the so-called filling factor ν as the ratio between the density of states in a 2DEG and the density of states in the Landau levels, ν = nh/(eB), with n the electron sheet density. In general the filling factor ν is not an integer.
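The Landau-level spectrum and per-level degeneracy above can likewise be evaluated numerically; a sketch assuming scipy.constants, with an illustrative field of 10 T and a GaAs-like effective mass:

```python
from scipy.constants import hbar, h, e, m_e

def landau_levels(B, n_levels=3, m_eff=0.067 * m_e):
    """Energies hbar*omega_c*(n + 1/2) and per-level state density eB/h (per spin)."""
    omega_c = e * B / m_eff                              # cyclotron frequency
    energies = [hbar * omega_c * (n + 0.5) for n in range(n_levels)]
    n_B = e * B / h                                      # states per unit area per level
    return energies, n_B

energies, n_B = landau_levels(B=10.0)                    # illustrative 10 T field
print([E / e * 1e3 for E in energies])                   # in meV: ~8.6, ~25.9, ~43.2
print(n_B)                                               # ~2.4e15 states per m^2
```

Doubling B doubles both the level spacing ħω_c and the degeneracy eB/h, which is the mechanism behind the depopulation of the upper levels described next.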
The filling factor happens to be an integer when there is an exact number of filled Landau levels. Instead, it becomes a non-integer when the top level is not fully occupied. In actual experiments, one varies the magnetic field and fixes the electron density (and not the Fermi energy!) or varies the electron density and fixes the magnetic field. Both cases correspond to a continuous variation of the filling factor, and one cannot expect ν to be an integer. Since n_B is proportional to B, by increasing the magnetic field, the Landau levels move up in energy and the number of states in each level grows, so fewer electrons occupy the top level until it becomes empty. If the magnetic field keeps increasing, eventually all electrons will be in the lowest Landau level (ν < 1), and this is called the magnetic quantum limit. Longitudinal resistivity It is possible to relate the filling factor to the resistivity and hence to the conductivity of the system. When ν is an integer, the Fermi energy lies in between Landau levels where there are no states available for carriers, so the conductivity becomes zero (it is considered that the magnetic field is big enough so that there is no overlap between Landau levels, otherwise there would be few electrons and the conductivity would be approximately zero). Consequently, the resistivity becomes zero too (at very high magnetic fields it is proven that longitudinal conductivity and resistivity are proportional). With the conductivity σ_xx = ρ_xx/(ρ_xx² + ρ_xy²) one finds that if the longitudinal resistivity is zero and the transverse resistivity is finite, then σ_xx = 0. Thus both the longitudinal conductivity and resistivity become zero. Instead, when ν is a half-integer, the Fermi energy is located at the peak of the density distribution of some Landau level. This means that the conductivity will have a maximum. This distribution of minima and maxima corresponds to "quantum oscillations" called Shubnikov–de Haas oscillations, which become more relevant as the magnetic field increases. Obviously, the height of the peaks is larger as the magnetic field increases, since the density of states increases with the field, so there are more carriers which contribute to the resistivity. It is interesting to notice that if the magnetic field is very small, the longitudinal resistivity is a constant, which means that the classical result is reached. Transverse resistivity From the classical relation of the transverse resistivity ρ_xy = B/(en) and substituting n = νeB/h one finds the quantization of the transverse resistivity and conductivity: ρ_xy = h/(νe²) and σ_xy = νe²/h. One concludes then that the transverse resistivity is a multiple of the inverse of the so-called conductance quantum e²/h if the filling factor is an integer. In experiments, however, plateaus are observed for whole ranges of filling values ν, which indicates that there are in fact electron states between the Landau levels. These states are localized in, for example, impurities of the material, where they are trapped in orbits, so they cannot contribute to the conductivity. That is why the resistivity remains constant in between Landau levels. Again, if the magnetic field decreases, one gets the classical result in which the resistivity is proportional to the magnetic field. Photonic quantum Hall effect The quantum Hall effect, in addition to being observed in two-dimensional electron systems, can be observed in photons. Photons do not possess inherent electric charge, but through the manipulation of discrete optical resonators and coupling phases or on-site phases, an artificial magnetic field can be created.
This process can be expressed through a metaphor of photons bouncing between multiple mirrors. By shooting the light across multiple mirrors, the photons are routed and gain additional phase proportional to their angular momentum. This creates an effect as if they were in a magnetic field. Topological classification The integers that appear in the Hall effect are examples of topological quantum numbers. They are known in mathematics as the first Chern numbers and are closely related to Berry's phase. A striking model of much interest in this context is the Azbel–Harper–Hofstadter model, whose quantum phase diagram is the Hofstadter butterfly. In that diagram, the vertical axis is the strength of the magnetic field and the horizontal axis is the chemical potential, which fixes the electron density. The colors represent the integer Hall conductances: warm colors represent positive integers and cold colors negative integers. Note, however, that the density of states in these regions of quantized Hall conductance is zero; hence, they cannot produce the plateaus observed in the experiments. The phase diagram is fractal and has structure on all scales, with an obvious self-similarity. In the presence of disorder, which is the source of the plateaus seen in the experiments, this diagram is very different and the fractal structure is mostly washed away. Also, the experiments control the filling factor and not the Fermi energy. If this diagram is plotted as a function of filling factor, all the features are completely washed away; hence, it has very little to do with the actual Hall physics. Concerning physical mechanisms, impurities and/or particular states (e.g., edge currents) are important for both the 'integer' and 'fractional' effects. In addition, Coulomb interaction is also essential in the fractional quantum Hall effect. The observed strong similarity between integer and fractional quantum Hall effects is explained by the tendency of electrons to form bound states with an even number of magnetic flux quanta, called composite fermions. Bohr atom interpretation of the von Klitzing constant The value of the von Klitzing constant may be obtained already on the level of a single atom within the Bohr model while looking at it as a single-electron Hall effect. While during the cyclotron motion on a circular orbit the centrifugal force is balanced by the Lorentz force responsible for the transverse induced voltage and the Hall effect, one may look at the Coulomb potential difference in the Bohr atom as the induced single-atom Hall voltage and the periodic electron motion on a circle as a Hall current. Defining the single-atom Hall current as the rate at which a single electron charge makes Kepler revolutions with angular frequency ω, I = eω/(2π), and the induced Hall voltage as the difference between the hydrogen nucleus Coulomb potential at the electron orbital point and at infinity, U = e/(4πε₀r), one obtains the quantization of the defined Bohr orbit Hall resistance in steps of the von Klitzing constant, as R(n) = U/I = nh/e², which for the Bohr atom is linear, but not inverse, in the integer n (see the numeric sketch below). Relativistic analogs Relativistic examples of the integer quantum Hall effect and quantum spin Hall effect arise in the context of lattice gauge theory.
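As the numeric sketch referenced above, the linear-in-n staircase R(n) = nh/e² evaluates directly from the constants (assuming scipy.constants; the function name is an illustrative choice):

```python
from scipy.constants import h, e

def bohr_hall_resistance(n):
    """R(n) = n * h / e**2: linear (not inverse) in the quantum number n."""
    return n * h / e**2

for n in (1, 2, 3):
    print(n, bohr_hall_resistance(n))   # steps of ~25812.8 ohms, the von Klitzing constant
```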
See also Quantum Hall transitions Fractional quantum Hall effect Quantum anomalous Hall effect Quantum cellular automata Composite fermions Conductance Quantum Hall effect Hall probe Graphene Quantum spin Hall effect Coulomb potential between two current loops embedded in a magnetic field References Further reading 25 years of Quantum Hall Effect, K. von Klitzing, Poincaré Seminar (Paris-2004). Magnet Lab Press Release Quantum Hall Effect Observed at Room Temperature Zyun F. Ezawa: Quantum Hall Effects – Field Theoretical Approach and Related Topics. World Scientific, Singapore 2008. Sankar D. Sarma, Aron Pinczuk: Perspectives in Quantum Hall Effects. Wiley-VCH, Weinheim 2004. E. I. Rashba and V. B. Timofeev, Quantum Hall Effect, Sov. Phys. – Semiconductors v. 20, pp. 617–647 (1986). Hall effect Condensed matter physics Quantum electronics Spintronics Quantum phases Mesoscopic physics Articles containing video clips 1980 in science
Quantum Hall effect
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,646
[ "Quantum phases", "Physical phenomena", "Quantum electronics", "Hall effect", "Spintronics", "Phases of matter", "Quantum mechanics", "Electric and magnetic fields in matter", "Materials science", "Electrical phenomena", "Condensed matter physics", "Nanotechnology", "Mesoscopic physics", "...