id
int64
39
79M
url
stringlengths
32
168
text
stringlengths
7
145k
source
stringlengths
2
105
categories
listlengths
1
6
token_count
int64
3
32.2k
subcategories
listlengths
0
27
3,415,212
https://en.wikipedia.org/wiki/Callan%E2%80%93Symanzik%20equation
In physics, the Callan–Symanzik equation is a differential equation describing the evolution of the n-point correlation functions under variation of the energy scale at which the theory is defined and involves the beta function of the theory and the anomalous dimensions. As an example, for a quantum field theory with one massless scalar field and one self-coupling term, denote the bare field strength by and the bare coupling constant by . In the process of renormalisation, a mass scale M must be chosen. Depending on M, the field strength is rescaled by a constant: , and as a result the bare coupling constant is correspondingly shifted to the renormalised coupling constant g. Of physical importance are the renormalised n-point functions, computed from connected Feynman diagrams, schematically of the form For a given choice of renormalisation scheme, the computation of this quantity depends on the choice of M, which affects the shift in g and the rescaling of . If the choice of is slightly altered by , then the following shifts will occur: The Callan–Symanzik equation relates these shifts: After the following definitions the Callan–Symanzik equation can be put in the conventional form: being the beta function. In quantum electrodynamics this equation takes the form where n and m are the numbers of electron and photon fields, respectively, for which the correlation function is defined. The renormalised coupling constant is now the renormalised elementary charge e. The electron field and the photon field rescale differently under renormalisation, and thus lead to two separate functions, and , respectively. The Callan–Symanzik equation was discovered independently by Curtis Callan and Kurt Symanzik in 1970. Later it was used to understand asymptotic freedom. This equation arises in the framework of renormalization group. It is possible to treat the equation using perturbation theory. See also Renormalization group Beta function Notes References Jean Zinn-Justin, Quantum Field Theory and Critical Phenomena , Oxford University Press, 2003, John Clements Collins, Renormalization, Cambridge University Press, 1986, Michael E. Peskin and Daniel V. Schroeder, An Introduction to Quantum Field Theory, Addison-Wesley, Reading, 1995. Eponymous equations of physics Renormalization group
Callan–Symanzik equation
[ "Physics" ]
488
[ "Physical phenomena", "Equations of physics", "Eponymous equations of physics", "Critical phenomena", "Renormalization group", "Statistical mechanics" ]
10,280,093
https://en.wikipedia.org/wiki/Flora%20Australiensis
Flora Australiensis: a description of the plants of the Australian Territory, more commonly referred to as Flora Australiensis, and also known by its standard abbreviation Fl. Austral., is a seven-volume Flora of Australia published between 1863 and 1878 by George Bentham, with the assistance of Ferdinand von Mueller. It was one of the famous Kew series of colonial floras, and the first flora of any large continental area that had ever been finished. In total the flora included descriptions of 8125 species. Bentham prepared the flora from Kew; with Mueller, the first plant taxonomist residing permanently in Australia, loaning the entire collection of the National Herbarium of Victoria to Bentham over the course of several years. Mueller had been dissuaded from preparing a flora from Australia while in Australia by Bentham and Joseph Dalton Hooker since historic collections of Australian species were all held in European herbaria which Mueller could not access from Australia. Mueller did eventually produce his own flora of Australia, the Systematic Census of Australian Plants published in 1882 extended the work of Bentham with the addition of new species and taxonomic revisions. Flora Australiensis was the standard reference work on the Australian flora for more than a century. As late as 1988, James Willis wrote that "Flora Australiensis still remains the only definitive work on the vascular vegetation of the whole continent." According to Nancy Burbidge, "it represents a prodigious intellectual effort never equalled." Flora Australiensis is credited with forming the basis of subsequently published regional floras; 19th century floras were published for all states except Western Australia, they were for the most part extracts of this work. References Further reading Florae (publication) Botany in Australia Books about Australian natural history 19th-century non-fiction books
Flora Australiensis
[ "Biology" ]
365
[ "Flora", "Florae (publication)" ]
10,280,254
https://en.wikipedia.org/wiki/Isothermal%20coordinates
In mathematics, specifically in differential geometry, isothermal coordinates on a Riemannian manifold are local coordinates where the metric is conformal to the Euclidean metric. This means that in isothermal coordinates, the Riemannian metric locally has the form where is a positive smooth function. (If the Riemannian manifold is oriented, some authors insist that a coordinate system must agree with that orientation to be isothermal.) Isothermal coordinates on surfaces were first introduced by Gauss. Korn and Lichtenstein proved that isothermal coordinates exist around any point on a two dimensional Riemannian manifold. By contrast, most higher-dimensional manifolds do not admit isothermal coordinates anywhere; that is, they are not usually locally conformally flat. In dimension 3, a Riemannian metric is locally conformally flat if and only if its Cotton tensor vanishes. In dimensions > 3, a metric is locally conformally flat if and only if its Weyl tensor vanishes. Isothermal coordinates on surfaces In 1822, Carl Friedrich Gauss proved the existence of isothermal coordinates on an arbitrary surface with a real-analytic Riemannian metric, following earlier results of Joseph Lagrange in the special case of surfaces of revolution. The construction used by Gauss made use of the Cauchy–Kowalevski theorem, so that his method is fundamentally restricted to the real-analytic context. Following innovations in the theory of two-dimensional partial differential equations by Arthur Korn, Leon Lichtenstein found in 1916 the general existence of isothermal coordinates for Riemannian metrics of lower regularity, including smooth metrics and even Hölder continuous metrics. Given a Riemannian metric on a two-dimensional manifold, the transition function between isothermal coordinate charts, which is a map between open subsets of , is necessarily angle-preserving. The angle-preserving property together with orientation-preservation is one characterization (among many) of holomorphic functions, and so an oriented coordinate atlas consisting of isothermal coordinate charts may be viewed as a holomorphic coordinate atlas. This demonstrates that a Riemannian metric and an orientation on a two-dimensional manifold combine to induce the structure of a Riemann surface (i.e. a one-dimensional complex manifold). Furthermore, given an oriented surface, two Riemannian metrics induce the same holomorphic atlas if and only if they are conformal to one another. For this reason, the study of Riemann surfaces is identical to the study of conformal classes of Riemannian metrics on oriented surfaces. By the 1950s, expositions of the ideas of Korn and Lichtenstein were put into the language of complex derivatives and the Beltrami equation by Lipman Bers and Shiing-shen Chern, among others. In this context, it is natural to investigate the existence of generalized solutions, which satisfy the relevant partial differential equations but are no longer interpretable as coordinate charts in the usual way. This was initiated by Charles Morrey in his seminal 1938 article on the theory of elliptic partial differential equations on two-dimensional domains, leading later to the measurable Riemann mapping theorem of Lars Ahlfors and Bers. Beltrami equation The existence of isothermal coordinates can be proved by applying known existence theorems for the Beltrami equation, which rely on Lp estimates for singular integral operators of Calderón and Zygmund. A simpler approach to the Beltrami equation has been given more recently by Adrien Douady. If the Riemannian metric is given locally as then in the complex coordinate , it takes the form where and are smooth with and . In fact In isothermal coordinates the metric should take the form with ρ smooth. The complex coordinate satisfies so that the coordinates (u, v) will be isothermal if the Beltrami equation has a diffeomorphic solution. Such a solution has been proved to exist in any neighbourhood where . Existence via local solvability for elliptic partial differential equations The existence of isothermal coordinates on a smooth two-dimensional Riemannian manifold is a corollary of the standard local solvability result in the analysis of elliptic partial differential equations. In the present context, the relevant elliptic equation is the condition for a function to be harmonic relative to the Riemannian metric. The local solvability then states that any point has a neighborhood on which there is a harmonic function with nowhere-vanishing derivative. Isothermal coordinates are constructed from such a function in the following way. Harmonicity of is identical to the closedness of the differential 1-form defined using the Hodge star operator associated to the Riemannian metric. The Poincaré lemma thus implies the existence of a function on with By definition of the Hodge star, and are orthogonal to one another and hence linearly independent, and it then follows from the inverse function theorem that and form a coordinate system on some neighborhood of . This coordinate system is automatically isothermal, since the orthogonality of and implies the diagonality of the metric, and the norm-preserving property of the Hodge star implies the equality of the two diagonal components. Gaussian curvature In the isothermal coordinates , the Gaussian curvature takes the simpler form See also Conformal map Liouville's equation Quasiconformal map Notes References . Reprinted in: External links Differential geometry Coordinate systems in differential geometry Partial differential equations
Isothermal coordinates
[ "Mathematics" ]
1,093
[ "Coordinate systems in differential geometry", "Coordinate systems" ]
10,285,944
https://en.wikipedia.org/wiki/Princeton%20Ocean%20Model
The Princeton Ocean Model (POM) is a community general numerical model for ocean circulation that can be used to simulate and predict oceanic currents, temperatures, salinities and other water properties. POM-WEB and POMusers.org Development The model code was originally developed at Princeton University (G. Mellor and Alan Blumberg) in collaboration with Dynalysis of Princeton (H. James Herring, Richard C. Patchen). The model incorporates the Mellor–Yamada turbulence scheme developed in the early 1970s by George Mellor and Ted Yamada; this turbulence sub-model is widely used by oceanic and atmospheric models. At the time, early computer ocean models such as the Bryan–Cox model (developed in the late 1960s at the Geophysical Fluid Dynamics Laboratory, GFDL, and later became the Modular Ocean Model, MOM)), were aimed mostly at coarse-resolution simulations of the large-scale ocean circulation, so there was a need for a numerical model that can handle high-resolution coastal ocean processes. The Blumberg–Mellor model (which later became POM) thus included new features such as free surface to handle tides, sigma vertical coordinates (i.e., terrain-following) to handle complex topographies and shallow regions, a curvilinear grid to better handle coastlines, and a turbulence scheme to handle vertical mixing. At the early 1980s the model was used primarily to simulate estuaries such as the Hudson–Raritan Estuary (by Leo Oey) and the Delaware Bay (Boris Galperin), but also first attempts to use a sigma coordinate model for basin-scale problems have started with the coarse resolution model of the Gulf of Mexico (Blumberg and Mellor) and models of the Arctic Ocean (with the inclusion of ice-ocean coupling by Lakshmi Kantha and Sirpa Hakkinen). In the early 1990s when the web and browsers started to be developed, POM became one of the first ocean model codes that were provided free of charge to users through the web. The establishment of the POM users group and its web support (by Tal Ezer) resulted in a continuous increase in the number of POM users which grew from about a dozen U.S. users in the 1980s to over 1000 users in 2000 and over 4000 users by 2009; there are users from over 70 different countries. In the 1990s the usage of POM expands to simulations of the Mediterranean Sea (Zavatarelli) and the first simulations with a sigma coordinate model of the entire Atlantic Ocean for climate research (Ezer). The development of the Mellor–Ezer optimal interpolation data assimilation scheme that projects surface satellite data into deep layers allows the construction of the first ocean forecast systems for the Gulf Stream and the U.S. east coast running operationally at the NOAA's National Weather Service (Frank Aikman and others). Operational forecast system for other regions such as the Great Lakes, the Gulf of Mexico (Oey), the Gulf of Maine (Huijie Xue) and the Hudson River (Blumberg) followed. For more information on applications of the model, see the searchable database of over 1800 POM-related publications. Derivatives and other models In the late 1990s and the 2000s many other terrain-following community ocean models have been developed; some of their features can be traced back to features included in the original POM, other features are additional numerical and parameterization improvements. Several ocean models are direct descendants of POM such as the commercial version of POM known as the estuarine and coastal ocean model (ECOM), the navy coastal ocean model (NCOM) and the finite-volume coastal ocean model (FVCOM). Recent developments in POM include a generalized coordinate system that combines sigma and z-level grids (Mellor and Ezer), inundation features that allow simulations of wetting and drying (e.g., flood of land area) (Oey), and coupling ocean currents with surface waves (Mellor). Efforts to improve turbulent mixing also continue (Galperin, Kantha, Mellor and others). Users' meetings POM users' meetings were held every few years, and in recent years the meetings were extended to include other models and renamed the International Workshop on Modeling the Ocean (IWMO). Meeting Pages: List of meetings: 1. 1996, June 10–12, Princeton, NJ, USA (POM96) 2. 1998, February 17–19, Miami, FL, USA (POM98) 3. 1999, September 20–22, Bar Harbor, ME, USA (SigMod99) 4. 2001, August 20–22, Boulder, CO, USA (SigMod01) 5. 2003, August 4–6, Seattle, WA, USA (SigMod03) 6. 2009, February 23–26, Taipei, Taiwan (1st IWMO-2009) 7. 2010, May 24–26, Norfolk, VA, USA (2nd IWMO-2010; IWMO-2010) 8. 2011, June 6–9, Qingdao, China (3rd IWMO-2011; IWMO-2011) 9. 2012, May 21–24, Yokohama, Japan (4th IWMO-2012; ) 10. 2013, June 17–20, Bergen, Norway (5th IWMO-2013; ) 11. 2014, June 23–27, Halifax, Nova Scotia, Canada (6th IWMO-2014; ) 12. 2015, June 1–5, Canberra, Australia (7th IWMO-2015; ). 13. 2016, June 7–10, Bologna, Italy (8th IWMO-2016;). 14. 2017, July 3–6, Seoul, South Korea (9th IWMO-2017;). 15. 2018, June 25–28, Santos, Brazil (10th IWMO-2018;). 16. 2019, June 17–20, Wuxi, China (11th IWMO-2019;). 17. 2022. June 28-July 1, Ann Arbor, MI (12th IWMO-2022). 17. 2023, June 27–30, Hamburg, Germany (13th IWMO-2023). Reviewed papers from the IWMO meetings are published by Ocean Dynamics in special issues (IWMO-2009 Part-I, IWMO-2009 Part-II, IWMO-2010, IWMO-2011, IWMO-2012, IWMO-2013, IWMO-2014). References External links POM-WEB page (registration and information) MPI-POM and Taiwan Ocean Prediction (TOP) Physical oceanography Earth sciences Numerical climate and weather models
Princeton Ocean Model
[ "Physics" ]
1,405
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
14,280,179
https://en.wikipedia.org/wiki/2%2C2%27-Bis%282-indenyl%29%20biphenyl
2,2′-Bis(2-indenyl) biphenyl is an organic compound with the formula [C6H4C9H7]2. The compound is the precursor, upon deprotonation, to ansa-metallocene complexes within the area of transition metal indenyl complexes. Metals studied with 2,2′-bis(2-indenyl) biphenyl include titanium, zirconium, and hafnium. The ligand and its complexes have been prepared by the research group of the late Brice Bosnich at The University of Chicago. Zirconium and hafnium complexes made from this ligand were found to be active catalysts for the polymerization of the smallest alkenes – compounds with carbon-carbon double bonds—namely, ethylene and propylene. The use of such complexes in the polymerization of alkenes has since been reported, and patented by DSM Research. References Ligands Catalysts Hydrocarbons
2,2'-Bis(2-indenyl) biphenyl
[ "Chemistry" ]
211
[ "Hydrocarbons", "Catalysts", "Ligands", "Catalysis", "Coordination chemistry", "Organic compounds", "Chemical kinetics" ]
14,284,288
https://en.wikipedia.org/wiki/Indole-3-glycerol-phosphate%20synthase
The enzyme indole-3-glycerol-phosphate synthase (IGPS) () catalyzes the chemical reaction 1-(2-carboxyphenylamino)-1-deoxy-D-ribulose 5-phosphate 1-C-(indol-3-yl)-glycerol 3-phosphate + CO2 + H2O This enzyme belongs to the family of lyases, to be specific, the carboxy-lyases, which cleave carbon-carbon bonds. The systematic name of this enzyme class is 1-(2-carboxyphenylamino)-1-deoxy-D-ribulose-5-phosphate carboxy-lyase [cyclizing 1-C-(indol-3-yl)glycerol-3-phosphate-forming]. Other names in common use include indoleglycerol phosphate synthetase, indoleglycerol phosphate synthase, indole-3-glycerophosphate synthase, 1-(2-carboxyphenylamino)-1-deoxy-D-ribulose-5-phosphate, and carboxy-lyase (cyclizing). This enzyme participates in phenylalanine, tyrosine and tryptophan biosynthesis and two-component system - general. It employs one cofactor, pyruvate. Structural studies In some bacteria, IGPS is a single chain enzyme. In others, such as Escherichia coli, it is the N-terminal domain of a bifunctional enzyme that also catalyses N-(5'-phosphoribosyl)anthranilate isomerase () (PRAI) activity, the third step of tryptophan biosynthesis. In fungi, IGPS is the central domain of a trifunctional enzyme that contains a PRAI C-terminal domain and a glutamine amidotransferase () (GATase) N-terminal domain. A structure of the IGPS domain of the bifunctional enzyme from the mesophilic bacterium E. coli (eIGPS) has been compared with the monomeric indole-3-glycerol phosphate synthase from the hyperthermophilic archaeon Sulfolobus solfataricus (sIGPS). Both are single-domain (beta/alpha)8 barrel proteins, with one (eIGPS) or two (sIGPS) additional helices inserted before the first beta strand. As of late 2007, 11 structures have been solved for this class of enzymes, with PDB accession codes , , , , , , , , , , and . References Further reading Protein domains EC 4.1.1 Pyruvate enzymes Enzymes of known structure
Indole-3-glycerol-phosphate synthase
[ "Biology" ]
616
[ "Protein domains", "Protein classification" ]
17,123,227
https://en.wikipedia.org/wiki/Chemerin
Chemerin, also known as retinoic acid receptor responder protein 2 (RARRES2), tazarotene-induced gene 2 protein (TIG2), or RAR-responsive protein TIG2 is a protein that in humans is encoded by the RARRES2 gene. Function Retinoids exert biologic effects such as potent growth inhibitory and cell differentiation activities and are used in the treatment of hyperproliferative dermatological diseases. These effects are mediated by specific nuclear receptor proteins that are members of the steroid and thyroid hormone receptor superfamily of transcriptional regulators. RARRES1, RARRES2 (this gene), and RARRES3 are genes whose expression is upregulated by the synthetic retinoid tazarotene. RARRES2 is thought to act as a cell surface receptor. Chemerin is a chemoattractant protein that acts as a ligand for the G protein-coupled receptor CMKLR1 (also known as ChemR23). Chemerin is a 14 kDa protein secreted in an inactive form as prochemerin and is activated through cleavage of the C-terminus by inflammatory and coagulation serine proteases. Chemerin was found to stimulate chemotaxis of dendritic cells and macrophages to the site of inflammation. In humans, chemerin mRNA is highly expressed in white adipose tissue, liver and lung while its receptor, CMKLR1 is predominantly expressed in immune cells as well as adipose tissue. Because of its role in adipocyte differentiation and glucose uptake, chemerin is classified as an adipokine. Role as an adipokine Chemerin has been implicated in autocrine / paracrine signaling for adipocyte differentiation and also stimulation of lipolysis. Studies with 3T3-L1 cells have shown chemerin expression is low in pre-differentiated adipocytes but its expression and secretion increases both during and after differentiation in vitro. Genetic knockdown of chemerin or its receptor, CMKLR1 impairs differentiation into adipocytes, and reduces the expression of GLUT4 and adiponectin, while increasing expression of IL-6 and insulin receptor. Furthermore, post-differentiation knockdown of chemerin reduced GLUT4, leptin, adiponectin, perilipin, and reduced lipolysis, suggesting chemerin plays a role in metabolic function of mature adipocytes. Studies using mature human adipocytes, 3T3-L1 cells, and in vivo studies in mice showed chemerin stimulates the phosphorylation of the MAPKs, ERK1, and ERK2, which are involved in mediating lipolysis. Studies in mice have shown neither chemerin nor CMKLR1 are highly expressed in brown adipose tissue, indicating that chemerin plays a role in energy storage rather than thermogenesis.2 Role in obesity and diabetes Given chemerin's role as a chemoattractant and a recent finding macrophages have been implicated in chronic inflammation of adipose tissue in obesity. This suggests chemerin may play an important role in the pathogenesis of obesity and insulin resistance. Studies in mice found that feeding mice a high-fat diet, resulted in increased expression of both chemerin and CMKLR1. In humans, chemerin levels are significantly different between individuals with normal glucose tolerance and individuals with type II diabetes and first degree relatives. Moreover, chemerin levels show a significant correlation with body mass index, plasma triglyceride levels and blood pressure. It was found incubation of 3T3-L1 cells with recombinant human chemerin protein facilitated insulin-stimulated glucose uptake. This suggests chemerin plays a role in insulin sensitivity and may be a potential therapeutic target for treating type II diabetes. References Further reading Proteins
Chemerin
[ "Chemistry" ]
829
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
17,125,914
https://en.wikipedia.org/wiki/EAST-ADL
EAST-ADL is an Architecture Description Language (ADL) for automotive embedded systems, developed in several European research projects. It is designed to complement AUTOSAR with descriptions at higher level of abstractions. Aspects covered by EAST-ADL include vehicle features, functions, requirements, variability, software components, hardware components and communication. Currently, it is maintained by the EAST-ADL Association in cooperation with the European FP7 MAENAD project. Overview EAST-ADL is a domain-specific language using meta-modeling constructs such as classes, attributes, and relationships. It is based on concepts from UML, SysML and AADL, but adapted for automotive needs and compliance with AUTOSAR. There is an EAST-ADL UML2 profile which is used in UML2 tools for user modeling.The EAST-ADL definition also serves as the specification for implementation in domain-specific tools. EAST-ADL contains several abstraction levels. The software- and electronics-based functionality of the vehicle are described at different levels of abstraction. The proposed abstraction levels and the contained elements provide a separation of concerns and an implicit style for using the modeling elements. The embedded system is complete on each abstraction level, and parts of the model are linked with various traceability relations. This makes it possible to trace an entity from feature down to components in hardware and software. EAST-ADL is defined with the development of safety-related embedded control systems as a benchmark. The EAST-ADL scope comprises support for the main phases of software development, from early analysis via functional design to the implementation and back to integration and validation on vehicle level. The main role of EAST-ADL is that of providing an integrated system model. On this basis, several concerns are addressed: Documentation, in terms of an integrated system model. Communication between engineers, by providing predefined views as well as related information. Analysis, through the description of system structure and properties. Behavioural models for simulation or code generation are supported as references from EAST-ADL functions to external models, such as a subsystem in MATLAB/Simulink. Organisation of EAST-ADL Meta-Model The EAST-ADL meta-model is organized according to 4 abstraction levels: Vehicle level contains modeling elements to represent intended functionality in a solution-independent way Analysis level represents the abstract functional decomposition of the vehicle with the principal internal and external interfaces. Design level has the detailed functional definition, a hardware architecture and allocations of functions to hardware. Implementation level relies on AUTOSAR elements and does not have EAST-ADL-specific constructs for the core structure. For all abstraction levels, relevant extension elements for requirements, behavior, variability and dependability are associated to the core structure. Relation between EAST-ADL and AUTOSAR Instead of providing modeling entities for the lowest abstraction level, i.e. implementation level, EAST-ADL uses unmodified AUTOSAR entities for this purpose and provides means to link EAST-ADL elements on higher abstraction levels to AUTOSAR elements. Thus, EAST-ADL and AUTOSAR in concert provide means for efficient development and management of the complexity of automotive embedded systems from early analysis right down to implementation. Concepts from model-based development and component-based development reinforce one another.An early, high-level representation of the system can evolve seamlessly into the detailed specifications of the AUTOSAR language. In addition, the EAST-ADL incorporates the following system development concerns: Modeling of requirements and verification/validation information, Feature modeling and support for software system product lines, Modeling of variability of the system design, Structural and behavioral modeling of functions and hardware entities in the context of distributed systems, Environment, i.e., plant model and adjacent systems, and Non-functional operational properties such as a definition of function timing and failure modes, supporting system level analysis. The EAST-ADL metamodel is specified according to the same rules as the AUTOSAR metamodel, which means that the two sets of elements can co-exist in the same model. The dependency is unidirectional from EAST-ADL to AUTOSAR, such that AUTOSAR is independent of EAST-ADL. However, relevant EAST-ADL elements can reference AUTOSAR elements to provide EAST-ADL support for requirements, variability, safety, etc. to the AUTOSAR domain. A model may thus be defined where AUTOSAR elements represent the software architecture and EAST-ADL elements extend the AUTOSAR model with orthogonal aspects and represents abstract system information through e.g. function and feature models. Such model can be defined in UML, by applying both an EAST-ADL profile and an AUTOSAR profile, or in a domain specific tool based on a merged AUTOSAR and EAST-ADL metamodel. History and Specification of EAST-ADL The EAST-ADL language has been defined in several steps within European research projects: EAST-ADL is governed by the EAST-ADL Association, founded in September 2011. The EAST-ADL UML2 profile is represented in the EAST-ADL annex to the OMG MARTE profile. Discussion While interest from automotive companies in EAST-ADL is increasing over the past years, EAST-ADL is still to be seen as a research effort (as of 2012). The practical acceptance of EAST-ADL in the automotive industry is still very low, even though EAST-ADL addresses many important aspects of vehicle development. EAST-ADL is used as a reference model in other research projects, e.g. CESAR and TIMMO-2-USE Modeling Tools and File Format EAST-ADL tool support is still limited, although a UML profile is available and domain specific tools such as MentorGraphics VSA, MetaCase MetaEdit+ and Systemite SystemWeaver have been tailored for EAST-ADL in the context of research projects and with customers. Papyrus UML, extended within the ATESST project as a concept demonstrator has EAST-ADL support, and MagicDraw, can also provide EAST-ADL palettes, diagrams, etc. In the case of UML, developers also need to have knowledge of UML (classes, stereotypes, arrow types, ..) for modeling with EAST-ADL. Many automotive engineers, in particular mechanical engineers, hardware developers, process experts) do not have this knowledge and prefer other approaches. EATOP is an upcoming initiative to make an Eclipse-based implementation of the EAST-ADL meta-model. An XML-based exchange format, EAXML, allows tools to exchange EAST-ADL models. The EAXML schema is autogenerated from the EAST-ADL metamodel according to the same principles as the AUTOSAR ARXML schema. Currently, the exchange format is supported by the EAST-ADL prototype of Mentor Graphics VSA, MetaEdit+ and SystemWeaver. For UML tooling, it is possible to exchange models using XMI, subject to the XMI compatibility between tools. Similar approaches Unified Modeling Language (UML) Systems Modeling Language (SysML) Architecture analysis and design language (AADL) AUTOSAR SystemDesk References External links www.east-adl.info EAST-ADL Association www.maenad.eu MAENAD project, current (2012) main contributing project to EAST-ADL. www.atesst.org Home of ATESST and ATESST2, former main EAST-ADL projects. Data modeling languages Software architecture Systems architecture Architecture description language
EAST-ADL
[ "Engineering" ]
1,549
[ "Systems engineering", "Design", "Systems architecture" ]
17,126,006
https://en.wikipedia.org/wiki/Microactuator
A microactuator is a microscopic servomechanism that supplies and transmits a measured amount of energy for the operation of another mechanism or system. As a general actuator, following standards have to be met: Large travel High precision Fast switching Low power consumption Power free force sustainability For microactuator, there are two in addition Microstructurability Integrability Principle of microactuators The basic principle can be described as the expression for mechanical work since an actuator is to manipulate positions and therefore force is needed. For different kind of microactuators, different physical principles are applied. Classes of microactuators Electrostatic Electromagnetic Piezoelectric Fluid Thermal See also Newton's laws Euler–Bernoulli beam equation Electrostatics Electromagnetism Piezoelectricity Microfluidics Sensors Nanotube nanomotor Microtechnology Actuators
Microactuator
[ "Materials_science", "Engineering" ]
188
[ "Materials science", "Microtechnology" ]
17,131,740
https://en.wikipedia.org/wiki/International%20Society%20of%20Dynamic%20Games
The International Society of Dynamic Games (ISDG) is an international non-profit, professional organization for the advancement of the theory of dynamic games. History The ISDG was founded on August 9, 1990 in Helsinki, Finland, at the site of the 4th International Symposium on Dynamic Games and Applications in the Helsinki University of Technology. ISDG is governed by an executive board chaired by a president. The first president of the society was professor Tamer Başar. In past years the presidents of ISDG were Tamer Başar 1990-1994 Alain Haurie 1994-1998 Pierre Bernhard 1998-2002 Georges Zaccour 2002-2006 Geert Jan Olsder 2006-2008 Leon Petrosyan 2008-2012 Michèle Breton 2012-2016 Vladimir Mazalov 2016-2022 Florian Wagener 2022- The objectives of ISDG to promote and foster the development and applications of the theory of dynamic games. to disseminate scientific information through all conveniently adopted support services. ISDG achieves these goals by organizing or co-organizing symposia, conferences and workshops and publishing distinguished high-standard journals to establish links with the international scientific community and in particular with other societies dealing with game theory, optimization, decision analysis and dynamical systems. ISDG publications Annals of the International Society of Dynamic Games (series ed.: Tamer Başar; published by Birkhäuser) Dynamic Games and Applications (editor-in-chief: Georges Zaccour; published by Birkhäuser) International Game Theory Review (managing editor: David W. K. Yeung, editors: Hans Peters, Leon A. Petrosyan; published by World Scientific Publishing Co. Pte. Ltd.) The Isaacs Award The executive board of the International Society of Dynamic Games decided in 2003 to establish a prize to recognize the "outstanding contribution to the theory and applications of dynamic games" of two scholars at each of its symposium, starting in 2004. The prize was named after Rufus Isaacs, the acknowledged founding father of differential games. The recipients of this prize are: 2004: Yu-Chi Ho & George Leitmann 2006: Nikolay Krasovskii & Wendell Fleming 2008: Pierre Bernhard & Alain Haurie 2010: Tamer Başar & Geert Jan Olsder 2012: Steffen Jørgensen & Karl Sigmund 2014: Eitan Altman & Leon Petrosyan 2016: Martino Bardi & Ross Cressman 2018: Andrzej S. Nowak & Georges Zaccour 2022: Pierre Cardaliaguet & Mabel Tidball 2024: Joel Brown & Roland Malhamé References External links Game Theory Society ISDG. Russian Chapter Organizations related to game theory International professional associations
International Society of Dynamic Games
[ "Mathematics" ]
560
[ "Game theory", "Organizations related to game theory" ]
17,131,878
https://en.wikipedia.org/wiki/Tamer%20Ba%C5%9Far
Mustafa Tamer Başar (born January 19, 1946) is a control and game theorist who is the Swanlund Endowed Chair and Center for Advanced Study Professor of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign, USA. He is also the Director of the Center for Advanced Study (since 2014). Education Tamer Başar received a B.S. in Electrical Engineering from Boğaziçi University (formerly known as Robert College) at Bebek, in Istanbul, Turkey, in 1969, and M.S., M.Phil., and Ph.D. in engineering and applied science from Yale University, in 1970, 1971 and 1972, respectively. Academic life He joined the University of Illinois at Urbana–Champaign - Electrical and Computer Engineering Department in 1981. He was the founding president of the International Society of Dynamic Games during 1990–1994, the president of the IEEE Control Systems Society in 2000, and the president of the American Automatic Control Council during 2010–2011. He received the Medal of Science of Turkey in 1993, H.W. Bode Lecture Prize of the IEEE Control Systems Society in 2004, Georgia Quazza Medal of the International Federation of Automatic Control in 2005, the Richard E. Bellman Control Heritage Award in 2006, Isaacs Award of the International Society of Dynamic Games in 2010, and IEEE Control Systems Award in 2014. He was elected as a member of National Academy of Engineering in 2000 in Electronics, Communication & Information Systems Engineering and Industrial, Manufacturing & Operational Systems Engineering for the development of dynamic game theory and application to robust control of systems with uncertainty. He is a Fellow of IEEE, IFAC, and SIAM. Honorary degrees and chairs He has been awarded Honorary Doctor of Science degrees and Honorary Professorships from: Honorary Professorship, Shandong University, Jinan, China, 2019 Honorary Chair Professorship from Tsinghua University, Beijing, China in 2014 Honorary Doctorate (Doctor Honoris Causa) from Boğaziçi University, Istanbul, Turkey in 2012 Honorary Doctorate from the National Academy of Sciences of Azerbaijan in 2011 Honorary Professorship from Northeastern University, Shenyang, China in 2008 Honorary Doctorate (Doctor Honoris Causa) from Doğuş University, Istanbul, Turkey in 2007 Swanlund Endowed Chair Professorship at UIUC in 2007 Research areas His research interests include optimal, robust, and nonlinear control; large-scale systems; dynamic games; stochastic control; estimation theory; stochastic processes; and mathematical economics. Awards AAA&S Member (2023) IEEE Control Systems Award (2014) Honorary Chair Professorship from Tsinghua University, Beijing, China (2014) Honorary Doctorate (Doctor Honoris Causa) from Boğaziçi University, Istanbul (2012) SIAM Fellow (2012) Honorary Doctorate from the National Academy of Sciences of Azerbaijan (2011) Isaacs Award of ISDG (2010, 2014) Honorary Professorship from Northeastern University, Shenyang, China (2008) Swanlund Endowed Chair at UIUC (2007) Honorary Doctorate (Doctor Honoris Causa) from Doğuş University, Istanbul (2007) Richard E. Bellman Control Heritage Award (2006) Giorgio Quazza Medal of IFAC (2005) Outstanding Service Award of IFAC (2005) IFAC Fellow (2005) Center for Advanced Study Professorship at UIUC (2005) Hendrik Wade Bode Lecture Prize of the IEEE Control Systems Society (2004) Tau Beta Pi Daniel C. Drucker Eminent Faculty Award of the College of Engineering of UIUC (2004) Elected to the National Academy of Engineering (of the USA) (2000) IEEE Millennium Medal (2000) Fredric G. and Elizabeth H. Nearing Distinguished Professorship at UIUC (1998) Axelby Outstanding Paper Award (1995) Distinguished Member Award of the IEEE Control Systems Society (1993) Medal of Science of Turkey (1993) IEEE Fellow (1983) See also List of game theorists List of members of the National Academy of Engineering (Electronics) References 1946 births Living people Boğaziçi University alumni Academic staff of Boğaziçi University Yale School of Engineering & Applied Science alumni University of Illinois Urbana-Champaign faculty Members of the United States National Academy of Engineering Game theorists Control theorists Communication theorists Turkish academics Turkish scientists Turkish mathematicians Turkish electrical engineers American electrical engineers American academics of Turkish descent Fellows of the IEEE Fellows of the Society for Industrial and Applied Mathematics Richard E. Bellman Control Heritage Award recipients
Tamer Başar
[ "Mathematics", "Engineering" ]
876
[ "Game theorists", "Game theory", "Control engineering", "Control theorists" ]
17,132,502
https://en.wikipedia.org/wiki/Michael%20Athans
Michael Athans (born Michael Athanassiades in Drama, Greece, May 3, 1937 - May 26, 2020) was a Greek-American control theorist and a Professor Emeritus in the Department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology. He was a Fellow of the IEEE (1973) and a Fellow of the AAAS (1977). He was the recipient of numerous awards for his contributions in the field of control theory. A pioneer in the field of control theory, he helped shape modern control theory and spearheaded the field of multivariable control system design and the field of robust control. Athans was a member of the technical staff at Lincoln Laboratory from 1961 to 1964, and a Department of Electrical Engineering and Computer Science faculty member from 1964 to 1998. Upon retirement, Athans moved to Lisbon, Portugal, where he was an Invited Research Professor in the Institute for Systems and Robotics, Instituto Superior Técnico where he received a honoris causa doctorate from the Universidade Técnica de Lisboa in 2011. Education Athans received his B.S., M.S., and Ph.D. in Electrical Engineering from the University of California, Berkeley in 1958, 1959, and 1961, respectively. Academic career From 1961 to 1964, Athans was employed as a member of the technical staff at the MIT Lincoln Laboratory, Lexington, Mass. where he conducted research in optimal control and estimation theory. From 1964, until his early retirement in 1998, he was a faculty member in the MIT Electrical Engineering and Computer Sciences department, where he held the rank of Professor. He also was the director of the MIT Laboratory for Information and Decision Systems (LIDS) from 1974 to 1981. In 1978 he co-founded ALPHATECH Inc., Burlington, Mass., where he served as Chairman of the Board of Directors. He has also consulted for numerous other industrial organizations and government panels. In 1995 he was visiting professor in the Department of Electrical and Computer Engineering at the National Technical University of Athens, Greece. From 1997 to 2011 he was an Invited Research Professor in the Institute for Systems and Robotics, Instituto Superior Técnico, Lisbon, Portugal. Athans is the co-author of Optimal Control (McGraw Hill, 1966), Systems, Networks and Computation: Basic Concepts (McGraw Hill, 1972) and Systems, Networks and Computation: Multivariable Methods (McGraw Hill, 1974). In 1974 he developed 65 color TV lectures and study guides on Modern Control Theory. In addition he authored or co-authored over 350 technical papers and reports. His research interests and contributions span the areas of optimum system and estimation theory, robust and adaptive multivariable control systems, and the application of these methodologies to defense, large space structures, IVHS transportation systems, aerospace, marine, automotive, power, manufacturing, economic, and military C3 systems. His last research interests focused on dynamic models of the human immune system and robust adaptive control methodologies. In 1964 Athans was the first recipient of the American Automatic Control Council's Donald P. Eckman Award "for outstanding contributions to the field of automatic control". In 1969 he was the first recipient of the Frederick E. Terman Award of the American Society for Engineering Education as "the outstanding young electrical engineering educator." In 1980 he received the second Education Award of the American Control Council for his "outstanding contributions and distinguished leadership in automatic control education." In 1973 he was elected Fellow of the IEEE and in 1977 Fellow of the AAAS. In 1983 he was elected Distinguished Member of the IEEE Control Systems Society. He received the 1993 H.W. Bode Prize from the IEEE Control Systems Society, which also included the delivery of the Bode Plenary Lecture at the 1993 IEEE Conference on Decision and Control. He was the recipient of the Richard E. Bellman Control Heritage Award of the American Automatic Control Council "In Recognition of a Distinguished Career in Automatic Control; As a Leader and Champion of Innovative Research; As a Contributor to Fundamental Knowledge in Optimal, Adaptive, Robust, Decentralized and Distributed Control; and as a Mentor to his Students" presented in June 1995 at the American Control Conference. In 1996 he was awarded honorary doctorates from the National Technical University of Athens, Greece, and from the Technical University of Crete, Chania, Crete, Greece. In July 2002 he was awarded the Ktisivos Award, “In recognition of contributions to control and estimation theory, awarded by the Mediterranean Control and Automation Association. He was the recipient of a Polish Academy of Sciences Medal, “For contributions to Control Theory” in Warsaw, Poland, on June 30, 2005. In 2006 the Institute of Electrical and Electronics Engineers (IEEE) elected him Life Fellow. Athans served in numerous committees of IEEE, IFAC, AACC and AAAS; he was president of the IEEE Control Systems Society from 1972 to 1974. In addition he was a member of AIAA, Phi Beta Kappa, Eta Kappa Nu, and Sigma Xi. He served as Associate Editor of the IEEE Transactions on Automatic Control, Journal of Dynamic Systems and Control, and the IFAC journal Automatica. Awards American Automatic Control Council: Donald P. Eckman Award "for outstanding contributions to the field of automatic control" in 1964. American Society for Engineering Education: Frederick Emmons Terman Award as "the outstanding young electrical engineering educator" in 1969. American Automatic Control Council: John R. Ragazzini Award for "outstanding contributions and distinguished leadership in automatic control education" in 1980. IEEE Control Systems Society's 1993 Hendrik Wade Bode Prize. American Automatic Control Council: Richard E. Bellman Control Heritage Award in 1995. Honoris causa doctorate from Universidade Técnica de Lisboa in 2011. References External links Home Page 1937 births 2020 deaths Greek engineers Greek emigrants to the United States Control theorists Fellows of the IEEE Fellows of the American Association for the Advancement of Science Richard E. Bellman Control Heritage Award recipients People from Drama, Greece
Michael Athans
[ "Engineering" ]
1,224
[ "Control engineering", "Control theorists" ]
17,133,396
https://en.wikipedia.org/wiki/Rutherford%20Aris
Rutherford "Gus" Aris (September 15, 1929 – November 2, 2005) was a chemical engineer, control theorist, applied mathematician, and a regents professor emeritus of chemical engineering at the University of Minnesota (1958–2005). Early life Aris was born in Bournemouth, England, to Algernon Aris and Janet (Elford). From a young age, Aris was interested in chemistry. Aris's father owned a photo-finishing works, where he would experiment with chemicals and reactions. He attended St Martin's, a small local kindergarten and moved to St Wulfran's, a local preparatory school, now Queen Elizabeth's School. Here, he studied Latin (a skill he would make much use of later in his life) and was encouraged to continue pursuing his interest in chemistry. Because of his achievements, he was referred to the Reverend C. B. Canning, Headmaster of Canford School, a well-known public school, close to Wimborne. On the strength of this interview, he was given a place in the newly created house that the school had provided for day-boarders. This was in 1943, when he was 14. His mathematics teacher, H. E. Piggott, had a particular influence on Aris due to "the liveliness, enthusiasm, and care that he brought to his teaching", which "were unparalleled in my experience". Piggot spent substantial time on pure and applied mathematical papers, an experience that Aris described as "extraordinary". Aris dedicated his book Discrete Dynamic Programming to Piggot 15 years later. Industry experience Imperial Chemical Industries Piggot helped Aris to get a job working for Imperial Chemical Industries (ICI) as a laboratory technician in the Mechanical Engineering Department of the Research Labs, at the age of 17. While working at ICI, Aris attended the University of London part-time to work toward his B.Sc. Aris described this as "an excellent way to get a degree, although perhaps not so good a way of getting an education." After 2 years Aris made an attempt to earn the B.Sc. Honours Degree. He sat 12 papers (exams) covering a wide range of mathematical topics, and got a degree with first-class honours. University of Edinburgh In 1948, ICI sent him to Edinburgh, Scotland for two years of study at the Mathematical Institute at the University of Edinburgh, which was presided over by Alexander Aitken. Aris, who was accepted for post-graduate studies but not for a Ph.D., did post-graduate work at the University under the supervision of John Cossar. During this break from ICI, Aris also registered for a University of London M.Sc. in the area of mathematical analysis. When he sat the papers, however, he failed to get the degree. ICI Billingham In 1950, Aris returned to ICI and began working for C. H. Bosanquet in Billingham, England. Working with Bosanquet provided Aris the opportunity to work on a large variety of problems, including catalysis, heat transfer, gas scrubbing, and centrifuge design. Aris was then promoted to Technical Officer, where he began working on chromatography. He utilized results from a paper on dispersion written by Geoffrey Taylor, and extended its results, ultimately writing a paper in 1955 that applied the method of moments to Taylor's approach. He submitted the paper to the Proceedings of the Royal Society, with help from Taylor (who was a Fellow of the Royal Society). Aris communicated with Taylor regarding dispersion and diffusion. In the meantime, however, he was transferred to a different division, where he began working on chemical reactor design. Frustrated with the transfer and with the proprietary nature of his commercial work, which made publishing work very difficult, he decided to move to a university, applying for several lectureship positions during 1954 and 1955 without success. Aris continued to work at ICI, focusing much of his efforts on mathematical modeling of adiabatic multi-bed reactors, a topic that was the central focus of an M.S. student at the University of Minnesota. In 1955, Neal Amundson of the University of Minnesota, who was on sabbatical at Cambridge, visited the ICI Research Department, where Aris was working. Amundson suggested to ICI, during his visit, that Aris be sent to the University of Minnesota in Minneapolis for a year of study. Several months later, Aris later met Amundson at Cambridge and told Amundson of his plans to leave ICI for academia, plans that he had not revealed to his superiors at ICI. Amundson offered Aris a research fellowship at the University of Minnesota, which Aris accepted. After notifying ICI of his intent to leave, he moved to Minneapolis, Minnesota at the end of 1955. Academic career University of Minnesota research fellowship Aris began working on chemically reacting laminar flow, applying Kummer's hypergeometric function to the problem, and control of a stirred tank reactor with some unusual properties. Both problems required the use of a computer to perform calculations, and Amundson provided Aris with a computer science graduate student with whom to work. Aris's research fellowship was extended for a second year, but shortly afterward, in October 1956, Aris was informed of a lectureship opening at the University of Edinburgh. He took advantage of the opportunity, and left immediately for Edinburgh. University of Edinburgh lectureship Aris was on the faculty of the University of Edinburgh for two years, 1956–1958. While at Edinburgh, Aris wrote papers on his work at the University of Minnesota and at ICI. Having the lectureship position allowed Aris to gain experience lecturing to students. He also attended the lectures of, and interacted with, the chair of chemical technology at the University of Edinburgh, Kenneth Denbigh, who was a well-known thermodynamicist and an editor of the journal Chemical Engineering Science. University of Minnesota faculty Aris returned to Minneapolis in the summer of 1957 to continue his work on the stirred tank reactor problem. In August he became engaged to Claire Holman, and when he informed Amundson, Amundson offered him a faculty position at the University. Aris accepted the job, and began working as an assistant professor at the University of Minnesota in 1958. Aris had not formally received a Ph.D., but had registered three years earlier with the University of London, where he had earned his B.Sc., and which offered Ph.D. degrees by correspondence. A Ph.D. degree could be earned without following a strict preparation process; the individual needed to propose a research program after 3 years, select a committee of examiners, and submit a dissertation, and after an oral examination by and approval from the committee, the degree would be granted. Amundson had suggested Aris look into Richard Bellman's method of dynamic programming for his dissertation. Amundson informally served as Aris's advisor, and Aris completed his dissertation on the topic in 1960. His dissertation was published by the Academic Press in a series of which Bellman was the editor, and Bellman took note of the dissertation. Aris and Amundson visited Bellman at the Rand Corporation, where Bellman was working on economic models. The dynamic programming method had originally been developed for economics, but Bellman was attracted by applications in engineering, and the meeting led to a joint collaboration and a publication. Aris's research at the University of Minnesota focused on optimization, dynamic programming, control theory, Taylor diffusion, and computing engines. Aris also taught a graduate fluid mechanics course, and eventually wrote the book Vectors, tensors, and the basic equations of fluid mechanics in an effort to make the rational mechanics approach of Truesdell, Coleman, and others more accessible to students. First Cambridge sabbatical After he had been with the department for six years, Aris took a sabbatical at the Shell Department of Chemical Engineering at the University of Cambridge during the 1964–1965 academic year, where he was able to interact with many well-known engineers and mathematicians such as Geoffrey Taylor and John Littlewood. He also lectured in many places in Europe, including Brussels, Copenhagen, and Trondheim. Second Cambridge sabbatical Aris took a second sabbatical after 6 years, again going to the University of Cambridge, during the 1971–1972 academic year. He spent his time writing a monograph on mathematical models for porous catalysts, which he did not finish until 1973. During his sabbatical, he received financial support in the form of a Guggenheim grant. This also provided Aris an opportunity to participate in the board overseeing the formation and development of Los Alamos National Lab's Center for Nonlinear Studies, which allowed Aris the opportunity to travel to Los Alamos during the 1970s and 1980s. Department chairmanship In 1974, Neal Amundson, who had been the department chairman of the University of Minnesota's chemical engineering department for nearly 25 years, resigned from this position. Aris was appointed acting head of the department, while Amundson left Minnesota for the University of Houston. Aris acted as department chair for 4 years, and was relieved of the position in 1978. Coinciding with this was an offer from Princeton University to join the faculty there, as well as the offer to stay at the University of Minnesota and work half-time in the chemical engineering department and half-time in the paleography department. Aris decided to stay at the University of Minnesota. Paleography In addition to his interest in chemical engineering, Aris was also interested in the humanities. At the University of Minnesota, Aris was able to pursue his interest in paleography when he was granted a professorship in the Classics Department, where he taught classes and published books and research articles. Aris published his book Explicatio Formarum Literarum, or The Unfolding of Letterforms, which covered the history of written letters from the 1st century to the 15th century. Further sabbaticals Aris had several other sabbaticals over his 40-year career. Through the Fairchild Distinguished Scholar program at the California Institute of Technology, Aris was able to spend a portion of 1977 and a year in 1980–1981 on sabbatical in Pasadena, California. He dedicated a portion of his time to paleography, utilizing the nearby Huntington Library. Additionally, through a personal connection at the University of Leeds, Aris was able to spend several weeks there as Brotherton Professor in 1985. Aris spent his last sabbatical, from 1993–1994, at the Institute for Advanced Study at Princeton. Death Aris penned many poems and anecdotes, many relating his difficulties with Parkinson's disease, from which Aris eventually died. Aris died on November 2, 2005, in Edina, Minnesota. Legacy Over the course of his long academic career, Aris was a visiting professor at many institutions, including Cambridge University, the California Institute of Technology, and Princeton University; he authored 13 books and more than 300 chemical engineering research articles, and mentored 48 Ph.D. and 20 M.S. graduate students. Aris was well known for his research on mathematical modeling, chemical reactor and chemical process design, and distillation techniques, as well as his paleographic research. After he had been department head for four years, in 1978 he was named Regents Professor. Some of the awards and honors earned by Aris include a Guggenheim Fellowship, election to the National Academy of Engineering in 1975 and the American Academy of Arts and Sciences in 1988. Aris was also a member of the American Chemical Society, the Society for Mathematical Biology, and the Society of Scribes and Illuminators, among others. Aris was awarded the Richard E. Bellman Control Heritage Award in 1992 for his contributions to the field of control theory. He was awarded the Neal R. Amundson Award for Excellence in Chemical Reaction Engineering by the International Symposia on Chemical Reaction Engineering in 1998. In 2016, the board of the ISCRE (International Symposia on Chemical Reaction Engineering) established the Rutherford Aris Young Investigator Award for Excellence in Chemical Reaction Engineering. This award honors young researchers under the age of 40 to recognize outstanding contributions in experimental and/or theoretical reaction engineering research. Selected bibliography Books Aris, Rutherford (1989). Elementary Chemical Reactor Analysis (Butterworth's Series in Chemical Engineering) .Butterworth-Heinemann Edited books References Footnotes 1929 births 2005 deaths Engineering academics British chemical engineers Control theorists Applied mathematicians University of Minnesota faculty Alumni of the University of London Alumni of the University of Edinburgh Richard E. Bellman Control Heritage Award recipients Fellows of the American Academy of Arts and Sciences People educated at Queen Elizabeth's Grammar School, Wimborne Minster Scientists from Bournemouth Minnesota CEMS
Rutherford Aris
[ "Mathematics", "Engineering" ]
2,652
[ "Applied mathematics", "Applied mathematicians", "Control engineering", "Control theorists" ]
19,334,199
https://en.wikipedia.org/wiki/Atom%20%28time%29
An atom of time or "a-tom" ("indivisible" in Greek), refers to the smallest possible unit of time. History One of the earliest occurrences of the word "atom" to mean the smallest possible unit of measuring time is found in the Greek text of the New Testament in Paul's . The text compares the length of time of the “atom” to the time needed for "the twinkling of an eye." The text reads: "" – the word "atom" is usually translated "a moment" — "In a moment, in the twinkling of an eye". With that meaning, it was later referred to in Medieval philosophical writings as the smallest possible division of time. The earliest known occurrence in English is in Byrhtferth's Enchiridion (a science text) of 1010–1012, where it was defined as 1/564 of a momentum (1½ minutes), and thus equal to almost 160 milliseconds. It was used in the computus, the calculation used to determine the calendar date of Easter. See also Planck time References Units of time Philosophy of time
Atom (time)
[ "Physics", "Mathematics" ]
240
[ "Physical quantities", "Time", "Time stubs", "Units of time", "Quantity", "Philosophy of time", "Spacetime", "Units of measurement" ]
19,334,239
https://en.wikipedia.org/wiki/SCP%2006F6
SCP 06F6 is (or was) an astronomical object of unknown type, discovered on 21 February 2006 in the constellation Boötes during a survey of galaxy cluster CL 1432.5+3332.8 with the Hubble Space Telescope's Advanced Camera for Surveys Wide Field Channel. According to research authored by Kyle Barbary of the Supernova Cosmology Project, the object brightened over a period of roughly 100 days, reaching a peak intensity of magnitude 21; it then faded over a similar period. Barbary and colleagues report that the spectrum of light emitted from the object does not match known supernova types, and is dissimilar to any known phenomenon in the Sloan Digital Sky Survey database. The light in the blue region shows broad line features, while the red region shows continuous emission. The spectrum shows a handful of spectral lines, but when astronomers try to trace any one of them to an element the other lines fail to match up with any other known elements. Because of its uncommon spectrum, the team was not able to determine the distance to the object using standard redshift techniques; it is not even known whether the object is within or outside the Milky Way. Furthermore, no Milky Way star or external galaxy has been detected at this location, meaning any source is very faint. The European X-ray satellite XMM Newton made an observation in early August 2006 which appears to show an X-ray glow around SCP 06F6, two orders of magnitude more luminous than that of supernovae. Observations from the Palomar Transient Factory, reported in 2009, indicate a redshift z = 1.189 and a peak magnitude of −23.5 absolute (comparable to SN2005ap), making SCP 06F6 one of the most luminous transient phenomena known as of that date. Possible causes Supernovae reach their maximum brightness in only 20 days, and then take much longer to fade away. Researchers had initially conjectured that SCP 06F6 might be an extremely remote supernova; relativistic time dilation might have caused a 20-day event to stretch out over a period of 100 days. But this explanation now seems unlikely. Other conjectures that have been advanced involve a collision between a white dwarf and an asteroid, or the collision of a white dwarf with a black hole. An analysis by a team from the University of Warwick (Boris Gänsicke et al.) suggests that the light spectrum is "consistent with emission from a cool, carbon-rich atmosphere at a redshift of z~0.14", possibly representing the core collapse and explosion of a carbon star. Gänsicke's group concurs with Barbary and colleagues that SCP 06F6 may represent "a new class" of celestial object. The analysis of Israeli astronomers of Technion suggests four alternative explanations for SCP 06F6, in plausibility order: the tidal destruction of a carbon-oxygen white dwarf by an intermediate-mass black hole, a type Ia supernova exploding inside the dense stellar wind of a carbon star, an asteroid that was swallowed up by a white dwarf or, least likely, a core-collapse supernova. Observations in 2009 indicate that it may be a pair-instability supernova. The event was similar to SN 2005ap, and other unusually bright supernova suggesting that it was a new type of supernova. References External links Light curves and spectra on the Open Supernova Catalog CBET 546 New Scientist's article from June 2006 when the object was first observed New Scientist's article from September 2008 New Sci June 2009 Astrophysical Journal: Boris T. Gänsicke et al, SCP 06F6: A CARBON-RICH EXTRAGALACTIC TRANSIENT AT REDSHIFT z ~ 0.14. May, 2009 Supernovae Discoveries by the Hubble Space Telescope 20060221 Boötes Unsolved problems in astronomy
SCP 06F6
[ "Physics", "Chemistry", "Astronomy" ]
798
[ "Supernovae", "Unsolved problems in astronomy", "Concepts in astronomy", "Astronomical events", "Boötes", "Constellations", "Astronomical controversies", "Explosions" ]
19,335,893
https://en.wikipedia.org/wiki/CAIFI
The Customer Average Interruption Frequency Index (CAIFI) is a popular reliability index used in the reliability analysis of an electric power system. It is designed to show trends in customers interrupted and helps to show the number of customers affected out of the whole customer base. References Electric power Reliability indices
CAIFI
[ "Physics", "Engineering" ]
58
[ "Power (physics)", "Electrical engineering", "Electric power", "Physical quantities" ]
19,341,001
https://en.wikipedia.org/wiki/Dual%20Work%20Exchanger%20Energy%20Recovery
The Dual Work Exchanger Energy Recovery (DWEER) is an energy recovery device. In the 1990s developed by DWEER Bermuda and licensed by Calder AG for use in the Caribbean. Seawater reverse osmosis (SWRO) needs high pressure and some of the reject stream can be reused by using this device. According to Calder AG, 97% of the energy in the reject stream is recovered. The DWEER system uses a piston doublechamber reciprocating hydraulically driven pump, and a patented valve system in a high pressure batch process with large pressure vessels, similar to a locomotive, to capture and transfer the energy lost in the membrane reject stream. Its advantage is its high efficiency rate, but it suffers from complex and large mechanical components which are susceptible to seawater corrosion due to its metal composition. References Water power Membrane technology
Dual Work Exchanger Energy Recovery
[ "Chemistry" ]
173
[ "Membrane technology", "Separation processes" ]
19,344,297
https://en.wikipedia.org/wiki/Desorption%20atmospheric%20pressure%20photoionization
Desorption atmospheric pressure photoionization (DAPPI) is an ambient ionization technique for mass spectrometry that uses hot solvent vapor for desorption in conjunction with photoionization. Ambient Ionization techniques allow for direct analysis of samples without pretreatment. The direct analysis technique, such as DAPPI, eliminates the extraction steps seen in most nontraditional samples. DAPPI can be used to analyze bulkier samples, such as, tablets, powders, resins, plants, and tissues. The first step of this technique utilizes a jet of hot solvent vapor. The hot jet thermally desorbs the sample from a surface. The vaporized sample is then ionized by the vacuum ultraviolet light and consequently sampled into a mass spectrometer. DAPPI can detect a range of both polar and non-polar compounds, but is most sensitive when analyzing neutral or non-polar compounds. This technique also offers a selective and soft ionization for highly conjugated compounds. History The history of desorption atmospheric pressure photoionization is relatively new, but can be traced back through developments of ambient ionization techniques dating back to the 1970s. DAPPI is a combination of popular techniques, such as, atmospheric pressure photoionziation (APPI) and surface desorption techniques. The photoionization techniques were first developed in the late 1970s and began being used in atmospheric pressure experiments in the mid 1980s. Early developments in the desorption of open surface and free matrix experiments were first reported in literature in 1999 in an experiment using desorption/ionization on silicon (DIOS). DAPPI replaced techniques such as desorption electrospray ionization (DESI) and direct analysis in real time (DART). This generation of techniques are all recent developments seen in the 21st century. DESI was discovered in 2004 at Purdue University, while DART was discovered in 2005 by Laramee and Cody. DAPPI was developed soon after in 2007 at the University of Helsinki, Finland. The development of DAPPI widened the range of detection for nonpolar compounds and added a new dimension of thermal desorption of direct analysis samples. Principle of operation The first operation to occur during desorption atmospheric pressure photoionization is desorption. Desorption of the sample is initiated by a hot jet of solvent vapor that is targeted onto the sample by a nebulizer microchip. The nebulizer microchip is a glass device bonded together by pyrex wafers with flow channels embedded from a nozzle at the edge of the chip. The microchip is heated to 250-350C in order to vaporize the entering solvent and create dopant molecules. Dopant molecules are added to help facilitate the ionization of the sample. Some of the common solvents include: nitrogen, toluene, acetone, and anisole. The desorption process can occur by two mechanisms: thermal desorption or momentum transfer/liquid spray. Thermal desorption uses heat to volatilize the sample and increase the surface temperature of the substrate. As the substrate's surface temperature is increased, the higher the sensitivity of the instrument. While studying the substrate temperature, it was seen that the solvent did not have a noticeable effect on the final temperature or heat rate of the substrate. Momentum transfer or liquid spray desoprtion is based on the solvent interaction with the sample, causing the release of specific ions. The momentum transfer is propagated by the collision of the solvent with the sample along with the transfer of ions with the sample. The transfer of positive ions, such as protons and charge transfers, are seen with the solvents: toluene and anisole. Toluene goes through a charge exchange mechanism with the sample, while acetone promotes a proton transfer mechanism with the sample. A beam of 10 eV photons that are given off by a UV lamp is directed at the newly desorbed molecules, as well as the dopant molecules. Photoionization then occurs, which knocks out the molecule's electron and produces an ion. This technique alone is not highly efficient for different varieties of molecules, particularly those that are not easily protonated or deprotonated. In order to completely ionize samples, dopant molecules must help. The gaseous solvent can also undergo photoionization and act as an intermediate for ionization of the sample molecules. Once dopant ions are formed, proton transfer can occur with the sample, creating more sample ions. The ions are then sent to the mass analyzer for analysis. Ionization mechanisms The main desorption mechanism in DAPPI is thermal desorption due to rapid heating of the surface. Therefore, DAPPI only works well for surfaces of low thermal conductivity. The ionization mechanism depends on the analyte and solvent used. For example, the following analyte (M) ions may be formed: [M + H]+, [M - H]−, M+•, M−•. Types of component geometries Reflection geometry Considered the normal or conventional geometry of DAPPI, this mode is ideal for solid samples that do not need any former preparation. The microchip is parallel to the MS inlet. The microchip heater is aimed to hit the samples at . The UV lamp is directly above the sample and it releases photons to interact with the desorbed molecules that are formed. The conventional method generally uses a higher heating power and gas flow rate for the nebulizer gas, while also increasing the amount of dopant used during the technique. These increases can cause higher background noise, analyte interference, substrate impurities, and more ion reactions from excess dopant ions. Transmission geometry This mode is specialized for analyzing liquid samples, with a metal or polymer mesh replacing the sample plate in reflection geometry. The mesh is oriented from the nebulizer microchip and the mass spec inlet, with the lamp directing photons to the area where the mesh releases newly desorbed molecules. The analyte is thermally desorbed as both the dopant vapor and nebulizer gas are directed through the mesh. It has been seen that steel mesh with low density and narrow strands produces better signal intensities. This type of mesh allows for larger openings in the surface and quicker heating of strands. Transmission mode uses a lower microchip heating power which eliminates some of the issues seen with the reflection geometry above, including low signal noise. This method can also improve the S/N ratio of smaller non-polar compounds. Instrument coupling Separation techniques Thin layer chromatography (TLC) is a simple separation technique that can be coupled with DAPPI-MS to identify lipids. Some of the lipids that were seen to be separated and ionized include: cholesterol, triacylglycerols, 1,2-diol diesters, wax esters, hydrocarbons, and cholesterol esters. TLC is normally coupled with instruments in vacuum or atmospheric pressure, but vacuum pressure gives poor sensitivity for more volatile compounds and has minimal area in the vacuum chambers. DAPPI was used for its ability to ionize neutral and non-polar compounds, and was seen to be a fast and efficient method for lipid detection as it was coupled with both NP-TLC and HPTLC plates. Laser desorption is normally used in the presence of a matrix, such as matrix assisted laser desorption ionization (MALDI), but research has combined techniques of laser desoprtion in atmospheric pressure conditions to produce a method that does not use a matrix or discharge. This method is able to help with smaller compounds, and generates both positive and negative ions for detection. A transmission geometry is taken as the beam and spray are guided at a angle into the coupled MS. Studies have shown the detection of organic compounds such as: farnesene, squalene, tetradecahydroanthracene, 5-alpha cholestane, perylene, benzoperylene, coronene, tetradecylprene, dodecyl sulfide, benzodiphenylene sulfide, dibenzosuberone, carbazole, and elipticine. This method was also seen to be coupled with the mass spectroscopy technique, FTICR, to detect shale oils and some smaller nitrogen containing aromatics. Mass spectrometry Fourier transform ion cyclotron resonance (FTICR) is a technique that is normally coupled with electrospray ionization (ESI), DESI, or DART, which allows for the detection of polar compounds. DAPPI allows for a broader range of polarities to be detected, and a range of molecular weights. Without separation or sample preparation, DAPPI is able to thermally desorb compounds such as oak biochars. The study did cite an issue with DAPPI. If the sample is not homogeneous, then the neutral ions will ionize only the surface, which does not provide an accurate detection for the substance. The scanning of the FTICR allows for the detection of complex compounds with high resolution, which leads to the ability to analyze elemental composition. Applications DAPPI can analyze both polar (e.g. verapamil) and nonpolar (e.g. anthracene) compounds. This technique has an upper detection limit of 600 Da. Compared to desorption electrostray ionization (DESI), DAPPI is less likely to be contaminated by biological matrices. DAPPI was also seen to be more sensitive and contain less background noise than popular techniques such as direct analysis in real time (DART). Performance of DAPPI has also been demonstrated on direct analysis of illicit drugs. Other applications include lipid detection and drug analysis sampling. Lipids can be detected through a coupling procedure with orbitrap mass spectroscopy. DAPPI has also been known to couple with liquid chromotography and gas chromotography mass spectroscopy for the analysis of drugs and aerosol compounds. Studies have also shown where DAPPI has been used to find harmful organic compounds in the environment and in food, such as polycyclic aromatic hydrocarbons (PAH) and pesticides. See also Orbitrap Atmospheric pressure chemical ionization Desorption atmospheric pressure chemical ionization References Mass spectrometry Ion source
Desorption atmospheric pressure photoionization
[ "Physics", "Chemistry" ]
2,148
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Ion source", "Mass spectrometry", "Matter" ]
19,345,545
https://en.wikipedia.org/wiki/Gijs%20Kuenen
Johannes Gijsbrecht Kuenen (born 9 December 1940, Heemstede) is a Dutch microbiologist who is professor emeritus at the Delft University of Technology and a visiting scientist at the University of Southern California. His research is influenced by, and a contribution to, the scientific tradition of the Delft School of Microbiology. Kuenen studied at the University of Groningen, where he received both his Doctorandus degree and in 1972 his Doctorate (PhD) under the supervision of Professor Dr. Hans Veldkamp. The title of his thesis was ‘‘Colourless sulphur bacteria from Dutch tidal mudflats’’. After a short post-doc at the University of California in Los Angeles (USA), he returned as a senior lecturer to Groningen. In 1980, he moved to Delft to become the 4th Professor of Microbiology (succeeding M.W. Beijerinck and A.J. Kluyver) at Delft University of Technology. Kuenen's initial research interests were (the application of) bacteria involved in the natural sulfur cycle and yeast physiology and metabolism. His later interest in the (eco)physiology of nitrifying and denitrifying bacteria has led a.o. to the discovery of the bacteria within the phylum Planctomycetota that perform the Anammox process. In addition, his research has been focussed on (halo) alkaliphilic sulfur-oxidizing bacteria from soda lakes. Gijs Kuenen retired in 2005 but remains active in science. Awards In 2004 Gijs Kuenen became a Knight in the Order of the Netherlands Lion, In 2005 he was elected Fellow of the American Academy of Microbiology. In 2006 he received the Jim Tiedje Award for his outstanding contribution to microbial ecology at the 11th International Symposium on Microbial Ecology in Vienna and in 2007 he was awarded the Procter & Gamble Award in Applied and Environmental Microbiology. For his contribution to the founding of the education Life Science and Technology (Delft University of Technology and Leiden University) in 2005 he received an honorary membership of Study Association LIFE. Named after Kuenen One of the five known anammox genera, with the single member Kuenenia stuttgartiensis, has been named after Kuenen. The Kuenen lab had named the first discovered species Brocadia anammoxidans, after the company Gist-Brocades (now DSM Gist), for which Kuenen did consulting work and in which wastewater the bacteria was discovered. References van Caulil G. (2006) Anammox, the cleaning creature that could not exist; Beijerinck, Kluyver, Kuenen - A goodbye to a remarkable microbiologist, Delft Outlook 2006.1 The anammox online resource: www.anammox.com la Rivière JWM. (2004) The Delft School of Microbiology in historical perspective. Antonie van Leeuwenhoek 71:3-13 Kuenen JG. (2008) Anammox bacteria: from discovery to application. Nature Reviews Microbiology 6:320-326 Sorokin DY. & Kuenen JG. (2005) "Chemolithotrophic haloalkaliphiles from soda lakes". FEMS Microbiology Ecology. 52:287-295 van de Graaf AA., Mulder A., de Bruijn P., Jetten MS., Robertson LA., Kuenen JG. (1995) Anaerobic oxidation of ammonium is a biologically mediated process. Applied and Environmental Microbiology. 61:1246-1251 1940 births Living people Dutch microbiologists Nitrogen cycle Environmental microbiology People from Heemstede University of Groningen alumni Academic staff of the University of Groningen Academic staff of the Delft University of Technology
Gijs Kuenen
[ "Chemistry", "Environmental_science" ]
800
[ "Environmental microbiology", "Nitrogen cycle", "Metabolism" ]
12,601,888
https://en.wikipedia.org/wiki/Universal%20measuring%20machine
Universal measuring machines (UMM) are measurement devices used for objects in which geometric relationships are the most critical element, with dimensions specified from geometric locations (see GD&T) rather than absolute coordinates. The very first uses for these machines was the inspection of gauges and parts produced by jig grinding. While bearing some resemblance to a coordinate-measuring machine (CMM), its usage and accuracy envelope differs significantly. While CMMs typically move in three dimensions and measure with a touch probe, a UMM aligns a spindle (4th axis) with a part geometry using a continuous scanning probe. Originally, universal measuring machines were created to fill a need to continuously measure geometric features in both an absolute and comparative capacity, rather than by a point based coordinate measuring system. A CMM provides a rapid method for inspecting absolute points, but geometric relationships, such as runout, parallelism, perpendicularity, etc., must be calculated rather than measured directly. By aligning an accurate spindle with an electronic test indicator with a geometric feature of interest, rather than using a non-scanning cartesian probe to estimate an alignment, a universal measuring machine fills this need. The indicator can be accurately controlled and moved across a part, either along a linear axis or radially around the spindle, to continuously record profile and determine geometry. This gives the universal machine a very strong advantage over non-scanning measuring methods when profiling flats, radii, contours, and holes, as the detail of the feature can be at the resolution of the probe. More modern CMMs do have scanning probes and thus can determine geometry similarly. In practice, the 1970s-era universal measuring machine is a very slow machine that requires a highly skilled and patient operator to use, and the accuracy built into these machines far outstripped the needs of most industries. As a result, the universal measuring machine today is uncommon, only found as a special-purpose machine in metrology laboratories. Because the machine can make comparative length measurements without moving linear axes, it is a valuable tool in comparing master gauges and length standards. While universal measuring machines were never a mass-produced item, they are no longer available on a production basis, and are produced on a to-order basis tailored to the needs of the metrology lab purchasing it. Manufacturers that perform work that must be measured on such a machine will frequently opt to subcontract the measurement to a laboratory which specializes in it. Universal measuring machines placed under corrected interferometric control and using non-contact gauge heads can measure features to millionths of an inch across the machine's entire envelope, where other types of machine are limited either in number of axes or accuracy of the measurement. The accuracy of the machine itself is negligible, as the environment the machine is the limiting factor to effective accuracy. The earlier mechanical machines were built to hold 10 to 20 millionths of an inch accuracy across the entire machine envelope. References American Society for Precision Engineering, Achieving Accuracy in the Modern Machine Shop Wayne R. Moore, Foundations of Mechanical Accuracy Dimensional instruments Metalworking measuring instruments
Universal measuring machine
[ "Physics", "Mathematics" ]
627
[ "Quantity", "Dimensional instruments", "Physical quantities", "Size" ]
2,486,793
https://en.wikipedia.org/wiki/Nov%C3%BD%20Bor
Nový Bor (; ) is a town in Česká Lípa District in the Liberec Region of the Czech Republic. It has about 11,000 inhabitants. The town is known for its glass industry. The historic town centre is well preserved and is protected by law as an urban monument zone. Administrative division Nový Bor consists of five municipal parts (in brackets population according to the 2021 census): Nový Bor (6,951) Arnultovice (3,398) Bukovany (182) Janov (375) Pihel (527) Etymology The town's original German name Heyde was derived from local vegetation and means "heather". The Czech name Nový Bor was also derived from local vegetation and literally means "new pine forest". Geography Nový Bor is located about north of Česká Lípa and west of Liberec. It lies mostly in the Ralsko Uplands, but in the north the municipal territory also extends into the Lusatian Mountains and Central Bohemian Uplands. The highest point is the hill Pramenný vrch at above sea level. History The first written mention of Nový Bor is from 1471, when the village Arnsdorff (Arnultovice) was founded, today a part of Nový Bor. In 1692, a new settlement was founded, and its construction was completed in 1703. The settlement was originally connected by mayor's law with Arnultovice, but it became separate in 1713. In 1710, it became a property of the Kinsky noble family, and under their rule the settlement grew. At their request, the settlement was promoted to a town in 1757. Since the end of the 18th century Nový Bor became known for its large glass industry (as happened in the whole region). In 1869, the railway was built. During the 19th and 20th centuries, several villages were merged with Nový Bor, as the last Arnultovice. From 1938 to 1945, it was annexed by Nazi Germany and administered as part of Reichsgau Sudetenland. From the establishment of a sovereign municipality in 1848 until 1948, the Czech name of the town changed several times – it was called Hajda, then Bor, then Hajda again, and then Bor u České Lípy. In 1948, it was renamed to its current name. Demographics Economy Nový Bor is known for its glass production. The Crystalex company is the largest glassworks in the country and belongs among the most significant regional employers. Transport The I/13 road (the section from Liberec to Děčín, part of the European route E442) and the I/9 road (the section from Česká Lípa to Rumburk) bypass the town. Nový Bor is located on the railway line Česká Lípa–Rumburk. Sport The local chess club, 1. Novoborský ŠK, has been the most successful club in the top-tier Czech team competition in the 21st century. Between the 2009–10 and 2017–18 seasons, the club won nine consecutive titles. Sights The historic centre is formed by Míru Square, Palackého Square and theirs surroundings. The centre includes valuable Empire and Biedermeier houses. The town hall is from 1751, originally built as a manorial granary. Church of the Assumption of the Virgin Mary was rebuilt to its present Baroque form in 1786–1788. It containts a bell from 1606 and a rare organ. The Virgin Mary statue behind the church is from the 18th century and is the oldest monument in the town. The history of the glass industry in the region is presented in the Glass Museum Nový Bor. In addition to the permanent exhibition there are exhibitions of glass craftsmen. Notable people Josef Max (1804–1855), German-Czech sculptor Emanuel Max (1810–1901), German-Czech sculptor Wilhelm Knechtel (1837–1924), German-Czech gardener and botanist Ernst Schwarz (1895–1983), German philologist Volker Oppitz (born 1931), German economist and mathematician Věra Bradáčová (born 1955), athlete Twin towns – sister cities Nový Bor is twinned with: Aniche, France Břeclav, Czech Republic Frauenau, Germany Oybin, Germany Zwiesel, Germany Gallery References External links Culture in Nový Bor and Česká Lípa Cities and towns in the Czech Republic Populated places in Česká Lípa District Lusatian Mountains Glass production
Nový Bor
[ "Materials_science", "Engineering" ]
938
[ "Glass engineering and science", "Glass production" ]
2,486,949
https://en.wikipedia.org/wiki/MELCOR
MELCOR is a fully integrated, engineering-level computer code developed by Sandia National Laboratories for the U.S. Nuclear Regulatory Commission to model the progression of severe accidents in nuclear power plants. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. MELCOR applications include estimation of severe accident source terms, and their sensitivities and uncertainties in a variety of applications. See also Nuclear engineering Monte Carlo method Nuclear reactor MCNP External links SNL MELCOR website NRC "Obtaining MELCOR" site Wikiversity: Nuclear Engineering Nuclear safety and security Physics software
MELCOR
[ "Physics" ]
131
[ "Physics software", "Computational physics stubs", "Computational physics" ]
2,488,125
https://en.wikipedia.org/wiki/Global%20Ozone%20Monitoring%20by%20Occultation%20of%20Stars
Global Ozone Monitoring by Occultation of Stars (GOMOS), is an instrument on board the European satellite Envisat launched 1 March 2002. It is the first space instrument dedicated to the study of the atmosphere of the Earth by the technique of stellar occultation. The spectrum of stars in the ultraviolet, visible and the near infrared parts of the electromagnetic spectrum is observed. It is aimed to use GOMOS to build a climatology of ozone and related species in the middle atmosphere (15 to 100 km). Instrument details The 250-680 nm spectral domain is used for the determination of O3, NO2, NO3−, aerosols and temperature. In addition, two high spectral resolution channels centred at 760 and 940 nm allow measurements of O2 and H2O and two fast photometers are used to correct star scintillation perturbations and to determine high vertical resolution temperature profiles. Global latitude coverage is obtained with up to 40 stellar occultations per orbit from South Pole to North Pole. Data acquired on dark limb (night-time) are of better quality than on bright limb (day-time) because of a smaller perturbation by background light. History GOMOS was first proposed in 1988 as an Announcement of Opportunity instrument dedicated to be a part of the Earth Observation Polar Platform Mission, the former name of Envisat. In 1992 it was decided that GOMOS would be developed as a European Space Agency-funded instrument. External links Official ESA GOMOS page GOMOS page from DLR Ozone Spacecraft instruments
Global Ozone Monitoring by Occultation of Stars
[ "Chemistry", "Astronomy" ]
313
[ "Outer space", "Astronomy stubs", "Oxidizing agents", "Ozone", "Outer space stubs" ]
2,488,614
https://en.wikipedia.org/wiki/DNA%20mismatch%20repair
DNA mismatch repair (MMR) is a system for recognizing and repairing erroneous insertion, deletion, and mis-incorporation of bases that can arise during DNA replication and recombination, as well as repairing some forms of DNA damage. Mismatch repair is strand-specific. During DNA synthesis the newly synthesised (daughter) strand will commonly include errors. In order to begin repair, the mismatch repair machinery distinguishes the newly synthesised strand from the template (parental). In gram-negative bacteria, transient hemimethylation distinguishes the strands (the parental is methylated and daughter is not). However, in other prokaryotes and eukaryotes, the exact mechanism is not clear. It is suspected that, in eukaryotes, newly synthesized lagging-strand DNA transiently contains nicks (before being sealed by DNA ligase) and provides a signal that directs mismatch proofreading systems to the appropriate strand. This implies that these nicks must be present in the leading strand, and evidence for this has recently been found. Recent work has shown that nicks are sites for RFC-dependent loading of the replication sliding clamp, proliferating cell nuclear antigen (PCNA), in an orientation-specific manner, such that one face of the donut-shape protein is juxtaposed toward the 3'-OH end at the nick. Loaded PCNA then directs the action of the MutLalpha endonuclease to the daughter strand in the presence of a mismatch and MutSalpha or MutSbeta. Any mutational event that disrupts the superhelical structure of DNA carries with it the potential to compromise the genetic stability of a cell. The fact that the damage detection and repair systems are as complex as the replication machinery itself highlights the importance evolution has attached to DNA fidelity. Examples of mismatched bases include a G/T or A/C pairing (see DNA repair). Mismatches are commonly due to tautomerization of bases during DNA replication. The damage is repaired by recognition of the deformity caused by the mismatch, determining the template and non-template strand, and excising the wrongly incorporated base and replacing it with the correct nucleotide. The removal process involves more than just the mismatched nucleotide itself. A few or up to thousands of base pairs of the newly synthesized DNA strand can be removed. Mismatch repair proteins Mismatch repair is a highly conserved process from prokaryotes to eukaryotes. The first evidence for mismatch repair was obtained from S. pneumoniae (the hexA and hexB genes). Subsequent work on E. coli has identified a number of genes that, when mutationally inactivated, cause hypermutable strains. The gene products are, therefore, called the "Mut" proteins, and are the major active components of the mismatch repair system. Three of these proteins are essential in detecting the mismatch and directing repair machinery to it: MutS, MutH and MutL (MutS is a homologue of HexA and MutL of HexB). MutS forms a dimer (MutS2) that recognises the mismatched base on the daughter strand and binds the mutated DNA. MutH binds at hemimethylated sites along the daughter DNA, but its action is latent, being activated only upon contact by a MutL dimer (MutL2), which binds the MutS-DNA complex and acts as a mediator between MutS2 and MutH, activating the latter. The DNA is looped out to search for the nearest d(GATC) methylation site to the mismatch, which could be up to 1 kb away. Upon activation by the MutS-DNA complex, MutH nicks the daughter strand near the hemimethylated site. MutL recruits UvrD helicase (DNA Helicase II) to separate the two strands with a specific 3' to 5' polarity. The entire MutSHL complex then slides along the DNA in the direction of the mismatch, liberating the strand to be excised as it goes. An exonuclease trails the complex and digests the ss-DNA tail. The exonuclease recruited is dependent on which side of the mismatch MutH incises the strand – 5' or 3'. If the nick made by MutH is on the 5' end of the mismatch, either RecJ or ExoVII (both 5' to 3' exonucleases) is used. If, however, the nick is on the 3' end of the mismatch, ExoI (a 3' to 5' enzyme) is used. The entire process ends past the mismatch site - i.e., both the site itself and its surrounding nucleotides are fully excised. The single-strand gap created by the exonuclease can then be repaired by DNA Polymerase III (assisted by single-strand-binding protein), which uses the other strand as a template, and finally sealed by DNA ligase. DNA methylase then rapidly methylates the daughter strand. MutS homologs When bound, the MutS2 dimer bends the DNA helix and shields approximately 20 base pairs. It has weak ATPase activity, and binding of ATP leads to the formation of tertiary structures on the surface of the molecule. The crystal structure of MutS reveals that it is exceptionally asymmetric, and, while its active conformation is a dimer, only one of the two halves interacts with the mismatch site. In eukaryotes, MutS homologs form two major heterodimers: Msh2/Msh6 (MutSα) and Msh2/Msh3 (MutSβ). The MutSα pathway is involved primarily in base substitution and small-loop mismatch repair. The MutSβ pathway is also involved in small-loop repair, in addition to large-loop (~10 nucleotide loops) repair. However, MutSβ does not repair base substitutions. MutL homologs MutL also has weak ATPase activity (it uses ATP for purposes of movement). It forms a complex with MutS and MutH, increasing the MutS footprint on the DNA. However, the processivity (the distance the enzyme can move along the DNA before dissociating) of UvrD is only ~40–50 bp. Because the distance between the nick created by MutH and the mismatch can average ~600 bp, if there is not another UvrD loaded the unwound section is then free to re-anneal to its complementary strand, forcing the process to start over. However, when assisted by MutL, the rate of UvrD loading is greatly increased. While the processivity (and ATP utilisation) of the individual UvrD molecules remains the same, the total effect on the DNA is boosted considerably; the DNA has no chance to re-anneal, as each UvrD unwinds 40-50 bp of DNA, dissociates, and then is immediately replaced by another UvrD, repeating the process. This exposes large sections of DNA to exonuclease digestion, allowing for quick excision (and later replacement) of the incorrect DNA. Eukaryotes have five MutL homologs designated as MLH1, MLH2, MLH3, PMS1, and PMS2. They form heterodimers that mimic MutL in E. coli. The human homologs of prokaryotic MutL form three complexes referred to as MutLα, MutLβ, and MutLγ. The MutLα complex is made of MLH1 and PMS2 subunits, the MutLβ heterodimer is made of MLH1 and PMS1, whereas MutLγ is made of MLH1 and MLH3. MutLα acts as an endonuclease that introduces strand breaks in the daughter strand upon activation by mismatch and other required proteins, MutSα and PCNA. These strand interruptions serve as entry points for an exonuclease activity that removes mismatched DNA. Roles played by MutLβ and MutLγ in mismatch repair are less-understood. MutH: an endonuclease present in E. coli and Salmonella MutH is a very weak endonuclease that is activated once bound to MutL (which itself is bound to MutS). It nicks unmethylated DNA and the unmethylated strand of hemimethylated DNA but does not nick fully methylated DNA. Experiments have shown that mismatch repair is random if neither strand is methylated. These behaviours led to the proposal that MutH determines which strand contains the mismatch. MutH has no eukaryotic homolog. Its endonuclease function is taken up by MutL homologs, which have some specialized 5'-3' exonuclease activity. The strand bias for removing mismatches from the newly synthesized daughter strand in eukaryotes may be provided by the free 3' ends of Okazaki fragments in the new strand created during replication. PCNA β-sliding clamp PCNA and the β-sliding clamp associate with MutSα/β and MutL, respectively. Although initial reports suggested that the PCNA-MutSα complex may enhance mismatch recognition, it has been recently demonstrated that there is no apparent change in affinity of MutSα for a mismatch in the presence or absence of PCNA. Furthermore, mutants of MutSα that are unable to interact with PCNA in vitro exhibit the capacity to carry out mismatch recognition and mismatch excision to near wild type levels. Such mutants are defective in the repair reaction directed by a 5' strand break, suggesting for the first time MutSα function in a post-excision step of the reaction. Clinical significance Inherited defects in mismatch repair Mutations in the human homologues of the Mut proteins affect genomic stability, which can result in microsatellite instability (MSI), implicated in some human cancers. In specific, the hereditary nonpolyposis colorectal cancers (HNPCC or Lynch syndrome) are attributed to damaging germline variants in the genes encoding the MutS and MutL homologues MSH2 and MLH1 respectively, which are thus classified as tumour suppressor genes. One subtype of HNPCC, the Muir-Torre Syndrome (MTS), is associated with skin tumors. If both inherited copies (alleles) of a MMR gene bear damaging genetic variants, this results in a very rare and severe condition: the mismatch repair cancer syndrome (or constitutional mismatch repair deficiency, CMMR-D), manifesting as multiple occurrences of tumors at an early age, often colon and brain tumors. Epigenetic silencing of mismatch repair genes Sporadic cancers with a DNA repair deficiency only rarely have a mutation in a DNA repair gene, but they instead tend to have epigenetic alterations such as promoter methylation that inhibit DNA repair gene expression. About 13% of colorectal cancers are deficient in DNA mismatch repair, commonly due to loss of MLH1 (9.8%), or sometimes MSH2, MSH6 or PMS2 (all ≤1.5%). For most MLH1-deficient sporadic colorectal cancers, the deficiency was due to MLH1 promoter methylation. Other cancer types have higher frequencies of MLH1 loss (see table below), which are again largely a result of methylation of the promoter of the MLH1 gene. A different epigenetic mechanism underlying MMR deficiencies might involve over-expression of a microRNA, for example miR-155 levels inversely correlate with expression of MLH1 or MSH2 in colorectal cancer. MMR failures in field defects A field defect (field cancerization) is an area of epithelium that has been preconditioned by epigenetic or genetic changes, predisposing it towards development of cancer. As pointed out by Rubin " ...there is evidence that more than 80% of the somatic mutations found in mutator phenotype human colorectal tumors occur before the onset of terminal clonal expansion." Similarly, Vogelstein et al. point out that more than half of somatic mutations identified in tumors occurred in a pre-neoplastic phase (in a field defect), during growth of apparently normal cells. MLH1 deficiencies were common in the field defects (histologically normal tissues) surrounding tumors; see Table above. Epigenetically silenced or mutated MLH1 would likely not confer a selective advantage upon a stem cell, however, it would cause increased mutation rates, and one or more of the mutated genes may provide the cell with a selective advantage. The deficientMLH1 gene could then be carried along as a selectively near-neutral passenger (hitch-hiker) gene when the mutated stem cell generates an expanded clone. The continued presence of a clone with an epigenetically repressed MLH1 would continue to generate further mutations, some of which could produce a tumor. MSI and immune checkpoint blockade response MMR and mismatch repair mutations were initially observed to associate with immune checkpoint blockade efficacy in a study examining responders to anti-PD1. The association between MSI positivity and positive response to anti-PD1 was subsequently validated in a prospective clinical trial and approved by the FDA. MMR components in humans In humans, seven DNA mismatch repair (MMR) proteins (MLH1, MLH3, MSH2, MSH3, MSH6, PMS1 and PMS2) work coordinately in sequential steps to initiate repair of DNA mismatches. In addition, there are Exo1-dependent and Exo1-independent MMR subpathways. Other gene products involved in mismatch repair (subsequent to initiation by MMR genes) in humans include DNA polymerase delta, PCNA, RPA, HMGB1, RFC and DNA ligase I, plus histone and chromatin modifying factors. In certain circumstances, the MMR pathway may recruit an error-prone DNA polymerase eta (POLH). This happens in B-lymphocytes during somatic hypermutation, where POLH is used to introduce genetic variation into antibody genes. However, this error-prone MMR pathway may be triggered in other types of human cells upon exposure to genotoxins and indeed it is broadly active in various human cancers, causing mutations that bear a signature of POLH activity. MMR and mutation frequency Recognizing and repairing mismatches and indels is important for cells because failure to do so results in microsatellite instability (MSI) and an elevated spontaneous mutation rate (mutator phenotype). In comparison to other cancer types, MMR-deficient (MSI) cancer has a very high frequency of mutations, close to melanoma and lung cancer, cancer types caused by much exposure to UV radiation and mutagenic chemicals. In addition to a very high mutation burden, MMR deficiencies result in an unusual distribution of somatic mutations across the human genome: this suggests that MMR preferentially protects the gene-rich, early-replicating euchromatic regions. In contrast, the gene-poor, late-replicating heterochromatic genome regions exhibit high mutation rates in many human tumors. The histone modification H3K36me3, an epigenetic mark of active chromatin, has the ability to recruit the MSH2-MSH6 (hMutSα) complex. Consistently, regions of the human genome with high levels of H3K36me3 accumulate less mutations due to MMR activity. Loss of multiple DNA repair pathways in tumors Lack of MMR often occurs in coordination with loss of other DNA repair genes. For example, MMR genes MLH1 and MLH3 as well as 11 other DNA repair genes (such as MGMT and many NER pathway genes) were significantly down-regulated in lower grade as well as in higher grade astrocytomas, in contrast to normal brain tissue. Moreover, MLH1 and MGMT expression was closely correlated in 135 specimens of gastric cancer and loss of MLH1 and MGMT appeared to be synchronously accelerated during tumor progression. Deficient expression of multiple DNA repair genes is often found in cancers, and may contribute to the thousands of mutations usually found in cancers (see Mutation frequencies in cancers). Aging A popular idea, that has failed to gain significant experimental support, is the idea that mutation, as distinct from DNA damage, is the primary cause of aging. Mice defective in the mutL homolog Pms2 have about a 100-fold elevated mutation frequency in all tissues, but do not appear to age more rapidly. These mice display mostly normal development and life, except for early onset carcinogenesis and male infertility. See also Base excision repair Nucleotide excision repair References Further reading External links DNA Repair DNA repair Mutation
DNA mismatch repair
[ "Biology" ]
3,623
[ "Molecular genetics", "DNA repair", "Cellular processes" ]
2,488,636
https://en.wikipedia.org/wiki/Base%20excision%20repair
Base excision repair (BER) is a cellular mechanism, studied in the fields of biochemistry and genetics, that repairs damaged DNA throughout the cell cycle. It is responsible primarily for removing small, non-helix-distorting base lesions from the genome. The related nucleotide excision repair pathway repairs bulky helix-distorting lesions. BER is important for removing damaged bases that could otherwise cause mutations by mispairing or lead to breaks in DNA during replication. BER is initiated by DNA glycosylases, which recognize and remove specific damaged or inappropriate bases, forming AP sites. These are then cleaved by an AP endonuclease. The resulting single-strand break can then be processed by either short-patch (where a single nucleotide is replaced) or long-patch BER (where 2–10 new nucleotides are synthesized). Lesions processed by BER Single bases in DNA can be chemically damaged by a variety of mechanisms, the most common ones being deamination, oxidation, and alkylation. These modifications can affect the ability of the base to hydrogen-bond, resulting in incorrect base-pairing, and, as a consequence, mutations in the DNA. For example, incorporation of adenine across from 8-oxoguanine (right) during DNA replication causes a G:C base pair to be mutated to T:A. Other examples of base lesions repaired by BER include: Oxidized bases: 8-oxoguanine, 2,6-diamino-4-hydroxy-5-formamidopyrimidine (FapyG, FapyA) Alkylated bases: 3-methyladenine, 7-methylguanosine Deaminated bases: hypoxanthine formed from deamination of adenine. Xanthine formed from deamination of guanine. (Thymidine products following deamination of 5-methylcytosine are more difficult to recognize, but can be repaired by mismatch-specific glycosylases) Uracil inappropriately incorporated in DNA or formed by deamination of cytosine In addition to base lesions, the downstream steps of BER are also utilized to repair single-strand breaks. The choice between long-patch and short-patch repair The choice between short- and long-patch repair is currently under investigation. Various factors are thought to influence this decision, including the type of lesion, the cell cycle stage, and whether the cell is terminally differentiated or actively dividing. Some lesions, such as oxidized or reduced AP sites, are resistant to pol β lyase activity and, therefore, must be processed by long-patch BER. Pathway preference may differ between organisms, as well. While human cells utilize both short- and long-patch BER, the yeast Saccharomyces cerevisiae was long thought to lack a short-patch pathway because it does not have homologs of several mammalian short-patch proteins, including pol β, DNA ligase III, XRCC1, and the kinase domain of PNKP. The recent discovery that the poly-A polymerase Trf4 possesses 5' dRP lyase activity has challenged this view. Proteins involved in base excision repair DNA glycosylases DNA glycosylases are responsible for initial recognition of the lesion. They flip the damaged base out of the double helix, as pictured, and cleave the N-glycosidic bond of the damaged base, leaving an AP site. There are two categories of glycosylases: monofunctional and bifunctional. Monofunctional glycosylases have only glycosylase activity, whereas bifunctional glycosylases also possess AP lyase activity. Therefore, bifunctional glycosylases can convert a base lesion into a single-strand break without the need for an AP endonuclease. β-Elimination of an AP site by a glycosylase-lyase yields a 3' α,β-unsaturated aldehyde adjacent to a 5' phosphate, which differs from the AP endonuclease cleavage product. Some glycosylase-lyases can further perform δ-elimination, which converts the 3' aldehyde to a 3' phosphate. A wide variety of glycosylases have evolved to recognize different damaged bases. Examples of DNA glycosylases include Ogg1, which recognizes 8-oxoguanine, MPG, which recognizes 3-methyladenine, and UNG, which removes uracil from DNA. AP endonucleases The AP endonucleases cleave an AP site to yield a 3' hydroxyl adjacent to a 5' deoxyribosephosphate (dRP). AP endonucleases are divided into two families based on their homology to the ancestral bacterial AP endonucleases endonuclease IV and exonuclease III. Many eukaryotes have members of both families, including the yeast Saccharomyces cerevisiae, in which Apn1 is the EndoIV homolog and Apn2 is related to ExoIII. In humans, two AP endonucleases, APE1 and APE2, have been identified. It is a member of the ExoIII family. End processing enzymes In order for ligation to occur, a DNA strand break must have a hydroxyl on its 3' end and a phosphate on its 5' end. In humans, polynucleotide kinase-phosphatase (PNKP) promotes formation of these ends during BER. This protein has a kinase domain, which phosphorylates 5' hydroxyl ends, and a phosphatase domain, which removes phosphates from 3' ends. Together, these activities ready single-strand breaks with damaged termini for ligation. The AP endonucleases also participate in 3' end processing. Besides opening AP sites, they possess 3' phosphodiesterase activity and can remove a variety of 3' lesions including phosphates, phosphoglycolates, and aldehydes. 3'-Processing must occur before DNA synthesis can initiate because DNA polymerases require a 3' hydroxyl to extend from. DNA polymerases Pol β is the main human polymerase that catalyzes short-patch BER, with pol λ able to compensate in its absence. These polymerases are members of the Pol X family and typically insert only a single nucleotide. In addition to polymerase activity, these enzymes have a lyase domain that removes the 5' dRP left behind by AP endonuclease cleavage. During long-patch BER, DNA synthesis is thought to be mediated by pol δ and pol ε along with the processivity factor PCNA, the same polymerases that carry out DNA replication. These polymerases perform displacing synthesis, meaning that the downstream 5' DNA end is "displaced" to form a flap (see diagram above). Pol β can also perform long-patch displacing synthesis and can, therefore, participate in either BER pathway. Long-patch synthesis typically inserts 2-10 new nucleotides. Flap endonuclease FEN1 removes the 5' flap generated during long patch BER. This endonuclease shows a strong preference for a long 5' flap adjacent to a 1-nt 3' flap. The yeast homolog of FEN1 is RAD27. In addition to its role in long-patch BER, FEN1 cleaves flaps with a similar structure during Okazaki fragment processing, an important step in lagging strand DNA replication. DNA ligase DNA ligase III along with its cofactor XRCC1 catalyzes the nick-sealing step in short-patch BER in humans. DNA ligase I ligates the break in long-patch BER. Links with cancer Defects in a variety of DNA repair pathways lead to cancer predisposition, and BER appears to follow this pattern. Deletion mutations in BER genes have shown to result in a higher mutation rate in a variety of organisms, implying that loss of BER could contribute to the development of cancer. Indeed, somatic mutations in Pol β have been found in 30% of human cancers, and some of these mutations lead to transformation when expressed in mouse cells. Mutations in the DNA glycosylase MYH are also known to increase susceptibility to colon cancer. Epigenetic deficiencies in cancers Epigenetic alterations (epimutations) in base excision repair genes have only recently begun to be evaluated in a few cancers, compared to the numerous previous studies of epimutations in genes acting in other DNA repair pathways (such as MLH1 in mismatch repair and MGMT in direct reversal). Some examples of epimutations in base excision repair genes that occur in cancers are summarized below. MBD4 MBD4 (methyl-CpG-binding domain protein 4) is a glycosylase employed in an initial step of base excision repair. MBD4 protein binds preferentially to fully methylated CpG sites and to the altered DNA bases at those sites. These altered bases arise from the frequent hydrolysis of cytosine to uracil (see image) and hydrolysis of 5-methylcytosine to thymine, producing G:U and G:T base pairs. If the improper uracils or thymines in these base pairs are not removed before DNA replication, they will cause transition mutations. MBD4 specifically catalyzes the removal of T and U paired with guanine (G) within CpG sites. This is an important repair function since about 1/3 of all intragenic single base pair mutations in human cancers occur in CpG dinucleotides and are the result of G:C to A:T transitions. These transitions comprise the most frequent mutations in human cancer. For example, nearly 50% of somatic mutations of the tumor suppressor gene p53 in colorectal cancer are G:C to A:T transitions within CpG sites. Thus, a decrease in expression of MBD4 could cause an increase in carcinogenic mutations. MBD4 expression is reduced in almost all colorectal neoplasms due to methylation of the promoter region of MBD4. Also MBD4 is deficient due to mutation in about 4% of colorectal cancers. A majority of histologically normal fields surrounding neoplastic growths (adenomas and colon cancers) in the colon also show reduced MBD4 mRNA expression (a field defect) compared to histologically normal tissue from individuals who never had a colonic neoplasm. This finding suggests that epigenetic silencing of MBD4 is an early step in colorectal carcinogenesis. In a Chinese population that was evaluated, the MBD4 Glu346Lys polymorphism was associated with about a 50% reduced risk of cervical cancer, suggesting that alterations in MBD4 may be important in cancer. NEIL1 NEIL1 recognizes (targets) and removes certain oxidatively-damaged bases and then incises the abasic site via β,δ elimination, leaving 3′ and 5′ phosphate ends. NEIL1 recognizes oxidized pyrimidines, formamidopyrimidines, thymine residues oxidized at the methyl group, and both stereoisomers of thymine glycol. The best substrates for human NEIL1 appear to be the hydantoin lesions, guanidinohydantoin, and spiroiminodihydantoin that are further oxidation products of 8-oxoG. NEIL1 is also capable of removing lesions from single-stranded DNA as well as from bubble and forked DNA structures. A deficiency in NEIL1 causes increased mutagenesis at the site of an 8-oxo-Gua:C pair, with most mutations being G:C to T:A transversions. A study in 2004 found that 46% of primary gastric cancers had reduced expression of NEIL1 mRNA, though the mechanism of reduction was not known. This study also found that 4% of gastric cancers had mutations in NEIL1. The authors suggested that low NEIL1 activity arising from reduced expression and/or mutation in NEIL1 was often involved in gastric carcinogenesis. A screen of 145 DNA repair genes for aberrant promoter methylation was performed on head and neck squamous cell carcinoma (HNSCC) tissues from 20 patients and from head and neck mucosa samples from 5 non-cancer patients. This screen showed that NEIL1, with substantially increased hypermethylation, had the most significantly different frequency of methylation. Furthermore, the hypermethylation corresponded to a decrease in NEIL1 mRNA expression. Further work with 135 tumor and 38 normal tissues also showed that 71% of HNSCC tissue samples had elevated NEIL1 promoter methylation. When 8 DNA repair genes were evaluated in non-small cell lung cancer (NSCLC) tumors, 42% were hypermethylated in the NEIL1 promoter region. This was the most frequent DNA repair abnormality found among the 8 DNA repair genes tested. NEIL1 was also one of six DNA repair genes found to be hypermethylated in their promoter regions in colorectal cancer. Links with cognition Active DNA methylation and demethylation is required for the cognition process of memory formation and maintenance. In rats, contextual fear conditioning can trigger life-long memory for the event with a single trial, and methylation changes appear to be correlated with triggering particularly long-lived memories. With contextual fear conditioning, after 24 hours, DNA isolated from the rat brain hippocampus region had 2097 differentially methylated genes, with a proportion being demethylated. As reviewed by Bayraktar and Kreutz, DNA demethylation is dependent on base excision repair (see figure). Physical exercise has well established beneficial effects on learning and memory (see Neurobiological effects of physical exercise). BDNF is a particularly important regulator of learning and memory. As reviewed by Fernandes et al., in rats, exercise enhances the hippocampus expression of the gene Bdnf, which has an essential role in memory formation. Enhanced expression of Bdnf occurs through demethylation of its CpG island promoter at exon IV and demethylation depends on base excision repair (see figure). Decline in BER with age The activity of the DNA glycosylase that removes methylated bases in human leukocytes declines with age. The reduction in the excision of methylated bases from DNA suggests an age-dependent decline in 3-methyladenine DNA glycosylase, a BER enzyme responsible for removing alkylated bases. Young rats (4- to 5 months old), but not old rats (24- to 28 months old), have the ability to induce DNA polymerase beta and AP endonuclease in response to oxidative damage. See also DNA mismatch repair DNA repair Homologous recombination Non-homologous end joining Nucleotide excision repair Host-cell reactivation assay References External links DNA repair Human proteins Proteomics
Base excision repair
[ "Biology" ]
3,239
[ "Molecular genetics", "DNA repair", "Cellular processes" ]
2,488,867
https://en.wikipedia.org/wiki/Shore%20power
Shore power or shore supply is the provision of shoreside electrical power to a ship at berth while its main and auxiliary engines are shut down. While the term denotes shore as opposed to off-shore, it is sometimes applied to aircraft or land-based vehicles (such as campers, heavy trucks with sleeping compartments and tour buses), which may plug into grid power when parked for idle reduction. The source for land-based power may be grid power from an electric utility company, but also possibly an external remote generator. These generators may be powered by diesel or renewable energy sources such as wind or solar. Shore power saves consumption of fuel that would otherwise be used to power vessels while in port, and eliminates the air pollution associated with consumption of that fuel. A port city may have anti-idling laws that require ships to use shore power. Use of shore power may facilitate maintenance of the ship's engines and generators, and reduces noise. Oceangoing ships "Cold ironing" is specifically a shipping industry term that came into use when all ships had coal-fired engines. When a ship tied up at port, there was no need to continue to feed the fire and the iron engines would cool down, eventually going completely cold – hence the term "cold ironing". Commercial ships can use shore-supplied power for services such as cargo handling, pumping, ventilation and lighting while in port, they need not run their own diesel engines, reducing air pollution emissions. Examples are ferries and cruise ships for hotel electric power, and a salmon feeder ship uses shore power while at the salmon farm. Small craft On small private boats, electrical power supply on board is usually provided by 12 or 24 volt DC batteries whilst at sea unless the vessel has a generator. When the vessel is berthed in a marina or harbourside, mains electricity is often offered via a shore power connection. This allows the vessel to use a battery charger to recharge batteries and also to run mains-powered AC devices such as TV, washing machine, cooking appliances and air conditioning. The power is usually provided from a power pedestal on the dock which is often metered or has a card payment system if electricity is not provided free of charge. The vessel connects to the supply using a suitable shore power cable. Trucks Shore power, as it relates to the trucking industry, is commonly referred to as "Truck Stop Electrification" (TSE). The US Environmental Protection Agency estimates that trucks plugging in versus idling on diesel fuel could save as much as $3240 annually. there were 138 truck stops in the USA that offer on-board systems (also called Shore power) or off-board systems (also called single system electrification) for an hourly fee. Auxiliary power units offer another alternative to both idling and shore power for trucks. Aircraft Similar to shore power for ships, a ground power unit (GPU) may be used to supply electric power for an aircraft on the ground, to sustain interior lighting, ventilation and other requirements before starting of the main engines or the aircraft auxiliary power unit (APU). It is also used by aircraft with APUs if the airport authority does not permit the usage of APUs whilst parked, or if the carrier wishes to save on the use of jet fuel (which APUs use). This may be a self-contained engine-generator set, or it may convert commercial power to the voltage and frequency needed for the aircraft (for example 115 V 400 Hz). Trains and buses Shore power may be a grid connection for passenger trains laying over between runs. Similarly buses may be connected when not in use. See also IEC 60309 2P+E plugs are used in Europe for small boats providing 16,32 or 63 amps at 220-250 volts NEMA L5-30 plugs are most often used in N. America for small boats IEC/ISO/IEEE 80005 - international standard for larger vessels References Air pollution control systems Ports and harbours Nautical terminology Power electronics Port infrastructure Maritime transport
Shore power
[ "Engineering" ]
818
[ "Infrastructure", "Construction", "Electronic engineering", "Electrical engineering", "Power electronics" ]
2,490,859
https://en.wikipedia.org/wiki/Flight%20management%20system
A flight management system (FMS) is a fundamental component of a modern airliner's avionics. An FMS is a specialized computer system that automates a wide variety of in-flight tasks, reducing the workload on the flight crew to the point that modern civilian aircraft no longer carry flight engineers or navigators. A primary function is in-flight management of the flight plan. Using various sensors (such as GPS and INS often backed up by radio navigation) to determine the aircraft's position, the FMS can guide the aircraft along the flight plan. From the cockpit, the FMS is normally controlled through a Control Display Unit (CDU) which incorporates a small screen and keyboard or touchscreen. The FMS sends the flight plan for display to the Electronic Flight Instrument System (EFIS), Navigation Display (ND), or Multifunction Display (MFD). The FMS can be summarised as being a dual system consisting of the Flight Management Computer (FMC), CDU and a cross talk bus. The modern FMS was introduced on the Boeing 767, though earlier navigation computers did exist. Now, systems similar to FMS exist on aircraft as small as the Cessna 182. In its evolution an FMS has had many different sizes, capabilities and controls. However certain characteristics are common to all FMSs. Navigation database All FMSs contain a navigation database. The navigation database contains the elements from which the flight plan is constructed. These are defined via the ARINC 424 standard. The navigation database (NDB) is normally updated every 28 days, in order to ensure that its contents are current. Each FMS contains only a subset of the ARINC / AIRAC data, relevant to the capabilities of the FMS. The NDB contains all of the information required for building a flight plan, consisting of: Waypoints/Intersection Airways Radio navigation aids including distance measuring equipment (DME), VHF omnidirectional range (VOR), non-directional beacons (NDBs) and instrument landing systems (ILSs). Airports Runways Standard instrument departure (SID) Standard terminal arrival (STAR) Holding patterns (only as part of IAPs-although can be entered by command of ATC or at pilot's discretion) Instrument approach procedure (IAP) Waypoints can also be defined by the pilot(s) along the route or by reference to other waypoints with entry of a place in the form of a waypoint (e.g. a VOR, NDB, ILS, airport or waypoint/intersection). Flight plan The flight plan is generally determined on the ground, before departure either by the pilot for smaller aircraft or a professional dispatcher for airliners. It is entered into the FMS either by typing it in, selecting it from a saved library of common routes (Company Routes) or via an ACARS datalink with the airline dispatch center. During preflight, other information relevant to managing the flight plan is entered. This can include performance information such as gross weight, fuel weight and center of gravity. It will include altitudes including the initial cruise altitude. For aircraft that do not have a GPS, the initial position is also required. The pilot uses the FMS to modify the flight plan in flight for a variety of reasons. Significant engineering design minimizes the keystrokes in order to minimize pilot workload in flight and eliminate any confusing information (Hazardously Misleading Information). The FMS also sends the flight plan information for display on the Navigation Display (ND) of the flight deck instruments Electronic Flight Instrument System (EFIS). The flight plan generally appears as a magenta line, with other airports, radio aids and waypoints displayed. Some FMSs can calculate special flight plans, often for tactical requirements, such as search patterns, rendezvous, in-flight refueling tanker orbits, and calculated air release points (CARP) for accurate parachute jumps. Position determination Once in flight, a principal task of the FMS is obtaining a position fix, i.e., to determine the aircraft's position and the accuracy of that position. Simple FMS use a single sensor, generally GPS in order to determine position. But modern FMS use as many sensors as they can, such as VORs, in order to determine and validate their exact position. Some FMS use a Kalman filter to integrate the positions from the various sensors into a single position. Common sensors include: Airline-quality GPS receivers act as the primary sensor as they have the highest accuracy and integrity. Radio aids designed for aircraft navigation act as the second highest quality sensors. These include; Scanning DME (distance measuring equipment) that check the distances from five different DME stations simultaneously in order to determine one position every 10 seconds. VORs (VHF omnidirectional radio range) that supply a bearing. With two VOR stations the aircraft position can be determined, but the accuracy is limited. Inertial reference systems (IRS) use ring laser gyros and accelerometers in order to calculate the aircraft position. They are highly accurate and independent of outside sources. Airliners use the weighted average of three independent IRS to determine the “triple mixed IRS” position. The FMS constantly crosschecks the various sensors and determines a single aircraft position and accuracy. The accuracy is described as the Actual Navigation Performance (ANP) a circle that the aircraft can be anywhere within measured as the diameter in nautical miles. Modern airspace has a set required navigation performance (RNP). The aircraft must have its ANP less than its RNP in order to operate in certain high-level airspace. Guidance Given the flight plan and the aircraft's position, the FMS calculates the course to follow. The pilot can follow this course manually (much like following a VOR radial), or the autopilot can be set to follow the course. The FMS mode is normally called LNAV or Lateral Navigation for the lateral flight plan and VNAV or vertical navigation for the vertical flight plan. VNAV provides speed and pitch or altitude targets and LNAV provides roll steering command to the autopilot. VNAV Sophisticated aircraft, generally airliners such as the Airbus A320 or Boeing 737 and other turbofan powered aircraft, have full performance Vertical Navigation (VNAV). The purpose of VNAV is to predict and optimize the vertical path. Guidance includes control of the pitch axis and control of the throttle. The FMS needs to have a comprehensive flight and engine model in order to have the data required to do this. The function can create a forecast vertical path along the lateral flight plan using this information. The aircraft manufacturer is usually the only source of this comprehensive flight model. The vertical profile is constructed by the FMS during pre-flight. Together with the lateral flight plan, it makes use of the aircraft's starting empty weight, fuel weight, center of gravity, and cruising altitude. The first step on a vertical course is to rise to cruise height. Vertical limitations such as "At or ABOVE 8,000" are present in some SID waypoints. Reducing thrust, or "FLEX" climbing, may be used throughout the ascent to spare the engines. Each needs to be taken into account when making vertical profile projections. Implementation of an accurate VNAV is difficult and expensive, but it pays off in fuel savings primarily in cruise and descent. In cruise, where most of the fuel is burned, there are multiple methods for fuel savings. As an aircraft burns fuel it gets lighter and can cruise higher where there is less drag. Step climbs or cruise climbs facilitate this. VNAV can determine where the step or cruise climbs (in which the aircraft climbs continuously) should occur to minimize fuel consumption. Performance optimization allows the FMS to determine the best or most economical speed to fly in level flight. This is often called the ECON speed. This is based on the cost index, which is entered to give a weighting between speed and fuel efficiency. The cost index is calculated by dividing the per-hour cost of operating the plane by the cost of fuel. Generally a cost index of 999 gives ECON speeds as fast as possible without consideration of fuel and a cost index of zero gives maximum fuel economy while disregarding other hourly costs such as maintenance and crew expenses. ECON mode is the VNAV speed used by most airliners in cruise. RTA or required time of arrival allows the VNAV system to target arrival at a particular waypoint at a defined time. This is often useful for airport arrival slot scheduling. In this case, VNAV regulates the cruise speed or cost index to ensure the RTA is met. The first thing the VNAV calculates for the descent is the top of descent point (TOD). This is the point where an efficient and comfortable descent begins. Normally this will involve an idle descent, but for some aircraft an idle descent is too steep and uncomfortable. The FMS calculates the TOD by “flying” the descent backwards from touchdown through the approach and up to cruise. It does this using the flight plan, the aircraft flight model and descent winds. For airline FMS, this is a very sophisticated and accurate prediction, for simple FMS (on smaller aircraft) it can be determined by a “rule of thumb” such as a 3 degree descent path. From the TOD, the VNAV determines a four-dimensional predicted path. As the VNAV commands the throttles to idle, the aircraft begins its descent along the VNAV path. If either the predicted path is incorrect or the downpath winds different from the predictions, then the aircraft will not perfectly follow the path. The aircraft varies the pitch in order to maintain the path. Since the throttles are at idle this will modulate the speed. Normally the FMS allows the speed to vary within a small band. After this, either the throttles advance (if the aircraft is below path) or the FMS requests speed brakes with a message, often "DRAG REQUIRED" (if the aircraft is above path). On Airbus aircraft, this message also appears on the PFD and, if the aircraft is extremely high on path, "MORE DRAG" will be displayed. On Boeing aircraft, if the aircraft gets too far off the prescribed path, it will switch from VNAV PTH (which follows the calculated path) to VNAV SPD (which descends as fast as possible while maintaining a selected speed, similar to OP DES (open descent) on Airbuses. An ideal idle descent, also known as a “green descent” uses the minimum fuel, minimizes pollution (both at high altitude and local to the airport) and minimizes local noise. While most modern FMS of large airliners are capable of idle descents, most air traffic control systems cannot handle multiple aircraft each using its own optimum descent path to the airport, at this time. Thus the use of idle descents is minimized by Air Traffic Control. See also Index of aviation articles Acronyms and abbreviations in avionics Strategic Lateral Offset Procedure References Further reading ARINC 702A, Advanced Flight Management Computer System Avionics, Element, Software and Functions Ch 20, Cary R. Spitzer, FMC User's Guide B737, Ch 1, Bill Bulfer, Leading Edge Libraries Casner, S.M. The Pilot's Guide to the Modern Airline Cockpit. Newcastle WA, Aviation Supplies and Academics, 2007. . Chappell, A.R. et al. "The VNAV Tutor: Addressing a Mode Awareness Difficulty for Pilots of Glass Cockpit Aircraft." IEEE Transactions on Systems, Man, and Cybernetics Part A, Systems and Humans, vol. 27, no.3, May 1997, pp. 372–385. Avionics Flight management Flight planning Navigational flight instruments
Flight management system
[ "Technology", "Engineering" ]
2,435
[ "Systems engineering", "Avionics", "Aircraft systems", "Aircraft instruments", "Navigational flight instruments" ]
2,491,663
https://en.wikipedia.org/wiki/Endoreduplication
Endoreduplication (also referred to as endoreplication or endocycling) is replication of the nuclear genome in the absence of mitosis, which leads to elevated nuclear gene content and polyploidy. Endoreduplication can be understood simply as a variant form of the mitotic cell cycle (G1-S-G2-M) in which mitosis is circumvented entirely, due to modulation of cyclin-dependent kinase (CDK) activity. Examples of endoreduplication characterised in arthropod, mammalian, and plant species suggest that it is a universal developmental mechanism responsible for the differentiation and morphogenesis of cell types that fulfill an array of biological functions. While endoreduplication is often limited to specific cell types in animals, it is considerably more widespread in plants, such that polyploidy can be detected in the majority of plant tissues. Polyploidy and aneuploidy are common phenomena in cancer cells. Given that oncogenesis and endoreduplication likely involve subversion of common cell cycle regulatory mechanisms, a thorough understanding of endoreduplication may provide important insights for cancer biology. Examples in nature Endoreduplicating cell types that have been studied extensively in model organisms Endoreduplication, endomitosis and polytenization Endoreduplication, endomitosis and polytenization are three different processes resulting in polyploidization of a cell in a regulated manner. In endoreduplication cells skip M phase completely by exiting the mitotic cell cycle in the G2 phase after completing the S phase several times, resulting in a mononucleated polyploid cell. The cell ends up with twice as many copies of each chromosome per repeat of the S phase. Endomitosis is a type of cell cycle variation where mitosis is initiated, but stopped during anaphase and thus cytokinesis is not completed. The cell ends up with multiple nuclei in contrast to a cell undergoing endoreduplication. Therefore depending on how far the cell progresses through mitosis, this will give rise to a mononucleated or binucleated polyploid cell. Polytenization arises with under- or overamplification of some genomic regions, creating polytene chromosomes. Biological significance Based on the wide array of cell types in which endoreduplication occurs, a variety of hypotheses have been generated to explain the functional importance of this phenomenon. Unfortunately, experimental evidence to support these conclusions is somewhat limited. Cell differentiation In developing plant tissues the transition from mitosis to endoreduplication often coincides with cell differentiation and morphogenesis. However it remains to be determined whether endoreduplication and polyploidy contribute to cell differentiation or vice versa. Targeted inhibition of endoreduplication in trichome progenitors results in the production of multicellular trichomes that exhibit relatively normal morphology, but ultimately dedifferentiate and undergo absorption into the leaf epidermis. This result suggests that endoreduplication and polyploidy may be required for the maintenance of cell identity. Cell/organism size Cell ploidy often correlates with cell size, and in some instances, disruption of endoreduplication results in diminished cell and tissue size suggesting that endoreduplication may serve as a mechanism for tissue growth. Relative to mitosis, endoreduplication does not require cytoskeletal rearrangement or the production of new cell membrane and it often occurs in cells that have already differentiated. As such it may represent an energetically efficient alternative to cell proliferation among differentiated cell types that can no longer afford to undergo mitosis. While evidence establishing a connection between ploidy and tissue size is prevalent in the literature, contrary examples also exist. Oogenesis and embryonic development Endoreduplication is commonly observed in cells responsible for the nourishment and protection of oocytes and embryos. It has been suggested that increased gene copy number might allow for the mass production of proteins required to meet the metabolic demands of embryogenesis and early development. Consistent with this notion, mutation of the Myc oncogene in Drosophila follicle cells results in reduced endoreduplication and abortive oogenesis. However, reduction of endoreduplication in maize endosperm has limited effect on the accumulation of starch and storage proteins, suggesting that the nutritional requirements of the developing embryo may involve the nucleotides that comprise the polyploid genome rather than the proteins it encodes. Buffering the genome Another hypothesis is that endoreduplication buffers against DNA damage and mutation because it provides extra copies of important genes. However, this notion is purely speculative and there is limited evidence to the contrary. For example, analysis of polyploid yeast strains suggests that they are more sensitive to radiation than diploid strains. Stress response Research in plants suggests that endoreduplication may also play a role in modulating stress responses. By manipulating expression of E2fe (a repressor of endocycling in plants), researchers were able to demonstrate that increased cell ploidy lessens the negative impact of drought stress on leaf size. Given that the sessile lifestyle of plants necessitates a capacity to adapt to environmental conditions, it is appealing to speculate that widespread polyploidization contributes to their developmental plasticity Genetic control of endoreplication The best-studied example of a mitosis-to-endoreduplication transition occurs in Drosophila follicle cells and is activated by Notch signaling. Entry into endoreduplication involves modulation of mitotic and S-phase cyclin-dependent kinase (CDK) activity. Inhibition of M-phase CDK activity is accomplished via transcriptional activation of Cdh/fzr and repression of the G2-M regulator string/cdc25. Cdh/fzr is responsible for activation of the anaphase-promoting complex (APC) and subsequent proteolysis of the mitotic cyclins. String/cdc25 is a phosphatase that stimulates mitotic cyclin-CDK complex activity. Upregulation of S-phase CDK activity is accomplished via transcriptional repression of the inhibitory kinase dacapo. Together, these changes allow for the circumvention of mitotic entry, progression through G1, and entry into S-phase. The induction of endomitosis in mammalian megakaryocytes involves activation of the c-mpl receptor by the thrombopoietin (TPO) cytokine and is mediated by ERK1/2 signaling. As with Drosophila follicle cells, endoreduplication in megakaryocytes results from activation of S-phase cyclin-CDK complexes and inhibition of mitotic cyclin-CDK activity. Entry into S-phase during endoreduplication (and mitosis) is regulated through the formation of a prereplicative complex (pre-RC) at replication origins, followed by recruitment and activation of the DNA replication machinery. In the context of endoreduplication these events are facilitated by an oscillation in cyclin E-Cdk2 activity. Cyclin E-Cdk2 activity drives the recruitment and activation of the replication machinery, but it also inhibits pre-RC formation, presumably to ensure that only one round of replication occurs per cycle. Failure to maintain control over pre-RC formation at replication origins results in a phenomenon known as "rereplication" which is common in cancer cells. The mechanism by which cyclin E-Cdk2 inhibits pre-RC formation involves downregulation of APC-Cdh1-mediated proteolysis and accumulation of the protein Geminin, which is responsible for sequestration of the pre-RC component Cdt1. Oscillations in Cyclin E-Cdk2 activity are modulated via transcriptional and post-transcriptional mechanisms. Expression of cyclin E is activated by E2F transcription factors that were shown to be required for endoreduplication. Recent work suggests that observed oscillations in E2F and cyclin E protein levels result from a negative-feedback loop involving Cul4-dependent ubiquitination and degradation of E2F. Post-transcriptional regulation of cyclin E-Cdk2 activity involves Ago/Fbw7-mediated proteolytic degradation of cyclin E and direct inhibition by factors such as Dacapo and p57. Premeiotic endomitosis in unisexual vertebrates The unisexual salamanders (genus Ambystoma) are the oldest known unisexual vertebrate lineage, having arisen about 5 million years ago. In these polyploid unisexual females, an extra premeiotic endomitotic replication of the genome, doubles the number of chromosomes. As a result, the mature eggs that are produced subsequent to the two meiotic divisions have the same ploidy as the somatic cells of the adult female salamander. Synapsis and recombination during meiotic prophase I in these unisexual females is thought to ordinarily occur between identical sister chromosomes and occasionally between homologous chromosomes. Thus little, if any, genetic variation is produced. Recombination between homeologous chromosomes occurs rarely, if at all. References Genetics Cell biology Cell cycle
Endoreduplication
[ "Biology" ]
1,993
[ "Cell biology", "Cell cycle", "Cellular processes", "Genetics" ]
2,491,705
https://en.wikipedia.org/wiki/Lightweight%20Imaging%20Device%20Interface%20Language
Lightweight Imaging Device Interface Language (abbr. LIDIL) is a printer interface definition language used in more recent Hewlett-Packard printers. This language is commonly used on HP Deskjets that do not support the PCL printer language. As the name suggests, the language only supports the definition of raster documents, and is very limited overall. It is a "host-based" protocol which is advertised with LDL in the CMD: (command set) field of the device ID string. Such models do not support printing ASCII text. External links Host-based printing (including LIDIL) Page description languages Specification languages
Lightweight Imaging Device Interface Language
[ "Engineering" ]
131
[ "Software engineering", "Specification languages" ]
2,491,779
https://en.wikipedia.org/wiki/PH%20helmet
The P helmet, PH helmet and PHG helmet were early types of gas mask issued by the British Army in the First World War, to protect troops against chlorine, phosgene and tear gases. Rather than having a separate filter for removing the toxic chemicals, they consisted of a gas-permeable hood worn over the head which was treated with chemicals. The P (or Phenate) Helmet, officially called the Tube Helmet, appeared in July 1915, replacing the simpler Hypo Helmet. It featured two mica eyepieces instead of the single visor of its predecessor, and added an exhale valve fed from a metal tube which the wearer held in his mouth. The exhale valve was needed because a double layer of flannel – one treated and one not – was needed because the solution attacked the fabric. It had flannel layers of cloth-dipped in sodium phenolate and glycerin and protected against chlorine and phosgene, but not against tear gas. Around 9 million were made. The PH Helmet (Phenate Hexamine) replaced it in January 1916, and added hexamethylene tetramine, which greatly improved protection against phosgene and added protection against hydrocyanic acid. Around 14 million were made and it remained in service until the end of the war by which time it was relegated to second line use. The PHG Helmet appeared in January 1916 and was similar to the PH Helmet but had a facepiece made of rubber sponge to add protection against tear gas. Around one and a half million were produced in 1916–1917. It was finally superseded by the Small box respirator in 1916, which was much more satisfactory against high concentrations of phosgene or lachrymators. References External links The gas mask database: The UK World War I military equipment of the United Kingdom Gas masks of the United Kingdom
PH helmet
[ "Chemistry" ]
394
[ "Gas masks of the United Kingdom", "Gas masks" ]
432,961
https://en.wikipedia.org/wiki/Intertropical%20Convergence%20Zone
The Intertropical Convergence Zone (ITCZ , or ICZ), known by sailors as the doldrums or the calms because of its monotonous windless weather, is the area where the northeast and the southeast trade winds converge. It encircles Earth near the thermal equator though its specific position varies seasonally. When it lies near the geographic equator, it is called the near-equatorial trough. Where the ITCZ is drawn into and merges with a monsoonal circulation, it is sometimes referred to as a monsoon trough (a usage that is more common in Australia and parts of Asia). Meteorology The ITCZ was originally identified from the 1920s to the 1940s as the Intertropical Front (ITF), but after the recognition in the 1940s and the 1950s of the significance of wind field convergence in tropical weather production, the term Intertropical Convergence Zone (ITCZ) was then applied. The ITCZ appears as a band of clouds, usually thunderstorms, that encircle the globe near the Equator. In the Northern Hemisphere, the trade winds move in a southwestward direction from the northeast, while in the Southern Hemisphere, they move northwestward from the southeast. When the ITCZ is positioned north or south of the Equator, these directions change according to the Coriolis effect imparted by Earth's rotation. For instance, when the ITCZ is situated north of the Equator, the southeast trade wind changes to a southwest wind as it crosses the Equator. The ITCZ is formed by vertical motion largely appearing as convective activity of thunderstorms driven by solar heating, which effectively draw air in; these are the trade winds. The ITCZ is effectively a tracer of the ascending branch of the Hadley cell and is wet. The dry descending branch is the horse latitudes. The location of the ITCZ gradually varies with the seasons, roughly corresponding with the location of the thermal equator. As the heat capacity of the oceans is greater than air over land, migration is more prominent over land. Over the oceans, where the convergence zone is better defined, the seasonal cycle is more subtle, as the convection is constrained by the distribution of ocean temperatures. Sometimes, a double ITCZ forms, with one located north and another south of the Equator, one of which is usually stronger than the other. When this occurs, a narrow ridge of high pressure forms between the two convergence zones. ITCZ over oceans vs. land The ITCZ is commonly defined as an equatorial zone where the trade winds converge. Rainfall seasonality is traditionally attributed to the north–south migration of the ITCZ, which follows the sun. Although this is largely valid over the equatorial oceans, the ITCZ and the region of maximum rainfall can be decoupled over the continents. The equatorial precipitation over land is not simply a response to just the surface convergence. Rather, it is modulated by a number of regional features such as local atmospheric jets and waves, proximity to the oceans, terrain-induced convective systems, moisture recycling, and spatiotemporal variability of land cover and albedo. South Pacific convergence zone The South Pacific convergence zone (SPCZ) is a reverse-oriented, or west-northwest to east-southeast aligned, trough extending from the west Pacific warm pool southeastwards towards French Polynesia. It lies just south of the equator during the Southern Hemisphere warm season, but can be more extratropical in nature, especially east of the International Date Line. It is considered the largest and most important piece of the ITCZ, and has the least dependence upon heating from a nearby land mass during the summer than any other portion of the monsoon trough. The southern ITCZ in the southeast Pacific and southern Atlantic, known as the SITCZ, occurs during the Southern Hemisphere fall between 3° and 10° south of the equator east of the 140th meridian west longitude during cool or neutral El Niño–Southern Oscillation (ENSO) patterns. When ENSO reaches its warm phase, otherwise known as El Niño, the tongue of lowered sea surface temperatures due to upwelling off the South American continent disappears, which causes this convergence zone to vanish as well. Effects on weather Variation in the location of the intertropical convergence zone drastically affects rainfall in many equatorial nations, resulting in the wet and dry seasons of the tropics rather than the cold and warm seasons of higher latitudes. Longer term changes in the intertropical convergence zone can result in severe droughts or flooding in nearby areas. In some cases, the ITCZ may become narrow, especially when it moves away from the equator; the ITCZ can then be interpreted as a front along the leading edge of the equatorial air. There appears to be a 15 to 25-day cycle in thunderstorm activity along the ITCZ, which is roughly half the wavelength of the Madden–Julian oscillation (MJO). Within the ITCZ the average winds are slight, unlike the zones north and south of the equator where the trade winds feed. As trans-equator sea voyages became more common, sailors in the eighteenth century named this belt of calm the doldrums because of the calm, stagnant, or inactive winds. Role in tropical cyclone formation Tropical cyclogenesis depends upon low-level vorticity as one of its six requirements, and the ITCZ fills this role as it is a zone of wind change and speed, otherwise known as horizontal wind shear. As the ITCZ migrates to tropical and subtropical latitudes and even beyond during the respective hemisphere's summer season, increasing Coriolis force makes the formation of tropical cyclones within this zone more possible. Surges of higher pressure from high latitudes can enhance tropical disturbances along its axis. In the north Atlantic and the northeastern Pacific oceans, tropical waves move along the axis of the ITCZ causing an increase in thunderstorm activity, and clusters of thunderstorms can develop under weak vertical wind shear. Hazards In the Age of Sail, to find oneself becalmed in this region in a hot and muggy climate could mean death when wind was the only effective way to propel ships across the ocean. Calm periods within the doldrums could strand ships for days or weeks. Even today, leisure and competitive sailors attempt to cross the zone as quickly as possible as the erratic weather and wind patterns may cause unexpected delays. In 2009, thunderstorms along the Intertropical Convergence Zone played a role in the loss of Air France Flight 447, which crashed while flying from Rio de Janeiro–Galeão International Airport to Charles de Gaulle Airport near Paris. The aircraft crashed with no survivors while flying through a series of large ITCZ thunderstorms, and ice forming rapidly on airspeed sensors was the precipitating cause for a cascade of human errors which ultimately doomed the flight. Most aircraft flying these routes are able to avoid the larger convective cells without incident. Effects of climate change Based on paleoclimate proxies, the position and intensity of the ITCZ varied in prehistoric times along with changes in global climate. During Heinrich events within the last 100 ka, a southward shift of the ITCZ coincided with the intensification of the Northern Hemisphere Hadley cell coincident with weakening of the Southern Hemisphere Hadley cell. The ITCZ shifted north during the mid-Holocene but migrated south following changes in insolation during the late-Holocene towards its current position. The ITCZ has also undergone periods of contraction and expansion within the last millennium. A southward shift of the ITCZ commencing after the 1950s and continuing into the 1980s may have been associated with cooling induced by aerosols in the Northern Hemisphere based on results from climate models; a northward rebound began subsequently following forced changes in the gradient in temperature between the Northern and Southern hemispheres. These fluctuations in ITCZ positioning had robust effects on climate; for instance, displacement of the ITCZ may have led to drought in the Sahel in the 1980s. Atmospheric convection may become stronger and more concentrated at the center of the ITCZ in response to a globally warming climate, resulting in sharpened contrasts in precipitation between the ITCZ core (where precipitation would be amplified) and its edges (where precipitation would be suppressed). Atmospheric reanalyses suggest that the ITCZ over the Pacific has narrowed and intensified since at least 1979, in agreement with data collected by satellites and in-situ precipitation measurements. The drier ITCZ fringes are also associated with an increase in outgoing longwave radiation outward of those areas, particularly over land within the mid-latitudes and the subtropics. This change in the ITCZ is also reflected by increasing salinity within the Atlantic and Pacific underlying the ITCZ fringes and decreasing salinity underlying central belt of the ITCZ. The IPCC Sixth Assessment Report indicated "medium agreement" from studies regarding the strengthening and tightening of the ITCZ due to anthropogenic climate change. Less certain are the regional and global shifts in ITCZ position as a result of climate change, with paleoclimate data and model simulations highlighting contrasts stemming from asymmetries in forcing from aerosols, volcanic activity, and orbital variations, as well as uncertainties associated with changes in monsoons and the Atlantic meridional overturning circulation. The climate simulations run as part of Coupled Model Intercomparison Project Phase 5 (CMIP5) did not show a consistent global displacement of the ITCZ under anthropogenic climate change. In contrast, most of the same simulations show narrowing and intensification under the same prescribed conditions. However, simulations in Coupled Model Intercomparison Project Phase 6 (CMIP6) have shown greater agreement over some regional shifts of the ITCZ in response to anthropogenic climate change, including a northward displacement over the Indian Ocean and eastern Africa and a southward displacement over the eastern Pacific and Atlantic oceans. In literature The doldrums are notably described in Samuel Taylor Coleridge's poem The Rime of the Ancient Mariner (1798) and also provide a metaphor for the initial state of boredom and indifference of Milo, the child hero of Norton Juster's classic 1961 children's novel The Phantom Tollbooth. It is also cited in the 1939 book Wind, Sand and Stars. See also Asymmetry of the Intertropical Convergence Zone Chemical equator Monsoon trough Horse latitudes Polar front Roaring Forties References External links The ITCZ in Africa via the University of South Carolina "A Shifting Band of Rain", Scientific American (March 2011) Duane E. Waliser and Catherine Gautier, November 1993: "A Satellite-derived Climatology of the ITCZ". J. Climate, 6, 2162–2174. Atmospheric dynamics Geography terminology Nautical terminology Tropical meteorology
Intertropical Convergence Zone
[ "Chemistry" ]
2,181
[ "Atmospheric dynamics", "Fluid dynamics" ]
433,005
https://en.wikipedia.org/wiki/The%20Emperor%27s%20New%20Mind
The Emperor's New Mind: Concerning Computers, Minds and The Laws of Physics is a 1989 book by the mathematical physicist Roger Penrose. Penrose argues that human consciousness is non-algorithmic, and thus is not capable of being modeled by a conventional Turing machine, which includes a digital computer. Penrose hypothesizes that quantum mechanics plays an essential role in the understanding of human consciousness. The collapse of the quantum wavefunction is seen as playing an important role in brain function. Most of the book is spent reviewing, for the scientifically-minded lay-reader, a plethora of interrelated subjects such as Newtonian physics, special and general relativity, the philosophy and limitations of mathematics, quantum physics, cosmology, and the nature of time. Penrose intermittently describes how each of these bears on his developing theme: that consciousness is not "algorithmic". Only the later portions of the book address the thesis directly. Overview Penrose states that his ideas on the nature of consciousness are speculative, and his thesis is considered erroneous by some experts in the fields of philosophy, computer science, and robotics. The Emperor's New Mind attacks the claims of artificial intelligence using the physics of computing: Penrose notes that the present home of computing lies more in the tangible world of classical mechanics than in the imponderable realm of quantum mechanics. The modern computer is a deterministic system that for the most part simply executes algorithms. Penrose shows that, by reconfiguring the boundaries of a billiard table, one might make a computer in which the billiard balls act as message carriers and their interactions act as logical decisions. The billiard-ball computer was first designed some years ago by Edward Fredkin and Tommaso Toffoli of the Massachusetts Institute of Technology. Reception Following the publication of the book, Penrose began to collaborate with Stuart Hameroff on a biological analog to quantum computation involving microtubules, which became the foundation for his subsequent book, Shadows of the Mind: A Search for the Missing Science of Consciousness. Penrose won the Science Book Prize in 1990 for The Emperor's New Mind. According to an article in the American Journal of Physics, Penrose incorrectly claims a barrier far away from a localized particle can affect the particle. See also Alan Turing Anathem Church–Turing thesis Mind–body dualism Orchestrated objective reduction Quantum mind Raymond Smullyan Shadows of the Mind "The Emperor's New Clothes" Turing test References 1989 non-fiction books Works about consciousness English-language non-fiction books English non-fiction books Mathematics books Oxford University Press books Philosophy of artificial intelligence Philosophy of mind literature Popular physics books Quantum mind Science books Turing machine Works by Roger Penrose
The Emperor's New Mind
[ "Physics" ]
559
[ "Quantum mind", "Quantum mechanics" ]
433,118
https://en.wikipedia.org/wiki/Xenon%20hexafluoroplatinate
Xenon hexafluoroplatinate is the product of the reaction of platinum hexafluoride with xenon, in an experiment that proved the chemical reactivity of the noble gases. This experiment was performed by Neil Bartlett at the University of British Columbia, who formulated the product as "Xe+[PtF6]−", although subsequent work suggests that Bartlett's product was probably a salt mixture and did not in fact contain this specific salt. Preparation "Xenon hexafluoroplatinate" is prepared from xenon and platinum hexafluoride (PtF6) as gaseous solutions in SF6. The reactants are combined at 77 K and slowly warmed to allow for a controlled reaction. Structure The material described originally as "xenon hexafluoroplatinate" is probably not Xe+[PtF6]−. The main problem with this formulation is "Xe+", which would be a radical and would dimerize or abstract a fluorine atom to give XeF+. Thus, Bartlett discovered that Xe undergoes chemical reactions, but the nature and purity of his initial mustard yellow product remains uncertain. Further work indicates that Bartlett's product probably contained [XeF]+[PtF5]−, [XeF]+[Pt2F11]−, and [Xe2F3]+[PtF6]−. The title "compound" is a salt, consisting of an octahedral anionic fluoride complex of platinum and various xenon cations. It has been proposed that the platinum fluoride forms a negatively charged polymeric network with xenon or xenon fluoride cations held in its interstices. A preparation of "XePtF6" in HF solution results in a solid which has been characterized as a polymeric network associated with XeF+. This result is evidence for such a polymeric structure of xenon hexafluoroplatinate. History In 1962, Neil Bartlett discovered that a mixture of platinum hexafluoride gas and oxygen formed a red solid. The red solid turned out to be dioxygenyl hexafluoroplatinate, Bartlett noticed that the ionization energy for O2 (1175 kJ mol−1) was very close to the ionization energy for Xe (1170 kJ mol−1). He then asked his colleagues to give him some xenon "so that he could try out some reactions", whereupon he established that xenon indeed reacts with PtF6. Although, as discussed above, the product was probably a mixture of several compounds, Bartlett's work was the first proof that compounds could be prepared from a noble gas. Since Bartlett's observation, many well-defined compounds of xenon have been reported including XeF2, XeF4, and XeF6. See also Hexafluoroplatinate References Xenon compounds Fluorides Nonmetal halides Coordination complexes Platinum compounds Fluorometallates
Xenon hexafluoroplatinate
[ "Chemistry" ]
641
[ "Salts", "Coordination chemistry", "Fluorides", "Coordination complexes" ]
434,188
https://en.wikipedia.org/wiki/Bioremediation
Bioremediation broadly refers to any process wherein a biological system (typically bacteria, microalgae, fungi in mycoremediation, and plants in phytoremediation), living or dead, is employed for removing environmental pollutants from air, water, soil, flue gasses, industrial effluents etc., in natural or artificial settings. The natural ability of organisms to adsorb, accumulate, and degrade common and emerging pollutants has attracted the use of biological resources in treatment of contaminated environment. In comparison to conventional physicochemical treatment methods bioremediation may offer advantages as it aims to be sustainable, eco-friendly, cheap, and scalable. Most bioremediation is inadvertent, involving native organisms. Research on bioremediation is heavily focused on stimulating the process by inoculation of a polluted site with organisms or supplying nutrients to promote their growth. Environmental remediation is an alternative to bioremediation. While organic pollutants are susceptible to biodegradation, heavy metals cannot be degraded, but rather oxidized or reduced. Typical bioremediations involves oxidations. Oxidations enhance the water-solubility of organic compounds and their susceptibility to further degradation by further oxidation and hydrolysis. Ultimately biodegradation converts hydrocarbons to carbon dioxide and water. For heavy metals, bioremediation offers few solutions. Metal-containing pollutant can be removed, at least partially, with varying bioremediation techniques. The main challenge to bioremediations is rate: the processes are slow. Bioremediation techniques can be classified as (i) in situ techniques, which treat polluted sites directly, vs (ii) ex situ techniques which are applied to excavated materials. In both these approaches, additional nutrients, vitamins, minerals, and pH buffers are added to enhance the growth and metabolism of the microorganisms. In some cases, specialized microbial cultures are added (biostimulation). Some examples of bioremediation related technologies are phytoremediation, bioventing, bioattenuation, biosparging, composting (biopiles and windrows), and landfarming. Other remediation techniques include thermal desorption, vitrification, air stripping, bioleaching, rhizofiltration, and soil washing. Biological treatment, bioremediation, is a similar approach used to treat wastes including wastewater, industrial waste and solid waste. The end goal of bioremediation is to remove harmful compounds to improve soil and water quality. In situ techniques Bioventing Bioventing is a process that increases the oxygen or air flow into the unsaturated zone of the soil, this in turn increases the rate of natural in situ degradation of the targeted hydrocarbon contaminant. Bioventing, an aerobic bioremediation, is the most common form of oxidative bioremediation process where oxygen is provided as the electron acceptor for oxidation of petroleum, polyaromatic hydrocarbons (PAHs), phenols, and other reduced pollutants. Oxygen is generally the preferred electron acceptor because of the higher energy yield and because oxygen is required for some enzyme systems to initiate the degradation process. Microorganisms can degrade a wide variety of hydrocarbons, including components of gasoline, kerosene, diesel, and jet fuel. Under ideal aerobic conditions, the biodegradation rates of the low- to moderate-weight aliphatic, alicyclic, and aromatic compounds can be very high. As molecular weight of the compound increases, the resistance to biodegradation increases simultaneously. This results in higher contaminated volatile compounds due to their high molecular weight and an increased difficulty to remove from the environment. Most bioremediation processes involve oxidation-reduction reactions where either an electron acceptor (commonly oxygen) is added to stimulate oxidation of a reduced pollutant (e.g. hydrocarbons) or an electron donor (commonly an organic substrate) is added to reduce oxidized pollutants (nitrate, perchlorate, oxidized metals, chlorinated solvents, explosives and propellants). In both these approaches, additional nutrients, vitamins, minerals, and pH buffers may be added to optimize conditions for the microorganisms. In some cases, specialized microbial cultures are added (bioaugmentation) to further enhance biodegradation. Approaches for oxygen addition below the water table include recirculating aerated water through the treatment zone, addition of pure oxygen or peroxides, and air sparging. Recirculation systems typically consist of a combination of injection wells or galleries and one or more recovery wells where the extracted groundwater is treated, oxygenated, amended with nutrients and re-injected. However, the amount of oxygen that can be provided by this method is limited by the low solubility of oxygen in water (8 to 10 mg/L for water in equilibrium with air at typical temperatures). Greater amounts of oxygen can be provided by contacting the water with pure oxygen or addition of hydrogen peroxide (H2O2) to the water. In some cases, slurries of solid calcium or magnesium peroxide are injected under pressure through soil borings. These solid peroxides react with water releasing H2O2 which then decomposes releasing oxygen. Air sparging involves the injection of air under pressure below the water table. The air injection pressure must be great enough to overcome the hydrostatic pressure of the water and resistance to air flow through the soil. Biostimulation Bioremediation can be carried out by bacteria that are naturally present. In biostimulation, the population of these helpful bacteria can be increased by adding nutrients. Bacteria can in principle be used to degrade hydrocarbons. Specific to marine oil spills, nitrogen and phosphorus have been key nutrients in biodegradation. The bioremediation of hydrocarbons suffers from low rates. Bioremediation can involve the action of microbial consortium. Within the consortium, the product of one species could be the substrate for another species. Anaerobic bioremediation can in principle be employed to treat a range of oxidized contaminants including chlorinated ethylenes (PCE, TCE, DCE, VC), chlorinated ethanes (TCA, DCA), chloromethanes (CT, CF), chlorinated cyclic hydrocarbons, various energetics (e.g., perchlorate, RDX, TNT), and nitrate. This process involves the addition of an electron donor to: 1) deplete background electron acceptors including oxygen, nitrate, oxidized iron and manganese and sulfate; and 2) stimulate the biological and/or chemical reduction of the oxidized pollutants. The choice of substrate and the method of injection depend on the contaminant type and distribution in the aquifer, hydrogeology, and remediation objectives. Substrate can be added using conventional well installations, by direct-push technology, or by excavation and backfill such as permeable reactive barriers (PRB) or biowalls. Slow-release products composed of edible oils or solid substrates tend to stay in place for an extended treatment period. Soluble substrates or soluble fermentation products of slow-release substrates can potentially migrate via advection and diffusion, providing broader but shorter-lived treatment zones. The added organic substrates are first fermented to hydrogen (H2) and volatile fatty acids (VFAs). The VFAs, including acetate, lactate, propionate and butyrate, provide carbon and energy for bacterial metabolism. Bioattenuation During bioattenuation, biodegradation occurs naturally with the addition of nutrients or bacteria. The indigenous microbes present will determine the metabolic activity and act as a natural attenuation. While there is no anthropogenic involvement in bioattenuation, the contaminated site must still be monitored. Biosparging Biosparging is the process of groundwater remediation as oxygen, and possible nutrients, is injected. When oxygen is injected, indigenous bacteria are stimulated to increase rate of degradation. However, biosparging focuses on saturated contaminated zones, specifically related to ground water remediation. UNICEF, power producers, bulk water suppliers, and local governments are early adopters of low cost bioremediation, such as aerobic bacteria tablets which are simply dropped into water. Ex situ techniques Biopiles Biopiles, similar to bioventing, are used to remove petroleum pollutants by introducing aerobic hydrocarbons to contaminated soils. However, the soil is excavated and piled with an aeration system. This aeration system enhances microbial activity by introducing oxygen under positive pressure or removes oxygen under negative pressure. Windrows Windrow systems are similar to compost techniques where soil is periodically turned in order to enhance aeration. This periodic turning also allows contaminants present in the soil to be uniformly distributed which accelerates the process of bioremediation. Landfarming Landfarming, or land treatment, is a method commonly used for sludge spills. This method disperses contaminated soil and aerates the soil by cyclically rotating. This process is an above land application and contaminated soils are required to be shallow in order for microbial activity to be stimulated. However, if the contamination is deeper than 5 feet, then the soil is required to be excavated to above ground. While it is an ex situ technique, it can also be considered an in situ technique as Landfarming can be performed at the site of contamination. In situ vs. Ex situ Ex situ techniques are often more expensive because of excavation and transportation costs to the treatment facility, while in situ techniques are performed at the site of contamination so they only have installation costs. While there is less cost there is also less of an ability to determine the scale and spread of the pollutant. The pollutant ultimately determines which bioremediation method to use. The depth and spread of the pollutantare other important factors. Heavy metals Heavy metals are introduced into the environment by both anthropogenic activities and natural factors. Anthropogenic activities include industrial emissions, electronic waste, and mining. Natural factors include mineral weathering, soil erosion, and forest fires. Heavy metals including cadmium, chromium, lead and uranium are unlike organic compounds and cannot be biodegraded. However, bioremediation processes can potentially be used to minimize the mobility of these material in the subsurface, lowering the potential for human and environmental exposure. Heavy metals from these factors are predominantly present in water sources due to runoff where it is uptake by marine fauna and flora. Hexavalent chromium (Cr[VI]) and uranium (U[VI]) can be reduced to less mobile and/or less toxic forms (e.g., Cr[III], U[IV]). Similarly, reduction of sulfate to sulfide (sulfidogenesis) can be used to immobilize certain metals (e.g., zinc, cadmium). The mobility of certain metals including chromium (Cr) and uranium (U) varies depending on the oxidation state of the material. Microorganisms can be used to lower the toxicity and mobility of chromium by reducing hexavalent chromium, Cr(VI) to trivalent Cr(III). Reduction of the more mobile U(VI) species affords the less mobile U(IV) derivatives. Microorganisms are used in this process because the reduction rate of these metals is often slow in the absence of microbial interactions Research is also underway to develop methods to remove metals from water by enhancing the sorption of the metal to cell walls. This approach has been evaluated for treatment of cadmium, chromium, and lead. Genetically modified bacteria has also been explored for use in sequestration of Arsenic. Phytoextraction processes concentrate contaminants in the biomass for subsequent removal. Metal extractions can in principle be performed in situ or ex situ where in situ is preferred since it is less expensive to excavate the substrate. Bioremediation is not specific to metals. In 2010 there was a massive oil spill in the Gulf of Mexico. Populations of bacteria and archaea were used to rejuvenate the coast after the oil spill. These microorganisms over time have developed metabolic networks that can utilize hydrocarbons such as oil and petroleum as a source of carbon and energy. Microbial bioremediation is a very effective modern technique for restoring natural systems by removing toxins from the environment. Pesticides Of the many ways to deal with pesticide contamination, bioremediation promises to be more effective. Many sites around the world are contaminated with agrichemicals. These agrichemicals often resist biodegradation, by design. Harming all manners of organic life with long term health issues such as cancer, rashes, blindness, paralysis, and mental illness. An example is Lindane which was a commonly used insecticide in the 20th century. Long time exposure poses a serious threat to humans and the surrounding ecosystem. Lindane reduces the potential of beneficial bacteria in the soil such as nitrogen fixation cyanobacteria. As well as causing central nervous system issues in smaller mammals such as seizures, dizziness, and even death. What makes it so harmful to these organisms is how quickly distributed it gets through the brain and fatty tissues. While Lindane has been mostly limited to specific use, it is still produced and used around the world. Actinobacteria has been a promising candidate in situ technique specifically for removing pesticides. When certain strains of Actinobacteria have been grouped together, their efficiency in degrading pesticides has enhanced. As well as being a reusable technique that strengthens through further use by limiting the migration space of these cells to target specific areas and not fully consume their cleansing abilities. Despite encouraging results, Actinobacteria has only been used in controlled lab settings and will need further development in finding the cost effectiveness and scalability of use. Limitations of bioremediation Bioremediation can be used to mineralize organic pollutants, to partially transform the pollutants, or alter their mobility. Heavy metals and radionuclides generally cannot be biodegraded, but can be bio-transformed to less mobile forms. In some cases, microbes do not fully mineralize the pollutant, potentially producing a more toxic compound. For example, under anaerobic conditions, the reductive dehalogenation of TCE may produce dichloroethylene (DCE) and vinyl chloride (VC), which are suspected or known carcinogens. However, the microorganism Dehalococcoides can further reduce DCE and VC to the non-toxic product ethene. The molecular pathways for bioremediation are of considerable interest. In addition, knowing these pathways will help develop new technologies that can deal with sites that have uneven distributions of a mixture of contaminants. Biodegradation requires microbial population with the metabolic capacity to degrade the pollutant. The biological processes used by these microbes are highly specific, therefore, many environmental factors must be taken into account and regulated as well. It can be difficult to extrapolate the results from the small-scale test studies into big field operations. In many cases, bioremediation takes more time than other alternatives such as land filling and incineration. Another example is bioventing, which is inexpensive to bioremediate contaminated sites, however, this process is extensive and can take a few years to decontaminate a site.> Another major drawback is finding the right species to perform bioremediation. In order to prevent the introduction and spreading of an invasive species to the ecosystem, an indigenous species is needed. As well as a species plentiful enough to clean the whole site without exhausting the population. Finally the species should be resilient enough to withstand the environmental conditions. These specific criteria may make it difficult to perform bioremediation on a contaminated site. In agricultural industries, the use of pesticides is a top factor in direct soil contamination and runoff water contamination. The limitation or remediation of pesticides is the low bioavailability. Altering the pH and temperature of the contaminated soil is a resolution to increase bioavailability which, in turn, increased degradation of harmful compounds. The compound acrylonitrile is commonly produced in industrial setting but adversely contaminates soils. Microorganisms containing nitrile hydratases (NHase) degraded harmful acrylonitrile compounds into non-polluting substances. Since the experience with harmful contaminants are limited, laboratory practices are required to evaluate effectiveness, treatment designs, and estimate treatment times. Bioremediation processes may take several months to several years depending on the size of the contaminated area. Genetic engineering The use of genetic engineering to create organisms specifically designed for bioremediation is under preliminary research. Two category of genes can be inserted in the organism: degradative genes, which encode proteins required for the degradation of pollutants, and reporter genes, which encode proteins able to monitor pollution levels. Numerous members of Pseudomonas have been modified with the lux gene for the detection of the polyaromatic hydrocarbon naphthalene. A field test for the release of the modified organism has been successful on a moderately large scale. There are concerns surrounding release and containment of genetically modified organisms into the environment due to the potential of horizontal gene transfer. Genetically modified organisms are classified and controlled under the Toxic Substances Control Act of 1976 under United States Environmental Protection Agency. Measures have been created to address these concerns. Organisms can be modified such that they can only survive and grow under specific sets of environmental conditions. In addition, the tracking of modified organisms can be made easier with the insertion of bioluminescence genes for visual identification. Genetically modified organisms have been created to treat oil spills and break down certain plastics (PET). Additive manufacturing Additive manufacturing technologies such as bioprinting offer distinctive benefits that can be leveraged in bioremediation to develop structures with characteristics tailored to biological systems and environmental cleanup needs, and even though the adoption of this technology in bioremediation is in its early stages, the area is seeing massive growth. See also Bioremediation of radioactive waste Biosurfactant Chelation Dutch pollutant standards Folkewall In situ chemical oxidation In situ chemical reduction List of environment topics Mega Borg Oil Spill Microbial biodegradation Mycoremediation Mycorrhizal bioremediation Pleurotus Phytoremediation Pseudomonas putida (used for degrading oil) Restoration ecology Xenocatabolism References External links Phytoremediation, hosted by the Missouri Botanical Garden To remediate or to not remediate? Anaerobic Bioremediation Biotechnology Environmental soil science Environmental engineering Environmental terminology Conservation projects Ecological restoration Soil contamination Radioactive waste
Bioremediation
[ "Chemistry", "Technology", "Engineering", "Biology", "Environmental_science" ]
3,979
[ "Ecological restoration", "Chemical engineering", "Environmental chemistry", "Environmental soil science", "Biotechnology", "Biodegradation", "Ecological techniques", "Civil engineering", "Soil contamination", "Environmental impact of nuclear power", "Radioactivity", "Environmental engineering...
434,288
https://en.wikipedia.org/wiki/Catastrophe%20theory
In mathematics, catastrophe theory is a branch of bifurcation theory in the study of dynamical systems; it is also a particular special case of more general singularity theory in geometry. Bifurcation theory studies and classifies phenomena characterized by sudden shifts in behavior arising from small changes in circumstances, analysing how the qualitative nature of equation solutions depends on the parameters that appear in the equation. This may lead to sudden and dramatic changes, for example the unpredictable timing and magnitude of a landslide. Catastrophe theory originated with the work of the French mathematician René Thom in the 1960s, and became very popular due to the efforts of Christopher Zeeman in the 1970s. It considers the special case where the long-run stable equilibrium can be identified as the minimum of a smooth, well-defined potential function (Lyapunov function). Small changes in certain parameters of a nonlinear system can cause equilibria to appear or disappear, or to change from attracting to repelling and vice versa, leading to large and sudden changes of the behaviour of the system. However, examined in a larger parameter space, catastrophe theory reveals that such bifurcation points tend to occur as part of well-defined qualitative geometrical structures. In the late 1970s, applications of catastrophe theory to areas outside its scope began to be criticized, especially in biology and social sciences. Zahler and Sussmann, in a 1977 article in Nature, referred to such applications as being "characterised by incorrect reasoning, far-fetched assumptions, erroneous consequences, and exaggerated claims". As a result, catastrophe theory has become less popular in applications. Elementary catastrophes Catastrophe theory analyzes degenerate critical points of the potential function — points where not just the first derivative, but one or more higher derivatives of the potential function are also zero. These are called the germs of the catastrophe geometries. The degeneracy of these critical points can be unfolded by expanding the potential function as a Taylor series in small perturbations of the parameters. When the degenerate points are not merely accidental, but are structurally stable, the degenerate points exist as organising centres for particular geometric structures of lower degeneracy, with critical features in the parameter space around them. If the potential function depends on two or fewer active variables, and four or fewer active parameters, then there are only seven generic structures for these bifurcation geometries, with corresponding standard forms into which the Taylor series around the catastrophe germs can be transformed by diffeomorphism (a smooth transformation whose inverse is also smooth). These seven fundamental types are now presented, with the names that Thom gave them. Potential functions of one active variable Catastrophe theory studies dynamical systems that describe the evolution of a state variable over time : In the above equation, is referred to as the potential function, and is often a vector or a scalar which parameterise the potential function. The value of may change over time, and it can also be referred to as the control variable. In the following examples, parameters like are such controls. Fold catastrophe When , the potential V has two extrema - one stable, and one unstable. If the parameter a is slowly increased, the system can follow the stable minimum point. But at the stable and unstable extrema meet, and annihilate. This is the bifurcation point. At there is no longer a stable solution. If a physical system is followed through a fold bifurcation, one therefore finds that as a reaches 0, the stability of the solution is suddenly lost, and the system will make a sudden transition to a new, very different behaviour. This bifurcation value of the parameter a is sometimes called the "tipping point". Cusp catastrophe The cusp geometry is very common when one explores what happens to a fold bifurcation if a second parameter, b, is added to the control space. Varying the parameters, one finds that there is now a curve (blue) of points in (a,b) space where stability is lost, where the stable solution will suddenly jump to an alternate outcome. But in a cusp geometry the bifurcation curve loops back on itself, giving a second branch where this alternate solution itself loses stability, and will make a jump back to the original solution set. By repeatedly increasing b and then decreasing it, one can therefore observe hysteresis loops, as the system alternately follows one solution, jumps to the other, follows the other back, and then jumps back to the first. However, this is only possible in the region of parameter space . As a is increased, the hysteresis loops become smaller and smaller, until above they disappear altogether (the cusp catastrophe), and there is only one stable solution. One can also consider what happens if one holds b constant and varies a. In the symmetrical case , one observes a pitchfork bifurcation as a is reduced, with one stable solution suddenly splitting into two stable solutions and one unstable solution as the physical system passes to through the cusp point (0,0) (an example of spontaneous symmetry breaking). Away from the cusp point, there is no sudden change in a physical solution being followed: when passing through the curve of fold bifurcations, all that happens is an alternate second solution becomes available. A famous suggestion is that the cusp catastrophe can be used to model the behaviour of a stressed dog, which may respond by becoming cowed or becoming angry. The suggestion is that at moderate stress (), the dog will exhibit a smooth transition of response from cowed to angry, depending on how it is provoked. But higher stress levels correspond to moving to the region (). Then, if the dog starts cowed, it will remain cowed as it is irritated more and more, until it reaches the 'fold' point, when it will suddenly, discontinuously snap through to angry mode. Once in 'angry' mode, it will remain angry, even if the direct irritation parameter is considerably reduced. A simple mechanical system, the "Zeeman Catastrophe Machine", nicely illustrates a cusp catastrophe. In this device, smooth variations in the position of the end of a spring can cause sudden changes in the rotational position of an attached wheel. Catastrophic failure of a complex system with parallel redundancy can be evaluated based on the relationship between local and external stresses. The model of the structural fracture mechanics is similar to the cusp catastrophe behavior. The model predicts reserve ability of a complex system. Other applications include the outer sphere electron transfer frequently encountered in chemical and biological systems, modelling the dynamics of cloud condensation nuclei in the atmosphere, and modelling real estate prices. Fold bifurcations and the cusp geometry are by far the most important practical consequences of catastrophe theory. They are patterns which reoccur again and again in physics, engineering and mathematical modelling. They produce the strong gravitational lensing events and provide astronomers with one of the methods used for detecting black holes and the dark matter of the universe, via the phenomenon of gravitational lensing producing multiple images of distant quasars. The remaining simple catastrophe geometries are very specialised in comparison. Swallowtail catastrophe The control parameter space is three-dimensional. The bifurcation set in parameter space is made up of three surfaces of fold bifurcations, which meet in two lines of cusp bifurcations, which in turn meet at a single swallowtail bifurcation point. As the parameters go through the surface of fold bifurcations, one minimum and one maximum of the potential function disappear. At the cusp bifurcations, two minima and one maximum are replaced by one minimum; beyond them the fold bifurcations disappear. At the swallowtail point, two minima and two maxima all meet at a single value of x. For values of , beyond the swallowtail, there is either one maximum-minimum pair, or none at all, depending on the values of b and c. Two of the surfaces of fold bifurcations, and the two lines of cusp bifurcations where they meet for , therefore disappear at the swallowtail point, to be replaced with only a single surface of fold bifurcations remaining. Salvador Dalí's last painting, The Swallow's Tail, was based on this catastrophe. Butterfly catastrophe Depending on the parameter values, the potential function may have three, two, or one different local minima, separated by the loci of fold bifurcations. At the butterfly point, the different 3-surfaces of fold bifurcations, the 2-surfaces of cusp bifurcations, and the lines of swallowtail bifurcations all meet up and disappear, leaving a single cusp structure remaining when . Potential functions of two active variables Umbilic catastrophes are examples of corank 2 catastrophes. They can be observed in optics in the focal surfaces created by light reflecting off a surface in three dimensions and are intimately connected with the geometry of nearly spherical surfaces: umbilical point. Thom proposed that the hyperbolic umbilic catastrophe modeled the breaking of a wave and the elliptical umbilic modeled the creation of hair-like structures. Hyperbolic umbilic catastrophe Elliptic umbilic catastrophe Parabolic umbilic catastrophe Arnold's notation Vladimir Arnold gave the catastrophes the ADE classification, due to a deep connection with simple Lie groups. A0 - a non-singular point: . A1 - a local extremum, either a stable minimum or unstable maximum . A2 - the fold A3 - the cusp A4 - the swallowtail A5 - the butterfly Ak - a representative of an infinite sequence of one variable forms D4− - the elliptical umbilic D4+ - the hyperbolic umbilic D5 - the parabolic umbilic Dk - a representative of an infinite sequence of further umbilic forms E6 - the symbolic umbilic E7 E8 There are objects in singularity theory which correspond to most of the other simple Lie groups. Optics As predicted by catastrophe theory, singularities are generic, and stable under perturbation. This explains how the bright lines and surfaces are stable under perturbation. The caustics one sees at the bottom of a swimming pool, for example, have a distinctive texture and only has a few types of singular points, even though the surface of the water is ever changing. The edge of the rainbow, for example, has a fold catastrophe. Due to the wave nature of light, the catastrophe has fine diffraction details described by the Airy function. This is a generic result and does not depend on the precise shape of the water droplet, and so the edge of the rainbow always has the shape of an Airy function. The same Airy function fold catastrophe can be seen in nuclear-nuclear scattering ("nuclear rainbow"). The cusp catastrophe is the next-simplest to observe. Due to the wave nature of light, the catastrophe has fine diffraction details described by the Pearcey function. Higher-order catastrophes, such as the swallowtail and the butterfly, have also been observed. See also Broken symmetry Butterfly effect Chaos theory Domino effect Inflection point Morphology Phase transition Punctuated equilibrium Snowball effect Spontaneous symmetry breaking References Bibliography Arnold, Vladimir Igorevich (1992) Catastrophe Theory, 3rd ed. Berlin: Springer-Verlag V. S. Afrajmovich, V. I. Arnold, et al., Bifurcation Theory And Catastrophe Theory, Bełej, M. Kulesza, S. (2013) M"odeling the Real Estate Prices in Olsztyn under Instability Conditions", Folia Oeconomica Stetinensia 11(1): 61–72, ISSN (Online) 1898–0198, ISSN (Print) 1730–4237, Castrigiano, Domenico P. L. and Hayes, Sandra A. (2004) Catastrophe Theory, second edition, Boulder: Westview Gilmore, Robert (1993) Catastrophe Theory for Scientists and Engineers, New York: Dover Petters, Arlie O., Levine, Harold and Wambsganss, Joachim (2001) Singularity Theory and Gravitational Lensing, Boston: Birkhäuser Postle, Denis (1980) Catastrophe Theory – Predict and avoid personal disasters, Fontana Paperbacks Poston, Tim and Stewart, Ian (1998) Catastrophe: Theory and Its Applications, New York: Dover Sanns, Werner (2000) Catastrophe Theory with Mathematica: A Geometric Approach, Germany: DAV Saunders, Peter Timothy (1980) An Introduction to Catastrophe Theory, Cambridge, England: Cambridge University Press Thom, René (1989) Structural Stability and Morphogenesis: An Outline of a General Theory of Models, Reading, MA: Addison-Wesley Woodcock, Alexander Edward Richard and Davis, Monte. (1978) Catastrophe Theory, New York: E. P. Dutton Zeeman, E.C. (1977) Catastrophe Theory-Selected Papers 1972–1977, Reading, MA: Addison-Wesley External links CompLexicon: Catastrophe Theory Catastrophe teacher Java simulation of Zeeman's catastrophe machine Bifurcation theory Singularity theory Systems theory Chaos theory
Catastrophe theory
[ "Mathematics" ]
2,702
[ "Bifurcation theory", "Dynamical systems" ]
435,420
https://en.wikipedia.org/wiki/Anaerobic%20respiration
Anaerobic respiration is respiration using electron acceptors other than molecular oxygen (O2). Although oxygen is not the final electron acceptor, the process still uses a respiratory electron transport chain. In aerobic organisms undergoing respiration, electrons are shuttled to an electron transport chain, and the final electron acceptor is oxygen. Molecular oxygen is an excellent electron acceptor. Anaerobes instead use less-oxidizing substances such as nitrate (), fumarate (), sulfate (), or elemental sulfur (S). These terminal electron acceptors have smaller reduction potentials than O2. Less energy per oxidized molecule is released. Therefore, anaerobic respiration is less efficient than aerobic. As compared with fermentation Anaerobic cellular respiration and fermentation generate ATP in very different ways, and the terms should not be treated as synonyms. Cellular respiration (both aerobic and anaerobic) uses highly reduced chemical compounds such as NADH and FADH2 (for example produced during glycolysis and the citric acid cycle) to establish an electrochemical gradient (often a proton gradient) across a membrane. This results in an electrical potential or ion concentration difference across the membrane. The reduced chemical compounds are oxidized by a series of respiratory integral membrane proteins with sequentially increasing reduction potentials, with the final electron acceptor being oxygen (in aerobic respiration) or another chemical substance (in anaerobic respiration). A proton motive force drives protons down the gradient (across the membrane) through the proton channel of ATP synthase. The resulting current drives ATP synthesis from ADP and inorganic phosphate. Fermentation, in contrast, does not use an electrochemical gradient but instead uses only substrate-level phosphorylation to produce ATP. The electron acceptor NAD+ is regenerated from NADH formed in oxidative steps of the fermentation pathway by the reduction of oxidized compounds. These oxidized compounds are often formed during the fermentation pathway itself, but may also be external. For example, in homofermentative lactic acid bacteria, NADH formed during the oxidation of glyceraldehyde-3-phosphate is oxidized back to NAD+ by the reduction of pyruvate to lactic acid at a later stage in the pathway. In yeast, acetaldehyde is reduced to ethanol to regenerate NAD+. There are two important anaerobic microbial methane formation pathways, through carbon dioxide / bicarbonate () reduction (respiration) or acetate fermentation. Ecological importance Anaerobic respiration is a critical component of the global nitrogen, iron, sulfur, and carbon cycles through the reduction of the oxyanions of nitrogen, sulfur, and carbon to more-reduced compounds. The biogeochemical cycling of these compounds, which depends upon anaerobic respiration, significantly impacts the carbon cycle and global warming. Anaerobic respiration occurs in many environments, including freshwater and marine sediments, soil, subsurface aquifers, deep subsurface environments, and biofilms. Even environments that contain oxygen, such as soil, have micro-environments that lack oxygen due to the slow diffusion characteristics of oxygen gas. An example of the ecological importance of anaerobic respiration is the use of nitrate as a terminal electron acceptor, or dissimilatory denitrification, which is the main route by which fixed nitrogen is returned to the atmosphere as molecular nitrogen gas. The denitrification process is also very important in host-microbe interactions. Like mitochondria in oxygen-respiring microorganisms, some single-cellular anaerobic ciliates use denitrifying endosymbionts to gain energy. Another example is methanogenesis, a form of carbon-dioxide respiration, that is used to produce methane gas by anaerobic digestion. Biogenic methane can be a sustainable alternative to fossil fuels. However, uncontrolled methanogenesis in landfill sites releases large amounts of methane into the atmosphere, acting as a potent greenhouse gas. Sulfate respiration produces hydrogen sulfide, which is responsible for the characteristic 'rotten egg' smell of coastal wetlands and has the capacity to precipitate heavy metal ions from solution, leading to the deposition of sulfidic metal ores. Economic relevance Dissimilatory denitrification is widely used in the removal of nitrate and nitrite from municipal wastewater. An excess of nitrate can lead to eutrophication of waterways into which treated water is released. Elevated nitrite levels in drinking water can lead to problems due to its toxicity. Denitrification converts both compounds into harmless nitrogen gas. Specific types of anaerobic respiration are also critical in bioremediation, which uses microorganisms to convert toxic chemicals into less-harmful molecules to clean up contaminated beaches, aquifers, lakes, and oceans. For example, toxic arsenate or selenate can be reduced to less toxic compounds by various anaerobic bacteria via anaerobic respiration. The reduction of chlorinated chemical pollutants, such as vinyl chloride and carbon tetrachloride, also occurs through anaerobic respiration. Anaerobic respiration is useful in generating electricity in microbial fuel cells, which employ bacteria that respire solid electron acceptors (such as oxidized iron) to transfer electrons from reduced compounds to an electrode. This process can simultaneously degrade organic carbon waste and generate electricity. Examples of electron acceptors in respiration See also Hydrogenosomes and mitosomes Anaerobic digestion Microbial fuel cell Standard electrode potential (data page) Table of standard reduction potentials for half-reactions important in biochemistry Lithotrophs Further reading References Anaerobic digestion Biodegradation Cellular respiration Anaerobic respiration
Anaerobic respiration
[ "Chemistry", "Engineering", "Biology" ]
1,221
[ "Cellular respiration", "Biochemistry", "Biodegradation", "Anaerobic digestion", "Environmental engineering", "Water technology", "Metabolism" ]
435,639
https://en.wikipedia.org/wiki/Mertens%20function
In number theory, the Mertens function is defined for all positive integers n as where is the Möbius function. The function is named in honour of Franz Mertens. This definition can be extended to positive real numbers as follows: Less formally, is the count of square-free integers up to x that have an even number of prime factors, minus the count of those that have an odd number. The first 143 M(n) values are The Mertens function slowly grows in positive and negative directions both on average and in peak value, oscillating in an apparently chaotic manner passing through zero when n has the values 2, 39, 40, 58, 65, 93, 101, 145, 149, 150, 159, 160, 163, 164, 166, 214, 231, 232, 235, 236, 238, 254, 329, 331, 332, 333, 353, 355, 356, 358, 362, 363, 364, 366, 393, 401, 403, 404, 405, 407, 408, 413, 414, 419, 420, 422, 423, 424, 425, 427, 428, ... . Because the Möbius function only takes the values −1, 0, and +1, the Mertens function moves slowly, and there is no x such that |M(x)| > x. H. Davenport demonstrated that, for any fixed h, uniformly in . This implies, for that The Mertens conjecture went further, stating that there would be no x where the absolute value of the Mertens function exceeds the square root of x. The Mertens conjecture was proven false in 1985 by Andrew Odlyzko and Herman te Riele. However, the Riemann hypothesis is equivalent to a weaker conjecture on the growth of M(x), namely M(x) = O(x1/2 + ε). Since high values for M(x) grow at least as fast as , this puts a rather tight bound on its rate of growth. Here, O refers to big O notation. The true rate of growth of M(x) is not known. An unpublished conjecture of Steve Gonek states that Probabilistic evidence towards this conjecture is given by Nathan Ng. In particular, Ng gives a conditional proof that the function has a limiting distribution on . That is, for all bounded Lipschitz continuous functions on the reals we have that if one assumes various conjectures about the Riemann zeta function. Representations As an integral Using the Euler product, one finds that where is the Riemann zeta function, and the product is taken over primes. Then, using this Dirichlet series with Perron's formula, one obtains where c > 1. Conversely, one has the Mellin transform which holds for . A curious relation given by Mertens himself involving the second Chebyshev function is Assuming that the Riemann zeta function has no multiple non-trivial zeros, one has the "exact formula" by the residue theorem: Weyl conjectured that the Mertens function satisfied the approximate functional-differential equation where H(x) is the Heaviside step function, B are Bernoulli numbers, and all derivatives with respect to t are evaluated at t = 0. There is also a trace formula involving a sum over the Möbius function and zeros of the Riemann zeta function in the form where the first sum on the right-hand side is taken over the non-trivial zeros of the Riemann zeta function, and (g, h) are related by the Fourier transform, such that As a sum over Farey sequences Another formula for the Mertens function is where is the Farey sequence of order n. This formula is used in the proof of the Franel–Landau theorem. As a determinant M(n) is the determinant of the n × n Redheffer matrix, a (0, 1) matrix in which aij is 1 if either j is 1 or i divides j. As a sum of the number of points under n-dimensional hyperboloids This formulation expanding the Mertens function suggests asymptotic bounds obtained by considering the Piltz divisor problem, which generalizes the Dirichlet divisor problem of computing asymptotic estimates for the summatory function of the divisor function. Other properties From we have Furthermore, from where is the totient summatory function. Calculation Neither of the methods mentioned previously leads to practical algorithms to calculate the Mertens function. Using sieve methods similar to those used in prime counting, the Mertens function has been computed for all integers up to an increasing range of x. The Mertens function for all integer values up to x may be computed in time. A combinatorial algorithm has been developed incrementally starting in 1870 by Ernst Meissel, Lehmer, Lagarias-Miller-Odlyzko, and Deléglise-Rivat that computes isolated values of M(x) in time; a further improvement by Harald Helfgott and Lola Thompson in 2021 improves this to , and an algorithm by Lagarias and Odlyzko based on integrals of the Riemann zeta function achieves a running time of . See for values of M(x) at powers of 10. Known upper bounds Ng notes that the Riemann hypothesis (RH) is equivalent to for some positive constant . Other upper bounds have been obtained by Maier, Montgomery, and Soundarajan assuming the RH including Known explicit upper bounds without assuming the RH are given by: It is possible to simplify the above expression into a less restrictive but illustrative form as: See also Perron's formula Liouville's function Notes References Deléglise, M. and Rivat, J. "Computing the Summation of the Möbius Function." Experiment. Math. 5, 291-295, 1996. Computing the summation of the Möbius function Nathan Ng, "The distribution of the summatory function of the Möbius function", Proc. London Math. Soc. (3) 89 (2004) 361-389. Arithmetic functions
Mertens function
[ "Mathematics" ]
1,278
[ "Arithmetic functions", "Number theory" ]
15,419,430
https://en.wikipedia.org/wiki/Stieglitz%20rearrangement
The Stieglitz rearrangement is a rearrangement reaction in organic chemistry which is named after the American chemist Julius Stieglitz (1867–1937) and was first investigated by him and Paul Nicholas Leech in 1913. It describes the 1,2-rearrangement of trityl amine derivatives to triaryl imines. It is comparable to a Beckmann rearrangement which also involves a substitution at a nitrogen atom through a carbon to nitrogen shift. As an example, triaryl hydroxylamines can undergo a Stieglitz rearrangement by dehydration and the shift of a phenyl group after activation with phosphorus pentachloride to yield the respective triaryl imine, a Schiff base. In general, the term "Stieglitz rearrangement" is used to describe a wide variety of rearrangement reactions of amines to imines. Although, it is generally associated with the rearrangement of triaryl hydroxylamines, that are well-reported in the academic literature, Stieglitz rearrangements can also occur on alkylated amine derivatives, haloamines and azides as well as other activated amine derivatives. General mechanism and relatedness to the Beckmann rearrangement The Stieglitz rearrangement's reaction mechanism and the products and starting materials involved make it closely related to the Beckmann rearrangement, which can be used for the synthesis of carboxamides. Both rearrangement reactions involve a carbon to nitrogen shift, usually after electrophilic activation of the leaving group on the nitrogen atom. The main difference in the starting materials, however, is their saturation degree. While a Stieglitz rearrangement takes place on saturated amine derivatives with a σ-single bond, the typical starting material for a Beckmann rearrangement is an oxime (a hydroxylimine) with a C=N-double bond. In a Beckmann rearrangement, the acid catalyzed carbon to nitrogen migration takes place on the oxime to yield a nitrilium ion intermediate. In principle, the first step of a Stieglitz rearrangement proceeds in an analogous way. However, after the generation of the positively charged iminium ion through the π-interaction between the nitrogen lone pair and the electron deficient carbon in the Stieglitz rearrangement, the pathways diverge. In the Stieglitz rearrangement, a charge-neutral state of the molecule can be achieved by dissociation of a proton. Alternatively, if the starting material did not possess any amino protons, the neutral state can be achieved with an external reducing agent, such as sodium borohydride. It reduces the iminium ion intermediate to the corresponding saturated amine. In the Beckmann rearrangement such a proton is also missing and the stabilization of the intermediate proceeds via a nucleophilic addition of a water molecule, dissociation of a proton and tautomerism from the imidic acid to the carboxamide. Variations Although the original Stieglitz reaction is best known for the rearrangement of trityl hydroxylamines, there are several variations which include good leaving groups as N-substituents (such as halogens and sulfonates). Different reagents are commonly applied, depending on the exact nature of the substrate. Stieglitz rearrangement of N-hydroxylated amines, N-alkoxylated amines and N-sulfonated amines Stieglitz rearrangement of N-hydroxylated amines For the rearrangement of trityl hydroxylamines, Lewis acids such as phosphorus pentachloride (PCl5), phosphorus pentoxide (P2O5) or boron trifluoride (BF3) can be used. They function as electrophilic activators for the hydroxyl group by increasing the quality of the leaving group. For example, when using PCl5 as a reagent, the trityl hydroxylamine is first transformed into the activated intermediate via a nucleophilic substitution. The generated intermediate can then undergo rearrangement by the migration of the phenyl group and dissociation of the phosphorus(V) species to form N-phenyl benzophenone imine. Stieglitz rearrangement of N-alkoxylated amines Additionally to N-hydroxy trityl amines, rearrangements in N-alkoxy trityl amines are also possible. However, those reactions are known for their intrinsically low yields. For example, N-benzyloxy substituted trityl amine can undergo a Stieglitz rearrangement in the presence of phosphorus pentachloride (160 °C, 40% yield) or with BF3 as a reagent (60 °C, 29% yield). In the latter case, BF3 acts as a Lewis acid in the electrophilic activation of the benzylic oxygen to allow for a nucleophilic attack on the adjacent nitrogen atom. Stieglitz rearrangement of N-sulfonated amines Stieglitz rearrangements also readily proceed with active sulfonates as a leaving group. N-sulfonated amines can be obtained from the respective hydroxylamines and suitable sulfonation reagents. For example, Herderin et al. synthesized their secondary hydroxylamine (starting material in the rearrangement shown below) by subjecting the respective hydroxylamine to tosyl chloride and sodium hydroxide in acetonitrile. The Stieglitz rearrangement is especially reactive in the case of bridged bicyclic N-sulfonated amines as starting materials, where mild conditions are sufficient for an efficient reaction to take place. For example, the rearrangement of the bicyclic N-tosylated amine proceeds readily in aqueous dioxane at room temperature. However, the respective imine is not formed in this case, presumably due to the strain that would thermodynamically disfavor such a structure, bearing a double bond at a bridgehead atom (Bredt's rule). Instead, the tosylate is nucleophilically added at the geminal position of the nitrogen via an attack on the iminium ion. Stieglitz rearrangement of azides Stieglitz rearrangements can also proceed on organic azides with molecular nitrogen as a good leaving group. Those reactions proceed comparably to steps of the Schmidt reaction, by which carboxylic acids can be transformed into amines through the addition of hydrazoic acid under acidic aqueous conditions. The Stieglitz rearrangement of azides generally profits from a protonic or thermal activation, which can also be combined. In both cases, molecular nitrogen is set free as a gas in an irreversible step. It has been suggested that the rearrangement, after the dissociation of the N2 molecule, proceeds over a reactive nitrene intermediate. These intermediates would be quite similar to those that have been proposed to be key intermediates in the rearrangement reactions named after Hofmann and Curtius, but have since been considered unlikely. When subjecting the azide to a Brønsted acid, the protonation of the azide activates the basal nitrogen and lowers the bond strength to the adjacent one, so that the dissociation and expulsion of molecular nitrogen is eased. After the rearrangement the proton can then dissociate from the iminium ion to yield the imine. An alternative way for the production of protonated organic azides is the nuclophilic addition of hydrazoic acid to a carbocations, which can then also undergo Stieglitz rearrangements. Stieglitz rearrangement of N-halogenated amines The Stieglitz rearrangement of N-halogenated amines can be observed for chlorine and bromine substituted amines, often in combination with an organic base, such as sodium methoxide. The need for a base is generally affiliated with the need for a deprotonation of the amine. However, there also have been reported examples of base-free Stieglitz rearrangements of N-halogenated amines. An example for that can be found in the total synthesis of (±)-lycopodine by Paul Grieco et al. There, a ring formation takes place by a rearrangement on a secondary haloamine by subjecting it to silver tetrafluoroborate. AgBF4 is known to act as a source of Ag+ ions that can facilitate the dissociation of halides from organic molecules, with the formation of the respective silver halide as a driving force. The desired product is then obtained by reduction with sodium cyanoborohydride, a mild reducing agent which is commonly employed in the reduction of imines to amines. Stieglitz rearrangement of lead tetraacetate-activated amines It was also observed, that the addition of lead tetraacetate can facilitate the Stieglitz rearrangement of amine derivatives. After the formation of the activated amine derivative intermediate by coordination to the lead center, the following rearrangement again proceeds via migration of the aromatic group under formation of a C–N bond, dissociation of lead and the deprotonation of the resulting iminium ion. See also Beckmann rearrangement Curtius rearrangement Dakin oxidation Schmidt reaction References Rearrangement reactions Name reactions
Stieglitz rearrangement
[ "Chemistry" ]
2,012
[ "Name reactions", "Rearrangement reactions", "Organic reactions" ]
15,419,609
https://en.wikipedia.org/wiki/AATK
Serine/threonine-protein kinase LMTK1 (also known as Apoptosis-associated tyrosine kinase) is an enzyme that in humans is encoded by the (AATK) gene. Structure and expression The gene was identified in 1998. It is located on chromosome 17 (17q25.3) and is expressed in the pancreas, kidney, brain and lungs. The protein is composed of 1,207 amino acids. Function The protein contains a tyrosine kinase domain at the N-terminal end and a proline-rich domain at the C-terminal end. Studies of the mouse homologue have indicated that it may be necessary for the induction of growth arrest and/or apoptosis of myeloid precursor cells. It may also have a role in inducing differentiation in neuronal cells. Its suppressive role on melanoma development has been reported recently. AATK is thought to indirectly inhibit the SPAK/WNK4 activation of the Na-K-Cl cotransporter. References Further reading External links Genes on human chromosome 17 Tyrosine kinase receptors
AATK
[ "Chemistry" ]
230
[ "Tyrosine kinase receptors", "Signal transduction" ]
18,335,428
https://en.wikipedia.org/wiki/Edwin%20J.%20Vandenberg
Edwin J. Vandenberg (September 13, 1918 – June 11, 2005) was a chemist at Hercules Inc. and a researcher at Arizona State University. Vandenberg is best known for his work at Hercules in the 1950s through the 1970s that included the independent discovery of isotactic polypropylene, the development of Ziegler-type catalysts and epoxide polymerization. The Vandenberg catalyst is named after him. This catalyst is an aluminoxane, prepared from an alkyl-aluminium and water, used as a catalyst in the manufacture of polyether elastomers. Early life and education Vandenberg was raised in Hawthorne, New Jersey. His father owned a grain and feed store. He graduated in 1935 as part of the first graduating class at Hawthorne High School. He attended the Stevens Institute of Technology, earning an ME degree in 1939 and a D Eng degree in 1965. Awards ACS Award in Polymer Chemistry (1981) ACS Award in Applied Polymer Science (1991) ACS ACS Rubber Division's Charles Goodyear Medal (1991) ACS Polymer Division's Herman F. Mark Award (1992) Society of Plastics Engineers International Award (1994). Priestley Medal (2003) References 20th-century American chemists 1918 births 2005 deaths People from Hawthorne, New Jersey Polymer scientists and engineers Hawthorne High School (New Jersey) alumni Scientists from New Jersey Arizona State University faculty Stevens Institute of Technology alumni
Edwin J. Vandenberg
[ "Chemistry", "Materials_science" ]
297
[ "Polymer scientists and engineers", "Physical chemists", "Polymer chemistry" ]
18,335,489
https://en.wikipedia.org/wiki/Alternated%20hypercubic%20honeycomb
In geometry, the alternated hypercube honeycomb (or demicubic honeycomb) is a dimensional infinite series of honeycombs, based on the hypercube honeycomb with an alternation operation. It is given a Schläfli symbol h{4,3...3,4} representing the regular form with half the vertices removed and containing the symmetry of Coxeter group for n ≥ 4. A lower symmetry form can be created by removing another mirror on an order-4 peak. The alternated hypercube facets become demihypercubes, and the deleted vertices create new orthoplex facets. The vertex figure for honeycombs of this family are rectified orthoplexes. These are also named as hδn for an (n-1)-dimensional honeycomb. References Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, pp. 122–123, 1973. (The lattice of hypercubes γn form the cubic honeycombs, δn+1) pp. 154–156: Partial truncation or alternation, represented by h prefix: h{4,4}={4,4}; h{4,3,4}={31,1,4}, h{4,3,3,4}={3,3,4,3} p. 296, Table II: Regular honeycombs, δn+1 Kaleidoscopes: Selected Writings of H. S. M. Coxeter, editied by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] Honeycombs (geometry) Uniform polytopes
Alternated hypercubic honeycomb
[ "Physics", "Chemistry", "Materials_science" ]
410
[ "Uniform polytopes", "Honeycombs (geometry)", "Tessellation", "Crystallography", "Symmetry" ]
18,343,996
https://en.wikipedia.org/wiki/Biotic%20index
A biotic index is a scale for showing the quality of an environment by indicating the types and abundances of organisms present in a representative sample of the environment. It is often used to assess the quality of water in marine and freshwater ecosystems. Numerous biotic indices have been created to account for the indicator species found in each region of study. The concept of the biotic index was developed by Cherie Stephens in an effort to provide a simple measurement of stream pollution and its effects on the biology of the stream. Technique To assign a biotic index value to a specific water site, the tester first collects macro invertebrates from portions of the sample area of the stream, river or lake, and separates them into groups of similar-looking organisms. More extensive testing can be done by looking for certain microscopic organisms. Then an identification key is used to help determine which category or group the organism belongs in and allows a numerical value be assigned to that organism. A worksheet is then used to calculate the final value or score of all the organisms found. Depending upon the worksheet's equations, the score determines the condition of the water quality. Usefulness of macro invertebrates Aquatic macro invertebrates have some general characteristics that make them very useful to assess stream health: They are abundant and found in water bodies throughout the world They are not extremely mobile They carry out part or all of their life cycle within the stream or river. Macro invertebrates limited mobility and extended presence in the water means that they are exposed on a continuous basis to water quality in that stream or river. In particular, many of these organisms breath dissolved oxygen that is in the water. They are also easier to see at the time of sampling. Not all the macro invertebrates found in samples are listed on the biotic index scoring sheets. This is because some do not rely on oxygen within the water for survival. Many are able to collect air from the atmosphere and hold a bubble alongside their body to use like a scuba diver uses a tank of oxygen. For those macro invertebrates that do rely on dissolved oxygen, some can only live in water that has a lot of oxygen. Others can live in water that doesn't have much oxygen dissolved in it at all. Generally, it is assumed that the more pollution there is in the water, the less oxygen. Classification The biotic index works by assigning different levels of tolerance to pollution to the different types of organisms. The types of macro invertebrates and other organisms found during sampling are broken into 4 groups: Pollution intolerant: These organisms are highly sensitive to pollution (Stonefly or Alderfly Larva) Semi-Pollution intolerant: These organisms are sensitive to pollution (Dragonfly Larva or Craw fish) Semi-Pollution tolerant: These organisms will be found in clean and slightly polluted waterways (Snails or Black Fly Larva) Pollution tolerant: These organisms will be found in polluted, as well as clean aquatic ecosystems (Leeches, Blood worms) Some index worksheets combine groups 2 and 3 together, giving only 3 groups. Each group has a number assigned to it and is multiplied by the number of organisms found in that group. This is why identifying the type of organism is important. Examples of Biotic Indices AZTI's Marine Biotic Index Extended Biotic Index Family Biotic Index Hilsenhoff Biotic Index Sludge Biotic Index Trent Biotic Index See also Bioindicator Biological integrity Biosurvey Index of biological integrity Indicator species Macroinvertebrate Community Index References William E. Sharpe, William G. Kimmel, and Anthony R. Buda (2002)."Biotic Index Guide." Pennsylvania State University. Water Action Volunteers - "Stream Monitoring." University of Wisconsin Extension office. Aquatic ecology Environmental science Water pollution Environmental indices
Biotic index
[ "Chemistry", "Biology", "Environmental_science" ]
763
[ "Aquatic ecology", "Ecosystems", "nan", "Water pollution" ]
18,344,400
https://en.wikipedia.org/wiki/Methanation
Methanation is the conversion of carbon monoxide and carbon dioxide (COx) to methane (CH4) through hydrogenation. The methanation reactions of COx were first discovered by Sabatier and Senderens in 1902. COx methanation has many practical applications. It is a means of carbon oxide removal from process gases and is also being discussed as an alternative to PROX in fuel processors for mobile fuel cell applications. Methanation as a means of producing synthetic natural gas has been considered since the 1970s. More recently it has been considered as a way to store energy produced from solar or wind power using power-to-gas systems in conjunction with existing natural gas storage. Chemical reactions The following reactions describe the methanation of carbon monoxide and carbon dioxide respectively: CO + 3H2 -> CH4 + H2O -206 kJ/mol CO2 + 4H2 -> CH4 + 2 H2O -164 kJ/mol The methanation reactions are classified as exothermic and their energy of formations are listed. There is disagreement on whether the CO2 methanation occurs by first associatively adsorbing an adatom hydrogen and forming oxygen intermediates before hydrogenation or dissociating and forming a carbonyl before being hydrogenated. CO is believed to be methanated through a dissociative mechanism where the carbon-oxygen bond is broken before hydrogenation with an associative mechanism only being observed at high H2 concentrations. Methanation reaction over different carried metal catalysts including Ni, Ru and Rh has been widely investigated for the production of CH4 from syngas and other power to gas initiatives. Nickel is the most widely used catalyst due to its high selectivity and low cost. Industrial applications Creation of synthetic natural gas Methanation is an important step in the creation of synthetic or substitute natural gas (SNG). Coal or wood undergo gasification which creates a producer gas that must undergo methanation in order to produce a usable gas that just needs to undergo a final purification step. The first commercial synthetic gas plant opened in 1984 and is the Great Plains Synfuel plant in Beulah, North Dakota. It is still operational and produces 1500 MW worth of SNG using coal as the carbon source. In the years since its opening, other commercial facilities have been opened using other carbon sources such as wood chips. In France, the AFUL Chantrerie, located in Nantes, started in November 2017 the demonstrator MINERVE. This methanation unit of 14 Nm3/day was carried out by Top Industrie, with the support of Leaf. This installation is used to feed a CNG station and to inject methane into the natural gas boiler. Ammonia synthesis In ammonia production CO and CO2 are considered poisons to most commonly used catalysts. Methanation catalysts are added after several hydrogen producing steps to prevent carbon oxide buildup in the ammonia synthesis loop as methane does not have similar adverse effects on ammonia synthesis rates. See also Biological methanation Biomethane Hydrogen economy Renewable natural gas Sabatier reaction References Organic reactions Methane Chemical processes Hydrogenation
Methanation
[ "Chemistry" ]
648
[ "Methane", "Organic reactions", "Chemical processes", "nan", "Hydrogenation", "Chemical process engineering", "Greenhouse gases" ]
18,346,799
https://en.wikipedia.org/wiki/Folded%20optics
Folded optics is an optical system in which the beam is bent in a way to make the optical path much longer than the size of the system. This allows the resulting focal length of the objective to be greater than the physical length of the optical device. Prismatic binoculars are a well-known example. An early conventional film camera (35 mm) was designed by Tessina that used the concept of folded optics. Fold mirrors are used to direct infrared light within the optical path of the James Webb Space Telescope. These optical fold mirrors are not to be confused with the observatory's deployable primary mirrors, which are folded inward to fit the telescope within the launch vehicle's payload fairing; when deployed, these segments are part of the three-mirror anastigmat design's primary element and don't serve as fold mirrors in the optical sense. See also Periscope lens also called "folded lens" References External links — Origami Lens Optics
Folded optics
[ "Physics", "Chemistry" ]
195
[ "Applied and interdisciplinary physics", "Optics", " molecular", "Atomic", " and optical physics" ]
18,347,014
https://en.wikipedia.org/wiki/Vendomyces
Vendomyces is a genus of purported Ediacaran fungi, assigned to the Chytridiomycetes. However, it is unlikely that these fossils truly represent fungi. See also List of Ediacaran genera References Prehistoric fungi Prehistoric life genera
Vendomyces
[ "Biology" ]
52
[ "Fungus stubs", "Fungi", "Prehistoric fungi" ]
4,603,613
https://en.wikipedia.org/wiki/Inhomogeneous%20electromagnetic%20wave%20equation
In electromagnetism and applications, an inhomogeneous electromagnetic wave equation, or nonhomogeneous electromagnetic wave equation, is one of a set of wave equations describing the propagation of electromagnetic waves generated by nonzero source charges and currents. The source terms in the wave equations make the partial differential equations inhomogeneous, if the source terms are zero the equations reduce to the homogeneous electromagnetic wave equations, which follow from Maxwell's equations. Maxwell's equations For reference, Maxwell's equations are summarized below in SI units and Gaussian units. They govern the electric field E and magnetic field B due to a source charge density ρ and current density J: where ε0 is the vacuum permittivity and μ0 is the vacuum permeability. Throughout, the relation is also used. SI units E and B fields Maxwell's equations can directly give inhomogeneous wave equations for the electric field E and magnetic field B. Substituting Gauss's law for electricity and Ampère's law into the curl of Faraday's law of induction, and using the curl of the curl identity (The last term in the right side is the vector Laplacian, not Laplacian applied on scalar functions.) gives the wave equation for the electric field E: Similarly substituting Gauss's law for magnetism into the curl of Ampère's circuital law (with Maxwell's additional time-dependent term), and using the curl of the curl identity, gives the wave equation for the magnetic field B: The left hand sides of each equation correspond to wave motion (the D'Alembert operator acting on the fields), while the right hand sides are the wave sources. The equations imply that EM waves are generated if there are gradients in charge density ρ, circulations in current density J, time-varying current density, or any mixture of these. The above equation for the electric field can be transformed to a homogeneous wave equation with a so called damping term if we study a problem where Ohm's law in differential form hold (we assume that is we dealing with homogeneous conductors that have relative permeability and permittivity around 1), and by substituting from the differential form of Gauss law and The final homogeneous equation with only the unknown electric field and its partial derivatives is The solutions for the above homogeneous equation for the electric field are infinitely many and we must specify boundary conditions for the electric field in order to find specific solutions These forms of the wave equations are not often used in practice, as the source terms are inconveniently complicated. A simpler formulation more commonly encountered in the literature and used in theory use the electromagnetic potential formulation, presented next. A and φ potential fields Introducing the electric potential φ (a scalar potential) and the magnetic potential A (a vector potential) defined from the E and B fields by: The four Maxwell's equations in a vacuum with charge ρ and current J sources reduce to two equations, Gauss's law for electricity is: where here is the Laplacian applied on scalar functions, and the Ampère-Maxwell law is: where here is the vector Laplacian applied on vector fields. The source terms are now much simpler, but the wave terms are less obvious. Since the potentials are not unique, but have gauge freedom, these equations can be simplified by gauge fixing. A common choice is the Lorenz gauge condition: Then the nonhomogeneous wave equations become uncoupled and symmetric in the potentials: For reference, in cgs units these equations are with the Lorenz gauge condition Covariant form of the inhomogeneous wave equation The relativistic Maxwell's equations can be written in covariant form as where is the d'Alembert operator, is the four-current, is the 4-gradient, and is the electromagnetic four-potential with the Lorenz gauge condition Curved spacetime The electromagnetic wave equation is modified in two ways in curved spacetime, the derivative is replaced with the covariant derivative and a new term that depends on the curvature appears (SI units). where is the Ricci curvature tensor. Here the semicolon indicates covariant differentiation. To obtain the equation in cgs units, replace the permeability with 4π/c. The Lorenz gauge condition in curved spacetime is assumed: Solutions to the inhomogeneous electromagnetic wave equation In the case that there are no boundaries surrounding the sources, the solutions (cgs units) of the nonhomogeneous wave equations are and where is a Dirac delta function. These solutions are known as the retarded Lorenz gauge potentials. They represent a superposition of spherical light waves traveling outward from the sources of the waves, from the present into the future. There are also advanced solutions (cgs units) and These represent a superposition of spherical waves travelling from the future into the present. See also Wave equation Sinusoidal plane-wave solutions of the electromagnetic wave equation Larmor formula Covariant formulation of classical electromagnetism Maxwell's equations in curved spacetime Abraham–Lorentz force Green's function References Electromagnetics Journal articles James Clerk Maxwell, "A Dynamical Theory of the Electromagnetic Field", Philosophical Transactions of the Royal Society of London 155, 459-512 (1865). (This article accompanied a December 8, 1864 presentation by Maxwell to the Royal Society.) Undergraduate-level textbooks . Graduate-level textbooks Robert Wald, Advanced Classical Electromagnetism, (2022). Landau, L. D., The Classical Theory of Fields (Course of Theoretical Physics: Volume 2), (Butterworth-Heinemann: Oxford, 1987). . (Provides a treatment of Maxwell's equations in terms of differential forms.) Vector Calculus & Further Topics Arfken et al., Mathematical Methods for Physicists, 6th edition (2005). Chapters 1 & 2 cover vector calculus and tensor calculus respectively. David Tong, Lectures on Vector Calculus. Freely available lecture notes that can be found here: http://www.damtp.cam.ac.uk/user/tong/vc.html Partial differential equations Special relativity Electromagnetism
Inhomogeneous electromagnetic wave equation
[ "Physics" ]
1,282
[ "Electromagnetism", "Physical phenomena", "Special relativity", "Fundamental interactions", "Theory of relativity" ]
4,603,927
https://en.wikipedia.org/wiki/Journal%20of%20Virology
The Journal of Virology is a biweekly peer-reviewed scientific journal that covers research concerning all aspects of virology. It was established in 1967 and is published by the American Society for Microbiology. Research papers are available free online 6 months after print publication. The current editors-in-chief are Felicia Goodrum (University of Arizona) and Stacey Schultz-Cherry (St. Jude Children's Research Hospital). Past editors-in-chief include Rozanne M. Sandri-Goldin (University of California, Irvine, California) (2012-2022), Lynn W. Enquist (2002–2012), Thomas Shenk (1994–2002), and Arnold J. Levine (1984–1994). Abstracting and indexing The journal is abstracted and indexed in AGRICOLA, Biological Abstracts, BIOSIS Previews, Chemical Abstracts, Current Contents, EMBASE, MEDLINE/Index Medicus/PubMed, and the Science Citation Index Expanded. Its 2023 impact factor was 4.0 References External links Delayed open access journals Academic journals established in 1967 English-language journals Biweekly journals Academic journals published by learned and professional societies Virology journals American Society for Microbiology academic journals
Journal of Virology
[ "Biology" ]
252
[ "Virus stubs", "Viruses" ]
4,604,270
https://en.wikipedia.org/wiki/Chemistry%20Development%20Kit
The Chemistry Development Kit (CDK) is computer software, a library in the programming language Java, for chemoinformatics and bioinformatics. It is available for Windows, Linux, Unix, and macOS. It is free and open-source software distributed under the GNU Lesser General Public License (LGPL) 2.0. History The CDK was created by Christoph Steinbeck, Egon Willighagen and Dan Gezelter, then developers of Jmol and JChemPaint, to provide a common code base, on 27–29 September 2000 at the University of Notre Dame. The first source code release was made on 11 May 2011. Since then more than 100 people have contributed to the project, leading to a rich set of functions, as given below. Between 2004 and 2007, CDK News was the project's newsletter of which all articles are available from a public archive. Due to an unsteady rate of contributions, the newsletter was put on hold. Later, unit testing, code quality checking, and Javadoc validation was introduced. Rajarshi Guha developed a nightly build system, named Nightly, which is still operating at Uppsala University. In 2012, the project became a support of the InChI Trust, to encourage continued development. The library uses JNI-InChI to generate International Chemical Identifiers (InChIs). In April 2013, John Mayfield (né May) joined the ranks of release managers of the CDK, to handle the development branch. Library The CDK is a library, instead of a user program. However, it has been integrated into various environments to make its functions available. CDK is currently used in several applications, including the programming language R, CDK-Taverna (a Taverna workbench plugin), Bioclipse, PaDEL, and Cinfony. Also, CDK extensions exist for Konstanz Information Miner (KNIME) and for Excel, called LICSS (). In 2008, bits of GPL-licensed code were removed from the library. While those code bits were independent from the main CDK library, and no copylefting was involved, to reduce confusions among users, the ChemoJava project was instantiated. Major features Chemoinformatics 2D molecule editor and generator 3D geometry generation ring finding substructure search using exact structures and Smiles arbitrary target specification (SMARTS) like query language QSAR descriptor calculation fingerprint calculation, including the ECFP and FCFP fingerprints force field calculations many input-output chemical file formats, including simplified molecular-input line-entry system (SMILES), Chemical Markup Language (CML), and chemical table file (MDL) structure generators International Chemical Identifier support, via JNI-InChI Bioinformatics protein active site detection cognate ligand detection metabolite identification pathway databases 2D and 3D protein descriptors General Python wrapper; see Cinfony Ruby wrapper active user community See also Bioclipse – an Eclipse–RCP based chemo-bioinformatics workbench Blue Obelisk JChemPaint – Java 2D molecule editor, applet and application Jmol – Java 3D renderer, applet and application JOELib – Java version of Open Babel, OELib List of free and open-source software packages List of software for molecular mechanics modeling References External links CDK Wiki – the community wiki Planet CDK - a blog planet CDK Depict OpenScience.org Bioinformatics software Chemistry software for Linux Computational chemistry software Free chemistry software Free software programmed in Java (programming language)
Chemistry Development Kit
[ "Chemistry", "Biology" ]
754
[ "Computational chemistry software", "Free chemistry software", "Chemistry software", "Bioinformatics software", "Bioinformatics", "Computational chemistry", "Chemistry software for Linux" ]
4,605,404
https://en.wikipedia.org/wiki/Forest%E2%80%93savanna%20mosaic
Forest–savanna mosaic is a transitory ecotone between the tropical moist broadleaf forests of Equatorial Africa and the drier savannas and open woodlands to the north and south of the forest belt. The forest–savanna mosaic consists of drier forests, often gallery forest, interspersed with savannas and open grasslands. Flora This band of marginal savannas bordering the dense dry forest extends from the Atlantic coast of Guinea to South Sudan, corresponding to a climatic zone with relatively high rainfall, between 800 and 1400 mm. It is an often unresolvable, complex of secondary forests and mixed savannas, resulting from intense erosion of primary forests by fire and clearing. The vegetation ceases to have an evergreen character, and becomes more and more seasonal. A species of acacia, Faidherbia albida, marks, with its geographical distribution, the Guinean area of the savannas together with the area of the forest-savanna, arboreal and shrub, and a good part of the dense dry forest with prevalently deciduous trees. Ecoregions The World Wildlife Fund recognizes several distinct forest-savanna mosaic ecoregions: The Guinean forest–savanna mosaic is the transition between the Upper and Lower Guinean forests of West Africa and the West Sudanian savanna. The ecoregion extends from Senegal on the west to the Cameroon Highlands on the east. The Dahomey Gap is a region of Togo and Benin where the forest-savanna mosaic extends to the coast, separating the Upper and Lower Guinean forests. The Northern Congolian forest–savanna mosaic lies between the Congolian forests of Central Africa and the East Sudanian savanna. It extends from the Cameroon Highlands in the west to the East African Rift in the east, encompassing portions of Cameroon, Central African Republic, Democratic Republic of the Congo, and southwestern Sudan. The Western Congolian forest–savanna mosaic lies southwest of the Congolian forest belt, covering portions of southern Gabon, southern Republic of the Congo, western Democratic Republic of the Congo, and northwestern Angola. The Southern Congolian forest–savanna mosaic lies east of the Western Congolian forest savanna mosaic in the Democratic Republic of the Congo, separating the Congolian forests to the north from the Miombo woodlands to the south. The Victoria Basin forest–savanna mosaic lies to the east and north of Lake Victoria in East Africa, and is surrounded on the east and west by the montane forests of the East African Rift's Western and Eastern arcs. The ecoregion covers much of Uganda, extending into portions of eastern Kenya, northwestern Tanzania, and eastern Rwanda. References Tropical and subtropical grasslands, savannas, and shrublands Tropical and subtropical moist broadleaf forests Forests Grasslands Afrotropical ecoregions
Forest–savanna mosaic
[ "Biology" ]
540
[ "Forests", "Grasslands", "Ecosystems" ]
4,606,244
https://en.wikipedia.org/wiki/Halonium%20ion
A halonium ion is any onium ion containing a halogen atom carrying a positive charge. This cation has the general structure where X is any halogen and no restrictions on R, this structure can be cyclic or an open chain molecular structure. Halonium ions formed from fluorine, chlorine, bromine, and iodine are called fluoronium, chloronium, bromonium, and iodonium, respectively. The 3-membered cyclic variety commonly proposed as intermediates in electrophilic halogenation may be called haliranium ions, using the Hantzsch-Widman nomenclature system. Structure The simplest halonium ions are of the structure (X = F, Cl, Br, I). Many halonium ions have a three-atom cyclic structure, similar to that of an epoxide, resulting from the formal addition of a halogenium ion to a C=C double bond, as when a halogen is added to an alkene. The formation of 5-membered halonium ions (e.g., chlorolanium, bromolanium ions) via neighboring group participation is also well studied. Diaryliodonium ions () are generally stable, isolable salts which exhibit a T-shaped geometry with the aryl groups at ~90 degrees apart; for more details, see hypervalent iodine. The tendency to form bridging halonium ions is in the order I > Br > Cl > F. Whereas iodine and bromine readily form bridged iodonium and bromonium ions, fluoronium ions have only recently been characterized in designed systems that force close encounter of the fluorine lone pair and a carbocationic center. In practice, structurally, there is a continuum between a symmetrically bridged halonium, to an unsymmetrical halonium with a long weak bond to one of the carbon centers, to a true β-halocarbocation with no halonium character. The equilibrium structure depends on the ability of the carbon atoms and the halogen to accommodate positive charge. Thus, a bromonium ion that bridges a primary and tertiary carbon will often exhibit a skewed structure, with a weak bond to the tertiary center (with significant carbocation character) and stronger bond to the primary carbon. This is due to the increased stability of tertiary carbons to stabilize positive charge. In the more extreme case, if the tertiary center is doubly benzylic for instance, then the open form may be favored. Similarly, switching from bromine to chlorine also weakens bridging character, due to the higher electronegativity of chlorine and lower propensity to share electron density compared to bromine. Reactivity These ions are usually only short-lived reaction intermediates; they are very reactive, owing to high ring strain in the three-membered ring and the positive charge on the halogen; this positive charge makes them great electrophiles. In almost all cases, the halonium ion is attacked by a nucleophile within a very short time. Even a weak nucleophile, such as water will attack the halonium ion; this is how halohydrins can be made. On occasion, a halonium atom will rearrange to a carbocation. This usually occurs only when that carbocation is an allylic or a benzylic carbocation. History Halonium ions were first postulated in 1937 by Roberts and Kimball to account for observed anti diastereoselectivity in halogen addition reactions to alkenes. They correctly argued that if the initial reaction intermediate in bromination is the open-chain X–C–C+ species, rotation around the C–C single bond would be possible leading to a mixture of equal amounts of dihalogen syn isomer and anti isomer, which is not the case. They also asserted that a positively charged halogen atom is isoelectronic with oxygen and that carbon and bromine have comparable ionization potentials. For certain aryl substituted alkenes, the anti stereospecificity is diminished or lost, as a result of weakened or absent halonium character in the cationic intermediate. In 1970 George A. Olah succeeded in preparing and isolating halonium salts by adding a methyl halide such as methyl bromide or methyl chloride in sulfur dioxide at −78 °C to a complex of antimony pentafluoride and tetrafluoromethane in sulfur dioxide. After evaporation of sulfur dioxide this procedure left crystals of , stable at room temperature but not to moisture. A fluoronium ion was recently characterized in solution phase (dissolved in sulfur dioxide or sulfuryl chloride fluoride) at low temperature. Cyclic and acyclic chloronium, bromonium, and iodonium ions have been structurally characterised by X-ray crystallography, such as the bi(adamantylidene)-derived bromonium cation shown below. Compounds containing trivalent or tetravalent halonium ions do not exist but for some hypothetical compounds stability has been computationally tested. References Cations Organohalides
Halonium ion
[ "Physics", "Chemistry" ]
1,065
[ "Matter", "Functional groups", "Organic compounds", "Cations", "Organohalides", "Ions" ]
4,608,787
https://en.wikipedia.org/wiki/Liquid%20metal%20cooled%20reactor
A liquid metal cooled nuclear reactor, or LMR is a type of nuclear reactor where the primary coolant is a liquid metal. Liquid metal cooled reactors were first adapted for breeder reactor power generation. They have also been used to power nuclear submarines. Due to their high thermal conductivity, metal coolants remove heat effectively, enabling high power density. This makes them attractive in situations where size and weight are at a premium, like on ships and submarines. Most water-based reactor designs are highly pressurized to raise the boiling point (thereby improving cooling capabilities), which presents safety and maintenance issues that liquid metal designs lack. Additionally, the high temperature of the liquid metal can be used to drive power conversion cycles with high thermodynamic efficiency. This makes them attractive for improving power output, cost effectiveness, and fuel efficiency in nuclear power plants. Liquid metals, being electrically highly conductive, can be moved by electromagnetic pumps. Disadvantages include difficulties associated with inspection and repair of a reactor immersed in opaque molten metal, and depending on the choice of metal, fire hazard risk (for alkali metals), corrosion and/or production of radioactive activation products may be an issue. Applications Liquid metal coolant has been applied to both thermal- and fast-neutron reactors. To date, most fast neutron reactors have been liquid metal cooled and so are called liquid metal cooled fast reactors (LMFRs). When configured as a breeder reactor (e.g. with a breeding blanket), such reactors are called liquid metal fast breeder reactors (LMFBRs). Coolant properties Suitable liquid metal coolants must have a low neutron capture cross section, must not cause excessive corrosion of the structural materials, and must have melting and boiling points that are suitable for the reactor's operating temperature. Liquid metals generally have high boiling points, reducing the probability that the coolant can boil, which could lead to a loss-of-coolant accident. Low vapor pressure enables operation at near-ambient pressure, further dramatically reducing the probability of an accident. Some designs immerse the entire core and heat exchangers into a pool of coolant, virtually eliminating the risk that inner-loop cooling will be lost. Mercury Clementine was the first liquid metal cooled nuclear reactor and used mercury coolant, thought to be the obvious choice since it is liquid at room temperature. However, because of disadvantages including high toxicity, high vapor pressure even at room temperature, low boiling point producing noxious fumes when heated, relatively low thermal conductivity, and a high neutron cross-section, it has fallen out of favor. Sodium and NaK Sodium and NaK (a eutectic sodium-potassium alloy) do not corrode steel to any significant degree and are compatible with many nuclear fuels, allowing for a wide choice of structural materials. NaK was used as the coolant in the first breeder reactor prototype, the Experimental Breeder Reactor-1, in 1951. Sodium and NaK do, however, ignite spontaneously on contact with air and react violently with water, producing hydrogen gas. This was the case at the Monju Nuclear Power Plant in a 1995 accident and fire. Sodium is also the coolant used in the Russian BN reactor series and the Chinese CFR series in commercial operation today. Neutron activation of sodium also causes these liquids to become intensely radioactive during operation, though the half-life is short and therefore their radioactivity does not pose an additional disposal concern. There are two proposals for a sodium cooled Gen IV LMFR, one based on oxide fuel, the other on the metal-fueled integral fast reactor. Lead Lead has excellent neutron properties (reflection, low absorption) and is a very potent radiation shield against gamma rays. The high boiling point of lead provides safety advantages as it can cool the reactor efficiently even if it reaches several hundred degrees Celsius above normal operating conditions. However, because lead has a high melting point and a high vapor pressure, it is tricky to refuel and service a lead cooled reactor. The melting point can be lowered by alloying the lead with bismuth, but lead-bismuth eutectic is highly corrosive to most metals used for structural materials. Lead-bismuth eutectic Lead-bismuth eutectic allows operation at lower temperatures while preventing the freezing of the metal coolant in a lower temperature range (eutectic point: . Beside its highly corrosive character, its main disadvantage is the formation by neutron activation of (and subsequent beta decay) of (T = 138.38 day), a volatile alpha-emitter highly radiotoxic (the highest known radiotoxicity, above that of plutonium). Tin Although tin today is not used as a coolant for working reactors because it builds a crust, it can be a useful additional or replacement coolant at nuclear disasters or loss-of-coolant accidents. Further advantages of tin are the high boiling point and the ability to build a crust even over liquid tin helps to cover poisonous leaks and keeps the coolant in and at the reactor. It has been tested by Ukrainian researchers and was proposed to convert the boiling water reactors at the Fukushima Daiichi nuclear disaster into liquid tin cooled reactors. Propulsion Submarines The Soviet and all seven s used reactors cooled by lead-bismuth eutectic and moderated with beryllium as their propulsion plants. (VT-1 reactors in K-27; BM-40A and OK-550 reactors in others). The second nuclear submarine, was the only U.S. submarine to have a sodium-cooled, beryllium-moderated nuclear power plant. It was commissioned in 1957, but it had leaks in its superheaters, which were bypassed. In order to standardize the reactors in the fleet, the submarine's sodium-cooled, beryllium-moderated reactor was removed starting in 1958 and replaced with a pressurized water reactor. Nuclear aircraft Liquid metal cooled reactors were studied by Pratt & Whitney for use in nuclear aircraft as part of the Aircraft Nuclear Propulsion program. Power generation The Sodium Reactor Experiment was an experimental sodium-cooled graphite-moderated nuclear reactor (A Sodium-Graphite Reactor, or SGR) sited in a section of the Santa Susana Field Laboratory then operated by the Atomics International division of North American Aviation. In July 1959, the Sodium Reactor Experiment suffered a serious incident involving the partial melting of 13 of 43 fuel elements and a significant release of radioactive gases. The reactor was repaired and returned to service in September 1960 and ended operation in 1964. The reactor produced a total of 37 GW-h of electricity. SRE was the prototype for the Hallam Nuclear Power Facility, another sodium-cooled graphite-moderated SGR that operated in Nebraska. Fermi 1 in Monroe County, Michigan was an experimental, liquid sodium-cooled fast breeder reactor that operated from 1963 to 1972. It suffered a partial nuclear meltdown in 1963 and was decommissioned in 1975. At Dounreay in Caithness, in the far north of Scotland, the United Kingdom Atomic Energy Authority (UKAEA) operated the Dounreay Fast Reactor (DFR), using NaK as a coolant, from 1959 to 1977, exporting 600 GW-h of electricity to the grid over that period. It was succeeded at the same site by PFR, the Prototype Fast Reactor, which operated from 1974 to 1994 and used liquid sodium as its coolant. The Soviet BN-600 is sodium cooled. The BN-350 and U.S. EBR-II nuclear power plants were sodium cooled. EBR-I used a liquid metal alloy, NaK, for cooling. NaK is liquid at room temperature. Liquid metal cooling is also used in most fast neutron reactors including fast breeder reactors such as the Integral Fast Reactor. Many Generation IV reactors studied are liquid metal cooled: Sodium-cooled fast reactor (SFR) Lead-cooled fast reactor References Nuclear physics
Liquid metal cooled reactor
[ "Physics" ]
1,622
[ "Nuclear physics" ]
4,609,097
https://en.wikipedia.org/wiki/Post%20riders
Post riders or postriders describes a horse and rider postal delivery system that existed at various times and various places throughout history. The term is usually reserved for instances where a network of regularly scheduled service was provided under some degree of central management by the State or State licensed monopoly. These networks included predefined routes known as post roads complete with distance markers and waypoints. Unlike other forms of mounted courier, post riders collected and delivered mail over the course of their route, meeting with other riders at scheduled times and scheduled places to exchange forwarded items. In this way correspondence could pass reliably from rider to rider and cover a considerable distance in a reasonable time at reduced cost. While some integration with local postal services in larger centers occurred, by and large the post riders were a separate entity under separate management and tariff structure. History While relay rider networks were a common feature of every ancient empire, these were primarily for the exclusive use of the government or military and carried no civil correspondence as a rule. What differentiated postriders from these earlier efforts was that they were open to the public and created for the public convenience. The other distinguishing feature was that post riders operated on a schedule. The Hanseatic League While in the case of the post riders the shift from royal messenger to public courier must be seen as evolutionary, there were some notable early examples. The Hanseatic League had a regular mounted service as early as the year 1274 between the principal towns of the League as well as the fortified castles which protected the merchants in their commerce. Business and diplomatic messages were handled by the service equally. Well organised, it had its own post-horses that were used exclusively for the letter service; at each stop the time of the receipt and dispatch of the post-bag was written upon the face of each letter. The Holy Roman Empire On behalf of the far-flung Habsburg dynasty, of The Holy Roman Empire, Franz von Taxis set up a courier network that grew to cover all of Western Europe by the middle of the 16th century. Permanent post stations were built about a day's journey apart. Over time, these stations became important economic centers: They were meeting places, inns and public rooms, trade centers and stables. Post stations became important centers for the development of villages and cities. While not solely used for government traffic, private use of this service required a licence from the State. Elizabethan England It was in England, during the Elizabethan period where the post rider truly began to serve all comers almost in spite of the declared restrictive policy of the Government as regards to their public use. Merchants and farmers, constables and innkeepers, soldiers and sailors were using the postal system, attesting to the remarkable standard of literacy of the ordinary people. A huge number of horses were involved in this operation as each stage was only about 10 miles, after which a fresh horse was used. In most cases the horses were kept at inns or hostelries. There was also for the first time, a system of post roads although the original usage referred more to the fixed routes of the service than the thoroughfares themselves. The American Colonies In the American Colonies postal routes were farmed out to contractors who promised to deliver the mail within a certain area for a set length of time. When mail was first delivered to a town, the townspeople would have to come to a central location, usually the general store, to pick up the mail. It was in the New World that the post riders would provide the longest and most complete service before being eliminated by other forms of transport. In 1780 the system consisted of only of a Postmaster General, a Secretary/Comptroller, three surveyors, one Inspector of Dead Letters, 26 post riders, 75 post offices and about 2,000 miles of post roads. Postmasters and post riders were exempt from military duties so as not to interrupt service. These post-riders were allowed the exclusive privilege of carrying letters, papers and packages on their respective routes, and any person who infringed upon their rights was subject to a fine. The post riders had to make good time, specified clearly, and milestones came into their own to measure progress. Significant early legislation that affected the post riders included an act of the United States Congress in 1838 that declared that all railroads in the United States were post roads. The act had a twofold effect: it increased the use of railroads to transmit the mails and limited the use of post riders to postal districts that were not on railway routes. In those areas of the country that were not on railroad routes, mail was carried by contractors, and the transportation of mail by any means other than by water or railroad was called a star route service. Operations A typical schedule taken from The Virginia Gazette, March 21, 1766 shows an example of the type of service the post riders provided: "THE publick is hereby desired to take notice that the Hampton rider will arrive in Williamsburg every Tuesday and Saturday at noon, come through York town, and return to Hampton the same evening. That the Hanover rider will set off from Hanover town early every Monday and Friday, to meet the Hampton rider in Williamsburg at noon every Tuesday and Saturday, and return to Hanover town on the Wednesday and Sunday nights. That the James river rider will set off from Hanover town, by the way of Richmond and Warwick, to Petersburg and Blandford, and return to Hanover town on Tuesday and Saturday nights. That the Fredericksburg rider will set out every Monday morning from Hanover town, by the way of Todd's bridge, and arrive at Fredericksburg on Tuesday night, where he exchanges mails with the Northern rider, and returns to Hanover town every Thursday. That a rider will set off from Fredericksburg every Wednesday, to be at Hobb's Hole that night, where he exchanges mails with a rider from Urbanna, and returns to Fredericksburg on Thursday night. And that on Friday morning a rider proceeds with the mail to the Northward. John Dixon, D. Postmaster." See also Boston Post Road Bicycle messenger Mail delivery by animal Pony Express Portmanteau (mail) Sources Postal systems
Post riders
[ "Technology" ]
1,236
[ "Transport systems", "Postal systems" ]
1,190,904
https://en.wikipedia.org/wiki/Hydraulic%20press
A hydraulic press is a machine press using a hydraulic cylinder to generate a compressive force. It uses the hydraulic equivalent of a mechanical lever, and was also known as a Bramah press after the inventor, Joseph Bramah, of England. He invented and was issued a patent on this press in 1795. As Bramah (who is also known for his development of the flush toilet) installed toilets, he studied the existing literature on the motion of fluids and put this knowledge into the development of the press. Main principle The hydraulic press depends on Pascal's principle. The pressure throughout a closed system is constant. One part of the system is a piston acting as a pump, with a modest mechanical force acting on a small cross-sectional area; the other part is a piston with a larger area which generates a correspondingly large mechanical force. Only small-diameter tubing (which more easily resists pressure) is needed if the pump is separated from the press cylinder. Application Hydraulic presses are commonly used for assembly and disassembly of tightly-fitting components. In manufacturing, they are used for forging, clinching, molding, blanking, punching, deep drawing, and metal forming operations. Hydraulic presses are also used for stretch forming, rubber pad forming, and powder compacting. The hydraulic press is advantageous in manufacturing, it gives the ability to create more intricate shapes and can be economical with materials. A hydraulic press will take up less space compared to a mechanical press of the same capability. In geology a tungsten carbide coated hydraulic press is used in the rock crushing stage of preparing samples for geochemical analyses in topics such as understanding the origins of volcanism. In popular culture The room featured in Fermat's Room has a design similar to that of a hydraulic press. Boris Artzybasheff also created a drawing of a hydraulic press, in which the press was created out of the shape of a robot. In 2015, the Hydraulic Press Channel, a YouTube channel dedicated to crushing objects with a hydraulic press, was created by bengie , a factory owner from Tampere, Finland. The Hydraulic Press Channel has since grown to over 9 million subscribers on YouTube. There are numerous other YouTube channels that publish videos involving hydraulic presses that are tasked with crushing many different items, such as bowling balls, soda cans, plastic toys, and metal tools. A hydraulic press features prominently in the Sherlock Holmes story "The Adventure of the Engineer's Thumb". See also Universal testing machine References External links Hydraulic machinery Press tools Machine tools Metalworking tools
Hydraulic press
[ "Physics", "Engineering" ]
521
[ "Machine tools", "Physical systems", "Hydraulics", "Hydraulic machinery", "Industrial machinery" ]
1,190,959
https://en.wikipedia.org/wiki/Petr%20Ho%C5%99ava%20%28physicist%29
Petr Hořava (born 1963 in Prostějov) is a Czech string theorist. He is a professor of physics in the Berkeley Center for Theoretical Physics at the University of California, Berkeley, where he teaches courses on quantum field theory and string theory. Hořava is a member of the theory group at Lawrence Berkeley National Laboratory. Work Hořava is known for his articles written with Edward Witten about the Hořava-Witten domain walls in M-theory. These articles demonstrated that the ten-dimensional heterotic string theory could be produced from 11-dimensional M-theory by making one of the dimensions have edges (the domain walls). This discovery provided crucial support for the conjecture that all string theories could arise as limits of a single higher-dimensional theory. Hořava is less well known for his discovery of D-branes, usually attributed to Dai, Leigh and Polchinski, who discovered them independently, also in 1989. In 2009, Hořava proposed a theory of gravity that separates space from time at high energy while matching some predictions of general relativity at lower energies. See also Hořava–Lifshitz gravity Hořava–Witten domain wall K-theory (physics) References External links Hořava's webpage at LBNL 1963 births String theorists Czech physicists Living people Theoretical physicists People from Prostějov
Petr Hořava (physicist)
[ "Physics" ]
283
[ "Theoretical physics", "Theoretical physicists" ]
1,191,031
https://en.wikipedia.org/wiki/Graviphoton
In theoretical physics and quantum physics, a graviphoton or gravivector is a hypothetical particle which emerges as an excitation of the metric tensor (i.e. gravitational field) in spacetime dimensions higher than four, as described in Kaluza–Klein theory. However, its crucial physical properties are analogous to a (massive) photon: it induces a "vector force", sometimes dubbed a "fifth force". The electromagnetic potential emerges from an extra component of the metric tensor , where the figure 5 labels an additional, fifth dimension. In gravity theories with extended supersymmetry (extended supergravities), a graviphoton is normally a superpartner of the graviton that behaves like a photon, and is prone to couple with gravitational strength, as was appreciated in the late 1970s. Unlike the graviton, it may provide a repulsive (as well as an attractive) force, and thus, in some technical sense, a type of anti-gravity. Under special circumstances, in several natural models, often descending from five-dimensional theories mentioned, it may actually cancel the gravitational attraction in the static limit. Joël Scherk investigated semirealistic aspects of this phenomenon, stimulating searches for physical manifestations of this mechanism. See also Graviscalar (a.k.a. radion) Supergravity List of hypothetical particles References Supersymmetry Bosons Photons Hypothetical elementary particles Force carriers Subatomic particles with spin 1
Graviphoton
[ "Physics" ]
306
[ "Symmetry", "Physical phenomena", "Force carriers", "Unsolved problems in physics", "Bosons", "Subatomic particles", "Fundamental interactions", "Particle physics", "Particle physics stubs", "Hypothetical elementary particles", "Supersymmetry", "Physics beyond the Standard Model", "Matter" ]
1,191,067
https://en.wikipedia.org/wiki/Magnetic%20vector%20potential
In classical electromagnetism, magnetic vector potential (often called A) is the vector quantity defined so that its curl is equal to the magnetic field: . Together with the electric potential φ, the magnetic vector potential can be used to specify the electric field E as well. Therefore, many equations of electromagnetism can be written either in terms of the fields E and B, or equivalently in terms of the potentials φ and A. In more advanced theories such as quantum mechanics, most equations use potentials rather than fields. Magnetic vector potential was independently introduced by Franz Ernst Neumann and Wilhelm Eduard Weber in 1845 and in 1846, respectively to discuss Ampère's circuital law. William Thomson also introduced the modern version of the vector potential in 1847, along with the formula relating it to the magnetic field. Unit conventions This article uses the SI system. In the SI system, the units of A are V·s·m−1 and are the same as that of momentum per unit charge, or force per unit current. Magnetic vector potential The magnetic vector potential, , is a vector field, and the electric potential, , is a scalar field such that: where is the magnetic field and is the electric field. In magnetostatics where there is no time-varying current or charge distribution, only the first equation is needed. (In the context of electrodynamics, the terms vector potential and scalar potential are used for magnetic vector potential and electric potential, respectively. In mathematics, vector potential and scalar potential can be generalized to higher dimensions.) If electric and magnetic fields are defined as above from potentials, they automatically satisfy two of Maxwell's equations: Gauss's law for magnetism and Faraday's law. For example, if is continuous and well-defined everywhere, then it is guaranteed not to result in magnetic monopoles. (In the mathematical theory of magnetic monopoles, is allowed to be either undefined or multiple-valued in some places; see magnetic monopole for details). Starting with the above definitions and remembering that the divergence of the curl is zero and the curl of the gradient is the zero vector: Alternatively, the existence of and is guaranteed from these two laws using Helmholtz's theorem. For example, since the magnetic field is divergence-free (Gauss's law for magnetism; i.e., ), always exists that satisfies the above definition. The vector potential is used when studying the Lagrangian in classical mechanics and in quantum mechanics (see Schrödinger equation for charged particles, Dirac equation, Aharonov–Bohm effect). In minimal coupling, is called the potential momentum, and is part of the canonical momentum. The line integral of over a closed loop, , is equal to the magnetic flux, , through a surface, , that it encloses: Therefore, the units of are also equivalent to weber per metre. The above equation is useful in the flux quantization of superconducting loops. Although the magnetic field, , is a pseudovector (also called axial vector), the vector potential, , is a polar vector. This means that if the right-hand rule for cross products were replaced with a left-hand rule, but without changing any other equations or definitions, then would switch signs, but A would not change. This is an example of a general theorem: The curl of a polar vector is a pseudovector, and vice versa. Gauge choices The above definition does not define the magnetic vector potential uniquely because, by definition, we can arbitrarily add curl-free components to the magnetic potential without changing the observed magnetic field. Thus, there is a degree of freedom available when choosing . This condition is known as gauge invariance. Two common gauge choices are The Lorenz gauge: The Coulomb gauge: Lorenz gauge In other gauges, the formulas for and are different; for example, see Coulomb gauge for another possibility. Time domain Using the above definition of the potentials and applying it to the other two Maxwell's equations (the ones that are not automatically satisfied) results in a complicated differential equation that can be simplified using the Lorenz gauge where is chosen to satisfy: Using the Lorenz gauge, the electromagnetic wave equations can be written compactly in terms of the potentials, Wave equation of the scalar potential Wave equation of the vector potential The solutions of Maxwell's equations in the Lorenz gauge (see Feynman and Jackson) with the boundary condition that both potentials go to zero sufficiently fast as they approach infinity are called the retarded potentials, which are the magnetic vector potential and the electric scalar potential due to a current distribution of current density , charge density , and volume , within which and are non-zero at least sometimes and some places): Solutions where the fields at position vector and time are calculated from sources at distant position at an earlier time The location is a source point in the charge or current distribution (also the integration variable, within volume ). The earlier time is called the retarded time, and calculated as Time-domain notes The Lorenz gauge condition is satisfied: The position of , the point at which values for and are found, only enters the equation as part of the scalar distance from to The direction from to does not enter into the equation. The only thing that matters about a source point is how far away it is. The integrand uses retarded time, This reflects the fact that changes in the sources propagate at the speed of light. Hence the charge and current densities affecting the electric and magnetic potential at and , from remote location must also be at some prior time The equation for is a vector equation. In Cartesian coordinates, the equation separates into three scalar equations: In this form it is apparent that the component of in a given direction depends only on the components of that are in the same direction. If the current is carried in a straight wire, points in the same direction as the wire. Frequency domain The preceding time domain equations can be expressed in the frequency domain. Lorenz gauge or Solutions Wave equations Electromagnetic field equations where and are scalar phasors. and are vector phasors. Frequency domain notes There are a few notable things about and calculated in this way: The Lorenz gauge condition is satisfied: This implies that the frequency domain electric potential, , can be computed entirely from the current density distribution, . The position of the point at which values for and are found, only enters the equation as part of the scalar distance from to The direction from to does not enter into the equation. The only thing that matters about a source point is how far away it is. The integrand uses the phase shift term which plays a role equivalent to retarded time. This reflects the fact that changes in the sources propagate at the speed of light; propagation delay in the time domain is equivalent to a phase shift in the frequency domain. The equation for is a vector equation. In Cartesian coordinates, the equation separates into three scalar equations: In this form it is apparent that the component of in a given direction depends only on the components of that are in the same direction. If the current is carried in a straight wire, points in the same direction as the wire. Depiction of the A-field See Feynman for the depiction of the field around a long thin solenoid. Since assuming quasi-static conditions, i.e. and , the lines and contours of relate to like the lines and contours of relate to Thus, a depiction of the field around a loop of flux (as would be produced in a toroidal inductor) is qualitatively the same as the field around a loop of current. The figure to the right is an artist's depiction of the field. The thicker lines indicate paths of higher average intensity (shorter paths have higher intensity so that the path integral is the same). The lines are drawn to (aesthetically) impart the general look of the The drawing tacitly assumes , true under any one of the following assumptions: the Coulomb gauge is assumed the Lorenz gauge is assumed and there is no distribution of charge, the Lorenz gauge is assumed and zero frequency is assumed the Lorenz gauge is assumed and a non-zero frequency, but still assumed sufficiently low to neglect the term Electromagnetic four-potential In the context of special relativity, it is natural to join the magnetic vector potential together with the (scalar) electric potential into the electromagnetic potential, also called four-potential. One motivation for doing so is that the four-potential is a mathematical four-vector. Thus, using standard four-vector transformation rules, if the electric and magnetic potentials are known in one inertial reference frame, they can be simply calculated in any other inertial reference frame. Another, related motivation is that the content of classical electromagnetism can be written in a concise and convenient form using the electromagnetic four potential, especially when the Lorenz gauge is used. In particular, in abstract index notation, the set of Maxwell's equations (in the Lorenz gauge) may be written (in Gaussian units) as follows: where is the d'Alembertian and is the four-current. The first equation is the Lorenz gauge condition while the second contains Maxwell's equations. The four-potential also plays a very important role in quantum electrodynamics. Charged particle in a field In a field with electric potential and magnetic potential , the Lagrangian () and the Hamiltonian () of a particle with mass and charge are See also Magnetic scalar potential Aharonov–Bohm effect Gluon field Notes References External links Potentials Magnetism Vector physical quantities
Magnetic vector potential
[ "Physics", "Mathematics" ]
2,011
[ "Quantity", "Vector physical quantities", "Physical quantities" ]
1,191,172
https://en.wikipedia.org/wiki/Dangling%20bond
In chemistry, a dangling bond is an unsatisfied valence on an immobilized atom. An atom with a dangling bond is also referred to as an immobilized free radical or an immobilized radical, a reference to its structural and chemical similarity to a free radical. When speaking of a dangling bond, one is generally referring to the state described above, containing one electron and thus leading to a neutrally charged atom. There are also dangling bond defects containing two or no electrons. These are negatively and positively charged respectively. Dangling bonds with two electrons have an energy close to the valence band of the material and those with none have an energy that is closer to the conduction band. Properties In order to gain enough electrons to fill their valence shells (see also octet rule), many atoms will form covalent bonds with other atoms. In the simplest case, that of a single bond, two atoms each contribute one unpaired electron, and the resulting pair of electrons is shared between them. Atoms that possess too few bonding partners to satisfy their valences and that possess unpaired electrons are termed "free radicals"; so, often, are molecules containing such atoms. When a free radical exists in an immobilized environment (for example, a solid), it is referred to as an "immobilized free radical" or a "dangling bond". A dangling bond in (bulk) crystalline silicon is often pictured as a single unbound hybrid sp3 orbital on the silicon atom, with the other three sp3 orbitals facing away from the unbound orbital. In reality, the dangling bond unbound orbital is better described by having more than half of the dangling bond wave function localized on the silicon nucleus, with delocalized electron density around the three bonding orbitals, comparable to a p-orbital with more electron density localized on the silicon nucleus. The three remaining bonds tend to shift to a more planar configuration. It has also been found in experiments that Electron Paramagnetic Resonance (EPR) spectra of amorphous hydrogenated silicon (a-Si:H) do not differ significantly from the deuterated counterpart, a-Si:D, suggesting that there is hardly any backbonding to the silicon from hydrogen on a dangling bond. It also appeared that the Si-Si and Si-H bonds are about equally strong. Reactivity Both free and immobilized radicals display very different chemical characteristics from atoms and molecules containing only complete bonds. Generally, they are extremely reactive. Immobilized free radicals, like their mobile counterparts, are highly unstable, but they gain some kinetic stability because of limited mobility and steric hindrance. While free radicals are usually short-lived, immobilized free radicals often exhibit a longer lifetime because of this reduction in reactivity. Magnetic The presence of dangling bonds can lead to ferromagnetism in materials that are normally magnetically inactive, such as polymers and hydrogenated graphitic materials. A dangling bond contains/consists of an electron and can thus contribute its own net (para)magnetic moment. This only happens when the dangling bond electron does not pair its spin to that of another electron. Ferromagnetic properties in various carbon nanostructures can be described using dangling bonds and may be used to create metal-free organic spintronics and polymeric ferromagnetic materials (see Applications). Creating dangling bonds with unpaired electrons can, for example, be achieved by cutting or putting large mechanical strain on a polymer. In this process, covalent bonds between carbon atoms are broken. One electron can end up on each of the carbon atoms that originally contributed to the bond, leading to two unpaired dangling bonds. Optical A dangling bond adds an extra energy level between the valence band and conduction band of a lattice. This allows for absorption and emission at longer wavelengths, because electrons can take smaller energy steps by moving to and from this extra level. The energy of the photons absorbed by or emitted from this level is not exactly equal to the energy difference between the bottom of the conduction band and the dangling bond or the top of the valence band and the dangling bond. This is due to lattice relaxation which causes a Franck-Condon shift in the energy. This shift accounts for the difference between a tight-binding calculation of these energy differences and the experimentally measured energies. Another way in which the presence of dangling bonds affects the optical properties of a material is via polarization. For a material with dangling bonds, the absorption intensity depends on the polarization of the absorbed light. This is an effect of the symmetry in which the dangling bonds are distributed over the surface of the material. The dependence only occurs up to the energy at which an electron can be excited to the level of the gap but not to the valence band. This effect along with the polarization dependence disappearing after the dangling bonds have been annealed, shows that it is an effect of the dangling bonds and not just of the general symmetry of the material. Induced In hydrogenated silicon, dangling bonds can be induced by (long) exposure to light. This causes a decrease in the photoconductivity of the material. (This is the most named explanation for the so-called Staebler-Wronski effect.) The mechanism of this is thought to be as follows: The photon energy is transferred to the system which causes the weak Si-Si bonds to break, leading to the formation of two bound radicals. The free electrons being localized and being very close together is an unstable state, so hydrogen atoms “move” to the site of the breakage. This causes the electrons to be delocalized further apart which is a more stable state. For a hydrogen content of around 10%, the dangling bonds from only a very small fraction of displaced hydrogen atoms can lead to observable EPR signal increases. The diffusion of hydrogen plays a key role in the process and explains why long illumination is required. It has been found that illumination under increased temperatures increases the rate at which light-induced dangling bonds form. This can be explained by the increased hydrogen diffusion. It is thought that the formation mechanism of intrinsic dangling bonds (in hydrogenated silicon) is very similar to that of light induced dangling bonds, except that the energy source is heat rather than photons. This explains why the intrinsic dangling bond density is negligible at room temperature.  Light can also induce dangling bond formation in materials with intimately related valence alternation pairs (IVAP), such as a-As2S3. These IVAP defects consist of a dangling bond containing two electrons (D−) and a dangling bond containing no electrons (D+). When one of these pairs is illuminated, it can capture an electron or an electron hole resulting in the following reactions: D+D− + e− → D0D− D+D− + h+ → D+D0 Here, D0 is an uncharged dangling bond. Surface Surfaces of silicon, germanium, graphite (carbon) and germanium-silicide are active in EPR measurements. Mainly group 14 (formerly group IV) elements show EPR signals from a surface after crushing. Crystals of elements from groups 13 to 15 prefer to have the (110) plane exposed as a surface. On this surface, an atom of group 13 has 3/4 dangling bond, and an atom of group 15 has 5/4 dangling bond. Because of dehybridization of surface orbitals (caused by the decreased number of nearest neighbor atoms around the surface atom), a group 13 atom will have a largely unfilled dangling orbital since it has valence 3 and makes three bonds, while a group 15 atom will have a fully occupied dangling orbital at the surface. In that case, there is hardly any unpaired electron density, which results in a weak EPR signal for such materials. Clean cleaved surfaces of such materials form paired electron localized states on alternate sites resulting in a very weak to no EPR signal. Not well-cleaved surfaces and microcracks obtained from crushing, cleaving, abrading, neutron or high-energy ion irradiation or heating and rapid cooling in vacuum give a measurable EPR signal (a characteristic signal in Si at g = 2,0055). The presence of oxygen and hydrogen gas affects the EPR signal from microcracks by affecting the single electron spin centers. The gas molecules can get trapped and, when staying close to a spin center, affect the EPR signal. When a microcrack is sufficiently small, the wave functions of the dangling bond states extend beyond the surface and can overlap with wave functions from the opposite surface. This can create shear forces in the crystal surface, causing atom layers to realign while creating dangling bonds in the process. Due to the reactivity of dangling bonds, the semiconductor native oxide will form due to adsorption of gas molecules, the only remaining dangling bonds are located at oxygen vacancies. Dangling bonds form an sp3-hybridized bond with the adsorbed molecule, which have a metallic character. They are often the only defect sites present on atomic semiconductors, which provide such "soft centers" for molecules to adsorb to. When no gas adsorption is possible (for example for clean surfaces in vacuum), the surface energy can be reduced by reorganizing bonding electrons, creating lattice strain in the process. In case of the (001) surface plane of silicon, a single dangling bond on each atom will be formed, while pairing the other electron with a neighboring atom. Removal of dangling bond surface states on the silicon (001) surface from the band gap can be achieved by treatment of the surface with a monolayer of selenium (alternatively, sulfur was proposed). Selenium can attach to the silicon (001) surface and can bind to surface dangling bonds, bridging between silicon atoms. This releases the strain in the silicon surface and terminates the dangling bonds, covering them from the outside environment. When exposed, dangling bonds can act as surface states in electronic processes. In semiconductors Some allotropes of silicon, such as amorphous silicon, display a high concentration of dangling bonds. Besides being of fundamental interest, these dangling bonds are important in modern semiconductor device operation. Hydrogen introduced to the silicon during the synthesis process is well known to saturate most dangling bonds, as are other elements such as oxygen, making the material suitable for applications (see semiconductor devices). The dangling bond states have wave functions that extend beyond the surface and can occupy states above the valence band. The resulting difference in surface and bulk Fermi level causes surface band bending and the abundancy of surface states pins the Fermi level. For the compound semiconductor GaAs, stronger electron pairing is observed at the surface, making for almost filled orbitals in arsenic and almost empty orbitals for gallium. Consequently, the dangling bond density at the surface is much lower and no Fermi level pinning occurs. In doped semiconductors, surface properties are still dependent on the dangling bonds, since they occur in a number density of around 1013 per square centimeter, compared to dopant electrons or holes with a number density of 1014 to 1018 per cubic centimeter which are thus much less abundant on the material surface. Passivation (silicon photovoltaics) By definition, passivation is a treatment process of the surface of the layers to reduce the effects of the surrounding environment. In photovoltaics (PV) technology, passivation is the surface treatment of the wafer or thin film in order to reduce the surface and some of the bulk recombination of the minority carriers. There are two main ways to passivate the surface of the silicon wafer in order to saturate the dangling bonds: field-effect passivation of the surface with a dielectric layer of SiOx, also known as \Atalla passivation", and hydrogen passivation, which is one of the chemical methods used for passivation. Hydrogen passivation Hydrogen passivation is one way to saturate these dangling bonds. This passivation process is carried out by one of the following mechanisms: deposition of a thin film from silicon nitride SiNx on the top of the polycrystalline silicon layer, or passivation by remote plasma hydrogen passivation (RPHP). In the latter method, hydrogen, oxygen, and argon gases react inside the chamber, then, the hydrogen is dissociating to the atomic hydrogen under the plasma condition to diffuse into the silicon interface to saturate the dangling bonds. This saturation reduces the interface defect state, where the recombination takes place. Dielectric layer passivation Passivation by a dielectric layer on the top of crystalline silicon (c-Si) wafer, also called "tunnel passivation" is one of the passivation techniques used most widely in PV technology. This technique combines both chemical passivation and field-effect passivation. This strategy is based on the formation of a dielectric layer (mostly silicon dioxide SiO2, aluminum oxide Al2O3, or silicon nitride (SiNx) on the top of the c-Si substrate be the mean of thermal oxidation or other deposition techniques such as atomic layer deposition (ALD). In the case of the formation of SiOx by thermal oxidation, the process acts as chemical passivation since, on the one hand, the formation of the oxide layer reacts with the dangling bonds on the surface wherein it reduces the defects states at the interface. On the other hand, since there are fixed charges (Qf) in the dielectric film, these fixed charges establish an electric field that repels one type of charge carrier and accumulates the other type at the interface. This repletion assures reducing one type of the charge carriers concentration at the interface wherein the recombination decreases. Applications Catalysis In experiments by Yunteng Qu et al., dangling bonds on graphene oxide were used to bind single metal atoms (Fe, Co, Ni, Cu) for applications in catalysis. Metal atoms were adsorbed by oxidizing metal from a foam and coordinating the metal ions to the dangling bonds on the oxygen of the graphene oxide. The resulting catalyst had a high density of catalytic centers and showed high activity, comparable to other non-noble metal catalysts in oxygen reduction reactions while maintaining stability in a wide range of electrochemical potential, comparable to Pt/C electrodes. Ferromagnetic polymers An example of an organic ferromagnetic polymer is presented in an article by Yuwei Ma et al.: by cutting with ceramic scissors or stretching a piece of Teflon tape, a network of strongly coupling dangling bonds arises on surfaces where the polymer was broken (from cutting or in strain-induced cavities). In the case of weak structural deformation, where only very few dangling bonds are formed, the coupling is very weak and a paramagnetic signal is measured in EPR analysis. Annealing Teflon under an argon atmosphere at 100 °C to 200 °C results also in ferromagnetic properties. However, annealing close to the melting temperature of Teflon makes the ferromagnetism disappear. Under longer air exposure, the magnetization is reduced due to adsorbed water molecules. It also appeared that no ferromagnetism would develop under annealing Teflon under water steam or cutting in a H2 environment. Computational chemistry In computational chemistry, a dangling bond generally represents an error in structure creation, in which an atom is inadvertently drawn with too few bonding partners, or a bond is mistakenly drawn with an atom at only one end. References Further reading Condensed matter physics Solid-state chemistry
Dangling bond
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,244
[ "Phases of matter", "Materials science", "Condensed matter physics", "nan", "Matter", "Solid-state chemistry" ]
1,191,951
https://en.wikipedia.org/wiki/Glycated%20hemoglobin
Glycated hemoglobin, also called glycohemoglobin, is a form of hemoglobin (Hb) that is chemically linked to a sugar. Most monosaccharides, including glucose, galactose, and fructose, spontaneously (that is, non-enzymatically) bond with hemoglobin when they are present in the bloodstream. However, glucose is only 21% as likely to do so as galactose and 13% as likely to do so as fructose, which may explain why glucose is used as the primary metabolic fuel in humans. The formation of excess sugar-hemoglobin linkages indicates the presence of excessive sugar in the bloodstream and is an indicator of diabetes or other hormone diseases in high concentration . A1c is of particular interest because it is easy to detect. The process by which sugars attach to hemoglobin is called glycation and the reference system is based on HbA1c, defined as beta-N-1-deoxy fructosyl hemoglobin as component. There are several ways to measure glycated hemoglobin, of which HbA1c (or simply A1c) is a standard single test. HbA1c is measured primarily to determine the three-month average blood sugar level and is used as a standard diagnostic test for evaluating the risk of complications of diabetes and as an assessment of glycemic control. The test is considered a three-month average because the average lifespan of a red blood cell is three to four months. Normal levels of glucose produce a normal amount of glycated hemoglobin. As the average amount of plasma glucose increases, the fraction of glycated hemoglobin increases in a predictable way. In diabetes, higher amounts of glycated hemoglobin, indicating higher of blood glucose levels, have been associated with cardiovascular disease, nephropathy, neuropathy, and retinopathy. Terminology Glycated hemoglobin is preferred over glycosylated hemoglobin to reflect the correct (non-enzymatic) process. Early literature often used glycosylated as it was unclear which process was involved until further research was performed. The terms are still sometimes used interchangeably in English-language literature. The naming of HbA1c derives from hemoglobin type A being separated on cation exchange chromatography. The first fraction to separate, probably considered to be pure hemoglobin A, was designated HbA0, and the following fractions were designated HbA1a, HbA1b, and HbA1c, in their order of elution. Improved separation techniques have subsequently led to the isolation of more subfractions. History Hemoglobin A1c was first separated from other forms of hemoglobin by Huisman and Meyering in 1958 using a chromatographic column. It was first characterized as a glycoprotein by Bookchin and Gallop in 1968. Its increase in diabetes was first described in 1969 by Samuel Rahbar et al. The reactions leading to its formation were characterized by Bunn and his coworkers in 1975. The use of hemoglobin A1c for monitoring the degree of control of glucose metabolism in diabetic patients was proposed in 1976 by Anthony Cerami, Ronald Koenig, and coworkers. Damage mechanisms Glycated hemoglobin causes an increase of highly reactive free radicals inside blood cells, altering the properties of their cell membranes. This leads to blood cell aggregation and increased blood viscosity, which results in impaired blood flow. Another way glycated hemoglobin causes damage is via inflammation, which results in atherosclerotic plaque (atheroma) formation. Free-radical build-up promotes the excitation of Fe2+-hemoglobin through into abnormal ferryl hemoglobin (Fe4+-Hb). Fe4+ is unstable and reacts with specific amino acids in hemoglobin to regain its Fe3+ oxidation state. Hemoglobin molecules clump together via cross-linking reactions, and these hemoglobin clumps (multimers) promote cell damage and the release of Fe4+-hemoglobin into the matrix of innermost layers (subendothelium) of arteries and veins. This results in increased permeability of interior surface (endothelium) of blood vessels and production of pro-inflammatory monocyte adhesion proteins, which promote macrophage accumulation in blood vessel surfaces, ultimately leading to harmful plaques in these vessels. Highly glycated Hb-AGEs go through vascular smooth muscle layer and inactivate acetylcholine-induced endothelium-dependent relaxation, possibly through binding to nitric oxide (NO), preventing its normal function. NO is a potent vasodilator and also inhibits formation of plaque-promoting LDLs (sometimes called "bad cholesterol") oxidized form. This overall degradation of blood cells also releases heme from them. Loose heme can cause oxidation of endothelial and LDL proteins, which results in plaques. Principle in medical diagnostics Glycation of proteins is a frequent occurrence, but in the case of hemoglobin, a nonenzymatic condensation reaction occurs between glucose and the N-end of the beta chain. This reaction produces a Schiff base (, R=beta chain, CHR'=glucose-derived), which is itself converted to 1-deoxyfructose. This second conversion is an example of an Amadori rearrangement. When blood glucose levels are high, glucose molecules attach to the hemoglobin in red blood cells. The longer hyperglycemia occurs in blood, the more glucose binds to hemoglobin in the red blood cells and the higher the glycated hemoglobin. Once a hemoglobin molecule is glycated, it remains that way. A buildup of glycated hemoglobin within the red cell, therefore, reflects the average level of glucose to which the cell has been exposed during its life-cycle. Measuring glycated hemoglobin assesses the effectiveness of therapy by monitoring long-term serum glucose regulation. A1c is a weighted average of blood glucose levels during the life of the red blood cells (117 days for men and 106 days in women). Therefore, glucose levels on days nearer to the test contribute substantially more to the level of A1c than the levels in days further from the test. This is also supported by data from clinical practice showing that HbA1c levels improved significantly after 20 days from start or intensification of glucose-lowering treatment. Measurement Several techniques are used to measure hemoglobin A1c. Laboratories may use high-performance liquid chromatography, immunoassay, enzymatic assay, capillary electrophoresis, or boronate affinity chromatography. Point of care (e.g., doctor's office) devices use immunoassay boronate affinity chromatography. In the United States, HbA1c testing laboratories are certified by the National Glycohemoglobin Standardization Program to standardize them against the results of the 1993 Diabetes Control and Complications Trial (DCCT). An additional percentage scale, Mono S has previously been in use by Sweden and KO500 is in use in Japan. Switch to IFCC units The American Diabetes Association, European Association for the Study of Diabetes, and International Diabetes Federation have agreed that, in the future, HbA1c is to be reported in the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) units. IFCC reporting was introduced in Europe except for the UK in 2003; the UK carried out dual reporting from 1 June 2009 until 1 October 2011. Conversion between DCCT and IFCC is by the following equation: Interpretation of results Laboratory results may differ depending on the analytical technique, the age of the subject, and biological variation among individuals. Higher levels of HbA1c are found in people with persistently elevated blood sugar, as in diabetes mellitus. While diabetic patient treatment goals vary, many include a target range of HbA1c values. A diabetic person with good glucose control has an HbA1c level that is close to or within the reference range. The International Diabetes Federation and the American College of Endocrinology recommend HbA1c values below 48 mmol/mol (6.5 DCCT %), while the American Diabetes Association recommends HbA1c be below 53 mmol/mol (7.0 DCCT %) for most patients. Results from large trials in suggested that a target below 53 mmol/mol (7.0 DCCT %) for older adults with type 2 diabetes may be excessive: Below 53 mmol/mol, the health benefits of reduced A1c become smaller, and the intensive glycemic control required to reach this level leads to an increased rate of dangerous hypoglycemic episodes. A retrospective study of 47,970 type 2 diabetes patients, aged 50 years and older, found that patients with an HbA1c more than 48 mmol/mol (6.5 DCCT %) had an increased mortality rate, but a later international study contradicted these findings. A review of the UKPDS, Action to Control Cardiovascular Risk in Diabetes (ACCORD), Advance and Veterans Affairs Diabetes Trials (VADT) estimated that the risks of the main complications of diabetes (diabetic retinopathy, diabetic nephropathy, diabetic neuropathy, and macrovascular disease) decreased by about 3% for every 1 mmol/mol decrease in HbA1c. However, a trial by ACCORD designed specifically to determine whether reducing HbA1c below 42 mmol/mol (6.0 DCCT %) using increased amounts of medication would reduce the rate of cardiovascular events found higher mortality with this intensive therapy, so much so that the trial was terminated 17 months early. Practitioners must consider patients' health, their risk of hypoglycemia, and their specific health risks when setting a target HbA1c level. Because patients are responsible for averting or responding to their own hypoglycemic episodes, their input and the doctors' assessments of the patients' self-care skills are also important. Persistent elevations in blood sugar (and, therefore, HbA1c) increase the risk of long-term vascular complications of diabetes, such as coronary disease, heart attack, stroke, heart failure, kidney failure, blindness, erectile dysfunction, neuropathy (loss of sensation, especially in the feet), gangrene, and gastroparesis (slowed emptying of the stomach). Poor blood glucose control also increases the risk of short-term complications of surgery such as poor wound healing. All-cause mortality is higher above 64 mmol/mol (8.0 DCCT%) HbA1c as well as below 42 mmol/mol (6.0 DCCT %) in diabetic patients, and above 42 mmol/mol (6.0 DCCT %) as well as below 31 mmol/mol (5.0 DCCT %) in non-diabetic persons, indicating the risks of hyperglycemia and hypoglycemia, respectively. Similar risk results are seen for cardiovascular disease. The 2022 ADA guidelines reaffirmed the recommendation that HbA1c should be maintained below 7.0% for most patients. Higher target values are appropriate for children and adolescents, patients with extensive co-morbid illness and those with a history of severe hypoglycemia. More stringent targets (<6.0%) are preferred for pregnant patients if this can be achieved without significant hypoglycemia. Factors other than glucose that affect A1c Lower-than-expected levels of HbA1c can be seen in people with shortened red blood cell lifespans, such as with glucose-6-phosphate dehydrogenase deficiency, sickle-cell disease, or any other condition causing premature red blood cell death. For these patients, alternate assessment with fructosamine or glycated albumin is recommended; these methods reflect glycemic control over the preceding 2-3 weeks. Blood donation will result in rapid replacement of lost RBCs with newly formed red blood cells. Since these new RBCs will have only existed for a short period of time, their presence will lead HbA1c to underestimate the actual average levels. There may also be distortions resulting from blood donation during the preceding two months, due to an abnormal synchronization of the age of the RBCs, resulting in an older than normal average blood cell life (resulting in an overestimate of actual average blood glucose levels). Conversely, higher-than-expected levels can be seen in people with a longer red blood cell lifespan, such as with iron deficiency. Results can be unreliable in many circumstances, for example after blood loss, after surgery, blood transfusions, anemia, or high erythrocyte turnover; in the presence of chronic renal or liver disease; after administration of high-dose vitamin C; or erythropoetin treatment. Hypothyroidism can artificially raise the A1c. In general, the reference range (that found in healthy young persons), is about 30–33 mmol/mol (4.9–5.2 DCCT %). The mean HbA1c for diabetics type 1 in Sweden in 2014 was 63 mmol/mol (7.9 DCCT%) and for type 2, 61 mmol/mol (7.7 DCCT%). HbA1c levels show a small, but statistically significant, progressive uptick with age; the clinical importance of this increase is unclear. Mapping from A1c to estimated average glucose The approximate mapping between HbA1c values given in DCCT percentage (%) and eAG (estimated average glucose) measurements is given by the following equation: eAG(mg/dL) = 28.7 × A1C − 46.7eAG(mmol/L) = 1.59 × A1C − 2.59(Data in parentheses are 95% confidence intervals>) Normal, prediabetic, and diabetic ranges The 2010 American Diabetes Association Standards of Medical Care in Diabetes added the HbA1c ≥ 48 mmol/mol (≥6.5 DCCT %) as another criterion for the diagnosis of diabetes. Indications and uses Glycated hemoglobin testing is recommended for both checking the blood sugar control in people who might be prediabetic and monitoring blood sugar control in patients with more elevated levels, termed diabetes mellitus. For a single blood sample, it provides far more revealing information on glycemic behavior than a fasting blood sugar value. However, fasting blood sugar tests are crucial in making treatment decisions. The American Diabetes Association guidelines are similar to others in advising that the glycated hemoglobin test be performed at least twice a year in patients with diabetes who are meeting treatment goals (and who have stable glycemic control) and quarterly in patients with diabetes whose therapy has changed or who are not meeting glycemic goals. Glycated hemoglobin measurement is not appropriate where a change in diet or treatment has been made within six weeks. Likewise, the test assumes a normal red blood cell aging process and mix of hemoglobin subtypes (predominantly HbA in normal adults). Hence, people with recent blood loss, hemolytic anemia, or genetic differences in the hemoglobin molecule (hemoglobinopathy) such as sickle-cell disease and other conditions, as well as those who have donated blood recently, are not suitable for this test. Due to glycated hemoglobin's variability (as shown in the table above), additional measures should be checked in patients at or near recommended goals. People with HbA1c values at 64 mmol/mol or less should be provided additional testing to determine whether the HbA1c values are due to averaging out high blood glucose (hyperglycemia) with low blood glucose (hypoglycemia) or the HbA1c is more reflective of an elevated blood glucose that does not vary much throughout the day. Devices such as continuous blood glucose monitoring allow people with diabetes to determine their blood glucose levels on a continuous basis, testing every few minutes. Continuous use of blood glucose monitors is becoming more common, and the devices are covered by many health insurance plans, including Medicare in the United States. The supplies tend to be expensive, since the sensors must be changed at least every 2 weeks. Another useful test in determining if HbA1c values are due to wide variations of blood glucose throughout the day is 1,5-anhydroglucitol, also known as GlycoMark. GlycoMark reflects only the times that the person experiences hyperglycemia above 180 mg/dL over a two-week period. Concentrations of hemoglobin A1 (HbA1) are increased, both in diabetic patients and in patients with kidney failure, when measured by ion-exchange chromatography. The thiobarbituric acid method (a chemical method specific for the detection of glycation) shows that patients with kidney failure have values for glycated hemoglobin similar to those observed in normal subjects, suggesting that the high values in these patients are a result of binding of something other than glucose to hemoglobin. In autoimmune hemolytic anemia, concentrations of HbA1 is undetectable. Administration of prednisolone will allow the HbA1 to be detected. The alternative fructosamine test may be used in these circumstances and it also reflects an average of blood glucose levels over the preceding 2 to 3 weeks. All the major institutions such as the International Expert Committee Report, drawn from the International Diabetes Federation, the European Association for the Study of Diabetes, and the American Diabetes Association, suggest the HbA1c level of 48 mmol/mol (6.5 DCCT %) as a diagnostic level. The Committee Report further states that, when HbA1c testing cannot be done, the fasting and glucose-tolerance tests be done. Screening for diabetes during pregnancy continues to require fasting and glucose-tolerance measurements for gestational diabetes at 24 to 28 weeks gestation, although glycated hemoglobin may be used for screening at the first prenatal visit. Modification by diet Meta-analysis has shown probiotics to cause a statistically significant reduction in glycated hemoglobin in type-2 diabetics. Trials with multiple strains of probiotics had statistically significant reductions in glycated hemoglobin, whereas trials with single strains did not. Standardization and traceability Most clinical studies recommend the use of HbA1c assays that are traceable to the DCCT assay. The National Glycohemoglobin Standardization Program (NGSP) and IFCC have improved assay standardization. For initial diagnosis of diabetes, only HbA1c methods that are NGSP-certified should be used, not point-of-care testing devices. Analytical performance has been a problem with earlier point-of-care devices for HbA1c testing, specifically large standard deviations and negative bias. Veterinary medicine HbA1c testing has not been found useful in the monitoring during the treatment of cats and dogs with diabetes, and is not generally used; monitoring of fructosamine levels is favoured instead. See also Diabetes mellitus Hemoglobin A2 Prediabetes Proteopedia: Structure of glycated hemoglobin Notes References External links Health Information: Diabetes — National Institutes of Health (NIH): National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) National Diabetes Information Clearinghouse — NIDDK (old site, archived 2010-02-21) Standards of Care in Diabetes, American Diabetes Association Professional Practice Committee Standards of Care in Diabetes — 2024 (pdf), American Diabetes Association Professional Practice Committee Blood tests Diabetes-related tests Diabetes Glucose Hemoglobins
Glycated hemoglobin
[ "Chemistry" ]
4,258
[ "Blood tests", "Chemical pathology" ]
1,192,008
https://en.wikipedia.org/wiki/Sunspot%20number
The Wolf number (also known as the relative sunspot number or Zürich number) is a quantity that measures the number of sunspots and groups of sunspots present on the surface of the Sun. Historically, it was only possible to detect sunspots on the far side of the Sun indirectly using helioseismology. Since 2006, NASA's STEREO spacecrafts allow their direct observation. History Astronomers have been observing the Sun recording information about sunspots since the advent of the telescope in 1609. However, the idea of compiling the information about the sunspot number from various observers originates in Rudolf Wolf in 1848 in Zürich, Switzerland. The produced series initially had his name, but now it is more commonly referred to as the international sunspot number series. The international sunspot number series is still being produced today at the observatory of Brussels. The international number series shows an approximate periodicity of 11 years, the solar cycle, which was first found by Heinrich Schwabe in 1843, thus sometimes it is also referred to as the Schwabe cycle. The periodicity is not constant but varies roughly in the range 9.5 to 11 years. The international sunspot number series extends back to 1700 with annual values while daily values exist only since 1818. Since 1 July 2015 a revised and updated international sunspot number series has been made available. The biggest difference is an overall increase by a factor of 1.6 to the entire series. Traditionally, a scaling of 0.6 was applied to all sunspot counts after 1893, to compensate for Alfred Wolfer's better equipment, after taking over from Wolf. This scaling has been dropped from the revised series, making modern counts closer to their raw values. Also, counts were reduced slightly after 1947 to compensate for bias introduced by a new counting method adopted that year, in which sunspots are weighted according to their size. Calculation The relative sunspot number is computed using the formula where is the number of individual spots, is the number of sunspot groups, and is a factor that varies with observer and is referred to as the observatory factor or the personal reduction coefficient. The observatory factor compensates for the differing number of recorded individual sunspots and sunspot groups by different observers. These differences in recorded values occur due to differences in instrumentation, local seeing, personal experience, and other factors between observers. Since Wolf was the primary observer for the relative sunspot number, his observatory factor was 1. Smoothed monthly mean To calculate the 13-month smoothed monthly mean sunspot number, which is commonly used to calculate the minima and maxima of solar cycles, a tapered-boxcar smoothing function is used. For a given month , with a monthly sunspot number of , the smoothed monthly mean can be expressed as where is the monthly sunspot number months away from month . The smoothed monthly mean is intended to dampen any sudden jumps in the monthly sunspot number and remove the effects of the 27-day solar rotation period. Alternative series The accuracy of the compilation of the group sunspot number series has been questioned, motivating the development of several alternative series suggesting different behavior of sunspot group activity before the 20th century. However, indirect indices of solar activity favor the group sunspot number series by Chatzistergos T. et al. A different index of sunspot activity was introduced in 1998 in the form of the number of groups apparent on the solar disc. With this index it was made possible to include sunspot data acquired since 1609, being the date of the invention of the telescope. See also Solar cycle Joy's law (astronomy) References External links The Exploratorium's Guide to Sunspots Solar Influences Data Analysis Center (SIDC) for the Sunspot Index NASA Solar Physics Sunspot Cycle page and Table of Sunspot Numbers (txt) by month since 1749 CE Stellar phenomena Solar phenomena de:Sonnenfleck#Sonnenflecken-Relativzahl
Sunspot number
[ "Physics" ]
801
[ "Physical phenomena", "Stellar phenomena", "Solar phenomena" ]
1,192,012
https://en.wikipedia.org/wiki/Green%27s%20relations
In mathematics, Green's relations are five equivalence relations that characterise the elements of a semigroup in terms of the principal ideals they generate. The relations are named for James Alexander Green, who introduced them in a paper of 1951. John Mackintosh Howie, a prominent semigroup theorist, described this work as "so all-pervading that, on encountering a new semigroup, almost the first question one asks is 'What are the Green relations like?'" (Howie 2002). The relations are useful for understanding the nature of divisibility in a semigroup; they are also valid for groups, but in this case tell us nothing useful, because groups always have divisibility. Instead of working directly with a semigroup S, it is convenient to define Green's relations over the monoid S1. (S1 is "S with an identity adjoined if necessary"; if S is not already a monoid, a new element is adjoined and defined to be an identity.) This ensures that principal ideals generated by some semigroup element do indeed contain that element. For an element a of S, the relevant ideals are: The principal left ideal generated by a: . This is the same as , which is . The principal right ideal generated by a: , or equivalently . The principal two-sided ideal generated by a: , or . The L, R, and J relations For elements a and b of S, Green's relations L, R and J are defined by a L b if and only if S1 a = S1 b. a R b if and only if a S1 = b S1. a J b if and only if S1 a S1 = S1 b S1. That is, a and b are L-related if they generate the same left ideal; R-related if they generate the same right ideal; and J-related if they generate the same two-sided ideal. These are equivalence relations on S, so each of them yields a partition of S into equivalence classes. The L-class of a is denoted La (and similarly for the other relations). The L-classes and R-classes can be equivalently understood as the strongly connected components of the left and right Cayley graphs of S1. Further, the L, R, and J relations define three preorders ≤L, ≤R, and ≤J, where a ≤J b holds for two elements a and b of S if the ideal generated by a is included in that of b, i.e., S1 a S1 ⊆ S1 b S1, and ≤L and ≤R are defined analogously. Green used the lowercase blackletter , and for these relations, and wrote for a L b (and likewise for R and J). Mathematicians today tend to use script letters such as instead, and replace Green's modular arithmetic-style notation with the infix style used here. Ordinary letters are used for the equivalence classes. The L and R relations are left-right dual to one another; theorems concerning one can be translated into similar statements about the other. For example, L is right-compatible: if a L b and c is another element of S, then ac L bc. Dually, R is left-compatible: if a R b, then ca R cb. If S is commutative, then L, R and J coincide. The H and D relations The remaining relations are derived from L and R. Their intersection is H: a H b if and only if a L b and a R b. This is also an equivalence relation on S. The class Ha is the intersection of La and Ra. More generally, the intersection of any L-class with any R-class is either an H-class or the empty set. Green's Theorem states that for any -class H of a semigroup S either (i) or (ii) and H is a subgroup of S. An important corollary is that the equivalence class He, where e is an idempotent, is a subgroup of S (its identity is e, and all elements have inverses), and indeed is the largest subgroup of S containing e. No -class can contain more than one idempotent, thus is idempotent separating. In a monoid M, the class H1 is traditionally called the group of units. (Beware that unit does not mean identity in this context, i.e. in general there are non-identity elements in H1. The "unit" terminology comes from ring theory.) For example, in the transformation monoid on n elements, Tn, the group of units is the symmetric group Sn. Finally, D is defined: a D b if and only if there exists a c in S such that a L c and c R b. In the language of lattices, D is the join of L and R. (The join for equivalence relations is normally more difficult to define, but is simplified in this case by the fact that a L c and c R b for some c if and only if a R d and d L b for some d.) As D is the smallest equivalence relation containing both L and R, we know that a D b implies a J b—so J contains D. In a finite semigroup, D and J are the same, as also in a rational monoid. Furthermore they also coincide in any epigroup. There is also a formulation of D in terms of equivalence classes, derived directly from the above definition: a D b if and only if the intersection of Ra and Lb is not empty. Consequently, the D-classes of a semigroup can be seen as unions of L-classes, as unions of R-classes, or as unions of H-classes. Clifford and Preston (1961) suggest thinking of this situation in terms of an "egg-box": Each row of eggs represents an R-class, and each column an L-class; the eggs themselves are the H-classes. For a group, there is only one egg, because all five of Green's relations coincide, and make all group elements equivalent. The opposite case, found for example in the bicyclic semigroup, is where each element is in an H-class of its own. The egg-box for this semigroup would contain infinitely many eggs, but all eggs are in the same box because there is only one D-class. (A semigroup for which all elements are D-related is called bisimple.) It can be shown that within a D-class, all H-classes are the same size. For example, the transformation semigroup T4 contains four D-classes, within which the H-classes have 1, 2, 6, and 24 elements respectively. Recent advances in the combinatorics of semigroups have used Green's relations to help enumerate semigroups with certain properties. A typical result (Satoh, Yama, and Tokizawa 1994) shows that there are exactly 1,843,120,128 non-equivalent semigroups of order 8, including 221,805 that are commutative; their work is based on a systematic exploration of possible D-classes. (By contrast, there are only five groups of order 8.) Example The full transformation semigroup T3 consists of all functions from the set {1, 2, 3} to itself; there are 27 of these. Write (a b c) for the function that sends 1 to a, 2 to b, and 3 to c. Since T3 contains the identity map, (1 2 3), there is no need to adjoin an identity. The egg-box diagram for T3 has three D-classes. They are also J-classes, because these relations coincide for a finite semigroup. In T3, two functions are L-related if and only if they have the same image. Such functions appear in the same column of the table above. Likewise, the functions f and g are R-related if and only if f(x) = f(y) ⇔ g(x) = g(y) for x and y in {1, 2, 3}; such functions are in the same table row. Consequently, two functions are D-related if and only if their images are the same size. The elements in bold are the idempotents. Any H-class containing one of these is a (maximal) subgroup. In particular, the third D-class is isomorphic to the symmetric group S3. There are also six subgroups of order 2, and three of order 1 (as well as subgroups of these subgroups). Six elements of T3 are not in any subgroup. Generalisations There are essentially two ways of generalising an algebraic theory. One is to change its definitions so that it covers more or different objects; the other, more subtle way, is to find some desirable outcome of the theory and consider alternative ways of reaching that conclusion. Following the first route, analogous versions of Green's relations have been defined for semirings (Grillet 1970) and rings (Petro 2002). Some, but not all, of the properties associated with the relations in semigroups carry over to these cases. Staying within the world of semigroups, Green's relations can be extended to cover relative ideals, which are subsets that are only ideals with respect to a subsemigroup (Wallace 1963). For the second kind of generalisation, researchers have concentrated on properties of bijections between L- and R- classes. If x R y, then it is always possible to find bijections between Lx and Ly that are R-class-preserving. (That is, if two elements of an L-class are in the same R-class, then their images under a bijection will still be in the same R-class.) The dual statement for x L y also holds. These bijections are right and left translations, restricted to the appropriate equivalence classes. The question that arises is: how else could there be such bijections? Suppose that Λ and Ρ are semigroups of partial transformations of some semigroup S. Under certain conditions, it can be shown that if x Ρ = y Ρ, with x ρ1 = y and y ρ2 = x, then the restrictions ρ1 : Λ x → Λ y ρ2 : Λ y → Λ x are mutually inverse bijections. (Conventionally, arguments are written on the right for Λ, and on the left for Ρ.) Then the L and R relations can be defined by x L y if and only if Λ x = Λ y x R y if and only if x Ρ = y Ρ and D and H follow as usual. Generalisation of J is not part of this system, as it plays no part in the desired property. We call (Λ, Ρ) a Green's pair. There are several choices of partial transformation semigroup that yield the original relations. One example would be to take Λ to be the semigroup of all left translations on S1, restricted to S, and Ρ the corresponding semigroup of restricted right translations. These definitions are due to Clark and Carruth (1980). They subsume Wallace's work, as well as various other generalised definitions proposed in the mid-1970s. The full axioms are fairly lengthy to state; informally, the most important requirements are that both Λ and Ρ should contain the identity transformation, and that elements of Λ should commute with elements of Ρ. See also Schutzenberger group References C. E. Clark and J. H. Carruth (1980) Generalized Green's theories, Semigroup Forum 20(2);  95–127. A. H. Clifford and G. B. Preston (1961) The Algebraic Theory of Semigroups, volume 1, (1967) volume 2, American Mathematical Society, Green's relations are introduced in Chapter 2 of the first volume. J. A. Green (July 1951) "On the structure of semigroups", Annals of Mathematics (second series) 54(1): 163–172. John M. Howie (1976) An introduction to Semigroup Theory, Academic Press . An updated version is available as Fundamentals of Semigroup Theory, Oxford University Press, 1995. . John M. Howie (2002) "Semigroups, Past, Present and Future", Proceedings of the International Conference on Algebra and its Applications, Chulalongkorn University, Thailand Petraq Petro (2002) Green's relations and minimal quasi-ideals in rings, Communications in Algebra 30(10): 4677–4686. S. Satoh, K. Yama, and M. Tokizawa (1994) "Semigroups of order 8", Semigroup Forum 49: 7–29. Semigroup theory
Green's relations
[ "Mathematics" ]
2,677
[ "Semigroup theory", "Fields of abstract algebra", "Mathematical structures", "Algebraic structures" ]
1,192,305
https://en.wikipedia.org/wiki/Web%20accessibility
Web accessibility, or eAccessibility, is the inclusive practice of ensuring there are no barriers that prevent interaction with, or access to, websites on the World Wide Web by people with physical disabilities, situational disabilities, and socio-economic restrictions on bandwidth and speed. When sites are correctly designed, developed and edited, more users have equal access to information and functionality. For example, when a site is coded with semantically meaningful HTML, with textual equivalents provided for images and with links named meaningfully, this helps blind users using text-to-speech software and/or text-to-Braille hardware. When text and images are large and/or enlargeable, it is easier for users with poor sight to read and understand the content. When links are underlined (or otherwise differentiated) as well as colored, this ensures that color blind users will be able to notice them. When clickable links and areas are large, this helps users who cannot control a mouse with precision. When pages are not coded in a way that hinders navigation by means of the keyboard alone, or a single switch access device alone, this helps users who cannot use a mouse or even a standard keyboard. When videos are closed captioned, chaptered, or a sign language version is available, deaf and hard-of-hearing users can understand the video. When flashing effects are avoided or made optional, users prone to seizures caused by these effects are not put at risk. And when content is written in plain language and illustrated with instructional diagrams and animations, users with dyslexia and learning difficulties are better able to understand the content. When sites are correctly built and maintained, all of these users can be accommodated without decreasing the usability of the site for non-disabled users. The needs that web accessibility aims to address include: Visual: Visual impairments including blindness, various common types of low vision and poor eyesight, various types of color blindness; Motor/mobility: e.g. difficulty or inability to use the hands, including tremors, muscle slowness, loss of fine muscle control, etc., due to conditions such as Parkinson's disease, muscular dystrophy, cerebral palsy, stroke; Auditory: Deafness or hearing impairments, including individuals who are hard of hearing; Seizures: Photo epileptic seizures caused by visual strobe or flashing effects. Cognitive and intellectual: Developmental disabilities, learning difficulties (dyslexia, dyscalculia, etc.), and cognitive disabilities (PTSD, Alzheimer's) of various origins, affecting memory, attention, developmental "maturity", problem-solving and logic skills, etc. Accessibility is not confined to the list above, rather it extends to anyone who is experiencing any permanent, temporary or situational disability. Situational disability refers to someone who may be experiencing a boundary based on the current experience. For example, a person may be situationally one-handed if they are carrying a baby. Web accessibility should be mindful of users experiencing a wide variety of barriers. According to a 2018 WebAIM global survey of web accessibility practitioners, close to 93% of survey respondents received no formal schooling on web accessibility. Assistive technologies used for web browsing Individuals living with a disability use assistive technologies such as the following to enable and assist web browsing: Screen reader software such as Check Meister browser, which can read out, using synthesized speech, either selected elements of what is being displayed on the monitor (helpful for users with reading or learning difficulties), or which can read out everything that is happening on the computer (used by blind and vision impaired users). Braille terminals, consisting of a refreshable braille display which renders text as braille characters (usually by means of raising pegs through holes in a flat surface) and either a mainstream keyboard or a braille keyboard. Screen magnification software, which enlarges what is displayed on the computer monitor, making it easier to read for vision impaired users. Speech recognition software that can accept spoken commands to the computer, or turn dictation into grammatically correct text – useful for those who have difficulty using a mouse or a keyboard. Keyboard overlays, which can make typing easier or more accurate for those who have motor control difficulties. Access to subtitled or sign language videos for deaf people. Guidelines on accessible web design Web Content Accessibility Guidelines In 1999 the Web Accessibility Initiative, a project by the World Wide Web Consortium (W3C), published the Web Content Accessibility Guidelines WCAG 1.0. On 11 December 2008, the WAI released the WCAG 2.0 as a Recommendation. WCAG 2.0 aims to be up to date and more technology neutral. Though web designers can choose either standard to follow, the WCAG 2.0 have been widely accepted as the definitive guidelines on how to create accessible websites. Governments are steadily adopting the WCAG 2.0 as the accessibility standard for their own websites. In 2012, the Web Content Accessibility Guidelines were also published as an ISO/IEC standard: "ISO/IEC 40500:2012: Information technology – W3C Web Content Accessibility Guidelines (WCAG) 2.0". In 2018, the WAI released the WCAG 2.1 Recommendation that extends WCAG 2.0. Criticism of WAI guidelines There has been some criticism of the W3C process, claiming that it does not sufficiently put the user at the heart of the process. There was a formal objection to WCAG's original claim that WCAG 2.0 will address requirements for people with learning disabilities and cognitive limitations headed by Lisa Seeman and signed by 40 organizations and people. In articles such as "WCAG 2.0: The new W3C guidelines evaluated", "To Hell with WCAG 2.0" and "Testability Costs Too Much", the WAI has been criticised for allowing WCAG 1.0 to get increasingly out of step with today's technologies and techniques for creating and consuming web content, for the slow pace of development of WCAG 2.0, for making the new guidelines difficult to navigate and understand, and other argued failings. Essential components of web accessibility The accessibility of websites relies on the cooperation of several components: content – the information in a web page or web application, including natural information (such as text, images, and sounds) and code or markup that defines structure, presentation, etc. web browsers, media players, and other "user agents" assistive technology, in some cases – screen readers, alternative keyboards, switches, scanning software, etc. users' knowledge, experiences, and in some cases, adaptive strategies using the web developers – designers, coders, authors, etc., including developers with disabilities and users who contribute content authoring tools – software that creates websites evaluation tools – web accessibility evaluation tools, HTML validators, CSS validators, etc. Guidelines for different components Authoring Tool Accessibility Guidelines (ATAG) ATAG contains 28 checkpoints that provide guidance on: producing accessible output that meets standards and guidelines promoting the content author for accessibility-related information providing ways of checking and correcting inaccessible content integrating accessibility in the overall look and feel making the authoring tool itself accessible to people with disabilities Web Content Accessibility Guidelines (WCAG) WCAG 1.0: 14 guidelines that are general principles of accessible design WCAG 2.0: 4 principles that form the foundation for web accessibility; 12 guidelines (untestable) that are goals for which authors should aim; and 65 testable success criteria. The W3C's Techniques for WCAG 2.0 is a list of techniques that support authors to meet the guidelines and success criteria. The techniques are periodically updated whereas the principles, guidelines and success criteria are stable and do not change. User Agent Accessibility Guidelines (UAAG) UAAG contains a comprehensive set of checkpoints that cover: access to all content user control over how content is rendered user control over the user interface standard programming interfaces Web accessibility legislation Because of the growth in internet usage and its growing importance in everyday life, countries around the world are addressing digital access issues through legislation. One approach is to protect access to websites for people with disabilities by using existing human or civil rights legislation. Some countries, like the U.S., protect access for people with disabilities through the technology procurement process. It is common for nations to support and adopt the Web Content Accessibility Guidelines (WCAG) 2.0 by referring to the guidelines in their legislation. Compliance with web accessibility guidelines is a legal requirement primarily in North America, Europe, parts of South America and parts of Asia. Argentina Law 26.653 on Accessibility to Information on Web Pages. Approved by the National Congress of Argentina on November 3, 2010. It specifies in its Article 1 that both the National State and its decentralized organisms or those companies that are related in any way with public services or goods, must respect the rules and requirements on accessibility in the design of their web pages. The objective is to facilitate access to contents to all persons with disabilities, in order to guarantee equal opportunities in relation to access to information and to avoid discrimination. In addition, by Decree 656/2019 the regulation of the aforementioned Law No. 26,653 is approved and it is reported that the authority in charge of its application will be the ONTI, "Oficina Nacional de Tecnologías de Información" (National Office of Information Technologies). This agency is in charge of assisting and/or advising the individuals and legal entities reached by this Law; in addition to disseminating, approving/updating and also controlling the fulfillment of the accessibility standards and requirements of the web pages; among other functions. Australia In 2000, an Australian blind man won a $20,000 court case against the Sydney Organising Committee of the Olympic Games (SOCOG). This was the first successful case under Disability Discrimination Act 1992 because SOCOG had failed to make their official website, Sydney Olympic Games, adequately accessible to blind users. The Human Rights and Equal Opportunity Commission (HREOC) also published World Wide Web Access: Disability Discrimination Act Advisory Notes. All Governments in Australia also have policies and guidelines that require accessible public websites. Brazil In Brazil, the federal government published a paper with guidelines for accessibility on 18 January 2005, for public reviewing. On 14 December of the same year, the second version was published, including suggestions made to the first version of the paper. On 7 May 2007, the accessibility guidelines of the paper became compulsory to all federal websites. The current version of the paper, which follows the WCAG 2.0 guidelines, is named e-MAG, Modelo de de Governo Eletrônico (Electronic Government Accessibility Model), and is maintained by Brazilian Ministry of Planning, Budget, and Management. The paper can be viewed and downloaded at its official website. Canada In 2011, the Government of Canada began phasing in the implementation of a new set of web standards that are aimed at ensuring government websites are accessible, usable, interoperable and optimized for mobile devices. These standards replace Common Look and Feel 2.0 (CLF 2.0) Standards for the Internet. The first of these four standards, Standard on Web Accessibility came into full effect on 31 July 2013. The Standard on Web Accessibility follows the Web Content Accessibility Guidelines (WCAG) 2.0 AA, and contains a list of exclusions that is updated annually. It is accompanied by an explicit Assessment Methodology that helps government departments comply. The government also developed the Web Experience Toolkit (WET), a set of reusable web components for building innovative websites. The WET helps government departments build innovative websites that are accessible, usable and interoperable and therefore comply with the government's standards. The WET toolkit is open source and available for anyone to use. The three related web standards are: the Standard on Optimizing Websites and Applications for Mobile Devices, the Standard on Web Usability and the Standard on Web Interoperability. In 2019 the Government of Canada passed the Accessible Canada Act. This builds on the on provincial legislation like the Accessibility for Ontarians with Disabilities Act, The Accessibility for Manitobans Act and the Nova Scotia Accessibility Act. European Union In February 2014 a draft law was endorsed by the European Parliament stating that all websites managed by public sector bodies have to be made accessible to everyone. A European Commission Communication on eAccessibility was published on 13 September 2005. The commission's aim to "harmonise and facilitate the public procurement of accessible ICT products and services" was embedded in a mandate issued to CEN, CENELEC and ETSI in December 2005, reference M 376. A mandate is a request for the drafting and adoption of a European standard or European standardisation deliverables issued to one or more of the European standardisation organisations. Mandates are usually accepted by the standardisation organisation because they are based on preliminary consultation, although technically the organisation is independent and has a right to decline the mandate. The mandate also called for the development of an electronic toolkit for public procurers enabling them to have access to the resulting harmonised requirements. The commission also noted that the harmonised outcome, while intended for public procurement purposes, might also be useful for procurement in the private sector. On 26 October 2016, the European Parliament approved the Web Accessibility Directive, which requires that the websites and mobile apps of public sector bodies be accessible. The relevant accessibility requirements are described in the European standard EN 301 549 V3.2.1 (published by ETSI). EU member states were expected to bring into force by 23 September 2018 laws and regulations that enforce the relevant accessibility requirements. websites of public sector bodies should comply by 23 September 2018; mobile apps by 23 June 2021. Some categories of websites and apps are excepted from the directive, for example "websites and mobile applications of public service broadcasters and their subsidiaries". The European Commission's "Rolling Plan for ICT Standardisation 2017" notes that ETSI standard EN 301 549 V1.1.2 will need to be updated to add accessibility requirements for mobile applications and evaluation methodologies to test compliance with the standard. In 2019 the European Union introduced the European Accessibility Act, as one of the leading pieces of legislation for digital accessibility and digital inclusion. The European Accessibility Act (EAA), which will enter into force on 28 June 2025, requiring companies to ensure that the newly marketed products and services covered by the Act are accessible. All websites will need to adhere to the WCAG Principles of Perceivable, Operable, Understandable and Robust, and deliver comparative levels of user experience to disabled customers. As of June 28, 2025, customers will be able to file complaints before national courts or authorities if services or products do not respect the new rules. India In India, National Informatics Centre (NIC), under Ministry of Electronics and Information Technology (MeitY) has passed Guidelines for Indian Government Websites (GIGW) for government agencies in 2009, compelling them to adhere to WCAG 2.0 Level A standards. Ministry of Electronics and Information Technology (MeitY) has National Policy on Universal Electronic Accessibility clearly stated, Accessibility Standards and Guidelines be formulated or adapted from prevailing standards in the domain including World Wide Web Consortium accessibility Web standards and guidelines such as Authoring Tool Accessibility Guidelines (ATAG), Web Content Accessibility Guidelines (WCAG 2.0) and User Agent Accessibility Guidelines (UAAG). GIGW aims to ensure the quality and accessibility of government guidelines by offering guidance on desirable practices covering the entire lifecycle of websites, web portals and web applications, right from conceptualization and design to their development, maintenance and management. The Department of Administrative Reforms and Public Grievances made the same a part of the Central Secretariat Manual of Office Procedure. GIGW 3.0 also significantly enhances the guidance on the accessibility and usability of mobile apps, especially by offering specific guidance to government organizations on how to leverage public digital infrastructure devised for whole-of-government delivery of services, benefits and information. The Rights of Persons with Disabilities Act, 2016 (RPwD) passed in parliament. The law replaced earlier legislation and provided clearer guidance for digital accessibility. The RPwD Act, 106 through Sections 40-46 mandates accessibility to be ensured in all public-centric buildings, transportation systems, Information and Communication Technology (ICT) services, consumer products and all other services being provided by the Government or other service providers. Ireland In Ireland, the Disability Act 2005 requires that where a public body communicates in electronic form with one or more persons, the contents of the communication must be, as far as practicable, "accessible to persons with a visual impairment to whom adaptive technology is available" (Section 28(2)). The National Disability Authority has produced a Code of Practice giving guidance to public bodies on how to meet the obligations of the Act. This is an approved code of practice and its provisions have the force of legally binding statutory obligations. It states that a public body can achieve compliance with Section 28(2) by "reviewing existing practices for electronic communications in terms of accessibility against relevant guidelines and standards", giving the example of "Double A conformance with the Web Accessibility Initiative's (WAI) Web Content Accessibility Guidelines (WCAG)". Israel The Israeli Ministry of Justice recently published regulations requiring Internet websites to comply with Israeli standard 5568, which is based on the W3C Web Content Accessibility Guidelines 2.0. The main differences between the Israeli standard and the W3C standard concern the requirements to provide captions and texts for audio and video media. The Israeli standards are somewhat more lenient, reflecting the current technical difficulties in providing such captions and texts in Hebrew. Italy In Italy, web accessibility is ruled by the so-called "Legge Stanca" (Stanca Act), formally Act n.4 of 9 January 2004, officially published on the Gazzetta Ufficiale on 17 January 2004. The original Stanca Act was based on the WCAG 1.0. On 20 March 2013 the standards required by the Stanca Act were updated to the WCAG 2.0. Japan Web Content Accessibility Guidelines in Japan were established in 2004 as JIS (Japanese Industrial Standards) X 8341–3. JIS X 8341-3 was revised in 2010 as JIS X 8341-3:2010 to encompass WCAG 2.0, and it was revised in 2016 as JIS X 8341-3:2016 to be identical standards with the international standard ISO/IEC 40500:2012. The Japanese organization WAIC (Web Accessibility Infrastructure Committee) has published the history and structure of JIS X 8341-3:2016. Malta In Malta Web Content Accessibility assessments were carried out by the Foundation for Information Technology Accessibility (FITA) since 2003. Until 2018, this was done in conformance with the requirements of the Equal Opportunities Act (2000) CAP 43 and applied WACG guidelines. With the advent of the EU Web Accessibility Directive the Malta Communications Authority was charged with ensuring the accessibility of online resources owned by Maltese public entities. FITA continues to provide ICT accessibility assessments to public and commercial entities, applying standard EN301549 and WCAG 2.1 as applicable. Therefore, both the Equal Opportunities Act anti-discrimination legislation and the transposed EU Web Accessibility Directive are applicable to the Maltese scenario. Norway In Norway, web accessibility is a legal obligation under the Act 20 June 2008 No 42 relating to a prohibition against discrimination on the basis of disability, also known as the Anti-discrimination Accessibility Act. The Act went into force in 2009, and the Ministry of Government Administration, Reform and Church Affairs [Fornyings-, administrasjons- og kirkedepartementet] published the Regulations for universal design of information and communication technology (ICT) solutions [Forskrift om universell utforming av informasjons- og kommunikasjonsteknologiske (IKT)-løsninger] in 2013. The regulations require compliance with Web Content Accessibility Guidelines 2.0 (WCAG 2.0) / NS / ISO / IEC 40500: 2012, level A and AA with some exceptions. The Norwegian Agency for Public Management and eGovernment (Difi) is responsible for overseeing that ICT solutions aimed at the general public are in compliance with the legislative and regulatory requirements. Philippines As part of the Web Accessibility Initiatives in the Philippines, the government through the National Council for the Welfare of Disabled Persons (NCWDP) board approved the recommendation of forming an ad hoc or core group of webmasters that will help in the implementation of the Biwako Millennium Framework set by the UNESCAP. The Philippines was also the place where the Interregional Seminar and Regional Demonstration Workshop on Accessible Information and Communications Technologies (ICT) to Persons with Disabilities was held where eleven countries from Asia – Pacific were represented. The Manila Accessible Information and Communications Technologies Design Recommendations was drafted and adopted in 2003. Spain In Spain, UNE 139803:2012 is the norm entrusted to regulate web accessibility. This standard is based on Web Content Accessibility Guidelines 2.0. Sweden In Sweden, Verva, the Swedish Administrative Development Agency is responsible for a set of guidelines for Swedish public sector web sites. Through the guidelines, web accessibility is presented as an integral part of the overall development process and not as a separate issue. The Swedish guidelines contain criteria which cover the entire life cycle of a website; from its conception to the publication of live web content. These criteria address several areas which should be considered, including: accessibility usability web standards privacy issues information architecture developing content for the web Content Management Systems (CMS) / authoring tools selection. development of web content for mobile devices. An English translation was released in April 2008: Swedish National Guidelines for Public Sector Websites. The translation is based on the latest version of Guidelines which was released in 2006. United Kingdom In the UK, the Equality Act 2010 does not refer explicitly to website accessibility, but makes it illegal to discriminate against people with disabilities. The Act applies to anyone providing a service; public, private and voluntary sectors. The Code of Practice: Rights of Access – Goods, Facilities, Services and Premises document published by the government's Equality and Human Rights Commission to accompany the Act does refer explicitly to websites as one of the "services to the public" which should be considered covered by the Act. In December 2010 the UK released the standard BS 8878:2010 Web accessibility. Code of practice. This standard effectively supersedes PAS 78 (pub. 2006). PAS 78, produced by the Disability Rights Commission and usable by disabled people. The standard has been designed to introduce non-technical professionals to improved accessibility, usability and user experience for disabled and older people. It will be especially beneficial to anyone new to this subject as it gives guidance on process, rather than on technical and design issues. BS 8878 is consistent with the Equality Act 2010 and is referenced in the UK government's e-Accessibility Action Plan as the basis of updated advice on developing accessible online services. It includes recommendations for: Involving disabled people in the development process and using automated tools to assist with accessibility testing The management of the guidance and process for upholding existing accessibility guidelines and specifications. BS 8878 is intended for anyone responsible for the policies covering web product creation within their organization, and governance against those policies. It additionally assists people responsible for promoting and supporting equality and inclusion initiatives within organizations and people involved in the procurement, creation or training of web products and content. A summary of BS 8878 is available to help organisations better understand how the standard can help them embed accessibility and inclusive design in their business-as-usual processes. On 28 May 2019, BS 8878 was superseded by ISO 30071-1, the international Standard that built on BS 8878 and expanded it for international use. A summary of how ISO 30071-1 relates to BS 8878 is available to help organisations understand the new Standard. On April 9, National Rail replaced its blue and white aesthetic with a black and white theme, which was criticized for not conforming to the Web Content Accessibility Guidelines. The company restored the blue and white theme and said it is investing in modernising its website in accords to the latest accessibility guidelines. In 2019 new accessibility regulations came into force setting a legal duty for public sector bodies to publish accessibility statements and make their websites accessible by 23 September 2020 Accessibility statements include information about how the website was tested and the organisation's plan to fix any accessibility problems. Statements should be published and linked to on every page on the website. United States In the United States, Section 508 Amendment to the Rehabilitation Act of 1973 requires all Federal agencies' electronic and information technology to be accessible to those with disabilities. Both members of the public and federal employees have the right to access this technology, such as computer hardware and software, websites, phone systems, and copiers. Also, Section 504 of the Rehabilitation Act prohibits discrimination on the basis of disability for entities receiving federal funds and has been cited in multiple lawsuits against organizations such as hospitals that receive federal funds through medicare/medicaid. In addition, Title III of the Americans with Disabilities Act (ADA) prohibits discrimination on the basis of disability. There is some debate on the matter; multiple courts and the U.S. Department of Justice have taken the position that the ADA requires website and app operators and owners to take affirmative steps to make their websites and apps accessible to disabled persons and compatible with common assistive technologies such as the JAWS screen reader, while other courts have taken the position that the ADA does not apply online. The U.S. Department of Justice has endorsed the WCAG2.0AA standard as an appropriate standard for accessibility in multiple settlement agreements. Numerous lawsuits challenging websites and mobile apps on the basis of the ADA have been filed since 2017. These cases appears spurred by a 2017 case, Gil v. Winn Dixie Stores, in which a federal court in Florida ruled that Winn Dixie's website must be accessible. Around 800 cases related to web accessibility were filed in 2017, and over 2,200 were filed in 2018. Additionally, though the Justice Department had stated in 2010 that they would publish guidelines for web accessibility, they reversed this plan in 2017, also spurring legal action against inaccessible sites. A notable lawsuit related to the ADA was filed against Domino's Pizza by a blind user who could not use Domino's mobile app. At the federal district level, the court ruled in favor of Domino's as the Justice Department had not established the guidelines for accessibility, but this was appealed to the Ninth Circuit. The Ninth Circuit overruled the district court, ruling that because Domino's is a brick-and-mortar store, which must meet the ADA, and the mobile app an extension of their services, their app must also be compliant with the ADA. Domino's petitioned to the Supreme Court, backed by many other restaurants and retail chains, arguing that this decision impacts their Due Process since disabled customers have other, more accessible means to order. In October 2019, the Supreme Court declined to hear the case, which effectively upheld the decision of the 9th Circuit Court and requires the case to be heard as it stands. The number and cost of federal accessibility lawsuits has risen dramatically in the last few years. Website accessibility audits A growing number of organizations, companies and consultants offer website accessibility audits. These audits, a type of system testing, identify accessibility problems that exist within a website, and provide advice and guidance on the steps that need to be taken to correct these problems. A range of methods are used to audit websites for accessibility: Automated tools such as the Check Meister website evaluation tool are available which can identify some of the problems that are present. Depending on the tool the result may vary widely making it difficult to compare test results. Expert technical reviewers, knowledgeable in web design technologies and accessibility, can review a representative selection of pages and provide detailed feedback and advice based on their findings. User testing, usually overseen by technical experts, involves setting tasks for ordinary users to carry out on the website, and reviewing the problems these users encounter as they try to carry out the tasks. Each of these methods has its strengths and weaknesses: Automated tools can process many pages in a relatively short length of time, but can only identify a limited portion of the accessibility problems that might be present in the website. Technical expert review will identify many of the problems that exist, but the process is time-consuming, and many websites are too large to make it possible for a person to review every page. User testing combines elements of usability and accessibility testing, and is valuable for identifying problems that might otherwise be overlooked, but needs to be used knowledgeably to avoid the risk of basing design decisions on one user's preferences. Ideally, a combination of methods should be used to assess the accessibility of a website. Remediating inaccessible websites Once an accessibility audit has been conducted, and accessibility errors have been identified, the errors will need to be remediated in order to ensure the site is compliant with accessibility errors. The traditional way of correcting an inaccessible site is to go back into the source code, reprogram the error, and then test to make sure the bug was fixed. If the website is not scheduled to be revised in the near future, that error (and others) would remain on the site for a lengthy period of time, possibly violating accessibility guidelines. Because this is a complicated process, many website owners choose to build accessibility into a new site design or re-launch, as it can be more efficient to develop the site to comply with accessibility guidelines, rather than to remediate errors later. With the progress in AI technology, web accessibility has become more accessible. With 3rd party add-ons that leverage AI and machine learning, it is possible to offer changes to the website design without altering the source code. This way, a website can be accessible to different types of users without the need to adjust the website for every accessibility equipment. Accessible Web applications and WAI-ARIA For a web page to be accessible all important semantics about the page's functionality must be available so that assistive technology can understand and process the content and adapt it for the user. However, as content becomes more and more complex, the standard HTML tags and attributes become inadequate in providing semantics reliably. Modern Web applications often apply scripts to elements to control their functionality and to enable them to act as a control or other dynamic component. These custom components or widgets do not provide a way to convey semantic information to the user agent. WAI-ARIA (Accessible Rich Internet Applications) is a specification published by the World Wide Web Consortium that specifies how to increase the accessibility of dynamic content and user interface components developed with Ajax, HTML, JavaScript and related technologies. ARIA enables accessibility by enabling the author to provide all the semantics to fully describe its supported behaviour. It also allows each element to expose its current states and properties and its relationships between other elements. Accessibility problems with the focus and tab index are also corrected. Neurological UX Neurological UX is a specialised branch of web accessibility aimed at designing digital experiences that cater to individuals with neurological dispositions such as ADHD, dyslexia, autism spectrum disorder (ASD), and anxiety. Coined by Gareth Slinn in his book NeurologicalUX neurologicalux.com, this approach goes beyond conventional accessibility by addressing cognitive, emotional, and behavioural needs. Neurological UX focuses on creating interfaces that reduce cognitive load, support diverse ways of thinking, and accommodate challenges in executive functioning. Core principles include: Clarity and Simplicity: Streamlining interfaces to reduce distractions and enhance focus for users with ADHD and similar conditions. Cognitive Support: Offering features like tooltips, hover states, or progressive disclosures to help users with memory and information processing challenges, such as those with dyslexia or traumatic brain injuries. Emotionally Comfortable Design: Using calming color schemes, predictable navigation, and consistent layouts to reduce anxiety for users prone to stress. Flexible Interaction Models: Providing adjustable settings for font size, spacing, and contrast to suit the needs of users with dyslexia, visual stress, or sensory processing disorders. Intuitive Feedback: Ensuring interactive elements provide clear, immediate feedback to accommodate difficulties with impulse control and decision-making. Minimising Overstimulation: Avoiding overly busy layouts, autoplay media, or complex animations that can overwhelm users with ASD or ADHD. By prioritising usability and emotional well-being, Neurological UX seeks to create inclusive digital experiences that empower all users, regardless of their cognitive or neurological profile. This approach not only improves accessibility compliance but also fosters a more equitable and human-centered web. See also Accessible publishing Augmentative and alternative communication Blue Beanie Day Computer accessibility Device independence Digital divide European Internet Accessibility Observatory Knowbility Maguire v Sydney Organising Committee for the Olympic Games (2000) Multimodal interaction Neurologicalux Progressive enhancement Universal design Web Accessibility Initiative Web engineering Web interoperability Web literacy References Further reading External links How To Design For Accessibility (BBC) Inclusive Design Principles Apple Developer Accessibility Resources BBC GEL Technical Accessibility Guides Neurological UX Google Developer Accessibility Resources Microsoft Developer Accessibility Resources W3C WCAG Developer Accessibility Resources A Curated List of Awesome Accessibility Tools, Articles, and Resources ADA Compliance For Websites Checklist Standards and guidelines W3C – Web Accessibility Initiative (WAI) W3C – Web Content Accessibility Guidelines (WCAG) 2.0 Equality and Human Rights Commission: PAS 78: a guide to good practice in commissioning accessible websites (which BS 8878 supersedes) European Union – Unified Web Evaluation Methodology 1.2 University of Illinois iCITA HTML Accessibility Best Practices BBC GEL Product Accessibility Guidelines BBC GEL Subtitles (Captions) Guidelines BBC Editorial Accessibility Guide (Online and TV) Accessible information Web design Usability
Web accessibility
[ "Engineering" ]
6,914
[ "Design", "Web design" ]
1,192,503
https://en.wikipedia.org/wiki/Nereocystis
Nereocystis (Greek, 'mermaid's bladder') is a monotypic genus of subtidal kelp containing the species Nereocystis luetkeana. Some English names include edible kelp, bull kelp, bullwhip kelp, ribbon kelp, bladder wrack, and variations of these names. Due to the English name, bull kelp can be confused with southern bull kelps, which are found in the Southern Hemisphere. Nereocystis luetkeana forms thick beds on subtidal rocks, and is an important part of kelp forests. Etymology The species Nereocystis luetkeana was named (as Fucus luetkeanus) after the German-Russian explorer Fyodor Petrovich Litke (also spelled Lütke) by Mertens. The species was renamed in a description by Postels and Ruprecht. Description Nereocystis is a brown macroalgae that derives chemical energy from photosynthesis. Nereocystis in particular, similar to Pelagophycus porra, can be identified by a single large pneumatocyst between the end of its hollow stipe and the blades. Individuals can grow to a maximum of . Nereocystis has a holdfast of about , and a single stipe, topped with a pneumatocyst containing carbon monoxide, from which sprout the numerous (about 30-64) blades. The blades may be up to long, and up to wide. It is usually annual, sometimes persisting up to 18 months. Nereocystis is the only kelp which will drop spore patches, so that the right concentration of spores lands near the parent's holdfast. The thallus of this common canopy-forming kelp has a richly branched holdfast (haptera) and a cylindrical stipe 10–36 m (33–118 ft) long. The stipe terminates in a single, gas-filled pneumatocyst from which many blades grow. Each blade can grow up to 10 m (33 ft) long, and blade growth can reach 15 cm (5.9 in) per day. Nereocystis grows in areas where Pterygophora californica also inhabits. Bull kelp will often grow on the stipe of Pterygophora, with up anywhere from 10 to 20 individuals of Nereocystis attaching to a single Pterygophora stipe. Reproduction Reproduction in Nereocystis is characterized by an alternation of generations. The diploid generation is the recognizable macroscopic sporophyte. During sexual reproduction, reproductive patches (sori) develop on the blades of the sporophyte and drop to the seafloor at maturity. The sori release haploid spores, which become the microscopic gametophytes. The gametophytes produce gametes, and if fertilization occurs, a new sporophyte organism may develop and begin to grow up from the seafloor. Distribution The species is common along the Pacific Coast of North America, from Southern California to the Aleutian Islands, Alaska. However, drift individuals disperse with ocean currents further south into northwest Baja California, Mexico. Offshore beds can persist for one or many years, usually in deeper water than Eualaria or Macrocystis, where they co-occur. This annual kelp grows on rock from the low intertidal to subtidal zones; it prefers semi-exposed habitats or high-current areas. It also does not grow in areas with breaking waves or swells. Its distribution is limited by the requirement of light for photosynthesis, and preference for areas of high water movement where the microscopic gametophyte stage will not be covered by sediment. Other factors such as salinity, turbidity and water temperature can affect Nereocystis distribution. Nereocystis tends to thrive in temperatures ranging from 5 to 20 degrees Celsius. It is rarely found in environments with high turbidity and low salinity. Nereocystis fails to thrive in areas of reduced salinity, such as brackish estuarine waters, because it has difficulty adjusting to changes in salinity. The increased turbidity of such waters also decreases light available for photosynthesis, limiting its growth. Additionally, disease, competition, and herbivory can affect distribution. Ecology Nereocystis, like other large, canopy forming kelps, play a crucial role in maintaining the biologically diverse kelp forests in the temperate marine environments where they flourish. Its fast growth and size provide an important habitat not only for the fish and invertebrates that reside in kelp forests, but also for species that use kelp forests as foraging grounds. In bull kelp forests, kelp crabs are important grazers that control the ecosystem by feeding on large canopy kelps such as Nereocystis. Microbial communities Nereocystis fosters microbacteria species, affecting the ecology on a microscopic level. These microbial bacteria species foster the growth of seaweed, producing growth-promoting substances. According to studies by Weigel, the microbial communities that grow on Nereocystis are composed mostly of Proteobacteria, Bacteroidetes, Verrucomicrobia, and Planctomycetes. Nereocystis is unique in that it contains a large percentage of Verrucomicrobia, with it composing approximately 10% of microbacteria populations on Nereocystis. Human effects Abalone mariculture (the commercial farming and harvest of abalone) and an increasing demand in human consumption have led to a notable and marked increase in Nereocystis extraction. This extraction is done by hand and removes the top two meters of the forest. These first two meters contain bull kelp's pneumatocysts and its reproductive organs, so this method of extraction destroys kelp forests that depend on Nereocystis. Since bull kelp tend to only reproduce once a year, removal of these organs renders Nereocystis unable to reproduce. The tissues of bull kelp are processed and turned into liquid fertilizer as well as food for abalones. Human uses Nereocystis was not commercially harvested off the coast of California until around the 1980s. The beginning of this harvest is attributed to the Abalone International company, which was seeking mariculture expansion and efficiency. Kelp harvesters are legally mandated to record every aspect of their harvest, including but not limited to the amount of kelp, the species, and the location where it was taken from. Kelp is currently harvested from the Californian coast, Oregon, Washington, British Columbia, and Alaska. Human uses of Nereocystis include consumption and agriculture. It is pickled and eaten as a delicacy as well as used for creative purposes. In South Korea, Nereocystis used to make miyeok-guk (Korean kelp soup) weekly by new mothers as it's revered as a blood-cleanser. It is also customary to eat it on one's birthday. References Further reading External links Nereocystis luetkeana (K.Mertens) Postels & Ruprecht on Algaebase Decews Guide Laminariaceae Flora of the Pacific Marine biota of North America Flora of Alaska Flora of California Flora of the West Coast of the United States Edible algae Laminariales genera Monotypic brown algae genera Flora without expected TNC conservation status Biota of the Temperate Northern Pacific
Nereocystis
[ "Biology" ]
1,593
[ "Edible algae", "Algae" ]
105,223
https://en.wikipedia.org/wiki/Homocysteine
Homocysteine (; symbol Hcy) is a non-proteinogenic α-amino acid. It is a homologue of the amino acid cysteine, differing by an additional methylene bridge (-CH2-). It is biosynthesized from methionine by the removal of its terminal Cε methyl group. In the body, homocysteine can be recycled into methionine or converted into cysteine with the aid of vitamin B6, B9, and B12. High levels of homocysteine in the blood (hyperhomocysteinemia) is regarded as a marker of cardiovascular disease, likely working through atherogenesis, which can result in ischemic injury. Therefore, hyperhomocysteinemia is a possible risk factor for coronary artery disease. Coronary artery disease occurs when an atherosclerotic plaque blocks blood flow to the coronary arteries, which supply the heart with oxygenated blood. Hyperhomocysteinemia has been correlated with the occurrence of blood clots, heart attacks, and strokes, although it is unclear whether hyperhomocysteinemia is an independent risk factor for these conditions. Hyperhomocysteinemia has also been associated with early-term spontaneous abortions and with neural tube defects. Structure Homocysteine exists at neutral pH values as a zwitterion. Biosynthesis and biochemical roles Homocysteine is biosynthesized naturally via a multi-step process. First, methionine receives an adenosine group from ATP, a reaction catalyzed by S-adenosyl-methionine synthetase, to give S-adenosyl methionine (SAM-e). SAM-e then transfers the methyl group to an acceptor molecule, (e.g., norepinephrine as an acceptor during epinephrine synthesis, DNA methyltransferase as an intermediate acceptor in the process of DNA methylation). The adenosine is then hydrolyzed to yield L-homocysteine. L-Homocysteine has two primary fates: conversion via tetrahydrofolate (THF) back into L-methionine or conversion to L-cysteine. Biosynthesis of cysteine Mammals biosynthesize the amino acid cysteine via homocysteine. Cystathionine β-synthase catalyses the condensation of homocysteine and serine to give cystathionine. This reaction uses pyridoxine (vitamin B6) as a cofactor. Cystathionine γ-lyase then converts this double amino acid to cysteine, ammonia, and α-ketobutyrate. Bacteria and plants rely on a different pathway to produce cysteine, relying on O-acetylserine. Methionine salvage Homocysteine can be recycled into methionine. This process uses N5-methyl tetrahydrofolate as the methyl donor and cobalamin (vitamin B12)-related enzymes. More detail on these enzymes can be found in the article for methionine synthase. Other reactions of biochemical significance Homocysteine can cyclize to give homocysteine thiolactone, a five-membered heterocycle. Because of this "self-looping" reaction, homocysteine-containing peptides tend to cleave themselves by reactions generating oxidative stress. Homocysteine also acts as an allosteric antagonist at Dopamine D2 receptors. It has been proposed that both homocysteine and its thiolactone may have played a significant role in the appearance of life on the early Earth. Homocysteine levels Homocysteine levels typically are higher in men than women, and increase with age. Common levels in Western populations are 10 to 12 μmol/L, and levels of 20 μmol/L are found in populations with low B-vitamin intakes or in the elderly (e.g., Rotterdam, Framingham). It is decreased with methyl folate trapping, where it is accompanied by decreased methylmalonic acid, increased folate, and a decrease in formiminoglutamic acid. This is the opposite of MTHFR C677T mutations, which result in an increase in homocysteine. The ranges above are provided as examples only; test results always should be interpreted using the range provided by the laboratory that produced the result. Elevated homocysteine Abnormally high levels of homocysteine in the serum, above 15 μmol/L, are a medical condition called hyperhomocysteinemia. This has been claimed to be a significant risk factor for the development of a wide range of diseases, in total more than 100 including thrombosis, neuropsychiatric illness, in particular dementia and fractures. It also is found to be associated with microalbuminuria, which is a strong indicator of the risk of future cardiovascular disease and renal dysfunction. Vitamin B12 deficiency, even when coupled with high serum folate levels, has been found to increase overall homocysteine concentrations as well. Typically, hyperhomocysteinemia is managed with vitamin B6, vitamin B9, and vitamin B12 supplementation. However, supplementation with these vitamins does not appear to improve cardiovascular disease outcomes. References External links Homocysteine MS Spectrum Homocysteine at Lab Tests Online Prof. David Spence on homocysteine levels, kidney damage, and cardiovascular disease, The Health Report, Radio National, 24 May 2010 Alpha-Amino acids Sulfur amino acids Thiols Non-proteinogenic amino acids Excitatory amino acids
Homocysteine
[ "Chemistry" ]
1,211
[ "Organic compounds", "Thiols" ]
105,328
https://en.wikipedia.org/wiki/Hemerythrin
Hemerythrin (also spelled haemerythrin; , ) is an oligomeric protein responsible for oxygen (O2) transport in the marine invertebrate phyla of sipunculids, priapulids, brachiopods, and in a single annelid worm genus, Magelona. Myohemerythrin is a monomeric O2-binding protein found in the muscles of marine invertebrates. Hemerythrin and myohemerythrin are essentially colorless when deoxygenated, but turn a violet-pink in the oxygenated state. Hemerythrin does not, as the name might suggest, contain a heme. The names of the blood oxygen transporters hemoglobin, hemocyanin, and hemerythrin do not refer to the heme group (only found in globins). Instead, these names are derived from the Greek word for blood. Hemerythrin may also contribute to innate immunity and anterior tissue regeneration in certain worms. O2 binding mechanism The mechanism of dioxygen binding is unusual. Most O2 carriers operate via formation of dioxygen complexes, but hemerythrin holds the O2 as a hydroperoxide (HO2, or -OOH−). The site that binds O2 consists of a pair of iron centres. The iron atoms are bound to the protein through the carboxylate side chains of a glutamate and aspartates as well as through five histidine residues. Hemerythrin and myohemerythrin are often described according to oxidation and ligation states of the iron center: The uptake of O2 by hemerythrin is accompanied by two-electron oxidation of the diferrous centre to produce a hydroperoxide (OOH−) complex. The binding of O2 is roughly described in this diagram: Deoxyhemerythrin contains two high-spin ferrous ions bridged by hydroxyl group (A). One iron is hexacoordinate and another is pentacoordinate. A hydroxyl group serves as a bridging ligand but also functions as a proton donor to the O2 substrate. This proton-transfer result in the formation of a single oxygen atom (μ-oxo) bridge in oxy- and methemerythrin. O2 binds to the pentacoordinate Fe2+ centre at the vacant coordination site (B). Then electrons are transferred from the ferrous ions to generate the binuclear ferric (Fe3+,Fe3+) centre with bound peroxide (C). Quaternary structure and cooperativity Hemerythrin typically exists as a homooctamer or heterooctamer composed of α- and β-type subunits of 13–14 kDa each, although some species have dimeric, trimeric and tetrameric hemerythrins. Each subunit has a four-α-helix fold binding a binuclear iron centre. Because of its size hemerythrin is usually found in cells or "corpuscles" in the blood rather than free floating. Unlike hemoglobin, most hemerythrins lack cooperative binding to oxygen, making it roughly 1/4 as efficient as hemoglobin. In some brachiopods though, hemerythrin shows cooperative binding of O2. Cooperative binding is achieved by interactions between subunits: the oxygenation of one subunit increases the affinity of a second unit for oxygen. Hemerythrin affinity for carbon monoxide (CO) is actually lower than its affinity for O2, unlike hemoglobin which has a very high affinity for CO. Hemerythrin's low affinity for CO poisoning reflects the role of hydrogen-bonding in the binding of O2, a pathway mode that is incompatible with CO complexes which usually do not engage in hydrogen bonding. Hemerythrin/HHE cation-binding domain The hemerythrin/HHE cation-binding domain occurs as a duplicated domain in hemerythrins, myohemerythrins and related proteins. This domain binds iron in hemerythrin, but can bind other metals in related proteins, such as cadmium in the Nereis diversicolor hemerythrin. It is also found in the NorA protein from Cupriavidus necator, this protein is a regulator of response to nitric oxide, which suggests a different set-up for its metal ligands. A protein from Cryptococcus neoformans (Filobasidiella neoformans) that contains haemerythrin/HHE cation-binding domains is also involved in nitric oxide response. A Staphylococcus aureus protein containing this domain, iron-sulfur cluster repair protein ScdA, has been noted to be important when the organism switches to living in environments with low oxygen concentrations; perhaps this protein acts as an oxygen store or scavenger. Hemerythrin/HHE (H-HxxxE-HxxxH-HxxxxD) proteins found in bacteria are implicated in signal transduction and chemotaxis. More distantly related ones include H-HxxxE-H-HxxxE proteins (including the E3 ligase) and animal F-box proteins (H-HExxE-H-HxxxE). References Further reading External links 1HMD - PDB structure of deoxyhemerythrin Themiste dyscrita (sipunculid worm) 1HMO – PDB structure of oxyhemerythrin from Themiste dyscrita 2MHR – PDB structure of azido-met myohemerythrin from Themiste zostericola (sipunculid worm) IPR002063 – InterPro entry for hemerythrin Protein domains Metalloproteins Iron compounds Respiratory pigments
Hemerythrin
[ "Chemistry", "Biology" ]
1,246
[ "Protein domains", "Metalloproteins", "Bioinorganic chemistry", "Protein classification" ]
105,340
https://en.wikipedia.org/wiki/Strange%20quark
The strange quark or s quark (from its symbol, s) is the third lightest of all quarks, a type of elementary particle. Strange quarks are found in subatomic particles called hadrons. Examples of hadrons containing strange quarks include kaons (), strange D mesons (), Sigma baryons (), and other strange particles. According to the IUPAP, the symbol s is the official name, while "strange" is to be considered only as a mnemonic. The name sideways has also been used because the s quark (but also the other three remaining quarks) has an I value of 0 while the u ("up") and d ("down") quarks have values of + and − respectively. Along with the charm quark, it is part of the second generation of matter. It has an electric charge of  e and a bare mass of . Like all quarks, the strange quark is an elementary fermion with spin , and experiences all four fundamental interactions: gravitation, electromagnetism, weak interactions, and strong interactions. The antiparticle of the strange quark is the strange antiquark (sometimes called antistrange quark or simply antistrange), which differs from it only in that some of its properties have equal magnitude but opposite sign. The first strange particle (a particle containing a strange quark) was discovered by George Rochester and Clifford Butler in Department of Physics and Astronomy, University of Manchester in 1947 (kaons), with the existence of the strange quark itself (and that of the up and down quarks) postulated in 1964 by Murray Gell-Mann and George Zweig to explain the eightfold way classification scheme of hadrons. The first evidence for the existence of quarks came in 1968, in deep inelastic scattering experiments at the Stanford Linear Accelerator Center. These experiments confirmed the existence of up and down quarks, and by extension, strange quarks, as they were required to explain the eightfold way. History In the beginnings of particle physics (first half of the 20th century), hadrons such as protons, neutrons and pions were thought to be elementary particles. However, new hadrons were discovered and the "particle zoo" grew from a few particles in the early 1930s and 1940s to several dozens of them in the 1950s. Some particles were much longer lived than others; most particles decayed through the strong interaction and had lifetimes of around 10−23 seconds. When they decayed through the weak interactions, they had lifetimes of around 10−10 seconds. While studying these decays, Murray Gell-Mann (in 1953) and Kazuhiko Nishijima (in 1955) developed the concept of strangeness (which Nishijima called eta-charge, after the eta meson ()) to explain the "strangeness" of the longer-lived particles. The Gell-Mann–Nishijima formula is the result of these efforts to understand strange decays. Despite their work, the relationships between each particle and the physical basis behind the strangeness property remained unclear. In 1961, Gell-Mann and Yuval Ne'eman independently proposed a hadron classification scheme called the eightfold way, also known as SU(3) flavor symmetry. This ordered hadrons into isospin multiplets. The physical basis behind both isospin and strangeness was only explained in 1964, when Gell-Mann and George Zweig independently proposed the quark model, which at that time consisted only of the up, down, and strange quarks. Up and down quarks were the carriers of isospin, while the strange quark carried strangeness. While the quark model explained the eightfold way, no direct evidence of the existence of quarks was found until 1968 at the Stanford Linear Accelerator Center. Deep inelastic scattering experiments indicated that protons had substructure, and that protons made of three more-fundamental particles explained the data (thus confirming the quark model). At first people were reluctant to identify the three-bodies as quarks, instead preferring Richard Feynman's parton description, but over time the quark theory became accepted (see November Revolution). See also Strangeness Quark model Strange matter Strangeness production Strangelet Strange star References Further reading Quarks Elementary particles
Strange quark
[ "Physics" ]
918
[ "Elementary particles", "Subatomic particles", "Matter" ]
105,355
https://en.wikipedia.org/wiki/Biomechanics
Biomechanics is the study of the structure, function and motion of the mechanical aspects of biological systems, at any level from whole organisms to organs, cells and cell organelles, using the methods of mechanics. Biomechanics is a branch of biophysics. Etymology The word "biomechanics" (1899) and the related "biomechanical" (1856) come from the Ancient Greek βίος bios "life" and μηχανική, mēchanikē "mechanics", to refer to the study of the mechanical principles of living organisms, particularly their movement and structure. Subfields Biofluid mechanics Biological fluid mechanics, or biofluid mechanics, is the study of both gas and liquid fluid flows in or around biological organisms. An often studied liquid biofluid problem is that of blood flow in the human cardiovascular system. Under certain mathematical circumstances, blood flow can be modeled by the Navier–Stokes equations. In vivo whole blood is assumed to be an incompressible Newtonian fluid. However, this assumption fails when considering forward flow within arterioles. At the microscopic scale, the effects of individual red blood cells become significant, and whole blood can no longer be modeled as a continuum. When the diameter of the blood vessel is just slightly larger than the diameter of the red blood cell the Fahraeus–Lindquist effect occurs and there is a decrease in wall shear stress. However, as the diameter of the blood vessel decreases further, the red blood cells have to squeeze through the vessel and often can only pass in a single file. In this case, the inverse Fahraeus–Lindquist effect occurs and the wall shear stress increases. An example of a gaseous biofluids problem is that of human respiration. Respiratory systems in insects have been studied for bioinspiration for designing improved microfluidic devices. Biotribology Biotribology is the study of friction, wear and lubrication of biological systems, especially human joints such as hips and knees. In general, these processes are studied in the context of contact mechanics and tribology. Additional aspects of biotribology include analysis of subsurface damage resulting from two surfaces coming in contact during motion, i.e. rubbing against each other, such as in the evaluation of tissue-engineered cartilage. Comparative biomechanics Comparative biomechanics is the application of biomechanics to non-human organisms, whether used to gain greater insights into humans (as in physical anthropology) or into the functions, ecology and adaptations of the organisms themselves. Common areas of investigation are animal locomotion and feeding, as these have strong connections to the organism's fitness and impose high mechanical demands. Animal locomotion has many manifestations, including running, jumping and flying. Locomotion requires energy to overcome friction, drag, inertia, and gravity, though which factor predominates varies with environment. Comparative biomechanics overlaps strongly with many other fields, including ecology, neurobiology, developmental biology, ethology, and paleontology, to the extent of commonly publishing papers in the journals of these other fields. Comparative biomechanics is often applied in medicine (with regards to common model organisms such as mice and rats) as well as in biomimetics, which looks to nature for solutions to engineering problems. Computational biomechanics Computational biomechanics is the application of engineering computational tools, such as the Finite element method to study the mechanics of biological systems. Computational models and simulations are used to predict the relationship between parameters that are otherwise challenging to test experimentally, or used to design more relevant experiments reducing the time and costs of experiments. Mechanical modeling using finite element analysis has been used to interpret the experimental observation of plant cell growth to understand how they differentiate, for instance. In medicine, over the past decade, the Finite element method has become an established alternative to in vivo surgical assessment. One of the main advantages of computational biomechanics lies in its ability to determine the endo-anatomical response of an anatomy, without being subject to ethical restrictions. This has led FE modeling (or other discretization techniques) to the point of becoming ubiquitous in several fields of Biomechanics while several projects have even adopted an open source philosophy (e.g., BioSpine). Computational biomechanics is an essential ingredient in surgical simulation, which is used for surgical planning, assistance, and training. In this case, numerical (discretization) methods are used to compute, as fast as possible, a system's response to boundary conditions such as forces, heat and mass transfer, and electrical and magnetic stimuli. Continuum biomechanics The mechanical analysis of biomaterials and biofluids is usually carried forth with the concepts of continuum mechanics. This assumption breaks down when the length scales of interest approach the order of the microstructural details of the material. One of the most remarkable characteristics of biomaterials is their hierarchical structure. In other words, the mechanical characteristics of these materials rely on physical phenomena occurring in multiple levels, from the molecular all the way up to the tissue and organ levels. Biomaterials are classified into two groups: hard and soft tissues. Mechanical deformation of hard tissues (like wood, shell and bone) may be analysed with the theory of linear elasticity. On the other hand, soft tissues (like skin, tendon, muscle, and cartilage) usually undergo large deformations, and thus, their analysis relies on the finite strain theory and computer simulations. The interest in continuum biomechanics is spurred by the need for realism in the development of medical simulation. Neuromechanics Neuromechanics uses a biomechanical approach to better understand how the brain and nervous system interact to control the body. During motor tasks, motor units activate a set of muscles to perform a specific movement, which can be modified via motor adaptation and learning. In recent years, neuromechanical experiments have been enabled by combining motion capture tools with neural recordings. Plant biomechanics The application of biomechanical principles to plants, plant organs and cells has developed into the subfield of plant biomechanics. Application of biomechanics for plants ranges from studying the resilience of crops to environmental stress to development and morphogenesis at cell and tissue scale, overlapping with mechanobiology. Sports biomechanics In sports biomechanics, the laws of mechanics are applied to human movement in order to gain a greater understanding of athletic performance and to reduce sport injuries as well. It focuses on the application of the scientific principles of mechanical physics to understand movements of action of human bodies and sports implements such as cricket bat, hockey stick and javelin etc. Elements of mechanical engineering (e.g., strain gauges), electrical engineering (e.g., digital filtering), computer science (e.g., numerical methods), gait analysis (e.g., force platforms), and clinical neurophysiology (e.g., surface EMG) are common methods used in sports biomechanics. Biomechanics in sports can be stated as the body's muscular, joint, and skeletal actions while executing a given task, skill, or technique. Understanding biomechanics relating to sports skills has the greatest implications on sports performance, rehabilitation and injury prevention, and sports mastery. As noted by Doctor Michael Yessis, one could say that best athlete is the one that executes his or her skill the best. Vascular biomechanics The main topics of the vascular biomechanics is the description of the mechanical behaviour of vascular tissues. It is well known that cardiovascular disease is the leading cause of death worldwide. Vascular system in the human body is the main component that is supposed to maintain pressure and allow for blood flow and chemical exchanges. Studying the mechanical properties of these complex tissues improves the possibility of better understanding cardiovascular diseases and drastically improves personalized medicine. Vascular tissues are inhomogeneous with a strongly non linear behaviour. Generally this study involves complex geometry with intricate load conditions and material properties. The correct description of these mechanisms is based on the study of physiology and biological interaction. Therefore, is necessary to study wall mechanics and hemodynamics with their interaction. It is also necessary to premise that the vascular wall is a dynamic structure in continuous evolution. This evolution directly follows the chemical and mechanical environment in which the tissues are immersed like Wall Shear Stress or biochemical signaling. Immunomechanics The emerging field of immunomechanics focuses on characterising mechanical properties of the immune cells and their functional relevance. Mechanics of immune cells can be characterised using various force spectroscopy approaches such as acoustic force spectroscopy and optical tweezers, and these measurements can be performed at physiological conditions (e.g. temperature). Furthermore, one can study the link between immune cell mechanics and immunometabolism and immune signalling. The term "immunomechanics" is some times interchangeably used with immune cell mechanobiology or cell mechanoimmunology. Other applied subfields of biomechanics include Allometry Animal locomotion and Gait analysis Biotribology Biofluid mechanics Cardiovascular biomechanics Comparative biomechanics Computational biomechanics Ergonomy Forensic Biomechanics Human factors engineering and occupational biomechanics Injury biomechanics Implant (medicine), Orthotics and Prosthesis Kinaesthetics Kinesiology (kinetics + physiology) Musculoskeletal and orthopedic biomechanics Rehabilitation Soft body dynamics Sports biomechanics History Antiquity Aristotle, a student of Plato, can be considered the first bio-mechanic because of his work with animal anatomy. Aristotle wrote the first book on the motion of animals, De Motu Animalium, or On the Movement of Animals. He saw animal's bodies as mechanical systems, pursued questions such as the physiological difference between imagining performing an action and actual performance. In another work, On the Parts of Animals, he provided an accurate description of how the ureter uses peristalsis to carry urine from the kidneys to the bladder. With the rise of the Roman Empire, technology became more popular than philosophy and the next bio-mechanic arose. Galen (129 AD-210 AD), physician to Marcus Aurelius, wrote his famous work, On the Function of the Parts (about the human body). This would be the world's standard medical book for the next 1,400 years. Renaissance The next major biomechanic would not be around until the 1490s, with the studies of human anatomy and biomechanics by Leonardo da Vinci. He had a great understanding of science and mechanics and studied anatomy in a mechanics context. He analyzed muscle forces and movements and studied joint functions. These studies could be considered studies in the realm of biomechanics. Leonardo da Vinci studied anatomy in the context of mechanics. He analyzed muscle forces as acting along lines connecting origins and insertions, and studied joint function. Da Vinci is also known for mimicking some animal features in his machines. For example, he studied the flight of birds to find means by which humans could fly; and because horses were the principal source of mechanical power in that time, he studied their muscular systems to design machines that would better benefit from the forces applied by this animal. In 1543, Galen's work, On the Function of the Parts was challenged by Andreas Vesalius at the age of 29. Vesalius published his own work called, On the Structure of the Human Body. In this work, Vesalius corrected many errors made by Galen, which would not be globally accepted for many centuries. With the death of Copernicus came a new desire to understand and learn about the world around people and how it works. On his deathbed, he published his work, On the Revolutions of the Heavenly Spheres. This work not only revolutionized science and physics, but also the development of mechanics and later bio-mechanics. Galileo Galilei, the father of mechanics and part time biomechanic was born 21 years after the death of Copernicus. Over his years of science, Galileo made a lot of biomechanical aspects known. For example, he discovered that  "animals' masses increase disproportionately to their size, and their bones must consequently also disproportionately increase in girth, adapting to loadbearing rather than mere size. The bending strength of a tubular structure such as a bone is increased relative to its weight by making it hollow and increasing its diameter. Marine animals can be larger than terrestrial animals because the water's buoyancy relieves their tissues of weight." Galileo Galilei was interested in the strength of bones and suggested that bones are hollow because this affords maximum strength with minimum weight. He noted that animals' bone masses increased disproportionately to their size. Consequently, bones must also increase disproportionately in girth rather than mere size. This is because the bending strength of a tubular structure (such as a bone) is much more efficient relative to its weight. Mason suggests that this insight was one of the first grasps of the principles of biological optimization. In the 17th century, Descartes suggested a philosophic system whereby all living systems, including the human body (but not the soul), are simply machines ruled by the same mechanical laws, an idea that did much to promote and sustain biomechanical study. Industrial era The next major bio-mechanic, Giovanni Alfonso Borelli, embraced Descartes' mechanical philosophy and studied walking, running, jumping, the flight of birds, the swimming of fish, and even the piston action of the heart within a mechanical framework. He could determine the position of the human center of gravity, calculate and measure inspired and expired air volumes, and he showed that inspiration is muscle-driven and expiration is due to tissue elasticity. Borelli was the first to understand that "the levers of the musculature system magnify motion rather than force, so that muscles must produce much larger forces than those resisting the motion". Influenced by the work of Galileo, whom he personally knew, he had an intuitive understanding of static equilibrium in various joints of the human body well before Newton published the laws of motion. His work is often considered the most important in the history of bio-mechanics because he made so many new discoveries that opened the way for the future generations to continue his work and studies. It was many years after Borelli before the field of bio-mechanics made any major leaps. After that time, more and more scientists took to learning about the human body and its functions. There are not many notable scientists from the 19th or 20th century in bio-mechanics because the field is far too vast now to attribute one thing to one person. However, the field is continuing to grow every year and continues to make advances in discovering more about the human body. Because the field became so popular, many institutions and labs have opened over the last century and people continue doing research. With the Creation of the American Society of Bio-mechanics in 1977, the field continues to grow and make many new discoveries. In the 19th century Étienne-Jules Marey used cinematography to scientifically investigate locomotion. He opened the field of modern 'motion analysis' by being the first to correlate ground reaction forces with movement. In Germany, the brothers Ernst Heinrich Weber and Wilhelm Eduard Weber hypothesized a great deal about human gait, but it was Christian Wilhelm Braune who significantly advanced the science using recent advances in engineering mechanics. During the same period, the engineering mechanics of materials began to flourish in France and Germany under the demands of the Industrial Revolution. This led to the rebirth of bone biomechanics when the railroad engineer Karl Culmann and the anatomist Hermann von Meyer compared the stress patterns in a human femur with those in a similarly shaped crane. Inspired by this finding Julius Wolff proposed the famous Wolff's law of bone remodeling. Applications The study of biomechanics ranges from the inner workings of a cell to the movement and development of limbs, to the mechanical properties of soft tissue, and bones. Some simple examples of biomechanics research include the investigation of the forces that act on limbs, the aerodynamics of bird and insect flight, the hydrodynamics of swimming in fish, and locomotion in general across all forms of life, from individual cells to whole organisms. With growing understanding of the physiological behavior of living tissues, researchers are able to advance the field of tissue engineering, as well as develop improved treatments for a wide array of pathologies including cancer. Biomechanics is also applied to studying human musculoskeletal systems. Such research utilizes force platforms to study human ground reaction forces and infrared videography to capture the trajectories of markers attached to the human body to study human 3D motion. Research also applies electromyography to study muscle activation, investigating muscle responses to external forces and perturbations. Biomechanics is widely used in orthopedic industry to design orthopedic implants for human joints, dental parts, external fixations and other medical purposes. Biotribology is a very important part of it. It is a study of the performance and function of biomaterials used for orthopedic implants. It plays a vital role to improve the design and produce successful biomaterials for medical and clinical purposes. One such example is in tissue engineered cartilage. The dynamic loading of joints considered as impact is discussed in detail by Emanuel Willert. It is also tied to the field of engineering, because it often uses traditional engineering sciences to analyze biological systems. Some simple applications of Newtonian mechanics and/or materials sciences can supply correct approximations to the mechanics of many biological systems. Applied mechanics, most notably mechanical engineering disciplines such as continuum mechanics, mechanism analysis, structural analysis, kinematics and dynamics play prominent roles in the study of biomechanics. Usually biological systems are much more complex than man-built systems. Numerical methods are hence applied in almost every biomechanical study. Research is done in an iterative process of hypothesis and verification, including several steps of modeling, computer simulation and experimental measurements. See also Biomechatronics Biomedical engineering Cardiovascular System Dynamics Society Evolutionary physiology Forensic biomechanics International Society of Biomechanics List of biofluid mechanics research groups Mechanics of human sexuality OpenSim (simulation toolkit) Physical oncology References Further reading External links Biomechanics and Movement Science Listserver (Biomch-L) Biomechanics Links A Genealogy of Biomechanics Motor control
Biomechanics
[ "Physics", "Biology" ]
3,869
[ "Biomechanics", "Behavior", "Mechanics", "Motor control" ]
105,659
https://en.wikipedia.org/wiki/Upwelling
Upwelling is an oceanographic phenomenon that involves wind-driven motion of dense, cooler, and usually nutrient-rich water from deep water towards the ocean surface. It replaces the warmer and usually nutrient-depleted surface water. The nutrient-rich upwelled water stimulates the growth and reproduction of primary producers such as phytoplankton. The biomass of phytoplankton and the presence of cool water in those regions allow upwelling zones to be identified by cool sea surface temperatures (SST) and high concentrations of chlorophyll a. The increased availability of nutrients in upwelling regions results in high levels of primary production and thus fishery production. Approximately 25% of the total global marine fish catches come from five upwellings, which occupy only 5% of the total ocean area. Upwellings that are driven by coastal currents or diverging open ocean have the greatest impact on nutrient-enriched waters and global fishery yields. Mechanisms The three main drivers that work together to cause upwelling are wind, Coriolis effect, and Ekman transport. They operate differently for different types of upwelling, but the general effects are the same. In the overall process of upwelling, winds blow across the sea surface at a particular direction, which causes a wind-water interaction. As a result of the wind, the water has transported a net of 90 degrees from the direction of the wind due to Coriolis forces and Ekman transport. Ekman transport causes the surface layer of water to move at about a 45-degree angle from the direction of the wind, and the friction between that layer and the layer beneath it causes the successive layers to move in the same direction. This results in a spiral of water moving down the water column. Then, it is the Coriolis forces that dictate which way the water will move; in the Northern hemisphere, the water is transported to the right of the direction of the wind. In the Southern Hemisphere, the water is transported to the left of the wind. If this net movement of water is divergent, then upwelling of deep water occurs to replace the water that was lost. Types The major upwellings in the ocean are associated with the divergence of currents that bring deeper, colder, nutrient rich waters to the surface. There are at least five types of upwelling: coastal upwelling, large-scale wind-driven upwelling in the ocean interior, upwelling associated with eddies, topographically-associated upwelling, and broad-diffusive upwelling in the ocean interior. Coastal Coastal upwelling is the best known type of upwelling, and the most closely related to human activities as it supports some of the most productive fisheries in the world. Coastal upwelling will occur if the wind direction is parallel to the coastline and generates wind-driven currents. The wind-driven currents are diverted to the right of the winds in the Northern Hemisphere and to the left in the Southern Hemisphere due to the Coriolis effect. The result is a net movement of surface water at right angles to the direction of the wind, known as the Ekman transport (See also Ekman Spiral). When Ekman transport is occurring away from the coast, surface waters moving away are replaced by deeper, colder, and denser water. Normally, this upwelling process occurs at a rate of about 5–10 meters per day, but the rate and proximity of upwelling to the coast can be changed due to the strength and distance of the wind. Deep waters are rich in nutrients, including nitrate, phosphate and silicic acid, themselves the result of decomposition of sinking organic matter (dead/detrital plankton) from surface waters. When brought to the surface, these nutrients are utilized by phytoplankton, along with dissolved CO2 (carbon dioxide) and light energy from the sun, to produce organic compounds, through the process of photosynthesis. Upwelling regions therefore result in very high levels of primary production (the amount of carbon fixed by phytoplankton) in comparison to other areas of the ocean. They account for about 50% of global marine productivity. High primary production propagates up the food chain because phytoplankton are at the base of the oceanic food chain. The food chain follows the course of: Phytoplankton → Zooplankton → Predatory zooplankton → Filter feeders → Predatory fish → Marine birds, marine mammals Coastal upwelling exists year-round in some regions, known as major coastal upwelling systems, and only in certain months of the year in other regions, known as seasonal coastal upwelling systems. Many of these upwelling systems are associated with relatively high carbon productivity and hence are classified as Large Marine Ecosystems. Worldwide, there are five major coastal currents associated with upwelling areas: the Canary Current (off Northwest Africa), the Benguela Current (off southern Africa), the California Current (off California and Oregon), the Humboldt Current (off Peru and Chile), and the Somali Current (off Somalia and Oman). All of these currents support major fisheries. The four major eastern boundary currents in which coastal upwelling primarily occurs are the Canary Current, Benguela Current, California Current, and Humboldt Current. The Benguela Current is the eastern boundary of the South Atlantic subtropical gyre and can be divided into a northern and southern sub-system with upwelling occurring in both areas. The subsystems are divided by an area of permanent upwelling off of Luderitz, which is the strongest upwelling zone in the world. The California Current System (CCS) is an eastern boundary current of the North Pacific that is also characterized by a north and south split. The split in this system occurs at Point Conception, California due to weak upwelling in the South and strong upwelling in the north. The Canary Current is an eastern boundary current of the North Atlantic Gyre and is also separated due to the presence of the Canary Islands. Finally, the Humboldt Current or the Peru Current flows west along the coast of South America from Peru to Chile and extends up to 1,000 kilometers offshore. These four eastern boundary currents comprise the majority of coastal upwelling zones in the oceans. Equatorial Upwelling at the equator is associated with the Intertropical Convergence Zone (ITCZ) which actually moves, and consequently, is often located just north or south of the equator. Easterly (westward) trade winds blow from the Northeast and Southeast and converge along the equator blowing West to form the ITCZ. Although there are no Coriolis forces present along the equator, upwelling still occurs just north and south of the equator. This results in a divergence, with denser, nutrient-rich water being upwelled from below, and results in the remarkable fact that the equatorial region in the Pacific can be detected from space as a broad line of high phytoplankton concentration. Southern Ocean Large-scale upwelling is also found in the Southern Ocean. Here, strong westerly (eastward) winds blow around Antarctica, driving a significant flow of water northwards. This is actually a type of coastal upwelling. Since there are no continents in a band of open latitudes between South America and the tip of the Antarctic Peninsula, some of this water is drawn up from great depths. In many numerical models and observational syntheses, the Southern Ocean upwelling represents the primary means by which deep dense water is brought to the surface. In some regions of Antarctica, wind-driven upwelling near the coast pulls relatively warm Circumpolar deep water onto the continental shelf, where it can enhance ice shelf melt and influence ice sheet stability. Shallower, wind-driven upwelling is also found in off the west coasts of North and South America, northwest and southwest Africa, and southwest and south Australia, all associated with oceanic subtropical high pressure circulations (see coastal upwelling above). Some models of the ocean circulation suggest that broad-scale upwelling occurs in the tropics, as pressure driven flows converge water toward the low latitudes where it is diffusively warmed from above. The required diffusion coefficients, however, appear to be larger than are observed in the real ocean. Nonetheless, some diffusive upwelling does probably occur. Other sources Local and intermittent upwellings may occur when offshore islands, ridges, or seamounts cause a deflection of deep currents, providing a nutrient rich area in otherwise low productivity ocean areas. Examples include upwellings around the Galapagos Islands and the Seychelles Islands, which have major pelagic fisheries. Upwelling could occur anywhere as long as there is an adequate shear in the horizontal wind field. For example when a tropical cyclone transits an area, usually when moving at speeds of less than 5 mph (8 km/h). The cyclonic winds cause a divergence in the surface water in the Ekman layer, that turn requires upwelling of deeper water to maintain continuity. Artificial upwelling is produced by devices that use ocean wave energy or ocean thermal energy conversion to pump water to the surface. Ocean wind turbines are also known to produce upwellings. Ocean wave devices have been shown to produce plankton blooms. Variations Upwelling intensity depends on wind strength and seasonal variability, as well as the vertical structure of the water, variations in the bottom bathymetry, and instabilities in the currents. In some areas, upwelling is a seasonal event leading to periodic bursts of productivity similar to spring blooms in coastal waters. Wind-induced upwelling is generated by temperature differences between the warm, light air above the land and the cooler denser air over the sea. In temperate latitudes, the temperature contrast is greatly seasonably variable, creating periods of strong upwelling in the spring and summer, to weak or no upwelling in the winter. For example, off the coast of Oregon, there are four or five strong upwelling events separated by periods of little to no upwelling during the six-month season of upwelling. In contrast, tropical latitudes have a more constant temperature contrast, creating constant upwelling throughout the year. The Peruvian upwelling, for instance, occurs throughout most of the year, resulting in one of the world's largest marine fisheries for sardines and anchovies. In anomalous years when the trade winds weaken or reverse, the water that is upwelled is much warmer and low in nutrients, resulting in a sharp reduction in the biomass and phytoplankton productivity. This event is known as the El Nino-Southern Oscillation (ENSO) event. The Peruvian upwelling system is particularly vulnerable to ENSO events, and can cause extreme interannual variability in productivity. Changes in bathymetry can affect the strength of an upwelling. For example, a submarine ridge that extends out from the coast will produce more favorable upwelling conditions than neighboring regions. Upwelling typically begins at such ridges and remains strongest at the ridge even after developing in other locations. High productivity The most productive and fertile ocean areas, upwelling regions are important sources of marine productivity. They attract hundreds of species throughout the trophic levels; these systems' diversity has been a focal point for marine research. While studying the trophic levels and patterns typical of upwelling regions, researchers have discovered that upwelling systems exhibit a wasp-waist richness pattern. In this type of pattern, the high and low trophic levels are well-represented by high species diversity. However, the intermediate trophic level is only represented by one or two species. This trophic layer, which consists of small, pelagic fish usually makes up about only three to four percent of the species diversity of all fish species present. The lower trophic layers are very well-represented with about 500 species of copepods, 2500 species of gastropods, and 2500 species of crustaceans on average. At the apex and near-apex trophic levels, there are usually about 100 species of marine mammals and about 50 species of marine birds. The vital intermediate trophic species however are small pelagic fish that usually feed on phytoplankton. In most upwelling systems, these species are either anchovies or sardines, and usually only one is present, although two or three species may be present occasionally. These fish are an important food source for predators, such as large pelagic fish, marine mammals, and marine birds. Although they are not at the base of the trophic pyramid, they are the vital species that connect the entire marine ecosystem and keep the productivity of upwelling zones so high Threats to upwelling ecosystems A major threat to both this crucial intermediate trophic level and the entire upwelling trophic ecosystem is the problem of commercial fishing. Since upwelling regions are the most productive and species rich areas in the world, they attract a high number of commercial fishers and fisheries. On one hand, this is another benefit of the upwelling process as it serves as a viable source of food and income for so many people and nations besides marine animals. However, just as in any ecosystem, the consequences of over-fishing from a population could be detrimental to that population and the ecosystem as a whole. In upwelling ecosystems, every species present plays a vital role in the functioning of that ecosystem. If one species is significantly depleted, that will have an effect throughout the rest of the trophic levels. For example, if a popular prey species is targeted by fisheries, fishermen may collect hundreds of thousands of individuals of this species just by casting their nets into the upwelling waters. As these fish are depleted, the food source for those who preyed on these fish is depleted. Therefore, the predators of the targeted fish will begin to die off, and there will not be as many of them to feed the predators above them. This system continues throughout the entire food chain, resulting in a possible collapse of the ecosystem. It is possible that the ecosystem may be restored over time, but not all species can recover from events such as these. Even if the species can adapt, there may be a delay in the reconstruction of this upwelling community. The possibility of such an ecosystem collapse is the very danger of fisheries in upwelling regions. Fisheries may target a variety of different species, and therefore they are a direct threat to many species in the ecosystem, however they pose the highest threat to the intermediate pelagic fish. Since these fish form the crux of the entire trophic process of upwelling ecosystems, they are highly represented throughout the ecosystem (even if there is only one species present). Unfortunately, these fish tend to be the most popular targets of fisheries as about 64 percent of their entire catch consists of pelagic fish. Among those, the six main species that usually form the intermediate trophic layer represent over half of the catch. Besides directly causing the collapse of the ecosystem due to their absence, this can create problems in the ecosystem through a variety of other methods as well. The animals higher in the trophic levels may not completely starve to death and die off, but the decreased food supply could still hurt the populations. If animals do not get enough food, it will decrease their reproductive viability meaning that they will not breed as often or as successfully as usual. This can lead to a decreasing population, especially in species that do not breed often under normal circumstances or become reproductively mature late in life. Another problem is that the decrease in the population of a species due to fisheries can lead to a decrease in genetic diversity, resulting in a decrease in biodiversity of a species. If the species diversity is decreased significantly, this could cause problems for the species in an environment that is so variable and quick-changing; they may not be able to adapt, which could result in a collapse of the population or ecosystem. Another threat to the productivity and ecosystems of upwelling regions is El Niño-Southern Oscillation (ENSO) system, or more specifically El Niño events. During the normal period and La Niña events, the easterly trade winds are still strong, which continues to drive the process of upwelling. However, during El Niño events, trade winds are weaker, causing decreased upwelling in the equatorial regions as the divergence of water north and south of the equator is not as strong or as prevalent. The coastal upwelling zones diminish as well since they are wind driven systems, and the wind is no longer a very strong driving force in these areas. As a result, global upwelling drastically decreases, causing a decrease in productivity as the waters are no longer receiving nutrient-rich water. Without these nutrients, the rest of the trophic pyramid cannot be sustained, and the rich upwelling ecosystem will collapse. Effect on climate Coastal upwelling has a major influence over the affected region's local climate. This effect is magnified if the ocean current is already cool. As the cold, nutrient-rich water moves upwards and the sea surface temperature gets cooler, the air immediately above it also cools down and is likely to condensate, forming sea fog and stratus clouds. This also inhibits the formation of higher altitude clouds, showers and thunderstorms and results in rainfall over the ocean leaving the land dry. In year-round upwelling systems (like that of the western coasts of Southern Africa and South America), temperatures are generally cooler and precipitation scarce. Seasonal upwelling systems are often paired with seasonal downwelling systems (like that of the western coasts of the United States and Iberian Peninsula), resulting in cooler, drier than average summers and milder, wetter than average winters. Permanent upwelling locations typically have semi-arid/desert climates while seasonal upwelling locations usually have Mediterranean/semi-arid climates, oceanic in some cases. Some worldwide cities affected by strong upwelling regimes include: San Francisco, Antofagasta, Sines, Essaouira, Walvis Bay, Curaçao among others. References External links Wind Driven Surface Currents: Upwelling and Downwelling Coastal Upwelling On the influence of large wind farms on the upper ocean circulation. Göran Broström, Norwegian Meteorological Institute, Oslo, Norway Aquatic ecology Oceanography Fisheries science
Upwelling
[ "Physics", "Biology", "Environmental_science" ]
3,799
[ "Hydrology", "Applied and interdisciplinary physics", "Oceanography", "Ecosystems", "Aquatic ecology" ]
105,706
https://en.wikipedia.org/wiki/High-energy%20phosphate
High-energy phosphate can mean one of two things: The phosphate-phosphate (phosphoanhydride/phosphoric anhydride/macroergic/phosphagen) bonds formed when compounds such as adenosine diphosphate (ADP) and adenosine triphosphate (ATP) are created. The compounds that contain these bonds, which include the nucleoside diphosphates and nucleoside triphosphates, and the high-energy storage compounds of the muscle, the phosphagens. When people speak of a high-energy phosphate pool, they speak of the total concentration of these compounds with these high-energy bonds. Description High-energy phosphate bonds are usually pyrophosphate bonds, acid anhydride linkages formed by taking phosphoric acid derivatives and dehydrating them. As a consequence, the hydrolysis of these bonds is exergonic under physiological conditions, releasing Gibbs free energy. Except for PPi → 2 Pi, these reactions are, in general, not allowed to go uncontrolled in the human cell but are instead coupled to other processes needing energy to drive them to completion. Thus, high-energy phosphate reactions can: provide energy to cellular processes, allowing them to run couple processes to a particular nucleoside, allowing for regulatory control of the process drive a reaction out of equilibrium (drive it to the right) by promoting one direction of the reaction faster than the equilibrium can relax. The one exception is of value because it allows a single hydrolysis, ATP + H2O → AMP + PPi, to effectively supply the energy of hydrolysis of two high-energy bonds, with the hydrolysis of PPi being allowed to go to completion in a separate reaction. The AMP is regenerated to ATP in two steps, with the equilibrium reaction ATP + AMP ↔ 2ADP, followed by regeneration of ATP by the usual means, oxidative phosphorylation or other energy-producing pathways such as glycolysis. Often, high-energy phosphate bonds are denoted by the character '~'. In this "squiggle" notation, ATP becomes A-P~P~P. The squiggle notation was invented by Fritz Albert Lipmann, who first proposed ATP as the main energy transfer molecule of the cell, in 1941. Lipmann's notation emphasizes the special nature of these bonds. Stryer states: The term 'high energy' with respect to these bonds can be misleading because the negative free energy change is not due directly to the breaking of the bonds themselves. The breaking of these bonds, like the breaking of most bonds, is endergonic and consumes energy rather than releasing it. The negative free energy change comes instead from the fact that the bonds formed after hydrolysis - or the phosphorylation of a residue by ATP - are lower in energy than the bonds present before hydrolysis. (This includes all of the bonds involved in the reaction, not just the phosphate bonds themselves). This effect is due to a number of factors including increased resonance stabilization and solvation of the products relative to the reactants, and destabilization of the reactants due to electrostatic repulsion between neighboring phosphorus atoms. References Further reading McGilvery, R. W. and Goldstein, G., Biochemistry - A Functional Approach, W. B. Saunders and Co, 1979, 345–351. Bioenergetics Organophosphates Pyrophosphate esters
High-energy phosphate
[ "Chemistry", "Biology" ]
727
[ "Biochemistry", "Bioenergetics", "Metabolism" ]
106,218
https://en.wikipedia.org/wiki/Cooperative%20binding
Cooperative binding occurs in molecular binding systems containing more than one type, or species, of molecule and in which one of the partners is not mono-valent and can bind more than one molecule of the other species. In general, molecular binding is an interaction between molecules that results in a stable physical association between those molecules. Cooperative binding occurs in a molecular binding system where two or more ligand molecules can bind to a receptor molecule. Binding can be considered "cooperative" if the actual binding of the first molecule of the ligand to the receptor changes the binding affinity of the second ligand molecule. The binding of ligand molecules to the different sites on the receptor molecule do not constitute mutually independent events. Cooperativity can be positive or negative, meaning that it becomes more or less likely that successive ligand molecules will bind to the receptor molecule. Cooperative binding is observed in many biopolymers, including proteins and nucleic acids. Cooperative binding has been shown to be the mechanism underlying a large range of biochemical and physiological processes. History and mathematical formalisms Christian Bohr and the concept of cooperative binding In 1904, Christian Bohr studied hemoglobin binding to oxygen under different conditions. When plotting hemoglobin saturation with oxygen as a function of the partial pressure of oxygen, he obtained a sigmoidal (or "S-shaped") curve. This indicates that the more oxygen is bound to hemoglobin, the easier it is for more oxygen to bind - until all binding sites are saturated. In addition, Bohr noticed that increasing CO2 pressure shifted this curve to the right - i.e. higher concentrations of CO2 make it more difficult for hemoglobin to bind oxygen. This latter phenomenon, together with the observation that hemoglobin's affinity for oxygen increases with increasing pH, is known as the Bohr effect. A receptor molecule is said to exhibit cooperative binding if its binding to ligand scales non-linearly with ligand concentration. Cooperativity can be positive (if binding of a ligand molecule increases the receptor's apparent affinity, and hence increases the chance of another ligand molecule binding) or negative (if binding of a ligand molecule decreases affinity and hence makes binding of other ligand molecules less likely). The "fractional occupancy" of a receptor with a given ligand is defined as the quantity of ligand-bound binding sites divided by the total quantity of ligand binding sites: If , then the protein is completely unbound, and if , it is completely saturated. If the plot of at equilibrium as a function of ligand concentration is sigmoidal in shape, as observed by Bohr for hemoglobin, this indicates positive cooperativity. If it is not, no statement can be made about cooperativity from looking at this plot alone. The concept of cooperative binding only applies to molecules or complexes with more than one ligand binding sites. If several ligand binding sites exist, but ligand binding to any one site does not affect the others, the receptor is said to be non-cooperative. Cooperativity can be homotropic, if a ligand influences the binding of ligands of the same kind, or heterotropic, if it influences binding of other kinds of ligands. In the case of hemoglobin, Bohr observed homotropic positive cooperativity (binding of oxygen facilitates binding of more oxygen) and heterotropic negative cooperativity (binding of CO2 reduces hemoglobin's facility to bind oxygen.) Throughout the 20th century, various frameworks have been developed to describe the binding of a ligand to a protein with more than one binding site and the cooperative effects observed in this context. The Hill equation The first description of cooperative binding to a multi-site protein was developed by A.V. Hill. Drawing on observations of oxygen binding to hemoglobin and the idea that cooperativity arose from the aggregation of hemoglobin molecules, each one binding one oxygen molecule, Hill suggested a phenomenological equation that has since been named after him: where is the "Hill coefficient", denotes ligand concentration, denotes an apparent association constant (used in the original form of the equation), is an empirical dissociation constant, and a microscopic dissociation constant (used in modern forms of the equation, and equivalent to an ). If , the system exhibits negative cooperativity, whereas cooperativity is positive if . The total number of ligand binding sites is an upper bound for . The Hill equation can be linearized as: The "Hill plot" is obtained by plotting versus . In the case of the Hill equation, it is a line with slope and intercept . This means that cooperativity is assumed to be fixed, i.e. it does not change with saturation. It also means that binding sites always exhibit the same affinity, and cooperativity does not arise from an affinity increasing with ligand concentration. The Adair equation G.S. Adair found that the Hill plot for hemoglobin was not a straight line, and hypothesized that binding affinity was not a fixed term, but dependent on ligand saturation. Having demonstrated that hemoglobin contained four hemes (and therefore binding sites for oxygen), he worked from the assumption that fully saturated hemoglobin is formed in stages, with intermediate forms with one, two, or three bound oxygen molecules. The formation of each intermediate stage from unbound hemoglobin can be described using an apparent macroscopic association constant . The resulting fractional occupancy can be expressed as: Or, for any protein with n ligand binding sites: where n denotes the number of binding sites and each is a combined association constant, describing the binding of i ligand molecules. By combining the Adair treatment with the Hill plot, one arrives at the modern experimental definition of cooperativity (Hill, 1985, Abeliovich, 2005). The resultant Hill coefficient, or more correctly the slope of the Hill plot as calculated from the Adair Equation, can be shown to be the ratio between the variance of the binding number to the variance of the binding number in an equivalent system of non-interacting binding sites. Thus, the Hill coefficient defines cooperativity as a statistical dependence of one binding site on the state of other site(s). The Klotz equation Working on calcium binding proteins, Irving Klotz deconvoluted Adair's association constants by considering stepwise formation of the intermediate stages, and tried to express the cooperative binding in terms of elementary processes governed by mass action law. In his framework, is the association constant governing binding of the first ligand molecule, the association constant governing binding of the second ligand molecule (once the first is already bound) etc. For , this gives: It is worth noting that the constants , and so forth do not relate to individual binding sites. They describe how many binding sites are occupied, rather than which ones. This form has the advantage that cooperativity is easily recognised when considering the association constants. If all ligand binding sites are identical with a microscopic association constant , one would expect (that is ) in the absence of cooperativity. We have positive cooperativity if lies above these expected values for . The Klotz equation (which is sometimes also called the Adair-Klotz equation) is still often used in the experimental literature to describe measurements of ligand binding in terms of sequential apparent binding constants. Pauling equation By the middle of the 20th century, there was an increased interest in models that would not only describe binding curves phenomenologically, but offer an underlying biochemical mechanism. Linus Pauling reinterpreted the equation provided by Adair, assuming that his constants were the combination of the binding constant for the ligand ( in the equation below) and energy coming from the interaction between subunits of the cooperative protein ( below). Pauling actually derived several equations, depending on the degree of interaction between subunits. Based on wrong assumptions about the localization of hemes, he opted for the wrong one to describe oxygen binding by hemoglobin, assuming the subunit were arranged in a square. The equation below provides the equation for a tetrahedral structure, which would be more accurate in the case of hemoglobin: The KNF model Based on results showing that the structure of cooperative proteins changed upon binding to their ligand, Daniel Koshland and colleagues refined the biochemical explanation of the mechanism described by Pauling. The Koshland-Némethy-Filmer (KNF) model assumes that each subunit can exist in one of two conformations: active or inactive. Ligand binding to one subunit would induce an immediate conformational change of that subunit from the inactive to the active conformation, a mechanism described as "induced fit". Cooperativity, according to the KNF model, would arise from interactions between the subunits, the strength of which varies depending on the relative conformations of the subunits involved. For a tetrahedric structure (they also considered linear and square structures), they proposed the following formula: Where is the constant of association for X, is the ratio of B and A states in the absence of ligand ("transition"), and are the relative stabilities of pairs of neighbouring subunits relative to a pair where both subunits are in the A state (Note that the KNF paper actually presents , the number of occupied sites, which is here 4 times ). The MWC model The Monod-Wyman-Changeux (MWC) model for concerted allosteric transitions went a step further by exploring cooperativity based on thermodynamics and three-dimensional conformations. It was originally formulated for oligomeric proteins with symmetrically arranged, identical subunits, each of which has one ligand binding site. According to this framework, two (or more) interconvertible conformational states of an allosteric protein coexist in a thermal equilibrium. The states - often termed tense (T) and relaxed (R) - differ in affinity for the ligand molecule. The ratio between the two states is regulated by the binding of ligand molecules that stabilizes the higher-affinity state. Importantly, all subunits of a molecule change states at the same time, a phenomenon known as "concerted transition". The allosteric isomerisation constant L describes the equilibrium between both states when no ligand molecule is bound: . If L is very large, most of the protein exists in the T state in the absence of ligand. If L is small (close to one), the R state is nearly as populated as the T state. The ratio of dissociation constants for the ligand from the T and R states is described by the constant c: . If , both R and T states have the same affinity for the ligand and the ligand does not affect isomerisation. The value of c also indicates how much the equilibrium between T and R states changes upon ligand binding: the smaller c, the more the equilibrium shifts towards the R state after one binding. With , fractional occupancy is described as: The sigmoid Hill plot of allosteric proteins can then be analysed as a progressive transition from the T state (low affinity) to the R state (high affinity) as the saturation increases. The slope of the Hill plot also depends on saturation, with a maximum value at the inflexion point. The intercepts between the two asymptotes and the y-axis allow to determine the affinities of both states for the ligand. In proteins, conformational change is often associated with activity, or activity towards specific targets. Such activity is often what is physiologically relevant or what is experimentally measured. The degree of conformational change is described by the state function , which denotes the fraction of protein present in the state. As the energy diagram illustrates, increases as more ligand molecules bind. The expression for is: A crucial aspect of the MWC model is that the curves for and do not coincide, i.e. fractional saturation is not a direct indicator of conformational state (and hence, of activity). Moreover, the extents of the cooperativity of binding and the cooperativity of activation can be very different: an extreme case is provide by the bacteria flagella motor with a Hill coefficient of 1.7 for the binding and 10.3 for the activation. The supra-linearity of the response is sometimes called ultrasensitivity. If an allosteric protein binds to a target that also has a higher affinity for the R state, then target binding further stabilizes the R state, hence increasing ligand affinity. If, on the other hand, a target preferentially binds to the T state, then target binding will have a negative effect on ligand affinity. Such targets are called allosteric modulators. Since its inception, the MWC framework has been extended and generalized. Variations have been proposed, for example to cater for proteins with more than two states, proteins that bind to several types of ligands or several types of allosteric modulators and proteins with non-identical subunits or ligand-binding sites. Examples The list of molecular assemblies that exhibit cooperative binding of ligands is very large, but some examples are particularly notable for their historical interest, their unusual properties, or their physiological importance. As described in the historical section, the most famous example of cooperative binding is hemoglobin. Its quaternary structure, solved by Max Perutz using X-ray diffraction, exhibits a pseudo-symmetrical tetrahedron carrying four binding sites (hemes) for oxygen. Many other molecular assemblies exhibiting cooperative binding have been studied in great detail. Multimeric enzymes The activity of many enzymes is regulated by allosteric effectors. Some of these enzymes are multimeric and carry several binding sites for the regulators. Threonine deaminase was one of the first enzymes suggested to behave like hemoglobin and shown to bind ligands cooperatively. It was later shown to be a tetrameric protein. Another enzyme that has been suggested early to bind ligands cooperatively is aspartate trans-carbamylase. Although initial models were consistent with four binding sites, its structure was later shown to be hexameric by William Lipscomb and colleagues. Ion channels Most ion channels are formed of several identical or pseudo-identical monomers or domains, arranged symmetrically in biological membranes. Several classes of such channels whose opening is regulated by ligands exhibit cooperative binding of these ligands. It was suggested as early as 1967 (when the exact nature of those channels was still unknown) that the nicotinic acetylcholine receptors bound acetylcholine in a cooperative manner due to the existence of several binding sites. The purification of the receptor and its characterization demonstrated a pentameric structure with binding sites located at the interfaces between subunits, confirmed by the structure of the receptor binding domain. Inositol triphosphate (IP3) receptors form another class of ligand-gated ion channels exhibiting cooperative binding. The structure of those receptors shows four IP3 binding sites symmetrically arranged. Multi-site molecules Although most proteins showing cooperative binding are multimeric complexes of homologous subunits, some proteins carry several binding sites for the same ligand on the same polypeptide. One such example is calmodulin. One molecule of calmodulin binds four calcium ions cooperatively. Its structure presents four EF-hand domains, each one binding one calcium ion. The molecule does not display a square or tetrahedron structure, but is formed of two lobes, each carrying two EF-hand domains. Transcription factors Cooperative binding of proteins onto nucleic acids has also been shown. A classical example is the binding of the lambda phage repressor to its operators, which occurs cooperatively. Other examples of transcription factors exhibit positive cooperativity when binding their target, such as the repressor of the TtgABC pumps (n=1.6), as well as conditional cooperativity exhibited by the transcription factors HOXA11 and FOXO1. Conversely, examples of negative cooperativity for the binding of transcription factors were also documented, as for the homodimeric repressor of the Pseudomonas putida cytochrome P450cam hydroxylase operon (n=0.56). Conformational spread and binding cooperativity Early on, it has been argued that some proteins, especially those consisting of many subunits, could be regulated by a generalized MWC mechanism, in which the transition between R and T state is not necessarily synchronized across the entire protein. In 1969, Wyman proposed such a model with "mixed conformations" (i.e. some protomers in the R state, some in the T state) for respiratory proteins in invertebrates. Following a similar idea, the conformational spread model by Duke and colleagues subsumes both the KNF and the MWC model as special cases. In this model, a subunit does not automatically change conformation upon ligand binding (as in the KNF model), nor do all subunits in a complex change conformations together (as in the MWC model). Conformational changes are stochastic with the likelihood of a subunit switching states depending on whether or not it is ligand bound and on the conformational state of neighbouring subunits. Thus, conformational states can "spread" around the entire complex. Impact of upstream and downstream components on module's ultrasensitivity In a living cell, ultrasensitive modules are embedded in a bigger network with upstream and downstream components. These components may constrain the range of inputs that the module will receive as well as the range of the module's outputs that network will be able to detect. The sensitivity of a modular system is affected by these restrictions. The dynamic range limitations imposed by downstream components can produce effective sensitivities much larger than that of the original module when considered in isolation. References Chemical bonding Protein structure Enzyme kinetics Wikipedia articles published in PLOS Computational Biology
Cooperative binding
[ "Physics", "Chemistry", "Materials_science" ]
3,721
[ "Enzyme kinetics", "Condensed matter physics", "Structural biology", "nan", "Chemical bonding", "Protein structure", "Chemical kinetics" ]
106,231
https://en.wikipedia.org/wiki/Macromolecule
A macromolecule is a very large molecule important to biological processes, such as a protein or nucleic acid. It is composed of thousands of covalently bonded atoms. Many macromolecules are polymers of smaller molecules called monomers. The most common macromolecules in biochemistry are biopolymers (nucleic acids, proteins, and carbohydrates) and large non-polymeric molecules such as lipids, nanogels and macrocycles. Synthetic fibers and experimental materials such as carbon nanotubes are also examples of macromolecules. Definition The term macromolecule (macro- + molecule) was coined by Nobel laureate Hermann Staudinger in the 1920s, although his first relevant publication on this field only mentions high molecular compounds (in excess of 1,000 atoms). At that time the term polymer, as introduced by Berzelius in 1832, had a different meaning from that of today: it simply was another form of isomerism for example with benzene and acetylene and had little to do with size. Usage of the term to describe large molecules varies among the disciplines. For example, while biology refers to macromolecules as the four large molecules comprising living things, in chemistry, the term may refer to aggregates of two or more molecules held together by intermolecular forces rather than covalent bonds but which do not readily dissociate. According to the standard IUPAC definition, the term macromolecule as used in polymer science refers only to a single molecule. For example, a single polymeric molecule is appropriately described as a "macromolecule" or "polymer molecule" rather than a "polymer," which suggests a substance composed of macromolecules. Because of their size, macromolecules are not conveniently described in terms of stoichiometry alone. The structure of simple macromolecules, such as homopolymers, may be described in terms of the individual monomer subunit and total molecular mass. Complicated biomacromolecules, on the other hand, require multi-faceted structural description such as the hierarchy of structures used to describe proteins. In British English, the word "macromolecule" tends to be called "high polymer". Properties Macromolecules often have unusual physical properties that do not occur for smaller molecules. Another common macromolecular property that does not characterize smaller molecules is their relative insolubility in water and similar solvents, instead forming colloids. Many require salts or particular ions to dissolve in water. Similarly, many proteins will denature if the solute concentration of their solution is too high or too low. High concentrations of macromolecules in a solution can alter the rates and equilibrium constants of the reactions of other macromolecules, through an effect known as macromolecular crowding. This comes from macromolecules excluding other molecules from a large part of the volume of the solution, thereby increasing the effective concentrations of these molecules. Linear biopolymers All living organisms are dependent on three essential biopolymers for their biological functions: DNA, RNA and proteins. Each of these molecules is required for life since each plays a distinct, indispensable role in the cell. The simple summary is that DNA makes RNA, and then RNA makes proteins. DNA, RNA, and proteins all consist of a repeating structure of related building blocks (nucleotides in the case of DNA and RNA, amino acids in the case of proteins). In general, they are all unbranched polymers, and so can be represented in the form of a string. Indeed, they can be viewed as a string of beads, with each bead representing a single nucleotide or amino acid monomer linked together through covalent chemical bonds into a very long chain. In most cases, the monomers within the chain have a strong propensity to interact with other amino acids or nucleotides. In DNA and RNA, this can take the form of Watson–Crick base pairs (G–C and A–T or A–U), although many more complicated interactions can and do occur. Structural features Because of the double-stranded nature of DNA, essentially all of the nucleotides take the form of Watson–Crick base pairs between nucleotides on the two complementary strands of the double helix. In contrast, both RNA and proteins are normally single-stranded. Therefore, they are not constrained by the regular geometry of the DNA double helix, and so fold into complex three-dimensional shapes dependent on their sequence. These different shapes are responsible for many of the common properties of RNA and proteins, including the formation of specific binding pockets, and the ability to catalyse biochemical reactions. DNA is optimised for encoding information DNA is an information storage macromolecule that encodes the complete set of instructions (the genome) that are required to assemble, maintain, and reproduce every living organism. DNA and RNA are both capable of encoding genetic information, because there are biochemical mechanisms which read the information coded within a DNA or RNA sequence and use it to generate a specified protein. On the other hand, the sequence information of a protein molecule is not used by cells to functionally encode genetic information. DNA has three primary attributes that allow it to be far better than RNA at encoding genetic information. First, it is normally double-stranded, so that there are a minimum of two copies of the information encoding each gene in every cell. Second, DNA has a much greater stability against breakdown than does RNA, an attribute primarily associated with the absence of the 2'-hydroxyl group within every nucleotide of DNA. Third, highly sophisticated DNA surveillance and repair systems are present which monitor damage to the DNA and repair the sequence when necessary. Analogous systems have not evolved for repairing damaged RNA molecules. Consequently, chromosomes can contain many billions of atoms, arranged in a specific chemical structure. Proteins are optimised for catalysis Proteins are functional macromolecules responsible for catalysing the biochemical reactions that sustain life. Proteins carry out all functions of an organism, for example photosynthesis, neural function, vision, and movement. The single-stranded nature of protein molecules, together with their composition of 20 or more different amino acid building blocks, allows them to fold in to a vast number of different three-dimensional shapes, while providing binding pockets through which they can specifically interact with all manner of molecules. In addition, the chemical diversity of the different amino acids, together with different chemical environments afforded by local 3D structure, enables many proteins to act as enzymes, catalyzing a wide range of specific biochemical transformations within cells. In addition, proteins have evolved the ability to bind a wide range of cofactors and coenzymes, smaller molecules that can endow the protein with specific activities beyond those associated with the polypeptide chain alone. RNA is multifunctional RNA is multifunctional, its primary function is to encode proteins, according to the instructions within a cell's DNA. They control and regulate many aspects of protein synthesis in eukaryotes. RNA encodes genetic information that can be translated into the amino acid sequence of proteins, as evidenced by the messenger RNA molecules present within every cell, and the RNA genomes of a large number of viruses. The single-stranded nature of RNA, together with tendency for rapid breakdown and a lack of repair systems means that RNA is not so well suited for the long-term storage of genetic information as is DNA. In addition, RNA is a single-stranded polymer that can, like proteins, fold into a very large number of three-dimensional structures. Some of these structures provide binding sites for other molecules and chemically active centers that can catalyze specific chemical reactions on those bound molecules. The limited number of different building blocks of RNA (4 nucleotides vs >20 amino acids in proteins), together with their lack of chemical diversity, results in catalytic RNA (ribozymes) being generally less-effective catalysts than proteins for most biological reactions. The Major Macromolecules: Branched biopolymers Carbohydrate macromolecules (polysaccharides) are formed from polymers of monosaccharides. Because monosaccharides have multiple functional groups, polysaccharides can form linear polymers (e.g. cellulose) or complex branched structures (e.g. glycogen). Polysaccharides perform numerous roles in living organisms, acting as energy stores (e.g. starch) and as structural components (e.g. chitin in arthropods and fungi). Many carbohydrates contain modified monosaccharide units that have had functional groups replaced or removed. Polyphenols consist of a branched structure of multiple phenolic subunits. They can perform structural roles (e.g. lignin) as well as roles as secondary metabolites involved in signalling, pigmentation and defense. Synthetic macromolecules Some examples of macromolecules are synthetic polymers (plastics, synthetic fibers, and synthetic rubber), graphene, and carbon nanotubes. Polymers may be prepared from inorganic matter as well as for instance in inorganic polymers and geopolymers. The incorporation of inorganic elements enables the tunability of properties and/or responsive behavior as for instance in smart inorganic polymers. See also List of biophysically important macromolecular crystal structures Small molecule Soft matter References External links Synopsis of Chapter 5, Campbell & Reece, 2002 Lecture notes on the structure and function of macromolecules Several (free) introductory macromolecule related internet-based courses Giant Molecules! by Ulysses Magee, ISSA Review Winter 2002–2003, . Cached HTML version of a missing PDF file. Retrieved March 10, 2010. The article is based on the book, Inventing Polymer Science: Staudinger, Carothers, and the Emergence of Macromolecular Chemistry by Yasu Furukawa. Molecular physics Biochemistry Polymer chemistry Polymers
Macromolecule
[ "Physics", "Chemistry", "Materials_science", "Engineering", "Biology" ]
2,096
[ "Molecular physics", "Biochemistry", "Molecules", "Materials science", "Macromolecules", " molecular", "Polymer chemistry", "nan", "Atomic", "Polymers", "Matter", " and optical physics" ]
106,270
https://en.wikipedia.org/wiki/Plasmolysis
Plasmolysis is the process in which cells lose water in a hypertonic solution. The reverse process, deplasmolysis or cytolysis, can occur if the cell is in a hypotonic solution resulting in a lower external osmotic pressure and a net flow of water into the cell. Through observation of plasmolysis and deplasmolysis, it is possible to determine the tonicity of the cell's environment as well as the rate solute molecules cross the cellular membrane. Etymology The term plasmolysis is derived from the Latin word ‘plasma’ meaning ‘matrix’ and the Greek word ‘lysis’, meaning ‘loosening’. Turgidity A plant cell in hypotonic solution will absorb water by endosmosis, so that the increased volume of water in the cell will increase pressure, making the protoplasm push against the cell wall, a condition known as turgor. Turgor makes plant cells push against each other in the same way and is the main line method of support in non-woody plant tissue. Plant cell walls resist further water entry after a certain point, known as full turgor, which stops plant cells from bursting as animal cells do in the same conditions. This is also the reason that plants stand upright. Without the stiffness of the plant cells the plant would fall under its own weight. Turgor pressure allows plants to stay firm and erect, and plants without turgor pressure (known as flaccid) wilt. A cell will begin to decline in turgor pressure only when there is no air spaces surrounding it and eventually leads to a greater osmotic pressure than that of the cell. Vacuoles play a role in turgor pressure when water leaves the cell due to hyperosmotic solutions containing solutes such as mannitol, sorbitol, and sucrose. Plasmolysis If a plant cell is placed in a hypertonic solution, the plant cell loses water and hence turgor pressure by plasmolysis: pressure decreases to the point where the protoplasm of the cell peels away from the cell wall, leaving gaps between the cell wall and the membrane and making the plant cell shrink and crumple. A continued decrease in pressure eventually leads to cytorrhysis – the complete collapse of the cell wall. Plants with cells in this condition wilt. After plasmolysis the gap between the cell wall and the cell membrane in a plant cell is filled with hypertonic solution. This is because as the solution surrounding the cell is hypertonic, exosmosis takes place and the space between the cell wall and cytoplasm is filled with solutes, as most of the water drains away and hence the concentration inside the cell becomes more hypertonic. There are some mechanisms in plants to prevent excess water loss in the same way as excess water gain. Plasmolysis can be reversed if the cell is placed in a hypotonic solution. Stomata close to help keep water in the plant so it does not dry out. Wax also keeps water in the plant. The equivalent process in animal cells is called crenation. The liquid content of the cell leaks out due to exosmosis. The cell collapses, and the cell membrane pulls away from the cell wall (in plants). Most animal cells consist of only a phospholipid bilayer (plasma membrane) and not a cell wall, therefore shrinking up under such conditions. Plasmolysis only occurs in extreme conditions and rarely occurs in nature. It is induced in the laboratory by immersing cells in strong saline or sugar (sucrose) solutions to cause exosmosis, often using Elodea plants or onion epidermal cells, which have colored cell sap so that the process is clearly visible. Methylene blue can be used to stain plant cells. Plasmolysis is mainly known as shrinking of cell membrane in hypertonic solution and great pressure. Plasmolysis can be of two types, either concave plasmolysis or convex plasmolysis. Convex plasmolysis is always irreversible while concave plasmolysis is usually reversible. During concave plasmolysis, the plasma membrane and the enclosed protoplast partially shrinks from the cell wall due to half-spherical, inwarding curving pockets forming between the plasma membrane and the cell wall. During convex plasmolysis, the plasma membrane and the enclosed protoplast shrinks completely from the cell wall, with the plasma membrane's ends in a symmetrically, spherically curved pattern. References External links Pictures of plasmolysis in Elodea and onion skin. Wilting and plasmolysis. Plant physiology Membrane biology
Plasmolysis
[ "Chemistry", "Biology" ]
991
[ "Plant physiology", "Membrane biology", "Plants", "Molecular biology" ]
106,284
https://en.wikipedia.org/wiki/Centrifuge
A centrifuge is a device that uses centrifugal force to subject a specimen to a specified constant force - for example, to separate various components of a fluid. This is achieved by spinning the fluid at high speed within a container, thereby separating fluids of different densities (e.g. cream from milk) or liquids from solids. It works by causing denser substances and particles to move outward in the radial direction. At the same time, objects that are less dense are displaced and moved to the centre. In a laboratory centrifuge that uses sample tubes, the radial acceleration causes denser particles to settle to the bottom of the tube, while low-density substances rise to the top. A centrifuge can be a very effective filter that separates contaminants from the main body of fluid. Industrial scale centrifuges are commonly used in manufacturing and waste processing to sediment suspended solids, or to separate immiscible liquids. An example is the cream separator found in dairies. Very high speed centrifuges and ultracentrifuges able to provide very high accelerations can separate fine particles down to the nano-scale, and molecules of different masses. Large centrifuges are used to simulate high gravity or acceleration environments (for example, high-G training for test pilots). Medium-sized centrifuges are used in washing machines and at some swimming pools to draw water out of fabrics. Gas centrifuges are used for isotope separation, such as to enrich nuclear fuel for fissile isotopes. History English military engineer Benjamin Robins (1707–1751) invented a whirling arm apparatus to determine drag. In 1864, Antonin Prandtl proposed the idea of a dairy centrifuge to separate cream from milk. The idea was subsequently put into practice by his brother, Alexander Prandtl, who made improvements to his brother's design, and exhibited a working butterfat extraction machine in 1875. Types A centrifuge machine can be described as a machine with a rapidly rotating container that applies centrifugal force to its contents. There are multiple types of centrifuge, which can be classified by intended use or by rotor design: Types by rotor design: Fixed-angle centrifuges are designed to hold the sample containers at a constant angle relative to the central axis. Swinging head (or swinging bucket) centrifuges, in contrast to fixed-angle centrifuges, have a hinge where the sample containers are attached to the central rotor. This allows all of the samples to swing outwards as the centrifuge is spun. Continuous tubular centrifuges do not have individual sample vessels and are used for high volume applications. Types by intended use: Laboratory centrifuges, are general-purpose instruments of several types with distinct, but overlapping, capabilities. These include clinical centrifuges, superspeed centrifuges and preparative ultracentrifuges. Analytical ultracentrifuges are designed to perform sedimentation analysis of macromolecules using the principles devised by Theodor Svedberg. Haematocrit centrifuges are used to measure the volume percentage of red blood cells in whole blood. Gas centrifuges, including Zippe-type centrifuges, for isotopic separations in the gas phase. Industrial centrifuges may otherwise be classified according to the type of separation of the high density fraction from the low density one. Generally, there are two types of centrifuges: the filtration and sedimentation centrifuges. For the filtration or the so-called screen centrifuge, the drum is perforated and is inserted with a filter, for example a filter cloth, wire mesh or lot screen. The suspension flows through the filter and the drum with the perforated wall from the inside to the outside. In this way, the solid material is restrained and can be removed. The kind of removing depends on the type of centrifuge, for example manually or periodically. Common types are: Centrifugal oil filters Screen/scroll centrifuges (Screen centrifuges, where the centrifugal acceleration allows the liquid to pass through a screen of some sort, through which the solids cannot go (due to granulometry larger than the screen gap or due to agglomeration)) Pusher centrifuges Peeler centrifuges Inverting filter centrifuges Sliding discharge centrifuges Pendulum centrifuges Sedimentation centrifuges In the centrifuges, the drum is a solid wall (not perforated). This type of centrifuge is used for the purification of a suspension. For the acceleration of the natural deposition, process of suspension the centrifuges use centrifugal force. With so-called overflow centrifuges, the suspension is drained off and the liquid is added constantly. Common types are: Separator centrifuges (Continuous liquid); common types are: Self-cleaning Centrifuges Solid bowl centrifuges Conical plate centrifuges Tubular centrifuges Decanter centrifuges, in which there is no physical separation between the solid and liquid phase, rather an accelerated settling due to centrifugal acceleration. Though most modern centrifuges are electrically powered, a hand-powered variant inspired by the whirligig has been developed for medical applications in developing countries. Many designs have been shared for free and open-source centrifuges that can be digitally manufactured. The open-source hardware designs for hand-powered centrifuge for larger volumes of fluids with a radial velocity of over 1750 rpm and over 50 N of relative centrifugal force can be completely 3-D printed for about $25. Other open hardware designs use custom 3-D printed fixtures with inexpensive electric motors to make low-cost centrifuges (e.g. the Dremelfuge that uses a Dremel power tool) or CNC cut out OpenFuge. Uses Laboratory separations A wide variety of laboratory-scale centrifuges are used in chemistry, biology, biochemistry and clinical medicine for isolating and separating suspensions and immiscible liquids. They vary widely in speed, capacity, temperature control, and other characteristics. Laboratory centrifuges often can accept a range of different fixed-angle and swinging bucket rotors able to carry different numbers of centrifuge tubes and rated for specific maximum speeds. Controls vary from simple electrical timers to programmable models able to control acceleration and deceleration rates, running speeds, and temperature regimes. Ultracentrifuges spin the rotors under vacuum, eliminating air resistance and enabling exact temperature control. Zonal rotors and continuous flow systems are capable of handing bulk and larger sample volumes, respectively, in a laboratory-scale instrument. An application in laboratories is blood separation. Blood separates into cells and proteins (RBC, WBC, platelets, etc.) and serum. DNA preparation is another common application for pharmacogenetics and clinical diagnosis. DNA samples are purified and the DNA is prepped for separation by adding buffers and then centrifuging it for a certain amount of time. The blood waste is then removed and another buffer is added and spun inside the centrifuge again. Once the blood waste is removed and another buffer is added the pellet can be suspended and cooled. Proteins can then be removed and the entire thing can be centrifuged again and the DNA can be isolated completely. Specialized cytocentrifuges are used in medical and biological laboratories to concentrate cells for microscopic examination. Isotope separation Other centrifuges, the first being the Zippe-type centrifuge, separate isotopes, and these kinds of centrifuges are in use in nuclear power and nuclear weapon programs. Aeronautics and astronautics Human centrifuges are exceptionally large centrifuges that test the reactions and tolerance of pilots and astronauts to acceleration above those experienced in the Earth's gravity. The first centrifuges used for human research were used by Erasmus Darwin, the grandfather of Charles Darwin. The first large-scale human centrifuge designed for aeronautical training was created in Germany in 1933. The US Air Force at Brooks City Base, Texas, operates a human centrifuge while awaiting completion of the new human centrifuge in construction at Wright-Patterson AFB, Ohio. The centrifuge at Brooks City Base is operated by the United States Air Force School of Aerospace Medicine for the purpose of training and evaluating prospective fighter pilots for high-g flight in Air Force fighter aircraft. The use of large centrifuges to simulate a feeling of gravity has been proposed for future long-duration space missions. Exposure to this simulated gravity would prevent or reduce the bone decalcification and muscle atrophy that affect individuals exposed to long periods of freefall. Non-Human centrifuge At the European Space Agency (ESA) technology center ESTEC (in Noordwijk, the Netherlands), an diameter centrifuge is used to expose samples in fields of life sciences as well as physical sciences. This Large Diameter Centrifuge (LDC) began operation in 2007. Samples can be exposed to a maximum of 20 times Earth's gravity. With its four arms and six freely swinging out gondolas it is possible to expose samples with different g-levels at the same time. Gondolas can be fixed at eight different positions. Depending on their locations one could e.g. run an experiment at 5 and 10g in the same run. Each gondola can hold an experiment of a maximum . Experiments performed in this facility ranged from zebra fish, metal alloys, plasma, cells, liquids, Planaria, Drosophila or plants. Industrial centrifugal separator Industrial centrifugal separator is a coolant filtration system for separating particles from liquid like, grinding machining coolant. It is usually used for non-ferrous particles separation such as, silicon, glass, ceramic, and graphite etc. The filtering process does not require any consumption parts like filter bags, which saves the earth from harm. Geotechnical centrifuge modeling Geotechnical centrifuge modeling is used for physical testing of models involving soils. Centrifuge acceleration is applied to scale models to scale the gravitational acceleration and enable prototype scale stresses to be obtained in scale models. Problems such as building and bridge foundations, earth dams, tunnels, and slope stability, including effects such as blast loading and earthquake shaking. Synthesis of materials High gravity conditions generated by centrifuge are applied in the chemical industry, casting, and material synthesis. The convection and mass transfer are greatly affected by the gravitational condition. Researchers reported that the high-gravity level can effectively affect the phase composition and morphology of the products. Commercial applications Standalone centrifuges for drying (hand-washed) clothes – usually with a water outlet. Washing machines are designed to act as centrifuges to get rid of excess water in laundry loads. Centrifuges are used in the attraction Mission: SPACE, located at Epcot in Walt Disney World, which propels riders using a combination of a centrifuge and a motion simulator to simulate the feeling of going into space. In soil mechanics, centrifuges utilize centrifugal acceleration to match soil stresses in a scale model to those found in reality. Large industrial centrifuges are commonly used in water and wastewater treatment to dry sludges. The resulting dry product is often termed cake, and the water leaving a centrifuge after most of the solids have been removed is called centrate. Large industrial centrifuges are also used in the oil industry to remove solids from the drilling fluid. Disc-stack centrifuges used by some companies in the oil sands industry to separate small amounts of water and solids from bitumen Centrifuges are used to separate cream (remove fat) from milk; see Separator (milk). Mathematical description Protocols for centrifugation typically specify the amount of acceleration to be applied to the sample, rather than specifying a rotational speed such as revolutions per minute. This distinction is important because two rotors with different diameters running at the same rotational speed will subject samples to different accelerations. During circular motion the acceleration is the product of the radius and the square of the angular velocity , and the acceleration relative to "g" is traditionally named "relative centrifugal force" (RCF). The acceleration is measured in multiples of "g" (or × "g"), the standard acceleration due to gravity at the Earth's surface, a dimensionless quantity given by the expression: where is earth's gravitational acceleration, is the rotational radius, is the angular velocity in radians per unit time This relationship may be written as or where is the rotational radius measured in millimeters (mm), and is rotational speed measured in revolutions per minute (RPM). To avoid having to perform a mathematical calculation every time, one can find nomograms for converting RCF to rpm for a rotor of a given radius. A ruler or other straight edge lined up with the radius on one scale, and the desired RCF on another scale, will point at the correct rpm on the third scale. Based on automatic rotor recognition, modern centrifuges have a button for automatic conversion from RCF to rpm and vice versa. See also Centrifugal force Centrifugation Clearing factor Honey extractor Hydroextractor Lamm equation Sedimentation coefficient Sedimentation Separation process—includes list of techniques References and notes Further reading Naesgaard et al., Modeling flow liquefaction, its mitigation, and comparison with centrifuge tests External links RCF Calculator and Nomograph Centrifugation Rotor Calculator Selection of historical centrifuges in the Virtual Laboratory of the Max Planck Institute for the History of Science Biochemical separation processes Medical devices
Centrifuge
[ "Chemistry", "Engineering", "Biology" ]
2,917
[ "Biochemistry methods", "Centrifugation", "Separation processes", "Chemical equipment", "Biochemical separation processes", "Medical devices", "Centrifuges", "Medical technology" ]
7,951,427
https://en.wikipedia.org/wiki/Higher%20spin%20alternating%20sign%20matrix
In mathematics, a higher spin alternating sign matrix is a generalisation of the alternating sign matrix (ASM), where the columns and rows sum to an integer r (the spin) rather than simply summing to 1 as in the usual alternating sign matrix definition. HSASMs are square matrices whose elements may be integers in the range −r to +r. When traversing any row or column of an ASM or HSASM, the partial sum of its entries must always be non-negative. High spin ASMs have found application in statistical mechanics and physics, where they have been found to represent symmetry groups in ice crystal formation. Some typical examples of HSASMs are shown below: The set of HSASMs is a superset of the ASMs. The extreme points of the convex hull of the set of r-spin HSASMs are themselves integer multiples of the usual ASMs. References Matrices Statistical mechanics Enumerative combinatorics
Higher spin alternating sign matrix
[ "Physics", "Mathematics" ]
195
[ "Statistical mechanics stubs", "Mathematical objects", "Enumerative combinatorics", "Combinatorics", "Matrices (mathematics)", "Combinatorics stubs", "Statistical mechanics", "Matrix stubs" ]
7,958,880
https://en.wikipedia.org/wiki/D%27Alembert%E2%80%93Euler%20condition
In mathematics and physics, especially the study of mechanics and fluid dynamics, the d'Alembert-Euler condition is a requirement that the streaklines of a flow are irrotational. Let x = x(X,t) be the coordinates of the point x into which X is carried at time t by a (fluid) flow. Let be the second material derivative of x. Then the d'Alembert-Euler condition is: The d'Alembert-Euler condition is named for Jean le Rond d'Alembert and Leonhard Euler who independently first described its use in the mid-18th century. It is not to be confused with the Cauchy–Riemann conditions. References See sections 45–48. d'Alembert–Euler conditions on the Springer Encyclopedia of Mathematics Fluid mechanics Mechanical engineering Vector calculus
D'Alembert–Euler condition
[ "Physics", "Engineering" ]
179
[ "Civil engineering", "Applied and interdisciplinary physics", "Fluid mechanics", "Mechanical engineering" ]
20,514,533
https://en.wikipedia.org/wiki/160-minute%20solar%20cycle
The 160-minute solar cycle was an apparent periodic oscillation in the solar surface which was observed in a number of early sets of data collected for helioseismology. The presence of a 160 minute cycle in the Sun is not substantiated by contemporary solar observations, and the historical signal is considered by mainstream scientists to occur as the redistribution of power from the diurnal cycle as a result of the observation window and atmospheric extinction. History The birth of helioseismology occurred in 1976 with the publications of papers from Brookes, Isaak and van der Raay and Severny, Kotov and Tsap, both of which reported upon the observation of a 160-minute solar oscillation with an amplitude of approximately two metres per second. It was rapidly realised that this frequency corresponded to one-ninth of a day, and therefore the authenticity of this signal was in some doubt. If a non-sinusoidal oscillation is present in a time-series then power will be seen in a periodogram at not only the frequency of the oscillation, but also harmonics at integer multiples of this frequency. A re-analysis of data obtained over the period of 1974–1976 by Brookes et al. showed that the evidence for a stable, phase-coherent 160 minute oscillation at a constant amplitude was far from conclusive. Although the signal could be detected the amplitude appeared variable and was lower than first reported. A re-affirmation of the 160 minute signal was obtained by analysis of data from groups in Crimea and Stanford over a long period of time. It was found that the phase showed a steady drift, indicative that the frequency being used in analysis differed slightly from that in the data. This implied that a period of 160.01 minutes produced a better fit to the data. Evidence also emerged that multiple sets of observations were phase-coherent. These facts contributed to impressions that the origin of the observed signal was stellar and not terrestrial in origin. In 1989 as higher-quality multiple-year datasets from a single site became available it was shown by Elsworth et al. that the period of the 160 minute signal was indeed 160.00 minutes, and the amplitude was dependent upon both the length and quality of data obtained in a season, with the signal more prominent at time where atmospheric condition were worse. The group were able to demonstrate that the signal may be simulated by a slightly distorted diurnal sine-wave such as may be obtained by differential atmospheric extinction. Although claims of the presence of a 160-minute period in the Sun were still presented by Kotov et al. in 1990, and 1991, the mainstream scientific establishment had moved on. Contemporary observations There are currently two solar-observation networks, the BiSON and GONG networks which consist of a global network of stations, as well as space based instruments such as the GOLF instrument aboard the SOHO spacecraft. These are able to keep the Sun under near-continuous observation, and so largely eliminate the influence of diurnal signals. Data from these instruments shows no oscillation at 160 minutes. References Solar phenomena
160-minute solar cycle
[ "Physics" ]
630
[ "Physical phenomena", "Stellar phenomena", "Solar phenomena" ]
20,515,591
https://en.wikipedia.org/wiki/Aumann%27s%20agreement%20theorem
Aumann's agreement theorem was stated and proved by Robert Aumann in a paper titled "Agreeing to Disagree", which introduced the set theoretic description of common knowledge. The theorem concerns agents who share a common prior and update their probabilistic beliefs by Bayes' rule. It states that if the probabilistic beliefs of such agents, regarding a fixed event, are common knowledge then these probabilities must coincide. Thus, agents cannot agree to disagree, that is have common knowledge of a disagreement over the posterior probability of a given event. The theorem The model used in Aumann to prove the theorem consists of a finite set of states with a prior probability , which is common to all agents. Agent 's knowledge is given by a partition of . The posterior probability of agent , denoted is the conditional probability of given . Fix an event and let be the event that for each , . The theorem claims that if the event that is common knowledge is not empty then all the numbers are the same. The proof follows directly from the definition of common knowledge. The event is a union of elements of for each . Thus, for each , . The claim of the theorem follows since the left hand side is independent of . The theorem was proved for two agents but the proof for any number of agents is similar. Extensions Monderer and Samet relaxed the assumption of common knowledge and assumed instead common -belief of the posteriors of the agents. They gave an upper bound of the distance between the posteriors . This bound approaches 0 when approaches 1. Ziv Hellman relaxed the assumption of a common prior and assumed instead that the agents have priors that are -close in a well defined metric. He showed that common knowledge of the posteriors in this case implies that they are -close. When goes to zero, Aumann's original theorem is recapitulated. Nielsen extended the theorem to non-discrete models in which knowledge is described by -algebras rather than partitions. Knowledge which is defined in terms of partitions has the property of negative introspection. That is, agents know that they do not know what they do not know. However, it is possible to show that it is impossible to agree to disagree even when knowledge does not have this property. Halpern and Kets argued that players can agree to disagree in the presence of ambiguity, even if there is a common prior. However, allowing for ambiguity is more restrictive than assuming heterogeneous priors. The impossibility of agreeing to disagree, in Aumann's theorem, is a necessary condition for the existence of a common prior. A stronger condition can be formulated in terms of bets. A bet is a set of random variables , one for each agent , such the . The bet is favorable to agent in a state if the expected value of at is positive. The impossibility of agreeing on the profitability of a bet is a stronger condition than the impossibility of agreeing to disagree, and moreover, it is a necessary and sufficient condition for the existence of a common prior. Dynamics A dialogue between two agents is a dynamic process in which, in each stage, the agents tell each other their posteriors of a given event . Upon gaining this new information, each is updating their posterior of . Aumann suggested that such a process leads the agents to commonly know their posteriors, and hence, by the agreement theorem, the posteriors at the end of the process coincide. Geanakoplos and Polemarchakis proved it for dialogues in finite state spaces. Polemarchakis showed that any pair of finite sequences of the same length that end with the same number can be obtained as a dialogue. In contrast, Di Tillio and co-authors showed that infinite dialogues must satisfy certain restrictions on their variation. Scott Aaronson studied the complexity and rate of convergence of various types of dialogues with more than two agents. References Further reading Bayesian statistics Economics theorems Game theory Probability theorems Rational choice theory Theorems in statistics
Aumann's agreement theorem
[ "Mathematics" ]
816
[ "Mathematical theorems", "Theorems in statistics", "Theorems in probability theory", "Game theory", "Mathematical problems" ]
20,516,581
https://en.wikipedia.org/wiki/Spectral%20Database%20for%20Organic%20Compounds
The Spectral Database for Organic Compounds (SDBS) is a free online searchable database hosted by the National Institute of Advanced Industrial Science and Technology (AIST) in Japan, that contains spectral data for ca 34,000 organic molecules. The database is available in English and in Japanese and it includes six types of spectra: laser Raman spectra, electron ionization mass spectra (EI-MS), Fourier-transform infrared (FT-IR) spectra, 1H nuclear magnetic resonance (1H-NMR) spectra, 13C nuclear magnetic resonance (13C-NMR) spectra and electron paramagnetic resonance (EPR) spectra. The construction of the database started in 1982. Most of the spectra were acquired and recorded in AIST and some of the collections are still being updated. Since 1997, the database can be accessed free of charge, but its use requires agreeing to a disclaimer; the total accumulated number of times accessed reached 550 million by the end of January, 2015. Content Laser Raman spectra The database contains ca 3,500 Raman spectra. The spectra were recorded in the region of 4,000 – 0 cm−1 with an excitation wavelength of 4,800 nm and a slit width of 100 – 200 micrometers. This collection is not being updated. Electron ionization mass (EI-MS) spectra The EI-MS spectra were measured in a JEOL JMS-01SG or a JEOL JMS-700 spectrometers, by the electron ionization method, with an electronic accelerating voltage of 75 eV and an ion accelerating voltage of 8 – 10 kV. The direct or reservoir inlet systems were used. The accuracy of the mass number is 0.5. This collection contains ca. 25,000 EI-MS spectra and is being updated. Fourier-transform infrared (FT-IR) spectra The FT-IR spectra were recorded using a Nicolet 170SX or a JASCO FT/IR-410 spectrometer. For spectra recorded in the Nicolet spectrometer, the data were stored at intervals of 0.5 cm−1 in the 4,000 – 2,000 cm−1 region and of 0.25 cm−1 in the 2,000 – 400 cm−1 region and the spectral resolution was 0.25 cm−1. For spectra recorded in the JASCO spectrometer, the resolution as well as the intervals was 0.5 cm−1. Samples from solids were prepared using the KBr disc or the Nujol paste methods, samples from liquids were prepared with the liquid film method. This collections contains ca 54,100 spectra and is being updated. 1H NMR spectra The 1H NMR spectra were recorded at a resonance frequency of 400 MHz with a resolution of 0.0625 Hz or at 90 MHz with a resolution of 0.125 Hz. The spectral acquisition was carried out using a flip angle of 22.5 – 30.0 degrees and a pulse repetition time of 30 seconds. Samples were prepared by dissolution in deuterated chloroform (CDCl3), deuterium oxide (D2O), or deuterated dimethylsulfoxide (DMSO-d6). Each spectrum is accompanied by a list of peaks with their respective intensities and chemical shifts reported in ppm and in Hz. Most spectra show the peak assignment. This collection contains ca 15,900 spectra and is being updated. 13C NMR spectra The 13C NMR spectra were recorded at several spectrometers with resonance frequencies ranging from 15 MHz to 100 MHz and a resolution ranging from 0.025 to 0.045 ppm. Spectra were acquired using a pulse flip angle of 22.5 – 45 degrees and a pulse repetition time of 4 – 7 seconds. Samples were prepared by dissolution in CDCl3, D2O, or DMSO-d6. Each spectrum is accompanied by a list of the observed peaks with their respective chemical shifts in ppm and their intensities. Most spectra show the peak assignment. This collection contains ca 14,200 spectra and is being updated. Electron paramagnetic resonance (EPR) spectra This collection contains ca 2,000 spectra. The measuring conditions and sample preparation is described for each particular spectrum. This collection stopped being updated in 1987. Searching the database Direct searches The database can be searched by entering one or more of the following parameters: chemical name (is possible to request partial or full matching), molecular formula, number of different types of atoms present in the molecule (as a single value or as a range of values), molecular weight (as a single value or as a range of values), CAS Registry Number or SDBS number. In all cases “%” or “*” can be used as wildcards. The result of the search includes all the available spectra for the search parameters entered. Results can be sorted by molecular weight, number of carbons or SDBS number in ascending or descending order. Reverse searches If a spectrum of an unknown chemical compound is available, a reverse search can be carried out by entering the values of the chemical shift, frequency or mass of the peaks in the NMR, FT-IR or EI-MS spectrum respectively. This type of search affords all the chemical compounds in the database that have the entered spectral characteristics. References External links SDBS website. Chemical databases Databases in Japan Nuclear magnetic resonance Infrared spectroscopy Mass spectrometry Raman spectroscopy
Spectral Database for Organic Compounds
[ "Physics", "Chemistry" ]
1,115
[ "Nuclear magnetic resonance", "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Chemical databases", "Infrared spectroscopy", "Mass spectrometry", "Nuclear physics", "Spectroscopy", "Matter" ]
20,519,569
https://en.wikipedia.org/wiki/Dimension%20theory%20%28algebra%29
In mathematics, dimension theory is the study in terms of commutative algebra of the notion dimension of an algebraic variety (and by extension that of a scheme). The need of a theory for such an apparently simple notion results from the existence of many definitions of dimension that are equivalent only in the most regular cases (see Dimension of an algebraic variety). A large part of dimension theory consists in studying the conditions under which several dimensions are equal, and many important classes of commutative rings may be defined as the rings such that two dimensions are equal; for example, a regular ring is a commutative ring such that the homological dimension is equal to the Krull dimension. The theory is simpler for commutative rings that are finitely generated algebras over a field, which are also quotient rings of polynomial rings in a finite number of indeterminates over a field. In this case, which is the algebraic counterpart of the case of affine algebraic sets, most of the definitions of the dimension are equivalent. For general commutative rings, the lack of geometric interpretation is an obstacle to the development of the theory; in particular, very little is known for non-noetherian rings. (Kaplansky's Commutative rings gives a good account of the non-noetherian case.) Throughout the article, denotes Krull dimension of a ring and the height of a prime ideal (i.e., the Krull dimension of the localization at that prime ideal). Rings are assumed to be commutative except in the last section on dimensions of non-commutative rings. Basic results Let R be a noetherian ring or valuation ring. Then If R is noetherian, this follows from the fundamental theorem below (in particular, Krull's principal ideal theorem), but it is also a consequence of a more precise result. For any prime ideal in R, for any prime ideal in that contracts to . This can be shown within basic ring theory (cf. Kaplansky, commutative rings). In addition, in each fiber of , one cannot have a chain of primes ideals of length . Since an artinian ring (e.g., a field) has dimension zero, by induction one gets a formula: for an artinian ring R, Local rings Fundamental theorem Let be a noetherian local ring and I a -primary ideal (i.e., it sits between some power of and ). Let be the Poincaré series of the associated graded ring . That is, where refers to the length of a module (over an artinian ring ). If generate I, then their image in have degree 1 and generate as -algebra. By the Hilbert–Serre theorem, F is a rational function with exactly one pole at of order . Since we find that the coefficient of in is of the form That is to say, is a polynomial in n of degree . P is called the Hilbert polynomial of . We set . We also set to be the minimum number of elements of R that can generate an -primary ideal of R. Our ambition is to prove the fundamental theorem: Since we can take s to be , we already have from the above. Next we prove by induction on . Let be a chain of prime ideals in R. Let and x a nonzero nonunit element in D. Since x is not a zero-divisor, we have the exact sequence The degree bound of the Hilbert-Samuel polynomial now implies that . (This essentially follows from the Artin–Rees lemma; see Hilbert–Samuel function for the statement and the proof.) In , the chain becomes a chain of length and so, by inductive hypothesis and again by the degree estimate, The claim follows. It now remains to show More precisely, we shall show: (Notice: is then -primary.) The proof is omitted. It appears, for example, in Atiyah–MacDonald. But it can also be supplied privately; the idea is to use prime avoidance. Consequences of the fundamental theorem Let be a noetherian local ring and put . Then , since a basis of lifts to a generating set of by Nakayama. If the equality holds, then R is called a regular local ring. , since . (Krull's principal ideal theorem) The height of the ideal generated by elements in a noetherian ring is at most s. Conversely, a prime ideal of height s is minimal over an ideal generated by s elements. (Proof: Let be a prime ideal minimal over such an ideal. Then . The converse was shown in the course of the proof of the fundamental theorem.) Proof: Let generate a -primary ideal and be such that their images generate a -primary ideal. Then for some s. Raising both sides to higher powers, we see some power of is contained in ; i.e., the latter ideal is -primary; thus, . The equality is a straightforward application of the going-down property. Q.E.D. Proof: If are a chain of prime ideals in R, then are a chain of prime ideals in while is not a maximal ideal. Thus, . For the reverse inequality, let be a maximal ideal of and . Clearly, . Since is then a localization of a principal ideal domain and has dimension at most one, we get by the previous inequality. Since is arbitrary, it follows . Q.E.D. Nagata's altitude formula Proof: First suppose is a polynomial ring. By induction on the number of variables, it is enough to consider the case . Since R is flat over R, By Noether's normalization lemma, the second term on the right side is: Next, suppose is generated by a single element; thus, . If I = 0, then we are already done. Suppose not. Then is algebraic over R and so . Since R is a subring of R, and so since is algebraic over . Let denote the pre-image in of . Then, as , by the polynomial case, Here, note that the inequality is the equality if R is catenary. Finally, working with a chain of prime ideals, it is straightforward to reduce the general case to the above case. Q.E.D. Homological methods Regular rings Let R be a noetherian ring. The projective dimension of a finite R-module M is the shortest length of any projective resolution of M (possibly infinite) and is denoted by . We set ; it is called the global dimension of R. Assume R is local with residue field k. Proof: We claim: for any finite R-module M, By dimension shifting (cf. the proof of Theorem of Serre below), it is enough to prove this for . But then, by the local criterion for flatness, Now, completing the proof. Q.E.D. Remark: The proof also shows that if M is not free and is the kernel of some surjection from a free module to M. Proof: If , then M is R-free and thus is -free. Next suppose . Then we have: as in the remark above. Thus, by induction, it is enough to consider the case . Then there is a projective resolution: , which gives: But Hence, is at most 1. Q.E.D. Proof: If R is regular, we can write , a regular system of parameters. An exact sequence , some f in the maximal ideal, of finite modules, , gives us: But f here is zero since it kills k. Thus, and consequently . Using this, we get: The proof of the converse is by induction on . We begin with the inductive step. Set , among a system of parameters. To show R is regular, it is enough to show is regular. But, since , by inductive hypothesis and the preceding lemma with , The basic step remains. Suppose . We claim if it is finite. (This would imply that R is a semisimple local ring; i.e., a field.) If that is not the case, then there is some finite module with and thus in fact we can find M with . By Nakayama's lemma, there is a surjection from a free module F to M whose kernel K is contained in . Since , the maximal ideal is an associated prime of R; i.e., for some nonzero s in R. Since , . Since K is not zero and is free, this implies , which is absurd. Q.E.D. Proof: Let R be a regular local ring. Then , which is an integrally closed domain. It is a standard algebra exercise to show this implies that R is an integrally closed domain. Now, we need to show every divisorial ideal is principal; i.e., the divisor class group of R vanishes. But, according to Bourbaki, Algèbre commutative, chapitre 7, §. 4. Corollary 2 to Proposition 16, a divisorial ideal is principal if it admits a finite free resolution, which is indeed the case by the theorem. Q.E.D. Depth Let R be a ring and M a module over it. A sequence of elements in is called an M-regular sequence if is not a zero-divisor on and is not a zero divisor on for each . A priori, it is not obvious whether any permutation of a regular sequence is still regular (see the section below for some positive answer). Let R be a local Noetherian ring with maximal ideal and put . Then, by definition, the depth of a finite R-module M is the supremum of the lengths of all M-regular sequences in . For example, we have consists of zerodivisors on M is associated with M. By induction, we find for any associated primes of M. In particular, . If the equality holds for M = R, R is called a Cohen–Macaulay ring. Example: A regular Noetherian local ring is Cohen–Macaulay (since a regular system of parameters is an R-regular sequence). In general, a Noetherian ring is called a Cohen–Macaulay ring if the localizations at all maximal ideals are Cohen–Macaulay. We note that a Cohen–Macaulay ring is universally catenary. This implies for example that a polynomial ring is universally catenary since it is regular and thus Cohen–Macaulay. Proof: We first prove by induction on n the following statement: for every R-module M and every M-regular sequence in , The basic step n = 0 is trivial. Next, by inductive hypothesis, . But the latter is zero since the annihilator of N contains some power of . Thus, from the exact sequence and the fact that kills N, using the inductive hypothesis again, we get proving (). Now, if , then we can find an M-regular sequence of length more than n and so by () we see . It remains to show if . By () we can assume n = 0. Then is associated with M; thus is in the support of M. On the other hand, It follows by linear algebra that there is a nonzero homomorphism from N to M modulo ; hence, one from N to M by Nakayama's lemma. Q.E.D. The Auslander–Buchsbaum formula relates depth and projective dimension. Proof: We argue by induction on , the basic case (i.e., M free) being trivial. By Nakayama's lemma, we have the exact sequence where F is free and the image of f is contained in . Since what we need to show is . Since f kills k, the exact sequence yields: for any i, Note the left-most term is zero if . If , then since by inductive hypothesis, we see If , then and it must be Q.E.D. As a matter of notation, for any R-module M, we let One sees without difficulty that is a left-exact functor and then let be its j-th right derived functor, called the local cohomology of R. Since , via abstract nonsense, This observation proves the first part of the theorem below. Proof: 1. is already noted (except to show the nonvanishing at the degree equal to the depth of M; use induction to see this) and 3. is a general fact by abstract nonsense. 2. is a consequence of an explicit computation of a local cohomology by means of Koszul complexes (see below). Koszul complex Let R be a ring and x an element in it. We form the chain complex K(x) given by for i = 0, 1 and for any other i with the differential For any R-module M, we then get the complex with the differential and let be its homology. Note: More generally, given a finite sequence of elements in a ring R, we form the tensor product of complexes: and let its homology. As before, We now have the homological characterization of a regular sequence. A Koszul complex is a powerful computational tool. For instance, it follows from the theorem and the corollary (Here, one uses the self-duality of a Koszul complex; see Proposition 17.15. of Eisenbud, Commutative Algebra with a View Toward Algebraic Geometry.) Another instance would be Remark: The theorem can be used to give a second quick proof of Serre's theorem, that R is regular if and only if it has finite global dimension. Indeed, by the above theorem, and thus . On the other hand, as , the Auslander–Buchsbaum formula gives . Hence, . We next use a Koszul homology to define and study complete intersection rings. Let R be a Noetherian local ring. By definition, the first deviation of R is the vector space dimension where is a system of parameters. By definition, R is a complete intersection ring if is the dimension of the tangent space. (See Hartshorne for a geometric meaning.) Injective dimension and Tor dimensions Let R be a ring. The injective dimension of an R-module M denoted by is defined just like a projective dimension: it is the minimal length of an injective resolution of M. Let be the category of R-modules. Proof: Suppose . Let M be an R-module and consider a resolution where are injective modules. For any ideal I, which is zero since is computed via a projective resolution of . Thus, by Baer's criterion, N is injective. We conclude that . Essentially by reversing the arrows, one can also prove the implication in the other way. Q.E.D. The theorem suggests that we consider a sort of a dual of a global dimension: It was originally called the weak global dimension of R but today it is more commonly called the Tor dimension of R. Remark: for any ring R, . Dimensions of non-commutative rings Let A be a graded algebra over a field k. If V is a finite-dimensional generating subspace of A, then we let and then put It is called the Gelfand–Kirillov dimension of A. It is easy to show is independent of a choice of V. Given a graded right (or left) module M over A one may similarly define the Gelfand-Kirillov dimension of M. Example: If A is finite-dimensional, then gk(A) = 0. If A is an affine ring, then gk(A) = Krull dimension of A. Example: If is the n-th Weyl algebra then See also Multiplicity theory Bass number Perfect complex amplitude Notes References Part II of . Chapter 10 of . Kaplansky, Irving, Commutative rings, Allyn and Bacon, 1970. Dimension Commutative algebra
Dimension theory (algebra)
[ "Physics", "Mathematics" ]
3,313
[ "Geometric measurement", "Physical quantities", "Fields of abstract algebra", "Theory of relativity", "Commutative algebra", "Dimension" ]
20,520,482
https://en.wikipedia.org/wiki/Fermi%20contact%20interaction
The Fermi contact interaction is the magnetic interaction between an electron and an atomic nucleus. Its major manifestation is in electron paramagnetic resonance and nuclear magnetic resonance spectroscopies, where it is responsible for the appearance of isotropic hyperfine coupling. This requires that the electron occupy an s-orbital. The interaction is described with the parameter A, which takes the units megahertz. The magnitude of A is given by this relationships and where A is the energy of the interaction, μn is the nuclear magnetic moment, μe is the electron magnetic dipole moment, Ψ(0) is the value of the electron wavefunction at the nucleus, and denotes the quantum mechanical spin coupling. It has been pointed out that it is an ill-defined problem because the standard formulation assumes that the nucleus has a magnetic dipolar moment, which is not always the case. Use in magnetic resonance spectroscopy Roughly, the magnitude of A indicates the extent to which the unpaired spin resides on the nucleus. Thus, knowledge of the A values allows one to map the singly occupied molecular orbital. History The interaction was first derived by Enrico Fermi in 1930. A classical derivation of this term is contained in "Classical Electrodynamics" by J. D. Jackson. In short, the classical energy may be written in terms of the energy of one magnetic dipole moment in the magnetic field B(r) of another dipole. This field acquires a simple expression when the distance r between the two dipoles goes to zero, since References Magnetostatics Magnetism Electric and magnetic fields in matter
Fermi contact interaction
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
325
[ "Condensed matter physics", "Electric and magnetic fields in matter", "Materials science" ]
20,521,094
https://en.wikipedia.org/wiki/Gemstone%20irradiation
Gemstone irradiation is a process in which a gemstone is exposed to artificial radiation in order to enhance its optical properties. High levels of ionizing radiation can change the atomic structure of the gemstone's crystal lattice, which in turn alters the optical properties within it. As a result, the gemstone's color may be significantly altered or the visibility of its inclusions may be lessened. The process, widely practiced in jewelry industry, is done in either a nuclear reactor for neutron bombardment, a particle accelerator for electron bombardment, or a gamma ray facility using the radioactive isotope cobalt-60. The irradiation treatment has enabled the creation of gemstone colors that do not exist or are extremely rare in nature. However, the process, particularly when done in a nuclear reactor, can make the gemstones radioactive. Health risks related to the residual radioactivity in the irradiated gemstones have led to government regulations in many countries. Radioactivity and regulations The term irradiation broadly refers to the exposure of matter to subatomic particles or electromagnetic radiation across the entire spectrum, which includes—in order of increasing frequency and decreasing wavelength—infrared, visible light, ultraviolet, X-rays, and gamma rays. Certain natural gemstone colors, such as blue-to-green colors in diamonds or red colors in zircon, are the results of the exposure to natural radiation in the earth, which is usually alpha or beta particle. The limited penetrating ability of these particles result in partial coloring of the gemstone's surface. Only high-energy radiation such as gamma ray or neutron can produce fully saturated body colors, and the sources of these types of radiation are rare in nature, which necessitates the artificial treatment in jewelry industry. The process, particularly when done in a nuclear reactor for neutron bombardment, can make gemstones radioactive. Neutrons penetrate the gemstones easily and may cause visually pleasing uniform coloration, but also penetrate into the atomic nucleus and cause the excited nucleus to decay, thereby inducing radioactivity. So neutron-treated gemstones are set aside afterward for a couple of months to several years to allow any residual radioactivity to decay, until they reach a safe level of less than to depending on the country. The first documented artificially irradiated gemstone was created by English chemist William Crookes in 1905 by burying a colorless diamond in powdered radium bromide. After having been kept there for 16 months, the diamond became olive green. This method produces a dangerous degree of long-term residual radioactivity and is no longer in use. Some of these radium-treated diamonds—which are still occasionally put on sale and can be detected by particle detectors such as the Geiger counter, the scintillation counter, or the semiconductor detector—are so high in radiation emission that they may darken photographic film in minutes. The concerns for possible health risks related to the residual radioactivity of the irradiated gemstones led to government regulations in many countries. In the United States, the Nuclear Regulatory Commission (NRC) has set strict limits on the allowable levels of residual radioactivity before an irradiated gemstone can be distributed in the country. All neutron- or electron beam-irradiated gemstones must be tested by an NRC-licensee prior to release for sales; however, when treated in a cobalt-60 gamma ray facility, gemstones do not become radioactive and thus are not under NRC authority. In India, the Board of Radiation and Isotope Technology (BRIT), the industrial unit of the Department of Atomic Energy, conducts the process for private sectors. In Thailand, the Office of Atoms for Peace (OAP) did the same, irradiating of gemstones from 1993 to 2003, until the Thailand Institute of Nuclear Technology was established in 2006 and housed the Gem Irradiation Center to provide the service. Materials and results The most commonly irradiated gemstone is topaz, which usually becomes blue after the process. Intensely blue topaz does not exist in nature and is the result of artificial irradiation. According to the American Gem Trade Association, approximately 30 million carats () of topaz are irradiated every year globally, 40 percent of which were done in the United States as of 1988. Dark-blue varieties of topaz, including American Super Blue and London Blue, are the results of neutron bombardment, while lighter sky-blue ones are often those of electron bombardment. Swiss Blue, subtly lighter than the US variety, is the result of a combination of the two methods. Diamonds are mainly irradiated to become blue-green or green, although other colors are possible. When light-to-medium-yellow diamonds are treated with gamma rays they may become green; with a high-energy electron beam, blue. The difference in results may be caused by local heating of the stones, which occurs when the latter method is used. Colorless beryls, also called goshenite, become pure yellow when irradiated, which are called golden beryl or heliodor. Quartz crystals turn "smoky" or light gray upon irradiation if they contain an aluminum impurity, or amethyst if small amounts of iron are present in them; either of the results can be obtained from natural radiation as well. Pearls are irradiated to produce gray blue or gray-to-black colors. Methods of using a cobalt-60 gamma ray facility to darken white Akoya pearls were patented in the early-1960s. But the gamma ray treatment does not alter the color of the pearl's nacre, therefore is not effective if the pearl has a thick or non-transparent nacre. Most black pearls available in markets prior to the late-1970s had been either irradiated or dyed. Uniformity of coloration Gemstones that have been subjected to artificial irradiation generally show no visible evidence of the process, although some diamonds irradiated in an electron beam may show color concentrations around the culet or along the keel line. Color stability In some cases, the new colors induced by artificial irradiation may fade rapidly when exposed to light or gentle heat, so some laboratories submit them to a "fade test" to determine color stability. Sometimes colorless or pink beryls become deep blue upon irradiation, which are called Maxixe-type beryl. However, the color easily fades when exposed to heat or light, so it has no practical jewelry application. Notes References Citations Works cited . . Gemstones Nuclear technology Radiation
Gemstone irradiation
[ "Physics", "Chemistry" ]
1,331
[ "Transport phenomena", "Physical phenomena", "Nuclear technology", "Waves", "Materials", "Radiation", "Gemstones", "Nuclear physics", "Matter" ]
20,526,146
https://en.wikipedia.org/wiki/Amanullin
Amanullin is a cyclic peptide. It is an amatoxin, all of which are found in several members of the mushroom genus Amanita. The oral of amanullin is approximately 20 mg/kg in mice; however, it is non-toxic in humans. Toxicology Like other amatoxins, amanullin is an inhibitor of RNA polymerase II. Amanullin has a species dependent and specific attraction to the enzyme RNA polymerase II. Upon ingestion, it binds to the RNA polymerase II enzyme, effectively causing cytolysis of hepatocytes (liver cells). See also Mushroom poisoning References External links Amatoxins REVISED Poisonous Mushrooms (German) Peptides Amatoxins Hepatotoxins
Amanullin
[ "Chemistry" ]
156
[ "Biomolecules by chemical classification", "Peptides", "Molecular biology" ]
20,526,418
https://en.wikipedia.org/wiki/Direct-quadrature-zero%20transformation
The direct-quadrature-zero (DQZ, DQ0 or DQO, sometimes lowercase) transformation or zero-direct-quadrature (0DQ or ODQ, sometimes lowercase) transformation is a tensor that rotates the reference frame of a three-element vector or a three-by-three element matrix in an effort to simplify analysis. The DQZ transform is the product of the Clarke transform and the Park transform, first proposed in 1929 by Robert H. Park. The DQZ transform is often used in the context of electrical engineering with three-phase circuits. The transform can be used to rotate the reference frames of AC waveforms such that they become DC signals. Simplified calculations can then be carried out on these DC quantities before performing the inverse transform to recover the actual three-phase AC results. As an example, the DQZ transform is often used in order to simplify the analysis of three-phase synchronous machines or to simplify calculations for the control of three-phase inverters. In analysis of three-phase synchronous machines, the transformation transfers three-phase stator and rotor quantities into a single rotating reference frame to eliminate the effect of time-varying inductances and transform the system into a linear time-invariant system Introduction The DQZ transform is made of the Park and Clarke transformation matrices. The Clarke transform (named after Edith Clarke) converts vectors in the ABC reference frame to the XYZ (also called αβγ) reference frame. The primary value of the Clarke transform is isolating that part of the ABC-referenced vector, which is common to all three components of the vector; it isolates the common-mode component (i.e., the Z component). The power-invariant, right-handed, uniformly-scaled Clarke transformation matrix is . To convert an ABC-referenced column vector to the XYZ reference frame, the vector must be pre-multiplied by the Clarke transformation matrix: . And, to convert back from an XYZ-referenced column vector to the ABC reference frame, the vector must be pre-multiplied by the inverse Clarke transformation matrix: . The Park transform (named after Robert H. Park) converts vectors in the XYZ reference frame to the DQZ reference frame. The Park transform's primary value is to rotate a vector's reference frame at an arbitrary frequency. The Park transform shifts the signal's frequency spectrum such that the arbitrary frequency now appears as "dc," and the old dc appears as the negative of the arbitrary frequency. The Park transformation matrix is , where θ is the instantaneous angle of an arbitrary ω frequency. To convert an XYZ-referenced vector to the DQZ reference frame, the column vector signal must be pre-multiplied by the Park transformation matrix: . And, to convert back from a DQZ-referenced vector to the XYZ reference frame, the column vector signal must be pre-multiplied by the inverse Park transformation matrix: . The Clarke and Park transforms together form the DQZ transform: The inverse transform is: To convert an ABC-referenced vector to the DQZ reference frame, the column vector signal must be pre-multiplied by the DQZ transformation matrix: . And, to convert back from a DQZ-referenced vector to the ABC reference frame, the column vector signal must be pre-multiplied by the inverse DQZ transformation matrix: . To understand this transform better, a derivation of the transform is included. Derivation The Park transform derivation The Park transform is based on the concept of the dot product and projections of vectors onto other vectors. First, let us imagine two unit vectors, and (the unit vectors, or axes, of the new reference frame from the perspective of the old reference frame), and a third, arbitrary, vector . We can define the two unit vectors and the random vector in terms of their Cartesian coordinates in the old reference frame: , where and are the unit basis vectors of the old coordinate system and is the angle between the and unit vectors (i.e., the angle between the two reference frames). The projection of the arbitrary vector onto each of the two new unit vectors implies the dot product: . So, is the projection of onto the axis, and is the projection of onto the axis. These new vector components, and , together compose the new vector , the original vector in terms of the new DQ reference frame. Notice that the positive angle above caused the arbitrary vector to rotate backward when transitioned to the new DQ reference frame. In other words, its angle concerning the new reference frame is less than its angle to the old reference frame. This is because the reference frame, not the vector, was rotated forwards. Actually, a forward rotation of the reference frame is identical to a negative rotation of the vector. If the old reference frame were rotating forwards, such as in three-phase electrical systems, then the resulting DQ vector remains stationary. A single matrix equation can summarize the operation above: . This tensor can be expanded to three-dimensional problems, where the axis about which rotation occurs is left unaffected. In the following example, the rotation is about the Z axis, but any axis could have been chosen: . From a linear algebra perspective, this is simply a clockwise rotation about the z-axis and is mathematically equivalent to the trigonometric difference angle formulae. The Clarke transform derivation The ABC unit basis vectors Consider a three-dimensional space with unit basis vectors A, B, and C. The sphere in the figure below is used to show the scale of the reference frame for context and the box is used to provide a rotational context. Typically, in electrical engineering (or any other context that uses three-phase systems), the three-phase components are shown in a two-dimensional perspective. However, given the three phases can change independently, they are by definition orthogonal to each other. This implies a three-dimensional perspective, as shown in the figure above. So, the two-dimensional perspective is really showing the projection of the three-dimensional reality onto a plane. Three-phase problems are typically described as operating within this plane. In reality, the problem is likely a balanced-phase problem (i.e., vA + vB + vC = 0) and the net vector is always on this plane. The AYC' unit basis vectors To build the Clarke transform, we actually use the Park transform in two steps. Our goal is to rotate the C axis into the corner of the box. This way the rotated C axis will be orthogonal to the plane of the two-dimensional perspective mentioned above. The first step towards building the Clarke transform requires rotating the ABC reference frame about the A axis. So, this time, the 1 will be in the first element of the Park transform: The following figure shows how the ABC reference frame is rotated to the AYC' reference frame when any vector is pre-multiplied by the K1 matrix. The C' and Y axes now point to the midpoints of the edges of the box, but the magnitude of the reference frame has not changed (i.e., the sphere did not grow or shrink).This is due to the fact that the norm of the K1 tensor is 1: ||K1|| = 1. This means that any vector in the ABC reference frame will continue to have the same magnitude when rotated into the AYC' reference frame. The XYZ unit basis vectors Next, the following tensor rotates the vector about the new Y axis in a counter-clockwise direction with respect to the Y axis (The angle was chosen so that the C' axis would be pointed towards the corner of the box.): , or . Notice that the distance from the center of the sphere to the midpoint of the edge of the box is but from the center of the sphere to the corner of the box is . That is where the 35.26° angle came from. The angle can be calculated using the dot product. Let be the unit vector in the direction of C' and let be a unit vector in the direction of the corner of the box at . Because where is the angle between and we have The norm of the K2 matrix is also 1, so it too does not change the magnitude of any vector pre-multiplied by the K2 matrix. The zero plane At this point, the Z axis is now orthogonal to the plane in which any ABC vector without a common-mode component can be found. Any balanced ABC vector waveform (a vector without a common mode) will travel about this plane. This plane will be called the zero plane and is shown below by the hexagonal outline. The X and Y basis vectors are on the zero plane. Notice that the X axis is parallel to the projection of the A axis onto the zero plane. The X axis is slightly larger than the projection of the A axis onto the zero plane. It is larger by a factor of . The arbitrary vector did not change magnitude through this conversion from the ABC reference frame to the XYZ reference frame (i.e., the sphere did not change size). This is true for the power-invariant form of the Clarke transform. The following figure shows the common two-dimensional perspective of the ABC and XYZ reference frames. It might seem odd that though the magnitude of the vector did not change, the magnitude of its components did (i.e., the X and Y components are longer than the A, B, and C components). Perhaps this can be intuitively understood by considering that for a vector without common mode, what took three values (A, B, and C components) to express, now only takes 2 (X and Y components) since the Z component is zero. Therefore, the X and Y component values must be larger to compensate. Combination of tensors The power-invariant Clarke transformation matrix is a combination of the K1 and K2 tensors: , or . Notice that when multiplied through, the bottom row of the KC matrix is 1/, not 1/3. (Edith Clarke did use 1/3 for the power-variant case.) The Z component is not exactly the average of the A, B, and C components. If only the bottom row elements were changed to be 1/3, then the sphere would be squashed along the Z axis. This means that the Z component would not have the same scaling as the X and Y components. As things are written above, the norm of the Clarke transformation matrix is still 1, which means that it only rotates an ABC vector but does not scale it. The same cannot be said for Clarke's original transform. It is easy to verify (by matrix multiplication) that the inverse of KC is Power-variant form It is sometimes desirable to scale the Clarke transformation matrix so that the X axis is the projection of the A axis onto the zero plane. To do this, we uniformly apply a scaling factor of and a to the zero component to get the power-variant Clarke transformation matrix: or . This will necessarily shrink the sphere by a factor of as shown below. Notice that this new X axis is exactly the projection of the A axis onto the zero plane. With the power-variant Clarke transform, the magnitude of the arbitrary vector is smaller in the XYZ reference frame than in the ABC reference frame (the norm of the transform is ), but the magnitudes of the individual vector components are the same (when there is no common mode). So, as an example, a signal defined by becomes, in the XYZ reference frame, , a new vector whose components are the same magnitude as the original components: 1. In many cases, this is an advantageous quality of the power-variant Clarke transform. The DQZ transform The DQZ transformation uses the Clarke transform to convert ABC-referenced vectors into two differential-mode components (i.e., X and Y) and one common-mode component (i.e., Z) and then applies the Park transform to rotate the reference frame about the Z axis at some given angle. The X component becomes the D component, which is in direct alignment with the vector of rotation, and the Y component becomes the Q component, which is at a quadrature angle to the direct component. The DQZ transform is . Example In electric systems, very often the A, B, and C values are oscillating in such a way that the net vector is spinning. In a balanced system, the vector is spinning about the Z axis. Very often, it is helpful to rotate the reference frame such that the majority of the changes in the ABC values, due to this spinning, are canceled out and any finer variations become more obvious. This is incredibly useful as it now transforms the system into a linear time-invariant system. The DQZ transformation can be thought of in geometric terms as the projection of the three separate sinusoidal phase quantities onto two axes rotating with the same angular velocity as the sinusoidal phase quantities. Shown above is the DQZ transform as applied to the stator of a synchronous machine. There are three windings separated by 120 physical degrees. The three phase currents are equal in magnitude and are separated from one another by 120 electrical degrees. The three phase currents lag their corresponding phase voltages by . The DQ axes are shown rotating with angular velocity equal to , the same angular velocity as the phase voltages and currents. The D axis makes an angle with the phase A winding which has been chosen as the reference. The currents and are constant dc quantities. Comparison with other transforms Park's transformation The transformation originally proposed by Park differs slightly from the one given above. In Park's transformation, the q-axis is ahead of d-axis, qd0, and the angle is the angle between phase-a and d-axis, as given below. and αβγ transform The dq0 transform is conceptually similar to the αβγ transform. Whereas the dq0 transform is the projection of the phase quantities onto a rotating two-axis reference frame, the αβγ transform can be thought of as the projection of the phase quantities onto a stationary two-axis reference frame. References In-line references General references C.J. O'Rourke et al. "A Geometric Interpretation of Reference Frames and Transformations: dq0, Clarke, and Park," in IEEE Transactions on Energy Conversion, vol. 34, no. 4, pp. 2070-2083, Dec. 2019. J. Lewis Blackburn Symmetrical Components for Power Systems Engineering, Marcel Dekker, New York (1993), . Zhang et al. A three-phase inverter with a neutral leg with space vector modulation IEEE APEC '97 Conference Proceedings (1997). T.A.Lipo, “A Cartesian Vector Approach To Reference Theory of AC Machines”, Int. Conference On Electric Machines, Laussane, Sept. 18–24, 1984. See also Symmetrical components αβγ transform Vector control (motor) Electrical engineering Synchronous machines
Direct-quadrature-zero transformation
[ "Engineering" ]
3,078
[ "Electrical engineering", "Synchronous machines" ]
6,051,874
https://en.wikipedia.org/wiki/Abohm
The abohm is the derived unit of electrical resistance in the emu-cgs (centimeter-gram-second) system of units (emu stands for "electromagnetic units"). One abohm corresponds to 10−9 ohms in the SI system of units, which is a nanoohm. The emu-cgs (or "electromagnetic cgs") units are one of several systems of electromagnetic units within the centimetre gram second system of units; others include esu-cgs, Gaussian units, and Heaviside–Lorentz units. In these other systems, the abohm is not one of the units. When a current of one abampere (1 abA) flows through a resistance of 1 abohm, the resulting potential difference across the component is one abvolt (1 abV). The name abohm was introduced by Kennelly in 1903 as a short name for the long name (absolute) electromagnetic cgs unit of resistance that was in use since the adoption of the cgs system in 1875. The abohm was coherent with the emu-cgs system, in contrast to the ohm, the practical unit of resistance that had been adopted too in 1875. See also Electrical resistance Statohm References The McGraw Hill Dictionary Of Scientific and Technical Terms, . Units of electrical resistance Centimetre–gram–second system of units
Abohm
[ "Physics" ]
292
[ "Units of electrical resistance", "Physical quantities", "Electrical resistance and conductance" ]
6,053,993
https://en.wikipedia.org/wiki/Distributive%20category
In mathematics, a category is distributive if it has finite products and finite coproducts and such that for every choice of objects , the canonical map is an isomorphism, and for all objects , the canonical map is an isomorphism (where 0 denotes the initial object). Equivalently, if for every object the endofunctor defined by preserves coproducts up to isomorphisms . It follows that and aforementioned canonical maps are equal for each choice of objects. In particular, if the functor has a right adjoint (i.e., if the category is cartesian closed), it necessarily preserves all colimits, and thus any cartesian closed category with finite coproducts (i.e., any bicartesian closed category) is distributive. Example The category of sets is distributive. Let , , and be sets. Then where denotes the coproduct in Set, namely the disjoint union, and denotes a bijection. In the case where , , and are finite sets, this result reflects the distributive property: the above sets each have cardinality . The categories Grp and Ab are not distributive, even though they have both products and coproducts. An even simpler category that has both products and coproducts but is not distributive is the category of pointed sets. References Further reading Category theory
Distributive category
[ "Mathematics" ]
306
[ "Functions and mappings", "Mathematical structures", "Category theory stubs", "Mathematical objects", "Fields of abstract algebra", "Mathematical relations", "Category theory" ]
6,054,639
https://en.wikipedia.org/wiki/Jensen%27s%20formula
In complex analysis, Jensen's formula relates the average magnitude of an analytic function on a circle with the number of its zeros inside the circle. The formula was introduced by and forms an important statement in the study of entire functions. Formal statement Suppose that is an analytic function in a region in the complex plane which contains the closed disk of radius about the origin, are the zeros of in the interior of (repeated according to their respective multiplicity), and that . Jensen's formula states that This formula establishes a connection between the moduli of the zeros of in the interior of and the average of on the boundary circle , and can be seen as a generalisation of the mean value property of harmonic functions. Namely, if has no zeros in , then Jensen's formula reduces to which is the mean-value property of the harmonic function . An equivalent statement of Jensen's formula that is frequently used is where denotes the number of zeros of in the disc of radius centered at the origin. Applications Jensen's formula can be used to estimate the number of zeros of an analytic function in a circle. Namely, if is a function analytic in a disk of radius centered at and if is bounded by on the boundary of that disk, then the number of zeros of in a circle of radius centered at the same point does not exceed Jensen's formula is an important statement in the study of value distribution of entire and meromorphic functions. In particular, it is the starting point of Nevanlinna theory, and it often appears in proofs of Hadamard factorization theorem, which requires an estimate on the number of zeros of an entire function. Jensen's formula is also used to prove a generalization of Paley-Wiener theorem for quasi-analytic functions with . In the field of control theory (in particular: spectral factorization methods) this generalization is often referred to as the Paley–Wiener condition. Generalizations Jensen's formula may be generalized for functions which are merely meromorphic on . Namely, assume that where and are analytic functions in having zeros at and respectively, then Jensen's formula for meromorphic functions states that Jensen's formula is a consequence of the more general Poisson–Jensen formula, which in turn follows from Jensen's formula by applying a Möbius transformation to . It was introduced and named by Rolf Nevanlinna. If is a function which is analytic in the unit disk, with zeros located in the interior of the unit disk, then for every in the unit disk the Poisson–Jensen formula states that Here, is the Poisson kernel on the unit disk. If the function has no zeros in the unit disk, the Poisson-Jensen formula reduces to which is the Poisson formula for the harmonic function . See also Paley–Wiener theorem References Sources Theorems in complex analysis
Jensen's formula
[ "Mathematics" ]
584
[ "Theorems in mathematical analysis", "Theorems in complex analysis" ]
6,054,823
https://en.wikipedia.org/wiki/Tumor%20M2-PK
Tumor M2-PK is a synonym for the dimeric form of the pyruvate kinase isoenzyme type M2 (PKM2), a key enzyme within tumor metabolism. Tumor M2-PK can be elevated in many tumor types, rather than being an organ-specific tumor marker such as PSA. Increased stool (fecal) levels are being investigated as a method of screening for colorectal tumors, and EDTA plasma levels are undergoing testing for possible application in the follow-up of various cancers. Sandwich ELISAs based on two monoclonal antibodies which specifically recognize Tumor M2-PK (the dimeric form of M2-PK) are available for the quantification of Tumor M2-PK in stool and EDTA-plasma samples respectively. As a biomarker, the amount of Tumor M2-PK in stool and EDTA-plasma reflects the specific metabolic status of the tumors. Early detection of colorectal tumors and polyps M2-PK, as measured in feces, is a potential tumor marker for colorectal cancer. When measured in feces with a cutoff value of 4 U/ml, its sensitivity has been estimated to be 85% (with a 95% confidence interval of 65 to 96%) for colon cancer and 56% (confidence interval 41–74%) for rectal cancer. Its specificity is 95%. The M2-PK test is not dependent on occult blood (ELISA method), so it can detect bleeding or non-bleeding bowel cancer and also polyps with high sensitivity and high specificity with no false negative, but false positives may occur. Most people are more willing to accept non-invasive preventive medical check-ups. Therefore, the measurement of tumor M2-PK in stool samples, with follow-up by colonoscopy to clarify the tumor M2-PK positive results, may prove to be an advance in the early detection of colorectal carcinomas. The CE marked M2-PK Test is available in form of an ELISA test for quantitative results or as point of care test to receive results within minutes. Tumor M2-PK is also useful to diagnose lung cancer and better than SCC and NSE tumor markers. With renal cell carcinoma (RCC), the M2PK test has sensitivity of 66.7 percent for metastatic RCC and 27.5 percent for nonmetastatic RCC, but M2PK test cannot detect transitional cell carcinoma of the bladder, prostate cancer and benign prostatic hyperplasia. Cancer follow-up Studies from various international working groups have revealed a significantly increased amount of Tumor M2-PK in EDTA-plasma samples of patients with renal, lung, breast, cervical and gastrointestinal tumors (oesophagus, stomach, pancreas, colon, rectum), as well as melanoma, which correlated with the tumor stage. The combination of Tumor M2-PK with the appropriate classical tumor marker, such as CEA for bowel cancer, CA 19-9 for pancreatic cancer and CA 72-4 for gastric cancer, significantly increases the sensitivity to detect various cancers. An important application of the Tumor M2-PK test in EDTA-plasma is for follow-up during tumor therapy, to monitor the success or failure of the chosen treatment, as well as predicting the chances of a “cure” and survival. If Tumor M2-PK levels decrease during therapy and then remain low after therapy it points towards successful treatment. An increase in the Tumor M2-PK values during or after therapy points towards relapse and/or metastasis. Increased Tumor M2-PK values can sometimes also occur in severe inflammatory diseases, which must be excluded by differential diagnosis. Tetrameric and dimeric PKM2 Pyruvate kinase catalyzes the last step within the glycolytic sequence, the dephosphorylation of phosphoenolpyruvate to pyruvate and is responsible for net energy production within the glycolytic pathway. Depending upon the different metabolic functions of the tissues, different isoenzymes of pyruvate kinase are expressed. M2-PK (PKM2) is the predominant pyruvate kinase isoform in proliferating cells, such as fibroblasts, embryonic cells and adult stem cells and most human tissue, including lung, bladder, kidney and thymus; M2-PK is upgregulated in many human tumors. M2-PK can occur in two different forms in proliferating cells: a tetrameric form, which consists of four subunits a dimeric form, consisting of two subunits. The tetrameric form of M2-PK has a high affinity to its substrate, phosphoenolpyruvate (PEP), and is highly active at physiological PEP concentrations. Furthermore, the tetrameric form of M2-PK is associated with several other glycolytic enzymes within the so-called glycolytic enzyme complex. Due to the close proximity of the enzymes, the association within the glycolytic enzyme complex leads to a highly effective conversion of glucose to lactate. When M2-PK is mainly in the highly active tetrameric form, which is the case in most normal cells, glucose is mostly converted to lactate, with the attendant production of energy. In contrast, the dimeric form of M2-PK has a low affinity for phosphoenolpyruvate, being nearly inactive at physiological PEP concentrations. When M2-PK is mainly in the dimeric form, which is the case in tumor cells, all phosphometabolites above pyruvate kinase accumulate and are channelled into synthetic processes which branch off from glycolytic intermediates, such as nucleic acids, phospholipids and amino acids, important cell building blocks for highly proliferating cells such as tumor cells. As a consequence of the key position of pyruvate kinase within glycolysis, the tetramer : dimer ratio of M2-PK determines whether glucose carbons are converted to pyruvate and lactate, along with the production of energy (tetrameric form), or channelled into synthetic processes (dimeric form). In tumor cells M2-PK is mainly in the dimeric form. Therefore, the dimeric form of M2-PK has been termed Tumor M2-PK. The dimerization of M2-PK in tumor cells is induced by the direct interaction of M2-PK with different oncoproteins. However, the tetramer : dimer ratio of M2-PK is not constant. Oxygen starvation or highly accumulated glycolytic intermediates, such as fructose 1,6-bisphosphate (fructose 1,6-P2) or the amino acid serine, induce the reassociation of the dimeric form of M2-PK to the tetrameric form. Consequently, due to the activation of M2-PK, glucose is converted to pyruvate and lactate under the production of energy until the fructose 1,6-P2 levels drop below a certain threshold value, which allows the dissociation of the tetrameric form of M2-PK to the dimeric form. Thereafter, the cycle of oscillation starts again when the fructose 1,6-P2 levels reach a certain upper threshold value which induces the tetramerization of M2-PK. When M2-PK is mainly in the less active dimeric form, energy is produced by the degradation of the amino acid glutamine to aspartate, pyruvate and lactate, which is termed glutaminolysis. In tumor cells the increased rate of lactate production in the presence of oxygen is termed the Warburg effect. Mutations For the first time pyruvate kinase M2 enzyme was reported with two missense mutations, H391Y and K422R, found in cells from Bloom syndrome patients, prone to develop cancer. Results show that despite the presence of mutations in the inter-subunit contact domain, the K422R and H391Y mutant proteins maintained their homotetrameric structure, similar to the wild-type protein, but showed a loss of activity of 75 and 20%, respectively. H391Y showed a 6-fold increase in affinity for its substrate phosphoenolpyruvate and behaved like a non-allosteric protein with compromised cooperative binding. However, the affinity for phosphoenolpyruvate was lost significantly in K422R. Unlike K422R, H391Y showed enhanced thermal stability, stability over a range of pH values, a lesser effect of the allosteric inhibitor Phe, and resistance toward structural alteration upon binding of the activator (fructose 1,6-bisphosphate) and inhibitor (Phe). Both mutants showed a slight shift in the pH optimum from 7.4 to 7.0. The co-expression of homotetrameric wild type and mutant PKM2 in the cellular milieu resulting in the interaction between the two at the monomer level was substantiated further by in vitro experiments. The cross-monomer interaction significantly altered the oligomeric state of PKM2 by favoring dimerisation and heterotetramerization. In silico study provided an added support in showing that hetero-oligomerization was energetically favorable. The hetero-oligomeric populations of PKM2 showed altered activity and affinity, and their expression resulted in an increased growth rate of Escherichia coli as well as mammalian cells, along with an increased rate of polyploidy. These features are known to be essential to tumor progression. Potential multi-functional protein See also glycolysis tumor metabolome PKM2 References Stool Koss K, Maxton D, Jankowski JAZ. The potential use of fecal dimeric M2 pyruvate kinase (Tumor M2-PK) in screening for colorectal cancer (CRC). Abstract from Digestive Disease Week, May 2005; Chicago, USA. Mc Loughlin R, Shiel E, Sebastian S, Ryan B, O´Connor HJ, O´Morain C. Tumor M2-PK, a novel screening tool for colorectal cancer. Abstract from Digestive Disease Week, May 2005, Chicago/USA Plasma Scientific background External links Tumor M2-PK as a diagnostic biomarker The pyruvate kinase isoenzyme type M2 EC 2.7.1 Chemical pathology Oncology
Tumor M2-PK
[ "Chemistry", "Biology" ]
2,236
[ "Biochemistry", "Chemical pathology" ]
6,057,100
https://en.wikipedia.org/wiki/Berlekamp%27s%20algorithm
In mathematics, particularly computational algebra, Berlekamp's algorithm is a well-known method for factoring polynomials over finite fields (also known as Galois fields). The algorithm consists mainly of matrix reduction and polynomial GCD computations. It was invented by Elwyn Berlekamp in 1967. It was the dominant algorithm for solving the problem until the Cantor–Zassenhaus algorithm of 1981. It is currently implemented in many well-known computer algebra systems. Overview Berlekamp's algorithm takes as input a square-free polynomial (i.e. one with no repeated factors) of degree with coefficients in a finite field and gives as output a polynomial with coefficients in the same field such that divides . The algorithm may then be applied recursively to these and subsequent divisors, until we find the decomposition of into powers of irreducible polynomials (recalling that the ring of polynomials over a finite field is a unique factorization domain). All possible factors of are contained within the factor ring The algorithm focuses on polynomials which satisfy the congruence: These polynomials form a subalgebra of R (which can be considered as an -dimensional vector space over ), called the Berlekamp subalgebra. The Berlekamp subalgebra is of interest because the polynomials it contains satisfy In general, not every GCD in the above product will be a non-trivial factor of , but some are, providing the factors we seek. Berlekamp's algorithm finds polynomials suitable for use with the above result by computing a basis for the Berlekamp subalgebra. This is achieved via the observation that Berlekamp subalgebra is in fact the kernel of a certain matrix over , which is derived from the so-called Berlekamp matrix of the polynomial, denoted . If then is the coefficient of the -th power term in the reduction of modulo , i.e.: With a certain polynomial , say: we may associate the row vector: It is relatively straightforward to see that the row vector corresponds, in the same way, to the reduction of modulo . Consequently, a polynomial is in the Berlekamp subalgebra if and only if (where is the identity matrix), i.e. if and only if it is in the null space of . By computing the matrix and reducing it to reduced row echelon form and then easily reading off a basis for the null space, we may find a basis for the Berlekamp subalgebra and hence construct polynomials in it. We then need to successively compute GCDs of the form above until we find a non-trivial factor. Since the ring of polynomials over a field is a Euclidean domain, we may compute these GCDs using the Euclidean algorithm. Conceptual algebraic explanation With some abstract algebra, the idea behind Berlekamp's algorithm becomes conceptually clear. We represent a finite field , where for some prime , as . We can assume that is square free, by taking all possible pth roots and then computing the gcd with its derivative. Now, suppose that is the factorization into irreducibles. Then we have a ring isomorphism, , given by the Chinese remainder theorem. The crucial observation is that the Frobenius automorphism commutes with , so that if we denote , then restricts to an isomorphism . By finite field theory, is always the prime subfield of that field extension. Thus, has elements if and only if is irreducible. Moreover, we can use the fact that the Frobenius automorphism is -linear to calculate the fixed set. That is, we note that is a -subspace, and an explicit basis for it can be calculated in the polynomial ring by computing and establishing the linear equations on the coefficients of polynomials that are satisfied iff it is fixed by Frobenius. We note that at this point we have an efficiently computable irreducibility criterion, and the remaining analysis shows how to use this to find factors. The algorithm now breaks down into two cases: In the case of small we can construct any , and then observe that for some there are so that and . Such a has a nontrivial factor in common with , which can be computed via the gcd. As is small, we can cycle through all possible . For the case of large primes, which are necessarily odd, one can exploit the fact that a random nonzero element of is a square with probability , and that the map maps the set of non-zero squares to , and the set of non-squares to . Thus, if we take a random element , then with good probability will have a non-trivial factor in common with . For further details one can consult. Applications One important application of Berlekamp's algorithm is in computing discrete logarithms over finite fields , where is prime and . Computing discrete logarithms is an important problem in public key cryptography and error-control coding. For a finite field, the fastest known method is the index calculus method, which involves the factorisation of field elements. If we represent the field in the usual way - that is, as polynomials over the base field , reduced modulo an irreducible polynomial of degree - then this is simply polynomial factorisation, as provided by Berlekamp's algorithm. Implementation in computer algebra systems Berlekamp's algorithm may be accessed in the PARI/GP package using the factormod command, and the WolframAlpha website. See also Polynomial factorisation Factorization of polynomials over a finite field and irreducibility tests Cantor–Zassenhaus algorithm References BSTJ Later republished in: Computer algebra Finite fields Polynomial factorization algorithms
Berlekamp's algorithm
[ "Mathematics", "Technology" ]
1,184
[ "Computer science", "Computational mathematics", "Computer algebra", "Algebra" ]
1,797,708
https://en.wikipedia.org/wiki/Numerical%20relativity
Numerical relativity is one of the branches of general relativity that uses numerical methods and algorithms to solve and analyze problems. To this end, supercomputers are often employed to study black holes, gravitational waves, neutron stars and many other phenomena described by Albert Einstein's theory of general relativity. A currently active field of research in numerical relativity is the simulation of relativistic binaries and their associated gravitational waves. Overview A primary goal of numerical relativity is to study spacetimes whose exact form is not known. The spacetimes so found computationally can either be fully dynamical, stationary or static and may contain matter fields or vacuum. In the case of stationary and static solutions, numerical methods may also be used to study the stability of the equilibrium spacetimes. In the case of dynamical spacetimes, the problem may be divided into the initial value problem and the evolution, each requiring different methods. Numerical relativity is applied to many areas, such as cosmological models, critical phenomena, perturbed black holes and neutron stars, and the coalescence of black holes and neutron stars, for example. In any of these cases, Einstein's equations can be formulated in several ways that allow us to evolve the dynamics. While Cauchy methods have received a majority of the attention, characteristic and Regge calculus based methods have also been used. All of these methods begin with a snapshot of the gravitational fields on some hypersurface, the initial data, and evolve these data to neighboring hypersurfaces. Like all problems in numerical analysis, careful attention is paid to the stability and convergence of the numerical solutions. In this line, much attention is paid to the gauge conditions, coordinates, and various formulations of the Einstein equations and the effect they have on the ability to produce accurate numerical solutions. Numerical relativity research is distinct from work on classical field theories as many techniques implemented in these areas are inapplicable in relativity. Many facets are however shared with large scale problems in other computational sciences like computational fluid dynamics, electromagnetics, and solid mechanics. Numerical relativists often work with applied mathematicians and draw insight from numerical analysis, scientific computation, partial differential equations, and geometry among other mathematical areas of specialization. History Foundations in theory Albert Einstein published his theory of general relativity in 1915. It, like his earlier theory of special relativity, described space and time as a unified spacetime subject to what are now known as the Einstein field equations. These form a set of coupled nonlinear partial differential equations (PDEs). After more than 100 years since the first publication of the theory, relatively few closed-form solutions are known for the field equations, and, of those, most are cosmological solutions that assume special symmetry to reduce the complexity of the equations. The field of numerical relativity emerged from the desire to construct and study more general solutions to the field equations by approximately solving the Einstein equations numerically. A necessary precursor to such attempts was a decomposition of spacetime back into separated space and time. This was first published by Richard Arnowitt, Stanley Deser, and Charles W. Misner in the late 1950s in what has become known as the ADM formalism. Although for technical reasons the precise equations formulated in the original ADM paper are rarely used in numerical simulations, most practical approaches to numerical relativity use a "3+1 decomposition" of spacetime into three-dimensional space and one-dimensional time that is closely related to the ADM formulation, because the ADM procedure reformulates the Einstein field equations into a constrained initial value problem that can be addressed using computational methodologies. At the time that ADM published their original paper, computer technology would not have supported numerical solution to their equations on any problem of any substantial size. The first documented attempt to solve the Einstein field equations numerically appears to be by S. G. Hahn and R. W. Lindquist in 1964, followed soon thereafter by Larry Smarr and by K. R. Eppley. These early attempts were focused on evolving Misner data in axisymmetry (also known as "2+1 dimensions"). At around the same time Tsvi Piran wrote the first code that evolved a system with gravitational radiation using a cylindrical symmetry. In this calculation Piran has set the foundation for many of the concepts used today in evolving ADM equations, like "free evolution" versus "constrained evolution", which deal with the fundamental problem of treating the constraint equations that arise in the ADM formalism. Applying symmetry reduced the computational and memory requirements associated with the problem, allowing the researchers to obtain results on the supercomputers available at the time. Early results The first realistic calculations of rotating collapse were carried out in the early eighties by Richard Stark and Tsvi Piran in which the gravitational wave forms resulting from formation of a rotating black hole were calculated for the first time. For nearly 20 years following the initial results, there were fairly few other published results in numerical relativity, probably due to the lack of sufficiently powerful computers to address the problem. In the late 1990s, the Binary Black Hole Grand Challenge Alliance successfully simulated a head-on binary black hole collision. As a post-processing step the group computed the event horizon for the spacetime. This result still required imposing and exploiting axisymmetry in the calculations. Some of the first documented attempts to solve the Einstein equations in three dimensions were focused on a single Schwarzschild black hole, which is described by a static and spherically symmetric solution to the Einstein field equations. This provides an excellent test case in numerical relativity because it does have a closed-form solution so that numerical results can be compared to an exact solution, because it is static, and because it contains one of the most numerically challenging features of relativity theory, a physical singularity. One of the earliest groups to attempt to simulate this solution was Peter Anninos et al. in 1995. In their paper they point out that "Progress in three dimensional numerical relativity has been impeded in part by lack of computers with sufficient memory and computational power to perform well resolved calculations of 3D spacetimes." Maturation of the field In the years that followed, not only did computers become more powerful, but also various research groups developed alternate techniques to improve the efficiency of the calculations. With respect to black hole simulations specifically, two techniques were devised to avoid problems associated with the existence of physical singularities in the solutions to the equations: (1) Excision, and (2) the "puncture" method. In addition the Lazarus group developed techniques for using early results from a short-lived simulation solving the nonlinear ADM equations, in order to provide initial data for a more stable code based on linearized equations derived from perturbation theory. More generally, adaptive mesh refinement techniques, already used in computational fluid dynamics were introduced to the field of numerical relativity. Excision In the excision technique, which was first proposed in the late 1990s, a portion of a spacetime inside of the event horizon surrounding the singularity of a black hole is simply not evolved. In theory this should not affect the solution to the equations outside of the event horizon because of the principle of causality and properties of the event horizon (i.e. nothing physical inside the black hole can influence any of the physics outside the horizon). Thus if one simply does not solve the equations inside the horizon one should still be able to obtain valid solutions outside. One "excises" the interior by imposing ingoing boundary conditions on a boundary surrounding the singularity but inside the horizon. While the implementation of excision has been very successful, the technique has two minor problems. The first is that one has to be careful about the coordinate conditions. While physical effects cannot propagate from inside to outside, coordinate effects could. For example, if the coordinate conditions were elliptical, coordinate changes inside could instantly propagate out through the horizon. This then means that one needs hyperbolic type coordinate conditions with characteristic velocities less than that of light for the propagation of coordinate effects (e.g., using harmonic coordinates coordinate conditions). The second problem is that as the black holes move, one must continually adjust the location of the excision region to move with the black hole. The excision technique was developed over several years including the development of new gauge conditions that increased stability and work that demonstrated the ability of the excision regions to move through the computational grid. The first stable, long-term evolution of the orbit and merger of two black holes using this technique was published in 2005. Punctures In the puncture method the solution is factored into an analytical part, which contains the singularity of the black hole, and a numerically constructed part, which is then singularity free. This is a generalization of the Brill-Lindquist prescription for initial data of black holes at rest and can be generalized to the Bowen-York prescription for spinning and moving black hole initial data. Until 2005, all published usage of the puncture method required that the coordinate position of all punctures remain fixed during the course of the simulation. Of course black holes in proximity to each other will tend to move under the force of gravity, so the fact that the coordinate position of the puncture remained fixed meant that the coordinate systems themselves became "stretched" or "twisted," and this typically led to numerical instabilities at some stage of the simulation. 2005's Breakthrough (annus mirabilis of numerical relativity) In 2005, a group of researchers demonstrated for the first time the ability to allow punctures to move through the coordinate system, thus eliminating some of the earlier problems with the method. This allowed accurate long-term evolutions of black holes. By choosing appropriate coordinate conditions and making crude analytic assumption about the fields near the singularity (since no physical effects can propagate out of the black hole, the crudeness of the approximations does not matter), numerical solutions could be obtained to the problem of two black holes orbiting each other, as well as accurate computation of gravitational radiation (ripples in spacetime) emitted by them. 2005 was renamed the "annus mirabilis" of numerical relativity, 100 years after the annus mirabilis papers of special relativity (1905). Lazarus project The Lazarus project (1998–2005) was developed as a post-Grand Challenge technique to extract astrophysical results from short lived full numerical simulations of binary black holes. It combined approximation techniques before (post-Newtonian trajectories) and after (perturbations of single black holes) with full numerical simulations attempting to solve Einstein's field equations. All previous attempts to numerically integrate in supercomputers the Hilbert-Einstein equations describing the gravitational field around binary black holes led to software failure before a single orbit was completed. The Lazarus project approach, in the meantime, gave the best insight into the binary black hole problem and produced numerous and relatively accurate results, such as the radiated energy and angular momentum emitted in the latest merging state, the linear momentum radiated by unequal mass holes, and the final mass and spin of the remnant black hole. The method also computed detailed gravitational waves emitted by the merger process and predicted that the collision of black holes is the most energetic single event in the Universe, releasing more energy in a fraction of a second in the form of gravitational radiation than an entire galaxy in its lifetime. Adaptive mesh refinement Adaptive mesh refinement (AMR) as a numerical method has roots that go well beyond its first application in the field of numerical relativity. Mesh refinement first appears in the numerical relativity literature in the 1980s, through the work of Choptuik in his studies of critical collapse of scalar fields. The original work was in one dimension, but it was subsequently extended to two dimensions. In two dimensions, AMR has also been applied to the study of inhomogeneous cosmologies, and to the study of Schwarzschild black holes. The technique has now become a standard tool in numerical relativity and has been used to study the merger of black holes and other compact objects in addition to the propagation of gravitational radiation generated by such astronomical events. Recent developments In the past few years, hundreds of research papers have been published leading to a wide spectrum of mathematical relativity, gravitational wave, and astrophysical results for the orbiting black hole problem. This technique extended to astrophysical binary systems involving neutron stars and black holes, and multiple black holes. One of the most surprising predictions is that the merger of two black holes can give the remnant hole a speed of up to 4000 km/s that can allow it to escape from any known galaxy. The simulations also predict an enormous release of gravitational energy in this merger process, amounting up to 8% of its total rest mass. See also Mathematics of general relativity Post-Newtonian expansion Spin-flip Cactus Framework Chapel Hill Conference Notes External links Initial Data for Numerical Relativity — A review article which includes a technical discussion of numerical relativity. Rotating Stars in Relativity — A technical review article about rotating stars, with a section on numerical relativity applications. A Relativity Tutorial at Caltech — A basic introduction to concepts of Numerical Relativity. Computational physics Mathematical methods in general relativity
Numerical relativity
[ "Physics" ]
2,683
[ "Computational physics" ]
1,798,843
https://en.wikipedia.org/wiki/Invariant%20polynomial
In mathematics, an invariant polynomial is a polynomial that is invariant under a group acting on a vector space . Therefore, is a -invariant polynomial if for all and . Cases of particular importance are for Γ a finite group (in the theory of Molien series, in particular), a compact group, a Lie group or algebraic group. For a basis-independent definition of 'polynomial' nothing is lost by referring to the symmetric powers of the given linear representation of Γ. References Commutative algebra Invariant theory Polynomials
Invariant polynomial
[ "Physics", "Mathematics" ]
105
[ "Symmetry", "Group actions", "Algebra", "Polynomials", "Fields of abstract algebra", "Commutative algebra", "Invariant theory" ]
1,799,157
https://en.wikipedia.org/wiki/Complex%20fluid
Complex fluids are mixtures that have a coexistence between two phases: solid–liquid (suspensions or solutions of macromolecules such as polymers), solid–gas (granular), liquid–gas (foams) or liquid–liquid (emulsions). They exhibit unusual mechanical responses to applied stress or strain due to the geometrical constraints that the phase coexistence imposes. The mechanical response includes transitions between solid-like and fluid-like behavior as well as fluctuations. Their mechanical properties can be attributed to characteristics such as high disorder, caging, and clustering on multiple length scales. Example Shaving cream is an example of a complex fluid. Without stress, the foam appears to be a solid: it does not flow and can support (very) light loads. However, when adequate stress is applied, shaving cream flows easily like a fluid. On the level of individual bubbles, the flow is due to rearrangements of small collections of bubbles. On this scale, the flow is not smooth, but instead consists of fluctuations due to rearrangements of the bubbles and releases of stress. These fluctuations are similar to the fluctuations that are studied in earthquakes. Dynamics The dynamics of the particles in complex fluids are an area of current research. Energy lost due to friction may be a nonlinear function of the velocity and normal forces. The topological inhibition to flow by the crowding of constituent particles is a key element in these systems. Under certain conditions, including high densities and low temperatures, when externally driven to induce flow, complex fluids are characterized by irregular intervals of solid-like behavior followed by stress relaxations due to particle rearrangements. The dynamics of these systems are highly nonlinear in nature. The increase in stress by an infinitesimal amount or a small displacement of a single particle can result in the difference between an arrested state and fluid-like behavior. Although many materials found in nature can fit into the class of complex fluids, very little is well understood about them; inconsistent and controversial conclusions concerning their material properties still persist. The careful study of these systems may lead to "new physics" and new states of matter. For example, it has been suggested that these systems can jam and a "jamming phase diagram" can be used to consider how these systems can jam and unjam. It is not known whether further research will demonstrate these findings, or whether such a theoretical framework will prove useful. As yet this large body of theoretical work has been poorly supported with experiments. References External links Stephan Herminghaus' Dynamics of Complex Fluids Department David Weitz's Soft Condensed Matter Physics Laboratory Howard Stone's Complex Fluids Group Physical Chemistry and Soft Matter Group, Wageningen Bob Behringer's complex fluids page Hernán Alejandro Makse's complex fluids page Complex Fluids/Nonlinear Dynamics Laboratory Francois Graner's complex fluids page Carnegie Mellon University Center for Complex Fluids Engineering UCLA Center for Complex Fluids and Interfacial Physics Paulo Arratia's Complex Fluids Laboratory at Penn Complex Fluids & Computational Polymer Physics at ETH Zurich Ubaldo M. Córdova-Figueroa's Low Reynolds Fluid Mechanics Group at UPRM Zhengdong Cheng's Soft Condensed Matter Group New England Complex Fluids (NECF) Workgroup Fluid dynamics Non-Newtonian fluids
Complex fluid
[ "Chemistry", "Engineering" ]
674
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
1,799,584
https://en.wikipedia.org/wiki/Phase-change%20material
A phase-change material (PCM) is a substance which releases/absorbs sufficient energy at phase transition to provide useful heat or cooling. Generally the transition will be from one of the first two fundamental states of matter - solid and liquid - to the other. The phase transition may also be between non-classical states of matter, such as the conformity of crystals, where the material goes from conforming to one crystalline structure to conforming to another, which may be a higher or lower energy state. The energy released/absorbed by phase transition from solid to liquid, or vice versa, the heat of fusion is generally much higher than the sensible heat. Ice, for example, requires 333.55 J/g to melt, but then water will rise one degree further with the addition of just 4.18 J/g. Water/ice is therefore a very useful phase change material and has been used to store winter cold to cool buildings in summer since at least the time of the Achaemenid Empire. By melting and solidifying at the phase-change temperature (PCT), a PCM is capable of storing and releasing large amounts of energy compared to sensible heat storage. Heat is absorbed or released when the material changes from solid to liquid and vice versa or when the internal structure of the material changes; PCMs are accordingly referred to as latent heat storage (LHS) materials. There are two principal classes of phase-change material: organic (carbon-containing) materials derived either from petroleum, from plants or from animals; and salt hydrates, which generally either use natural salts from the sea or from mineral deposits or are by-products of other processes. A third class is solid to solid phase change. PCMs are used in many different commercial applications where energy storage and/or stable temperatures are required, including, among others, heating pads, cooling for telephone switching boxes, and clothing. By far the biggest potential market is for building heating and cooling. In this application area, PCMs hold potential in light of the progressive reduction in the cost of renewable electricity, coupled with the intermittent nature of such electricity. This can result in a misfit between peak demand and availability of supply. In North America, China, Japan, Australia, Southern Europe and other developed countries with hot summers, peak supply is at midday while peak demand is from around 17:00 to 20:00. This creates opportunities for thermal storage media. Solid-liquid phase-change materials are usually encapsulated for installation in the end application, to be contained in the liquid state. In some applications, especially when incorporation to textiles is required, phase change materials are micro-encapsulated. Micro-encapsulation allows the material to remain solid, in the form of small bubbles, when the PCM core has melted. Characteristics and classification Latent heat storage can be achieved through changes in the state of matter from liquid→solid, solid→liquid, solid→gas and liquid→gas. However, only solid→liquid and liquid→solid phase changes are practical for PCMs. Although liquid–gas transitions have a higher heat of transformation than solid–liquid transitions, liquid→gas phase changes are impractical for thermal storage because large volumes or high pressures are required to store the materials in their gas phase. Solid–solid phase changes are typically very slow and have a relatively low heat of transformation. Initially, solid–liquid PCMs behave like sensible heat storage (SHS) materials; their temperature rises as they absorb heat. When PCMs reach their phase change temperature (their melting point) they absorb large amounts of heat at an almost constant temperature until all the material is melted. When the ambient temperature around a liquid material falls, the PCM solidifies, releasing its stored latent heat. A large number of PCMs are available in any required temperature range from −5 up to 190 °C. Within the human comfort range between 20 and 30 °C, some PCMs are very effective, storing over 200 kJ/kg of latent heat, as against a specific heat capacity of around one kJ/(kg*°C) for masonry. The storage density can therefore be 20 times greater than masonry per kg if a temperature swing of 10 °C is allowed. However, since the mass of the masonry is far higher than that of PCM this specific (per mass) heat capacity is somewhat offset. A masonry wall might have a mass of 200 kg/m2, so to double the heat capacity one would require additional 10 kg/m2 of PCM. Organic PCMs Hydrocarbons, primarily paraffins (CnH2n+2) and lipids but also sugar alcohols. Advantages Freeze without much supercooling Ability to melt congruently Self nucleating properties Compatibility with conventional material of construction No segregation Chemically stable Safe and non-reactive Disadvantages Low thermal conductivity in their solid state. High heat transfer rates are required during the freezing cycle. Nano composites were found to yield an effective thermal conductivity increase up to 216%. Volumetric latent heat storage capacity can be low Flammable. This can be partially alleviated by specialised containment. Inorganic Salt hydrates (MxNy·nH2O) Advantages High volumetric latent heat storage capacity Availability and low cost Sharp melting point High thermal conductivity High heat of fusion Non-flammable Sustainability Disadvantages Difficult to prevent incongruent melting and phase separation upon cycling, which can cause a significant loss in latent heat enthalpy. Can be corrosive to many other materials, such as metals. This can be overcome by only using specific metal-PCM pairings or encapsulation in small quantities in non-reactive plastic. Change of volume is very high in some mixtures Super cooling can be a problem in solid–liquid transition, necessitating the use of nucleating agents which may become inoperative after repeated cycling Hygroscopic materials Many natural building materials are hygroscopic, that is they can absorb (water condenses) and release water (water evaporates). The process is thus: Condensation (gas to liquid) ΔH<0; enthalpy decreases (exothermic process) gives off heat. Vaporization (liquid to gas) ΔH>0; enthalpy increases (endothermic process) absorbs heat (or cools). While this process liberates a small quantity of energy, large surfaces area allows significant (1–2 °C) heating or cooling in buildings. The corresponding materials are wool insulation and earth/clay render finishes. Solid-solid PCMs A specialised group of PCMs that undergo a solid/solid phase transition with the associated absorption and release of large amounts of heat. These materials change their crystalline structure from one lattice configuration to another at a fixed and well-defined temperature, and the transformation can involve latent heats comparable to the most effective solid/liquid PCMs. Such materials are useful because, unlike solid/liquid PCMs, they do not require nucleation to prevent supercooling. Additionally, because it is a solid/solid phase change, there is no visible change in the appearance of the PCM, and there are no problems associated with handling liquids, e.g. containment, potential leakage, etc. Currently the temperature range of solid-solid PCM solutions spans from -50 °C (-58 °F) up to +175 °C (347 °F). Selection criteria The phase change material should possess the following thermodynamic properties: Melting temperature in the desired operating temperature range High latent heat of fusion per unit volume High specific heat, high density, and high thermal conductivity Small volume changes on phase transformation and small vapor pressure at operating temperatures to reduce the containment problem Congruent melting Kinetic properties High nucleation rate to avoid supercooling of the liquid phase High rate of crystal growth, so that the system can meet demands of heat recovery from the storage system Chemical properties Chemical stability Complete reversible freeze/melt cycle No degradation after a large number of freeze/melt cycle Non-corrosiveness, non-toxic, non-flammable and non-explosive materials Economic properties Low cost Availability Thermophysical properties Key thermophysical properties of phase-change materials include: Melting point (Tm), Heat of fusion (ΔHfus), Specific heat (cp) (of solid and liquid phase), Density (ρ) (of solid and liquid phase) and thermal conductivity. Values such as volume change and volumetric heat capacity can be calculated there from. Technology, development, and encapsulation The most commonly used PCMs are salt hydrates, fatty acids and esters, and various paraffins (such as octadecane). Recently also ionic liquids were investigated as novel PCMs. As most of the organic solutions are water-free, they can be exposed to air, but all salt based PCM solutions must be encapsulated to prevent water evaporation or uptake. Both types offer certain advantages and disadvantages and if they are correctly applied some of the disadvantages becomes an advantage for certain applications. They have been used since the late 19th century as a medium for thermal storage applications. They have been used in such diverse applications as refrigerated transportation for rail and road applications and their physical properties are, therefore, well known. Unlike the ice storage system, however, the PCM systems can be used with any conventional water chiller both for a new or alternatively retrofit application. The positive temperature phase change allows centrifugal and absorption chillers as well as the conventional reciprocating and screw chiller systems or even lower ambient conditions utilizing a cooling tower or dry cooler for charging the TES system. The temperature range offered by the PCM technology provides a new horizon for the building services and refrigeration engineers regarding medium and high temperature energy storage applications. The scope of this thermal energy application is wide-ranging of solar heating, hot water, heating rejection (i.e., cooling tower), and dry cooler circuitry thermal energy storage applications. Since PCMs transform between solid–liquid in thermal cycling, encapsulation naturally became the obvious storage choice. Encapsulation of PCMs Macro-encapsulation: Early development of macro-encapsulation with large volume containment failed due to the poor thermal conductivity of most PCMs. PCMs tend to solidify at the edges of the containers preventing effective heat transfer. Micro-encapsulation: Micro-encapsulation on the other hand showed no such problem. It allows the PCMs to be incorporated into construction materials, such as concrete, easily and economically. Micro-encapsulated PCMs also provide a portable heat storage system. By coating a microscopic sized PCM with a protective coating, the particles can be suspended within a continuous phase such as water. This system can be considered a phase change slurry (PCS). Molecular-encapsulation is another technology, developed by Dupont de Nemours that allows a very high concentration of PCM within a polymer compound. It allows storage capacity up to 515 kJ/m2 for a 5 mm board (103 MJ/m3). Molecular-encapsulation allows drilling and cutting through the material without any PCM leakage. As phase change materials perform best in small containers, therefore they are usually divided in cells. The cells are shallow to reduce static head – based on the principle of shallow container geometry. The packaging material should conduct heat well; and it should be durable enough to withstand frequent changes in the storage material's volume as phase changes occur. It should also restrict the passage of water through the walls, so the materials will not dry out (or water-out, if the material is hygroscopic). Packaging must also resist leakage and corrosion. Common packaging materials showing chemical compatibility with room temperature PCMs include stainless steel, polypropylene, and polyolefin. Nanoparticles such as carbon nanotubes, graphite, graphene, metal and metal oxide can be dispersed in PCM. It is worth noting that inclusion of nanoparticles will not only alter thermal conductivity characteristic of PCM but also other characteristics as well, including latent heat capacity, sub-cooling, phase change temperature and its duration, density and viscosity. The new group of PCMs called NePCM. NePCMs can be added to metal foams to build even higher thermal conductive combination. Thermal composites Thermal composites is a term given to combinations of phase change materials (PCMs) and other (usually solid) structures. A simple example is a copper mesh immersed in paraffin wax. The copper mesh within paraffin wax can be considered a composite material, dubbed a thermal composite. Such hybrid materials are created to achieve specific overall or bulk properties (an example being the encapsulation of paraffin into distinct silicon dioxide nanospheres for increased surface area-to-volume ratio and, thus, higher heat transfer speeds ). Thermal conductivity is a common property targeted for maximization by creating thermal composites. In this case, the basic idea is to increase thermal conductivity by adding a highly conducting solid (such as the copper mesh or graphite) into the relatively low-conducting PCM, thus increasing overall or bulk (thermal) conductivity. If the PCM is required to flow, the solid must be porous, such as a mesh. Solid composites such as fiberglass or kevlar prepreg for the aerospace industry usually refer to a fiber (the kevlar or the glass) and a matrix (the glue, which solidifies to hold fibers and provide compressive strength). A thermal composite is not so clearly defined but could similarly refer to a matrix (solid) and the PCM, which is of course usually liquid and/or solid depending on conditions. They are also meant to discover minor elements in the earth. Applications Applications of phase change materials include, but are not limited to: Thermal energy storage Solar cooking Cold-energy battery Conditioning of buildings, such as 'ice-storage' Cooling of heat and electrical engines Cooling: food, beverages, coffee, wine, milk products, green houses Delaying ice and frost formation on surfaces Medical applications: transportation of blood, operating tables, hot-cold therapies, treatment of birth asphyxia Human body cooling under bulky clothing or costumes. Waste heat recovery Off-peak power utilization: Heating hot water and Cooling Heat pump systems Passive storage in bioclimatic building/architecture (HDPE, paraffin) Smoothing exothermic temperature peaks in chemical reactions Solar power plants Spacecraft thermal systems Thermal comfort in vehicles Thermal protection of electronic devices Thermal protection of food: transport, hotel trade, ice-cream, etc. Textiles used in clothing Computer cooling Turbine Inlet Chilling with thermal energy storage Telecom shelters in tropical regions. They protect the high-value equipment in the shelter by keeping the indoor air temperature below the maximum permissible by absorbing heat generated by power-hungry equipment such as a Base Station Subsystem. In case of a power failure to conventional cooling systems, PCMs minimize use of diesel generators, and this can translate into enormous savings across thousands of telecom sites in tropics. Fire and safety issues Some phase change materials are suspended in water, and are relatively nontoxic. Others are hydrocarbons or other flammable materials, or are toxic. As such, PCMs must be selected and applied very carefully, in accordance with fire and building codes and sound engineering practices. Because of the increased fire risk, flamespread, smoke, potential for explosion when held in containers, and liability, it may be wise not to use flammable PCMs within residential or other regularly occupied buildings. Phase change materials are also being used in thermal regulation of electronics. See also Heat pipe References Sources Phase Change Material (PCM) Based Energy Storage Materials and Global Application Examples, Zafer URE M.Sc., C.Eng. MASHRAE HVAC Applications Phase Change Material Based Passive Cooling Systems Design Principal and Global Application Examples, Zafer URE M.Sc., C.Eng. MASHRAE Passive Cooling Application Further reading Phase Change Matters (industry blog) Building engineering Physical chemistry Sustainable building
Phase-change material
[ "Physics", "Chemistry", "Engineering" ]
3,355
[ "Sustainable building", "Applied and interdisciplinary physics", "Building engineering", "Construction", "Civil engineering", "nan", "Physical chemistry", "Architecture" ]
1,800,732
https://en.wikipedia.org/wiki/Second%20messenger%20system
Second messengers are intracellular signaling molecules released by the cell in response to exposure to extracellular signaling molecules—the first messengers. (Intercellular signals, a non-local form of cell signaling, encompassing both first messengers and second messengers, are classified as autocrine, juxtacrine, paracrine, and endocrine depending on the range of the signal.) Second messengers trigger physiological changes at cellular level such as proliferation, differentiation, migration, survival, apoptosis and depolarization. They are one of the triggers of intracellular signal transduction cascades. Examples of second messenger molecules include cyclic AMP, cyclic GMP, inositol triphosphate, diacylglycerol, and calcium. First messengers are extracellular factors, often hormones or neurotransmitters, such as epinephrine, growth hormone, and serotonin. Because peptide hormones and neurotransmitters typically are biochemically hydrophilic molecules, these first messengers may not physically cross the phospholipid bilayer to initiate changes within the cell directly—unlike steroid hormones, which usually do. This functional limitation requires the cell to have signal transduction mechanisms to transduce first messenger into second messengers, so that the extracellular signal may be propagated intracellularly. An important feature of the second messenger signaling system is that second messengers may be coupled downstream to multi-cyclic kinase cascades to greatly amplify the strength of the original first messenger signal. For example, RasGTP signals link with the mitogen activated protein kinase (MAPK) cascade to amplify the allosteric activation of proliferative transcription factors such as Myc and CREB. Earl Wilbur Sutherland Jr., discovered second messengers, for which he won the 1971 Nobel Prize in Physiology or Medicine. Sutherland saw that epinephrine would stimulate the liver to convert glycogen to glucose (sugar) in liver cells, but epinephrine alone would not convert glycogen to glucose. He found that epinephrine had to trigger a second messenger, cyclic AMP, for the liver to convert glycogen to glucose. The mechanisms were worked out in detail by Martin Rodbell and Alfred G. Gilman, who won the 1994 Nobel Prize. Secondary messenger systems can be synthesized and activated by enzymes, for example, the cyclases that synthesize cyclic nucleotides, or by opening of ion channels to allow influx of metal ions, for example Ca2+ signaling. These small molecules bind and activate protein kinases, ion channels, and other proteins, thus continuing the signaling cascade. Types of second messenger molecules There are three basic types of secondary messenger molecules: Hydrophobic molecules: water-insoluble molecules such as diacylglycerol, and phosphatidylinositols, which are membrane-associated and diffuse from the plasma membrane into the intermembrane space where they can reach and regulate membrane-associated effector proteins. Hydrophilic molecules: water-soluble molecules, such as cAMP, cGMP, IP3, and Ca2+, that are located within the cytosol. Gases: nitric oxide (NO), carbon monoxide (CO) and hydrogen sulfide (H2S) which can diffuse both through cytosol and across cellular membranes. These intracellular messengers have some properties in common: They can be synthesized/released and broken down again in specific reactions by enzymes or ion channels. Some (such as Ca2+) can be stored in special organelles and quickly released when needed. Their production/release and destruction can be localized, enabling the cell to limit space and time of signal activity. Common mechanisms of second messenger systems There are several different secondary messenger systems (cAMP system, phosphoinositol system, and arachidonic acid system), but they all are quite similar in overall mechanism, although the substances involved and overall effects can vary. In most cases, a ligand binds to a cell surface receptor. The binding of a ligand to the receptor causes a conformation change in the receptor. This conformation change can affect the activity of the receptor and result in the production of active second messengers. In the case of G protein-coupled receptors, the conformation change exposes a binding site for a G-protein. The G-protein (named for the GDP and GTP molecules that bind to it) is bound to the inner membrane of the cell and consists of three subunits: alpha, beta and gamma. The G-protein is known as the "transducer." When the G-protein binds with the receptor, it becomes able to exchange a GDP (guanosine diphosphate) molecule on its alpha subunit for a GTP (guanosine triphosphate) molecule. Once this exchange takes place, the alpha subunit of the G-protein transducer breaks free from the beta and gamma subunits, all parts remaining membrane-bound. The alpha subunit, now free to move along the inner membrane, eventually contacts another cell surface receptor - the "primary effector." The primary effector then has an action, which creates a signal that can diffuse within the cell. This signal is called the "second (or secondary) messenger." The secondary messenger may then activate a "secondary effector" whose effects depend on the particular secondary messenger system. Calcium ions are one type of second messengers and are responsible for many important physiological functions including muscle contraction, fertilization, and neurotransmitter release. The ions are normally bound or stored in intracellular components (such as the endoplasmic reticulum(ER)) and can be released during signal transduction. The enzyme phospholipase C produces diacylglycerol and inositol trisphosphate, which increases calcium ion permeability into the membrane. Active G-protein open up calcium channels to let calcium ions enter the plasma membrane. The other product of phospholipase C, diacylglycerol, activates protein kinase C, which assists in the activation of cAMP (another second messenger). Examples Second Messengers in the Phosphoinositol Signaling Pathway IP3, DAG, and Ca2+ are second messengers in the phosphoinositol pathway. The pathway begins with the binding of extracellular primary messengers such as epinephrine, acetylcholine, and hormones AGT, GnRH, GHRH, oxytocin, and TRH, to their respective receptors. Epinephrine binds to the α1 GTPase Protein Coupled Receptor (GPCR) and acetylcholine binds to M1 and M2 GPCR. Binding of a primary messenger to these receptors results in conformational change of the receptor. The α subunit, with the help of guanine nucleotide exchange factors (GEFS), releases GDP, and binds GTP, resulting in the dissociation of the subunit and subsequent activation. The activated α subunit activates phospholipase C, which hydrolyzes membrane bound phosphatidylinositol 4,5-bisphosphate (PIP2), resulting in the formation of secondary messengers diacylglycerol (DAG) and inositol-1,4,5-triphosphate (IP3). IP3 binds to calcium pumps on ER, transporting Ca2+, another second messenger, into the cytoplasm. Ca2+ ultimately binds to many proteins, activating a cascade of enzymatic pathways. References External links Animation: Second Messenger: cAMP Signal transduction
Second messenger system
[ "Chemistry" ]
1,588
[ "Second messenger system", "Signal transduction" ]
12,606,884
https://en.wikipedia.org/wiki/Carrier%20lifetime
A definition in semiconductor physics, carrier lifetime is defined as the average time it takes for a minority carrier to recombine. The process through which this is done is typically known as minority carrier recombination. The energy released due to recombination can be either thermal, thereby heating up the semiconductor (thermal recombination or non-radiative recombination, one of the sources of waste heat in semiconductors), or released as photons (optical recombination, used in LEDs and semiconductor lasers). The carrier lifetime can vary significantly depending on the materials and construction of the semiconductor. Carrier lifetime plays an important role in bipolar transistors and solar cells. In indirect band gap semiconductors, the carrier lifetime strongly depends on the concentration of recombination centers. Gold atoms act as highly efficient recombination centers, silicon for some high switching speed diodes and transistors is therefore alloyed with a small amount of gold. Many other atoms, e.g. iron or nickel, have similar effect. Overview In practical applications, the electronic band structure of a semiconductor is typically found in a non-equilibrium state. Therefore, processes that tend towards thermal equilibrium, namely mechanisms of carrier recombination, always play a role. Additionally, semiconductors used in devices are very rarely pure semiconductors. Oftentimes, a dopant is used, giving an excess of electrons (in so-called n-type doping) or holes (in so-called p-type doping) within the band structure. This introduces a majority carrier and a minority carrier. As a result of this, the carrier lifetime plays a vital role in many semiconductor devices that have dopants. Recombination mechanisms There are several mechanisms by which minority carriers can recombine, each of which subtract from the carrier lifetime. The main mechanisms that play a role in modern devices are band-to-band recombination and stimulated emission, which are forms of radiative recombination, and Shockley-Read-Hall (SRH), Auger, Langevin, and surface recombination, which are forms of non-radiative recombination. Depending on the system, certain mechanisms may play a greater role than others. For example, surface recombination plays a significant role in solar cells, where much of the effort goes into passivating surfaces to minimize non-radiative recombination. As opposed to this, Langevin recombination plays a major role in organic solar cells, where the semiconductors are characterized by low mobility. In these systems, maximizing the carrier lifetime is synonymous to maximizing the efficiency of the device. Applications Solar cells A solar cell is an electrical device in which a semiconductor is exposed to light that is converted into electricity through the photovoltaic effect. Electrons are either excited through the absorption of light, or if the band-gap energy of the material can be bridged, electron-hole pairs are created. Simultaneously, a voltage potential is created. The charge carriers within the solar cell move through the semiconductor in order to cancel said potential, which is the drifting force that moves the electrons. Also, the electrons can be forced to move by diffusion from higher concentration to lower concentration of electrons. In order to maximize the efficiency of the solar cell, it is desirable to have as many charge carriers as possible collected at the electrodes of the solar cell. Thus, recombination of electrons (among other factors that influence efficiency) must be avoided. This corresponds to an increase in the carrier lifetime. Surface recombination occurs at the top of the solar cell, which makes it preferable to have layers of material that have great surface passivation properties so as not to become affected by exposure to light over longer periods of time. Additionally, the same method of layering different semiconductor materials is used to reduce the capture probability of the electrons, which results in a decrease in trap-assisted SRH recombination, and an increase in carrier lifetime. Radiative (band-to-band) recombination is negligible in solar cells that have semiconductor materials with indirect bandgap structure. Auger recombination occurs as a limiting factor for solar cells when the concentration of excess electrons grows large at low doping rates. Otherwise, the doping-dependent SRH recombination is one of the primary mechanisms that reduces the electrons’ carrier lifetime in solar cells. Bipolar junction transistors A bipolar junction transistor is a type of transistor that is able to use electrons and electron holes as charge carriers. A BJT uses a single crystal of material in its circuit that is divided into two types of semiconductor, an n-type and p-type. These two types of doped semiconductors are spread over three different regions in respective order: the emitter region, the base region and the collector region. The emitter region and collector region are quantitively doped differently, but are of the same type of doping and share a base region, which is why the system is different from two diodes connected in series with each other. For a PNP-transistor, these regions are, respectively, p-type, n-type and p-type, and for a NPN-transistor, these regions are, respectively, n-type, p-type and n-type. For NPN-transistors in typical forward-active operation, given an injection of charge carriers through the first junction from the emitter into the base region, electrons are the charge carriers that are transported diffusively through the base region towards the collector region. These are the minority carriers of the base region. Analogously, for PNP-transistors, electronic holes are the minority carriers of the base region. The carrier lifetime of these minority carriers plays a crucial role in the charge flow of minority carriers in the base region, which is found between the two junctions. Depending on the BJT's mode of operation, recombination is either preferred, or to be avoided in the base region. In particular, for the aforementioned forward-active mode of operation, recombination is not preferable. Thus, in order to get as many minority carriers as possible from the base region into the collecting region before these recombine, the width of the base region must be small enough such that the minority carriers can diffuse in a smaller amount of time than the semiconductor's minority carrier lifetime. Equivalently, the width of the base region must be smaller than the diffusion length, which is the average length a charge carrier travels before recombining. Additionally, in order to prevent high rates of recombination, the base is only lightly doped with respect to the emitter and collector region. As a result of this, the charge carriers do not have a high probability of staying in the base region, which is their preferable region of occupation when recombining into a lower-energy state. For other modes of operation, like that of fast switching, a high recombination rate (and thus a short carrier lifetime) is desirable. The desired mode of operation, and the associated properties of the doped base region must be considered in order to facilitate the appropriate carrier lifetime. Presently, silicon and silicon carbide are the materials used in most BJTs. The recombination mechanisms that must be considered in the base region are surface recombination near the base-emitter junction, as well as SRH- and Auger recombination in the base region. Specifically, Auger recombination increases when the amount of injected charge carriers grows, hence decreasing the efficiency of the current gain with growing injection numbers. Semiconductor lasers In semiconductor lasers, the carrier lifetime is the time it takes an electron before recombining via non-radiative processes in the laser cavity. In the frame of the rate equations model, carrier lifetime is used in the charge conservation equation as the time constant of the exponential decay of carriers. The dependence of carrier lifetime on the carrier density is expressed as: where A, B and C are the non-radiative, radiative and Auger recombination coefficients and is the carrier lifetime. Measurement Because the efficiency of a semiconductor device generally depends on its carrier lifetime, it is important to be able to measure this quantity. The method by which this is done depends on the device, but is usually dependent on measuring the current and voltage. In solar cells, the carrier lifetime can be calculated by illuminating the surface of the cell, which induces carrier generation and increases the voltage until it reaches an equilibrium, and subsequently turning off the light source. This causes the voltage to decay at a consistent rate. The rate at which the voltage decays is determined by the amount of minority carriers that recombine per unit time, with a higher amount of recombining carriers resulting in a faster decay. Subsequently, a lower carrier lifetime will result in a faster decay of the voltage. This means that the carrier lifetime of a solar cell can be calculated by studying its voltage decay rate. This carrier lifetime is generally expressed as: where is the Boltzmann constant, q is the elementary charge, T is the temperature, and is the time derivative of the open-circuit voltage. In bipolar junction transistors (BJTs), determining the carrier lifetime is rather more complicated. Namely, one must measure the output conductance and reverse transconductance, both of which are variables that depend on the voltage and flow of current through the BJT, and calculate the minority carrier transit time, which is determined by the width of the quasi-neutral base (QNB) of the BJT, and the diffusion coefficient; a constant that quantifies the atomic migration within the BJT. This carrier lifetime is expressed as: where and are the output conductance, reverse transconductance, width of the QNB and diffusion coefficient, respectively. Current research Because a longer carrier lifetime is often synonymous to a more efficient device, research tends to focus on minimizing processes that contribute to the recombination of minority carriers. In practice, this generally implies reducing structural defects within the semiconductors, or introducing novel methods that do not suffer from the same recombination mechanisms. In crystalline silicon solar cells, which are particularly common, an important limiting factor is the structural damage done to the cell when the transparent conducting film is applied. This is done with reactive plasma deposition, a form of sputter deposition. In the process of applying this film, defects appear on the silicon layer, which degrades the carrier lifetime. Reducing the amount of damage done during this process is therefore important to increase the efficiency of the solar cell, and a focus of current research. In addition to research that seeks to optimize currently favoured technologies, there is a great deal of research surrounding other, less-utilized technologies, like the Perovskite solar cell (PSC). This solar cell is preferable due to its comparatively cheap and simple manufacturing process. Modern advancements suggest that there is still ample room to improve on the carrier lifetime of this solar cell, with most of the issues surrounding it being construction-related. In addition to solar cells, perovskites can be utilized to manufacture LEDs, lasers, and transistors. As a result of this, lead and halide perovskites are of particular interest in modern research. Current problems include the structural defects that appear when semiconductor devices are manufactured with the material, as the dislocation density associated with the crystals is a detriment to their carrier lifetime. References External links Carrier Lifetime Charge carriers
Carrier lifetime
[ "Physics", "Materials_science" ]
2,409
[ "Physical phenomena", "Electrical phenomena", "Condensed matter physics", "Charge carriers" ]
12,610,385
https://en.wikipedia.org/wiki/Acceptor%20%28semiconductors%29
In semiconductor physics, an acceptor is a dopant atom that when substituted into a semiconductor lattice forms a p-type region. When silicon (Si), having four valence electrons, is doped with elements from group III of the periodic table, such as boron (B) and aluminium (Al), both having three valence electrons, a p-type semiconductor is formed. These dopant elements represent trivalent impurities. Other trivalent dopants include indium (In) and gallium (Ga). When substituting for a silicon atom in the crystal lattice, the three valence electrons of boron form covalent bonds with three of the Si neighbours but the bond with the fourth remains unsatisfied. The initially electro-neutral acceptor becomes negatively charged (ionised). The unsatisfied bond attracts electrons from the neighbouring bonds. At room temperature, an electron from a neighbouring bond can jump to repair the unsatisfied bond thus leaving an electron hole, which is a place where an electron is deficient. The hole, being positively charged, attracts another electron from a neighbouring bond to repair this unsatisfied bond. This chain-like process results in the hole moving around the crystal as a charge carrier. This process can sustain in an electric current useful in electronic circuits. See also Donor (semiconductors) Electron acceptor Semiconductors References Semiconductor properties
Acceptor (semiconductors)
[ "Physics", "Materials_science" ]
287
[ "Semiconductor properties", "Condensed matter stubs", "Condensed matter physics", "Materials science stubs" ]
12,610,398
https://en.wikipedia.org/wiki/Donor%20%28semiconductors%29
In semiconductor physics, a donor is a dopant atom that, when added to a semiconductor, can form a n-type region. For example, when silicon (Si), having four valence electrons, is to be doped as a n-type semiconductor, elements from group V like phosphorus (P) or arsenic (As) can be used because they have five valence electrons. A dopant with five valence electrons is also called a pentavalent impurity. Other pentavalent dopants are antimony (Sb) and bismuth (Bi). When substituting a Si atom in the crystal lattice, four of the valence electrons of phosphorus form covalent bonds with the neighbouring Si atoms but the fifth one remains weakly bonded. If that electron is liberated, the initially electro-neutral donor becomes positively charged (ionised). At room temperature, the liberated electron can move around the Si crystal and carry a current, thus acting as a charge carrier. See also Acceptor (semiconductors) Electron donor References Semiconductor properties
Donor (semiconductors)
[ "Physics", "Materials_science" ]
215
[ "Semiconductor properties", "Condensed matter physics" ]
17,133,979
https://en.wikipedia.org/wiki/Zinc%E2%80%93copper%20couple
Zinc–copper couple is an alloy of zinc and copper that is employed as a reagent in organic synthesis. The “couple” was popularized after the report by Simmons and Smith, published in 1959, on its application as an activated source of zinc required for formation of an organozinc reagent in the Simmons–Smith cyclopropanation of alkenes. The couple has been widely applied as a reagent in other reactions requiring activated zinc metal. Zinc–copper couple does not refer to a rigorously defined chemical structure or alloy composition. The couple may contain varying proportions of copper and zinc; the zinc content is typically greater than 90%, although an alloy containing similar proportions of zinc and copper is used in some cases. The couple is frequently prepared as a darkly-colored powder and is slurried in an ethereal solvent prior to being used in slight excess relative to the substrate. Activation of zinc by copper is essential to the couple’s utility, but the origin of this effect is poorly documented. It is speculated that copper enhances reactivity of zinc at the surface of the alloy. Synthesis Zinc–copper couple has been prepared by numerous methods, which vary mainly with respect to the source of copper, but also by the ratio of copper to zinc, the physical state of the zinc (e.g. powder or granules), the use of protic acids and other additives, and temperature of the preparation. Most often the couple is generated and isolated prior to use, but routes have been described to storable forms of the alloy. Most methods involve reduction of an oxidized copper species with zinc, which is used in excess. An early method for the synthesis of zinc–copper couple entailed treatment of a mixture of zinc dust and copper(II) oxide with hydrogen gas at 500 °C. A more convenient and cheaper method proceeds by treatment of zinc powder with hydrochloric acid and copper(II) sulfate. Treatment of zinc powder with copper(II) acetate monohydrate in hot acetic acid is reportedly highly reproducible. The couple may also be generated in situ by reaction of one equivalent of zinc dust with one equivalent of copper(I) chloride (or copper powder) in refluxing ether. The choice of method is dictated primarily by the application. The development of newer methods was motivated by the need for zinc–copper couple with reproducible behavior. Application Zinc–copper couple has found widespread use in organic synthesis, especially in the Simmons–Smith cyclopropanation of alkenes. In this process, the couple (typically a slurry in an ethereal solvent) reacts with methylene iodide to generate iodomethylzinc iodide, which is the intermediate responsible for cyclopropanation. {Zn_\mathit{n}(Cu)} + CH2I2 -> {IZnCH2I} + Zn_{\mathit{n}-1}(Cu) The couple has also been employed to generate alkyl zinc reagents for conjugate addition, as a dehalogenating reagent, as a promoter of reductive coupling of carbonyl compounds, and to reduce electron-deficient alkenes and alkynes. Sonication has been employed to enhance the rate of the zinc–copper couple-mediated cycloaddition of α,α’-dibromo ketones to 1,3-dienes. See also Devarda's alloy Organozinc compound References Zinc alloys Reducing agents
Zinc–copper couple
[ "Chemistry" ]
739
[ "Redox", "Alloys", "Zinc alloys", "Reducing agents" ]
17,134,535
https://en.wikipedia.org/wiki/Silicon%20tetrabromide
Silicon tetrabromide, also known as tetrabromosilane, is the inorganic compound with the formula SiBr4. This colorless liquid has a suffocating odor due to its tendency to hydrolyze with release of hydrogen bromide. The general properties of silicon tetrabromide closely resemble those of the more commonly used silicon tetrachloride. Comparison of SiX4 The properties of the tetrasilanes, all of which are tetrahedral, are significantly affected by nature of the halide. These trends apply also to the mixed halides. Melting points, boiling points, and bond lengths increase with the atomic mass of the halide. The opposite trend is observed for the Si-X bond energies. Lewis acidity Covalently saturated silicon complexes like SiBr4, along with tetrahalides of germanium (Ge) and tin (Sn), are Lewis acids. Although silicon tetrahalides obey the octet rule, they add Lewis basic ligands to give adducts with the formula SiBr4L and SiBr4L2 (where L is a Lewis base). The Lewis acidic properties of the tetrahalides tend to increase as follows: SiI4 < SiBr4 < SiCl4 < SiF4. This trend is attributed to the relative electronegativities of the halogens. The strength of the Si-X bonds decrease in the order: Si-F > Si-Cl > Si-Br > Si-I. Synthesis Silicon tetrabromide is synthesized by the reaction of silicon with hydrogen bromide at 600 °C.<ref name=booth>Schumb, W. B. Silicobromoform" Inorganic Syntheses 1939, volume 1, pp 38-42. .</ref> Si + 4 HBr → SiBr4 + 2 H2 Side products include dibromosilane (SiH2Br2) and tribromosilane (SiHBr3). Si + 2 HBr → SiH2Br2 Si + 3 HBr → SiHBr3 + H2 It can also be produced by treating silicon-copper mixture with bromine: Reactivity Like other halosilanes, SiBr4 can be converted to hydrides, alkoxides, amides, and alkyls, i.e., products with the following functional groups: Si-H, Si-OR, Si-NR2, Si-R, and Si-X bonds respectively. Silicon tetrabromide can be readily reduced by hydrides or complex hydrides. 4 R2AlH + SiBr4 → SiH4 + 4 R2AlBr Reactions with alcohols and amines proceed as follows: SiBr4 + 4 ROH → Si(OR)4 + 4 HBr SiBr4 + 8 HNR2 → Si(NR2)4 + 4 HNR2HBr Grignard reactions with metal alkyl halides are particularly important reactions due to their production of organosilicon compounds which can be converted to silicones. SiBr4 + n RMgX → RnSiBr4−n + n'' MgXBr Redistribution reactions occur between two different silicon tetrahalides (as well as halogenated polysilanes) when heated to 100 ˚C, resulting in various mixed halosilanes. The melting points and boiling points of these mixed halosilanes generally increase as their molecular weights increase. (Can occur with X= H, F, Cl, Br, and I) 2 SiBr4 + 2 SiCl4 → SiBr3Cl + 2 SiBr2Cl2 + SiBrCl3 Si2Cl6 + Si2Br6 → Si2ClnBr6−n Silicon tetrabromide hydrolyzes readily when exposed to air causing it to fume: SiBr4 + 2 H2O → SiO2 + 4 HBr Silicon tetrabromide is stable in the presence of oxygen at room temperature, but bromosiloxanes form at 670–695 ˚C . 2 SiBr4 + 1⁄2 O2 → Br3SiOSiBr3 + Br2 Uses Due to its close similarity to silicon tetrachloride, there are few applications unique to SiBr4. The pyrolysis of SiBr4 does have the advantage of depositing silicon at faster rates than that of SiCl4, however SiCl4 is usually preferred due to its availability in high purity. Pyrolysis of SiBr4 followed by treatment with ammonia yields silicon nitride (Si3N4) coatings, a hard compound used for ceramics, sealants, and the production of many cutting tools. References Bromides Inorganic silicon compounds
Silicon tetrabromide
[ "Chemistry" ]
990
[ "Bromides", "Inorganic silicon compounds", "Inorganic compounds", "Salts" ]
17,135,616
https://en.wikipedia.org/wiki/Air%20classifier
An air classifier is an industrial machine which separates materials by a combination of size, shape, and density. It works by injecting the material stream to be sorted into a chamber which contains a column of rising air. Inside the separation chamber, air drag on the objects supplies an upward force which counteracts the force of gravity and lifts the material to be sorted up into the air. Due to the dependence of air drag on object size and shape, the objects in the moving air column are sorted vertically and can be separated in this manner. Air classifiers are commonly employed in industrial processes where a large volume of mixed materials with differing physical characteristics need to be separated quickly and efficiently. Air classifier is helpful for cement, air pollution control, food processing, pigments, pharmaceutical, cosmetics or chemical industries. One such example is in municipal recycling centers, where various types of metal, paper, and plastics arrive mixed together and need to be sorted before further processing can take place. Air classifiers can also be used as a step in the automotive recycling process. For example, after the crushing and shredding steps, steel is removed by electromagnets, and nonferrous metals are removed by eddy-current separators. Then an air classifier can be used to deal with the remaining dense materials (such as glass) as well as the remaining materials of various densities, most notably plastics, foams, and cloth. See also Cyclonic separation Elutriation Selected patents Further reading External links Classification overview N.N.Zoubov Engineers SMCE Air Classifier / Air separator - how it works. Air Classifiers / Air separators - databases. Industrial equipment Fluid dynamics Air pollution control systems Particulate control Waste treatment technology Solid-gas separation
Air classifier
[ "Chemistry", "Engineering" ]
362
[ "Separation processes by phases", "Solid-gas separation", "Water treatment", "Chemical engineering", "nan", "Environmental engineering", "Piping", "Waste treatment technology", "Fluid dynamics" ]
17,137,358
https://en.wikipedia.org/wiki/Reactive%20centrifugal%20force
In classical mechanics, a reactive centrifugal force forms part of an action–reaction pair with a centripetal force. In accordance with Newton's first law of motion, an object moves in a straight line in the absence of a net force acting on the object. A curved path ensues when a force that is orthogonal to the object's motion acts on it; this force is often called a centripetal force, as it is directed toward the center of curvature of the path. Then in accordance with Newton's third law of motion, there will also be an equal and opposite force exerted by the object on some other object, and this reaction force is sometimes called a reactive centrifugal force, as it is directed in the opposite direction of the centripetal force. In the case of a ball held in circular motion by a string, the centripetal force is the force exerted by the string on the ball. The reactive centrifugal force on the other hand is the force the ball exerts on the string, placing it under tension. Unlike the inertial force known as centrifugal force, which exists only in the rotating frame of reference, the reactive force is a real Newtonian force that is observed in any reference frame. The two forces will only have the same magnitude in the special cases where circular motion arises and where the axis of rotation is the origin of the rotating frame of reference. Paired forces The figure at right shows a ball in uniform circular motion held to its path by a string tied to an immovable post. In this system a centripetal force upon the ball provided by the string maintains the circular motion, and the reaction to it, which some refer to as the reactive centrifugal force, acts upon the string and the post. Newton's first law requires that any body moving along any path other than a straight line be subject to a net non-zero force, and the free body diagram shows the force upon the ball (center panel) exerted by the string to maintain the ball in its circular motion. Newton's third law of action and reaction states that if the string exerts an inward centripetal force on the ball, the ball will exert an equal but outward reaction upon the string, shown in the free body diagram of the string (lower panel) as the reactive centrifugal force. The string transmits the reactive centrifugal force from the ball to the fixed post, pulling upon the post. Again according to Newton's third law, the post exerts a reaction upon the string, labeled the post reaction, pulling upon the string. The two forces upon the string are equal and opposite, exerting no net force upon the string (assuming that the string is massless), but placing the string under tension. The reason the post appears to be "immovable" is because it is fixed to the earth. If the rotating ball was tethered to the mast of a boat, for example, the boat mast and ball would both experience rotation about a central point. Applications Even though the reactive centrifugal is rarely used in analyses in the physics literature, the concept is applied within some mechanical engineering concepts. An example of this kind of engineering concept is an analysis of the stresses within a rapidly rotating turbine blade. The blade can be treated as a stack of layers going from the axis out to the edge of the blade. Each layer exerts an outward (centrifugal) force on the immediately adjacent, radially inward layer and an inward (centripetal) force on the immediately adjacent, radially outward layer. At the same time the inner layer exerts an elastic centripetal force on the middle layer, while and the outer layer exerts an elastic centrifugal force, which results in an internal stress. It is the stresses in the blade and their causes that mainly interest mechanical engineers in this situation. Another example of a rotating device in which a reactive centrifugal force can be identified used to describe the system behavior is the centrifugal clutch. A centrifugal clutch is used in small engine-powered devices such as chain saws, go-karts and model helicopters. It allows the engine to start and idle without driving the device, but automatically and smoothly engages the drive as the engine speed rises. A spring is used to constrain the spinning clutch shoes. At low speeds, the spring provides the centripetal force to the shoes, which move to larger radius as the speed increases and the spring stretches under tension. At higher speeds, when the shoes can't move any further out to increase the spring tension, due to the outer drum, the drum provides some of the centripetal force that keeps the shoes moving in a circular path. The force of tension applied to the spring, and the outward force applied to the drum by the spinning shoes are the corresponding reactive centrifugal forces. The mutual force between the drum and the shoes provides the friction needed to engage the output drive shaft that is connected to the drum. Thus the centrifugal clutch illustrates both the fictitious centrifugal force and the reactive centrifugal force. Difference from centrifugal pseudoforce The "reactive centrifugal force" discussed in this article is not the same thing as the centrifugal pseudoforce, which is usually what is meant by the term "centrifugal force". Reactive centrifugal force, being one-half of the reaction pair together with centripetal force, is a concept which applies in any reference frame. This distinguishes it from the inertial or fictitious centrifugal force, which appears only in rotating frames. Gravitational two-body case In a two-body rotation, such as a planet and moon rotating about their common center of mass or barycentre, the forces on both bodies are centripetal. In that case, the reaction to the centripetal force of the planet on the moon is the centripetal force of the moon on the planet. References Force Mechanics Rotation
Reactive centrifugal force
[ "Physics", "Mathematics", "Engineering" ]
1,268
[ "Physical phenomena", "Force", "Physical quantities", "Quantity", "Mass", "Classical mechanics", "Rotation", "Motion (physics)", "Mechanics", "Mechanical engineering", "Wikipedia categories named after physical quantities", "Matter" ]
18,348,855
https://en.wikipedia.org/wiki/Harmful%20algal%20bloom
A harmful algal bloom (HAB), or excessive algae growth, is an algal bloom that causes negative impacts to other organisms by production of natural algae-produced toxins, water deoxygenation, mechanical damage to other organisms, or by other means. HABs are sometimes defined as only those algal blooms that produce toxins, and sometimes as any algal bloom that can result in severely lower oxygen levels in natural waters, killing organisms in marine or fresh waters. Blooms can last from a few days to many months. After the bloom dies, the microbes that decompose the dead algae use up more of the oxygen, generating a "dead zone" which can cause fish die-offs. When these zones cover a large area for an extended period of time, neither fish nor plants are able to survive. Harmful algal blooms in marine environments are often called "red tides". It is sometimes unclear what causes specific HABs as their occurrence in some locations appears to be entirely natural, while in others they appear to be a result of human activities. In certain locations there are links to particular drivers like nutrients, but HABs have also been occurring since before humans started to affect the environment. HABs are induced by eutrophication, which is an overabundance of nutrients in the water. The two most common nutrients are fixed nitrogen (nitrates, ammonia, and urea) and phosphate. The excess nutrients are emitted by agriculture, industrial pollution, excessive fertilizer use in urban/suburban areas, and associated urban runoff. Higher water temperature and low circulation also contribute. HABs can cause significant harm to animals, the environment and economies. They have been increasing in size and frequency worldwide, a fact that many experts attribute to global climate change. The U.S. National Oceanic and Atmospheric Administration (NOAA) predicts more harmful blooms in the Pacific Ocean. Potential remedies include chemical treatment, additional reservoirs, sensors and monitoring devices, reducing nutrient runoff, research and management as well as monitoring and reporting. Terrestrial runoff, containing fertilizer, sewage and livestock wastes, transports abundant nutrients to the seawater and stimulates bloom events. Natural causes, such as river floods or upwelling of nutrients from the sea floor, often following massive storms, provide nutrients and trigger bloom events as well. Increasing coastal developments and aquaculture also contribute to the occurrence of coastal HABs. Effects of HABs can worsen locally due to wind driven Langmuir circulation and their biological effects. Description and identification HABs from cyanobacteria (blue-green algae) can appear as a foam, scum, or mat on or just below the surface of water and can take on various colors depending on their pigments. Cyanobacteria blooms in freshwater lakes or rivers may appear bright green, often with surface streaks that look like floating paint. Cyanobacterial blooms are a global problem. Most blooms occur in warm waters with excessive nutrients. The harmful effects from such blooms are due to the toxins they produce or from using up oxygen in the water which can lead to fish die-offs. Not all algal blooms produce toxins, however, with some only discoloring water, producing a smelly odor, or adding a bad taste to the water. Unfortunately, it is not possible to tell if a bloom is harmful from just appearances, since sampling and microscopic examination is required. In many cases microscopy is not sufficient to tell the difference between toxic and non-toxic populations. In these cases, tools can be employed to measure the toxin level or to determine if the toxin-production genes are present. Terminology In a narrow definition, harmful algal blooms are only those blooms that release toxins that affect other species. On the other hand, any algal bloom can cause dead zones due to low oxygen levels, and could therefore be called "harmful" in that sense. The usage of the term "harmful algal blooms" in the media and scientific literature is varied. In a broader definition, all "organisms and events are considered to be HABs if they negatively impact human health or socioeconomic interests or are detrimental to aquatic systems". A harmful algal bloom is "a societal concept rather than a scientific definition". A similarly broad definition of HABs was adopted by the US Environmental Protection Agency in 2008 who stated that HABs include "potentially toxic (auxotrophic, heterotrophic) species and high-biomass producers that can cause hypoxia and anoxia and indiscriminate mortalities of marine life after reaching dense concentrations, whether or not toxins are produced". Red tide Harmful algal bloom in coastal areas are also often referred to as "red tides". The term "red tide" is derived from blooms of any of several species of dinoflagellate, such as Karenia brevis. However, the term is misleading since algal blooms can widely vary in color, and growth of algae is unrelated to the tides. Not all red tides are produced by dinoflagellates. The mixotrophic ciliate Mesodinium rubrum produces non-toxic blooms coloured deep red by chloroplasts it obtains from the algae it eats. As a technical term, it is being replaced in favor of more precise terminology, including the generic term "harmful algal bloom" for harmful species, and "algal bloom" for benign species. Types There are three main types of phytoplankton which can form into harmful algal blooms: cyanobacteria, dinoflagellates, and diatoms. All three are made up of microscopic floating organisms which, like plants, can create their own food from sunlight by means of photosynthesis. That ability makes the majority of them an essential part of the food web for small fish and other organisms. Cyanobacteria Harmful algal blooms in freshwater lakes and rivers, or at estuaries, where rivers flow into the ocean, are caused by cyanobacteria, which are commonly referred to as "blue-green algae", but are in fact prokaryotic bacteria, as opposed to algae which are eukaryotes. Some cyanobacteria, including the widespread genus Microsystis, can produce hazardous cyanotoxins such as microcystins, which are hepatotoxins that harm the liver of mammals. Other types of cyanobacteria can also produce hepatotoxins, as well as neurotoxins, cytotoxins, and endotoxins. Water purification plants may be unable to remove these toxins, leading to increasingly common localised advisories against drinking tap water, as happened in Toledo, Ohio in August 2014. In August 2021, there were 47 lakes confirmed to have algal blooms in New York State alone. In September 2021, Spokane County's Environmental Programs issued a HAB alert for Newman Lake following tests showing potentially harmful toxicity levels for cyanobacteria, while in the same month record-high levels of microcystins were reported leading to an extended 'Do Not Drink' advisory for 280 households at Clear Lake, California's second-largest freshwater lake. Water conditions in Florida, meanwhile, continue to deteriorate under increasing nutrient inflows, causing severe HAB events in both freshwater and marine areas. HABs also cause harm by blocking the sunlight used by plants and algae to photosynthesise, or by depleting the dissolved oxygen needed by fish and other aquatic animals, which can lead to fish die-offs. When such oxygen-depleted water covers a large area for an extended period of time, it can become hypoxic or even anoxic; these areas are commonly called dead zones. These dead zones can be the result of numerous different factors ranging from natural phenomenon to deliberate human intervention, and are not just limited to large bodies of fresh water as found in the great lakes, but are also prone to bodies of salt water as well. Dual-stage life systems of algal species Many of the species that form harmful algae blooms will undergo a dual-stage life system. These species will alternate between a benthic resting stage and a pelagic vegetative state. The benthic resting stage corresponds to when these species are resting near the ocean floor. In this stage, the species cells are waiting for optimal conditions so that they can move towards the surface. These species will then transition from the benthic resting stage into the pelagic vegetative state - where they are more active and found near the water body surface. In the pelagic vegetative state, these cells are able to grow and multiply. It is within the pelagic vegetative state that a bloom is able to occur - as the cells rapidly reproduce and take over the upper regions of the body of water. The transition between these two life stages can have multiple effects on the algae bloom (such as rapid termination of the HAB as cells convert from the pelagic state to the benthic state). Many of the algal species that undergo this dual-stage life cycle are capable of rapid vertical migration. This migration is required for the movement from the benthic area of bodies of water to the pelagic zone. These species require immense amounts of energy as they pass through the various thermoclines, haloclines, and pycnoclines that are associated with the bodies of water in which these cells exist. Diatoms and dinoflagellates (in marine coastal areas) The other types of algae are diatoms and dinoflagellates, found primarily in marine environments, such as ocean coastlines or bays, where they can also form algal blooms. Coastal HABs are a natural phenomenon, although in many instances, particularly when they form close to coastlines or in estuaries, it has been shown that they are exacerbated by human-induced eutrophication and / or climate change. They can occur when warmer water, salinity, and nutrients reach certain levels, which then stimulates their growth. Most HAB algae are dinoflagellates. They are visible in water at a concentration of 1,000 algae cells/ml, while in dense blooms they can measure over 200,000/ml. Diatoms produce domoic acid, another neurotoxin, which can cause seizures in higher vertebrates and birds as it concentrates up the food chain. Domoic acid readily accumulates in the bodies of shellfish, sardines, and anchovies, which if then eaten by sea lions, otters, cetaceans, birds or people, can affect the nervous system causing serious injury or death. In the summer of 2015, the state governments closed important shellfish fisheries in Washington, Oregon, and California because of high concentrations of domoic acid in shellfish. In the marine environment, single-celled, microscopic, plant-like organisms naturally occur in the well-lit surface layer of any body of water. These organisms, referred to as phytoplankton or microalgae, form the base of the food web upon which nearly all other marine organisms depend. Of the 5000+ species of marine phytoplankton that exist worldwide, about 2% are known to be harmful or toxic. Blooms of harmful algae can have large and varied impacts on marine ecosystems, depending on the species involved, the environment where they are found, and the mechanism by which they exert negative effects. List of common HAB genera Gonyaulax Karenia Gymnodinium Dinophysis Noctiluca Chattonella Ceratium Amoebophyra Alexandrium Cochlodinium Causes It is sometimes unclear what causes specific HABs as their occurrence in some locations appears to be entirely natural, while in others they appear to be a result of human activities. Furthermore, there are many different species of algae that can form HABs, each with different environmental requirements for optimal growth. The frequency and severity of HABs in some parts of the world have been linked to increased nutrient loading from human activities. In other areas, HABs are a predictable seasonal occurrence resulting from coastal upwelling, a natural result of the movement of certain ocean currents. The growth of marine phytoplankton (both non-toxic and toxic) is generally limited by the availability of nitrates and phosphates, which can be abundant in coastal upwelling zones as well as in agricultural run-off. The type of nitrates and phosphates available in the system are also a factor, since phytoplankton can grow at different rates depending on the relative abundance of these substances (e.g. ammonia, urea, nitrate ion). A variety of other nutrient sources can also play an important role in affecting algal bloom formation, including iron, silica or carbon. Coastal water pollution produced by humans (including iron fertilization) and systematic increase in sea water temperature have also been suggested as possible contributing factors in HABs. Among the causes of algal blooms are: Excess nutrients—phosphorus and nitrates—from fertilizers or sewage that are discharged to water bodies (also called nutrient pollution) climate change thermal pollution from power plants and factories low water levels in inland waterways and lakes, which reduces water flow and increases water temperatures invasive filter feeders—especially Zebra mussels, Dreissena polymorpha—which preferentially eat non-toxic algae, competitors to harmful algae Nutrients Nutrients enter freshwater or marine environments as surface runoff from agricultural pollution and urban runoff from fertilized lawns, golf courses and other landscaped properties; and from sewage treatment plants that lack nutrient control systems. Additional nutrients are introduced from atmospheric pollution. Coastal areas worldwide, especially wetlands and estuaries, coral reefs and swamps, are prone to being overloaded with those nutrients. Most of the large cities along the Mediterranean Sea, for example, discharge all of their sewage into the sea untreated. The same is true for most coastal developing countries, while in parts of the developing world, as much as 70% of wastewater from large cities may re-enter water systems without being treated. Residual nutrients in treated wastewater can also accumulate in downstream source water areas and fuel eutrophication, which leads progressively to a cyanobacteria-dominated system characterized by seasonal HABs. As more wastewater treatment infrastructure is built, more treated wastewater is returned to the natural water system, leading to a significant increase in these residual nutrients. Residual nutrients combine with nutrients from other sources to increase the sediment nutrient stockpile that is the driving force behind phase shifts to entrenched eutrophic conditions. This contributes to the ongoing degradation of dams, lakes, rivers, and reservoirs - source water areas that are starting to become known as ecological infrastructure, placing increasing pressure on wastewater treatment works and water purification plants. Such pressures, in turn, intensify seasonal HABs. Climate change Climate change contributes to warmer waters which makes conditions more favorable for algae growth in more regions and farther north. In general, still, warm, shallow water, combined with high-nutrient conditions in lakes or rivers, increases the risk of harmful algal blooms. Warming of summer surface temperatures of lakes, which rose by 0.34 °C decade per decade between 1985 and 2009 due to global warming, also will likely increase algal blooming by 20% over the next century. Although the drivers of harmful algal blooms are poorly understood, they do appear to have increased in range expansion and frequency in coastal areas since the 1980s. The is the result of human induced factors such as increased nutrient inputs (nutrient pollution) and climate change (in particular the warming of water temperatures). The parameters that affect the formation of HABs are ocean warming, marine heatwaves, oxygen loss, eutrophication and water pollution. Causes or contributing factors of coastal HABs HABs contain dense concentrations of organisms and appear as discolored water, often reddish-brown in color. It is a natural phenomenon, but the exact cause or combination of factors that result in a HAB event are not necessarily known. However, three key natural factors are thought to play an important role in a bloom - salinity, temperature, and wind. HABs cause economic harm, so outbreaks are carefully monitored. For example, the Florida Fish and Wildlife Conservation Commission provides an up-to-date status report on HABs in Florida. The Texas Parks and Wildlife Department also provides a status report. While no particular cause of HABs has been found, many different factors can contribute to their presence. These factors can include water pollution, which originates from sources such as human sewage and agricultural runoff. The occurrence of HABs in some locations appears to be entirely natural (algal blooms are a seasonal occurrence resulting from coastal upwelling, a natural result of the movement of certain ocean currents) while in others they appear to be a result of increased nutrient pollution from human activities. The growth of marine phytoplankton is generally limited by the availability of nitrates and phosphates, which can be abundant in agricultural run-off as well as coastal upwelling zones. Other factors such as iron-rich dust influx from large desert areas such as the Sahara Desert are thought to play a major role in causing HAB events. Some algal blooms on the Pacific Coast have also been linked to occurrences of large-scale climatic oscillations such as El Niño events. Other causes Other factors such as iron-rich dust influx from large desert areas such as the Sahara are thought to play a role in causing HABs. Some algal blooms on the Pacific coast have also been linked to natural occurrences of large-scale climatic oscillations such as El Niño events. HABs are also linked to heavy rainfall. Although HABs in the Gulf of Mexico were witnessed in the early 1500s by explorer Cabeza de Vaca, it is unclear what initiates these blooms and how large a role nanthropogenic and natural factors play in their development. Number and sizes The number of reported harmful algal blooms (cyanobacterial) has been increasing throughout the world. It is unclear whether the apparent increase in frequency and severity of HABs in various parts of the world is in fact a real increase or is due to increased observation effort and advances in species identification technology. In 2008, the U.S. government prepared a report on the problem, "Harmful Algal Bloom Management and Response: Assessment and Plan". The report recognized the seriousness of the problem: Researchers have reported the growth of HABs in Europe, Africa and Australia. Those have included blooms on some of the African Great Lakes, such as Lake Victoria, the second largest freshwater lake in the world. India has been reporting an increase in the number of blooms each year. In 1977 Hong Kong reported its first coastal HAB. By 1987 they were getting an average of 35 per year. Additionally, there have been reports of harmful algal blooms throughout popular Canadian lakes such as Beaver Lake and Quamichan Lake. These blooms were responsible for the deaths of a few animals and led to swimming advisories. Global warming and pollution is causing algal blooms to form in places previously considered "impossible" or rare for them to exist, such as under the ice sheets in the Arctic, in Antarctica, the Himalayan Mountains, the Rocky Mountains, and in the Sierra Nevada Mountains. In the U.S., every coastal state has had harmful algal blooms over the last decade and new species have emerged in new locations that were not previously known to have caused problems. Inland, major rivers have seen an increase in their size and frequency. In 2015 the Ohio River had a bloom which stretched an "unprecedented" into adjoining states and tested positive for toxins, which created drinking water and recreation problems. A portion of Utah's Jordan River was closed due to toxic algal bloom in 2016. Off the west coast of South Africa, HABs caused by Alexandrium catanella occur every spring. These blooms of organisms cause severe disruptions in fisheries of these waters as the toxins in the phytoplankton cause filter-feeding shellfish in affected waters to become poisonous for human consumption. Harmful effects As algal blooms grow, they deplete the oxygen in the water and block sunlight from reaching fish and plants. Such blooms can last from a few days to many months. With less light, plants beneath the bloom can die and fish can starve. Furthermore, the dense population of a bloom reduces oxygen saturation during the night by respiration. And when the algae eventually die off, the microbes which decompose the dead algae use up even more oxygen, which in turn causes more fish to die or leave the area. When oxygen continues to be depleted by blooms it can lead to hypoxic dead zones, where neither fish nor plants are able to survive. These dead zones in the case of the Chesapeake Bay, where they are a normal occurrence, are also suspected of being a major source of methane. Scientists have found that HABs were a prominent feature of previous mass extinction events, including the End-Permian Extinction. Human health Tests have shown some toxins near blooms can be in the air and thereby be inhaled, which could affect health. Food Eating fish or shellfish from lakes with a bloom nearby is not recommended. Potent toxins are accumulated in shellfish that feed on the algae. If the shellfish are consumed, various types of poisoning may result. These include amnesic shellfish poisoning (ASP), diarrhetic shellfish poisoning, neurotoxic shellfish poisoning, and paralytic shellfish poisoning. A 2002 study has shown that algal toxins may be the cause for as many as 60,000 intoxication cases in the world each year. In 1987 a new illness emerged: amnesic shellfish poisoning (ASP). People who had eaten mussels from Prince Edward Island were found to have ASP. The illness was caused by domoic acid, produced by a diatom found in the area where the mussels were cultivated. A 2013 study found that toxic paralytic shellfish poisoning in the Philippines during HABs has caused at least 120 deaths over a few decades. After a 2014 HAB incident in Monterey Bay, California, health officials warned people not to eat certain parts of anchovy, sardines, or crab caught in the bay. In 2015 most shellfish fisheries in Washington, Oregon and California were shut down because of high concentrations of toxic domoic acid in shellfish. People have been warned that inhaling vapors from waves or wind during a HAB event may cause asthma attacks or lead to other respiratory ailments. In 2018 agricultural officials in Utah worried that even crops could become contaminated if irrigated with toxic water, although they admit that they can't measure contamination accurately because of so many variables in farming. They issued warnings to residents, however, out of caution. Drinking water Persons are generally warned not to enter or drink water from algal blooms, or let their pets swim in the water since many pets have died from algal blooms. In at least one case, people began getting sick before warnings were issued. There is no treatment available for animals, including livestock cattle, if they drink from algal blooms where such toxins are present. Pets are advised to be kept away from algal blooms to avoid contact. In some locations visitors have been warned not to even touch the water. Boaters have been told that toxins in the water can be inhaled from the spray from wind or waves. Ocean beaches, lakes and rivers have been closed due to algal blooms. After a dog died in 2015 from swimming in a bloom in California's Russian River, officials likewise posted warnings for parts of the river. Boiling the water at home before drinking does not remove the toxins. In August 2014 the city of Toledo, Ohio advised its 500,000 residents to not drink tap water as the high toxin level from an algal bloom in western Lake Erie had affected their water treatment plant's ability to treat the water to a safe level. The emergency required using bottled water for all normal uses except showering, which seriously affected public services and commercial businesses. The bloom returned in 2015 and was forecast again for the summer of 2016. In 2004, a bloom in Kisumu Bay, which is the drinking water source for 500,000 people in Kisumu, Kenya, suffered from similar water contamination. In China, water was cut off to residents in 2007 due to an algal bloom in its third largest lake, which forced 2 million people to use bottled water. A smaller water shut-down in China affected 15,000 residents two years later at a different location. Australia in 2016 also had to cut off water to farmers. Alan Steinman of Grand Valley State University has explained that among the major causes for the algal blooms in general, and Lake Erie specifically, is because blue-green algae thrive with high nutrients, along with warm and calm water. Lake Erie is more prone to blooms because it has a high nutrient level and is shallow, which causes it to warm up more quickly during the summer. Symptoms from drinking toxic water can show up within a few hours after exposure. They can include nausea, vomiting, and diarrhea, or trigger headaches and gastrointestinal problems. Although rare, liver toxicity can cause death. Those symptoms can then lead to dehydration, another major concern. In high concentrations, the toxins in the algal waters when simply touched can cause skin rashes, irritate the eyes, nose, mouth or throat. Those with suspected symptoms are told to call a doctor if symptoms persist or they can't hold down fluids after 24 hours. In studies at the population level bloom coverage has been significantly related to the risk of non-alcoholic liver disease death. Neurological disorders Toxic algae blooms are thought to play a role in humans developing degenerative neurological disorders such as amyotrophic lateral sclerosis and Parkinson's disease. Less than one percent of algal blooms produce hazardous toxins, such as microcystins. Although blue-green or other algae do not usually pose a direct threat to health, the toxins (poisons) which they produce are considered dangerous to humans, land animals, sea mammals, birds and fish when the toxins are ingested. The toxins are neurotoxins which destroy nerve tissue which can affect the nervous system, brain, and liver, and can lead to death. Effects on humans from harmful algal blooms in marine environments Humans are affected by the HAB species by ingesting improperly harvested shellfish, breathing in aerosolized brevetoxins (i.e. PbTx or Ptychodiscus toxins) and in some cases skin contact. The brevetoxins bind to voltage-gated sodium channels, important structures of cell membranes. Binding results in persistent activation of nerve cells, which interferes with neural transmission leading to health problems. These toxins are created within the unicellular organism, or as a metabolic product. The two major types of brevetoxin compounds have similar but distinct backbone structures. PbTx-2 is the primary intracellular brevetoxin produced by K. brevis blooms. However, over time, the PbTx-2 brevetoxin can be converted to PbTx-3 through metabolic changes. Researchers found that PbTx-2 has been the primary intracellular brevetoxin that converts over time into PbTx-3. In the U.S., the seafood consumed by humans is tested regularly for toxins by the USDA to ensure safe consumption. Such testing is common in other nations. However, improper harvesting of shellfish can cause paralytic shellfish poisoning and neurotoxic shellfish poisoning in humans. Some symptoms include drowsiness, diarrhea, nausea, loss of motor control, tingling, numbing or aching of extremities, incoherence, and respiratory paralysis. Reports of skin irritation after swimming in the ocean during a HAB are common. When the HAB cells rupture, they release extracellular brevetoxins into the environment. Some of those stay in the ocean, while other particles get aerosolized. During onshore winds, brevetoxins can become aerosolized by bubble-mediated transport, causing respiratory irritation, bronchoconstriction, coughing, and wheezing, among other symptoms. It is recommended to avoid contact with wind-blown aerosolized toxin. Some individuals report a decrease in respiratory function after only 1 hour of exposure to a K. brevis red-tide beach and these symptoms may last for days. People with severe or persistent respiratory conditions (such as chronic lung disease or asthma) may experience stronger adverse reactions. The National Oceanic and Atmospheric Administration's National Ocean Service provides a public conditions report identifying possible respiratory irritation impacts in areas affected by HABs. Economic impact Recreation and tourism The hazards which accompany harmful algal blooms have hindered visitors' enjoyment of beaches and lakes in places in the U.S. such as Florida, California, Vermont, and Utah. Persons hoping to enjoy their vacations or days off have been kept away to the detriment of local economies. Lakes and rivers in North Dakota, Minnesota, Utah, California and Ohio have had signs posted warning about the potential of health risk. Similar blooms have become more common in Europe, with France among the countries reporting them. In the summer of 2009, beaches in northern Brittany became covered by tonnes of potentially lethal rotting green algae. A horse being ridden along the beach collapsed and died from fumes given off by the rotting algae. The economic damage resulting from lost business has become a serious concern. According to one report in 2016, the four main economic impacts from harmful algal blooms come from damage to human health, fisheries, tourism and recreation, and the cost of monitoring and management of area where blooms appear. EPA estimates that algal blooms impact 65 percent of the country's major estuaries, with an annual cost of $2.2 billion. In the U.S. there are an estimated 166 coastal dead zones. Because data collection has been more difficult and limited from sources outside the U.S., most of the estimates as of 2016 have been primarily for the U.S. In port cities in the Shandong Province of eastern China, residents are no longer surprised when massive algal blooms arrive each year and inundate beaches. Prior to the Beijing Olympics in 2008, over 10,000 people worked to clear 20,000 tons of dead algae from beaches. In 2013 another bloom in China, thought to be its largest ever, covered an area of 7,500 square miles, and was followed by another in 2015 which blanketed an even greater 13,500 square miles. The blooms in China are thought to be caused by pollution from untreated agricultural and industrial discharges into rivers leading to the ocean. Fisheries industry As early as 1976 a short-term, relatively small, dead zone off the coasts of New York and New Jersey cost commercial and recreational fisheries over $500 million. In 1998 a HAB in Hong Kong killed over $10 million in high-value fish. In 2009, the economic impact for the state of Washington's coastal counties dependent on its fishing industry was estimated to be $22 million. In 2016, the U.S. seafood industry expected future lost revenue could amount to $900 million annually. NOAA has provided a few cost estimates for various blooms over the past few years: $10.3 million in 2011 due to a HAB at Texas oyster landings; $2.4 million lost income by tribal commerce from 2015 fishery closures in the pacific northwest; $40 million from Washington state's loss of tourism from the same fishery closure. Along with damage to businesses, the toll from human sickness results in lost wages and damaged health. The costs of medical treatment, investigation by health agencies through water sampling and testing, and the posting of warning signs at effected locations is also costly. The closures applied to areas where this algae bloom occurs has a big negative impact of the fishing industries, add to that the high fish mortality that follows, the increase in price due to the shortage of fish available and decrease in the demand for seafood due to the fear of contamination by toxins. This causes a big economic loss for the industry. Economic costs are estimated to rise. In June 2015, for instance, the largest known toxic HAB forced the shutdown of the west coast shellfish industry, the first time that has ever happened. One Seattle NOAA expert commented, "This is unprecedented in terms of the extent and magnitude of this harmful algal bloom and the warm water conditions we're seeing offshore...." The bloom covered a range from Santa Barbara, California northward to Alaska. The negative impact on fish can be even more severe when they are confined to pens, as they are in fish farms. In 2007 a fish farm in British Columbia lost 260 tons of salmon as a result of blooms, and in 2016 a farm in Chile lost 23 million salmon after an algal bloom. Environmental impact Dead zones The presence of harmful algae bloom's can lead to hypoxia or anoxia in a body of water. The depletion of oxygen within a body of water can lead to the creation of a dead zone. Dead zones occur when a body of water has become unsuitable for organism survival in that location. HAB's cause dead zones by consuming oxygen in these bodies of water - leaving minimal oxygen available to other marine organisms. When the HAB's die, their bodies will sink to the bottom of the body of water - as the decaying of their bodies (through bacteria) is what causes the consumption of oxygen. Once oxygen levels get so low, the HAB's have placed the body of water in hypoxia - and these low oxygen levels will cause marine organisms to seek out better suited locations for their survival. Blooms can harm the environment even without producing toxins by depleting oxygen from the water when growing and while decaying after they die. Blooms can also block sunlight to organisms living beneath it. A record-breaking number and size of blooms have formed in the Pacific coast, in Lake Erie, in the Chesapeake Bay and in the Gulf of Mexico, where a number of dead zones were created as a result. In the 1960s the number of dead zones worldwide was 49; the number rose to over 400 by 2008. Among the largest dead zones were those in northern Europe's Baltic Sea and the Gulf of Mexico, which affects a $2.8 billion U.S. fish industry. Unfortunately, dead zones rarely recover and usually grow in size. One of the few dead zones to ever recover was in the Black Sea, which returned to normal fairly quickly after the collapse of the Soviet Union in the 1990s due to a resulting reduction in fertilizer use. Fish die-offs Massive fish die-offs have been caused by HABs. In 2016, 23 million salmon which were being farmed in Chile died from a toxic algae bloom. To get rid of the dead fish, the ones fit for consumption were made into fishmeal and the rest were dumped 60 miles offshore to avoid risks to human health. The economic cost of that die-off is estimated to have been $800 million. Environmental expert Lester Brown has written that the farming of salmon and shrimp in offshore ponds concentrates waste, which contributes to eutrophication and the creation of dead zones. Other countries have reported similar impacts, with cities such as Rio de Janeiro, Brazil seeing major fish die-offs from blooms becoming a common occurrence. In early 2015, Rio collected an estimated 50 tons of dead fish from the lagoon where water events in the 2016 Olympics were planned to take place. The Monterey Bay has suffered from harmful algal blooms, most recently in 2015: "Periodic blooms of toxin-producing Pseudo-nitzschia diatoms have been documented for over 25 years in Monterey Bay and elsewhere along the U.S. west coast. During large blooms, the toxin accumulates in shellfish and small fish such as anchovies and sardines that feed on algae, forcing the closure of some fisheries and poisoning marine mammals and birds that feed on contaminated fish." Similar fish die-offs from toxic algae or lack of oxygen have been seen in Russia, Colombia, Vietnam, China, Canada, Turkey, Indonesia, and France. Land animal deaths Land animals, including livestock and pets have been affected. Dogs have died from the toxins after swimming in algal blooms. Warnings have come from government agencies in the state of Ohio, which noted that many dogs and livestock deaths resulted from HAB exposure in the U.S. and other countries. They also noted in a 2003 report that during the previous 30 years, they have seen more frequent and longer-lasting harmful algal blooms." In 50 countries and 27 states that year there were reports of human and animal illnesses linked to algal toxins. In Australia, the department of agriculture warned farmers that the toxins from a HAB had the "potential to kill large numbers of livestock very quickly." Marine mammals have also been seriously harmed, as over 50 percent of unusual marine mammal deaths are caused by harmful algal blooms. In 1999, over 65 bottlenose dolphins died during a coastal HAB in Florida. In 2013 a HAB in southwest Florida killed a record number of Manatee. Whales have also died in large numbers. During the period from 2005 to 2014, Argentina reported an average 65 baby whales dying which experts have linked to algal blooms. A whale expert there expects the whale population to be reduced significantly. In 2003 off Cape Cod in the North Atlantic, at least 12 humpback whales died from toxic algae from a HAB. In 2015 Alaska and British Columbia reported many humpback whales had likely died from HAB toxins, with 30 having washed ashore in Alaska. "Our leading theory at this point is that the harmful algal bloom has contributed to the deaths," said a NOAA spokesperson. Birds have died after eating dead fish contaminated with toxic algae. Rotting and decaying fish are eaten by birds such as pelicans, seagulls, cormorants, and possibly marine or land mammals, which then become poisoned. The nervous systems of dead birds were examined and had failed from the toxin's effect. On the Oregon and Washington coast, a thousand scoters, or sea ducks, were also killed in 2009. "This is huge," said a university professor. As dying or dead birds washed up on the shore, wildlife agencies went into "an emergency crisis mode." It has even been suggested that harmful algal blooms are responsible for the deaths of animals found in fossil troves, such as the dozens of cetacean skeletons found at Cerro Ballena. Effects on marine ecosystems Harmful algal blooms in marine ecosystems have been observed to cause adverse effects to a wide variety of aquatic organisms, most notably marine mammals, sea turtles, seabirds and finfish. The impacts of HAB toxins on these groups can include harmful changes to their developmental, immunological, neurological, or reproductive capacities. The most conspicuous effects of HABs on marine wildlife are large-scale mortality events associated with toxin-producing blooms. For example, a mass mortality event of 107 bottlenose dolphins occurred along the Florida panhandle in the spring of 2004 due to ingestion of contaminated menhaden with high levels of brevetoxin. Manatee mortalities have also been attributed to brevetoxin but unlike dolphins, the main toxin vector was endemic seagrass species (Thalassia testudinum) in which high concentrations of brevetoxins were detected and subsequently found as a main component of the stomach contents of manatees. Additional marine mammal species, like the highly endangered North Atlantic right whale, have been exposed to neurotoxins by preying on highly contaminated zooplankton. With the summertime habitat of this species overlapping with seasonal blooms of the toxic dinoflagellate Alexandrium fundyense, and subsequent copepod grazing, foraging right whales will ingest large concentrations of these contaminated copepods. Ingestion of such contaminated prey can affect respiratory capabilities, feeding behavior, and ultimately the reproductive condition of the population. Immune system responses have been affected by brevetoxin exposure in another critically endangered species, the loggerhead sea turtle. Brevetoxin exposure, from inhalation of aerosolized toxins and ingestion of contaminated prey, can have clinical signs of increased lethargy and muscle weakness in loggerhead sea turtles causing these animals to wash ashore in a decreased metabolic state with increases of immune system responses upon blood analysis. Examples of common harmful effects of HABs include: the production of neurotoxins which cause mass mortalities in fish, seabirds, sea turtles, and marine mammals human illness or death from consumption of seafood contaminated by toxic algae mechanical damage to other organisms, such as disruption of epithelial gill tissues in fish, resulting in asphyxiation oxygen depletion of the water column (hypoxia or anoxia) from cellular respiration and bacterial degradation Marine life exposure HABs occur naturally off coasts all over the world. Marine dinoflagellates produce ichthyotoxins. Where HABs occur, dead fish wash up on shore for up to two weeks after a HAB has been through the area. In addition to killing fish, the toxic algae contaminate shellfish. Some mollusks are not susceptible to the toxin, and store it in their fatty tissues. By consuming the organisms responsible for HABs, shellfish can accumulate and retain saxitoxin produced by these organisms. Saxitoxin blocks sodium channels and ingestion can cause paralysis within 30 minutes. In addition to directly harming marine animals and vegetation loss, harmful algal blooms can also lead to ocean acidification, which occurs when the amount of carbon dioxide in the water is increased to unnatural levels. Ocean acidification slows the growth of certain species of fish and shellfish, and even prevents shell formation in certain species of mollusks. These subtle, small changes can add up over time to cause chain reactions and devastating effects on whole marine ecosystems. Other animals that eat exposed shellfish are susceptible to the neurotoxin, which may lead to neurotoxic shellfish poisoning and sometimes even death. Most mollusks and clams filter feed, which results in higher concentrations of the toxin than just drinking the water. Scaup, for example, are diving ducks whose diet mainly consists of mollusks. When scaup eat the filter-feeding shellfish that have accumulated high levels of the HAB toxin, their population becomes a prime target for poisoning. However, even birds that do not eat mollusks can be affected by simply eating dead fish on the beach or drinking the water. The toxins released by the blooms can kill marine animals including dolphins, sea turtles, birds, and manatees. The Florida Manatee, a subspecies of the West Indian Manatee, is a species often impacted by red tide blooms. Florida manatees are often exposed to the poisonous red-tide toxins either by consumption or inhalation. There are many small barnacles, crustaceans, and other epiphytes that grow on the blades of seagrass. These tiny creatures filter particles from the water around them and use these particles as their main food source. During red tide blooms, they also filter the toxic red tide cells from the water, which then becomes concentrated inside them. Although these toxins do not harm epiphytes, they are extremely poisonous to marine creatures who consume (or accidentally consume) the exposed epiphytes, such as manatees. When manatees unknowingly consume exposed epiphytes while grazing on sea grass, the toxins are subsequently released from the epiphytes and ingested by the manatees. In addition to consumption, manatees may also become exposed to air-borne Brevetoxins released from harmful red-tide cells when passing through algal blooms. Manatees also have an immunoresponse to HABs and their toxins that can make them even more susceptible to other stressors. Due to this susceptibility, manatees can die from either the immediate, or the after effects of the HAB. In addition to causing manatee mortalities, red-tide exposure also causes severe sublethal health problems among Florida manatee populations. Studies have shown that red-tide exposure among free-ranging Florida manatees has been shown to negatively impact immune functioning by causing increased inflammation, a reduction in lymphocyte proliferation responses, and oxidative stress. Fish such as Atlantic herring, American pollock, winter flounder, Atlantic salmon, and cod were dosed orally with these toxins in an experiment, and within minutes the subjects started to exhibit a loss of equilibrium and began to swim in an irregular, jerking pattern, followed by paralysis and shallow, arrhythmic breathing and eventually death, after about an hour. HABs have been shown to have a negative effect also in the memory functions of sea lions. Potential remedies Reducing nutrient runoff Since many algal blooms are caused by a major influx of nutrient-rich runoff into a water body, programs to treat wastewater, reduce the overuse of fertilizers in agriculture and reducing the bulk flow of runoff can be effective for reducing severe algal blooms at river mouths, estuaries, and the ocean directly in front of the river's mouth. The nitrates and phosphorus in fertilizers cause algal blooms when they run off into lakes and rivers after heavy rains. Modifications in farming methods have been suggested, such as only using fertilizer in a targeted way at the appropriate time exactly where it can do the most good for crops to reduce potential runoff. A method used successfully is drip irrigation, which instead of widely dispersing fertilizers on fields, drip-irrigates plant roots through a network of tubes and emitters, leaving no traces of fertilizer to be washed away. Drip irrigation also prevents the formation of algal blooms in reservoirs for drinking water while saving up to 50% of water typically used by agriculture. There have also been proposals to create buffer zones of foliage and wetlands to help filter out the phosphorus before it reaches water. Other experts have suggested using conservation tillage, changing crop rotations, and restoring wetlands. It is possible for some dead zones to shrink within a year under proper management. There have been a few success stories in controlling chemicals. After Norway's lobster fishery collapsed in 1986 due to low oxygen levels, for instance, the government in neighboring Denmark took action and reduced phosphorus output by 80 percent which brought oxygen levels closer to normal. Similarly, dead zones in the Black Sea and along the Danube River recovered after phosphorus applications by farmers were reduced by 60%. Nutrients can be permanently removed from wetlands harvesting wetland plants, reducing nutrient influx into surrounding bodies of water. Research is ongoing to determine the efficacy of floating mats of cattails in removing nutrients from surface waters too deep to sustain the growth of wetland plants. In the U.S., surface runoff is the largest source of nutrients added to rivers and lakes, but is mostly unregulated under the federal Clean Water Act. Locally developed initiatives to reduce nutrient pollution are underway in various areas of the country, such as the Great Lakes region and the Chesapeake Bay. To help reduce algal blooms in Lake Erie, the State of Ohio presented a plan in 2016 to reduce phosphorus runoff. Chemical treatment Although a number of algaecides have been effective in killing algae, they have been used mostly in small bodies of water. For large algal blooms, however, adding algaecides such as silver nitrate or copper sulfate can have worse effects, such as killing fish outright and harming other wildlife. Cyanobacteria can also develop resistance to copper-containing algaecides, requiring a larger quantity of the chemical to be effective for HAB management, but introducing a greater risk to other species in the region. The negative effects can therefore be worse than letting the algae die off naturally. In 2019, Chippewa Lake in Northeast Ohio became the first lake in the U.S. to successfully test a new chemical treatment. The chemical formula killed all of the toxic algae in the lake within a single day. The formula has already been used in China, South Africa and Israel. In February 2020, Roodeplaat Dam in Gauteng Province, South Africa was treated with a new algicide formulation against a severe bloom of Microcystis sp. This formulation allows the granular product to float and slow-release its active ingredient, sodium percarbonate, that releases hydrogen peroxide (H2O2), on the water surface. Consequently, the effective concentrations are limited, vertically, to the surface of the water; and spatially to areas where cyanobacteria are abundant. This provide the aquatic organisms a "safe haven" in untreated areas and avoids the adverse effects associated with the use of standard algicides. Bioactive compounds isolated from terrestrial and aquatic plants, particularly seaweeds, have seen results as a more environmentally friendly control for HABs. Molecules found in seaweeds such as Corallina, Sargassum, and Saccharina japonica have shown to inhibit some bloom-forming microalgae. In addition to their anti-microalgal effects, the bioactive molecules found in these seaweeds also have antibacterial, antifungal, and antioxidant properties. Removal of HABs using aluminum-modified clay Other chemicals are being tested for their efficacy for removing cyanobacteria during blooms. Modified clays, such as aluminum chloride modified clay (AC-MC), aluminum sulfide modified clay (AS-MC) and polyaluminum chloride modified clay (PAC-MC) have shown positive results in vitro for the removal of Aureococcus by trapping the microalgae in the sediment of clay, removing it from the top layer of water where harmful blooms can occur. Many efforts have been made in an attempt to control HAB's so that the harm that they cause can be kept at a minimum. Studies into the use of clay to control HAB's have proven that this method may be an effective way to reduce the negative effects caused by HAB's. The addition of aluminum chloride, aluminum sulfate, or polyaluminum chloride to clay can modify the clay surface and increase its efficiency in the removal of HAB's from a body of water. The addition of aluminum-containing compounds causes the clay particles to achieve a positive charge, with these particles then undergoing flocculation with the harmful algae cells. The algae cells then group together: becoming a sediment instead of a suspension. The process of flocculation will limit the bloom growth and reduce the impact in which the bloom can have on an area. In the Netherlands, successful algae and phosphate removal from surface water has been obtained by pumping affected water through a hydrodynamic separator. The treated water is then free from algae and contains a significant lower amount of phosphate since the removed algae cells contain a lot of phosphate. The treated water also gets a lower turbidity. Future projects will study the positive effects on the ecology and marine life as it is expected plant life will be restored and a reduction in bottom dwelling fish will automatically reduce the turbidity of the cleaned water. The removed algae and phosphate may find its way not as waste but as infeed to bio digesters. Additional reservoirs Other experts have proposed building reservoirs to prevent the movement of algae downstream. However, that can lead to the growth of algae within the reservoir, which become sediment traps with a resultant buildup of nutrients. Some researchers found that intensive blooms in reservoirs were the primary source of toxic algae observed downstream, but the movement of algae has so far been less studied, although it is considered a likely cause of algae transport. Restoring shellfish populations The decline of filter-feeding shellfish populations, such as oysters, likely contribute to HAB occurrence. As such, numerous research projects are assessing the potential of restored shellfish populations to reduce HAB occurrence. Improved monitoring Other remedies include using improved monitoring methods, trying to improve predictability, and testing new potential methods of controlling HABs. Some countries surrounding the Baltic Sea, which has the world's largest dead zone, have considered using massive geoengineering options, such as forcing air into bottom layers to aerate them. Mathematical models are useful to predict future algal blooms. Sensors and monitoring devices A growing number of scientists agree that there is an urgent need to protect the public by being able to forecast harmful algal blooms. One way they hope to do that is with sophisticated sensors which can help warn about potential blooms. The same types of sensors can also be used by water treatment facilities to help them prepare for higher toxic levels. The only sensors now in use are located in the Gulf of Mexico. In 2008 similar sensors in the Gulf forewarned of an increased level of toxins that led to a shutdown of shellfish harvesting in Texas along with a recall of mussels, clams, and oysters, possibly saving many lives. With an increase in the size and frequency of HABs, experts state the need for significantly more sensors located around the country. The same kinds of sensors can also be used to detect threats to drinking water from intentional contamination. Satellite and remote sensing technologies are growing in importance for monitoring, tracking, and detecting HABs. Four U.S. federal agencies—EPA, the National Aeronautics and Space Administration (NASA), NOAA, and the U.S. Geological Survey (USGS)—are working on ways to detect and measure cyanobacteria blooms using satellite data. The data may help develop early-warning indicators of cyanobacteria blooms by monitoring both local and national coverage. In 2016 automated early-warning monitoring systems were successfully tested, and for the first time proven to identify the rapid growth of algae and the subsequent depletion of oxygen in the water. Examples Notable occurrences 1530: First alleged case off the Florida Gulf Coast is without foundation. According to Marine Lab at University of Miami, the first possible Red Tide in Florida was in 1844. Earlier "signs" were from boats sorting fish on their way to home port dumping trash fish overboard. Thus "dead fish" reports along the coast were not Red Tide. 1793: The first recorded case occurring in British Columbia, Canada. 1840: No deaths of humans have been attributed to Florida red tide, but people may experience respiratory irritation (coughing, sneezing, and tearing) when the red tide organism (Karenia brevis) is present along a coast and winds blow its aerosolized toxins. Swimming is usually safe, but skin irritation and burning is possible in areas of high concentration of red tide. 1844: First possible case off the Florida Gulf Coast according to Marine Lab University of Miami, probably by ships off shore, no known inhabitants of the coast reporting. 1901: Lingulodinium polyedrum produces brilliant displays of bioluminescence in warm coastal waters. Seen in Southern California regularly since at least 1901. 1916: Massive fish kill along SW Florida coast. Noxious air thought to be seismic underwater explosion releasing chlorine gas. 1947: Southwest Florida: A massive bloom that lasts close a year almost destroys the commercial fishing industry and sponge beds. The resulting poisoned surf caused beaches to need to be evacuated. 1972: A red tide was caused in New England by a toxic dinoflagellate Alexandrium (Gonyaulax) tamarense. The red tides caused by the dinoflagellate Gonyaulax are serious because this organism produces saxitoxin and gonyautoxins which accumulate in shellfish and if ingested may lead to paralytic shellfish poisoning (PSP) and can lead to death. 1972 and 1973: Red tides killed two villagers west of Port Moresby. In March 1973 a red tide invaded Port Moresby Harbour and destroyed a Japanese pearl farm. In 1972, a red tide was caused in New England by a toxic dinoflagellate Alexandrium (Gonyaulax) tamarense. 1976: The first PSP case in Sabah, Malaysian Borneo where 202 victims were reported to be suffering and 7 deaths. 1987: A red algae bloom in Prince Edward Island caused over a million dollars in losses. 1991: The largest algal bloom on record was the 1991 Darling River cyanobacterial bloom in Australia, largely of Anabaena circinalis, between October and December 1991 over of the Barwon and Darling Rivers. 2005: The Canadian red tide was discovered to have come further south than it has in years prior by the ship (R/V) Oceanus, closing shellfish beds in Maine and Massachusetts and alerting authorities as far south as Montauk (Long Island, NY) to check their beds. Experts who discovered the reproductive cysts in the seabed warn of a possible spread to Long Island in the future, halting the area's fishing and shellfish industry and threatening the tourist trade, which constitutes a significant portion of the island's economy. In 2008 large blooms of the algae Cochlodinium polykrikoid were found along the Chesapeake Bay and nearby tributaries such as the James River, causing millions of dollars in damage and numerous beach closures. In 2009, Brittany, France experienced recurring macroalgal blooms caused by the high amount of fertilizer discharging in the sea due to intensive pig farming, causing lethal gas emissions that have led to one case of human unconsciousness and three animal deaths. In 2010, dissolved iron in the ash from the Eyjafjallajökull volcano triggered a plankton bloom in the North Atlantic. 2011: Northern California 2011: Gulf of Mexico In 2013, an algal bloom was caused in Qingdao, China, by sea lettuce. 2013: In January, a red tide occurred again on the West Coast Sea of Sabah in the Malaysian Borneo. Two human fatalities were reported after they consumed shellfish contaminated with the red tide toxin. 2013: In January, a red tide bloom appeared at Sarasota beach – mainly Siesta Key, Florida causing a fish kill that had a negative impact on tourists, and caused respiratory issues for beach-goers. In 2014, Myrionecta rubra (previously known as Mesodinium rubrum), a ciliate protist that ingests cryptomonad algae, caused a bloom in southeastern coast of Brazil. In 2014, blue green algae caused a bloom in the western basin of Lake Erie, poisoning the Toledo, Ohio water system connected to 500,000 people. 2014: In August, massive 'Florida red tide' long and wide. 2015: June, 12 persons hospitalized in the Philippine province of Bohol for red tide poisoning. 2015: August, several beaches in the Netherlands between Katwijk and Scheveningen were plagued. Government institutions dissuaded swimmers from entering the water. 2015: September, a red tide bloom occurred in the Gulf of Mexico, affecting Padre Island National Seashore along North Padre Island and South Padre Island in Texas. 2017 and 2018: K. brevis red tide algae with warnings not to swim, state of emergency declared, dead dolphin and manatee, worsened by Caloosahatchee River. Peaked in the summer of 2018. Toxic harmful algae bloom red tide in Southwest Florida. A rare harmful algal bloom along Florida's east coast of Palm Beach County occurred the weekend of September 30, 2018. In 2019, blue-green algae, or Cyanobacteria blooms, were again problematic on Lake Erie. In early August 2019, satellite images depicted a bloom stretching up to 1,300 square kilometers, with the epicentre near Toledo, Ohio. The largest Lake Erie bloom to date occurred in 2015, exceeding the severity index at 10.5 and in 2011 at a 10. A large bloom does not necessarily mean the cyanobacteria ... will produce toxins", said Michael McKay, of the University of Windsor. Water quality testing was underway in August. In 2019, a bloom of Noctiluca algae caused bioluminescent glow off the coast of Chennai, India. Similar blooms have been reported annually in the northern Arabian Sea since the early 2000s. 2021: In July, a large red tide occurred on the Gulf Coast of Florida in and around Tampa Bay. The event has caused the death of millions of pounds of fish, and led to the National Weather Service declaring a Beach Hazard. 2021: in October, the mass deaths of shellfish (specifically crabs and lobster) on the beaches of Northern England, led to and algal bloom being blamed as the cause by the UK Government. However, those who work in the fishing industry in the area, and some academics, have stated that pyridine poisoning is the cause. 2023: A blue-green algae bloom occurred in Lough Neagh, Northern Ireland, the largest fresh water lake in the UK and Ireland where 40% of Northern Ireland gets its tap water from. It is caused by Northern Ireland experiencing both the wettest and hottest summer on record making conditions perfect for blue-green algae. Poor management of the Lough is being blamed. The bloom has killed dogs and wildlife, including swans. United States In July 2016 Florida declared a state of emergency for four counties as a result of blooms. They were said to be "destroying" a number of businesses and affecting local economies, with many needing to shut down entirely. Some beaches were closed, and hotels and restaurants suffered a drop in business. Tourist sporting activities such as fishing and boating were also affected. In 2019, the biggest Sargassum bloom ever seen created a crisis in the Tourism industry in North America. This event was likely caused by climate change and nutrient pollution from fertilizers. Several Caribbean countries considered declaring a state of emergency due to the impact on tourism as a result of environmental damage and potentially toxic and harmful health effects. On the U.S. coasts The Gulf of Maine frequently experiences blooms of the dinoflagellate Alexandrium fundyense, an organism that produces saxitoxin, the neurotoxin responsible for paralytic shellfish poisoning. The well-known "Florida red tide" that occurs in the Gulf of Mexico is a HAB caused by Karenia brevis, another dinoflagellate which produces brevetoxin, the neurotoxin responsible for neurotoxic shellfish poisoning. California coastal waters also experience seasonal blooms of Pseudo-nitzschia, a diatom known to produce domoic acid, the neurotoxin responsible for amnesic shellfish poisoning. The term red tide is most often used in the US to refer to Karenia brevis blooms in the eastern Gulf of Mexico, also called the Florida red tide. K. brevis is one of many different species of the genus Karenia found in the world's oceans. Major advances have occurred in the study of dinoflagellates and their genomics. Some include identification of the toxin-producing genes (PKS genes), exploration of environmental changes (temperature, light/dark, etc.) have on gene expression, as well as an appreciation of the complexity of the Karenia genome. These blooms have been documented since the 1800s, and occur almost annually along Florida's coasts. There was increased research activity of harmful algae blooms (HABs) in the 1980s and 1990s. This was primarily driven by media attention from the discovery of new HAB organisms and the potential adverse health effects of their exposure to animals and humans. The Florida red tides have been observed to have spread as far as the eastern coast of Mexico. The density of these organisms during a bloom can exceed tens of millions of cells per litre of seawater, and often discolor the water a deep reddish-brown hue. Red tide is also sometimes used to describe harmful algal blooms on the northeast coast of the United States, particularly in the Gulf of Maine. This type of bloom is caused by another species of dinoflagellate known as Alexandrium fundyense. These blooms of organisms cause severe disruptions in fisheries of these waters, as the toxins in these organism cause filter-feeding shellfish in affected waters to become poisonous for human consumption due to saxitoxin. The related Alexandrium monilatum is found in subtropical or tropical shallow seas and estuaries in the western Atlantic Ocean, the Caribbean Sea, the Gulf of Mexico, and the eastern Pacific Ocean. Texas Natural water reservoirs in Texas have been threatened by anthropogenic activities due to large petroleum refineries and oil wells (i.e. emission and wastewater discharge), massive agricultural activities (i.e. pesticide release) and mining extractions (i.e. toxic wastewater) as well as natural phenomena involving frequent HAB events. For the first time in 1985, the state of Texas documented the presence of the P. parvum (golden alga) bloom along the Pecos River. This phenomenon has affected 33 reservoirs in Texas along major river systems, including the Brazos, Canadian, Rio Grande, Colorado, and Red River, and has resulted in the death of more than 27 million fish and caused tens of millions of dollars in damage. Chesapeake Bay The Chesapeake Bay, the largest estuary in the U.S., has suffered from repeated large algal blooms for decades due to chemical runoff from multiple sources, including 9 large rivers and 141 smaller streams and creeks in parts of six states. In addition, the water is quite shallow and only 1% of the waste entering it gets flushed into the ocean. By weight, 60% of the phosphates entering the bay in 2003 were from sewage treatment plants, while 60% of its nitrates came from fertilizer runoff, farm animal waste, and the atmosphere. About 300 million pounds (140 Gg) of nitrates are added to the bay each year. The population increase in the bay watershed, from 3.7 million people in 1940 to 18 million in 2015 is also a major factor, as economic growth leads to the increased use of fertilizers and rising emissions of industrial waste. As of 2015, the six states and the local governments in the Chesapeake watershed have upgraded their sewage treatment plants to control nutrient discharges. The U.S. Environmental Protection Agency (EPA) estimates that sewage treatment plant improvements in the Chesapeake region between 1985 and 2015 have prevented the discharge of 900 million pounds (410 Gg) of nutrients, with nitrogen discharges reduced by 57% and phosphorus by 75%. Agricultural and urban runoff pollution continue to be major sources of nutrients in the bay, and efforts to manage those problems are continuing throughout the watershed. Lake Erie Recent algae blooms in Lake Erie have been fed primarily by agricultural runoff and have led to warnings for some people in Canada and Ohio not to drink their water. The International Joint Commission has called on United States and Canada to drastically reduce phosphorus loads into Lake Erie to address the threat. Green Bay Green Bay has a dead zone caused by phosphorus pollution that appears to be getting worse. Okeechobee Waterway Lake Okeechobee is an ideal habitat for cyanobacteria because its shallow, sunny, and laden with nutrients from Florida's agriculture. The Okeechobee Waterway connects the lake to the Atlantic Ocean and the Gulf of Mexico through the St. Lucie River and the Caloosahatchee respectively. This means that harmful algal blooms are carried down the estuaries as water is released during the wet summer months. In July 2018 up to 90% of Lake Okeechobee was covered in algae. Water draining from the lake filled the region with a noxious odor and caused respiratory problems in some humans during the following month. To make matters worse, harmful red tide blooms are historically common on Florida's coasts during these same summer months. Cyanobacteria in the rivers die as they reach saltwater but their nitrogen fixation feeds the red tide on the coast. Areas at the mouth of the estuaries such as Cape Coral and Port St. Lucie therefore experience the compounded effects of both types of harmful algal bloom. Cleanup crews hired by authorities in Lee County - where the Caloosahatchee meets the Gulf of Mexico - removed more than 1700 tons of dead marine life in August 2018. Baltic Sea In 2020, a large harmful algal bloom closed beaches in Poland and Finland, brought on by a combination of fertilizer runoff and extreme heat, posing a risk to flounder and mussel beds. This is seen by the Baltic Sea Action Group as a threat to biodiversity and regional fishing stocks. Coastal seas of Bangladesh, India, and Pakistan Open defecation is common in south Asia, but human waste is an often overlooked source of nutrient pollution in marine pollution modeling. When nitrogen (N) and phosphorus (P) contributed by human waste was included in models for Bangladesh, India, and Pakistan, the estimated N and P inputs to bodies of water increased one to two orders of magnitude compared to previous models. River export of nutrients to coastal seas increases coastal eutrophication potential (ICEP). The ICEP of the Godavari River is three times higher when N and P inputs from human waste are included. See also Brevetoxin Ciguatera Cyanobacterial bloom Cyanotoxin GEOHAB - an international research programme on the Global Ecology and Oceanography of Harmful algal blooms Milky seas effect – A phenomenon in which disturbed red algae dinoflagellates will make the water glow blue, at night Pfiesteria Thin layers (oceanography) Water quality Water security References External links International Society for the Study of Harmful Algae (ISSHA) FAQ about Harmful Algal Blooms (NOAA) Harmful Algal Blooms Observing System (NOAA/HAB-OFS) GEOHAB: The International IOC-SCOR Research Programme on the Global Ecology and Oceanography of Harmful Algal Blooms Biological oceanography Aquatic ecology Fishing industry Water quality indicators Human impact on the environment Agriculture and the environment Climate change and the environment Water pollution Algal blooms Red tide Dinoflagellate biology Fisheries science
Harmful algal bloom
[ "Chemistry", "Biology", "Environmental_science" ]
14,449
[ "Algae", "Water treatment", "Water pollution", "Water quality indicators", "Ecosystems", "Aquatic ecology", "Algal blooms" ]