Dataset columns: id (int64, 39 to 79M), url (string, lengths 31–227), text (string, lengths 6–334k), source (string, lengths 1–150), categories (list, lengths 1–6), token_count (int64, 3 to 71.8k), subcategories (list, lengths 0–30).
7,330,494
https://en.wikipedia.org/wiki/Hepatocyte%20growth%20factor
Hepatocyte growth factor (HGF) or scatter factor (SF) is a paracrine cellular growth, motility and morphogenic factor. It is secreted by mesenchymal cells and acts primarily upon epithelial cells and endothelial cells, but also acts on haemopoietic progenitor cells and T cells. It has been shown to have a major role in embryonic organ development, specifically in myogenesis, in adult organ regeneration, and in wound healing. Function Hepatocyte growth factor regulates cell growth, cell motility, and morphogenesis by activating a tyrosine kinase signaling cascade after binding to the proto-oncogenic c-Met receptor. Hepatocyte growth factor is secreted by platelets and mesenchymal cells, and acts as a multi-functional cytokine on cells of mainly epithelial origin. Its ability to stimulate mitogenesis, cell motility, and matrix invasion gives it a central role in angiogenesis, tumorigenesis, and tissue regeneration. Structure It is secreted as a single inactive polypeptide and is cleaved by serine proteases into a 69-kDa alpha-chain and 34-kDa beta-chain. A disulfide bond between the alpha and beta chains produces the active, heterodimeric molecule. The protein belongs to the plasminogen subfamily of S1 peptidases but has no detectable protease activity. Clinical significance Human HGF plasmid DNA therapy of cardiomyocytes is being examined as a potential treatment for coronary artery disease, as well as for the damage that occurs to the heart after myocardial infarction. As well as the well-characterised effects of HGF on epithelial cells, endothelial cells and haemopoietic progenitor cells, HGF also regulates the chemotaxis of T cells into heart tissue. Binding of HGF by c-Met, expressed on T cells, causes the upregulation of c-Met, CXCR3, and CCR4, which in turn imbues them with the ability to migrate into heart tissue. HGF also promotes angiogenesis in ischemic injury. HGF may further serve as a prognostic indicator of chronicity in Chikungunya virus-induced arthralgia: high HGF levels correlate with high rates of recovery. Excessive local expression of HGF in the breasts has been implicated in macromastia. HGF is also importantly involved in normal mammary gland development. HGF has been implicated in a variety of cancers, including those of the lung, pancreas, thyroid, colon, and breast. Increased expression of HGF has been associated with the enhanced and scarless wound healing capabilities of fibroblast cells isolated from the oral mucosa tissue. Circulating plasma levels Plasma from patients with advanced heart failure shows increased levels of HGF, which correlate with a negative prognosis and a high risk of mortality. Circulating HGF has also been identified as a prognostic marker of severity in patients with hypertension, and has been suggested as an early biomarker for the acute phase of bowel inflammation. Pharmacokinetics Exogenous HGF administered by intravenous injection is cleared rapidly from circulation by the liver, with a half-life of approximately 4 minutes. Modulators Dihexa is an orally active, centrally penetrant small-molecule compound that directly binds to HGF and potentiates its ability to activate its receptor, c-Met. It is a strong inducer of neurogenesis and is being studied for the potential treatment of Alzheimer's disease and Parkinson's disease. Interactions Hepatocyte growth factor has been shown to interact with the protein product of the c-Met oncogene, identified as the HGF receptor (HGFR). 
Both overexpression of the Met/HGFR receptor protein and autocrine activation of Met/HGFR by simultaneous expression of the hepatocyte growth factor ligand have been implicated in oncogenesis. Hepatocyte growth factor interacts with the sulfated glycosaminoglycans heparan sulfate and dermatan sulfate. The interaction with heparan sulfate allows hepatocyte growth factor to form a complex with c-Met that is able to transduce intracellular signals leading to cell division and cell migration. See also Epidermal growth factor Insulin-like growth factor 1 Epithelial–mesenchymal transition Madin-Darby Canine Kidney Cells References Further reading External links Hepatocyte growth factor on the Atlas of Genetics and Oncology UCSD Signaling Gateway Molecule Page on HGF Growth factors Developmental genes and proteins Cytokines
Hepatocyte growth factor
[ "Chemistry", "Biology" ]
1,017
[ "Growth factors", "Signal transduction", "Cytokines", "Developmental genes and proteins", "Induced stem cells" ]
7,330,539
https://en.wikipedia.org/wiki/Glia%20maturation%20factor
Glia maturation factor is a neurotrophic factor implicated in nervous system development, angiogenesis and immune function. In humans, the glia maturation factor beta and glia maturation factor gamma proteins are encoded by the GMFB and GMFG genes, respectively. The structures of mouse glia maturation factors beta and gamma, solved by both crystallography and NMR, reveal similarities to, and critical differences from, ADF-H (actin depolymerization factor homology) domains, and suggest new means of experimentally addressing the function of this protein family. References See also Growth factors Neurotrophic factors
Glia maturation factor
[ "Chemistry" ]
129
[ "Neurotrophic factors", "Neurochemistry", "Growth factors", "Signal transduction" ]
7,330,572
https://en.wikipedia.org/wiki/Apoptosis-inducing%20factor
Apoptosis-inducing factor (AIF) is involved in initiating a caspase-independent pathway of apoptosis (positive intrinsic regulation of apoptosis) by causing DNA fragmentation and chromatin condensation. Apoptosis-inducing factor is a flavoprotein. It also acts as an NADH oxidase. Another AIF function is to regulate the permeability of the mitochondrial membrane upon apoptosis. Normally it is found behind the outer membrane of the mitochondrion and is therefore secluded from the nucleus. However, when the mitochondrion is damaged, it moves to the cytosol and to the nucleus. Inactivation of AIF leads to resistance of embryonic stem cells to death following the withdrawal of growth factors, indicating that it is involved in apoptosis. Function Apoptosis-inducing factor (AIF) is a protein that triggers chromatin condensation and DNA fragmentation in a cell in order to induce programmed cell death. The mitochondrial AIF protein was found to be a caspase-independent death effector that can allow independent nuclei to undergo apoptotic changes. The process triggering apoptosis starts when the mitochondrion releases AIF, which exits through the mitochondrial membrane, enters the cytosol, and moves to the nucleus of the cell, where it signals the cell to condense its chromosomes and fragment its DNA molecules in order to prepare for cell death. Recently, researchers have discovered that the activity of AIF depends on the type of cell, the apoptotic insult, and its DNA-binding ability. AIF also plays a significant role in the mitochondrial respiratory chain and metabolic redox reactions. Synthesis The human AIF gene spans 16 exons on the X chromosome. AIF1 (the most abundant type of AIF) is translated in the cytosol and recruited to the mitochondrial membrane and intermembrane space by its N-terminal mitochondrial localization signal (MLS). Inside the mitochondrion, AIF folds into its functional configuration with the help of the cofactor flavin adenine dinucleotide (FAD). A protein called Scythe (BAT3), which is used to regulate organogenesis, can increase the AIF lifetime in the cell. As a result, decreased amounts of Scythe lead to a quicker fragmentation of AIF. The X-linked inhibitor of apoptosis (XIAP) can influence the half-life of AIF along with Scythe. Together, the two do not affect the AIF attached to the inner mitochondrial membrane; however, they influence the stability of AIF once it exits the mitochondrion. Role in mitochondria It was thought that a recombinant version of AIF lacking the first 120 N-terminal amino acids of the protein would function as an NADH and NADPH oxidase. However, it was instead discovered that recombinant AIF lacking the last 100 N-terminal amino acids has limited NADH and NADPH oxidase activity. Therefore, researchers concluded that the AIF N-terminus may function in interactions with other proteins or control AIF redox reactions and substrate specificity. Mutations of AIF due to deletions have stimulated the creation of the mouse model of complex I deficiency. Complex I deficiency is the reason behind over thirty percent of human mitochondrial diseases. For example, complex I mitochondriopathies mostly affect infants, causing symptoms such as seizures, blindness, and deafness. These AIF-deficient mouse models are important tools for studying and eventually correcting complex I deficiencies. 
The identification of AIF-interacting proteins in the inner mitochondrial membrane and intermembrane space will help researchers identify the mechanism of the signalling pathway that monitors the function of AIF in the mitochondria. Isozymes Human genes encoding apoptosis-inducing factor isozymes include: AIFM1 AIFM2 AIFM3 Evolution The apoptotic function of AIF has been shown in a range of eukaryotes, including the human factors mentioned above (AIFM1, AIFM2, and AIFM3; Xie et al., 2005), the yeast factors NDI1 and AIF1, and the AIF of Tetrahymena. Phylogenetic analysis indicates that the divergence of the AIFM1, AIFM2, AIFM3, and NDI sequences occurred before the divergence of eukaryotes. Role in cancer Despite its involvement in cell death, AIF plays a contributory role in the growth and aggressiveness of a variety of cancer types, including colorectal, prostate, and pancreatic cancers, through its NADH oxidase activity. AIF enzymatic activity regulates metabolism but can also increase ROS levels, promoting oxidative stress-activated signaling molecules including the MAPKs. AIF-mediated redox signaling promotes the activation of JNK1, which in turn can trigger the cadherin switch. See also Apoptosis Parthanatos References External links Programmed cell death Cell signaling Apoptosis
Apoptosis-inducing factor
[ "Chemistry", "Biology" ]
1,045
[ "Senescence", "Programmed cell death", "Apoptosis", "Signal transduction" ]
7,330,598
https://en.wikipedia.org/wiki/Chimerin%201
Chimerin 1 (CHN1), also known as alpha-1-chimerin or n-chimerin, is a protein which in humans is encoded by the CHN1 gene. Chimerin 1 is a GTPase-activating protein specific for RAC GTP-binding proteins. It is expressed primarily in the brain and may be involved in signal transduction. This gene encodes a GTPase-activating protein for p21-rac and a phorbol ester receptor. It plays an important role in ocular motor axon pathfinding. Function CHN1 is a three-domain protein with an N-terminal SH2 domain, a C-terminal RhoGAP domain and a central C1 domain similar to that of protein kinase C. When the lipid diacylglycerol (DAG) binds to the C1 domain, CHN1 is transferred to the plasma membrane and negatively regulates the Rho-family small GTPases RAC1 and CDC42, thus causing morphological change of axons by pruning the ends of axon dendrites. Mutational analysis suggests that non-overlapping residues of the RhoGAP domain are involved in RAC1 binding and in RAC1-GAP activity. Regulation of the RhoGAP activity of CHN1 by phorbol esters, natural compounds that mimic the lipid second messenger DAG, presents a possible way of designing therapeutic agents. Clinical significance Heterozygous missense mutations in this gene cause Duane's retraction syndrome 2 (DURS2). References External links GeneReviews/NCBI/NIH/UW entry on Duane syndrome GTP-binding protein regulators Proteins
Chimerin 1
[ "Chemistry" ]
356
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
7,330,616
https://en.wikipedia.org/wiki/Son%20of%20Sevenless
In cell signalling, Son of Sevenless (SOS) refers to a set of genes encoding guanine nucleotide exchange factors that act on the Ras subfamily of small GTPases. History and name The gene was so named because the Sos protein that it encoded was found to operate downstream of the sevenless gene in Drosophila melanogaster in a Ras/MAP kinase pathway. When sevenless is mutated or otherwise dysfunctional during development of the fly's ultraviolet light-sensitive compound eye, the seventh, central photoreceptor (R7) of each ommatidium fails to form. Similarly, the mammalian orthologues of Sos, SOS1 and SOS2, function downstream of many growth factor and adhesion receptors. Function Ras-GTPases act as molecular switches that bind to downstream effectors, such as the protein kinase c-Raf, and localise them to the membrane, resulting in their activation. Ras-GTPases are considered inactive when bound to guanosine diphosphate (GDP), and active when bound to guanosine triphosphate (GTP). As the name implies, Ras-GTPases possess intrinsic enzymatic activity that hydrolyses GTP to GDP and phosphate. Thus, upon binding to GTP, the duration of Ras-GTPase activity depends on the rate of hydrolysis. SOS (and other guanine nucleotide exchange factors) act by binding Ras-GTPases and forcing them to release their bound nucleotide (usually GDP). Once released from SOS, the Ras-GTPase quickly binds fresh guanine nucleotide from the cytosol. Since GTP is roughly ten times more abundant than GDP in the cytosol, this usually results in Ras activation. The normal rate of Ras catalytic GTPase (GTP hydrolysis) activity can be increased by proteins of the RasGAP family, which bind to Ras and increase its catalytic rate by a factor of one thousand, in effect increasing the rate at which Ras is inactivated. Genetic diseases associated with SOS1 Dominant mutant alleles of SOS1 have recently been found to cause Noonan syndrome and hereditary gingival fibromatosis type 1. Noonan syndrome has also been shown to be caused by mutations in the KRAS and PTPN11 genes. A common feature of these genes is that their products have all been strongly implicated as positive regulators of the Ras/MAP kinase signal transduction pathway. Therefore, it is thought that dysregulation of this pathway during development is responsible for many of the clinical features of this syndrome. Noonan syndrome mutations in SOS1 are distributed in clusters positioned throughout the SOS1 coding region. Biochemically, these mutations have been shown to cause similar aberrant activation of the catalytic domain towards Ras-GTPases. This may be explained by the fact that the SOS1 protein adopts an auto-inhibited conformation dependent on multiple domain-to-domain interactions that cooperate to block access of the SOS1 catalytic core to its Ras-GTPase targets. The mutations that cause Noonan syndrome thus appear to perturb intramolecular interactions necessary for SOS1 auto-inhibition. In this way these mutations are thought to create SOS1 alleles encoding hyper-activated and dysregulated variants of the protein. References External links ExPASy Proteomics Server: SOS GTP-binding protein regulators Proteins
Son of Sevenless
[ "Chemistry" ]
722
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
7,330,651
https://en.wikipedia.org/wiki/Neurocalcin
Neurocalcin is a neuronal calcium-binding protein that belongs to the neuronal calcium sensor (NCS) family of proteins. It is expressed in mammalian brains, and it possesses a Ca2+/myristoyl switch. The subclass of neurocalcins are brain-specific proteins that fit into the EF-hand superfamily of calcium-binding proteins. The NCS family was defined by the photoreceptor cell-specific protein recoverin. Neurocalcin was purified from bovine brain using calcium-dependent drug affinity chromatography. The amino acid sequence showed that neurocalcin has three functional calcium-binding sites. It is expressed in the central nervous system, retina, and adrenal gland. Given this unique pattern of expression, it is thought that neurocalcin serves a physiological role distinct from that of the similar proteins visinin and recoverin. Neurocalcin delta, an isoform of neurocalcin, is known to regulate adult neurogenesis (Upadhyay et al., 2019). External links NCS proteins Protein Database Page References Proteins
Neurocalcin
[ "Chemistry" ]
229
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
7,330,660
https://en.wikipedia.org/wiki/Numerical%20linear%20algebra
Numerical linear algebra, sometimes called applied linear algebra, is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to questions in continuous mathematics. It is a subfield of numerical analysis, and a type of linear algebra. Computers use floating-point arithmetic and cannot exactly represent irrational data, so when a computer algorithm is applied to a matrix of data, it can sometimes increase the difference between a number stored in the computer and the true number that it is an approximation of. Numerical linear algebra uses properties of vectors and matrices to develop computer algorithms that minimize the error introduced by the computer, and is also concerned with ensuring that the algorithm is as efficient as possible. Numerical linear algebra aims to solve problems of continuous mathematics using finite precision computers, so its applications to the natural and social sciences are as vast as the applications of continuous mathematics. It is often a fundamental part of engineering and computational science problems, such as image and signal processing, telecommunication, computational finance, materials science simulations, structural biology, data mining, bioinformatics, and fluid dynamics. Matrix methods are particularly used in finite difference methods, finite element methods, and the modeling of differential equations. Noting the broad applications of numerical linear algebra, Lloyd N. Trefethen and David Bau, III argue that it is "as fundamental to the mathematical sciences as calculus and differential equations", even though it is a comparatively small field. Because many properties of matrices and vectors also apply to functions and operators, numerical linear algebra can also be viewed as a type of functional analysis which has a particular emphasis on practical algorithms. Common problems in numerical linear algebra include obtaining matrix decompositions like the singular value decomposition, the QR factorization, the LU factorization, or the eigendecomposition, which can then be used to answer common linear algebraic problems like solving linear systems of equations, locating eigenvalues, or least squares optimisation. Numerical linear algebra's central goal of developing algorithms that do not introduce errors when applied to real data on a finite precision computer is often achieved by iterative methods rather than direct ones. History Numerical linear algebra was developed by computer pioneers like John von Neumann, Alan Turing, James H. Wilkinson, Alston Scott Householder, George Forsythe, and Heinz Rutishauser, in order to apply the earliest computers to problems in continuous mathematics, such as ballistics problems and the solutions to systems of partial differential equations. The first serious attempt to minimize computer error in the application of algorithms to real data is John von Neumann and Herman Goldstine's work in 1947. The field has grown as technology has increasingly enabled researchers to solve complex problems on extremely large high-precision matrices, and some numerical algorithms have grown in prominence as technologies like parallel computing have made them practical approaches to scientific problems. Matrix decompositions Partitioned matrices For many problems in applied linear algebra, it is useful to adopt the perspective of a matrix as being a concatenation of column vectors. 
For example, when solving the linear system Ax = b, rather than understanding x as the product of A⁻¹ with b, it is helpful to think of x as the vector of coefficients in the linear expansion of b in the basis formed by the columns of A. Thinking of matrices as a concatenation of columns is also a practical approach for the purposes of matrix algorithms. This is because matrix algorithms frequently contain two nested loops: one over the columns of a matrix A, and another over the rows of A. For example, for an m × n matrix A and vectors x of length n and y of length m, we could use the column partitioning perspective to compute y := Ax + y as

for q = 1:n
    for p = 1:m
        y(p) = A(p,q)*x(q) + y(p)
    end
end

Singular value decomposition The singular value decomposition of a matrix A is A = UΣV*, where U and V are unitary and Σ is diagonal. The diagonal entries of Σ are called the singular values of A. Because singular values are the square roots of the eigenvalues of A*A, there is a tight connection between the singular value decomposition and eigenvalue decompositions. This means that most methods for computing the singular value decomposition are similar to eigenvalue methods; perhaps the most common method involves Householder procedures. QR factorization The QR factorization of an m × n matrix A consists of an m × m matrix Q and an m × n matrix R so that A = QR, where Q is orthogonal and R is upper triangular. The two main algorithms for computing QR factorizations are the Gram–Schmidt process and the Householder transformation. The QR factorization is often used to solve linear least-squares problems, and eigenvalue problems (by way of the iterative QR algorithm). LU factorization An LU factorization of a matrix A consists of a lower triangular matrix L and an upper triangular matrix U so that A = LU. The matrix U is found by an upper triangularization procedure which involves left-multiplying A by a series of matrices M_1, …, M_k to form the product M_k ⋯ M_1 A = U, so that equivalently A = LU with L = M_1⁻¹ ⋯ M_k⁻¹. Eigenvalue decomposition The eigenvalue decomposition of a matrix A is A = XΛX⁻¹, where the columns of X are the eigenvectors of A, and Λ is a diagonal matrix whose diagonal entries are the corresponding eigenvalues of A. There is no direct method for finding the eigenvalue decomposition of an arbitrary matrix. Because it is not possible to write a program that finds the exact roots of an arbitrary polynomial in finite time, any general eigenvalue solver must necessarily be iterative. Algorithms Gaussian elimination From the numerical linear algebra perspective, Gaussian elimination is a procedure for factoring a matrix A into its LU factorization, which Gaussian elimination accomplishes by left-multiplying A by a succession of matrices L_k until U is upper triangular and L is lower triangular, where L = L_1⁻¹ ⋯ L_{m−1}⁻¹. Naive programs for Gaussian elimination are notoriously highly unstable, and produce huge errors when applied to matrices with many significant digits. The simplest solution is to introduce pivoting, which produces a modified Gaussian elimination algorithm that is stable. Solutions of linear systems Numerical linear algebra characteristically approaches matrices as a concatenation of column vectors. In order to solve the linear system Ax = b, the traditional algebraic approach is to understand x as the product of A⁻¹ with b. Numerical linear algebra instead interprets x as the vector of coefficients of the linear expansion of b in the basis formed by the columns of A. 
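All four decompositions above are available in standard numerical libraries. As a minimal illustrative sketch (assuming a Python environment with NumPy and SciPy, which the article does not itself specify), the following computes each factorization of a small random matrix and verifies the reconstruction numerically:

# Sketch: compute the four decompositions discussed above and check that
# each product reconstructs A to rounding error.
import numpy as np
from scipy.linalg import lu

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

U, s, Vh = np.linalg.svd(A)    # A = U @ diag(s) @ Vh, singular values in s
Q, R = np.linalg.qr(A)         # A = Q @ R, Q orthogonal, R upper triangular
P, L, Uf = lu(A)               # A = P @ L @ Uf, with row pivoting via P
w, X = np.linalg.eig(A)        # A = X @ diag(w) @ X^-1, possibly complex

for name, B in [("SVD", U @ np.diag(s) @ Vh),
                ("QR",  Q @ R),
                ("LU",  P @ L @ Uf),
                ("eig", (X @ np.diag(w) @ np.linalg.inv(X)).real)]:
    print(name, np.allclose(A, B))   # expect True for all four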
Many different decompositions can be used to solve the linear problem, depending on the characteristics of the matrix A and the vectors x and b, which may make one factorization much easier to obtain than others. If A = QR is a QR factorization of A, then equivalently Rx = Q*b. This is about as easy to compute as the matrix factorization itself. If A = XΛX⁻¹ is an eigendecomposition of A, and we seek to find b so that b = Ax, with b′ = X⁻¹b and x′ = X⁻¹x, then we have b′ = Λx′. This is closely related to the solution to the linear system using the singular value decomposition, because the singular values of a matrix are the square roots of the eigenvalues of the Gram matrix A*A (and, for normal matrices, the absolute values of its eigenvalues). And if A = LU is an LU factorization of A, then Ax = b can be solved with the triangular systems Ly = b and Ux = y by forward and back substitution. Least squares optimisation Matrix decompositions suggest a number of ways to minimize the residual r = b − Ax, as in the regression problem. The QR approach solves this problem by computing the reduced QR factorization A = Q̂R̂ and rearranging to obtain R̂x = Q̂*b. This upper triangular system can then be solved for x. The SVD also suggests an algorithm for obtaining linear least squares. By computing the reduced SVD decomposition A = ÛΣ̂V* and then computing the vector Û*b, we reduce the least squares problem to a simple diagonal system. The fact that least squares solutions can be produced by the QR and SVD factorizations means that, in addition to the classical normal equations method for solving least squares problems, these problems can also be solved by methods that include the Gram–Schmidt algorithm and Householder methods. Conditioning and stability Allow that a problem is a function f: X → Y, where X is a normed vector space of data and Y is a normed vector space of solutions. For some data point x in X, the problem is said to be ill-conditioned if a small perturbation in x produces a large change in the value of f(x). We can quantify this by defining a condition number which represents how well-conditioned a problem is, defined as the relative condition number κ̂ = lim(δ→0) sup(‖δx‖ ≤ δ) [(‖δf‖/‖f(x)‖) / (‖δx‖/‖x‖)]. Instability is the tendency of computer algorithms, which depend on floating-point arithmetic, to produce results that differ dramatically from the exact mathematical solution to a problem. When a matrix contains real data with many significant digits, many algorithms for solving problems like linear systems of equations or least squares optimisation may produce highly inaccurate results. Creating stable algorithms for ill-conditioned problems is a central concern in numerical linear algebra. One example is that the stability of Householder triangularization makes it a particularly robust solution method for linear systems, whereas the instability of the normal equations method for solving least squares problems is a reason to favour matrix decomposition methods like using the singular value decomposition. Some matrix decomposition methods may be unstable, but have straightforward modifications that make them stable; one example is the unstable Gram–Schmidt, which can easily be changed to produce the stable modified Gram–Schmidt. Another classical problem in numerical linear algebra is the finding that Gaussian elimination is unstable, but becomes stable with the introduction of pivoting.
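To make the least squares and conditioning discussion concrete, here is a small hedged sketch (again assuming NumPy; the matrix sizes and data are arbitrary illustrations) that solves a least squares problem through the reduced QR factorization and shows the conditioning argument against the normal equations, namely that forming A*A squares the condition number:

# Sketch: least squares via reduced QR, checked against NumPy's lstsq,
# plus the conditioning argument against the normal equations.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 3))        # 100 equations, 3 unknowns
b = rng.standard_normal(100)

Q, R = np.linalg.qr(A, mode="reduced")   # A = QR with Q 100x3, R 3x3
x_qr = np.linalg.solve(R, Q.T @ b)       # rearranged system R x = Q* b

x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.allclose(x_qr, x_ref))          # True: both minimize ||b - Ax||

# cond(A^T A) is roughly cond(A)^2, which is why the QR and SVD routes
# are preferred when A is ill-conditioned.
print(np.linalg.cond(A), np.linalg.cond(A.T @ A))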
Iterative methods There are two reasons that iterative algorithms are an important part of numerical linear algebra. First, many important numerical problems have no direct solution; in order to find the eigenvalues and eigenvectors of an arbitrary matrix, we can only adopt an iterative approach. Second, noniterative algorithms for an arbitrary m × m matrix require O(m³) time, which is a surprisingly high floor given that matrices contain only m² numbers. Iterative approaches can take advantage of several features of some matrices to reduce this time. For example, when a matrix is sparse, an iterative algorithm can skip many of the steps that a direct approach would necessarily follow, even if they are redundant steps given a highly structured matrix. The core of many iterative methods in numerical linear algebra is the projection of a matrix onto a lower dimensional Krylov subspace, which allows features of a high-dimensional matrix to be approximated by iteratively computing the equivalent features of similar matrices starting in a low dimension space and moving to successively higher dimensions. When A is symmetric and we wish to solve the linear problem Ax = b, the classical iterative approach is the conjugate gradient method (see the compact sketch at the end of this entry). If A is not symmetric, then examples of iterative solutions to the linear problem are the generalized minimal residual method and CGN. If A is symmetric, then to solve the eigenvalue and eigenvector problem we can use the Lanczos algorithm, and if A is non-symmetric, then we can use Arnoldi iteration. Software Several programming languages use numerical linear algebra optimisation techniques and are designed to implement numerical linear algebra algorithms. These languages include MATLAB, Analytica, Maple, and Mathematica. Other programming languages which are not explicitly designed for numerical linear algebra have libraries that provide numerical linear algebra routines and optimisation; C and Fortran have packages like Basic Linear Algebra Subprograms and LAPACK, Python has the library NumPy, and Perl has the Perl Data Language. Many numerical linear algebra commands in R rely on these more fundamental libraries like LAPACK. More libraries can be found on the List of numerical libraries. References Further reading External links Freely available software for numerical algebra on the web, composed by Jack Dongarra and Hatem Ltaief, University of Tennessee NAG Library of numerical linear algebra routines (Research group in the United Kingdom) (Activity group about numerical linear algebra in the Society for Industrial and Applied Mathematics) The GAMM Activity Group on Applied and Numerical Linear Algebra Computational fields of study
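As referenced above, here is a compact textbook sketch of the conjugate gradient method for symmetric positive definite systems (illustrative only; the function name and test matrix are invented for this example, and production code would normally call a library routine such as scipy.sparse.linalg.cg):

# Textbook conjugate gradient for symmetric positive definite A;
# a didactic sketch, not a production solver.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, maxiter=1000):
    x = np.zeros_like(b)
    r = b - A @ x              # initial residual
    p = r.copy()               # initial search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)  # exact step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # next A-conjugate direction
        rs = rs_new
    return x

M = np.random.default_rng(3).standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)      # symmetric positive definite test matrix
b = np.ones(50)
print(np.allclose(A @ conjugate_gradient(A, b), b))   # expect True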
Numerical linear algebra
[ "Technology" ]
2,455
[ "Computational fields of study", "Computing and society" ]
7,330,787
https://en.wikipedia.org/wiki/Quasi-periodic%20oscillation%20%28astronomy%29
In X-ray astronomy, quasi-periodic oscillation (QPO) is the manner in which the X-ray light from an astronomical object flickers about certain frequencies. In these situations, the X-rays are emitted near the inner edge of an accretion disk in which gas swirls onto a compact object such as a white dwarf, neutron star, or black hole. The QPO phenomenon promises to help astronomers understand the innermost regions of accretion disks and the masses, radii, and spin periods of white dwarfs, neutron stars, and black holes. QPOs could help test Albert Einstein's theory of general relativity, which makes predictions that differ most from those of Newtonian gravity when the gravitational force is strongest or when rotation is fastest (when a phenomenon called the Lense–Thirring effect comes into play). However, the various explanations of QPOs remain controversial and the conclusions reached from their study remain provisional. A QPO is identified by performing a power spectrum of the time series of the X-rays. A constant level of white noise is expected from the random variation of sampling the object's light. Systems that show QPOs sometimes also show nonperiodic noise that appears as a continuous curve in the power spectrum. A periodic pulsation appears in the power spectrum as a peak of power at exactly one frequency (a Dirac delta function given a long enough observation). A QPO, on the other hand, appears as a broader peak, sometimes with a Lorentzian shape. What sort of variation with time could cause a QPO? For example, the power spectrum of an oscillating shot appears as a continuum of noise together with a QPO. An oscillating shot is a sinusoidal variation that starts suddenly and decays exponentially. A scenario in which oscillating shots cause the observed QPOs could involve "blobs" of gas in orbit around a rotating, weakly magnetized neutron star. Each time a blob comes near a magnetic pole, more gas accretes and the X-rays increase. At the same time, the blob's mass decreases so that the oscillation decays. Often power spectra are formed from several time intervals and then added together before the QPO can be seen to be statistically significant. History QPOs were first identified in white dwarf systems and then in neutron star systems. At first the neutron star systems found to have QPOs were of a class (Z sources and atoll sources) not known to have pulsations. The spin periods of these neutron stars were unknown as a result. These neutron stars are thought to have relatively low magnetic fields, so the gas does not fall mostly onto their magnetic poles, as in accreting pulsars. Because their magnetic fields are so low, the accretion disk can get very close to the neutron star before being disrupted by the magnetic field. The spectral variability of these neutron stars was seen to correspond to changes in the QPOs. Typical QPO frequencies were found to be between about 1 and 60 Hz. The fastest oscillations were found in a spectral state called the Horizontal Branch, and were thought to be a result of the combined rotation of the matter in the disk and the rotation of the collapsed star (the "beat frequency model"). During the Normal Branch and Flaring Branch, the star was thought to approach its Eddington luminosity, at which the force of the radiation could repel the accreting gas. This could give rise to a completely different kind of oscillation. 
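As a hedged illustration of the identification method described above (a toy construction of my own, not from the article: the 5 Hz shot frequency, decay time, and count rates are arbitrary choices), one can simulate oscillating shots on a Poisson-noise light curve and see the broadened QPO peak appear in the periodogram:

# Sketch: simulate "oscillating shots" (sinusoids that start at random times
# and decay exponentially) and compute the periodogram. Counting-statistics
# white noise is flat, a strictly periodic signal would be a sharp spike,
# and the decaying shots should produce a broad peak near f_qpo.
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                            # time resolution, s
t = np.arange(0, 100, dt)            # 100 s light curve
rate = np.full_like(t, 100.0)        # mean count rate, counts/s

f_qpo, tau = 5.0, 0.5                # shot frequency (Hz) and decay time (s)
for t0 in rng.uniform(0, 100, size=200):    # 200 random shot start times
    m = t >= t0
    rate[m] += 20 * np.exp(-(t[m] - t0) / tau) * np.sin(2*np.pi*f_qpo*(t[m] - t0))

counts = rng.poisson(np.clip(rate, 0, None) * dt)   # Poisson-sampled photons

power = np.abs(np.fft.rfft(counts - counts.mean()))**2
freqs = np.fft.rfftfreq(len(counts), d=dt)
print("periodogram peak near %.1f Hz" % freqs[1:][np.argmax(power[1:])])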
Observations starting in 1996 with the Rossi X-ray Timing Explorer could detect faster variability, and it was found that neutron stars and black holes emit X-rays that have QPOs with frequencies up to 1000 Hz or so. Often "twin peak" QPOs were found in which two oscillations of roughly the same power appeared at high amplitudes. These higher frequency QPOs may show behavior related to that of the lower frequency QPOs. Measuring black holes QPOs can be used to determine the mass of black holes. The technique uses a relationship between black holes and the inner part of their surrounding disks, where gas spirals inward before reaching the event horizon. The hot gas piles up near the black hole and radiates a torrent of X-rays, with an intensity that varies in a pattern that repeats itself over a nearly regular interval. This signal is the QPO. Astronomers have long suspected that a QPO's frequency depends on the black hole's mass. The congestion zone lies close in for small black holes, so the QPO clock ticks quickly. As black holes increase in mass, the congestion zone is pushed farther out, so the QPO clock ticks slower and slower. See also Broad iron K line Quasiperiodicity Neutron-star oscillation References X-rays Observational astronomy
Quasi-periodic oscillation (astronomy)
[ "Physics", "Astronomy" ]
1,014
[ "X-rays", "Spectrum (physical sciences)", "Observational astronomy", "Electromagnetic spectrum", "Astronomical sub-disciplines" ]
7,330,822
https://en.wikipedia.org/wiki/Neuropilin
Neuropilin is a protein receptor active in neurons. There are two forms of neuropilin, NRP-1 and NRP-2. Neuropilins are transmembrane glycoproteins, first documented to regulate neurogenesis and angiogenesis by complexing with Plexin receptors/class-3 semaphorin ligands and Vascular Endothelial Growth Factor (VEGF) receptors/VEGF ligands, respectively. Neuropilins predominantly act as co-receptors, as they have a very small cytoplasmic domain and thus rely upon other cell surface receptors to transduce their signals across a cell membrane. Recent studies have shown that neuropilins are multifunctional and can partner with a wide variety of transmembrane receptors. Neuropilins are therefore associated with numerous signalling pathways, including those activated by Epidermal Growth Factor (EGF), Fibroblast Growth Factor (FGF), Hepatocyte Growth Factor (HGF), Insulin-like Growth Factor (IGF), Platelet Derived Growth Factor (PDGF) and Transforming Growth Factor beta (TGFβ). Although neuropilins are commonly found at the cell surface, they have also been reported within the mitochondria and nucleus. Both neuropilin family members can also be found in soluble forms created by alternative splicing or by ectodomain shedding from the cell surface. The pleiotropic nature of the NRP receptors results in their involvement in cellular processes such as axon guidance and angiogenesis, the immune response and remyelination. Therefore, dysregulation of NRP activity has been implicated in many pathological conditions, including many types of cancer and cardiovascular disease. Applications Neuropilin-1 is a therapeutic target in the treatment of leukemia and lymphoma, since increased expression of neuropilin-1 has been shown in leukemia and lymphoma cell lines. Also, antagonism of neuropilin-1 has been found to inhibit tumour cell migration and adhesion. Structure Neuropilins contain the following four domains: N-terminal CUB domain (for complement C1r/C1s, Uegf, Bmp1) Coagulation factor 5/8 type, C-terminal (discoidin domain) MAM domain (for meprin, A-5 protein, and receptor protein-tyrosine phosphatase mu) C-terminal neuropilin The structure of the B1 domain (coagulation factor 5/8 type) of neuropilin-1 was determined through X-ray diffraction at a resolution of 2.90 Å. The secondary structure of this domain is 5% alpha helical and 46% beta sheet. References External links Transmembrane receptors Single-pass transmembrane proteins
Neuropilin
[ "Chemistry" ]
607
[ "Transmembrane receptors", "Signal transduction" ]
7,330,887
https://en.wikipedia.org/wiki/Myogenin
Myogenin is a transcriptional activator encoded by the MYOG gene. Myogenin is a muscle-specific basic helix-loop-helix (bHLH) transcription factor involved in the coordination of skeletal muscle development, or myogenesis, and repair. Myogenin is a member of the MyoD family of transcription factors, which also includes MyoD, Myf5, and MRF4. In mice, myogenin is essential for the development of functional skeletal muscle. Myogenin is required for the proper differentiation of most myogenic precursor cells during the process of myogenesis. When the DNA coding for myogenin was knocked out of the mouse genome, severe skeletal muscle defects were observed. Mice lacking both copies of myogenin (homozygous-null) suffer from perinatal lethality due to the lack of mature secondary skeletal muscle fibers throughout the body. In cell culture, myogenin can induce myogenesis in a variety of non-muscle cell types. Interactions Myogenin has been shown to interact with: MDFI, POLR2C, Serum response factor, Sp1 transcription factor, and TCF3. References Further reading External links Gene expression Human proteins
Myogenin
[ "Chemistry", "Biology" ]
249
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
7,330,907
https://en.wikipedia.org/wiki/Twist-related%20protein%201
Twist-related protein 1 (TWIST1), also known as class A basic helix–loop–helix protein 38 (bHLHa38), is a basic helix-loop-helix transcription factor that in humans is encoded by the TWIST1 gene. Function Basic helix-loop-helix (bHLH) transcription factors have been implicated in cell lineage determination and differentiation. The protein encoded by this gene is a bHLH transcription factor and shares similarity with another bHLH transcription factor, Dermo1 (a.k.a. TWIST2). The strongest expression of this mRNA is in placental tissue; in adults, mesodermally derived tissues express this mRNA preferentially. Twist1 is thought to regulate the osteogenic lineage. Clinical significance Mutations in the TWIST1 gene are associated with Saethre–Chotzen syndrome, breast cancer, and Sézary syndrome. Craniosynostosis TWIST1 mutations are involved in a number of craniosynostosis presentations. They can present in nonsyndromic forms (isolated scaphocephaly, right unicoronal synostosis, and turricephaly), but also in syndromic forms such as: Acrocephalosyndactyly type 1 (Apert syndrome) (primary FGFR2) Beare-Stevenson cutis gyrata syndrome (primary FGFR2) Crouzon syndrome (primary FGFR2) Crouzon syndrome-acanthosis nigricans syndrome (primary FGFR3) Jackson-Weiss syndrome (primary FGFR1 or FGFR2) Muenke syndrome (primary FGFR3) Pfeiffer syndrome (primary FGFR1 or FGFR2) As an oncogene Twist plays an essential role in cancer metastasis. Over-expression of Twist or methylation of its promoter is common in metastatic carcinomas. Hence targeting Twist holds great promise as a cancer therapeutic. In cooperation with N-Myc, Twist-1 acts as an oncogene in several cancers including neuroblastoma. Twist is activated by a variety of signal transduction pathways, including Akt, signal transducer and activator of transcription 3 (STAT3), mitogen-activated protein kinase, Ras, and Wnt signaling. Activated Twist upregulates N-cadherin and downregulates E-cadherin, which are the hallmarks of EMT. Moreover, Twist plays an important role in some physiological processes involved in metastasis, like angiogenesis, invadopodia, extravasation, and chromosomal instability. Twist also protects cancer cells from apoptotic cell death. In addition, Twist is responsible for the maintenance of cancer stem cells and the development of chemotherapy resistance. Twist1 is extensively studied for its role in head and neck cancers. In these cancers and in epithelial ovarian cancer, Twist1 has been shown to be involved in evading apoptosis, making the tumour cells resistant to platinum-based chemotherapeutic drugs like cisplatin. Moreover, Twist1 has been shown to be expressed under conditions of hypoxia, corresponding to the observation that hypoxic cells respond less to chemotherapeutic drugs. Another process in which Twist1 is involved is tumour metastasis. The underlying mechanism is not completely understood, but it has been implicated in the upregulation of matrix metalloproteinases and the inhibition of TIMP. Recently, Twist has gained interest as a target for cancer therapeutics. The inactivation of Twist by small interfering RNA or chemotherapeutic approaches has been demonstrated in vitro. Moreover, several inhibitors which are antagonistic to the upstream or downstream molecules of Twist signaling pathways have also been identified. Interactions Twist transcription factor has been shown to interact with EP300, TCF3 and PCAF. 
See also Transcription factor TWIST2 References Further reading External links GeneReviews/NCBI.NIH.UW entry on Saethre–Chotzen syndrome Transcription factors
Twist-related protein 1
[ "Chemistry", "Biology" ]
862
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
7,330,954
https://en.wikipedia.org/wiki/Pattern%20recognition%20%28psychology%29
In psychology and cognitive neuroscience, pattern recognition is a cognitive process that matches information from a stimulus with information retrieved from memory. Pattern recognition occurs when information from the environment is received and entered into short-term memory, causing automatic activation of a specific content of long-term memory. An example of this is learning the alphabet in order. When a carer repeats "A, B, C" multiple times to a child, the child, using pattern recognition, says "C" after hearing "A, B" in order. Recognizing patterns allows anticipation of what is to come. Making the connection between memories and information perceived is a step in pattern recognition called identification. Pattern recognition requires repetition of experience. Semantic memory, which is used implicitly and subconsciously, is the main type of memory involved in recognition. Pattern recognition is crucial not only to humans, but also to other animals. Even koalas, which possess less-developed thinking abilities, use pattern recognition to find and consume eucalyptus leaves. The human brain has developed more, but holds similarities to the brains of birds and lower mammals. The development of neural networks in the outer layer of the brain in humans has allowed for better processing of visual and auditory patterns. Spatial positioning in the environment, remembering findings, and detecting hazards and resources to increase chances of survival are examples of the application of pattern recognition for humans and animals. There are six main theories of pattern recognition: template matching, prototype-matching, feature analysis, recognition-by-components theory, bottom-up and top-down processing, and Fourier analysis. The application of these theories in everyday life is not mutually exclusive. Pattern recognition allows us to read words, understand language, recognize friends, and even appreciate music. Each of the theories applies to various activities and domains where pattern recognition is observed. Facial, music and language recognition, and seriation are a few of such domains. Facial recognition and seriation occur through encoding visual patterns, while music and language recognition use the encoding of auditory patterns. Theories Template matching Template matching theory describes the most basic approach to human pattern recognition. It is a theory that assumes every perceived object is stored as a "template" in long-term memory. Incoming information is compared to these templates to find an exact match. In other words, all sensory input is compared to multiple representations of an object to form one single conceptual understanding. The theory defines perception as a fundamentally recognition-based process. It assumes that everything we see, we understand only through past exposure, which then informs our future perception of the external world. For example, renderings of the letter A in different typefaces are all recognized as the letter A, but not as B. This viewpoint is limited, however, in explaining how new experiences can be understood without being compared to an internal memory template. Prototype matching Unlike the exact, one-to-one, template matching theory, prototype matching instead compares incoming sensory input to one average prototype. This theory proposes that exposure to a series of related stimuli leads to the creation of a "typical" prototype based on their shared features. It reduces the number of stored templates by standardizing them into a single representation. 
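To make the contrast concrete, here is a purely illustrative computational toy (my own construction; the feature vectors and threshold are invented, and this is not a claim about how the brain implements either theory): template matching requires an exact match to a stored instance, while prototype matching accepts any input close enough to the category average.

# Toy contrast between the two theories (illustrative only).
import numpy as np

stored_As = np.array([[1.0, 0.9, 1.1],   # feature vectors for previously
                      [0.9, 1.0, 1.0],   # seen renderings of "A"
                      [1.1, 1.1, 0.9]])

def template_match(x):
    # recognized only if identical to some stored template
    return any(np.array_equal(x, t) for t in stored_As)

prototype_A = stored_As.mean(axis=0)     # one averaged "typical A"

def prototype_match(x, tol=0.3):
    # recognized if sufficiently close to the prototype
    return np.linalg.norm(x - prototype_A) < tol

novel = np.array([1.0, 1.0, 1.0])        # a rendering never seen before
print(template_match(novel))             # False: no exact stored template
print(prototype_match(novel))            # True: close to the average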
The prototype supports perceptual flexibility, because unlike in template matching, it allows for variability in the recognition of novel stimuli. For instance, if a child had never seen a lawn chair before, they would still be able to recognize it as a chair because of their understanding of its essential characteristics as having four legs and a seat. This idea, however, limits the conceptualization of objects that cannot necessarily be "averaged" into one, like types of canines, for instance. Even though dogs, wolves, and foxes are all typically furry, four-legged, moderately sized animals with ears and a tail, they are not all the same, and thus cannot be strictly perceived with respect to the prototype matching theory. Multiple discrimination scaling Template and feature analysis approaches to recognition of objects (and situations) have been merged, reconciled, and arguably overtaken by multiple discrimination theory. This states that the amounts in a test stimulus of each salient feature of a template are recognized in any perceptual judgment as being at a distance in the universal unit of 50% discrimination (the objective performance "JND") from the amount of that feature in the template. Recognition–by–components theory Similar to feature–detection theory, recognition by components (RBC) focuses on the bottom-up features of the stimuli being processed. First proposed by Irving Biederman (1987), this theory states that humans recognize objects by breaking them down into their basic 3D geometric shapes called geons (e.g., cylinders, cubes, and cones). An example is how we break down a common item like a coffee cup: we recognize the hollow cylinder that holds the liquid and a curved handle off the side that allows us to hold it. Even though not every coffee cup is exactly the same, these basic components help us recognize the consistency across examples (or pattern). RBC suggests that there are fewer than 36 unique geons that when combined can form a virtually unlimited number of objects. To parse and dissect an object, RBC proposes we attend to two specific features: edges and concavities. Edges enable the observer to maintain a consistent representation of the object regardless of the viewing angle and lighting conditions. Concavities are where two edges meet and enable the observer to perceive where one geon ends and another begins. The RBC principles of visual object recognition can be applied to auditory language recognition as well. In place of geons, language researchers propose that spoken language can be broken down into basic components called phonemes. For example, there are 44 phonemes in the English language. Top-down and bottom-up processing Top-down processing Top-down processing refers to the use of background information in pattern recognition. It always begins with a person's previous knowledge, and makes predictions based on this already acquired knowledge. Psychologist Richard Gregory estimated that about 90% of the information is lost on the way from the eye to the brain, which is why the brain must guess what the person sees based on past experiences. In other words, we construct our perception of reality, and these perceptions are hypotheses or propositions based on past experiences and stored information. The formation of incorrect propositions will lead to errors of perception such as visual illusions. 
Given a paragraph written with difficult handwriting, it is easier to understand what the writer wants to convey if one reads the whole paragraph rather than reading the words in separate terms. The brain may be able to perceive and understand the gist of the paragraph due to the context supplied by the surrounding words. Bottom-up processing Bottom-up processing is also known as data-driven processing, because it originates with the stimulation of the sensory receptors. Psychologist James Gibson opposed the top-down model and argued that perception is direct, and not subject to hypothesis testing as Gregory proposed. He stated that sensation is perception and there is no need for extra interpretation, as there is enough information in our environment to make sense of the world in a direct way. His theory is sometimes known as the "ecological theory" because of the claim that perception can be explained solely in terms of the environment. An example of bottom-up processing involves presenting a flower at the center of a person's visual field. The sight of the flower and all the information about the stimulus are carried from the retina to the visual cortex in the brain. The signal travels in one direction. Seriation In psychologist Jean Piaget's theory of cognitive development, the third stage is called the Concrete Operational Stage. It is during this stage that the abstract principle of thinking called "seriation" is naturally developed in a child. Seriation is the ability to arrange items in a logical order along a quantitative dimension such as length, weight, age, etc. It is a general cognitive skill which is not fully mastered until after the nursery years. To seriate means to understand that objects can be ordered along a dimension, and to effectively do so, the child needs to be able to answer the question "What comes next?" Seriation skills also help to develop problem-solving skills, which are useful in recognizing and completing patterning tasks. Piaget's work on seriation Piaget studied the development of seriation along with Szeminska in an experiment where they used rods of varying lengths to test children's skills. They found that there were three distinct stages of development of the skill. In the first stage, children around the age of 4 could not arrange the first ten rods in order. They could make smaller groups of 2–4, but could not put all the elements together. In the second stage, where the children were 5–6 years of age, they could succeed in the seriation task with the first ten rods through the process of trial and error. They could insert the other set of rods into order through trial and error. In the third stage, the 7–8-year-old children could arrange all the rods in order without much trial and error. The children used the systematic method of first looking for the smallest rod, and then the smallest among the rest. Development of problem-solving skills To develop the skill of seriation, which then helps advance problem-solving skills, children should be provided with opportunities to arrange things in order using the appropriate language, such as "big" and "bigger" when working with size relationships. They should also be given the chance to arrange objects in order based on texture, sound, flavor and color. Along with specific tasks of seriation, children should be given the chance to compare the different materials and toys they use during play. Through activities like these, the true understanding of characteristics of objects will develop. 
To aid children at a young age, the differences between the objects should be obvious. Lastly, a more complicated task of arranging two different sets of objects and seeing the relationship between the two different sets should also be provided. A common example of this is having children attempt to fit saucepan lids to saucepans of different sizes, or fitting together different sizes of nuts and bolts. Application of seriation in schools To help build up math skills in children, teachers and parents can help them learn seriation and patterning. Young children who understand seriation can put numbers in order from lowest to highest. Eventually, they will come to understand that 6 is higher than 5, and 20 is higher than 10. Similarly, having children copy patterns or create patterns of their own, like ABAB patterns, is a great way to help them recognize order and prepare for later math skills, such as multiplication. Child care providers can begin exposing children to patterns at a very young age by having them make groups and count the total number of objects. Facial pattern recognition Recognizing faces is one of the most common forms of pattern recognition. Humans are extremely effective at remembering faces, but this ease and automaticity belie a very challenging problem. All faces are physically similar. Faces have two eyes, one mouth, and one nose, all in predictable locations, yet humans can recognize a face from several different angles and in various lighting conditions. Neuroscientists posit that recognizing faces takes place in three phases. The first phase starts with visually focusing on the physical features. The facial recognition system then needs to reconstruct the identity of the person from previous experiences. This provides us with the signal that this might be a person we know. The final phase of recognition completes when the face elicits the name of the person. Although humans are great at recognizing faces under normal viewing angles, upside-down faces are tremendously difficult to recognize. This demonstrates not only the challenges of facial recognition but also how humans have specialized procedures and capacities for recognizing faces under normal upright viewing conditions. Neural mechanisms Scientists agree that there is a certain area in the brain specifically devoted to processing faces. This structure is called the fusiform gyrus, and brain imaging studies have shown that it becomes highly active when a subject is viewing a face. Several case studies have reported that patients with lesions or tissue damage localized to this area have tremendous difficulty recognizing faces, even their own. Although most of this research is circumstantial, a study at Stanford University provided conclusive evidence for the fusiform gyrus's role in facial recognition. In a unique case study, researchers were able to send direct signals to a patient's fusiform gyrus. The patient reported that the faces of the doctors and nurses changed and morphed in front of him during this electrical stimulation. Researchers agree this demonstrates a convincing causal link between this neural structure and the human ability to recognize faces. Facial recognition development Although in adults facial recognition is fast and automatic, children do not reach adult levels of performance (in laboratory tasks) until adolescence. Two general theories have been put forth to explain how facial recognition normally develops. 
The first, general cognitive development theory, proposes that the perceptual ability to encode faces is fully developed early in childhood, and that the continued improvement of facial recognition into adulthood is attributed to other general factors. These general factors include improved attentional focus, deliberate task strategies, and metacognition. Research supports the argument that these other general factors improve dramatically into adulthood. Face-specific perceptual development theory argues that the improved facial recognition between children and adults is due to a precise development of facial perception. The cause for this continuing development is proposed to be an ongoing experience with faces. Developmental issues Several developmental issues manifest as a decreased capacity for facial recognition. Using what is known about the role of the fusiform gyrus, research has shown that impaired social development along the autism spectrum is accompanied by a behavioral marker where these individuals tend to look away from faces, and a neurological marker characterized by decreased neural activity in the fusiform gyrus. Similarly, those with developmental prosopagnosia (DP) struggle with facial recognition to the extent they are often unable to identify even their own faces. Many studies report that around 2% of the world's population have developmental prosopagnosia, and that individuals with DP have a family history of the trait. Individuals with DP are behaviorally indistinguishable from those with physical damage or lesions on the fusiform gyrus, again implicating its importance to facial recognition. Even apart from those with DP or neurological damage, there remains large variability in facial recognition ability across the general population. It is unknown what accounts for these differences in facial recognition ability, whether they stem from a biological or an environmental disposition. Recent research analyzing identical and fraternal twins showed that facial recognition was significantly more highly correlated in identical twins, suggesting a strong genetic component to individual differences in facial recognition ability. Language development Pattern recognition in language acquisition Research from Frost et al. (2013) reveals that infant language acquisition is linked to cognitive pattern recognition. In contrast to classical nativist and behavioral theories of language development, scientists now believe that language is a learned skill. Studies at the Hebrew University and the University of Sydney both show a strong correlation between the ability to identify visual patterns and the ability to learn a new language. Children with high shape recognition showed better grammar knowledge, even when controlling for the effects of intelligence and memory capacity. This is supported by the theory that language learning is based on statistical learning, the process by which infants perceive common combinations of sounds and words in language and use them to inform future speech production. Phonological development The first step in infant language acquisition is to decipher between the most basic sound units of their native language. This includes every consonant, every short and long vowel sound, and any additional letter combinations like "th" and "ph" in English. These units, called phonemes, are detected through exposure and pattern recognition. Infants use their "innate feature detector" capabilities to distinguish between the sounds of words. 
They split them into phonemes through a mechanism of categorical perception. Then they extract statistical information by recognizing which combinations of sounds are most likely to occur together, like "qu" or "h" plus a vowel. In this way, their ability to learn words is based directly on the accuracy of their earlier phonetic patterning. Grammar development The transition from phonemic differentiation into higher-order word production is only the first step in the hierarchical acquisition of language. Pattern recognition is furthermore utilized in the detection of prosodic cues, the stress and intonation patterns among words. Then it is applied to sentence structure and the understanding of typical clause boundaries. This entire process is reflected in reading as well. First, a child recognizes patterns of individual letters, then words, then groups of words together, then paragraphs, and finally entire chapters in books. Learning to read and learning to speak a language are based on the "stepwise refinement of patterns" in perceptual pattern recognition. Music pattern recognition Music provides deep and emotional experiences for the listener. These experiences are stored in long-term memory, and every time we hear the same tunes, those stored contents are activated. Recognizing content by the pattern of the music shapes our emotional response. The mechanisms that form the pattern recognition of music and the resulting experience have been studied by multiple researchers. The sensation felt when listening to our favorite music is evident in the dilation of the pupils, the increase in pulse and blood pressure, the streaming of blood to the leg muscles, and the activation of the cerebellum, the brain region associated with physical movement. While retrieving the memory of a tune demonstrates general recognition of musical pattern, pattern recognition also occurs while listening to a tune for the first time. The recurring nature of the metre allows the listener to follow a tune, recognize the metre, expect its upcoming occurrence, and follow the rhythm. The excitement of following a familiar musical pattern comes when the pattern breaks and becomes unpredictable. This following and breaking of a pattern creates a problem-solving opportunity for the mind that forms the experience. Psychologist Daniel Levitin argues that the repetitions, melodic nature and organization of music create meaning for the brain. The brain stores information in an arrangement of neurons which retrieve the same information when activated by the environment. By constantly referencing information and additional stimulation from the environment, the brain constructs musical features into a perceptual whole. The medial prefrontal cortex – one of the last areas affected by Alzheimer's disease – is the region activated by music. Cognitive mechanisms To understand music pattern recognition, we need to understand the underlying cognitive systems that each handle a part of this process. Various activities are at work in this recognition of a piece of music and its patterns. Researchers have begun to unveil the reasons behind the stimulated reactions to music. Montreal-based researchers asked ten volunteers who got "chills" listening to music to listen to their favorite songs while their brain activity was being monitored. The results show the significant role of the nucleus accumbens (NAcc) region – involved with cognitive processes such as motivation, reward, and addiction – in creating the neural arrangements that make up the experience. 
Anticipation before the climax of a tune creates a sense of reward prediction, which resolves when the climax is reached. The longer the listener is denied the expected pattern, the greater the emotional arousal when the pattern returns. Musicologist Leonard Meyer used fifty measures of the fifth movement of Beethoven's String Quartet in C-sharp minor, Op. 131, to examine this notion. The stronger this experience is, the more vivid the memory it will create and store. This strength affects the speed and accuracy of retrieval and recognition of the musical pattern. The brain not only recognizes specific tunes, it also distinguishes among standard acoustic features, speech, and music. MIT researchers conducted a study to examine this ability. The results showed six neural clusters in the auditory cortex responding to the sounds. Four were triggered when hearing standard acoustic features, one specifically responded to speech, and the last exclusively responded to music. Researchers who studied the correlation between the temporal evolution of timbral, tonal and rhythmic features of music came to the conclusion that music engages the brain regions connected to motor actions, emotions and creativity. The research indicates that the whole brain "lights up" when listening to music. This amount of activity boosts memory preservation, and hence pattern recognition. Recognizing patterns of music is different for a musician and a listener. Although a musician may play the same notes every time, the details of the frequency will always be different. The listener will recognize the musical pattern and its type despite the variations. These musical types are conceptual and learned, meaning they might vary culturally. While listeners are involved with recognizing (implicit) musical material, musicians are involved with recalling it (explicit). A UCLA study found that when watching or hearing music being played, neurons associated with the muscles needed for playing the instrument fire. Mirror neurons light up when musicians and non-musicians listen to a piece. Developmental issues Pattern recognition of music can build and strengthen other skills, such as musical synchrony, attentional performance, and engagement with musical notation. Even a few years of musical training enhances memory and attention levels. Scientists at the University of Newcastle conducted a study on patients with severe acquired brain injuries (ABIs) and healthy participants, using popular music to examine music-evoked autobiographical memories (MEAMs). The participants were asked to record their familiarity with the songs, whether they liked them and what memories they evoked. The results showed that the ABI patients had the highest number of MEAMs, and all the participants had MEAMs of a person, people or life period that were generally positive. The participants completed the task by utilizing pattern recognition skills. Memory evocation caused the songs to sound more familiar and well-liked. This research can be beneficial for rehabilitating patients with autobiographical amnesia who have no fundamental deficit in autobiographical recall memory and whose pitch perception is intact. A study at the University of California, Davis mapped the brains of participants while they listened to music. The results showed links between brain regions and the autobiographical memories and emotions activated by familiar music. This study can explain the strong response of patients with Alzheimer's disease to music. 
This research can help such patients with pattern recognition-enhancing tasks. False pattern recognition The human tendency to see patterns that do not actually exist is called apophenia. Examples include the Man in the Moon, faces or figures in shadows, in clouds, and in patterns with no deliberate design, such as the swirls on a baked confection, and the perception of causal relationships between events which are, in fact, unrelated. Apophenia figures prominently in conspiracy theories, gambling, misinterpretation of statistics and scientific data, and some kinds of religious and paranormal experiences. Misperception of patterns in random data is called pareidolia. Recent research in neuroscience and cognitive science suggests understanding 'false pattern recognition' within the paradigm of predictive coding. See also Apophenia Gambler's fallacy Gestalt psychology Pareidolia Synchronicity Thin-slicing Notes References External links nAsagram - A web app for creating anagrams interactively. Pattern recognition Cognition Cognitive psychology Perception Articles containing video clips
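A minimal sketch of the statistical-learning mechanism described in the Language development section above: given a toy corpus of words transcribed as phoneme sequences (the corpus and the simplified transcriptions are invented for illustration, not taken from the cited studies), transition probabilities reveal which sound combinations reliably co-occur, such as the "qu" pattern.

```python
from collections import Counter

# Invented toy corpus: words as simplified phoneme sequences.
corpus = [
    ["k", "w", "i", "k"],   # "quick"
    ["k", "w", "ai", "t"],  # "quite"
    ["h", "a", "t"],        # "hat"
    ["h", "i", "t"],        # "hit"
    ["t", "a", "k"],        # "tack"
]

pair_counts = Counter()   # counts of adjacent phoneme pairs
first_counts = Counter()  # counts of each phoneme as the first of a pair

for word in corpus:
    for a, b in zip(word, word[1:]):
        pair_counts[(a, b)] += 1
        first_counts[a] += 1

def transition_prob(a, b):
    """Estimated probability that phoneme b follows phoneme a."""
    return pair_counts[(a, b)] / first_counts[a] if first_counts[a] else 0.0

print(transition_prob("k", "w"))  # 1.0: "w" always follows "k" here ("qu")
print(transition_prob("h", "a"))  # 0.5: "h" is followed by either "a" or "i"
print(transition_prob("w", "k"))  # 0.0: this combination never occurs
```

On such counts, frequent transitions stand out against unattested ones, which is the sense in which combination likelihood can be extracted from exposure alone.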
Pattern recognition (psychology)
[ "Biology" ]
4,705
[ "Behavioural sciences", "Behavior", "Cognitive psychology" ]
7,330,969
https://en.wikipedia.org/wiki/Microphthalmia-associated%20transcription%20factor
Microphthalmia-associated transcription factor, also known as class E basic helix-loop-helix protein 32 or bHLHe32, is a protein that in humans is encoded by the MITF gene. MITF is a basic helix-loop-helix leucine zipper transcription factor involved in lineage-specific pathway regulation of many types of cells, including melanocytes, osteoclasts, and mast cells. The term "lineage-specific", as it relates to MITF, means genes or traits that are found only in a certain cell type. Therefore, MITF may be involved in the rewiring of signaling cascades that are specifically required for the survival and physiological function of their normal cell precursors. MITF, together with transcription factor EB (TFEB), TFE3 and TFEC, belongs to a subfamily of related bHLHZip proteins, termed the MiT-TFE family of transcription factors. These factors are able to form stable DNA-binding homo- and heterodimers. The gene that encodes MITF resides at the mi locus in mice, and its protumorigenic targets include factors involved in cell death, DNA replication, repair, mitosis, microRNA production, membrane trafficking, and mitochondrial metabolism, among others. Mutation of this gene results in deafness, bone loss, small eyes, and poorly pigmented eyes and skin. In humans, because MITF controls the expression of various genes that are essential for normal melanin synthesis in melanocytes, mutations of MITF can lead to diseases such as melanoma, Waardenburg syndrome, and Tietz syndrome. Its function is conserved across vertebrates, including in fishes such as zebrafish and Xiphophorus. An understanding of MITF is necessary to understand how certain lineage-specific cancers and other diseases progress. In addition, current and future research may open avenues for targeting this transcription factor mechanism for cancer prevention. Clinical significance Mutations As mentioned above, changes in MITF can result in serious health conditions. For example, mutations of MITF have been implicated in both Waardenburg syndrome and Tietz syndrome. Waardenburg syndrome is a rare genetic disorder. Its symptoms include deafness, minor defects, and abnormalities in pigmentation. Mutations in the MITF gene have been found in certain patients with Waardenburg syndrome, type II. Mutations that change the amino acid sequence and produce an abnormally small MITF protein have been identified. These mutations disrupt dimer formation and, as a result, cause insufficient development of melanocytes. The shortage of melanocytes causes some of the characteristic features of Waardenburg syndrome. Tietz syndrome, first described in 1923, is a congenital disorder often characterized by deafness and leucism. Tietz syndrome is caused by a mutation in the MITF gene that deletes or changes a single amino acid in the basic motif region of the MITF protein. The altered MITF protein is unable to bind to DNA, and melanocyte development, and subsequently melanin production, is disrupted. A reduced number of melanocytes can lead to hearing loss, and decreased melanin production can account for the light skin and hair color that make Tietz syndrome so noticeable. Melanoma Melanocytes are commonly known as the cells responsible for producing the pigment melanin, which gives coloration to the hair, skin, and nails. The exact mechanisms by which melanocytes become cancerous are relatively unclear, but there is ongoing research to gain more information about the process. 
For example, it has been uncovered that the DNA of certain genes is often damaged in melanoma cells, most likely as a result of damage from UV radiation, which in turn increases the likelihood of developing melanoma. Specifically, it has been found that a large percentage of melanomas have mutations in the B-RAF gene, which leads to melanoma by causing a MEK-ERK kinase cascade when activated. In addition to B-RAF, MITF is also known to play a crucial role in melanoma progression. Since it is a transcription factor involved in the regulation of genes related to invasiveness, migration, and metastasis, it can play a role in the progression of melanoma. Target genes MITF recognizes E-box (CAYRTG) and M-box (TCAYRTG or CAYRTGA) sequences in the promoter regions of target genes. A number of target genes of this transcription factor have been confirmed by at least two independent sources, and a microarray study, which confirmed those targets, identified additional genes. The LysRS-Ap4A-MITF signaling pathway The LysRS-Ap4A-MITF signaling pathway was first discovered in mast cells, in which a mitogen-activated protein kinase (MAPK) pathway is activated upon allergen stimulation. The binding of immunoglobulin E to the high-affinity IgE receptor (FcεRI) provides the stimulus that starts the cascade. Lysyl-tRNA synthetase (LysRS) normally resides in the multisynthetase complex. This complex consists of nine different aminoacyl-tRNA synthetases and three scaffold proteins and has been termed the "signalosome" due to its non-catalytic signalling functions. After activation, LysRS is phosphorylated on serine 207 in a MAPK-dependent manner. This phosphorylation causes LysRS to change its conformation, detach from the complex and translocate into the nucleus, where it associates with histidine triad nucleotide-binding protein 1 (HINT1), which forms an inhibitory complex with MITF. The conformational change also switches LysRS activity from aminoacylation of lysine tRNA to diadenosine tetraphosphate (Ap4A) production. Ap4A, which is an adenosine joined to another adenosine through a 5'-5' tetraphosphate bridge, binds to HINT1, and this releases MITF from the inhibitory complex, allowing it to transcribe its target genes. Specifically, Ap4A causes a polymerization of the HINT1 molecule into filaments. The polymerization blocks the interface for MITF and thus prevents the binding of the two proteins. This mechanism is dependent on the precise length of the phosphate bridge in the Ap4A molecule, so other nucleotides such as ATP or AMP will not affect it. MITF is also an integral part of melanocytes, where it regulates the expression of a number of proteins with melanogenic potential. Continuous expression of MITF at a certain level is one of the factors necessary for melanoma cells to proliferate, survive and avoid detection by host immune cells through T-cell recognition of the melanoma-associated antigen (melan-A). Post-translational modifications of the HINT1 molecule have been shown to affect MITF gene expression as well as the binding of Ap4A. Mutations in HINT1 itself have been shown to be the cause of axonal neuropathies. The regulatory mechanism relies on the enzyme diadenosine tetraphosphate hydrolase, a member of the Nudix type 2 enzymatic family (NUDT2), to cleave Ap4A, allow the binding of HINT1 to MITF and thus suppress the expression of the MITF-transcribed genes. 
NUDT2 itself has also been shown to be associated with human breast carcinoma, where it promotes cellular proliferation. The enzyme is only 17 kDa and can diffuse freely between the nucleus and cytosol, explaining its presence in the nucleus. It has also been shown to be actively transported into the nucleus, by directly interacting with the N-terminal domain of importin-β, upon immunological stimulation of mast cells. Growing evidence indicates that the LysRS-Ap4A-MITF signalling pathway is an integral aspect of the control of MITF transcriptional activity. Activation of the LysRS-Ap4A-MITF signalling pathway by isoproterenol has been confirmed in cardiomyocytes. A heart-specific isoform of MITF is a major regulator of cardiac growth and hypertrophy, responsible for heart growth and for the physiological response of cardiomyocytes to beta-adrenergic stimulation. Phosphorylation MITF is phosphorylated on several serine and tyrosine residues. Serine phosphorylation is regulated by several signaling pathways, including MAPK/BRAF/ERK, the receptor tyrosine kinase KIT, GSK-3 and mTOR. In addition, several kinases, including PI3K, AKT, SRC and P38, are critical activators of MITF phosphorylation. In contrast, tyrosine phosphorylation is induced by the presence of the KIT oncogenic mutation D816V. This KIT D816V pathway is dependent on SRC protein family activation signaling. The induction of serine phosphorylation by the frequently altered MAPK/BRAF pathway and the GSK-3 pathway in melanoma regulates MITF nuclear export, thereby decreasing MITF activity in the nucleus. Similarly, tyrosine phosphorylation mediated by the presence of the KIT oncogenic mutation D816V also increases the presence of MITF in the cytoplasm. Interactions Most transcription factors function in cooperation with other factors by protein–protein interactions. Association of MITF with other proteins is a critical step in the regulation of MITF-mediated transcriptional activity. Some commonly studied MITF interactions include those with MAZR, PIAS3, TFE3, hUBC9, PKC1, and LEF1. Looking at the variety of these partners gives insight into MITF's varied roles in the cell. The Myc-associated zinc-finger protein related factor (MAZR) interacts with the Zip domain of MITF. When expressed together, MAZR and MITF increase promoter activity of, and transactivate, the mMCP-6 gene. MAZR also plays a role in the phenotypic expression of mast cells in association with MITF. PIAS3 is a transcriptional inhibitor that acts by inhibiting STAT3's DNA binding activity. PIAS3 directly interacts with MITF, and STAT3 does not interfere with the interaction between PIAS3 and MITF. PIAS3 functions as a key molecule in suppressing the transcriptional activity of MITF. This is important when considering mast cell and melanocyte development. MITF, TFE3 and TFEB are part of the basic helix-loop-helix leucine zipper family of transcription factors, and each protein encoded by this family can bind DNA. MITF is necessary for melanocyte and eye development, and newer research suggests that TFE3 is also required for osteoclast development, a function redundant with MITF. The combined loss of both genes results in severe osteopetrosis, pointing to an interaction between MITF and other members of its transcription factor family. In turn, TFEB has been termed the master regulator of lysosome biogenesis and autophagy. 
Separate roles for MITF, TFEB and TFE3 in modulating starvation-induced autophagy have been described in melanoma. Moreover, the MITF and TFEB proteins directly regulate each other's mRNA and protein expression, while their subcellular localization and transcriptional activity are subject to similar modulation, such as by the mTOR signaling pathway. UBC9 is a ubiquitin-conjugating enzyme that associates with MITF. Although hUBC9 is known to act preferentially with SENTRIN/SUMO1, an in vitro analysis demonstrated greater actual association with MITF. hUBC9 is a critical regulator of melanocyte differentiation. To do this, it targets MITF for proteasomal degradation. Protein kinase C-interacting protein 1 (PKC1) associates with MITF. Their association is reduced upon cell activation, when MITF disengages from PKC1. PKC1 by itself, found in the cytosol and nucleus, has no known physiological function. It does, however, have the ability to suppress MITF transcriptional activity and can function as an in vivo negative regulator of MITF-induced transcriptional activity. The functional cooperation between MITF and the lymphoid enhancing factor (LEF-1) results in a synergistic transactivation of the dopachrome tautomerase gene promoter, an early melanoblast marker. LEF-1 is involved in the process of regulation by Wnt signaling. LEF-1 also cooperates with MITF-related proteins like TFE3. MITF is a modulator of LEF-1, and this regulation ensures efficient propagation of Wnt signals in many cells. Translational regulation Translational regulation of MITF is still a largely unexplored area, with only two peer-reviewed papers (as of 2019) highlighting its importance. During glutamine starvation of melanoma cells, ATF4 transcript levels increase, as does translation of the mRNA, due to eIF2α phosphorylation. This chain of molecular events leads to two levels of MITF suppression: first, the ATF4 protein binds and suppresses MITF transcription, and second, eIF2α blocks MITF translation, possibly through the inhibition of eIF2B by eIF2α. Translation of MITF can also be directly regulated by the RNA helicase DDX3X. The 5' UTR of MITF contains an important regulatory element, an internal ribosome entry site (IRES), that is recognized, bound and activated by DDX3X. Although the 5' UTR of MITF is only 123 nucleotides long, this region is predicted to fold into energetically favorable RNA secondary structures, including multibranched loops and asymmetric bulges, that are characteristic of IRES elements. Activation of this cis-regulatory sequence by DDX3X promotes MITF expression in melanoma cells. See also Microphthalmia Splashed white References External links Transcription factors Gene expression Human proteins
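A minimal sketch of the E-box/M-box recognition described in the Target genes section: the consensus sequences use IUPAC degenerate bases (Y = C or T, R = A or G), so they can be expanded into regular expressions and matched against a promoter sequence. The promoter fragment below is invented for illustration; this is not a tool from the cited studies.

```python
import re

# IUPAC degenerate bases appearing in the E-box/M-box consensus sequences.
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "Y": "[CT]", "R": "[AG]"}

def motif_to_regex(motif):
    """Expand a degenerate consensus (e.g. CAYRTG) into a regex."""
    return re.compile("".join(IUPAC[base] for base in motif))

E_BOX = motif_to_regex("CAYRTG")
M_BOXES = [motif_to_regex("TCAYRTG"), motif_to_regex("CAYRTGA")]

promoter = "GGATTCATGTGACCTGTCACGTGTT"  # hypothetical promoter fragment

for m in E_BOX.finditer(promoter):
    print("E-box at position", m.start(), ":", m.group())
for regex in M_BOXES:
    for m in regex.finditer(promoter):
        print("M-box at position", m.start(), ":", m.group())
```

Both E-boxes found in this fragment (CATGTG and CACGTG) satisfy the CAYRTG consensus; the bases flanking each site determine whether the longer M-box variants also match.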
Microphthalmia-associated transcription factor
[ "Chemistry", "Biology" ]
3,006
[ "Gene expression", "Signal transduction", "Molecular genetics", "Cellular processes", "Induced stem cells", "Molecular biology", "Biochemistry", "Transcription factors" ]
7,331,527
https://en.wikipedia.org/wiki/Dispensary
A dispensary is an office in a school, hospital, industrial plant, or other organization that dispenses medications, medical supplies, and in some cases even medical and dental treatment. In a traditional dispensary set-up, a pharmacist dispenses medication per the prescription or order form. The English term originated from a medieval Latin noun cognate with the Latin verb dispensare, 'to distribute'. The term also refers to legal cannabis dispensaries. The term also has Victorian-era currency: in 1862 it was used in the folk song "Blaydon Races", which differentiated a dispensary from a doctor's surgery and an infirmary. The advent of huge industrial plants in the late 19th and early 20th centuries, such as large steel mills, created a demand for in-house first responder services, including firefighting, emergency medical services, and even primary care, that were closer to the point of need, under closer company control, and in many cases better capitalized than any services that the surrounding town could provide. In such contexts, company doctors and nurses were regularly on duty or on call. Electronic dispensaries are designed to ensure efficient and consistent dispensing of excipient and active ingredients in a secure data environment with full audit traceability. A standard dispensary system consists of a range of modules such as manual dispensing, supervisory, bulk dispensing, recipe management and interfacing with external systems. Such a system might dispense much more than just medical products, such as alcohol, tobacco or vitamins and minerals. Primary care (Kenya) In Kenya, a dispensary is a small outpatient health facility, usually managed by a registered nurse. It provides the most basic primary healthcare services to rural communities, e.g. childhood immunization, family planning, wound dressing and management of common ailments like colds, diarrhea and simple malaria. The nurses report to the nursing officer at the health center, where they refer patients with complicated diseases to be managed by clinical officers. Primary care (India) In India, a dispensary refers to a small setup with basic medical facilities where a doctor can provide a primary level of care. It does not have a hospitalization facility and is generally owned by a single doctor. In remote areas of India where hospital facilities are not available, dispensaries provide care. Tuberculosis (Turkey) In Turkey, the term dispensary is almost always used in reference to tuberculosis dispensaries established across the country under a programme to eliminate tuberculosis initiated in 1923, the same year the country was founded. Although more than a hundred such dispensaries continue to operate as of 2023, they had been largely supplanted by hospitals by the end of the 20th century with increased access to healthcare. Alcohol (USA) The term dispensary in the United States was also used to refer to government agencies that sell alcoholic beverages, particularly in the states of Idaho and South Carolina. Cannabis North America In Arizona, British Columbia, California, Colorado, Connecticut, Illinois, Maine, Massachusetts, Oregon, Michigan, New Jersey, New Mexico, New York, Rhode Island, Ontario, Quebec, and Washington, medical cannabis is sold in specially designated stores called cannabis dispensaries or "compassion clubs". These clubs are for members or patients only, unless cannabis legalization has already passed in the state or province in question. 
In Canada, dispensaries are far less abundant than in the USA; most Canadian dispensaries are in British Columbia and Ontario. Uruguay In 2013 Uruguay became the first country to legalize marijuana cultivation, sale and consumption. The government is building a network of dispensaries that are meant to help track marijuana sales and consumption. The move was intended to decrease the role of the criminal world in the distribution and sale of marijuana. See also References Pharmacy
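A minimal sketch of the audit traceability idea described in the electronic-dispensary paragraph above: each dispensing event is written to an append-only log. All field names, the 1% tolerance rule, and the file name are invented for illustration and are not taken from any real dispensary system.

```python
import datetime
import json

def dispense(prescription_id, ingredient, requested_g, weighed_g, operator):
    """Record one dispensing event in an append-only audit log."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prescription_id": prescription_id,
        "ingredient": ingredient,
        "requested_g": requested_g,
        "weighed_g": weighed_g,
        # Hypothetical 1% weighing tolerance, purely for illustration.
        "within_tolerance": abs(weighed_g - requested_g) <= 0.01 * requested_g,
        "operator": operator,
    }
    with open("dispense_log.jsonl", "a") as log:  # append-only audit trail
        log.write(json.dumps(record) + "\n")
    return record

print(dispense("RX-1001", "paracetamol", 500.0, 499.6, "jsmith"))
```

Because the log is append-only and every event carries a timestamp and operator, a later audit can reconstruct exactly who dispensed what, when, and whether it was within tolerance.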
Dispensary
[ "Chemistry" ]
820
[ "Pharmacology", "Pharmacy" ]
7,331,570
https://en.wikipedia.org/wiki/Pressure%20switch
A pressure switch is a form of switch that operates an electrical contact when a certain set fluid pressure has been reached on its input. The switch may be designed to make contact either on pressure rise or on pressure fall. Pressure switches are widely used in industry to automatically supervise and control systems that use pressurized fluids. Another type of pressure switch detects mechanical force; for example, a pressure-sensitive mat is used to automatically open doors on commercial buildings. Such sensors are also used in security alarm applications such as pressure-sensitive floors. Construction and types A pressure switch for sensing fluid pressure contains a capsule, bellows, Bourdon tube, diaphragm or piston element that deforms or displaces proportionally to the applied pressure. The resulting motion is applied, either directly or through amplifying levers, to a set of switch contacts. Since pressure may be changing slowly and contacts should operate quickly, some kind of over-center mechanism such as a miniature snap-action switch is used to ensure quick operation of the contacts. One sensitive type of pressure switch uses mercury switches mounted on a Bourdon tube; the shifting weight of the mercury provides a useful over-center characteristic. The pressure switch may be adjustable, by moving the contacts or adjusting tension in a counterbalance spring. Industrial pressure switches may have a calibrated scale and pointer to show the set point of the switch. A pressure switch will have a hysteresis, that is, a differential range around its setpoint, known as the switch's deadband, inside which small changes of pressure do not influence the state of the contacts. Some types allow adjustment of the differential. The pressure-sensing element of a pressure switch may be arranged to respond to the difference of two pressures. Such switches are useful when the difference is significant, for example, to detect a clogged filter in a water supply system. The switches must be designed to respond only to the difference and not to false-operate for changes in the common-mode pressure. The contacts of the pressure switch may be rated from a few tenths of an ampere to around 15 amperes, with smaller ratings found on more sensitive switches. Often a pressure switch will operate a relay or other control device, but some types can directly control small electric motors or other loads. Since the internal parts of the switch are exposed to the process fluid, they must be chosen to balance strength and life expectancy against compatibility with process fluids. For example, rubber diaphragms are commonly used in contact with water, but would quickly degrade if used in a system containing mineral oil. Switches designed for use in hazardous areas with flammable gas have enclosures designed to prevent an arc at the contacts from igniting the surrounding gas. Switch enclosures may also be required to be weatherproof, corrosion resistant, or submersible. An electronic pressure switch incorporates some variety of pressure transducer (strain gauge, capacitive element, or other) and an internal circuit to compare the measured pressure to a set point. Such devices may provide improved repeatability, accuracy and precision over a mechanical switch. Pneumatic Uses of pneumatic pressure switches include: Switching a household well water pump on automatically when water is drawn from the pressure tank. 
Switching off an electrically driven gas compressor when a set pressure is achieved in the reservoir. Switching off a gas compressor whenever there is no feed in the suction stage. In-cell charge control in a battery. Switching on an alarm light in the cockpit of an aircraft if cabin pressure (based on altitude) is critically low. Air-filled hoses that activate switches when vehicles drive over them, common for counting traffic and at gas stations. Hydraulic Hydraulic pressure switches have various uses in automobiles, for example, to warn if the engine's oil pressure falls below a safe level, or to control automatic transmission torque converter lock-up. Prior to the 1960s, a pressure switch was used in the hydraulic braking circuit to control power to the brake lights; more recent automobiles use a switch directly activated by the brake pedal. In dust control systems (bag filters), a pressure switch mounted on the header raises an alarm when air pressure in the header falls below the necessary level. A differential pressure switch may be installed across a filter element to sense increased pressure drop, indicating the need for filter cleaning or replacement. Examples Pressure sensitive mat A pressure sensitive mat provides a contact signal when force is applied anywhere within the area of the mat. Some mats provide a single signal, while others can resolve the position of the applied force within the mat. Pressure sensitive mats can be used to operate electrically operated doors, or as part of an interlock system to ensure machine operators are clear of dangerous areas of a process before it operates. Pressure sensitive mats can be used to detect persons walking over a particular point, as part of a security alarm system or to count attendance, or for other purposes. See also Dynamic pressure List of sensors Pressure sensor References External links Pneumatic tools Hydraulic tools Security technology
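A minimal sketch of the deadband (hysteresis) behaviour described in the Construction and types section, for a switch that makes contact on pressure rise; the setpoint and differential values are arbitrary illustrations, not figures from any particular device.

```python
class PressureSwitch:
    """Idealized pressure switch with an adjustable deadband."""

    def __init__(self, setpoint_kpa, deadband_kpa):
        self.setpoint = setpoint_kpa
        self.deadband = deadband_kpa
        self.closed = False  # contact state

    def update(self, pressure_kpa):
        if not self.closed and pressure_kpa >= self.setpoint:
            self.closed = True   # contacts close on rising pressure
        elif self.closed and pressure_kpa <= self.setpoint - self.deadband:
            self.closed = False  # contacts reopen only below the deadband
        return self.closed

switch = PressureSwitch(setpoint_kpa=400.0, deadband_kpa=50.0)
for p in (380, 405, 395, 360, 340):
    print(p, switch.update(p))
# The dips to 395 and 360 kPa do not reopen the contacts; only the fall
# to 340 kPa (below setpoint minus deadband, 350 kPa) resets the switch.
```

This is exactly why small pressure fluctuations around the setpoint do not cause the contacts to chatter: the state can only change once pressure leaves the deadband entirely.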
Pressure switch
[ "Physics" ]
1,013
[ "Physical systems", "Hydraulics", "Hydraulic tools" ]
7,331,693
https://en.wikipedia.org/wiki/Joel%20Lebowitz
Joel Louis Lebowitz (born May 10, 1930) is a mathematical physicist widely acknowledged for his outstanding contributions to statistical physics, statistical mechanics and many other fields of mathematics and physics. Lebowitz has published more than five hundred papers concerning statistical physics and science in general, and he is one of the founders and editors of the Journal of Statistical Physics, one of the most important peer-reviewed journals concerning scientific research in this area. He has been president of the New York Academy of Sciences. Lebowitz is the George William Hill Professor of Mathematics and Physics at Rutgers University. He is also an active member of the human rights community and a long-term co-chair of the Committee of Concerned Scientists. Biography Lebowitz was born in Taceva, then in Czechoslovakia, now in Ukraine, in 1930 into a Jewish family. During World War II he was deported with his family to Auschwitz, where his father, his mother, and his younger sister were killed in 1944. After being liberated from the camp, he emigrated to the United States by boat, and studied at an Orthodox Jewish school and at Brooklyn College. He earned his PhD at Syracuse University in 1956 under the supervision of Peter G. Bergmann. He then continued his research with Lars Onsager at Yale University, where he obtained a faculty position. He moved to the Stevens Institute of Technology in 1957 and to the Belfer Graduate School of Science of Yeshiva University in 1959. Finally, he took a faculty position at Rutgers University in 1977, where he holds the George William Hill Professorship. During his years at Yeshiva University and Rutgers University he has been in contact with several scientists and artists, like Fumio Yoshimura and Kate Millett. In 1975 he founded the Journal of Statistical Physics. In 1979 he was president of the New York Academy of Sciences. He has been one of the most active supporters of dissident scientists in the former Soviet Union, especially refusenik scientists. Scientific legacy Lebowitz has made many important contributions to statistical mechanics and mathematical physics. He proved, along with Elliott Lieb, that systems with Coulomb interactions possess a well-defined thermodynamic limit. He also established what are now known as the Lebowitz inequalities for the ferromagnetic Ising model. His current interests are in problems of non-equilibrium statistical mechanics. He became editor-in-chief of the Journal of Statistical Physics in 1975, one of the most important journals in the field, a position he held until September 2018. Lebowitz hosts a biannual series of conferences, held first at Yeshiva University and later at Rutgers University, which has been running for 60 years. He is also known as a co-editor of the influential review series Phase Transitions and Critical Phenomena. Awards and honors Lebowitz has been awarded several honors, such as the Boltzmann Medal (1992), the Nicholson Medal (1994) awarded by the American Physical Society, the Delmer S. Fahrney Medal (1995), the Henri Poincaré Prize (2000), the Volterra Award (2001), and the Heineman Prize for Mathematical Physics (2021). His Heineman Prize citation reads: "For seminal contributions to nonequilibrium and equilibrium statistical mechanics, in particular, studies of large deviations in nonequilibrium steady states and rigorous analysis of Gibbs equilibrium ensembles." 
Among other recognitions, Lebowitz was awarded the Max Planck Medal in 2007 "for his important contributions to the statistical physics of equilibrium and non-equilibrium systems, in particular his contributions to the theory of phase transitions, the dynamics of infinite systems, and the stationary non-equilibrium states" and "for his promoting of new directions of this field at its farthest front, and for enthusiastically introducing several generations of scientists to the field." In 2014 he received the Grande Médaille of the French Academy of Sciences. In 2022 he was awarded the Dirac Medal of the ICTP. Lebowitz is a member of the United States National Academy of Sciences. In 1966 he became a fellow of the American Physical Society, and in 2012 a fellow of the American Mathematical Society. He received an honorary Doctor of Science degree at Syracuse University's 158th Commencement in 2012. References External links Laudatio for Joel L. Lebowitz by David Ruelle (IHES, Paris) at the Poincaré Prize Ceremony (2000) 1930 births Living people Mathematical physicists Members of the United States National Academy of Sciences Ukrainian Jews American people of Ukrainian-Jewish descent 20th-century American mathematicians Rutgers University faculty Syracuse University College of Arts and Sciences alumni Auschwitz concentration camp survivors Fellows of the American Mathematical Society Thermodynamicists Brooklyn College alumni Fellows of the American Physical Society Winners of the Max Planck Medal
Joel Lebowitz
[ "Physics", "Chemistry" ]
974
[ "Thermodynamics", "Thermodynamicists" ]
7,332,296
https://en.wikipedia.org/wiki/VSMPO-AVISMA
VSMPO-AVISMA Corporation () is the world's largest titanium producer. Located in Verkhnyaya Salda, Russia, VSMPO-AVISMA also operates facilities in Ukraine, England, Switzerland, Germany, and the United States. The company produces titanium, aluminum, magnesium and steel alloys. VSMPO-AVISMA does a great deal of business with aerospace companies around the world. As of February 2022, VSMPO produced 90% of Russia's titanium and exported it to 50 countries across the globe. VSMPO is mostly owned by Industrial Investments LLC, whose main owner is the Russian billionaire Mikhail Shelkov. VSMPO stands for VerkhneSaldinskoye Metallurgicheskoye Proizvodstvennoye Ob'yedineniye (, or Metal-producing company of Verkhnyaya Salda); and AVISMA for AVIatsionnyye Spetsial'nyye MAterialy, or AVIation Special MAterials. History After Rosoboronexport obtained a 66% stake in VSMPO-AVISMA in October 2006, Sergey Chemezov became chairman of VSMPO-AVISMA in November 2006. On 27 December 2007 the US company Boeing and VSMPO-AVISMA created a joint venture, Ural Boeing Manufacturing (UBM), and signed a contract on titanium product deliveries until 2015. Boeing planned to invest 27 million dollars in the production of titanium parts for its needs. The company is a key organizer of the Titanium Valley project in Sverdlovsk Oblast. Despite being a Russian company, VSMPO-AVISMA was not affected by the American and European sanctions imposed during the first phase of the Russo-Ukrainian War. In December 2020, the company was temporarily included on an American sanctions list due to its connections with the Russian Armed Forces, but the US later lifted the sanctions against VSMPO-AVISMA. In November 2021, VSMPO-AVISMA and Boeing signed an agreement to increase the production capacity of UBM and the planned investment in its R&D sector. The agreement maintained the role of VSMPO-AVISMA as the biggest titanium supplier for Boeing. Only four months later, following the Russian invasion of Ukraine, the agreement was terminated, along with the 2007 agreement. In November 2021, the Arbitration Court of Sverdlovsk Oblast ordered the company to pay 651 million roubles for pollution of a ground area of 347,538 square meters, following a legal action by the Federal Service for Supervision of Natural Resources. Following the Russian invasion of Ukraine in February 2022, several international companies ceased their collaboration with VSMPO-AVISMA. Thus, in March 2022, Rolls-Royce Holdings and Boeing suspended purchasing titanium from the company for an indefinite period. Related organizations Subsidiaries CJSC Tube Works VSMPO-AVISMA Nikopol, Ukraine VSMPO-Tirus UK Redditch, United Kingdom VSMPO-Tirus UK is VSMPO-AVISMA's sales and distribution centre for UK customers. VSMPO – Tirus GmbH Frankfurt-am-Main, Germany VSMPO-Tirus US Highlands Ranch, Colorado (Head Office) Leetsdale, Pennsylvania (Eastern production center) Ontario, California (Western production center) VSMPO Tirus AG Lausanne, Switzerland NF&M International Monaca, Pennsylvania References Literature External links Firmakes Titanium Mining companies of Russia Steel companies of Russia Mining companies of the Soviet Union Non-renewable resource companies established in 1941 Companies listed on the Moscow Exchange Russian brands Companies based in Sverdlovsk Oblast 1941 establishments in the Soviet Union Manufacturing companies established in 1941 Titanium companies
VSMPO-AVISMA
[ "Chemistry" ]
772
[ "Metallurgical industry of Russia", "Metallurgical industry by country" ]
7,333,367
https://en.wikipedia.org/wiki/Industrial%20control%20system
An industrial control system (ICS) is an electronic control system and associated instrumentation used for industrial process control. Control systems can range in size from a few modular panel-mounted controllers to large interconnected and interactive distributed control systems (DCSs) with many thousands of field connections. Control systems receive data from remote sensors measuring process variables (PVs), compare the collected data with desired setpoints (SPs), and derive command functions that are used to control a process through the final control elements (FCEs), such as control valves. Larger systems are usually implemented by supervisory control and data acquisition (SCADA) systems, or DCSs, and programmable logic controllers (PLCs), though SCADA and PLC systems are scalable down to small systems with few control loops. Such systems are extensively used in industries such as chemical processing, pulp and paper manufacture, power generation, oil and gas processing, and telecommunications. Discrete controllers The simplest control systems are based around small discrete controllers with a single control loop each. These are usually panel-mounted, which allows direct viewing of the front panel and provides means of manual intervention by the operator, either to manually control the process or to change control setpoints. Originally these would be pneumatic controllers, a few of which are still in use, but nearly all are now electronic. Quite complex systems can be created with networks of these controllers communicating using industry-standard protocols. Networking allows the use of local or remote SCADA operator interfaces, and enables the cascading and interlocking of controllers. However, as the number of control loops increases for a system design, there is a point where the use of a programmable logic controller (PLC) or distributed control system (DCS) is more manageable or cost-effective. Distributed control systems A distributed control system (DCS) is a digital process control system (PCS) for a process or plant, wherein controller functions and field connection modules are distributed throughout the system. As the number of control loops grows, a DCS becomes more cost-effective than discrete controllers. Additionally, a DCS provides supervisory viewing and management over large industrial processes. In a DCS, a hierarchy of controllers is connected by communication networks, allowing centralized control rooms and local on-plant monitoring and control. A DCS enables easy configuration of plant controls such as cascaded loops and interlocks, and easy interfacing with other computer systems such as production control. It also enables more sophisticated alarm handling, introduces automatic event logging, removes the need for physical records such as chart recorders, and allows the control equipment to be networked and thereby located locally to the equipment being controlled to reduce cabling. A DCS typically uses custom-designed processors as controllers and uses either proprietary interconnections or standard protocols for communication. Input and output modules form the peripheral components of the system. The processors receive information from input modules, process the information and decide control actions to be performed by the output modules. The input modules receive information from sensing instruments in the process (or field) and the output modules transmit instructions to the final control elements, such as control valves. 
The field inputs and outputs can be either continuously changing analog signals, e.g. a current loop, or two-state signals that switch either on or off, such as relay contacts or a semiconductor switch. Distributed control systems can normally also support Foundation Fieldbus, PROFIBUS, HART, Modbus and other digital communication buses that carry not only input and output signals but also advanced messages such as error diagnostics and status signals. SCADA systems Supervisory control and data acquisition (SCADA) is a control system architecture that uses computers, networked data communications and graphical user interfaces for high-level process supervisory management. The operator interfaces which enable monitoring and the issuing of process commands, such as controller setpoint changes, are handled through the SCADA supervisory computer system. However, the real-time control logic or controller calculations are performed by networked modules which connect to other peripheral devices such as programmable logic controllers and discrete PID controllers which interface to the process plant or machinery. The SCADA concept was developed as a universal means of remote access to a variety of local control modules, which could be from different manufacturers, allowing access through standard automation protocols. In practice, large SCADA systems have grown to become very similar to distributed control systems in function, while using multiple means of interfacing with the plant. They can control large-scale processes that can include multiple sites, and work over large distances. This is a commonly used architecture in industrial control systems; however, there are concerns about SCADA systems being vulnerable to cyberwarfare or cyberterrorism attacks. The SCADA software operates on a supervisory level, as control actions are performed automatically by RTUs or PLCs. SCADA control functions are usually restricted to basic overriding or supervisory-level intervention. A feedback control loop is directly controlled by the RTU or PLC, but the SCADA software monitors the overall performance of the loop. For example, a PLC may control the flow of cooling water through part of an industrial process to a set point level, but the SCADA system software will allow operators to change the set points for the flow. The SCADA system also enables alarm conditions, such as loss of flow or high temperature, to be displayed and recorded. Programmable logic controllers PLCs can range from small modular devices with tens of inputs and outputs (I/O) in a housing integral with the processor, to large rack-mounted modular devices with thousands of I/O, which are often networked to other PLC and SCADA systems. They can be designed for multiple arrangements of digital and analog inputs and outputs, extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed or non-volatile memory. History Process control of large industrial plants has evolved through many stages. Initially, control was from panels local to the process plant. However, this required personnel to attend to these dispersed panels, and there was no overall view of the process. The next logical development was the transmission of all plant measurements to a permanently staffed central control room. 
Often the controllers were behind the control room panels, and all automatic and manual control outputs were individually transmitted back to the plant in the form of pneumatic or electrical signals. Effectively this was the centralisation of all the localised panels, with the advantages of reduced manpower requirements and a consolidated overview of the process. However, whilst providing a central control focus, this arrangement was inflexible, as each control loop had its own controller hardware, so system changes required reconfiguration of signals by re-piping or re-wiring. It also required continual operator movement within a large control room in order to monitor the whole process. With the coming of electronic processors, high-speed electronic signalling networks and electronic graphic displays, it became possible to replace these discrete controllers with computer-based algorithms, hosted on a network of input/output racks with their own control processors. These could be distributed around the plant and would communicate with the graphic displays in the control room. The concept of distributed control was realised. The introduction of distributed control allowed flexible interconnection and re-configuration of plant controls such as cascaded loops and interlocks, and interfacing with other production computer systems. It enabled sophisticated alarm handling, introduced automatic event logging, removed the need for physical records such as chart recorders, allowed the control racks to be networked and thereby located locally to plant to reduce cabling runs, and provided high-level overviews of plant status and production levels. For large control systems, the general commercial name distributed control system (DCS) was coined to refer to proprietary modular systems from many manufacturers which integrated high-speed networking and a full suite of displays and control racks. While the DCS was tailored to meet the needs of large continuous industrial processes, in industries where combinatorial and sequential logic was the primary requirement, the PLC evolved out of a need to replace racks of relays and timers used for event-driven control. The old controls were difficult to re-configure and debug, and PLC control enabled networking of signals to a central control area with electronic displays. PLCs were first developed for the automotive industry on vehicle production lines, where sequential logic was becoming very complex. They were soon adopted in a large number of other event-driven applications as varied as printing presses and water treatment plants. SCADA's history is rooted in distribution applications, such as power, natural gas, and water pipelines, where there is a need to gather remote data through potentially unreliable or intermittent low-bandwidth and high-latency links. SCADA systems use open-loop control with sites that are widely separated geographically. A SCADA system uses remote terminal units (RTUs) to send supervisory data back to a control centre. RTU systems have always had some capacity to handle local control while the master station is unavailable, and over the years they have grown more and more capable of handling local control. The boundaries between DCS and SCADA/PLC systems are blurring as time goes on. The technical limits that drove the designs of these various systems are no longer as much of an issue. 
Many PLC platforms can now perform quite well as a small DCS, using remote I/O, and are sufficiently reliable that some SCADA systems actually manage closed-loop control over long distances. With the increasing speed of today's processors, many DCS products have a full line of PLC-like subsystems that weren't offered when they were initially developed. In 1993, with the release of IEC 1131, later to become IEC 61131-3, the industry moved towards increased code standardization with reusable, hardware-independent control software. For the first time, object-oriented programming (OOP) became possible within industrial control systems. This led to the development of both programmable automation controllers (PACs) and industrial PCs (IPCs). These are platforms programmed in the five standardized IEC languages: ladder logic, structured text, function block, instruction list and sequential function chart. They can also be programmed in modern high-level languages such as C or C++. Additionally, they accept models developed in analytical tools such as MATLAB and Simulink. Unlike traditional PLCs, which use proprietary operating systems, IPCs utilize Windows IoT. IPCs have the advantage of powerful multi-core processors with much lower hardware costs than traditional PLCs, and they fit well into multiple form factors such as DIN rail mount, combined with a touch-screen as a panel PC, or as an embedded PC. New hardware platforms and technology have contributed significantly to the evolution of DCS and SCADA systems, further blurring the boundaries and changing definitions. Security SCADA and PLC systems are vulnerable to cyber attack. The U.S. Government Joint Capability Technology Demonstration (JCTD) known as MOSAICS (More Situational Awareness for Industrial Control Systems) is the initial demonstration of cybersecurity defensive capability for critical infrastructure control systems. MOSAICS addresses the Department of Defense (DOD) operational need for cyber defense capabilities to defend critical infrastructure control systems that affect the physical environment, such as power, water and wastewater, and safety controls, from cyber attack. The MOSAICS JCTD prototype will be shared with commercial industry through Industry Days for further research and development, an approach intended to lead to innovative, game-changing capabilities for cybersecurity for critical infrastructure control systems. See also Automation Plant process and emergency shutdown systems MTConnect OPC Foundation Safety instrumented system (SIS) Control system security Operational Technology References Further reading Guide to Industrial Control Systems (ICS) Security, SP800-82 Rev2, National Institute of Standards and Technology, May 2015. External links Proview, an open source process control system Telemetry Control system Control engineering Manufacturing
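A minimal sketch of the division of labour described in the SCADA systems section: a PLC-style loop holds a flow at its setpoint on every scan cycle, while a supervisory layer only reads values and writes new setpoints. All class names, gains and numbers are invented for illustration.

```python
class FlowLoop:
    """Stand-in for a closed control loop running on a PLC or RTU."""

    def __init__(self, setpoint):
        self.setpoint = setpoint
        self.flow = 0.0
        self.valve = 0.0  # valve opening, 0..1

    def scan(self):
        # Crude proportional control, executed every PLC scan cycle.
        error = self.setpoint - self.flow
        self.valve = min(1.0, max(0.0, self.valve + 0.005 * error))
        self.flow = 100.0 * self.valve  # toy process response

class ScadaSupervisor:
    """Supervisory layer: monitors the loop and adjusts its setpoint only."""

    def __init__(self, loop):
        self.loop = loop

    def read(self):
        return {"setpoint": self.loop.setpoint, "flow": round(self.loop.flow, 1)}

    def write_setpoint(self, value):
        self.loop.setpoint = value  # supervisory intervention, not direct control

loop = FlowLoop(setpoint=40.0)
scada = ScadaSupervisor(loop)
for _ in range(50):
    loop.scan()               # the loop runs locally, with or without SCADA
print(scada.read())           # operator display: flow has settled near 40
scada.write_setpoint(60.0)    # operator raises the setpoint via SCADA
for _ in range(50):
    loop.scan()
print(scada.read())           # flow settles near the new setpoint
```

Note that the supervisory object never touches the valve: even if the SCADA layer disappears, the loop keeps regulating at its last setpoint, mirroring how RTUs and PLCs retain local control when the master station is unavailable.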
Industrial control system
[ "Engineering" ]
2,427
[ "Manufacturing", "Automation", "Industrial engineering", "Control engineering", "Mechanical engineering", "Industrial automation" ]
7,333,443
https://en.wikipedia.org/wiki/Fair%20Trade%20Certified%20Mark
The Fair Trade Certified Mark is a fair trade certification mark used primarily in the United States and Canada. It appears on products as an independent guarantee that disadvantaged producers in the developing world are getting a better deal. The Fair Trade Certified Mark is the North American equivalent of the International Fairtrade Certification Mark used in Europe, Africa, Asia, Australia and New Zealand. For a product to carry either Certification Mark, it must come from producer organizations inspected and certified by Fair Trade USA. The crops must be grown and harvested in accordance with the fair trade standards set by Fair Trade USA. Some of the supply chains are also monitored by FLOCERT to ensure the integrity of labelled products. Only Fair Trade USA (formerly "TransFair USA") licensees can use the Fair Trade Certified Mark on their products. The Fair Trade Certified Mark in the United States was introduced by TransFair USA on the American market in 1998. In 2012 a variation of the US Fair Trade certification mark was adopted, with the benefit of being registered globally as a trade mark. The mark is designed to stand out better on the shelf through a far simpler design and the use of color. The single basket with outstretched hands indicates sharing and a "give and take" between producers and purchasers. The green signals the environmental strength of Fair Trade. References External links Fair Trade Certified Certification marks Fair trade Consumer symbols Ecolabelling Social impact Labor-related organizations
Fair Trade Certified Mark
[ "Mathematics" ]
282
[ "Symbols", "Certification marks" ]
7,334,144
https://en.wikipedia.org/wiki/Year%202011%20problem
The year 2011 problem or the Y1C problem was a potential problem involving computers and computer systems in Taiwan on the night of 31 December 2010 to 1 January 2011. Similar to the year 2000 problem faced by much of the world in the lead-up to 2000, the year 2011 problem was a side effect of Taiwan's use of the Republic of China calendar for official purposes. This calendar is based on the founding of the Republic of China in 1912 (year 1), so the year 2011 on the Gregorian calendar corresponds to year 100 on Taiwan's official calendar, which posed potential problems for any program that treats years as two-digit values. Reported problems As most Taiwanese had anticipated the problem after the year 2000 problem, the impact of the Y1C computer bug was minimal. Many computers were already using a three-digit system for dates, with a zero being used as the first digit for years below 100 (Gregorian 2010 or earlier). Some government documents such as driver's licenses already refer to years over 100; nothing more than minor glitches were reported. Some iPhone users reported that their alarm tool failed to function on 1 January 2011. See also Time formatting and storage bugs References External links Minguo Year 100 Problem Service Web (Traditional Chinese Only) Calendars Software bugs Society of Taiwan Taiwan under Republic of China rule 2011 in Taiwan Time formatting and storage bugs
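A minimal sketch of the failure mode described above, assuming a legacy system that stores Republic of China calendar years in a two-digit field; the functions are hypothetical illustrations, not code from any affected system.

```python
def roc_two_digit(gregorian_year):
    """Legacy-style two-digit ROC year field (ROC year 1 = 1912)."""
    roc_year = gregorian_year - 1911
    return f"{roc_year % 100:02d}"  # the truncation to two digits is the bug

print(roc_two_digit(2010))  # "99"  (ROC year 99)
print(roc_two_digit(2011))  # "00"  (ROC year 100, truncated)

# A string (or numeric) comparison now orders 2011 before 2010:
print(roc_two_digit(2011) < roc_two_digit(2010))  # True -- the Y1C problem

def roc_three_digit(gregorian_year):
    """The workaround described above: a three-digit field with leading zeros."""
    return f"{gregorian_year - 1911:03d}"

print(roc_three_digit(2010), roc_three_digit(2011))  # "099" "100": order restored
```

The three-digit form with a leading zero for years below 100 is exactly the convention the article says many Taiwanese systems had already adopted, which is why the rollover caused only minor glitches.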
Year 2011 problem
[ "Physics" ]
280
[ "Spacetime", "Calendars", "Physical quantities", "Time" ]
7,334,145
https://en.wikipedia.org/wiki/Cell%20phone%20lot
A cell phone lot is a parking lot, typically located at airports, where people can wait before picking up passengers. The purpose of these lots is to reduce congestion at arrival sections by preventing cars from continuously circling around the airport or waiting on the sides of highways to avoid paying fees at the airport parking lots. Once the passenger's flight lands, after they collect their luggage and are ready to be picked up, they call the person waiting in the cell phone lot. These lots are usually free and only minutes away from the terminals. References Airport infrastructure
Cell phone lot
[ "Engineering" ]
111
[ "Airport infrastructure", "Aerospace engineering" ]
7,334,163
https://en.wikipedia.org/wiki/List%20of%20compilers
This page is intended to list all current compilers, compiler generators, interpreters, translators, tool foundations, assemblers, automatable command line interfaces (shells), etc.

Ada compilers
ALGOL 60 compilers
ALGOL 68 compilers (cf. ALGOL 68s specification and implementation timeline)
Assemblers (Intel *86)
Assemblers (Motorola 68*)
Assemblers (Zilog Z80)
Assemblers (other)
BASIC compilers
BASIC interpreters
C compilers
C++ compilers
C# compilers
COBOL compilers
Common Lisp compilers
D compilers
DIBOL/DBL compilers
ECMAScript interpreters
Eiffel compilers
Forth compilers and interpreters
Fortran compilers
Go compilers
Haskell compilers
ISLISP compilers and interpreters
Java compilers
Lisaac compiler
Pascal compilers
Perl interpreters
PHP compilers
PL/I compilers
Python compilers and interpreters
Ruby compilers and interpreters
Rust compilers
Smalltalk compilers
Tcl interpreters
Command language interpreters
Rexx interpreters
CLI compilers

Source-to-source compilers
This list is incomplete. A more extensive list of source-to-source compilers can be found here.

Free/libre and open source compilers
Production quality, free/libre and open source compilers.
Amsterdam Compiler Kit (ACK) [C, Pascal, Modula-2, Occam, and BASIC] [Unix-like]
Clang C/C++/Objective-C Compiler
AMD Optimizing C/C++ Compiler
FreeBASIC [Basic] [DOS/Linux/Windows]
Free Pascal [Pascal] [DOS/Linux/Windows(32/64/CE)/MacOS/NDS/GBA/..(and many more)]
GCC: C, C++ (G++), Java (GCJ), Ada (GNAT), Objective-C, Objective-C++, Fortran (GFortran), and Go (GCCGo); also available, but not in standard are: Modula-2, Modula-3, Pascal, PL/I, D, Mercury, VHDL; Linux, the BSDs, macOS, NeXTSTEP, Windows and BeOS, among others
Local C compiler [C] [Linux, SPARC, MIPS]
The LLVM Compiler Infrastructure, which is also frequently used for research
Portable C Compiler [C] [Unix-like]
Open Watcom [C, C++, and Fortran] [Windows and OS/2, Linux/FreeBSD WIP]
TenDRA [C/C++] [Unix-like]
Tiny C Compiler [C] [Linux, Windows]
Open64, supported by AMD on Linux
XPL PL/I dialect (several systems)
Swift [Apple OSes, Linux, Windows (as of version 5.3)]

Research compilers
Research compilers are mostly not robust or complete enough to handle real, large applications. They are used mostly for fast prototyping new language features and new optimizations in research areas.
Open64: A popular research compiler. Open64 merges the open source changes from the PathScale compiler mentioned.
ROSE: an open source compiler framework to generate source-to-source analyzers and translators for C/C++ and Fortran, developed at Lawrence Livermore National Laboratory
MILEPOST GCC: interactive plugin-based open-source research compiler that combines the strength of GCC and the flexibility of the common Interactive Compilation Interface that transforms production compilers into interactive research toolsets
Interactive Compilation Interface – a plugin system with high-level API to transform production-quality compilers such as GCC into powerful and stable research infrastructure while avoiding developing new research compilers from scratch
Phoenix optimization and analysis framework by Microsoft
Edison Design Group: provides production-quality front end compilers for C, C++, and Java (a number of the compilers listed on this page use front end source code from Edison Design Group). Additionally, Edison Design Group makes their proprietary software available for research uses.
See also Compiler Comparison of integrated development environments List of command-line interpreters Footnotes References External links List of C++ compilers, maintained by C++'s inventor, Bjarne Stroustrup List of free C/C++ compilers and interpreters List of compiler resources Compilers
List of compilers
[ "Technology" ]
916
[ "Computing-related lists", "Lists of software" ]
7,335,240
https://en.wikipedia.org/wiki/Jacques%20Bouveresse
Jacques Bouveresse (20 August 1940 – 9 May 2021) was a French philosopher who wrote on subjects including Ludwig Wittgenstein, Robert Musil, Karl Kraus, philosophy of science, epistemology, philosophy of mathematics and analytical philosophy. Bouveresse was called "an avis rara among the better known French philosophers in his championing of critical standards of thought." He was Professor Emeritus at the Collège de France, where until 2010 he held the chair of philosophy of language and epistemology. His disciple Claudine Tiercelin was appointed to a chair of metaphysics and philosophy of knowledge upon his retirement. Education and career Born on 20 August 1940 in Épenoy in the Doubs département of France into a farming family, Jacques Bouveresse completed his secondary education at the seminary of Besançon. He spent two years of preparation for the baccalauréat in philosophy and scholastic theology at Faverney in Haute-Saône. He took his preparatory literary classes at the Lycée Lakanal in Sceaux, and in 1961 entered the École Normale Supérieure in Paris. He presented his doctoral thesis in philosophy on Wittgenstein, entitled "Le mythe de l'intériorité. Expérience, signification et langage privé chez Wittgenstein". Beginning with his earliest works, he consistently constructed his own philosophical and intellectual path, without following the normal routes and modes of academia. In 1976, Wittgenstein was practically unknown in France, as were Musil and the logic and analytical philosophy which Bouveresse had begun to study in the 1960s. These last two domains notably propelled him towards the lectures of Jules Vuillemin and Gilles Gaston Granger, who at the time were practically alone in occupying themselves with these problems, and with whom he maintained a lasting friendship. Academic career: 1966–1969: Assistant to the Section de Philosophie of the University of Paris (teaching logic) 1969–1971: Maître-Assistant to the UER de Philosophie of the Université Paris I 1971–1975: Attached to the CNRS 1975–1979: Maître de Conférences at the Université Paris I 1979–1983: Professor at the University of Geneva 1983–1995: Professor at the University of Paris From 1995: Professor at the Collège de France in the chair of philosophie du langage et de la connaissance. Works Bouveresse's philosophy is a continuation of the intellectual and philosophical tradition of central Europe (Brentano, Boltzmann, Helmholtz, Frege, the Vienna Circle, Kurt Gödel). His philosophical programme is in nearly all respects similar to the one pursued by many present-day analytic philosophers. The thought of Robert Musil Jacques Bouveresse was interested in the thought of the early 20th-century Austrian novelist Robert Musil (who wrote a thesis on philosophy), famous for his novel The Man Without Qualities, as well as the aversion/fascination with which Paul Valéry regarded philosophy. Incompleteness and philosophy Apart from his work on Ludwig Wittgenstein, Jacques Bouveresse was interested in the incompleteness theorems of Kurt Gödel and their philosophical consequences. It is on this account that he attacked, in a popular work, Prodiges et vertiges de l'analogie, the use made of these theorems by Régis Debray. Bouveresse denounces the literary distortion of a scientific concept for the purpose of a thesis. This distortion, according to him, has no other purpose than to overwhelm a readership which lacks the training necessary to comprehend such complex theorems.
Bouveresse's reproach to Debray is not that he uses a scientific concept for the purpose of an analogy, but that he uses such a difficult-to-understand theorem in an attempt to provide an absolute justification, in the form of the classic sophism of the argument from authority. According to Bouveresse, the incompleteness of a formal system, which applies to certain mathematical systems, in no way implies the incompleteness of sociology, which is not a formal system. Bibliography (in French) Unless stated otherwise, published by Éditions de Minuit. Most books published by the Collège de France editions are freely accessible on the Collège's website. 1969: La philosophie des sciences, du positivisme logique in Histoire de la philosophie, vol. 4. Ed. François Châtelet 1971: La parole malheureuse. De l'Alchimie linguistique à la grammaire philosophique 1973: Wittgenstein : la rime et la raison, science, éthique et esthétique 1976: Le mythe de l'intériorité. Expérience, signification et langage privé chez Wittgenstein 1984: Le philosophe chez les autophages 1984: Rationalité et cynisme 1987: La force de la règle, Wittgenstein et l'invention de la nécessité 1988: Le pays des possibles, Wittgenstein, les mathématiques et le monde réel 1991: Philosophie, mythologie et pseudo-science, Wittgenstein lecteur de Freud, Éditions de l'éclat 1991: Herméneutique et linguistique, suivi de Wittgenstein et la philosophie du langage, Éditions de l'éclat 1993: L'homme probable, Robert Musil, le hasard, la moyenne et l'escargot de l'Histoire, Éditions de l'éclat 1994: 'Wittgenstein', in Michel Meyer, La philosophie anglo-saxonne, PUF 1995: , Éditions Jacqueline Chambon 1996: La demande philosophique. Que veut la philosophie et que peut-on vouloir d'elle ?, Éditions de l'Eclat 1997: Dire et ne rien dire, l'illogisme, l'impossibilité et le non-sens, Éditions Jacqueline Chambon 1998: Le Philosophe et le réel. Entretiens avec Jean-Jacques Rosat, Hachette 1999: Prodiges et vertiges de l'analogie. De l'abus des belles-lettres dans la pensée, Éditions Liber-Raisons d'agir 2000: Essais I - Wittgenstein, la modernité, le progrès et le déclin, Agone 2001: Essais II - L'Epoque, la mode, la morale, la satire, Agone 2001: Schmock ou le triomphe du journalisme, La grande bataille de Karl Kraus, Seuil 2003: Essais III : Wittgenstein ou les sortilèges du langage, Agone 2001: La voix de l'âme et les chemins de l'esprit, Seuil, coll. Liber 2004: Bourdieu savant & politique, Agone 2004: Langage, perception et réalité, tome 2, Physique, phénoménologie et grammaire, Ed. Jacqueline Chambon 2004: Essais IV - Pourquoi pas des philosophes, Agone 2005: Robert Musil. L'homme probable, le hasard, la moyenne et l'escargot de l'histoire, (new edition of 1993 above), Éditions de l'éclat 2006: Essais V - Descartes, Leibniz, Kant, Agone Peut-on ne pas croire ? Sur la vérité, la croyance et la foi, Agone, 2007 Satire & prophétie : les voix de Karl Kraus, Agone, 2007 La Connaissance de l'écrivain : sur la littérature, la vérité et la vie, Agone, 2008 Que peut-on faire de la religion ?, Agone, 2011 Essais VI. Les Lumières des positivistes, Agone, 2011 (ISBN 978-2-7489-0066-8) Dans le labyrinthe : nécessité, contingence et liberté chez Leibniz. Cours 2009 & 2010, Publications du Collège de France, 2013 Qu’est-ce qu’un système philosophique ? Cours 2007 & 2008, Publications du Collège de France, 2013 Études de philosophie du langage, Publications du Collège de France, 2013 À temps et à contretemps.
Conférences publiques, La philosophie de la connaissance au Collège de France, 2013 Why I am so very unFrench, and other essays, Publications du Collège de France, 2013 Le danseur et sa corde, Agone, 2014 De la philosophie considérée comme un sport, Agone, 2015 Le Troisième monde. Signification, vérité et connaissance chez Frege, Publications du Collège de France, 2015 L’Éthique de la croyance et la question du ‘poids de l’autorité’, Publications du Collège de France, 2015 Une épistémologie réaliste est-elle possible ? Réflexions sur le réalisme structurel de Poincaré, Publications du Collège de France, 2015. Ernest Renan, la science, la métaphysique, la religion et la question de leur avenir, Publications du Collège de France, 2015 Nietzsche contre Foucault : Sur la vérité, la connaissance et le pouvoir, Agone, 2016 Percevoir la musique. Helmholtz et la théorie physiologique de la musique, Éditions L'improviste, Collection « Les Aéronautes de l'esprit », 2016 Le Mythe moderne du progrès, Agone, 2017 Le Parler de la musique, I. La musique, le langage, la culture et l’Histoire, L’Improviste, 2017 L’Histoire de la philosophie, l’histoire des sciences, et la philosophie de l’histoire de la philosophie, Publications du Collège de France, 2017 Les Premiers jours de l’inhumanité. Karl Kraus et la guerre, Hors d’atteinte, 2019 Le Parler de la musique, II. La Musique chez les Wittgenstein, L’Improviste, 2019 Le Parler de la musique, III. Entre Brahms et Wagner : Nietzsche, Wittgenstein, la philosophie et la musique, L’Improviste, 2020 Les foudres de Nietzsche: Et l'aveuglement des disciples, Hors-d'atteinte, 212 p., 2021 Les vagues du langage : le « paradoxe de Wittgenstein » ou comment peut-on suivre une règle ?, Seuil, 2022 Co-edited books (with Herman Parret) Meaning and Understanding, De Gruyter, 1981. (with Sandra Laugier and Jean-Jacques Rosat) Wittgenstein, dernières pensées, Agone, 2002. (with Jean-Jacques Rosat) Philosophies de la perception. Phénoménologie, grammaire et sciences cognitives, Odile Jacob, 2003. (with Delphine Chapuis-Schmitz and Jean-Jacques Rosat) L’Empirisme logique à la limite. Schlick, le langage et l’expérience, CNRS-éditions, 2006. (with Pierre Wagner) Mathématiques et expérience : 1919-1938. L'application et l'interprétation des mathématiques dans la philosophie de l'empirisme logique de l'entre-deux guerres, Odile Jacob, 2008. References External links Qu'appellent-ils « penser »? Bouveresse on the affaire Sokal and its consequences. Entretien paru dans l'Humanité (January 2004). Interview about Que peut-on faire de la religion? (December 2012). 1940 births 2021 deaths People from Doubs Lycée Lakanal alumni École Normale Supérieure alumni Academic staff of the Collège de France Academic staff of the University of Paris Academic staff of the University of Geneva Analytic philosophers Philosophers of language French epistemologists 20th-century French philosophers 21st-century French philosophers 21st-century French writers Legion of Honour refusals French philosophers of science Philosophers of mathematics Philosophers of social science Critics of postmodernism French male writers
Jacques Bouveresse
[ "Mathematics" ]
2,672
[ "Philosophers of mathematics" ]
7,335,490
https://en.wikipedia.org/wiki/Third%20Tunnel%20of%20Aggression
The Third Tunnel of Aggression (Korean: 제3땅굴; Third Infiltration Tunnel or 3rd Tunnel) is one of four known tunnels under the border between North Korea and South Korea, extending south of Panmunjom. Background The incomplete tunnel, located only a short distance from Seoul, was discovered in October 1978 following the detection of an underground explosion in June 1978, apparently caused by the tunnellers, who had progressed under the south side of the Korean Demilitarized Zone (DMZ). It took four months to locate the tunnel precisely and dig an intercept tunnel. The incomplete tunnel runs through bedrock deep below ground. It was apparently designed for a surprise attack on Seoul from North Korea and could, according to visitor information in the tunnel, accommodate 30,000 men per hour along with light weaponry. Upon discovery of the third tunnel, the United Nations Command accused North Korea of threatening the 1953 Korean Armistice Agreement signed at the end of the Korean War. Its description as a "tunnel of aggression" was given by South Korea, which considered it an act of aggression on the part of North Korea. Initially, North Korea denied building the tunnel. North Korea then declared it part of a coal mine, the tunnel having been blackened by construction explosions. Signs in the tunnel claim that there is no geological likelihood of coal being in the area. The walls of the tunnel where tourists are taken are observably granite, a stone of igneous origin, whereas coal would be found in stone of sedimentary origin. A total of four tunnels have been discovered so far, but there are believed to be up to twenty more. The South Korean Armed Forces still devote specialist resources to finding infiltration tunnels, though tunnels are much less significant now that North Korean long-range artillery and missiles have become more effective. Tourist site The tunnel is now a tourist site, though still well guarded. Visitors enter either by walking down a long steep incline that starts in a lobby with a gift shop, or via a rubber-tyred train with a driver's position at the front or the back (depending on the direction of travel, as there is only one set of rails) and padded seats facing forwards and backwards in rows for up to three passengers each. Photography is forbidden within the tunnel. The South Koreans have blocked the actual Military Demarcation Line in the tunnel with three concrete barricades. Tourists can walk as far as the third barricade, and the second barricade is visible through a small window in the third. See also North Korean infiltration tunnels Korean People's Army List of tunnels in North Korea References External links Korean Demilitarized Zone Tunnel warfare Tunnels in North Korea Tunnels in South Korea 1978 in North Korea 1978 in South Korea Espionage North Korea–South Korea relations
Third Tunnel of Aggression
[ "Engineering" ]
560
[ "Military engineering", "Tunnel warfare" ]
7,335,684
https://en.wikipedia.org/wiki/Spandrel%20%28biology%29
In evolutionary biology, a spandrel is a phenotypic trait that is a byproduct of the evolution of some other characteristic, rather than a direct product of adaptive selection. Stephen Jay Gould and Richard Lewontin brought the term into biology in their 1979 paper "The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Programme". Adaptationism is a point of view that sees most organismal traits as adaptive products of natural selection. Gould and Lewontin sought to temper what they saw as adaptationist bias by promoting a more structuralist view of evolution. The term "spandrel" originates from architecture, where it refers to the roughly triangular spaces between the top of an arch and the ceiling. Etymology The term was coined by paleontologist Stephen Jay Gould and population geneticist Richard Lewontin in their paper "The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Programme" (1979). Evolutionary biologist Günter P. Wagner called the paper "the most influential structuralist manifesto". In their paper, Gould and Lewontin employed the analogy of spandrels in Renaissance architecture, such as the curved areas of masonry between arches supporting a dome that arise as a consequence of decisions about the shape of the arches and the base of the dome, rather than being designed for the artistic purposes for which they were often employed. The authors singled out properties like the necessary number of four spandrels and their specific three-dimensional shape. At the time, it was widely thought in the scientific community that every trait an animal had developed with a positive effect on its fitness was due to natural selection or some adaptation. Gould and Lewontin proposed an alternative hypothesis: that adaptation and natural selection also give rise to byproducts. Because these byproducts of adaptation had no real relative advantage for survival, they were termed spandrels. In the biological sense, a "spandrel" might result from a requirement inherent in the body plan of an organism, or as a byproduct of some other constraint on adaptive evolution. In response to the position that spandrels are just small, unimportant byproducts, Gould and Lewontin argue that "we must not recognize that small means unimportant. Spandrels can be as prominent as primary adaptations". A main example used by Gould and Lewontin is the human brain. Many secondary processes and actions come in addition to the main functions of the human brain. These secondary processes and thoughts can eventually turn into an adaptation or provide a fitness advantage to humans. Just because something is a secondary trait or byproduct of an adaptation does not mean it has no use. In 1982, Gould and Vrba introduced the term "exaptation" for characteristics that enhance fitness in their present role but were not built for that role by natural selection. Exaptations may be divided into two subcategories: pre-adaptations and spandrels. Spandrels are characteristics that did not originate by the direct action of natural selection and that were later co-opted for a current use. Gould saw the term as optimally suited for evolutionary biology for "the concept of a nonadaptive architectural by-product of definite and necessary form – a structure of particular size and shape that then becomes available for later and secondary utility".
Criticism of the term Gould and Lewontin's proposal generated a large literature of critique, which Gould characterised as being grounded in two ways. First, a terminological claim was offered that the "spandrels" of Basilica di San Marco were not spandrels at all, but rather were pendentives. Gould responded, "The term spandrel may be extended from its particular architectural use for two-dimensional byproducts to the generality of 'spaces left over', a definition that properly includes the San Marco pendentives." Other critics, such as Daniel Dennett, further claimed (in Darwin's Dangerous Idea and elsewhere) that these pendentives are not merely architectural by-products as Gould and Lewontin supposed. Dennett argues that alternatives to pendentives, such as corbels or squinches, would have served equally well from an architectural standpoint, but pendentives were deliberately selected due to their aesthetic value. Critics such as H. Allen Orr argued that Lewontin and Gould's oversight in this regard illustrates their underestimation of the pervasiveness of adaptations found in nature. Response to criticism Gould responded that such critics ignore that later selective value is a separate issue from origination as a necessary consequence of structure; he summarised his use of the term 'spandrel' in 1997: "Evolutionary biology needs such an explicit term for features arising as byproducts, rather than adaptations, whatever their subsequent exaptive utility ... Causes of historical origin must always be separated from current utilities; their conflation has seriously hampered the evolutionary analysis of form in the history of life." Gould cites the masculinized genitalia of female hyenas and the brooding chamber of some snails as examples of evolutionary spandrels. Gould (1991) outlines some considerations for grounds for assigning or denying a structure the status of spandrel, pointing first to the fact that a structure originating as a spandrel through primary exaptation may have been further crafted for its current utility by a suite of secondary adaptations; thus, how well crafted a structure is for a function cannot be used as grounds for assigning or denying spandrel status. The nature of the current utility of a structure also does not provide a basis for assigning or denying spandrel status, nor does he see the origin of a structure as having any relationship to the extent or vitality of a later co-opted role, but he places importance on the later evolutionary meaning of a structure. This seems to imply that the design and secondary utilization of spandrels may feed back into the evolutionary process and thus determine major features of the entire structure. The grounds Gould does accept as valid for assigning or denying a structure the status of spandrel are historical order and comparative anatomy. Historical order involves the use of historical evidence to determine which feature arose as a primary adaptation and which one appeared subsequently as a co-opted by-product. In the absence of historical evidence, inferences are drawn about the evolution of a structure through comparative anatomy. Evidence is obtained by comparing current examples of the structure in a cladistic context and by subsequently trying to determine a historical order from the distribution yielded by tabulation.
Examples of spandrels Human chin The human chin has been proposed as an example of a spandrel, since modern humans (Homo sapiens) are the only species with a chin, an anatomical feature with no known function. Alternatively, however, it has been suggested that chins may be the result of selection, based on an analysis of the rate of chin evolution in the fossil record. Language There is disagreement among experts as to whether language is a spandrel. Linguist Noam Chomsky and Gould himself have both argued that human language may have originated as a spandrel. Chomsky writes that the language faculty, and the property of discrete infinity or recursion that plays a central role in his theory of universal grammar (UG), may have evolved as a spandrel. In this view, Chomsky initially pointed to language being a result of increased brain size and increasing complexity, though he provided no definitive answers as to what factors may have led to the brain attaining the size and complexity of which discrete infinity is a consequence. Steven Pinker and Ray Jackendoff say Chomsky's case is unconvincing. Pinker contends that the language faculty is not a spandrel, but rather a result of natural selection. Newmeyer (1998) instead views the lack of symmetry, the irregularity and idiosyncrasy that universal grammar tolerates, the widely different principles of organization of its various sub-components, and the consequent wide variety of linking rules relating them as evidence that such design features do not qualify as an exaptation. He suggests that universal grammar cannot be derivative and autonomous at the same time, and that Chomsky wants language to be an epiphenomenon and an "organ" simultaneously, where an organ is defined as a product of a dedicated genetic blueprint. Rudolph Botha counters that Chomsky has offered his conception of the feature of recursion but not a theory of the evolution of the language faculty as a whole. Music There is disagreement among experts as to whether music is a spandrel. Pinker has written that "As far as biological cause and effect are concerned, music is useless. It shows no signs of design for attaining a goal such as long life, grandchildren, or accurate perception and prediction of the world", and "I suspect that music is auditory cheesecake, an exquisite confection crafted to tickle the sensitive spots of at least six of our mental faculties." Dunbar found this conclusion odd, and stated that "it falls foul of what we might refer to as the Spandrel Fallacy: 'I haven't really had time to determine empirically whether or not something has a function, so I'll conclude that it can't possibly have one.'" Dunbar states that there are at least two potential roles of music in evolution: "One is its role in mating and mate choice, the other is its role in social bonding." See also Atavism Pleiotropy, a gene that has an effect on more than one trait Vestigiality Exaptation References Sources Stephen Jay Gould (2002). The Structure of Evolutionary Theory. Belknap Press of Harvard University Press, Cambridge, Massachusetts Phillip Stevens Thurtle. "The G Files: Linking 'The Selfish Gene' And 'The Thinking Reed'" Daniel Dennett (1995). Darwin's Dangerous Idea: Evolution and the Meanings of Life. Simon & Schuster. Marc D. Hauser, Noam Chomsky, and W. Tecumseh Fitch (2002). "The Faculty of Language: What Is It, Who Has It, and How Did It Evolve?" Science 298:1569–1579. Robert Mark (1996). "Architecture and Evolution" American Scientist (July–August): 383–389.
1979 neologisms Evolutionary biology concepts
Spandrel (biology)
[ "Biology" ]
2,161
[ "Evolutionary biology concepts" ]
7,336,655
https://en.wikipedia.org/wiki/Paxillin
Paxillin is a protein that in humans is encoded by the PXN gene. Paxillin is expressed at focal adhesions of non-striated cells and at costameres of striated muscle cells, and it functions to adhere cells to the extracellular matrix. Mutations in PXN as well as abnormal expression of paxillin protein have been implicated in the progression of various cancers. Structure Human paxillin is 64.5 kDa in molecular weight and 591 amino acids in length. The C-terminal region of paxillin is composed of four tandem double zinc finger LIM domains that are cysteine/histidine-rich with conserved repeats; these serve as binding sites for the protein tyrosine phosphatase-PEST and tubulin, and serve as the targeting motif for focal adhesions. The N-terminal region of paxillin has five highly conserved leucine-rich sequences termed LD motifs, which mediate several interactions, including those with pp125FAK and vinculin. The LD motifs are predicted to form amphipathic alpha helices, with each leucine residue positioned on one face of the alpha helix to form a hydrophobic protein-binding interface. The N-terminal region also has a proline-rich domain that has potential for Src-SH3 binding. Three N-terminal YXXP motifs may serve as binding sites for talin or v-Crk SH2. Function Paxillin is a signal transduction adaptor protein discovered in 1990 in the laboratory of Keith Burridge. The C-terminal region of paxillin contains four LIM domains that target paxillin to focal adhesions, presumably through a direct association with the cytoplasmic tail of beta-integrin. The N-terminal region of paxillin is rich in protein–protein interaction sites. The proteins that bind to paxillin are diverse and include protein tyrosine kinases, such as Src and focal adhesion kinase (FAK), structural proteins, such as vinculin and actopaxin, and regulators of actin organization, such as COOL/PIX and PKL/GIT. Paxillin is tyrosine-phosphorylated by FAK and Src upon integrin engagement or growth factor stimulation, creating binding sites for the adapter protein Crk. In striated muscle cells, paxillin is important in costamerogenesis, or the formation of costameres, which are specialized focal adhesion-like structures in muscle cells that tether Z-disc structures across the sarcolemma to the extracellular matrix. The current working model of costamerogenesis is that in cultured, undifferentiated myoblasts, alpha-5 integrin, vinculin and paxillin are in complex and located primarily at focal adhesions. During early differentiation, premyofibril formation through sarcomerogenesis occurs, and premyofibrils assemble at structures that are typical of focal adhesions in non-muscle cells; a similar phenomenon is observed in cultured cardiomyocytes. Premyofibrils become nascent myofibrils, which progressively align to form mature myofibrils, and nascent costamere structures appear. Costameric proteins then redistribute to form mature costameres. While the precise functions of paxillin in this process are still being unveiled, studies investigating binding partners of paxillin have provided mechanistic understanding of its function. The proline-rich region of paxillin specifically binds to the second SH3 domain of ponsin, which occurs after the onset of myogenic differentiation and with expression restricted to costameres. The binding of paxillin to focal adhesion kinase (FAK) is also known to be critical for directing paxillin function.
The phosphorylation of FAK at serine-910 regulates the interaction of FAK with paxillin and controls the stability of paxillin at costameres in cardiomyocytes, with phosphorylation reducing the half-life of paxillin. This is important because the stability of the FAK-paxillin interaction is likely inversely related to the stability of the vinculin-paxillin interaction, which would likely indicate the strength of the costamere interaction as well as sarcomere reorganization; processes which have been linked to dilated cardiomyopathy. Additional studies have shown that paxillin itself is phosphorylated, and this participates in hypertrophic signaling pathways in cardiomyocytes. Treatment of cardiomyocytes with the hypertrophic agonist phenylephrine stimulated a rapid increase in tyrosine phosphorylation of paxillin, which was mediated by protein tyrosine kinases. The structural reorganization of paxillin in cardiomyocytes has also been detected in mouse models of dilated cardiomyopathy. In a mouse model of tropomodulin overexpression, paxillin distribution was reorganized, coordinate with increased phosphorylation and cleavage of paxillin. Similarly, paxillin was shown to have altered localization in cardiomyocytes from transgenic mice expressing a constitutively active rac1. These data show that alterations in costameric organization, in part via paxillin redistribution, may be a pathogenic mechanism in dilated cardiomyopathy. In addition, in mice subjected to pressure overload-induced cardiac hypertrophy, inducing hypertrophic cardiomyopathy, paxillin expression levels increased, suggesting a role for paxillin in both types of cardiomyopathy. Clinical significance Paxillin has been shown to have a clinically significant role in patients with several cancer types. Enhanced expression of paxillin has been detected in premalignant areas of hyperplasia, squamous metaplasia and goblet cell metaplasia, as well as dysplastic lesions and carcinoma, in high-risk patients with lung adenocarcinoma. Mutations in PXN have been associated with enhanced tumor growth, cell proliferation, and invasion in lung cancer tissues. During tumor transformation, a consistent finding is that paxillin protein is recruited and phosphorylated. Paxillin plays a role in the MET tyrosine kinase signaling pathway, which is upregulated in many cancers. Interactions Paxillin has been shown to interact with PTP-PEST, tubulin, VCL, pp125FAK, SRC, SORBS1, PARVA and ILK. References Further reading External links MBInfo: Paxillin Paxillin Info with links in the Cell Migration Gateway Proteins
Paxillin
[ "Chemistry" ]
1,405
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
7,336,809
https://en.wikipedia.org/wiki/Intracellular%20calcium-sensing%20proteins
Intracellular calcium-sensing proteins are proteins that act in the second messenger system. Examples include: calmodulin calnexin calreticulin gelsolin References External links Human proteins
Intracellular calcium-sensing proteins
[ "Chemistry" ]
39
[ "Biochemistry stubs", "Protein stubs" ]
7,337,068
https://en.wikipedia.org/wiki/Olfactory%20marker%20protein
In molecular biology, olfactory marker protein (OMP) is a protein involved in signal transduction. It is a highly expressed cytoplasmic protein found in mature olfactory sensory receptor neurons of all vertebrates. OMP is a modulator of the olfactory signal transduction cascade. The crystal structure of OMP reveals a beta sandwich consisting of eight strands in two sheets with a jelly-roll topology. Three highly conserved regions have been identified as possible protein–protein interaction sites in OMP, indicating a possible role for OMP in modulating such interactions, thereby acting as a molecular switch. External links References Protein families
Olfactory marker protein
[ "Biology" ]
126
[ "Protein families", "Protein classification" ]
7,337,089
https://en.wikipedia.org/wiki/Delta-sleep-inducing%20peptide
Delta-sleep-inducing peptide (DSIP) is a neuropeptide that, when infused into the mesodiencephalic ventricle of recipient rabbits, induces spindle and delta EEG activity and reduces motor activity. Its amino acid sequence is Trp-Ala-Gly-Gly-Asp-Ala-Ser-Gly-Glu (WAGGDASGE). The gene has yet to be found in rabbits, along with any receptors or precursor peptides. However, searches through BLAST have found that it aligns with a hypothetical Amycolatopsis coloradensis protein. This could indicate that DSIP has a bacterial origin. Discovery Delta-sleep-inducing peptide was first discovered in 1974 by the Swiss Schoenenberger-Monnier group, who isolated it from the cerebral venous blood of rabbits in an induced state of sleep. It was primarily believed to be involved in sleep regulation due to its apparent ability to induce slow-wave sleep in rabbits, but studies on the subject have been contradictory. DSIP-like material has been found in human breast milk. Structure and interactions DSIP is an amphiphilic peptide of molecular weight 850 daltons with the amino acid motif: N-Trp-Ala-Gly-Gly-Asp-Ala-Ser-Gly-Glu-C. It has been found in both free and bound forms in the hypothalamus, limbic system and pituitary, as well as various peripheral organs, tissues and body fluids. In the pituitary it co-localises with many peptide and non-peptide mediators such as corticotropin-like intermediate peptide (CLIP), adrenocorticotrophic hormone (ACTH), melanocyte-stimulating hormone (MSH), thyroid-stimulating hormone (TSH) and melanin concentrating hormone (MCH). It is abundant in the gut secretory cells and in the pancreas, where it co-localises with glucagon. In the brain its action may be mediated by NMDA receptors. In another study, delta-sleep-inducing peptide stimulated acetyltransferase activity through α1 receptors in rats. It is unknown where DSIP is synthesized. In vitro it has been found to have low molecular stability, with a half-life of only 15 minutes, due to the action of a specific aminopeptidase-like enzyme. It has been suggested that in the body it complexes with carrier proteins to prevent degradation, or exists as a component of a large precursor molecule, but as yet no structure or gene has been found for this precursor. Evidence supports the current belief that it is regulated by glucocorticoids. Gimble et al. suggest that DSIP interacts with components of the MAPK cascade and is homologous to glucocorticoid-induced leucine zipper (GILZ). GILZ can be induced by dexamethasone. It prevents Raf-1 activation, which inhibits phosphorylation and activation of ERK. Function Many roles for DSIP have been suggested following research carried out using peptide analogues with greater molecular stability and through measuring the DSIP-like immunological (DSIP-LI) response by injecting DSIP antiserum and antibodies. Roles in endocrine regulation Decreases basal corticotropin level and blocks its release. Stimulates release of luteinizing hormone (LH). Stimulates release of somatoliberin and somatotrophin secretion and inhibits somatostatin secretion. Roles in physiological processes Can act as a stress limiting factor. May have a direct or indirect effect on body temperature and alleviating hypothermia. Can normalize blood pressure and myocardial contraction. It has been shown to enhance the efficiency of oxidative phosphorylation in rat mitochondria in vitro, suggesting it may have antioxidant effects. There is also conflicting evidence as to its involvement in sleep patterns.
Some studies suggest a link between DSIP and slow-wave sleep (SWS) promotion and suppression of rapid eye movement sleep (REM), while some studies show no correlation. Stronger effects on sleep have been noted for the synthesized analogues of DSIP. It may affect human lens epithelial cell function via the MAPK pathway, which is involved in cell proliferation, differentiation, motility, survival, and apoptosis. Roles in disease and medicine It has been found to have anticarcinogenic properties. In a study on mice, injecting a preparation of DSIP over the mice's lifetime decreased total spontaneous tumor incidence 2.6-fold. The same study found it to also have geroprotective effects: it slowed down the age-related switching-off of oestrous function; it decreased by 22.6% the frequency of chromosome aberrations in bone marrow cells; and it increased by 24.1% maximum life span in comparison with the control group. Levels of DSIP may be significant in patients diagnosed with major depressive disorder (MDD). In several studies, levels of DSIP in the plasma and cerebrospinal fluid deviate significantly from the norm in patients with MDD, though there are contradictions as to whether levels are higher or lower than in healthy control patients. Studies have demonstrated a direct link between GILZ expression (homologous to DSIP) and adipogenesis, which has links to obesity and metabolic syndrome. In studies on rats with metaphit-induced epilepsy, DSIP acted as an anticonvulsant, significantly decreasing the incidence and duration of fits, suggesting DSIP as a potential treatment for epilepsy. DSIP has been found to have an analgesic effect. In studies on mice it was found to have a potent antinociceptive effect when administered intracerebroventricularly or intracisternally (see: Route of administration). Due to its possible effects on sleep and nociception, trials have been carried out to determine whether DSIP can be used as an anaesthetic. One such study found that administration of DSIP to humans as an adjunct to isoflurane anaesthesia actually increased the heart rate and reduced the depth of anaesthesia instead of deepening it as expected. Low plasma concentrations of DSIP have been found in patients with Cushing's syndrome. In Alzheimer's patients levels of DSIP have been found to be slightly elevated, though this is unlikely to be causal. A preparation of DSIP, Deltaran, has been used to correct central nervous system function in children after antiblastic therapy. Ten children aged 3–16 years were given a ten-day course of Deltaran and their bioelectric activity was recorded. It was found that the chemotherapy-induced impairment in the bioelectrical activity of 9 out of the 10 children was reduced by administration of DSIP. DSIP can act antagonistically on opiate receptors to significantly inhibit the development of opioid and alcohol dependence, and it is currently being used in clinical trials to treat withdrawal syndrome. In one such trial it was reported that in 97% of opiate-dependent and 87% of alcohol-dependent patients the symptoms were alleviated by DSIP administration. In some studies administration of DSIP has alleviated narcolepsy and normalized disturbed sleeping patterns. Safety and possible side effects of long-term DSIP use have not been established in clinical research studies. References External links "Deltaran" Nonapeptides Sleep physiology
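As a consistency check on the roughly 850-dalton figure quoted above, the short Python sketch below sums standard average amino-acid residue masses for the sequence WAGGDASGE; the mass table and function are illustrative assumptions, not taken from the source.

```python
# Illustrative check of the ~850 Da molecular weight quoted for DSIP.
# Average residue masses (daltons) are standard textbook values (assumption).
RESIDUE_MASS = {
    "W": 186.21, "A": 71.08, "G": 57.05,
    "D": 115.09, "S": 87.08, "E": 129.12,
}
WATER = 18.02  # one water for the free N- and C-termini of the peptide

def peptide_mass(sequence: str) -> float:
    """Average molecular mass of a peptide from its one-letter sequence."""
    return sum(RESIDUE_MASS[aa] for aa in sequence) + WATER

print(round(peptide_mass("WAGGDASGE")))  # ~849, consistent with the quoted 850 Da
```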
Delta-sleep-inducing peptide
[ "Biology" ]
1,559
[ "Behavior", "Sleep physiology", "Sleep" ]
7,337,217
https://en.wikipedia.org/wiki/Molecular%20orbital%20diagram
A molecular orbital diagram, or MO diagram, is a qualitative descriptive tool explaining chemical bonding in molecules in terms of molecular orbital theory in general and the linear combination of atomic orbitals (LCAO) method in particular. A fundamental principle of these theories is that as atoms bond to form molecules, a certain number of atomic orbitals combine to form the same number of molecular orbitals, although the electrons involved may be redistributed among the orbitals. This tool is very well suited for simple diatomic molecules such as dihydrogen, dioxygen, and carbon monoxide but becomes more complex when discussing even comparatively simple polyatomic molecules, such as methane. MO diagrams can explain why some molecules exist and others do not. They can also predict bond strength, as well as the electronic transitions that can take place. History Qualitative MO theory was introduced in 1928 by Robert S. Mulliken and Friedrich Hund. A mathematical description was provided by contributions from Douglas Hartree in 1928 and Vladimir Fock in 1930. Basics Molecular orbital diagrams are diagrams of molecular orbital (MO) energy levels, shown as short horizontal lines in the center, flanked by constituent atomic orbital (AO) energy levels for comparison, with the energy levels increasing from the bottom to the top. Lines, often dashed diagonal lines, connect MO levels with their constituent AO levels. Degenerate energy levels are commonly shown side by side. Appropriate AO and MO levels are filled with electrons according to the Pauli exclusion principle, symbolized by small vertical arrows whose directions indicate the electron spins. The AO or MO shapes themselves are often not shown on these diagrams. For a diatomic molecule, an MO diagram effectively shows the energetics of the bond between the two atoms, whose AO unbonded energies are shown on the sides. For simple polyatomic molecules with a "central atom" such as methane (CH4) or carbon dioxide (CO2), an MO diagram may show one of the identical bonds to the central atom. For other polyatomic molecules, an MO diagram may show one or more bonds of interest in the molecules, leaving others out for simplicity. Often, even for simple molecules, AO and MO levels of inner orbitals and their electrons may be omitted from a diagram for simplicity. In MO theory, molecular orbitals form by the overlap of atomic orbitals. Because σ bonds feature greater overlap than π bonds, σ bonding and σ* antibonding orbitals feature greater energy splitting (separation) than π and π* orbitals. The atomic orbital energy correlates with electronegativity, as more electronegative atoms hold their electrons more tightly, lowering their energies. Sharing of molecular orbitals between atoms is more important when the atomic orbitals have comparable energy; when the energies differ greatly, the orbitals tend to be localized on one atom and the mode of bonding becomes ionic. A second condition for overlapping atomic orbitals is that they have the same symmetry. Two atomic orbitals can overlap in two ways depending on their phase relationship (or relative signs for real orbitals). The phase (or sign) of an orbital is a direct consequence of the wave-like properties of electrons. In graphical representations of orbitals, orbital phase is depicted either by a plus or minus sign (which has no relationship to electric charge) or by shading one lobe. The sign of the phase itself does not have physical meaning except when mixing orbitals to form molecular orbitals.
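The two overlap cases described next can be written compactly in LCAO form; the following is a standard textbook sketch for two identical atomic orbitals φA and φB, with N+ and N− as normalization constants (notation assumed, not taken from the source):

```latex
\[
  \psi_{\sigma}     = N_{+}\left(\phi_{A} + \phi_{B}\right)
  \quad \text{(in phase: bonding, constructive overlap)}
\]
\[
  \psi_{\sigma^{*}} = N_{-}\left(\phi_{A} - \phi_{B}\right)
  \quad \text{(out of phase: antibonding, nodal plane between the nuclei)}
\]
```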
Two same-sign orbitals have a constructive overlap, forming a molecular orbital with the bulk of the electron density located between the two nuclei. This MO is called the bonding orbital and its energy is lower than that of the original atomic orbitals. A bond involving molecular orbitals which are symmetric with respect to any rotation around the bond axis is called a sigma bond (σ-bond). If the phase cycles once while rotating round the axis, the bond is a pi bond (π-bond). Symmetry labels are further defined by whether the orbital maintains its original character after an inversion about its center; if it does, it is defined gerade, g. If the orbital does not maintain its original character, it is ungerade, u. Atomic orbitals can also interact with each other out of phase, which leads to destructive cancellation and no electron density between the two nuclei at the so-called nodal plane, depicted as a perpendicular dashed line. In this anti-bonding MO, with energy much higher than the original AOs, any electrons present are located in lobes pointing away from the central internuclear axis. For a corresponding σ-bonding orbital, such an orbital would be symmetrical but differentiated from it by an asterisk, as in σ*. For a π-bond, corresponding bonding and antibonding orbitals would not have such symmetry around the bond axis and are designated π and π*, respectively. The next step in constructing an MO diagram is filling the newly formed molecular orbitals with electrons. Three general rules apply: The Aufbau principle states that orbitals are filled starting with the lowest energy The Pauli exclusion principle states that the maximum number of electrons occupying an orbital is two, with opposite spins Hund's rule states that when there are several MOs with equal energy, the electrons occupy the MOs one at a time before two electrons occupy the same MO. The filled MO highest in energy is called the highest occupied molecular orbital (HOMO) and the empty MO just above it is then the lowest unoccupied molecular orbital (LUMO). The electrons in the bonding MOs are called bonding electrons and any electrons in the antibonding orbital would be called antibonding electrons. The reduction in energy of these electrons is the driving force for chemical bond formation. Whenever mixing for an atomic orbital is not possible for reasons of symmetry or energy, a non-bonding MO is created, which is often quite similar to and has an energy level equal or close to its constituent AO, thus not contributing to bonding energetics. The resulting electron configuration can be described in terms of bond type, parity and occupancy, for example dihydrogen 1σg2. Alternatively it can be written as a molecular term symbol, e.g. 1Σg+ for dihydrogen. Sometimes, the letter n is used to designate a non-bonding orbital. For a stable bond, the bond order, defined as (number of electrons in bonding MOs − number of electrons in antibonding MOs)/2, must be positive. The relative order in MO energies and occupancy corresponds with electronic transitions found in photoelectron spectroscopy (PES). In this way it is possible to experimentally verify MO theory. In general, sharp PES transitions indicate nonbonding electrons and broad bands are indicative of bonding and antibonding delocalized electrons. Bands can resolve into fine structure with spacings corresponding to vibrational modes of the molecular cation (see Franck–Condon principle). PES energies are different from ionisation energies, which relate to the energy required to strip off the nth electron after the first n − 1 electrons have been removed.
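A compact worked instance of the bond-order rule just stated, using electron counts that appear in the diatomic examples later in this article:

```latex
\[
  \text{bond order} = \frac{n_{\text{bonding}} - n_{\text{antibonding}}}{2}
\]
\[
  \mathrm{H_2}: \frac{2-0}{2} = 1, \qquad
  \mathrm{He_2}: \frac{2-2}{2} = 0 \ \text{(no bond)}, \qquad
  \mathrm{O_2}: \frac{8-4}{2} = 2
\]
```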
MO diagrams with energy values can be obtained mathematically using the Hartree–Fock method. The starting point for any MO diagram is a predefined molecular geometry for the molecule in question. An exact relationship between geometry and orbital energies is given in Walsh diagrams. s-p mixing The phenomenon of s-p mixing occurs when molecular orbitals of the same symmetry formed from the combination of 2s and 2p atomic orbitals are close enough in energy to further interact, which can lead to a change in the expected order of orbital energies. When molecular orbitals are formed, they are mathematically obtained from linear combinations of the starting atomic orbitals. Generally, in order to predict their relative energies, it is sufficient to consider only one atomic orbital from each atom to form a pair of molecular orbitals, as the contributions from the others are negligible. For instance, in dioxygen the 3σg MO can be roughly considered to be formed from interaction of oxygen 2pz AOs only. It is found to be lower in energy than the 1πu MO, both experimentally and from more sophisticated computational models, so that the expected order of filling is the 3σg before the 1πu. Hence the approximation to ignore the effects of further interactions is valid. However, experimental and computational results for homonuclear diatomics from Li2 to N2 and certain heteronuclear combinations such as CO and NO show that the 3σg MO is higher in energy than (and therefore filled after) the 1πu MO. This can be rationalised by noting that the first-approximation 3σg has suitable symmetry to interact with the 2σg bonding MO formed from the 2s AOs. As a result, the 2σg is lowered in energy, whilst the 3σg is raised. For the aforementioned molecules this results in the 3σg being higher in energy than the 1πu MO, which is where s-p mixing is most evident. Likewise, interaction between the 2σu* and 3σu* MOs leads to a lowering in energy of the former and a raising in energy of the latter. However, this is of less significance than the interaction of the bonding MOs. Diatomic MO diagrams A diatomic molecular orbital diagram is used to understand the bonding of a diatomic molecule. MO diagrams can be used to deduce the magnetic properties of a molecule and how they change with ionization. They also give insight into the bond order of the molecule: how many bonds are shared between the two atoms. The energies of the electrons are further understood by applying the Schrödinger equation to a molecule. Quantum mechanics describes the energies exactly for single-electron systems; for multiple-electron systems the energies can be approximated precisely using the Born–Oppenheimer approximation, under which the nuclei are assumed stationary. The LCAO-MO method is used in conjunction to further describe the state of the molecule. Diatomic molecules consist of a bond between only two atoms. They can be broken into two categories: homonuclear and heteronuclear. A homonuclear diatomic molecule is one composed of two atoms of the same element. Examples are H2, O2, and N2. A heteronuclear diatomic molecule is composed of two atoms of two different elements. Examples include CO, HCl, and NO. Dihydrogen The smallest molecule, hydrogen gas exists as dihydrogen (H-H), with a single covalent bond between two hydrogen atoms. As each hydrogen atom has a single 1s atomic orbital for its electron, the bond forms by overlap of these two atomic orbitals. In the figure the two atomic orbitals are depicted on the left and on the right.
The vertical axis always represents the orbital energies. Each atomic orbital is singly occupied with an up or down arrow representing an electron. Application of MO theory for dihydrogen results in having both electrons in the bonding MO with electron configuration 1σg2. The bond order for dihydrogen is (2-0)/2 = 1. The photoelectron spectrum of dihydrogen shows a single set of multiplets between 16 and 18 eV (electron volts). The dihydrogen MO diagram helps explain how a bond breaks. When applying energy to dihydrogen, a molecular electronic transition takes place when one electron in the bonding MO is promoted to the antibonding MO. The result is that there is no longer a net gain in energy. The superposition of the two 1s atomic orbitals leads to the formation of the σ and σ* molecular orbitals. Two atomic orbitals in phase create a larger electron density, which leads to the σ orbital. If the two 1s orbitals are not in phase, a node between them raises the energy, giving the σ* orbital. From the diagram one can deduce the bond order, how many bonds are formed between the two atoms. For this molecule it is equal to one. Bond order can also give insight into how close or stretched a bond has become if a molecule is ionized. Dihelium and diberyllium Dihelium (He-He) is a hypothetical molecule and MO theory helps to explain why dihelium does not exist in nature. The MO diagram for dihelium looks very similar to that of dihydrogen, but each helium has two electrons in its 1s atomic orbital rather than one for hydrogen, so there are now four electrons to place in the newly formed molecular orbitals. The only way to accomplish this is by occupying both the bonding and antibonding orbitals with two electrons, which reduces the bond order ((2−2)/2) to zero and cancels the net energy stabilization. However, by removing one electron from dihelium, the stable gas-phase ion He2+ is formed, with bond order 1/2. Another molecule that is precluded based on this principle is diberyllium. Beryllium has an electron configuration 1s22s2, so there are again two electrons in the valence level. However, the 2s can mix with the 2p orbitals in diberyllium, whereas there are no p orbitals in the valence level of hydrogen or helium. This mixing makes the antibonding 1σu orbital slightly less antibonding than the bonding 1σg orbital is bonding, with the net effect that the whole configuration has a slight bonding nature. This explains the fact that the diberyllium molecule exists and has been observed in the gas phase. The slight bonding nature explains the low dissociation energy of only 59 kJ·mol−1. Dilithium MO theory correctly predicts that dilithium is a stable molecule with bond order 1 (configuration 1σg21σu22σg2). The 1s MOs are completely filled and do not participate in bonding. Dilithium is a gas-phase molecule with a much lower bond strength than dihydrogen because the 2s electrons are further removed from the nucleus. In a more detailed analysis which considers the environment of each orbital due to all other electrons, both the 1σ orbitals have higher energies than the 1s AO and the occupied 2σ is also higher in energy than the 2s AO (see table 1). Diboron The MO diagram for diboron (B-B, electron configuration 1σg21σu22σg22σu21πu2) requires the introduction of an atomic orbital overlap model for p orbitals. The three dumbbell-shaped p-orbitals have equal energy and are oriented mutually perpendicularly (or orthogonally).
The p-orbitals oriented in the z-direction (pz) can overlap end-on, forming a bonding (symmetrical) σ orbital and an antibonding σ* molecular orbital. In contrast to the sigma 1s MO's, the σ 2p has some non-bonding electron density at either side of the nuclei and the σ* 2p has some electron density between the nuclei. The other two p-orbitals, py and px, can overlap side-on. The resulting bonding orbital has its electron density in the shape of two lobes above and below the plane of the molecule. The orbital is not symmetric around the molecular axis and is therefore a pi orbital. The antibonding pi orbital (also asymmetrical) has four lobes pointing away from the nuclei. Both py and px orbitals form a pair of pi orbitals equal in energy (degenerate) and can have higher or lower energies than that of the sigma orbital. In diboron the 1s and 2s electrons do not participate in bonding, but the single electrons in the 2p orbitals occupy the 2πpy and the 2πpx MO's, resulting in bond order 1. Because the electrons have equal energy (they are degenerate), diboron is a diradical, and since the spins are parallel the molecule is paramagnetic. In certain diborynes the boron atoms are excited and the bond order is 3. Dicarbon Like diboron, dicarbon (C-C electron configuration: 1σg21σu22σg22σu21πu4) is a reactive gas-phase molecule. The molecule can be described as having two pi bonds but without a sigma bond. Dinitrogen In dinitrogen, the two σ molecular orbitals of the same symmetry mix and repel each other in energy, which is the reason the diagram is rearranged relative to the more familiar unmixed one: the σ MO derived from the 2p orbitals becomes more non-bonding, as does the σ MO derived from the 2s orbitals, and the 2p σ* orbital is pushed up sharply in energy. The bond order for dinitrogen (1σg21σu22σg22σu21πu43σg2) is three because two electrons are now also added in the 3σg MO, and the molecule is diamagnetic. The MO diagram correlates with the experimental photoelectron spectrum for nitrogen. The 1σ electrons can be matched to a peak at 410 eV (broad), the 2σg electrons at 37 eV (broad), the 2σu electrons at 19 eV (doublet), the 1πu4 electrons at 17 eV (multiplets), and finally the 3σg2 at 15.5 eV (sharp). Dioxygen The treatment of dioxygen parallels the earlier diagrams, but now both the 2s and 2p orbitals are considered. When creating the molecular orbitals from the p orbitals, the three atomic orbitals split into three molecular orbitals: a singly degenerate σ and a doubly degenerate π orbital. Another property that can be observed by examining molecular orbital diagrams is the magnetic property of a molecule, diamagnetic or paramagnetic. If all the electrons are paired, the molecule is slightly repelled by a magnetic field and is classified as diamagnetic. If unpaired electrons are present, the molecule is attracted to a magnetic field and is therefore paramagnetic. Oxygen is an example of a paramagnetic diatomic. The bond order of diatomic oxygen is two. MO treatment of dioxygen is different from that of the previous diatomic molecules because the pσ MO is now lower in energy than the 2π orbitals. This is attributed to the weak interaction between the 2s MO and the 2pz MO: the larger 2s–2p energy gap in oxygen reduces s-p mixing. Distributing 8 electrons over 6 molecular orbitals leaves the final two electrons as a degenerate pair in the 2pπ* antibonding orbitals, resulting in a bond order of 2. As in diboron, these two unpaired electrons have the same spin in the ground state, making dioxygen a paramagnetic diradical known as triplet oxygen. The first excited state has both HOMO electrons paired in one orbital with opposite spins, and is known as singlet oxygen.
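To make the filling rules above concrete, the following minimal sketch (an illustration in Python; the helper names are invented here, not taken from any source cited in this article) fills the valence MO levels of a second-row homonuclear diatomic in the two orderings just discussed, with the 3σg above the 1πu when s-p mixing applies (Li2 to N2) and below it when it does not (O2 onward). The core 1s MOs are omitted, so the electron counts are valence electrons only.

```python
# Minimal sketch: aufbau filling of valence MOs for second-row homonuclear
# diatomics. Each level is (label, degeneracy, is_bonding).
WITH_SP_MIXING = [  # Li2..N2: s-p mixing pushes 3sg above 1pu
    ("2sg", 1, True), ("2su*", 1, False),
    ("1pu", 2, True), ("3sg", 1, True),
    ("1pg*", 2, False), ("3su*", 1, False),
]
WITHOUT_SP_MIXING = [  # O2, F2, Ne2: 3sg lies below 1pu
    ("2sg", 1, True), ("2su*", 1, False),
    ("3sg", 1, True), ("1pu", 2, True),
    ("1pg*", 2, False), ("3su*", 1, False),
]

def fill(levels, electrons):
    bonding = antibonding = unpaired = 0
    for _label, degeneracy, is_bonding in levels:
        n = min(electrons, 2 * degeneracy)  # capacity is 2 electrons/orbital
        electrons -= n
        if is_bonding:
            bonding += n
        else:
            antibonding += n
        # Hund's rule: degenerate orbitals are singly occupied before pairing
        unpaired += n if n <= degeneracy else 2 * degeneracy - n
        if electrons == 0:
            break
    return (bonding - antibonding) / 2, unpaired

for name, valence_electrons, levels in [
    ("N2", 10, WITH_SP_MIXING),
    ("O2", 12, WITHOUT_SP_MIXING),
    ("F2", 14, WITHOUT_SP_MIXING),
]:
    bo, up = fill(levels, valence_electrons)
    print(f"{name}: bond order {bo}, unpaired electrons {up}")
```

Run as-is, this reproduces the results quoted in this section: bond order 3 and no unpaired electrons (diamagnetic) for N2, bond order 2 with two unpaired electrons (paramagnetic) for O2, and bond order 1 and diamagnetic for F2.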
The bond order decreases and the bond length increases in the order O2+ (112.2 pm), O2 (121 pm), O2− (128 pm) and O22− (149 pm). Difluorine and dineon In difluorine two additional electrons occupy the 2pπ* with a bond order of 1. In dineon (as with dihelium) the number of bonding electrons equals the number of antibonding electrons and this molecule does not exist. Dimolybdenum and ditungsten Dimolybdenum (Mo2) is notable for having a sextuple bond. This involves two sigma bonds (4dz2 and 5s), two pi bonds (using 4dxz and 4dyz), and two delta bonds (4dx2 − y2 and 4dxy). Ditungsten (W2) has a similar structure. MO energies overview Table 1 gives an overview of MO energies for first row diatomic molecules calculated by the Hartree-Fock-Roothaan method, together with atomic orbital energies. Heteronuclear diatomics In heteronuclear diatomic molecules, mixing of atomic orbitals only occurs when the electronegativity values are similar. In carbon monoxide (CO, isoelectronic with dinitrogen) the oxygen 2s orbital is much lower in energy than the carbon 2s orbital and therefore the degree of mixing is low. The electron configuration 1σ21σ*22σ22σ*21π43σ2 is identical to that of nitrogen. The g and u subscripts no longer apply because the molecule lacks a center of symmetry. In hydrogen fluoride (HF), the hydrogen 1s orbital can mix with the fluorine 2pz orbital to form a sigma bond because experimentally the energy of the 1s of hydrogen is comparable with the 2p of fluorine. The HF electron configuration 1σ22σ23σ21π4 reflects that the other electrons remain in three lone pairs and that the bond order is 1. A molecular orbital takes on more of the character of the atomic orbital closest to it in energy; since the atomic orbitals of the more electronegative atom lie lower in energy, the bonding MOs resemble them most, which accounts for the majority of the electron density residing around the more electronegative atom. Applying the LCAO-MO method allows a move away from the more static Lewis structure type of approach and accounts for periodic trends that influence electron movement. Non-bonding orbitals refer to lone pairs seen on certain atoms in a molecule. A further understanding of the energy-level refinement can be acquired by delving into quantum chemistry; the Schrödinger equation can be applied to predict movement and describe the state of the electrons in a molecule. NO Nitric oxide is a heteronuclear molecule that exhibits mixing. The construction of its MO diagram is the same as for the homonuclear molecules. It has a bond order of 2.5 and is a paramagnetic molecule. The energies of the two atoms' 2s orbitals differ enough that each produces its own non-bonding σ orbital. Ionization to NO+ removes an electron from an antibonding orbital, stabilizing the bond and generating a triple bond; this also changes the magnetic property to diamagnetic. HF Hydrogen fluoride is another example of a heteronuclear molecule. It is slightly different in that the π orbital is non-bonding, as well as the 2s σ. From the hydrogen, its valence 1s electron interacts with the 2p electrons of fluorine. This molecule is diamagnetic and has a bond order of one. Triatomic molecules Carbon dioxide Carbon dioxide, CO2, is a linear molecule with a total of sixteen electrons in its valence shell. Carbon is the central atom of the molecule and a principal axis, the z-axis, is visualized as a single axis that goes through the centers of the carbon and the two oxygen atoms.
By convention, blue atomic orbital lobes denote positive phases and red atomic orbital lobes negative phases, with respect to the wave function from the solution of the Schrödinger equation. In carbon dioxide the carbon 2s (−19.4 eV), carbon 2p (−10.7 eV), and oxygen 2p (−15.9 eV) energies associated with the atomic orbitals are in proximity, whereas the oxygen 2s energy (−32.4 eV) is different. Carbon and each oxygen atom will have a 2s atomic orbital and a 2p atomic orbital, where the p orbital is divided into px, py, and pz. With these derived atomic orbitals, symmetry labels are deduced with respect to rotation about the principal axis: an orbital for which the rotation generates a phase change has pi (π) symmetry, and one for which it generates no phase change has sigma (σ) symmetry. Symmetry labels are further defined by whether the atomic orbital maintains its original character after an inversion about its center atom: if it does retain its original character it is defined gerade (g), and if it does not, ungerade (u). Each symmetry-labeled atomic orbital is then said to transform as an irreducible representation. Carbon dioxide's molecular orbitals are made by the linear combination of atomic orbitals of the same irreducible representation that are also similar in atomic orbital energy. Significant atomic orbital overlap explains why sp bonding may occur. Strong mixing of the oxygen 2s atomic orbitals is not to be expected, however; they remain as non-bonding degenerate molecular orbitals. The combination of similar atomic orbital/wave functions and the combination of atomic orbital/wave function inverses create particular energies associated with the nonbonding (no change), bonding (lower than either parent orbital energy) and antibonding (higher energy than either parent atomic orbital energy) molecular orbitals. Water For nonlinear molecules, the orbital symmetries are not σ or π but depend on the symmetry of each molecule. Water (H2O) is a bent molecule (105°) with C2v molecular symmetry. The possible orbital symmetries are the four irreducible representations of this point group: a1, a2, b1 and b2. For example, an orbital of B1 symmetry (called a b1 orbital with a small b since it is a one-electron function) is multiplied by -1 under the symmetry operations C2 (rotation about the 2-fold rotation axis) and σv'(yz) (reflection in the molecular plane). It is multiplied by +1 (unchanged) by the identity operation E and by σv(xz) (reflection in the plane bisecting the H-O-H angle). The oxygen atomic orbitals are labeled according to their symmetry as a1 for the 2s orbital and b1 (2px), b2 (2py) and a1 (2pz) for the three 2p orbitals. The two hydrogen 1s orbitals are premixed to form a1 (σ) and b2 (σ*) MOs. Mixing takes place between same-symmetry orbitals of comparable energy, resulting in a new set of MOs for water: the 2a1 MO from mixing of the oxygen 2s AO and the hydrogen σ MO; the 1b2 MO from mixing of the oxygen 2py AO and the hydrogen σ* MO; the 3a1 MO from mixing of the a1 AOs; and the 1b1 nonbonding MO from the oxygen 2px AO (the p-orbital perpendicular to the molecular plane). In agreement with this description, the photoelectron spectrum for water shows a sharp peak for the nonbonding 1b1 MO (12.6 eV) and three broad peaks for the 3a1 MO (14.7 eV), 1b2 MO (18.5 eV) and the 2a1 MO (32.2 eV). The 1b1 MO is a lone pair, while the 3a1, 1b2 and 2a1 MO's can be localized to give two O−H bonds and an in-plane lone pair. This MO treatment of water does not have two equivalent "rabbit ear" lone pairs.
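The symmetry bookkeeping described for water is mechanical enough to script. The following minimal sketch (an illustration, not from any cited source) encodes the standard C2v character table and assigns a label from an orbital's behaviour, +1 for unchanged and -1 for a sign flip, under the operations E, C2, σv(xz) and σv'(yz), with the molecule taken to lie in the yz plane as above.

```python
# Hedged sketch: classify an orbital in the C2v point group from its
# characters under (E, C2, sigma_v(xz), sigma_v'(yz)).
C2V_TABLE = {
    (1,  1,  1,  1): "a1",
    (1,  1, -1, -1): "a2",
    (1, -1,  1, -1): "b1",
    (1, -1, -1,  1): "b2",
}

def c2v_label(characters):
    """Return the irreducible-representation label for a +/-1 character row."""
    return C2V_TABLE[tuple(characters)]

# Oxygen AOs in water (molecule in the yz plane, C2 axis along z):
print(c2v_label([1, 1, 1, 1]))    # 2s  -> a1
print(c2v_label([1, -1, 1, -1]))  # 2px -> b1 (perpendicular to the plane)
print(c2v_label([1, -1, -1, 1]))  # 2py -> b2 (in the molecular plane)
print(c2v_label([1, 1, 1, 1]))    # 2pz -> a1 (along the C2 axis)
```

The printed labels reproduce the assignments in the text: a1 for the oxygen 2s and 2pz orbitals, b1 for 2px, and b2 for 2py.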
Hydrogen sulfide (H2S) also has C2v symmetry, with 8 valence electrons, but the bending angle is only 92°. As reflected in its photoelectron spectrum, in comparison with water the 5a1 MO (corresponding to the 3a1 MO in water) is stabilised (improved overlap) and the 2b2 MO (corresponding to the 1b2 MO in water) is destabilised (poorer overlap). References External links MO diagrams at meta-synthesis.com MO diagrams at chem1.com Molecular orbitals at winter.group.shef.ac.uk Chemical bonding
Molecular orbital diagram
[ "Physics", "Chemistry", "Materials_science" ]
5,677
[ "Chemical bonding", "Condensed matter physics", "nan" ]
7,337,484
https://en.wikipedia.org/wiki/Alpha%20granule
Alpha granules (α-granules), also known as platelet alpha-granules, are a cellular component of platelets. Platelets contain different types of granules that perform different functions, and include alpha granules, dense granules, and lysosomes. Of these, alpha granules are the most common, making up 50% to 80% of the secretory granules. Alpha granules contain several growth factors. Contents Contents include insulin-like growth factor 1, platelet-derived growth factors, TGF beta, platelet factor 4 (which is a heparin-binding chemokine) and other clotting proteins (such as thrombospondin, fibronectin, factor V, and von Willebrand factor). The alpha granules express the adhesion molecules P-selectin and CD63. These are transferred to the membrane after synthesis. The other type of granules within platelets are called dense granules. Clinical significance A deficiency of alpha granules is known as gray platelet syndrome. See also Platelet rich fibrin References Growth factors
Alpha granule
[ "Chemistry" ]
230
[ "Biochemistry stubs", "Growth factors", "Molecular and cellular biology stubs", "Signal transduction" ]
7,337,520
https://en.wikipedia.org/wiki/Yahoo%20Screen
The company Yahoo! ran several similar video services. Yahoo! Video, a video hosting service, was established in 2006. Later, the ability to upload videos was removed, changing it to a more pure video on demand service; the website became a portal for curated video content hosted by Yahoo's properties. In 2011, the service was re-launched as Yahoo! Screen, placing a larger focus on original content and web series. Created for the service were the series Burning Love, Electric City, Ghost Ghirls, Losing It with John Stamos, Sin City Saints, and Other Space. Yahoo! Screen also acquired the sitcom Community for an additional season, following its cancellation after the fifth season on NBC. In January 2016, following a $42 million write-down on the poor performance of its original content, Yahoo! Screen was shut down. In August 2016, Yahoo! announced a partnership with the subscription video-on-demand service Hulu to move its free video library to a de facto successor known as Yahoo! View. Yahoo! View streamed recent episodes of television series from the ABC, NBC, and Fox networks in the United States, as well as a moderate selection of archived programs from various distributors, the "skinny bundle" model. Yahoo! View was decommissioned on June 30, 2019. History Yahoo! Video was intended to be a video sharing website on which users could upload videos, similar to YouTube. At launch, Yahoo! Video started as an internet-wide video search engine. Yahoo! added the ability to upload and share video clips in June 2006. A re-designed site was launched in February 2008 that changed the focus to Yahoo!-hosted video only. On December 15, 2010, Yahoo! Video's functionality to upload video was removed for its relaunch as Yahoo! Screen the following year. All user-generated content was removed on March 15, 2011. The content that Yahoo! deleted was saved by the Archive Team. The Yahoo! Screen rebrand was launched in October 2011, alongside eight original programs. Yahoo! Screen streamed three seasons of its Emmy-nominated original series, Burning Love, which was syndicated for TV through E! in 2013. On April 24, 2013, Yahoo! acquired rights to stream content from the NBC series Saturday Night Live, including archive clips from current and past seasons, behind the scenes footage, and other content. Yahoo! held non-exclusive international rights to the archive content, and non-exclusive rights to clips from the current season. In June 2014, Yahoo! announced that it had picked up former NBC sitcom Community for its sixth season, which premiered via Yahoo! Screen on March 17, 2015. Within a month of Community's season six premiere, Yahoo! had premiered full first seasons of two new original series, Sin City Saints and Other Space, but available only in the United States. Also in 2014, Yahoo! expanded its licensing agreement with Vevo to allow Vevo's content (music videos, concerts, etc.) to appear on the platform. Community ultimately would not be as profitable for the company as it had hoped, with The A.V. Club blaming the acquisition for the platform's eventual demise. In June 2015, Yahoo! Screen won the worldwide rights to distribute the National Football League's International Series game between the Buffalo Bills and Jacksonville Jaguars, set to take place October 25. The one-off stream was the first NFL game to be broadcast almost exclusively through the Internet, with no television broadcast outside Buffalo, Jacksonville, and international markets.
On January 4, 2016, following a $42 million write-down in the third quarter of 2015 as a result of the poor performance of its three original series, Yahoo! Screen as a portal was discontinued. Yahoo's original video content was re-located to relevant portals of the site; in particular, its original television series were moved to an "originals" section on the Yahoo! TV site. On August 8, 2016, Hulu announced they would end their free viewing tier and move exclusively to a subscription service. That same day, they announced a partnership with Yahoo! to spin out its free video on demand streaming service, which featured recent episodes of series from ABC, NBC, and Fox, into a new service known as Yahoo! View. It featured the five most recent episodes of the networks' series; new episodes were added eight days after their original broadcast. It also integrated with Tumblr to provide access to fan content related to programs. Yahoo! View ceased operations on June 30, 2019. See also List of Yahoo! Screen original programming References External links Advertising video on demand Former video hosting services Screen Streaming media systems Internet properties established in 2006 Internet properties disestablished in 2011 Internet properties established in 2011 Internet properties disestablished in 2016 Internet properties established in 2016 Internet properties disestablished in 2019 Defunct video on demand services
Yahoo Screen
[ "Technology" ]
989
[ "Streaming media systems", "Telecommunications systems", "Computer systems" ]
1,586,105
https://en.wikipedia.org/wiki/Lattice%20protein
Lattice proteins are highly simplified models of protein-like heteropolymer chains on lattice conformational space which are used to investigate protein folding. Simplification in lattice proteins is twofold: each whole residue (amino acid) is modeled as a single "bead" or "point" of a finite set of types (usually only two), and each residue is restricted to be placed on vertices of a (usually cubic) lattice. To guarantee the connectivity of the protein chain, adjacent residues on the backbone must be placed on adjacent vertices of the lattice. Steric constraints are expressed by imposing that no more than one residue can be placed on the same lattice vertex. Because proteins are such large molecules, there are severe computational limits on the simulated timescales of their behaviour when modeled in all-atom detail. The millisecond regime for all-atom simulations was not reached until 2010, and it is still not possible to fold all real proteins on a computer. Simplification significantly reduces the computational effort in handling the model, although even in this simplified scenario the protein folding problem is NP-complete. Overview Different versions of lattice proteins may adopt different types of lattice (typically square and triangular ones), in two or three dimensions, but it has been shown that generic lattices can be used and handled via a uniform approach. Lattice proteins are made to resemble real proteins by introducing an energy function, a set of conditions which specify the interaction energy between beads occupying adjacent lattice sites. The energy function mimics the interactions between amino acids in real proteins, which include steric, hydrophobic and hydrogen bonding effects. The beads are divided into types, and the energy function specifies the interactions depending on the bead type, just as different types of amino acids interact differently. One of the most popular lattice models, the hydrophobic-polar model (HP model), features just two bead types—hydrophobic (H) and polar (P)—and mimics the hydrophobic effect by specifying a favorable interaction between H beads. For any sequence in any particular structure, an energy can be rapidly calculated from the energy function. For the simple HP model, this is an enumeration of all the contacts between H residues that are adjacent in the structure but not in the chain. Most researchers consider a lattice protein sequence protein-like only if it possesses a single structure with an energetic state lower than in any other structure, although there are exceptions that consider ensembles of possible folded states. This is the energetic ground state, or native state. The relative positions of the beads in the native state constitute the lattice protein's tertiary structure. Lattice proteins do not have genuine secondary structure; however, some researchers have claimed that they can be extrapolated onto real protein structures which do include secondary structure, by appealing to the same law by which the phase diagrams of different substances can be scaled onto one another (the theorem of corresponding states). By varying the energy function and the bead sequence of the chain (the primary structure), effects on the native state structure and the kinetics of folding can be explored, and this may provide insights into the folding of real proteins. Some of the examples include study of folding processes in lattice proteins that have been discussed to resemble the two-phase folding kinetics in proteins. 
Lattice proteins were shown to collapse quickly into a compact state, followed by slow subsequent rearrangement of the structure into the native state. Attempts to resolve the Levinthal paradox in protein folding are another effort made in the field. As an example, a study conducted by Fiebig and Dill examined a search method involving constraints on forming residue contacts in a lattice protein, to provide insight into the question of how a protein finds its native structure without a global exhaustive search. Lattice protein models have also been used to investigate the energy landscapes of proteins, i.e. the variation of their internal free energy as a function of conformation. Lattices A lattice is a set of regularly arranged points that are connected by "edges". These points are called vertices and are connected to a certain number of other vertices in the lattice by edges. The number of vertices each individual vertex is connected to is called the coordination number of the lattice, and it can be scaled up or down by changing the shape or dimension (2-dimensional to 3-dimensional, for example) of the lattice. This number is important in shaping the characteristics of the lattice protein because it controls the number of other residues allowed to be adjacent to a given residue. It has been shown that for most proteins the coordination number of the lattice used should fall between 3 and 20, although most commonly used lattices have coordination numbers at the lower end of this range. Lattice shape is an important factor in the accuracy of lattice protein models. Changing lattice shape can dramatically alter the shape of the energetically favorable conformations. It can also add unrealistic constraints to the protein structure, such as in the case of the parity problem, where in square and cubic lattices residues of the same parity (odd or even numbered) cannot make hydrophobic contact. It has also been reported that triangular lattices yield more accurate structures than other lattice shapes when compared to crystallographic data. To combat the parity problem, several researchers have suggested using triangular lattices when possible, as well as a square lattice with diagonals for theoretical applications where the square lattice may be more appropriate. Hexagonal lattices were introduced to alleviate sharp turns of adjacent residues in triangular lattices. Hexagonal lattices with diagonals have also been suggested as a way to combat the parity problem. Hydrophobic-polar model The hydrophobic-polar protein model is the original lattice protein model. It was first proposed by Dill et al. in 1985 as a way to overcome the significant cost and difficulty of predicting protein structure, using only the hydrophobicity of the amino acids in the protein. It is considered to be the paradigmatic lattice protein model. The method was able to quickly give an estimate of protein structure by representing proteins as "short chains on a 2D square lattice" and has since become known as the hydrophobic-polar model. It breaks the protein folding problem into three separate problems: modeling the protein conformation, defining the energetic properties of the amino acids as they interact with one another to find said conformation, and developing an efficient algorithm for the prediction of these conformations. This is done by classifying amino acids in the protein as either hydrophobic or polar and assuming that the protein is being folded in an aqueous environment.
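The HP energy function described earlier, counting favourable contacts between H residues that are adjacent on the lattice but not adjacent in the chain, can be written in a few lines. The following minimal sketch (an illustration on a 2D square lattice, not code from any work cited here) scores a conformation given as a self-avoiding walk.

```python
# Minimal sketch of the HP-model energy on a 2D square lattice.
# A conformation is a self-avoiding walk given as (x, y) lattice points,
# one per residue; the sequence is a string of 'H' and 'P'.
def hp_energy(sequence, coords):
    assert len(sequence) == len(coords)
    assert len(set(coords)) == len(coords)  # steric constraint: self-avoiding
    occupied = {pos: i for i, pos in enumerate(coords)}
    energy = 0
    for i, (x, y) in enumerate(coords):
        if sequence[i] != 'H':
            continue
        for neighbor in ((x + 1, y), (x, y + 1)):  # count each contact once
            j = occupied.get(neighbor)
            # topological contact: adjacent on the lattice, not in the chain
            if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                energy -= 1  # each H-H contact is favourable
    return energy

# A 4-residue chain folded into a unit square: one H-H contact -> energy -1
print(hp_energy("HPPH", [(0, 0), (1, 0), (1, 1), (0, 1)]))
```

In the square example, the first and last residues of HPPH sit on adjacent lattice sites without being chain neighbours, giving a single H-H contact and an energy of -1; an exhaustive or heuristic search over conformations then seeks the walk minimizing this score.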
The lattice statistical model seeks to recreate protein folding by minimizing the free energy of the contacts between hydrophobic amino acids. Hydrophobic amino acid residues are predicted to group around each other, while hydrophilic residues interact with the surrounding water. Different lattice types and algorithms have been used to study protein folding with the HP model. Efforts have been made to obtain higher approximation ratios using approximation algorithms on two- and three-dimensional square and triangular lattices. As an alternative to approximation algorithms, genetic algorithms have also been exploited on square, triangular, and face-centered-cubic lattices. Problems and alternative models The simplicity of the hydrophobic-polar model has caused it to have several problems that researchers have attempted to correct with alternative lattice protein models. Chief among these problems is the issue of degeneracy, which arises when there is more than one minimum-energy conformation for the modeled protein, leading to uncertainty about which conformation is the native one. Attempts to address this include the HPNX model, which classifies amino acids as hydrophobic (H), positive (P), negative (N), or neutral (X) according to the charge of the amino acid, adding additional parameters to reduce the number of low energy conformations and allowing for more realistic protein simulations. Another model is the Crippen model, which uses protein characteristics taken from crystal structures to inform the choice of native conformation. Another issue with lattice models is that they generally do not take into account the space taken up by amino acid side chains, instead considering only the α-carbon. The side chain model addresses this by adding a side chain to the vertex adjacent to the α-carbon. References Protein structure NP-complete problems
Lattice protein
[ "Chemistry", "Mathematics" ]
1,660
[ "Protein structure", "Computational problems", "Structural biology", "Mathematical problems", "NP-complete problems" ]
1,586,159
https://en.wikipedia.org/wiki/Electrochromism
Electrochromism is a phenomenon in which a material displays changes in color or opacity in response to an electrical stimulus. In this way, a smart window made of an electrochromic material can block specific wavelengths of ultraviolet, visible or (near) infrared light. The ability to control the transmittance of near-infrared light can increase the energy efficiency of a building, reducing the amount of energy needed to cool during summer and heat during winter. As the color change is persistent and energy needs only to be applied to effect a change, electrochromic materials are used to control the amount of light and heat allowed to pass through a surface, most commonly "smart windows". One popular application is in the automobile industry where it is used to automatically tint rear-view mirrors in various lighting conditions. Principle The phenomenon of electrochromism occurs in some transition metal oxides which conduct both electrons and ions, such as tungsten trioxide (WO3). These oxides have octahedral structures of oxygen which surround a central metal atom and are joined together at the corners. This arrangement produces a three-dimensional nanoporous structure with "tunnels" between individual octahedral segments. These tunnels allow dissociated ions to pass through the substance when they are motivated by an electric field. Common ions used for this purpose are H+ and Li+. The electric field is typically induced by two flat, transparent electrodes which sandwich the ion-containing layers. As a voltage is applied across these electrodes, the difference in charge between the two sides causes the ions to penetrate the oxide as the charge-balancing electrons flow between the electrodes. These electrons change the valency of the metal atoms in the oxide, reducing their charge, as in the following example of tungsten trioxide: WO3 + n(H+ + e−) → HnWO3. This is a redox reaction since the electroactive metal accepts electrons from the electrodes, forming a half-cell. Strictly speaking, the electrode as a chemical unit comprises the flat plate as well as the semiconducting substance in contact with it. However, the term "electrode" often refers to only the flat plate(s), more specifically called the electrode "substrate". Photons that reach the oxide layer can cause an electron to move between two nearby metal ions. The energy provided by the photon causes the movement of an electron which in turn causes optical absorption of the photon. For example, the following process occurs in tungsten oxide for two tungsten ions a and b: W(a)5+ + W(b)6+ + photon → W(a)6+ + W(b)5+. Electrochromic materials Electrochromic materials, also known as chromophores, affect the optical color or opacity of a surface when a voltage is applied. Among the metal oxides, tungsten oxide (WO3) is the most extensively studied and well-known electrochromic material. Others include molybdenum, titanium and niobium oxides, although these are less effective optically. Viologens are a class of organic materials that are being intensively investigated for electrochromic applications. These 4,4′-bipyridine compounds display reversible color changes between a colorless and a deep-blue color due to redox reactions. Researchers can "tune" them to a deep blue or intense green. As organic materials, viologens are seen as promising alternatives for electronic applications, compared to metal-based systems, which tend to be expensive, toxic, and a problem to recycle.
Possible advantages of viologens include their optical contrast, coloration efficiency, redox stability, ease of design, and potential to scale up for large-area preparation. Viologens have been used with phenylenediamine by Gentex Corporation, which has commercialized auto-dimming rearview mirrors and smart windows in Boeing 787 aircraft. Viologen has been used in conjunction with titanium dioxide (TiO2, also known as titania) in the creation of small digital displays. A variety of conducting polymers are also of interest for displays, including polypyrrole, PEDOT, and polyaniline. Synthesis of tungsten oxide Many methods have been used to synthesize tungsten oxide, including chemical vapor deposition (CVD), sputtering, thermal evaporation, spray pyrolysis (from a vapor or sol-gel), and hydrothermal synthesis (from a liquid). In industry, sputtering is the most common method for the deposition of tungsten oxide. For material synthesis, the sol-gel process is widely used due to its simple procedure, low cost, and ease of control. Sol-gel process In the sol-gel process for tungsten trioxide, a tungsten chloride precursor is dissolved in alcohol and then oxidized by purging oxygen into its solution. The precursor itself is formed by a reaction of alcohol and chlorine, a reaction that is also used for the reduction step that gives a blue tungsten-containing solution. WO3 nanoparticles can also be obtained by precipitation of ammonium paratungstate pentahydrate under acidic conditions (with nitric acid) from aqueous solutions. Working principle of electrochromic windows Multiple layers are needed for a functional smart window with electrochromic characteristics. The first and last are transparent glass made of silica (SiO2); the two electrodes are needed to apply the voltage, which in turn will push (or pull) ions from the ion storage layer, through the electrolyte, into the electrochromic material (or vice versa). Applying a high voltage (4 V or more) will push lithium ions into the electrochromic layer, deactivating the electrochromic material. The window is then fully transparent. By applying a lower voltage (2.5 V for example) the concentration of Li ions in the electrochromic layer decreases, thus activating the (N)IR-active tungsten oxide. This activation causes reflection of infrared light, reducing the heating of the interior, which in turn reduces the amount of energy needed for air conditioning. Depending on the electrochromic material used, different parts of the spectrum can be blocked; in this way UV, visible and IR light can be independently reflected at the will of the user. Applications Several electrochromic devices have been developed. Electrochromism is commonly used in the production of electrochromic windows or "smart glass", and more recently electrochromic displays on paper substrate as anti-counterfeiting systems integrated into packaging. NiO materials have been widely studied as counter electrodes for complementary electrochromic devices, particularly for smart windows. ICE 3 high speed trains use electrochromic glass panels between the passenger compartment and the driver's cabin. The standard mode is clear, and can be switched by the driver to frosted. Electrochromic windows are used in the Boeing 787 Dreamliner in the form of a dimmable panel between the exterior window and interior dust cover, allowing crew and passengers to control the transparency of the windows.
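One figure of merit mentioned above, coloration efficiency, relates the optical effect of switching to the charge passed. The sketch below is illustrative only (the transmittance and charge values are invented, not data from any source cited here); it uses the standard definition CE = ΔOD / Q, where ΔOD = log10(T_bleached / T_colored) and Q is the inserted charge per unit electrode area.

```python
# Hedged sketch: coloration efficiency of an electrochromic film.
import math

def optical_density_change(t_bleached: float, t_colored: float) -> float:
    # delta OD = log10(T_bleached / T_colored)
    return math.log10(t_bleached / t_colored)

def coloration_efficiency(t_bleached, t_colored, charge_per_area_c_cm2):
    # CE (cm^2/C) = delta OD per unit of inserted charge density
    return optical_density_change(t_bleached, t_colored) / charge_per_area_c_cm2

# e.g. a film switching from 80% to 20% transmittance after 10 mC/cm^2
print(f"{coloration_efficiency(0.80, 0.20, 0.010):.0f} cm^2/C")  # ~60 cm^2/C
```

A larger coloration efficiency means a deeper colour change for the same inserted charge, which is why it appears alongside optical contrast in the list of viologen advantages.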
See also Electrochromic devices Smart glass Electronic paper Phosphaphenalene Further reading References External links Tutorial on electrochromatic displays at Gent University (archived from the original on 6 January 2012) Article on energy efficiency of electrochromic windows at National Renewable Energy Laboratory (archived from the original on 21 July 2017) Video of electrochromic glass changing from translucent to transparent at YouTube Chromism Scattering, absorption and radiative transfer (optics)
Electrochromism
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,545
[ " absorption and radiative transfer (optics)", "Spectrum (physical sciences)", "Chromism", "Materials science", "Scattering", "Smart materials", "Spectroscopy" ]
1,586,161
https://en.wikipedia.org/wiki/Strake
On a vessel's hull, a strake is a longitudinal course of planking or plating which runs from the boat's stempost (at the bows) to the sternpost or transom (at the rear). The garboard strakes are the two immediately adjacent to the keel on each side. The word derives from traditional wooden boat building methods, used in both carvel and clinker construction. In a metal ship, a strake is a course of plating. Construction In small boats strakes may be single continuous pieces of wood. In larger wooden vessels strakes typically comprise several planks which are either scarfed, or butt-jointed and reinforced with a butt block. Where the transverse sections of the vessel's shape are fuller, the strakes are wider; they taper toward the ends. In a riveted steel ship, the strakes were usually lapped and joggled (one strake given projections to match indentations in the one adjoining), but where a smoother finish was sought they might be riveted on a butt strap, though this was weaker. In modern welded construction, the plates are normally butt-welded with full penetration welds all round to adjoining plates within the strake and to adjoining strakes. Terminology In boat and ship construction, strakes immediately adjacent to either side of the keel are known as the garboard strakes or A strakes. The next two are the first broad or B strake and second broad or C strake. Working upward come the bottom strakes, lowers, bilge strakes, topside strakes, and uppers, also named sequentially as the D strake, E strake, etc. The uppermost along the topsides is called the sheer strake. Strakes are joined to the stem by their hood ends. A rubbing strake was traditionally built in just below a carvel sheer strake. It was much less broad but thicker than other strakes, so that it projected and took any rubbing against piers or other boats when the boat was in use. In clinker boats, the rubbing strake was applied to the outside of the sheer strake. Many current pleasure craft reflect this history in that they have a mechanically attached (and therefore replaceable) rub rail at the location formerly occupied by a rubbing strake, often doubling to cover the joint between a GRP hull and its innerliner. Inflatable dinghies and RIBs usually have a rubbing strake (typically a glued-on rubber extrusion) at the edge. A "stealer" is a short strake employed to reduce the width of plank required where the girth of the hull increases, or to accommodate a tuck in the shape. It is commonly employed in carvel and iron/steel shipbuilding, but very few clinker craft use them. Sources Shipbuilding
Strake
[ "Engineering" ]
624
[ "Shipbuilding", "Marine engineering" ]
1,586,291
https://en.wikipedia.org/wiki/Fr%C3%A9chet%20filter
In mathematics, the Fréchet filter, also called the cofinite filter, on a set X is a certain collection of subsets of X (that is, it is a particular subset of the power set of X). A subset A of X belongs to the Fréchet filter if and only if the complement of A in X is finite. Any such set A is said to be cofinite in X, which is why it is alternatively called the cofinite filter on X. The Fréchet filter is of interest in topology, where filters originated, and relates to order and lattice theory because a set's power set is a partially ordered set under set inclusion (more specifically, it forms a lattice). The Fréchet filter is named after the French mathematician Maurice Fréchet (1878-1973), who worked in topology. Definition A subset A of a set X is said to be cofinite in X if its complement in X (that is, the set X ∖ A) is finite. If the empty set is allowed to be in a filter, the Fréchet filter on X, denoted F, is the set of all cofinite subsets of X. That is: F = {A ⊆ X : X ∖ A is finite}. If X is a finite set, then every cofinite subset of X is necessarily not empty, so that in this case, it is not necessary to make the empty set assumption made before. This makes F a filter on the lattice (P(X), ⊆), the power set of X ordered by set inclusion. Writing A^c for the complement of a set A in X, the following two conditions hold: Intersection condition If two sets are finitely complemented in X, then so is their intersection, since (A ∩ B)^c = A^c ∪ B^c and the union of two finite sets is finite. Upper-set condition If a set is finitely complemented in X, then so are its supersets in X. Properties If the base set X is finite, then F = P(X), since every subset of X, and in particular every complement, is then finite. This case is sometimes excluded by definition or else called the improper filter on X. Allowing X to be finite creates a single exception to the Fréchet filter's being free and non-principal, since a filter on a finite set cannot be free and a non-principal filter cannot contain any singletons as members. If X is infinite, then every member of F is infinite, since it is simply X minus finitely many of its members. Additionally, F is infinite, since one of its subsets is the set of all X ∖ {x}, where x ∈ X. The Fréchet filter is both free and non-principal, excepting the finite case mentioned above, and is included in every free filter. It is also the dual filter of the ideal of all finite subsets of (infinite) X. The Fréchet filter is not necessarily an ultrafilter (or maximal proper filter). Consider the power set P(N), where N is the natural numbers. The set of even numbers is the complement of the set of odd numbers. Since neither of these sets is finite, neither set is in the Fréchet filter on N. However, an ultrafilter (and any other non-degenerate filter) is free if and only if it includes the Fréchet filter. The ultrafilter lemma states that every non-degenerate filter is contained in some ultrafilter. The existence of free ultrafilters was established by Tarski in 1930, relying on a theorem equivalent to the axiom of choice, and is used in the construction of the hyperreals in nonstandard analysis. Examples If X is a finite set, assuming that the empty set can be in a filter, then the Fréchet filter on X consists of all the subsets of X. On the set N of natural numbers, the set of infinite intervals L = {[n, ∞) : n ∈ N} is a Fréchet filter base, that is, the Fréchet filter on N consists of all supersets of elements of L. See also References External links J.B. Nation, Notes on Lattice Theory, course notes, revised 2017. Order theory Topology
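Because a cofinite subset of an infinite X is determined by its finite complement, the two filter conditions above can be checked symbolically. The following minimal sketch (illustrative Python, not from the references) represents X ∖ F by the finite set F and verifies that intersections stay cofinite and that the superset relation is recognised.

```python
# Hedged sketch: represent a cofinite subset of an infinite set X by its
# finite complement, and mirror the two filter conditions symbolically.
from dataclasses import dataclass

@dataclass(frozen=True)
class Cofinite:
    """The set X minus a finite complement (X itself is left implicit)."""
    complement: frozenset

    def intersection(self, other):
        # (X \ F) ∩ (X \ G) = X \ (F ∪ G); a union of finite sets is finite,
        # so the intersection of two cofinite sets is cofinite.
        return Cofinite(self.complement | other.complement)

    def is_superset_of(self, other):
        # X \ F ⊇ X \ G iff F ⊆ G, so any superset (within X) of a
        # cofinite set is cofinite: the upper-set condition.
        return self.complement <= other.complement

a = Cofinite(frozenset({1, 2}))              # X \ {1, 2}
b = Cofinite(frozenset({2, 3}))              # X \ {2, 3}
print(a.intersection(b))                     # X \ {1, 2, 3}: still cofinite
print(a.is_superset_of(a.intersection(b)))   # True: a contains a ∩ b
```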
Fréchet filter
[ "Physics", "Mathematics" ]
759
[ "Topology", "Space", "Geometry", "Spacetime", "Order theory" ]
1,586,359
https://en.wikipedia.org/wiki/Adaptation%20and%20Natural%20Selection
Adaptation and Natural Selection: A Critique of Some Current Evolutionary Thought is a 1966 book by the American evolutionary biologist George C. Williams. Williams, in what is now considered a classic by evolutionary biologists, outlines a gene-centered view of evolution, disputes notions of evolutionary progress, and criticizes contemporary models of group selection, including the theories of Alfred Emerson, A. H. Sturtevant, and to a smaller extent, the work of V. C. Wynne-Edwards. The book takes its title from a lecture by George Gaylord Simpson in January 1947 at Princeton University. Aspects of the book were popularised by Richard Dawkins in his 1976 book The Selfish Gene. The aim of the book is to "clarify certain issues in the study of adaptation and the underlying evolutionary processes." Though more technical than a popular science book, its target audience is not specialists but biologists in general and the more advanced students of the topic. It was mostly written in the summer of 1963 when Williams utilized the University of California, Berkeley's library. Contents Williams argues that adaptation is "a special and onerous concept that should not be used unnecessarily". He writes that something should not be assigned a function unless it is uncontroversially the result of design rather than chance. For instance he considers mutations to be errors only, not a process that has persisted to provide variation and evolutionary potential. If something is considered (after critical appraisal) to be an adaptation, then we should assume the unit of selection in the process was as simple as possible, provided it is compatible with the evidence. For example, selection between individuals should be preferred to group selection as an explanation if both seem plausible. Williams writes that the only way adaptations can come into existence or persist is by natural selection. Dealing with the idea of evolutionary progress, Williams argues that for natural selection to work, there have to be "certain quantitative relationships among sampling errors, selection coefficients, and rates of random change." It is put forward that Mendelian selection of alleles (alternative versions of a gene) is the only kind of selection imaginable that satisfies these requirements. Elaborating on the nature of selection, he writes that it only works on the basis of whether alleles are better or worse than others in the population, in terms of their immediate fitness effects. Survival of the population is beside the point, e.g. populations don't take any measures to avoid impending extinction. Finally he evaluates various ideas about progress in evolution, denying that selection will bring about the kind of progress that some have suggested. The author concludes that his view on the topic is similar to that of most of his colleagues, but worries that it is misrepresented to the public "when biologists become self-consciously philosophical". See also Antagonistic pleiotropy hypothesis Ecology Genetic anthropomorphism Morphogenesis Reproduction Scientific method Social animal References External links Adaptation and Natural Selection – Princeton University Press (on Internet Archive) 1966 in biology 1966 non-fiction books American non-fiction books Books about evolution Books by George C. Williams English-language non-fiction books Modern synthesis (20th century) Princeton University Press books Selection
Adaptation and Natural Selection
[ "Biology" ]
658
[ "Evolutionary processes", "Selection" ]
1,586,487
https://en.wikipedia.org/wiki/Noctua%20%28constellation%29
Noctua (Latin: owl) was a constellation near the tail of Hydra in the southern celestial hemisphere, but is no longer recognized. It was introduced by Alexander Jamieson in his 1822 work, A Celestial Atlas, and appeared in a derived collection of illustrated cards, Urania's Mirror. Now designated Asterism a, the owl was composed of the stars Sigma Librae, 4 Librae and 54–57 Hydrae, which range from 3rd to 6th magnitude. The French astronomer Pierre Charles Le Monnier had introduced a bird on Hydra's tail as the constellation Solitaire, named for the extinct flightless bird, the Rodrigues solitaire, but the image was that of a rock thrush which had been classified in the genus Turdus, giving rise to the constellation name Turdus Solitarius, the solitary thrush. It has also been depicted as a mockingbird. The boundaries of the constellation were defined as longitude 0° to 26°30' and from the ecliptic to 15° S. NGC 5694 Noctua can be used as an observer's guide in trying to detect the globular star cluster NGC 5694, which is located immediately to the west of it. References External links Ian Ridpath's Star Tales – Noctua Obsolete Constellations: Noctua, the owl Former constellations
Noctua (constellation)
[ "Astronomy" ]
282
[ "Former constellations", "Constellations" ]
1,586,499
https://en.wikipedia.org/wiki/Weighting%20curve
A weighting curve is a graph of a set of factors that are used to 'weight' measured values of a variable according to their importance in relation to some outcome. An important example is frequency weighting in sound level measurement, where a specific set of weighting curves known as A-, B-, C-, and D-weighting, as defined in IEC 61672, is used. Unweighted measurements of sound pressure do not correspond to perceived loudness because the human ear is less sensitive at low and high frequencies, with the effect more pronounced at lower sound levels. The four curves are applied to the measured sound level, for example by the use of a weighting filter in a sound level meter, to arrive at readings of loudness in phons or in decibels (dB) above the threshold of hearing (see A-weighting). Weighting curves in electronic engineering, audio, and broadcasting Although A-weighting with a slow RMS detector, as commonly used in sound level meters, is frequently used when measuring noise in audio circuits, a different weighting curve, ITU-R 468 noise weighting, uses a psophometric weighting curve and a quasi-peak detector. This method, formerly known as CCIR weighting, is preferred by the telecommunications industry, broadcasters, and some equipment manufacturers as it reflects more accurately the audibility of pops and short bursts of random noise as opposed to pure tones. Psophometric weighting is used in telephony and telecommunications where narrow-band circuits are common. Hearing weighting curves are also used for sound in water. Other applications of weighting Acoustics is by no means the only subject which finds use for weighting curves, however, and they are widely used in deriving measures of effect for sun exposure, gamma radiation exposure, and many other things. In the measurement of gamma rays or other ionising radiation, a radiation monitor or dosimeter will commonly use a filter to attenuate those energy levels or wavelengths that cause the least damage to the human body, while letting through those that do the most damage, so that any source of radiation may be measured in terms of its true danger rather than just its "strength". The sievert is a unit of weighted radiation dose for ionising radiation, which supersedes the older weighted unit the rem (roentgen equivalent man). Weighting is also applied to the measurement of sunlight when assessing the risk of skin damage through sunburn, since different wavelengths have different biological effects. Common examples are the SPF of sunscreen, and the ultraviolet index. Another use of weighting is in television, where the red, green, and blue components of the signal are weighted according to their perceived brightness. This ensures compatibility with black-and-white receivers, and also benefits noise performance and allows separation into meaningful luminance and chrominance signals for transmission. See also Weight function Weighted arithmetic mean Weighting Weighting filter A-weighting B-, C-, D-, G-, and Z-weightings M-weighting References Audio engineering Noise Sound
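As a concrete illustration of a frequency-weighting curve, the sketch below evaluates the widely published analogue A-weighting magnitude response, with corner frequencies 20.6 Hz, 107.7 Hz, 737.9 Hz and 12194 Hz and a +2.0 dB normalisation at 1 kHz. This is a minimal numerical sketch, not a substitute for the exact IEC 61672 specification or its tolerance classes.

```python
# Minimal sketch: A-weighting gain in dB at frequency f (Hz).
import math

def a_weighting_db(f: float) -> float:
    ra = (12194.0**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.0  # +2.0 dB normalises to 0 dB at 1 kHz

for f in (31.5, 100, 1000, 4000, 16000):
    print(f"{f:>7} Hz: {a_weighting_db(f):+6.1f} dB")
```

The output shows the behaviour described above: strong attenuation at low frequencies (around -39 dB at 31.5 Hz), roughly 0 dB at 1 kHz, a slight boost near 4 kHz where the ear is most sensitive, and renewed attenuation at very high frequencies.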
Weighting curve
[ "Engineering" ]
635
[ "Electrical engineering", "Audio engineering" ]
1,586,505
https://en.wikipedia.org/wiki/Psalterium%20Georgii
Psalterium Georgii (also Harpa Georgii) (Latin for George's harp) was a constellation created by Maximilian Hell in 1789 to honor George III of Great Britain. Johann Bode depicted the constellation on his Uranographia atlas of 1801 under the name Harpa Georgii. It was created from stars in northern Eridanus and was next to the constellation Taurus, and included 10 Tauri. It is no longer in use. External links Ian Ridpath's Star Tales: Harpa Georgii Psalterium Georgii Former constellations
Psalterium Georgii
[ "Astronomy" ]
118
[ "Former constellations", "Constellations" ]
1,586,531
https://en.wikipedia.org/wiki/Quadrans%20Muralis
Quadrans Muralis (Latin for mural quadrant) was a constellation created by the French astronomer Jérôme Lalande in 1795. It depicted a wall-mounted quadrant with which he and his nephew Michel Lefrançois de Lalande had charted the celestial sphere, and was named Le Mural in the French atlas. It was between the constellations of Boötes and Draco, near the tail of Ursa Major, containing stars between β Bootis (Nekkar) and η Ursae Majoris (Alkaid). Johann Elert Bode converted its name to Latin as Quadrans Muralis and shrank the constellation a little in his 1801 Uranographia star atlas, to avoid it clashing with neighboring constellations. In 1922, Quadrans Muralis was omitted when the International Astronomical Union (IAU) formalised its list of officially recognized constellations. Notable features The variable star BP Boötis was a member of the constellation. 39 Boötis is a double star that was transferred by Lalande into Quadrans. The Quadrantid meteor shower is still named after the obsolete constellation. References Former constellations
Quadrans Muralis
[ "Astronomy" ]
234
[ "Former constellations", "Constellations" ]
1,586,565
https://en.wikipedia.org/wiki/Robur%20Carolinum
Robur Carolinum (Latin for Charles' oak) was a constellation created by the English astronomer Edmond Halley in 1679. The name refers to the Royal Oak where Charles II was said to have hidden from the troops of Oliver Cromwell after the Battle of Worcester. It was between the constellations of Centaurus and Carina, extending into half of Vela. Robur Carolinum as a constellation never gained popularity, probably because it used the star Eta Carinae and the Eta Carinae Nebula, and was soon dropped from use after only fifty years. Nicolas Louis de Lacaille also complained bitterly that it took some of the finest stars from Argo Navis. Its brightest star was Beta Carinae (β Car) or Miaplacidus, which was known as α Roburis or α Roburis Carolii. References Former constellations 1679 establishments in England 1679 in science Trees in culture Cultural depictions of Charles II of England
Robur Carolinum
[ "Astronomy" ]
192
[ "Former constellations", "Constellations" ]
1,586,599
https://en.wikipedia.org/wiki/Sceptrum%20Brandenburgicum
Sceptrum Brandenburgicum (or Sceptrum Brandenburgium – Latin for scepter of Brandenburg) was a constellation created in 1688 by Gottfried Kirch, astronomer of the Prussian Royal Society of Sciences. It represented the scepter used by the royal family of the Brandenburgs. It lay to the west of the constellation Lepus. The constellation was quickly forgotten and is no longer in use. Its name was, however, partially inherited by one of its brightest stars, Sceptrum, which is today designated 53 Eridani; that name remains in use. External links Sceptrum Brandenburgium Star Tales – Sceptrum Brandenburgicum Former constellations Brandenburg-Prussia
Sceptrum Brandenburgicum
[ "Astronomy" ]
138
[ "Former constellations", "Astronomy stubs", "Constellations" ]
1,586,664
https://en.wikipedia.org/wiki/Rangifer%20%28constellation%29
Rangifer was a small constellation between the constellations of Cassiopeia and Camelopardalis. It was also known as Tarandus. Both words mean "reindeer" in Latin. "Rangifer" is the generic name of the reindeer, and "tarandus" is the specific name. The constellation may also be referred to as Renne. The constellation was found in the northern sky near the now obsolete constellation of Custos Messium (the harvest keeper). The constellation is no longer in use. History The constellation Rangifer was created by the French astronomer Pierre Charles Le Monnier in 1736 to commemorate the expedition of Maupertuis to Lapland. Geodetic observations from the expedition proved Earth's oblateness. References External links Rangifer, the reindeer: Ian Ridpath's Star Tales Tarandus vel Rangifer, the reindeer: Shane Horvatin Former constellations 1736 in France
Rangifer (constellation)
[ "Astronomy" ]
191
[ "Former constellations", "Constellations" ]
1,586,674
https://en.wikipedia.org/wiki/Solarium%20%28constellation%29
Solarium (Latin for sundial) was a constellation located between the constellations of Horologium, Dorado and Hydrus. It was introduced in 1822 on the Celestial Atlas of Alexander Jamieson, who substituted it for the constellation Reticulum invented by Nicolas Louis de Lacaille. A decade later, it was picked up by Elijah Hinsdale Burritt to whom it is sometimes attributed. It was never popular and is no longer in use. References Former constellations
Solarium (constellation)
[ "Astronomy" ]
100
[ "Former constellations", "Constellations" ]
1,586,699
https://en.wikipedia.org/wiki/Taurus%20Poniatovii
Taurus Poniatovii (Latin for Poniatowski's bull) was a constellation created by the former rector of Vilnius University, Marcin Odlanicki Poczobutt, in 1777 to honor Stanislaus Poniatowski, King of Poland and Grand Duke of Lithuania. It consisted of stars that are today considered part of Ophiuchus and Aquila. It is no longer in use. It was wedged in between Ophiuchus, Aquila and Serpens Cauda. A depiction of the constellation can be found on the wall of the Vilnius University Astronomical Observatory. The stars The stars were picked for the resemblance of their arrangement to the Hyades group which form the "head" of Taurus. Before the definition of Taurus Poniatovii, some of these had been part of the obsolete constellation River Tigris. The brightest of these stars is 72 Oph (3.7 magnitude) in the "horn" of Taurus Poniatovii. The "face" of Taurus Poniatovii is formed by 67 Oph (4.0), 68 Oph (4.4) and 70 Oph (4.0). The five brightest stars belong to loose open cluster Collinder 359 or Melotte 186. Barnard's Star is also inside the boundaries of this former constellation. Some minor stars (5th and 6th magnitude) now in Aquila formed the "rear" of Taurus Poniatovii. See also List of stars in Ophiuchus Scutum, a constellation created in 1684 by Polish astronomer Johannes Hevelius (Jan Heweliusz), to commemorate the victory of the Polish forces led by King John III Sobieski in the Battle of Vienna. References External links Taurus Poniatovii Shane Horvatin Taurus Poniatovii in Ian Ridpath's Star Tales Former constellations Ophiuchus Cattle in culture
Taurus Poniatovii
[ "Astronomy" ]
404
[ "Former constellations", "Ophiuchus", "Constellations" ]
1,586,709
https://en.wikipedia.org/wiki/Telescopium%20Herschelii
Telescopium Herschelii (Latin for Herschel's telescope), also formerly known as Tubus Hershelli Major, is a former constellation in the northern celestial hemisphere. Maximilian Hell established it in 1789 to honour Sir William Herschel's discovery of the planet Uranus. It fell out of use by the end of the 19th century. θ Geminorum, at apparent magnitude 3.6, was the constellation's brightest star. History It was one of two constellations created by Maximilian Hell in 1789 to honour the famous English astronomer Sir William Herschel's discovery of the planet Uranus. Named Tubus Hershelli Major by Hell, it was located in the constellation Auriga near the border to Lynx and Gemini and depicted Herschel's 20-ft-long telescope. Its sibling was Tubus Hershelli Minor, which lay between Orion and Taurus. The two telescopes lay near Zeta Tauri, near where the planet Uranus was first spotted. Johann Elert Bode renamed the constellation Telescopium Herschelii and omitted the smaller telescope constellation in his 1801 Uranographia star atlas. In his atlas, the constellation depicted Herschel's earlier 7-foot telescope. It was ignored by some celestial cartographers such as Argelander in 1843, Proctor in 1876, Rosser in 1879 and Pritchard in 1885, yet did appear in two works of the 1890s. However, it was noted by Allen in 1899 that it was becoming obsolete. In 1930, when the official borders of the constellations were drawn up, its stars were absorbed into Auriga, Gemini and Lynx. Stars ψ2 Aurigae (also known as 50 Aurigae), with an apparent magnitude of 4.8, was the second-brightest star in the constellation, although Bode assigned it the designation 'a'; the magnitude 3.60 star θ Geminorum is brighter. Located 420 ± 20 light-years distant from Earth, ψ2 Aurigae is an orange giant of spectral type K3III. Other stars belonging to the constellation include ψ4, ψ5, ψ7, ψ8, ψ9, 63, 64, 65 and 66 Aurigae, and o Geminorum. Thought to be around 4 billion years old, ψ5 Aurigae is a sunlike star of spectral type G0V that is around 1.07 times as massive as the Sun and 1.18 times as wide. It appears to have a circumstellar disk of dust, known as a debris disk. List This is the list of notable stars in the obsolete constellation Telescopium Herschelii, sorted by decreasing brightness. See also List of stars in Telescopium Herschelii Telescopium References Former constellations
Telescopium Herschelii
[ "Astronomy" ]
577
[ "Former constellations", "Constellations" ]
1,586,715
https://en.wikipedia.org/wiki/Jackson-Gwilt%20Medal
The Jackson-Gwilt Medal is an award that has been issued by the Royal Astronomical Society (RAS) since 1897. The original criteria were for the invention, improvement, or development of astronomical instrumentation or techniques; for achievement in observational astronomy; or for achievement in research into the history of astronomy. In 2017, the history of astronomy category was removed for subsequent awards and transferred to a new award, the Agnes Mary Clerke Medal. The frequency of the medal has varied over time. Initially, it was irregular, with gaps of between three and five years between awards. From 1968 onwards, it was awarded regularly every three years; from 2004 every two years; and since 2008 it has been awarded every year. The award is named after Hannah Jackson née Gwilt. She was a niece of Joseph Gwilt (an architect and Fellow of the RAS) and daughter of George Gwilt (another Fellow); Hannah donated the original funds for the medal. It is the second-oldest award issued by the RAS, after the Gold Medal. List of winners See also List of astronomy awards References Awards established in 1897 Awards of the Royal Astronomical Society 1897 establishments in the United Kingdom
Jackson-Gwilt Medal
[ "Astronomy" ]
245
[ "Awards of the Royal Astronomical Society", "Astronomy prizes" ]
1,586,721
https://en.wikipedia.org/wiki/ABO%20blood%20group%20system
The ABO blood group system is used to denote the presence of one, both, or neither of the A and B antigens on erythrocytes (red blood cells). For human blood transfusions, it is the most important of the 44 different blood type (or group) classification systems currently recognized by the International Society of Blood Transfusion (ISBT) as of December 2022. A mismatch in this serotype (or in various others) can cause a potentially fatal adverse reaction after a transfusion, or an unwanted immune response to an organ transplant. Such mismatches are rare in modern medicine. The associated anti-A and anti-B antibodies are usually IgM antibodies, produced in the first years of life by sensitization to environmental substances such as food, bacteria, and viruses. The ABO blood types were discovered by Karl Landsteiner in 1901; he received the Nobel Prize in Physiology or Medicine in 1930 for this discovery. ABO blood types are also present in other primates such as apes and Old World monkeys. History Discovery The ABO blood types were first discovered by an Austrian physician, Karl Landsteiner, working at the Pathological-Anatomical Institute of the University of Vienna (now Medical University of Vienna). In 1900, he found that red blood cells would clump together (agglutinate) when mixed in test tubes with sera from different persons, and that some human blood also agglutinated with animal blood. He recorded the observation in a two-sentence footnote. This was the first evidence that blood variation exists in humans; until then it had been believed that all human blood was alike. The next year, in 1901, he made the definitive observation that the blood serum of an individual would agglutinate only with that of certain other individuals. Based on this he classified human blood into three groups, namely group A, group B, and group C. He found that group A blood agglutinates with group B, but never with its own type. Similarly, group B blood agglutinates with group A. Group C blood is different in that it agglutinates with both A and B. This was the discovery of blood groups, for which Landsteiner was awarded the Nobel Prize in Physiology or Medicine in 1930. In his paper, he referred to the specific blood group interactions as isoagglutination, and also introduced the concept of agglutinins (antibodies), which is the actual basis of the antigen-antibody reaction in the ABO system. He had thus discovered two antigens (agglutinogens A and B) and two antibodies (agglutinins, anti-A and anti-B). His third group (C) indicated the absence of both A and B antigens, but its serum contains both anti-A and anti-B. The following year, his students Adriano Sturli and Alfred von Decastello discovered a fourth type (without naming it, referring to it simply as "no particular type"). In 1910, Ludwik Hirszfeld and Emil Freiherr von Dungern introduced the term 0 (null) for the group Landsteiner designated as C, and AB for the type discovered by Sturli and von Decastello (https://www.rockefeller.edu/our-scientists/karl-landsteiner/2554-nobel-prize/). They were also the first to explain the genetic inheritance of the blood groups. Classification systems Czech serologist Jan Janský independently introduced a blood type classification in 1907 in a local journal. He used the Roman numerals I, II, III, and IV (corresponding to modern O, A, B, and AB). Unknown to Janský, an American physician, William L. Moss, devised a slightly different classification using the same numerals; his I, II, III, and IV correspond to modern AB, A, B, and O.
These two systems created confusion and potential danger in medical practice. Moss's system was adopted in Britain, France, and the US, while Janský's was preferred in most European countries and some parts of the US. To resolve the chaos, the American Association of Immunologists, the Society of American Bacteriologists, and the Association of Pathologists and Bacteriologists made a joint recommendation in 1921 that the Janský classification be adopted on grounds of priority, but it was not followed, particularly where Moss's system was already in use. By 1927, Landsteiner had moved to the Rockefeller Institute for Medical Research in New York. As a member of a committee of the National Research Council concerned with blood grouping, he suggested substituting Janský's and Moss's systems with the letters O, A, B, and AB. (There was further confusion between the figure 0, for the German null, as introduced by Hirszfeld and von Dungern, and the letter O, for ohne, meaning "without"; Landsteiner chose the latter.) This classification was adopted by the National Research Council and became variously known as the National Research Council classification, the International classification, and, most popularly, the "new" Landsteiner classification. The new system was gradually accepted, and by the early 1950s it was universally followed. Other developments The first practical use of blood typing in transfusion was by the American physician Reuben Ottenberg in 1907. Large-scale application began during the First World War (1914–1915), when citric acid began to be used for blood-clot prevention. Felix Bernstein demonstrated the correct blood group inheritance pattern of multiple alleles at one locus in 1924. Watkins and Morgan, in England, discovered that the ABO epitopes were conferred by sugars: N-acetylgalactosamine for the A type and galactose for the B type. After much published literature claiming that the ABH substances were all attached to glycosphingolipids, Finne et al. (1978) found that human erythrocyte glycoproteins contain polylactosamine chains that carry ABH substances and represent the majority of the antigens. The main glycoproteins carrying the ABH antigens were identified to be the Band 3 and Band 4.5 proteins and glycophorin. Later, Yamamoto's group showed the precise glycosyltransferase set that confers the A, B, and O epitopes. Genetics Blood groups are inherited from both parents. The ABO blood type is controlled by a single gene (the ABO gene) with three types of alleles inferred from classical genetics: i, IA, and IB. The I designation stands for isoagglutinogen, another term for antigen. The gene encodes a glycosyltransferase, that is, an enzyme that modifies the carbohydrate content of the red blood cell antigens. The gene is located on the long arm of the ninth chromosome (9q34). The IA allele gives type A, IB gives type B, and i gives type O. As both IA and IB are dominant over i, only ii people have type O blood. Individuals with IAIA or IAi have type A blood, and individuals with IBIB or IBi have type B. IAIB people have both phenotypes, because A and B express a special dominance relationship: codominance, which means that type A and B parents can have an AB child. A couple with type A and type B can also have a type O child if they are both heterozygous (IBi and IAi). The cis-AB phenotype has a single enzyme that creates both A and B antigens.
The resulting red blood cells do not usually express the A or B antigen at the same level as would be expected on common group A1 or B red blood cells, which can help solve the problem of an apparently genetically impossible blood group. Individuals with the rare Bombay phenotype (hh) produce antibodies against the A and B antigens and against the H antigen, and can only receive transfusions from other hh individuals. The table above summarizes the various blood groups that children may inherit from their parents. Genotypes are shown in the second column and in small print for the offspring: AO and AA both test as type A; BO and BB test as type B. The four possibilities represent the combinations obtained when one allele is taken from each parent; each has a 25% chance, but some occur more than once. The text above them summarizes the outcomes. Historically, ABO blood tests were used in paternity testing, but in 1957 only 50% of falsely accused American men were able to use them as evidence against paternity. Occasionally, the blood types of children are not consistent with expectations (for example, a type O child can be born to an AB parent) due to rare situations, such as the Bombay phenotype and cis AB. Subgroups The A blood type contains about 20 subgroups, of which A1 and A2 are the most common (over 99%). A1 makes up about 80% of all A-type blood, with A2 making up almost all of the rest. These two subgroups are not always interchangeable as far as transfusion is concerned, as some A2 individuals produce antibodies against the A1 antigen, so complications can arise in rare cases when typing the blood. With the development of DNA sequencing, it has been possible to identify a much larger number of alleles at the ABO locus, each of which can be categorized as A, B, or O in terms of the reaction to transfusion, but which can be distinguished by variations in the DNA sequence. One sequencing study identified six common alleles of the ABO gene in white individuals that produce one's blood type, along with 18 rare alleles, which generally have a weaker glycosylation activity. People with weak alleles of A can sometimes express anti-A antibodies, though these are usually not clinically significant as they do not stably interact with the antigen at body temperature. Cis AB is another rare variant, in which A and B genes are transmitted together from a single parent. Distribution and evolutionary history The distribution of the blood groups A, B, O and AB varies across the world according to the population. There are also variations in blood type distribution within human subpopulations. In the UK, the distribution of blood type frequencies through the population still shows some correlation to the distribution of placenames and to the successive invasions and migrations, including Celts, Norsemen, Danes, Anglo-Saxons, and Normans, who contributed the morphemes to the placenames and the genes to the population. The native Celts tended to have more type O blood, while the other populations tended to have more type A. The two common O alleles, O01 and O02, share their first 261 nucleotides with the group A allele A01. However, unlike in the group A allele, a guanosine base is subsequently deleted, and the resulting frame-shift mutation produces a premature stop codon. This variant is found worldwide, and likely predates human migration from Africa. The O01 allele is considered to predate the O02 allele.
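The classical single-locus model described in the Genetics section lends itself to a short computational illustration. The following is a minimal sketch (not part of the article; the function and allele names are chosen for illustration) that enumerates the four equally likely allele combinations for a pair of parental genotypes and tallies the resulting ABO phenotypes:

```python
from itertools import product
from collections import Counter

# Phenotype rules of the classical ABO model: IA and IB are codominant,
# and both are dominant over i.
def phenotype(genotype):
    alleles = set(genotype)
    if alleles == {"IA", "IB"}:
        return "AB"
    if "IA" in alleles:
        return "A"
    if "IB" in alleles:
        return "B"
    return "O"

def child_phenotypes(parent1, parent2):
    """Enumerate the four equally likely allele combinations (one allele
    drawn from each parent) and tally the resulting blood types."""
    combos = [tuple(sorted(pair)) for pair in product(parent1, parent2)]
    counts = Counter(phenotype(c) for c in combos)
    return {ptype: n / 4 for ptype, n in counts.items()}

# Two heterozygous parents, type A (IA i) and type B (IB i), can produce
# any of the four blood types, as the article notes:
print(child_phenotypes(("IA", "i"), ("IB", "i")))
# {'AB': 0.25, 'A': 0.25, 'B': 0.25, 'O': 0.25}
```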
Some evolutionary biologists theorize that there are four main lineages of the ABO gene and that mutations creating type O have occurred at least three times in humans. From oldest to youngest, these lineages comprise the following alleles: A101/A201/O09, B101, O02 and O01. The continued presence of the O alleles is hypothesized to be the result of balancing selection. Both theories contradict the previously held theory that type O blood evolved first. Origin theories It is possible that food and environmental antigens (bacterial, viral, or plant antigens) have epitopes similar enough to the A and B glycoprotein antigens. The antibodies created against these environmental antigens in the first years of life can cross-react with ABO-incompatible red blood cells encountered during blood transfusion later in life. Anti-A antibodies are hypothesized to originate from the immune response towards the influenza virus, whose epitopes are similar enough to the α-D-N-galactosamine on the A glycoprotein to be able to elicit a cross-reaction. Anti-B antibodies are hypothesized to originate from antibodies produced against Gram-negative bacteria, such as E. coli, cross-reacting with the α-D-galactose on the B glycoprotein. However, it is more likely that the force driving evolution of allele diversity is simply negative frequency-dependent selection; cells with rare variants of membrane antigens are more easily distinguished by the immune system from pathogens carrying antigens from other hosts. Thus, individuals possessing rare types are better equipped to detect pathogens. The high within-population diversity observed in human populations would, then, be a consequence of natural selection on individuals. Clinical relevance The carbohydrate molecules on the surfaces of red blood cells have roles in cell membrane integrity, cell adhesion, and membrane transport of molecules, and act as receptors for extracellular ligands and enzymes. ABO antigens play similar roles on epithelial cells as well as red blood cells. Bleeding and thrombosis (von Willebrand factor) The ABO antigen is also expressed on the von Willebrand factor (vWF) glycoprotein, which participates in hemostasis (control of bleeding). In fact, having type O blood predisposes to bleeding, as 30% of the total genetic variation observed in plasma vWF is explained by the effect of the ABO blood group, and individuals with group O blood normally have significantly lower plasma levels of vWF (and Factor VIII) than do non-O individuals. In addition, vWF is degraded more rapidly, due to the higher prevalence in blood group O of the Cys1584 variant of vWF (an amino acid polymorphism in vWF); the gene for ADAMTS13 (the vWF-cleaving protease) maps to human chromosome 9 band q34.2, the same locus as the ABO blood type. Higher levels of vWF are more common amongst people who have had ischemic stroke (from blood clotting) for the first time; one study found that the occurrence was not affected by ADAMTS13 polymorphism, and that the only significant genetic factor was the person's blood group. ABO(H) blood group antigens are also carried by other hemostatically relevant glycoproteins, such as platelet glycoprotein Ibα, which is a ligand for vWF on platelets. The significance of ABO(H) antigen expression on these other hemostatic glycoproteins is not fully defined, but may also be relevant for bleeding and thrombosis.
ABO hemolytic disease of the newborn ABO blood group incompatibilities between the mother and child do not usually cause hemolytic disease of the newborn (HDN), because antibodies to the ABO blood groups are usually of the IgM type, which do not cross the placenta. However, an O-type mother can produce IgG ABO antibodies, which do cross the placenta, so her baby can potentially develop ABO hemolytic disease of the newborn. Clinical applications In human cells, the ABO alleles and their encoded glycosyltransferases have been described in several oncologic conditions. Using anti-GTA/GTB monoclonal antibodies, it was demonstrated that a loss of these enzymes was correlated with malignant bladder and oral epithelia. Furthermore, the expression of ABO blood group antigens in normal human tissues is dependent on the type of differentiation of the epithelium. In most human carcinomas, including oral carcinoma, a significant event as part of the underlying mechanism is decreased expression of the A and B antigens. Several studies have observed that a relative down-regulation of GTA and GTB occurs in oral carcinomas in association with tumor development. More recently, a genome-wide association study (GWAS) has identified variants in the ABO locus associated with susceptibility to pancreatic cancer. In addition, another large GWAS study has associated ABO histo-blood groups as well as FUT2 secretor status with the presence in the intestinal microbiome of specific bacterial species. In this case the association was with Bacteroides and Faecalibacterium spp. Bacteroides of the same OTU (operational taxonomic unit) have been shown to be associated with inflammatory bowel disease; the study thus suggests an important role for the ABO histo-blood group antigens as candidates for direct modulation of the human microbiome in health and disease. Clinical marker A multi-locus genetic risk score study, based on a combination of 27 loci including the ABO gene, identified individuals at increased risk for both incident and recurrent coronary artery disease events, as well as an enhanced clinical benefit from statin therapy. The study was based on a community cohort study (the Malmö Diet and Cancer study) and four additional randomized controlled trials of primary prevention cohorts (JUPITER and ASCOT) and secondary prevention cohorts (CARE and PROVE IT-TIMI 22). Alteration of ABO antigens for transfusion In April 2007, an international team of researchers announced in the journal Nature Biotechnology an inexpensive and efficient way to convert types A, B, and AB blood into type O. This is done by using glycosidase enzymes from specific bacteria to strip the blood group antigens from red blood cells. The removal of A and B antigens still does not address the problem of the Rh blood group antigen on the blood cells of Rh-positive individuals, and so blood from Rh-negative donors must be used. The modified blood is named "enzyme converted to O" (ECO blood), but despite early success in converting group B red cells to group O, and clinical trials in which converted cells were transfused into A- and O-type patients without adverse effects, the technology has not yet become clinical practice. Another approach to the blood antigen problem is the manufacture of artificial blood, which could act as a substitute in emergencies. Pseudoscience In Japan and other parts of East Asia, there is a popular belief in blood type personality theory, which claims that blood types predict or influence personality.
This claim has no scientific basis, and there is scientific consensus that no such link exists; the scientific community considers it a pseudoscience and a superstition. The belief originated in the 1930s, when it was introduced as part of Japan's eugenics program. Its popularity faded after Japan's defeat in World War II, as Japanese support for eugenics faltered, but it was revived in the 1970s by the journalist Masahiko Nomi. Despite its status as a pseudoscience, it remains widely popular throughout East Asia. Other popular ideas are blood type-specific dietary needs, that group A causes severe hangovers, that group O is associated with better teeth, and that those with group A2 have the highest IQ scores. As with blood type personality theory, these and other popular ideas lack scientific evidence, and many are discredited or pseudoscientific. See also Secretor status — secretion of ABO antigens in body fluids References Further reading External links ABO at BGMUT Blood Group Antigen Gene Mutation Database at NCBI, NIH Encyclopædia Britannica, ABO blood group system National Blood Transfusion Service Blood antigen systems Transfusion medicine Antigenic determinant Hematopathology Glycoproteins Serology Genes on human chromosome 9
ABO blood group system
[ "Chemistry" ]
4,125
[ "Glycoproteins", "Glycobiology" ]
1,586,724
https://en.wikipedia.org/wiki/Triangulum%20Minus
Triangulum Minus (Latin for the Smaller Triangle) was a constellation created by Johannes Hevelius. Its name is sometimes wrongly written as Triangulum Minor. It was formed from the southern parts of his Triangula (plural form of Triangulum), alongside Triangulum Majus, but is no longer in use. The triangle was defined by the fifth-magnitude stars ι Trianguli (6 Tri), 10 Trianguli, and 12 Trianguli. Also known as TZ Trianguli, 6 Trianguli is a multiple star system with a combined magnitude of 4.7, whose main component is a yellow giant of spectral type G5III. References External links Ian Ridpath's Star Tales Triangulum Minor Shane Horvatin Former constellations Constellations listed by Johannes Hevelius
Triangulum Minus
[ "Astronomy" ]
170
[ "Former constellations", "Astronomy stubs", "Constellations", "Constellations listed by Johannes Hevelius" ]
1,586,736
https://en.wikipedia.org/wiki/Zolt%C3%A1n%20Tibor%20Balogh
Zoltán "Zoli" Tibor Balogh (December 7, 1953 – June 19, 2002) was a Hungarian-born mathematician, specializing in set-theoretic topology. His father, Tibor Balogh, was also a mathematician. His best-known work concerned solutions to problems involving normality of products, most notably the first ZFC construction of a small (cardinality continuum) Dowker space. He also solved Nagami's problem (normal + screenable does not imply paracompact), and the second and third Morita conjectures about normality in products. References External links Memorial with photograph Zoli -- Topology Proceedings 27 (2003) Author profile in the database zbMATH 20th-century Hungarian mathematicians 1953 births 2002 deaths Topologists
Zoltán Tibor Balogh
[ "Mathematics" ]
161
[ "Topologists", "Topology" ]
1,586,757
https://en.wikipedia.org/wiki/Fast%20Lane%20%28electronic%20toll%20collection%29
Fast Lane was the original branding for the electronic toll collection system used on toll roads in Massachusetts, including the Massachusetts Turnpike, Sumner Tunnel, Ted Williams Tunnel, and Tobin Bridge. It was introduced in 1998 and later folded into the E-ZPass branding in 2012. Fast Lane transponders were fully interoperable with member agencies of the E-ZPass Interagency Group; however, Fast Lane transponders (or the "E-ZPass MA" replacement transponders exclusive to Massachusetts) afforded users discounted tolls at some junctions that out-of-state users were not offered. In 2012 the Massachusetts Department of Transportation began the process of converting all existing Fast Lanes to E-ZPass lanes and also began to phase out the Fast Lane name. The Fast Lane website is now branded as E-ZPass MA. With the change, the toll collection system ceased to have corporate sponsorship. Towards the end of 2016 and into 2017, the entire toll road system in the Commonwealth was converted to open-road tolling, so the system no longer has any toll booths. History The original electronic toll collection system in Massachusetts was called MassPass and was installed at the Ted Williams Tunnel. This system was scrapped and replaced by the current E-ZPass-compatible system in 1998 for the Ted Williams Tunnel and the Massachusetts Turnpike Boston extension, and extended to the rest of the turnpike in 1999. When the system was first introduced, AAA gave its Western Massachusetts members an orange Fast Lane pass. This pass could be used from exits 1 to 6 toll-free, because these exits had not previously charged tolls. The orange passes were eliminated when tolls were reinstated on that section of the Turnpike on October 15, 2013. In 2011, MassDOT announced that the Fast Lane branding would be dropped beginning in mid-2012 in favor of the standard E-ZPass purple-and-white signage. This has since occurred at all toll locations, officially phasing out the Fast Lane brand. Sponsors The system was sponsored by Citizens Bank for the Massachusetts Turnpike and tunnels, and by TD Bank, N.A. for the Tobin Bridge (formerly administered by the separate Massachusetts Port Authority). Along the Turnpike it was branded the Citizens Bank Fast Lane, whereas at the Tobin Bridge it was branded TD Bank Fast Lane. Until 2005, Fleet Bank sponsored the Fast Lane system; it inherited the sponsorship upon merging with BankBoston, the founding financial institution of the local system along with the Massachusetts Turnpike. Cost The Massachusetts Turnpike Authority originally charged a one-time fee to buy the transponder. It had planned to replace that charge with a $0.50 monthly fee, but both fees were eliminated due to criticism of Massachusetts Turnpike congestion on Easter Sunday 2009. Discounts "Fast Lane"/"E-ZPass MA" subscribers from any state receive, relative to cash customers or E-ZPass customers subscribed via other states, a 25¢ discount at Allston-Brighton tolls and a 50¢ discount at the Sumner and Ted Williams tunnels and the Tobin Memorial Bridge. Several discounts are available with a special transponder obtained by application: Residents of Charlestown and Chelsea pay $0.30 on the Tobin Bridge. Residents of East Boston, South Boston, and the North End pay $0.40 at the Sumner and Ted Williams tunnels. Carpools with 3 or more passengers receive various discounts.
Legality of discounts The legality and constitutionality of offering discounts to holders of transponders issued by Massachusetts as opposed to transponders issued by other states has been upheld twice at the federal appellate level, in 2003 by the First Circuit in a case arising out of Massachusetts and in 2010 by the Third Circuit in a case arising out of New Jersey. Significantly, both courts based their rulings on the fact that Massachusetts transponders are available on equal terms to in-state and out-of-state residents and that anyone is allowed to have a transponder from more than one state at a time, choosing which transponder to use in each toll transaction to obtain the cheaper rate. The court in the New Jersey case noted parenthetically that "...out-of-state residents who commute regularly to Boston each day might very well decide to carry only a [Massachusetts] Fast Lane transponder." Notes External links MassDOT Highway Division 1st Circuit opinion in Doran v. Massachusetts Turnpike Authority upholding Fast Lane only discounts Electronic toll collection Toll roads in Massachusetts Radio-frequency identification
Fast Lane (electronic toll collection)
[ "Engineering" ]
907
[ "Radio-frequency identification", "Radio electronics" ]
1,586,760
https://en.wikipedia.org/wiki/Turdus%20Solitarius
Turdus Solitarius (Latin for solitary thrush) was a constellation created by French astronomer Pierre Charles Le Monnier in 1776 from stars of Hydra's tail. It was named after the Rodrigues solitaire, an extinct flightless bird that was endemic to the island of Rodrigues, east of Madagascar in the Indian Ocean. It was replaced by another constellation, Noctua (the Owl), in A Celestial Atlas (1822) by the British amateur astronomer Alexander Jamieson, but neither was adopted by the International Astronomical Union among its 88 recognized constellations. References External links Ian Ridpath's Star Tales Turdus Solitarius Turdus Solitarius Shane Horvatin Former constellations 1776 in France
Turdus Solitarius
[ "Astronomy" ]
148
[ "Former constellations", "Astronomy stubs", "Constellations" ]
1,586,804
https://en.wikipedia.org/wiki/Vaalharts%20Irrigation%20Scheme
The Vaalharts Irrigation Scheme is one of the largest irrigation schemes in the world, covering 369.50 square kilometres in the Northern Cape Province of South Africa. It is named after the Vaal River and the Harts River, the Harts being one of the Vaal's major tributaries. Water from a diversion weir in the Vaal River, near Warrenton, flows through a 1,176 km long network of canals. This system provides irrigation water to a total of 39,820 ha of scheduled land, as well as industrial water to six towns and other industrial water users. The farmland is divided into individual blocks, each identified by its own letter or letter group. The blocks are divided into numbered streets, counted upward from the first one out, and each street contains a total of six plots, themselves numbered from one to six. To reference a specific plot, the plot number is given first, followed by the block letter and then the street number, e.g. 3 G 13. Each plot draws water from the canal through its own hatch into its own dam. The water is then pumped and sprinkled across the farmland, most commonly with centre pivots. See also Hartswater Windsorton References External links Vaalharts Irrigation Scheme Study, South Africa Northern Cape Irrigation projects Irrigation in South Africa Vaal River
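The plot-referencing convention described above is simple enough to capture in a short sketch (hypothetical helper names; not part of the source):

```python
import re

def format_plot_ref(plot: int, block: str, street: int) -> str:
    """Compose a Vaalharts-style plot reference: plot number,
    block letter(s), then street number, e.g. '3 G 13'."""
    return f"{plot} {block} {street}"

def parse_plot_ref(ref: str):
    """Split a reference like '3 G 13' back into its parts,
    checking that the plot number is in the valid 1-6 range."""
    m = re.fullmatch(r"(\d+)\s+([A-Z]+)\s+(\d+)", ref.strip())
    if not m:
        raise ValueError(f"not a valid plot reference: {ref!r}")
    plot, block, street = int(m.group(1)), m.group(2), int(m.group(3))
    if not 1 <= plot <= 6:
        raise ValueError("plot numbers run from 1 to 6 per street")
    return plot, block, street

print(parse_plot_ref(format_plot_ref(3, "G", 13)))  # (3, 'G', 13)
```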
Vaalharts Irrigation Scheme
[ "Engineering" ]
294
[ "Irrigation projects" ]
1,586,839
https://en.wikipedia.org/wiki/Cerberus%20%28constellation%29
Cerberus is an obsolete constellation created by Hevelius in the 17th century, whose stars are now included in the constellation Hercules. It was depicted as a three-headed snake held in Hercules' hand. This constellation "figure typified the serpent ... infesting the country around Taenarum the Μέτωπον of Greece, the modern Cape Matapan." The presence of Cerberus (Kerberos) at Taenarum (Tainaron) is mentioned by Strabo, Statius, and Seneca the Younger. John Senex combined this constellation with the likewise obsolete constellation Ramus Pomifer, an apple branch held by Hercules, in his 1721 star map to create "Cerberus et Ramus". Notes External links Ian Ridpath's Star Tales: "Cerberus" Cerberus Obsolete constellations: Shane Horvatin Former constellations Constellations listed by Johannes Hevelius
Cerberus (constellation)
[ "Astronomy" ]
206
[ "Former constellations", "Astronomy stubs", "Constellations", "Constellations listed by Johannes Hevelius" ]
1,586,849
https://en.wikipedia.org/wiki/Custos%20Messium
Custos Messium (Latin for harvest-keeper) — also known as "Vineyard Keeper," "Le Messier," "Mietitore," and "Erndtehüter" — was a constellation created by Joseph Jérôme Lefrançois de Lalande to honor Charles Messier. It was introduced in 1775 and was located between the constellations Camelopardalis, Cassiopeia, and Cepheus, next to another subsequently abandoned constellation, Rangifer the Reindeer. Custos Messium is no longer recognized. Etymology Custos is derived from the classical Latin "custōs", meaning "guardian" or "keeper". Messium is derived from the classical Latin "messis", meaning "harvest". History After the discovery of comet C/1774 P1 (also known as Comet Montaigne), Messier extensively observed and recorded information about the comet. Lalande noticed that the path the comet followed passed through several unformed stars that were associated with Camelopardalis. To unify the stars, as well as to honor Charles Messier for his dedication to astronomy and comet discovery, Lalande introduced Le Messier, or Custos Messium. The stars in Custos Messium are anonymous and barely visible to the naked eye. Several factors went into Lalande's decision to introduce a harvest keeper. Evidence suggests that Lalande was trying to avoid putting a living figure among the stars, but the resemblance of the Latin word for harvest, messium, to Messier's surname gave Lalande a clever way to allude to him. The location of the constellation is also believed to have been carefully considered. The surrounding constellations, Cassiopeia, Cepheus, and Camelopardalis, all have roots that connect them to agriculture. Similarly, the Phoenicians viewed the part of the sky in which Custos Messium was located as a giant wheat field. The location could also suggest that Custos Messium was meant to serve as the northern hemisphere counterpart to the southern hemisphere constellation Polophylax, or the Guardian of the Pole, an idea resting on both constellations being circumpolar and both representing guardians. Custos Messium was popularized by its early adoption in Johann Elert Bode's Vorstellung der Gestirne. The constellation was also included in a number of astronomical works of the time, such as the German edition of John Flamsteed's Atlas Coelestis, Bode's Uranographia, and Bode's Allgemeine Beschreibung und Nachweisung der Gestirne. Custos Messium remained in circulation for around a century, slowly fading from astronomy texts by the mid-nineteenth century and falling out of recognition completely by the end of the nineteenth century. The border of the constellation Cassiopeia was carefully drawn to incorporate the majority of the stars belonging to Custos Messium. Stars The brightest star in the constellation was 50 Cassiopeiae. Other stars include 23 Cassiopeiae, 47 Cassiopeiae, 49 Cassiopeiae, and γ Camelopardalis. The stars were returned to their original constellations when the International Astronomical Union did not include Custos Messium on its list of the 88 official constellations in 1922. References SEDS retrieved 23 August 2006 External links Michael E. Bakich (1995 22/09/2011) Former constellations
Custos Messium
[ "Astronomy" ]
733
[ "Former constellations", "Constellations" ]
1,586,871
https://en.wikipedia.org/wiki/Lochium%20Funis
Lochium Funis (Latin for the log and line) was a constellation created by Johann Bode in 1801 next to the constellation Pyxis, an earlier invention of Nicolas Louis de Lacaille. It represented the log and line used by seamen for measuring a ship's speed through the water. It was never used by other astronomers. External links Lochium Funis, Ian Ridpath's Star Tales Former constellations
Lochium Funis
[ "Astronomy" ]
88
[ "Former constellations", "Astronomy stubs", "Constellations" ]
1,586,886
https://en.wikipedia.org/wiki/Malus%20%28constellation%29
Malus (Latin for mast) was a subdivision of the ancient constellation Argo Navis proposed in 1844 by the English astronomer John Herschel. It would have replaced Pyxis, the compass, which was introduced in the 1750s by Nicolas Louis de Lacaille. Herschel's suggestion was not widely adopted and Malus is not now recognized by astronomers. See also Argo Navis References External links Malus, ‍an ‍attempted ‍replacement ‍for ‍Pyxis, Ian Ridpath's Star Tales Former constellations
Malus (constellation)
[ "Astronomy" ]
106
[ "Former constellations", "Constellations" ]
1,586,903
https://en.wikipedia.org/wiki/Officina%20Typographica
Officina Typographica (Latin for printing office) was a constellation located east of Sirius and Canis Major, north of Puppis, and south of Monoceros. It was drawn up by Johann Bode and Joseph Jérôme de Lalande in 1798, and included in the former's star atlas Uranographia in 1801, honouring the printing press of Johannes Gutenberg. Lalande reported wanting to honour French and German discoveries in the same manner that Nicolas-Louis de Lacaille had done for his new constellations. It was called Buchdrucker-Werkstatt by Bode initially, and later Atelier Typographique in the 1825 work Urania's Mirror, Atelier de l'Imprimeur by Preyssinger in 1862 and Antlia Typographiae in 1888. The constellation appeared in later star atlases through the 19th century but was rarely used by the end of the century; Richard Hinckley Allen noted its most recent use had been in 1878 in Father Angelo Secchi's planisphere, but stated "it is seldom found in the maps of our day." The stars were later absorbed into northern Puppis, and remained permanently there after the setting of the constellation boundaries in 1928. References Former constellations
Officina Typographica
[ "Astronomy" ]
267
[ "Former constellations", "Constellations" ]
1,586,949
https://en.wikipedia.org/wiki/Sceptrum%20et%20Manus%20Iustitiae
Sceptrum et Manus Iustitiae (Latin for scepter and hand of justice) was a constellation created by Augustin Royer in 1679 to honor King Louis XIV of France. It was formed from stars of what are today the constellations Lacerta and western Andromeda. Owing to its awkward name, the constellation was modified and renamed several times; some old star maps, for example, show Sceptrum Imperiale, Stellio and Scettro, and Johannes Hevelius's star map divides the area between the new constellation Lacerta and the end of the chain fettering Andromeda. The connection with the later constellation Frederici Honores, which occupied the chain end of Andromeda, is unclear, except that both represent a royal sceptre attributed to different regents. References Former constellations 1679 in France Lacerta Andromeda (constellation)
Sceptrum et Manus Iustitiae
[ "Astronomy" ]
179
[ "Lacerta", "Former constellations", "Andromeda (constellation)", "Astronomy stubs", "Constellations" ]
1,587,123
https://en.wikipedia.org/wiki/Land%C3%A9%20g-factor
In physics, the Landé g-factor is a particular example of a g-factor, namely for an electron with both spin and orbital angular momenta. It is named after Alfred Landé, who first described it in 1921. In atomic physics, the Landé g-factor is a multiplicative term appearing in the expression for the energy levels of an atom in a weak magnetic field. The quantum states of electrons in atomic orbitals are normally degenerate in energy, with these degenerate states all sharing the same angular momentum. When the atom is placed in a weak magnetic field, however, the degeneracy is lifted. Description The factor comes about during the calculation of the first-order perturbation in the energy of an atom when a weak uniform magnetic field (that is, weak in comparison to the system's internal magnetic field) is applied to the system. Formally we can write the factor as $g_J = g_L \frac{J(J+1) - S(S+1) + L(L+1)}{2J(J+1)} + g_S \frac{J(J+1) + S(S+1) - L(L+1)}{2J(J+1)}$. The orbital $g_L$ is equal to 1, and under the approximation $g_S = 2$, the above expression simplifies to $g_J = \frac{3}{2} + \frac{S(S+1) - L(L+1)}{2J(J+1)}$. Here, J is the total electronic angular momentum, L is the orbital angular momentum, and S is the spin angular momentum. Because $S = 1/2$ for electrons, one often sees this formula written with 3/4 in place of $S(S+1)$. The quantities $g_L$ and $g_S$ are other g-factors of an electron. For an atom with $S = 0$, $g_J = g_L$, and for an atom with $L = 0$, $g_J = g_S$. If we wish to know the g-factor for an atom with total atomic angular momentum $\mathbf{F} = \mathbf{I} + \mathbf{J}$ (nucleus + electrons), such that the total atomic angular momentum quantum number can take values of $F = |J - I|, |J - I| + 1, \ldots, J + I$, the same projection argument gives $g_F = g_J \frac{F(F+1) - I(I+1) + J(J+1)}{2F(F+1)} + g_I \frac{\mu_N}{\mu_B} \frac{F(F+1) + I(I+1) - J(J+1)}{2F(F+1)}$, giving $g_F \approx g_J \frac{F(F+1) - I(I+1) + J(J+1)}{2F(F+1)}$. Here $\mu_B$ is the Bohr magneton and $\mu_N$ is the nuclear magneton. This last approximation is justified because $\mu_N$ is smaller than $\mu_B$ by the ratio of the electron mass to the proton mass. A derivation The following working is a common derivation. Both the orbital angular momentum and the spin angular momentum of an electron contribute to the magnetic moment. In particular, each of them alone contributes to the magnetic moment by $\boldsymbol{\mu}_L = -g_L \mu_B \mathbf{L}/\hbar$ and $\boldsymbol{\mu}_S = -g_S \mu_B \mathbf{S}/\hbar$, where $g_L = 1$ and $g_S \approx 2$. Note that the negative signs in the above expressions are because an electron carries negative charge, and the value of $g_S$ can be derived naturally from Dirac's equation. The total magnetic moment $\boldsymbol{\mu} = \boldsymbol{\mu}_L + \boldsymbol{\mu}_S$, as a vector operator, does not lie along the direction of the total angular momentum $\mathbf{J} = \mathbf{L} + \mathbf{S}$, because the g-factors for the orbital and spin parts are different. However, due to the Wigner-Eckart theorem, its expectation value does effectively lie along the direction of $\mathbf{J}$, which can be employed in the determination of the g-factor according to the rules of angular momentum coupling. In particular, the g-factor is defined as a consequence of the theorem itself: $\langle J, J_z | \boldsymbol{\mu} | J, J_z' \rangle = -g_J \mu_B \langle J, J_z | \mathbf{J} | J, J_z' \rangle / \hbar$. Therefore, $\langle \boldsymbol{\mu} \cdot \mathbf{J} \rangle = -g_J \mu_B \langle \mathbf{J}^2 \rangle / \hbar = -g_J \mu_B \hbar J(J+1)$. One gets $g_J \hbar^2 J(J+1) = g_L \langle \mathbf{L} \cdot \mathbf{J} \rangle + g_S \langle \mathbf{S} \cdot \mathbf{J} \rangle = g_L \frac{\hbar^2}{2}\left[J(J+1) + L(L+1) - S(S+1)\right] + g_S \frac{\hbar^2}{2}\left[J(J+1) + S(S+1) - L(L+1)\right]$, which reproduces the expression quoted above. See also Einstein–de Haas effect, Zeeman effect, g-factor (physics). References Atomic physics Nuclear physics
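The expressions above are easy to check numerically. The sketch below is illustrative only (not part of the original article); it implements the general $g_J$ formula with the usual $g_L = 1$, $g_S = 2$ defaults, and the hyperfine $g_F$ with the nuclear-magneton term neglected:

```python
def lande_g_j(J: float, L: float, S: float,
              gL: float = 1.0, gS: float = 2.0) -> float:
    """Landé g-factor g_J for given total (J), orbital (L) and
    spin (S) angular momentum quantum numbers."""
    jj, ll, ss = J * (J + 1), L * (L + 1), S * (S + 1)
    return gL * (jj - ss + ll) / (2 * jj) + gS * (jj + ss - ll) / (2 * jj)

def lande_g_f(F: float, I: float, J: float, gJ: float) -> float:
    """Hyperfine g_F, neglecting the nuclear term (justified because
    mu_N/mu_B is of order m_e/m_p)."""
    ff, ii, jj = F * (F + 1), I * (I + 1), J * (J + 1)
    return gJ * (ff - ii + jj) / (2 * ff)

# A pure-spin state, L = 0, S = 1/2, J = 1/2: g_J = g_S = 2.
print(lande_g_j(J=0.5, L=0.0, S=0.5))  # 2.0
# A 2P_3/2 term, L = 1, S = 1/2, J = 3/2: g_J = 4/3.
print(lande_g_j(J=1.5, L=1.0, S=0.5))  # 1.3333...
```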
Landé g-factor
[ "Physics", "Chemistry" ]
546
[ "Quantum mechanics", "Atomic physics", " molecular", "Nuclear physics", "Atomic", " and optical physics" ]
1,587,228
https://en.wikipedia.org/wiki/Christien%20Rioux
Christien Rioux, also known by his handle DilDog, is the co-founder and chief scientist for the Burlington, Massachusetts based company Veracode, for which he is the main patent holder. Educated at MIT, Rioux was a computer security researcher at L0pht Heavy Industries and then at the company @stake (later bought by Symantec). While at @stake, he looked for security weaknesses in software and led the development of Smart Risk Analyzer (SRA). He co-authored the best-selling Windows password auditing tool @stake LC (L0phtCrack) and the AntiSniff network intrusion detection system. He is also a member of Cult of the Dead Cow and its Ninja Strike Force. Formerly, he was a member of L0pht. DilDog is best known as the author of the original code for Back Orifice 2000, an open source remote administration tool. He is also well known as the author of "The Tao of Windows Buffer Overflow." References Hackers Cult of the Dead Cow members L0pht Living people Massachusetts Institute of Technology alumni American computer programmers Year of birth missing (living people)
Christien Rioux
[ "Technology" ]
241
[ "Lists of people in STEM fields", "Hackers" ]
1,587,315
https://en.wikipedia.org/wiki/HoHoCon
HoHoCon (or XmasCon) was a conference series which took place shortly before or after Christmas in Houston, Texas, sponsored by Drunkfux and the hacker ezine Cult of the Dead Cow. The fourth and fifth HoHoCons were also sponsored by Phrack magazine and took place in Austin, Texas. The phreaking software BlueBEEP was released at HoHoCon '93. HoHoCon is generally credited as being the first "modern" hacker con, although Summercon predates it. It inspired later conventions such as DEF CON and HOPE. Altogether, there were five conferences. Conferences HoHoCon '90 (XmasCon) (Houston, Texas, December 28–30, 1990) HoHoCon '91 (Houston, Texas, December 27–29, 1991) HoHoCon '92 (Houston, Texas, December 18–20, 1992) HoHoCon '93 (Austin, Texas, December 17–19, 1993) HoHoCon '94 (Austin, Texas, December 30, 1994 – January 1, 1995) References Cult of the Dead Cow Culture of Houston Hacker conventions
HoHoCon
[ "Technology" ]
228
[ "Computing stubs", "Computer conference stubs" ]
1,587,530
https://en.wikipedia.org/wiki/Achmet%20%28oneiromancer%29
Achmet, son of Seirim (), the author of a work on the interpretation of dreams, the Oneirocriticon of Achmet, is probably not the same person as Abu Bekr Mohammed Ben Sirin, whose work on the same subject is still extant in Arabic in the Royal Library at Paris, and who was born AH 33 (AD 653-4) and died AH 110 (AD 728-9). The two names Ahmed or Achmet and Mohammed consist in Arabic of four letters each, and differ only in the first. There are many differences between Achmet's work, in the form in which we have it, and that of Ibn Sirin, as the writer of the former (or the translator) appears from internal evidence to have been certainly a Christian (c. 2, 150, &c.). It exists only in Greek, or rather it has only been published in that language. It consists of three hundred and four chapters, and professes to be derived from what has been written on the same subject by the Indians, Persians, and Egyptians. It was translated out of Greek into Latin about the year 1160 by Leo Tuscus, of which work two specimens are to be found in Gasp. Barthii Adversaria. Around 1165, it was used as a source by Pascalis Romanus for his Liber thesauri occulti, a Latin compilation on dream interpretation that also draws on Artemidorus. It was first published at Frankfort, 1577, 8vo., in a Latin translation, made by Leunclavius, from a very imperfect Greek manuscript, with the title "Apomasaris Apotelesmata, sive de Significatis et Eventis Insomniorum, ex Indorum, Persarum, Aegyptiorumque Disciplina." The word Apomasares is a corruption of the name of the famous Albumasar, or Abu Ma'shar, and Leunclavius afterwards acknowledged his mistake in attributing the work to him. It was published in Greek and Latin by Rigaltius, and appended to his edition of the Oneirocritica of Artemidorus, Lutet. Paris. 1603, 4to., and some Greek various readings are inserted by Jacobus De Rhoer in his Otium Daventriense. It has also been translated into Italian, French, and German. Teachings In a dream, a tall, kind eunuch represents an angel. Notes References Achmet from Smith's Dictionary of Greek and Roman Biography and Mythology (1867), from which this article was originally derived Oneirocriticon of Achmet Mavroudi, Maria: A Byzantine Book on Dream Interpretation. Brill, 2002. Divination Dream Occult writers
Achmet (oneiromancer)
[ "Biology" ]
592
[ "Dream", "Behavior", "Sleep" ]
1,587,570
https://en.wikipedia.org/wiki/Herman%20te%20Riele
Hermanus Johannes Joseph te Riele (born 5 January 1947) is a Dutch mathematician at CWI in Amsterdam with a specialization in computational number theory. He is known for verifying, with Jan van de Lune and Dik Winter, that the first 1.5 billion non-trivial zeros of the Riemann zeta function satisfy the Riemann hypothesis; for disproving the Mertens conjecture with Andrew Odlyzko; and for factoring large numbers of world-record size. In 1987, he found a new upper bound for π(x) − Li(x). In 1970, te Riele received an engineer's degree in mathematical engineering from Delft University of Technology and, in 1976, a PhD degree in mathematics and physics from the University of Amsterdam. References External links 20th-century Dutch mathematicians 21st-century Dutch mathematicians 1947 births Delft University of Technology alumni Living people Number theorists Scientists from The Hague
Herman te Riele
[ "Mathematics" ]
189
[ "Number theorists", "Number theory" ]
1,587,656
https://en.wikipedia.org/wiki/Virtual%20private%20database
A virtual private database or VPD masks data in a larger database so that only a subset of the data appears to exist, without actually segregating data into different tables, schemas or databases. A typical application is constraining sites, departments, individuals, etc. to operate only on their own records, while at the same time allowing more privileged users and operations (e.g. reports, data warehousing, etc.) to access the whole table. The term is typical of the Oracle DBMS, where the implementation is very general: tables can be associated with SQL functions, which return a predicate as a SQL expression. Whenever a query is executed, the relevant predicates for the involved tables are transparently collected and used to filter rows. SELECT, INSERT, UPDATE and DELETE can have different rules. External links Using Virtual Private Database to Implement Application Security Policies http://www.oracle-base.com/articles/8i/VirtualPrivateDatabases.php Data security Types of databases
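In Oracle the policy is a PL/SQL function registered against a table, but the mechanism is easy to illustrate in a language-neutral way. The following minimal Python sketch (invented names; not Oracle's actual API) associates each table with a predicate-returning function and transparently rewrites every SELECT before it runs:

```python
import sqlite3

# Registry of VPD-style policies: table name -> function that returns
# (predicate SQL, bind parameters) for the current session context.
policies = {}

def add_policy(table, predicate_fn):
    policies[table] = predicate_fn

def query(conn, table, columns, session):
    """Transparently append the table's policy predicate before running
    a SELECT, mimicking how a VPD filters rows per session."""
    sql, params = f"SELECT {columns} FROM {table}", {}
    if table in policies:
        predicate, params = policies[table](session)
        sql += " WHERE " + predicate
    return conn.execute(sql, params).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, dept TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "sales"), (2, "hr"), (3, "sales")])

# Ordinary users see only their own department; admins see everything.
add_policy("orders", lambda s: ("1=1", {}) if s["role"] == "admin"
                     else ("dept = :dept", {"dept": s["dept"]}))

print(query(conn, "orders", "id, dept", {"role": "clerk", "dept": "sales"}))
# [(1, 'sales'), (3, 'sales')]
print(query(conn, "orders", "id, dept", {"role": "admin"}))
# [(1, 'sales'), (2, 'hr'), (3, 'sales')]
```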
Virtual private database
[ "Engineering" ]
212
[ "Cybersecurity engineering", "Data security" ]
1,587,659
https://en.wikipedia.org/wiki/Triumph%20Triple
The Triumph Triples are a family of modern DOHC inline three-cylinder motorcycle engines made from 1990 onwards by the Triumph Motorcycle Company at their Hinckley, Leicestershire factory. The inspiration for the later triples was the pushrod Triumph Trident, produced from 1968 to 1974 at the Triumph factory at Meriden Works. The Triumph Triple motorcycle engine has been used in the Trident, Thunderbird, Adventurer, Legend, Tiger, Speed Triple, Sprint ST & RS, Sprint Executive, Trophy, Street Triple, and Daytona models. Bike magazine ranked the Hinckley Triumph Triple as the 10th best motorcycle engine of all time. First generation The first generation motor from the reborn Triumph company in 1989 was an inline 3-cylinder carburetted 4-stroke, available in two capacities: the "750" and the 885 cc "900". The primary difference between the two engines was the stroke: the shorter-stroke, higher-revving 750 used a bore and stroke of 76.0 x 55.0 mm, while the 900 used a longer stroke of 65.0 mm. The 750 engine, with its eager revving performance, was initially believed to be the finer machine, but the longer-legged 900 proved more popular. As a result, the smaller 750 became a budget model and was eventually phased out. Both the 750 and 900 were sold as roadsters called "Tridents". The Sprint 900, a sport tourer with a cockpit fairing, joined the Trident range. 1990s variants The first variation on the 900 triple theme appeared in 1992 with the Tiger 900. This made use of softer cam profiles to produce a less powerful engine but with an even broader spread of torque. Further changes appeared a few years later with the Daytona Super III. This time Triumph collaborated with the tuning gurus at Cosworth to produce the first high performance variant of the triple, using higher compression pistons and a redesigned cylinder head to increase claimed power. In 1995 another variation of the 900 triple engine was introduced in the Thunderbird 900, a model intended for Triumph's first foray back into the US market. It had softer cam profiles and new carburettors, so power dropped again in favour of docility. The engine also received a cosmetic overhaul, adding polished alloy covers and fake cooling fins on the barrels. In 1997 a sportier machine was produced, the Thunderbird Sport, using the Thunderbird engine with a 6-speed gearbox and unrestricted air intake to give more power, along with twin front discs and other detail changes, producing an engine in a remarkably similar state of tune to the original Tiger. Fuel injection redesign The triple received its first major update in 1997 with a ground-up redesign to produce the fuel-injected T595 Daytona engine and the T509 Speed Triple engine, the latter using the original bore and stroke of the first generation engine. Over the next few years the 885 engine grew to 955 cc and was used in the newly launched Sprint ST and the later Sprint RS. In this updated form its claimed output was unchanged, the more powerful state of tune being kept for the Daytona. The injected 885 cc triple lived on for another couple of years in an updated Tiger. Triumph made minor updates until 2001, when it performed a major update, first appearing in the Tiger 955i and soon spreading across the rest of the range. Power and torque were increased across the range, and this updated model was meant to remedy the faults apparent with the earlier 955 engine.
The most lively performer to use this updated triple was the Daytona 955i, in this form the most powerful triple yet to emerge from Triumph. The 900 triple in its original form lingered on until 2002 in the form of the Trophy 900, being outlived by its four-cylinder relative, the Trophy 1200. 1050 cc redesign In 2005 the next generation of the triple emerged in the form of the Sprint ST 1050, swiftly followed by the Speed Triple 1050. The last of the 955-engined bikes – the Tiger – was updated, receiving the cases of the 1050 engine and other small changes, although staying at 955 cc capacity until replaced by an all-new Tiger 1050 in 2006. 2006 also saw the last year for the Daytona 955i, ending the production of big-bore sporting triples. Coincidentally, that year the Sprint 1050 engine received a higher state of tune, lifting the maximum torque to occur at 7,500 rpm, closer to the now-discontinued Daytona's 8,200 rpm point. 2.3 litre Rocket III engine In mid-2004, Triumph introduced an entirely new triple for use in a new heavyweight cruiser motorcycle, the Rocket III. The engine is 2294 cc, the largest purpose-built mass-produced motorcycle engine in existence. It is liquid-cooled and mounted inline with the frame. As a first for Triumph, it was paired with a shaft final drive. It produces its peak power at 6,000 rpm. In 2006, the Rocket III was joined by the Rocket III Classic, a more conservatively styled cruiser. Middleweight 675 cc engine In 2006, Triumph abandoned its earlier flirtations with four-cylinder middleweight bikes and unveiled a 675 cc triple engine to power the all-new Daytona 675 sport bike. The engine is liquid-cooled, fuel-injected and transversely mounted, producing peak power at 12,500 rpm and peak torque at 11,750 rpm. The Daytona 675 has competed very successfully with the Japanese 600 cc inline fours that had dominated the market. In 2007, a de-tuned version of this engine, with a less severe cam and a slightly lower redline, was used in the Street Triple 675 roadster. Middleweight 765 cc engine In 2017, Triumph introduced a 765 cc engine, designed for Moto2 (2019 onwards) and the new Street Triple line (not yet the Daytona). Middleweight 660 cc engine In October 2020, Triumph introduced a 660 cc variant of the 675 cc triple for the new 2021 Triumph Trident budget bike. 1160 cc redesign In January 2021, Triumph introduced an all-new 1160 cc engine (inspired by the 765 platform and Moto2), designed for the Speed Triple 1200 RS. Bore was increased to 90 mm and stroke decreased to 60.8 mm, with peak power at the crank increased to 178 hp and peak torque to 92 ft·lb. The redline increased to 11,150 rpm, the compression ratio rose to 13.2:1, engine weight was reduced by 7 kg, and powertrain inertia decreased by 12%. References Triple Motorcycle engines
Triumph Triple
[ "Technology" ]
1,325
[ "Motorcycle engines", "Engines" ]
1,587,753
https://en.wikipedia.org/wiki/Z1%20%28computer%29
The Z1 was a motor-driven mechanical computer designed by German inventor Konrad Zuse from 1936 to 1937, which he built in his parents' home from 1936 to 1938. It was a binary, electrically driven, mechanical calculator with limited programmability, reading instructions from punched celluloid film. The Z1 was the first freely programmable computer in the world to use Boolean logic and binary floating-point numbers; however, it was unreliable in operation. It was completed in 1938 and financed completely by private funds. The computer was destroyed in the bombardment of Berlin in December 1943, during World War II, together with all construction plans. The Z1 was the first in a series of computers that Zuse designed. Its original name was "V1" for Versuchsmodell 1 (meaning Experimental Model 1). After WW2, it was renamed "Z1" to differentiate it from the flying bombs designed by Robert Lusser. The Z2 and Z3 were follow-ups based on many of the same ideas as the Z1. Design The Z1 contained almost all the parts of a modern computer, i.e. control unit, memory, micro sequences, floating-point logic, and input-output devices. The Z1 was freely programmable via punched tape and a punched tape reader. There was a clear separation between the punched tape reader, the control unit for supervising the whole machine and the execution of the instructions, the arithmetic unit, and the input and output devices. The input tape unit read perforations in 35-millimeter film. The Z1 was a 22-bit floating-point adder and subtractor, with some control logic to make it capable of more complex operations such as multiplication (by repeated additions) and division (by repeated subtractions). The Z1's instruction set had eight instructions, and it took between one and twenty-one cycles per instruction. The Z1 had a 16-word floating-point memory, where each word of memory could be read from, and written to, by the control unit. The mechanical memory units were unique in their design and were patented by Konrad Zuse in 1936. The machine was only capable of executing instructions while reading from the punched tape reader, so the program itself was not loaded in its entirety into internal memory in advance. The input and output were in decimal numbers, with a decimal exponent, and the units had special machinery for converting these to and from binary numbers. The input and output instructions would be read or written as floating-point numbers. The program tape was a 35 mm film with the instructions encoded in punched holes. Construction "Z1 was a machine of about 1 tonne in weight, which consisted of some 20,000 parts. It was a programmable computer, based on binary floating-point numbers and a binary switching system. It consisted completely of thin metal sheets, which Zuse and his friends produced using a jigsaw." "The [data] input device was a keyboard...The Z1's programs (Zuse called them Rechenpläne, computing plans) were stored on punch tapes using an 8-bit code" Construction of the Z1 was privately financed. Zuse got money from his parents, his sister Lieselotte, some students of the fraternity AV Motiv (cf. Helmut Schreyer), and Kurt Pannke (a calculating machine manufacturer in Berlin) to do so. Zuse constructed the Z1 in his parents' apartment; in fact, he was allowed to use the living room for his construction. In 1936, Zuse quit his job in airplane construction to build the Z1. Zuse is said to have used "thin metal strips" and perhaps "metal cylinders" or glass plates to construct the Z1.
There were probably no commercial relays in it (though the Z3 is said to have used a few telephone relays). The only electrical unit was an electric motor, giving the machine its clock frequency of 1 Hz (one cycle per second). 'The memory was constructed from thin strips of slotted metal and small pins, and proved faster, smaller, and more reliable than relays. The Z2 used the mechanical memory of the Z1 but used relay-based arithmetic. The Z3 was experimentally built entirely of relays. The Z4 was the first attempt at a commercial computer, reverting to the faster and more economical mechanical slotted metal strip memory, with relay processing, of the Z2, but the war interrupted the Z4 development.' The Z1 was never very reliable in operation because of poor synchronization caused by internal and external stresses on the mechanical parts. While various sources make various statements about exactly how Zuse's computers were constructed, a clear understanding is gradually emerging. Reconstruction The original Z1 was destroyed by Allied air raids in 1943, but in the 1980s Zuse decided to rebuild the machine. The first sketches of the Z1 reconstruction were drawn in 1984. He constructed (with the help of two engineering students) thousands of elements of the Z1 again, and finished rebuilding the device in 1989. The replica has a 64-word memory instead of a 16-word one. The rebuilt Z1 is displayed at the German Museum of Technology in Berlin. Quotation See also History of computing hardware Analytical Engine Difference engine Z2 Z3 Z4 References Further reading (NB. This is a translation of the original German title .) (NB. Paper describes the design principles of Zuse Z1.) External links 1930s computers Z01 Mechanical computers German inventions of the Nazi period Computer-related introductions in 1938 Konrad Zuse Computers designed in Germany
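The Z1's 22-bit floating-point words lend themselves to a short illustration. The bit split assumed below (1 sign bit, 7-bit exponent, 14-bit mantissa with an implicit leading 1) is the layout usually cited for Zuse's machines and is an assumption here, not a statement from this article; the code is a sketch, not a reconstruction of Zuse's mechanical logic:

```python
import math

def encode_z22(value: float):
    """Pack a float into (sign, exponent, mantissa) fields of the
    assumed 1/7/14-bit layout with an implicit leading 1."""
    if value == 0:
        return 0, 0, 0  # zero needs special treatment, as on Zuse's machines
    sign = 1 if value < 0 else 0
    m, e = math.frexp(abs(value))          # abs(value) = m * 2**e, 0.5 <= m < 1
    mantissa = round((2 * m - 1) * 2**14)  # 14 stored bits; leading 1 implicit
    return sign, e - 1, mantissa           # exponent for 1.xxx * 2**(e - 1)

def decode_z22(sign: int, exponent: int, mantissa: int) -> float:
    value = (1 + mantissa / 2**14) * 2.0**exponent
    return -value if sign else value

s, e, m = encode_z22(3.25)
print(s, e, m)              # 0 1 10240
print(decode_z22(s, e, m))  # 3.25
```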
Z1 (computer)
[ "Physics", "Technology" ]
1,165
[ "Physical systems", "Machines", "Mechanical computers" ]
1,587,764
https://en.wikipedia.org/wiki/Pain%20asymbolia
Pain asymbolia, also called pain dissociation, is a condition in which pain is experienced without unpleasantness. This usually results from injury to the brain, lobotomy, cingulotomy or morphine analgesia. Preexisting lesions of the insula may abolish the aversive quality of painful stimuli while preserving the location and intensity aspects. Typically, patients report that they have pain but are not bothered by it; they recognize the sensation of pain but are mostly or completely immune to suffering from it. The pathophysiology of the condition centres on a disconnection, secondary to damage, between the insular cortex and the limbic system, specifically the cingulate gyrus, which normally couples the pain perceived by the insular cortex with an aversive emotional response that signals the stimulus's potential to inflict actual harm. A disconnection is not the only causative factor, however, as damage to the aforementioned cortical structures themselves also results in the same symptomatology. See also Physical pain Psychological pain Suffering Congenital insensitivity to pain References External links http://www.archipel.uqam.ca/2996/1/Frak.PDF Pain Neuroscience
Pain asymbolia
[ "Biology" ]
267
[ "Neuroscience" ]
1,587,972
https://en.wikipedia.org/wiki/Operation%20Wilfred
Operation Wilfred was a British naval operation during the Second World War that involved the mining of the channels between Norway and its offshore islands to prevent the transport of Swedish iron ore through neutral Norwegian waters. The Allies assumed that Wilfred would provoke German retaliation in Norway and prepared Plan R4 to occupy Narvik, Stavanger, Bergen and Trondheim. On 8 April 1940, the operation was partly carried out but was overtaken by events when the Germans began Operation Weserübung on 9 April, the invasion of Norway and Denmark, which began the Norwegian Campaign. Background British plans The British War Cabinet expended considerable energy on plans for land operations in Scandinavia during the winter of 1939–1940. The Winter War (30 November 1939 – 13 March 1940) between the Soviet Union and Finland could be used as a pretext. The deputy permanent under-secretary at the Foreign Office, Orme Sargent, wrote and advocated the seizure of the Lapland iron ore fields to prevent a Finnish defeat and German control of Sweden. German iron-ore imports from Sweden were about in 1938; about had been denied Germany by the Allied blockade since 1939. In the summer the ore was sent from Luleå in the Gulf of Bothnia but the winter ice closed this route and ore was sent instead by rail to Narvik, for shipment to Germany. At the Admiralty, Winston Churchill, the First Lord of the Admiralty, wanted an offensive policy, particularly after the Altmark incident (16–17 February 1940). British ships had entered Norwegian territorial waters to rescue merchant sailors being held on Altmark and taken to Germany after being taken prisoner when their ships had been sunk by the heavy cruiser . On 20 February 1940, Churchill ordered the Admiralty urgently to prepare a minelaying plan which "being minor and innocent may be called Wilfred". Churchill thought that a landing in Norway, without Norwegian acquiescence, was a mistake, even if there was no more than a minor exchange of fire with the Norwegian army. Churchill held that laying mines in the (Inner Leads) in Norwegian waters could be done without a confrontation with the Royal Norwegian Navy (). The War Cabinet and the Ministry of Economic Warfare hesitated to support hostilities in Norwegian waters, because of the effect that they could have on British imports from Norway and Sweden. On 29 February, the Prime Minister, Neville Chamberlain, decided to wait and see. Landing plans Despite the uncertainty, the Allied army high commands worked on plans for land operations in Scandinavia. In Operation Avonmouth, three battalions of and a British infantry brigade, with three ski companies attached, were to land at Narvik and advance along the railway to take over the iron ore fields in Lapland. The French and Foreign Legionnaires were to continue east towards Finland but keep away from the Red Army and risk being cut off by a German force when the ice in the Gulf of Bothnia thawed. Operation Stratford was a plan for five battalions of British infantry to garrison Stavanger, Bergen and Trondheim to deny the Germans bridgeheads. In Operation Plymouth, three divisions were to stand ready to cross to Trondheim to aid Sweden if the Swedish government requested it. French ships and troops assembled in the French Channel Ports and Brest. Up to 100,000 British and 50,000 French troops with generous air and naval support might participate, the main effort being in Norway, with 10,000 to 15,000 troops advancing into Finland.
German counter-landings were expected in southern Norway up to Stavanger. The latest date that the Gulf of Bothnia could be expected to remain frozen was 3 April. The French view was that an operation in Scandinavia had many advantages: it would divert German troops from the Maginot Line, and if iron ore deliveries to Germany were prevented, it would have a severe effect on the German war economy. The British would have to carry the naval burden and a few thousand troops of the French Foreign Legion would show the French government's determination to fight. Admiral Gabriel Auphan, the Deputy Chief of the Maritime Staff (), wrote later that the Prime Minister, Édouard Daladier, wanted swift action. The Norwegians had been warned in January and could be ignored during a "swift occupation of the main Norwegian ports and landing of an expeditionary force". Having decided to wait, the British on 1 March resolved to try to obtain permission from the Norwegians and the Swedes to allow the transit of a military force to Finland via Narvik, Kiruna and Gällivare but the Norwegian prime minister rejected the request on 4 March, the Swedish prime minister having rejected the request the day before. On 11 March the French told the War Cabinet that Daladier would be forced to resign over the Finland question, unless something was done. The British agreed to dispatch troops to Narvik regardless of whether the Norwegians acquiesced. Plan R3 In Plan R3, Major-General Pierse Mackesy, the commander of the 49th (West Riding) Infantry Division, was made land commander and Admiral Edward Evans the naval commander, with Audet in command of the . The British commanders were briefed on 12 March that they were to land a force at Narvik, assist Finland and deny Russia and Germany the Swedish iron ore fields for as long as they could. The force was only to attempt a landing if the Norwegians made only token resistance. Force was not to be used except in self-defence. The plan caused confusion in the War Cabinet because several partly-trained British divisions were to be imposed on Norway and Sweden. Reaching Finland was unlikely and the force might have to re-embark if the Norwegians resisted. During 12 March the War Cabinet decided only to implement the Narvik landing and seize the railway terminus. On 13 March the embarkation began, only to be cancelled that day on the news of the Finnish capitulation to the USSR. Churchill and the Chief of the Imperial General Staff, General Sir Edmund Ironside, tried to get permission to land at Narvik but were rebuffed, most of the troops for Operation Avonmouth being sent to France and the sent to their base. The French ships resumed their normal duties and the British ships went back to the Northern Patrol. Operation Royal Marine By late March 1940, after the resignation of Daladier and the appointment of Paul Reynaud as prime minister of France, at the Supreme War Council, Chamberlain presented Operation Royal Marine, a scheme to put floating mines into the Rhine to disrupt river traffic downriver in the Rhineland. The French agreed to the plan provided that it was linked to mining operations in the Norwegian Leads. By 1 April a warning would have been sent to the Norwegian and Swedish governments that the Allies would stop the passage of German iron-ore ships. A few days later mines would be laid in the Leads and operations against German shipping would be undertaken as floating mines were to be placed in the Rhine and other German rivers.
Churchill and Ironside managed to get a decision that British and French troops were to go to Narvik and advance to the frontier with Sweden. The French Admiral Darlan saw the landing plan as a catalyst to bring out the German fleet and sent orders for the French forces which had just been disbanded to be reassembled; the War Office began to gather the forces that had dispersed after the cancellation of Operation Stratford and Operation Avonmouth. German plans On 3 April, the British began to receive reports of an accumulation of shipping and troops in the Baltic German ports of Rostock, Stettin and Swinemünde. It was assumed that it was part of a force being sent to counter an Allied move against Scandinavia (the Germans had some awareness of Allied plans as a result of their own intelligence) and so that day, the British took the decision to proceed with the mining of the iron ore route separately from Operation Royal Marine, setting a date of 8 April for the Admiralty to implement it. Prelude Operation Wilfred The mining plan became Operation Wilfred and the new landing operation Plan R4. Force WV, consisting of four destroyer minelayers and four escorting destroyers, was to lay mines off just south of the Lofoten Islands in Vestfjorden (67°24'N, 14°36'E) in the channel leading to Narvik. Force WS, the auxiliary minelayer and four destroyers, was to lay mines off Stadtlandet (62°N, 5°E). Force WB, with two destroyers, was to lay a dummy minefield off the Bud headland, south of Kristiansund (62°54'N, 6°55'E). If the Norwegians swept the mines, they were to be replaced by the minelayers. Plan R4 The British anticipated that Operation Wilfred would prompt German retaliation and Plan R4 was a scheme to forestall German landings by occupying Stavanger, Bergen, Trondheim and Narvik as soon as the Germans revealed their intentions. Brigadier C. G. Phillips and two battalions of infantry for Bergen and two for Stavanger embarked at Rosyth on 7 April in the cruisers , , and . Troops for Narvik were assembled on the Clyde to commence embarkation on the morning of 8 April, to depart later in the day, in six destroyers, escorted by the cruisers and ; Admiral Evans and Major-General Mackesy on Aurora. Although waiting on the Germans conceded the initiative, sixteen submarines were sent to patrol the likely German approach routes to give warning. An infantry battalion bound for Trondheim was due to follow on 9 April. Plan R4 expected that the British troops would be able to hold their positions until reinforced. Operation On 3 April, the cruisers Berwick, York, Devonshire and Glasgow with the destroyers , , , , and embarked their troops at Rosyth to be transported to Norway for Plan R4. Additional troops embarked onto transport ships in the Clyde with other troops, held in readiness until evidence of German intentions gave a pretext to send them to Norway. On 5 April a large force of warships, escorted by the battlecruiser and the cruiser , comprising elements of Operation Wilfred and Plan R4, set out from the main British naval base at Scapa Flow for the Norwegian coast. On 7 April, the force split, one to carry on to Narvik, the others to carry out Wilfred to the south. If the Norwegians swept the minefields, the British would lay new ones close by. If the Norwegians challenged the British ships, the latter were to inform them that they were there to protect merchant vessels. The British would then withdraw, leaving the Norwegians to guard the area.
As Force WS sailed for Stadtlandet on 7 April, German ships were sighted in the Heligoland Bight on passage to Norway and the mine laying was cancelled. Early the next day, 8 April, the day scheduled for Wilfred, the British government informed the Norwegian authorities of its intention to mine Norwegian territorial waters. Soon afterwards, Force WB simulated mine laying off the Bud headland by using oil drums and patrolled the area to "warn" shipping of the danger. Force WV laid the minefield in the mouth of Vestfjord. At 05:15 that morning, the Allies broadcast a statement to the world that justified their action and defined the mined areas. The Norwegian government issued a strong protest and demanded their immediate removal; the German fleet was already advancing up the Norwegian coasts. Later that day, the ore carrier , sailing from Stettin in northern Germany, was sunk in the Skagerrak by the Polish submarine . The ship was carrying troops, horses and tanks for the German invasion of Norway, part of Operation Weserübung. Around half of the 300 men on board were drowned, survivors telling the crews of the Norwegian fishing boats that picked them up that they were on their way to Bergen to defend it from the British. Aftermath Analysis With Operation Wilfred complete, the southern ships of Force WS and Force WB rejoined the Home Fleet and took part in Operation Rupert, British operations against the German invasion of Norway. Force WV to the north confronted the German landings. The Norwegians were taken by surprise by the German invasion on 9 April, which began with German landings in the Norwegian cities of Stavanger, Oslo, Trondheim, Narvik and Bergen. British and French troops landed at Narvik on 14 April to assist the Norwegians, pushing the Germans out of the town and almost forcing them to surrender. Despite Allied landings between 18 and 23 April, the Norwegians surrendered on 9 June 1940. Operation Wilfred failed to cut off iron ore shipments to Germany but for the rest of the war British ships and aircraft could enter Norwegian waters and attack German ships at will. Subsequent events (Lieutenant-Commander Gerard Roope) had become detached from the main force on 6 April to look for a man lost overboard and encountered the German heavy cruiser . Glowworm carried out a torpedo attack and after receiving return fire and suffering severe damage, she rammed Admiral Hipper, sinking soon afterwards, with the loss of 111 men; Roope was awarded a posthumous Victoria Cross. Renown, which had diverted to assist Glowworm, fought the Action off Lofoten with the German battleships and off the coast. The Germans disengaged from the battle, drawing Renown and her escorts away from the German landings at Narvik. The 2nd Destroyer Flotilla, which had taken part in the mining of the Vestfjord, took part in the First Naval Battle of Narvik (10 April). Icarus captured Alster (11 April) and took part in the Second Naval Battle of Narvik (13 April 1940). British order of battle Home Fleet From Rosyth, 7 April From Rosyth, 8 April Covering force Force WV (Mouth of Vestfjord) Force WS (Stadtlandet) Force WB (Bud headland) See also Operation Catherine (proposed Baltic operation) Notes Footnotes References Further reading Norwegian campaign 1940 in Norway Military operations directly affecting Sweden during World War II Maritime incidents in Norway Norway–United Kingdom relations Mine warfare Blockades of World War II
Operation Wilfred
[ "Engineering" ]
2,848
[ "Military engineering", "Mine warfare" ]
1,588,048
https://en.wikipedia.org/wiki/Akathaso
Akathaso () are Burmese nats (spirits) who inhabit the tops of trees and serve as guardians of the sky. They are related to Thitpin Saunt Nat and Myay Saunt Nat, who respectively live on the trunks and roots of the trees. Myay Saunt Nats are guardian spirits of the earth, while Thitpin Saunt Nats are guardian spirits of trees. Notes References Burmese nats Sky and weather deities
Akathaso
[ "Physics" ]
96
[ "Weather", "Sky and weather deities", "Physical phenomena" ]
1,588,071
https://en.wikipedia.org/wiki/Gairdner%20Foundation
The Gairdner Foundation is a non-profit organization devoted to the recognition of outstanding achievements in biomedical research worldwide. It was created in 1957 by James Arthur Gairdner to recognize and reward the achievements of medical researchers whose work contributes significantly to improving the quality of human life. Since the first awards were made in 1959, the Gairdner Awards have become Canada's most prestigious medical awards, recognizing and celebrating the research of the world's best and brightest biomedical researchers. Since 1959, more than 390 Canada Gairdner Awards have been given to scientists from 35 countries; of these recipients, 98 have subsequently gone on to win a Nobel Prize. History The Gairdner Foundation was created in 1957 by James Arthur Gairdner (1893–1971). Known as Big Jim to his grandchildren, he was, indeed, a larger-than-life figure. Described by his friends as a talented maverick and visionary, Gairdner was a colorful personality who lived large. He was, by turns, an athlete, a soldier, a stockbroker, a businessman, a philanthropist and a landscape painter. When he died, he left his private estate to the Town of Oakville as an art gallery, which still operates today. While he had always had an interest in medicine, it was the onset of severe arthritis in his early 50s that led Gairdner to become involved with the newly created Canadian Arthritis and Rheumatism Society. In 1957 he donated $500,000 to establish a foundation to recognize major research contributions in the conquest of disease and human suffering. The Gairdner Foundation was thus born, which was to be his most lasting legacy. Awards There are three types of awards: Canada Gairdner International Awards: given annually to individuals from a diversity of fields for outstanding discoveries or contributions to biomedical science. Canada Gairdner Global Health Award: recognizes those who have made scientific advances in one of four areas: basic science, clinical science or population or environmental health. The advances must have made, or have the potential to make a significant impact on health in the developing world. Canada Gairdner Wightman Award: given to a Canadian who has demonstrated outstanding leadership in medicine and medical science. Each laureate receives $100,000 CDN that they can put towards anything they wish. Laureates in the past have put their winnings towards their labs and their research, or have even paid for a niece to attend medical school. The Canada Gairdner Awards are supported by the governments of Canada, Alberta, Quebec and Ontario. In February 2008 the Federal Government announced a $20 million allocation to the Gairdner Foundation to increase the prizes to $100,000 each, and institute a new individual prize in Global Health. Commencing in 2009, the Awards have been renamed the Canada Gairdner International Awards. Board of directors A 14-member Board of Directors consisting of three members of the Gairdner family and twelve leading figures in Canadian business and scientific life oversee the work of the Foundation. The Directors provide logistical support to the Medical Review Panel and the Medical Advisory Board, and are also engaged in fundraising for the Foundation and planning for its future growth. Awards Adjudication Committees The Gairdner reputation rests squarely on the outstanding quality of its adjudication process. The model for adjudication that James Gairdner outlined in 1959 remains essentially intact.
The nominations for the Canada Gairdner International Awards go through a two-stage adjudication process. The first assessment is done by a group of over 30 leading scientists from across Canada. They select a short list of approximately 20 candidates, which is then given to the Medical Advisory Board (MAB), composed of 24 Canadian and international scientists. Each January, the MAB meets in Toronto to review the nominations submitted by the Medical Review Panel. After an in-depth study and lengthy discussion of each nominee, comparing their work with others in their respective field, secret ballots are cast and the five annual winners chosen. The Canada Gairdner Global Health Award was initiated in 2009 – when Gairdner received a $20 million allocation from the Government of Canada – and it quickly became the most important award in the field. The winners are selected by the Global Health Advisory Committee, a group of 12 domestic and international scientists. After a comprehensive evaluation process, the committee selects the eventual winner from the pool of submitted nominations through a secret ballot. Gairdner National Program Each October, as part of the Gairdner Foundation's mandate to communicate the work of medical researchers to others, the most recent Canada Gairdner awardees, along with awardees from years past, visit universities across Canada to provide academic lectures on their various areas of expertise. References Biomedical research foundations Non-profit organizations based in Ontario Medical and health organizations based in Ontario Scientific organizations established in 1957
Gairdner Foundation
[ "Engineering", "Biology" ]
966
[ "Biotechnology organizations", "Biomedical research foundations" ]
1,588,104
https://en.wikipedia.org/wiki/Spacer%20GIF
A spacer GIF is a small, transparent GIF image that was used in web design and HTML coding to control the visual layout of HTML elements on a web page, at a time when the HTML standard alone did not allow this. Spacer GIFs became mostly obsolete after the browser wars-fueled addition of layout attributes to HTML 2.0 table tags, and were mostly unused by the time Cascading Style Sheets became widely adopted. History David Siegel's 1996 book Creating Killer Web Sites was the first known to publish the Spacer GIF technique. According to Siegel, he invented the trick in his living room. The Cascading Style Sheets (CSS) standard diminished the use of spacer GIFs for laying out web pages. CSS can achieve the same effects in a number of ways, such as by changing the margin or padding on a given element or by explicitly setting a relative position. Usage It was recognised early on that although the size of table cells could not be set directly, each cell could contain an image through an IMG tag. The size of image tags could be set independently, with their WIDTH and HEIGHT attributes. The table cell would then resize itself automatically to just contain this image. It was also realized that the displayed size was controlled entirely by the attributes and was independent of the actual size of the image file used (although a real image file was still needed). Accordingly, the same image file could be used for all the many spacer images needed on a web page. The only requirement was that this image was invisible, either by being the same color as the page, or by being transparent. Spacer GIFs themselves were small transparent image files. GIF files were used as GIF was a common format that supported transparency, unlike JPEG. These files were commonly named spacer.gif, transparent.gif or 1x1.gif. Prior to the widespread adoption of Cascading Style Sheets (CSS), spacer GIFs were used to control blank space within a web page; the space could be resized through the HTML attributes given to the image. The reason a spacer GIF is invisible is so that an HTML developer can create a table cell and fill the background with a specific color that can be viewed through the transparent spacer GIF. For instance, a developer seeking to create a square blue box 500 pixels on a side could use a separate blue 500×500 graphic at the expense of additional bandwidth. Instead, the developer can specify the table cell background color and specify the dimensions of a pre-existing transparent spacer GIF. Drawbacks Designs produced often looked perfect on the designer's display, but could look entirely different on the reader's display. Different screen resolutions, browser rendering engines, and user font preferences could change the layout of the design considerably. Many designs became simply unreadable, especially as small-screen and mobile devices became popular. Implementing a design with spacer GIFs could be extremely tedious, particularly when making small changes to an existing design. Obsolescence The technique was obsolete for designing web pages by around 1998. Implementation of CSS allowed sizes of HTML objects to be set directly. Although CSS' adoption was slow, owing to poor browser implementations and developer inertia, the basic ability to control element placement as enabled by the use of spacer GIFs was usable by about 1997. In addition, table- and grid-based layouts were replaced by fluid layouts in an attempt to respond to the growing use of mobile devices to access web content.
These design methodologies abandoned the attempt to control two-dimensional layout between elements. Instead the elements would be offered to the reader's browser and the browser would place them as best it could, according to the size of the reader's browsing window. Fluid design layouts made the setting of page element sizes on the user's browser less important. This was particularly evident where it removed the need to set sizes in absolute units such as pixels. As the web designer had never been able to control the size of the reader's screen window, the attempt to set sizes rigidly had always been a mistake. References External links Single-Pixel GIF Trick @ CKWS, by David Siegel Spacer GIF example CSS2 Box Model Specification, World Wide Web Consortium PHP example to programmatically generate the smallest GIF possible spacer representations as file and data urls in both GIF and PNG format The Tiniest GIF Ever nginx Module ngx_http_empty_gif_module Spacergif.org spacer image API Web design Cascading Style Sheets Web 1.0
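The External links above mention a PHP example that programmatically generates the smallest possible GIF. As a minimal sketch of the same idea in Python, the following emits the classic 43-byte 1×1 transparent GIF89a file; the byte layout follows the GIF89a specification, and the output file name is arbitrary (chosen here to match the common names the Usage section lists):

```python
# Emit a 1x1 transparent GIF, byte by byte, per the GIF89a spec.
SPACER_GIF = bytes([
    # Header and logical screen descriptor: "GIF89a", 1x1 px,
    # a 2-entry global color table, background index 0.
    0x47, 0x49, 0x46, 0x38, 0x39, 0x61,
    0x01, 0x00, 0x01, 0x00, 0x80, 0x00, 0x00,
    # Global color table: entry 0 black, entry 1 white.
    0x00, 0x00, 0x00, 0xFF, 0xFF, 0xFF,
    # Graphic control extension: transparency flag set, zero delay,
    # transparent color index 0, extension terminator.
    0x21, 0xF9, 0x04, 0x01, 0x00, 0x00, 0x00, 0x00,
    # Image descriptor: 1x1 image at (0, 0), no local color table.
    0x2C, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x01, 0x00, 0x00,
    # LZW image data: one pixel of palette index 0, then block end.
    0x02, 0x02, 0x44, 0x01, 0x00,
    # Trailer.
    0x3B,
])

assert len(SPACER_GIF) == 43  # the classic minimal transparent GIF

with open("spacer.gif", "wb") as f:
    f.write(SPACER_GIF)
```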
Spacer GIF
[ "Engineering" ]
957
[ "Design", "Web design" ]
1,588,203
https://en.wikipedia.org/wiki/Canada%20Gairdner%20Wightman%20Award
The Canada Gairdner Wightman Award is annually awarded by the Gairdner Foundation to a Canadian who has demonstrated outstanding leadership in the field of medicine and medical science. Award winners Source: Gairdner – Past Recipients See also Gairdner Foundation Gairdner Foundation Global Health Award Gairdner Foundation International Award List of medicine awards External links Canada Gairdner Wightman Award The Gairdner Foundation Canadian science and technology awards Medicine awards
Canada Gairdner Wightman Award
[ "Technology" ]
90
[ "Science and technology awards", "Medicine awards" ]
1,588,279
https://en.wikipedia.org/wiki/Combinatorial%20principles
In proving results in combinatorics, several useful combinatorial rules or combinatorial principles are commonly recognized and used. The rule of sum, rule of product, and inclusion–exclusion principle are often used for enumerative purposes. Bijective proofs are utilized to demonstrate that two sets have the same number of elements. The pigeonhole principle often ascertains the existence of something or is used to determine the minimum or maximum number of something in a discrete context. Many combinatorial identities arise from double counting methods or the method of distinguished element. Generating functions and recurrence relations are powerful tools that can be used to manipulate sequences, and can describe if not resolve many combinatorial situations. Rule of sum The rule of sum is an intuitive principle stating that if there are a possible outcomes for an event (or ways to do something) and b possible outcomes for another event (or ways to do another thing), and the two events cannot both occur (or the two things can't both be done), then there are a + b total possible outcomes for the events (or total possible ways to do one of the things). More formally, the sum of the sizes of two disjoint sets is equal to the size of their union. Rule of product The rule of product is another intuitive principle stating that if there are a ways to do something and b ways to do another thing, then there are a · b ways to do both things. Inclusion–exclusion principle The inclusion–exclusion principle relates the size of the union of multiple sets, the size of each set, and the size of each possible intersection of the sets. The smallest example is when there are two sets: the number of elements in the union of A and B is equal to the sum of the number of elements in A and B, minus the number of elements in their intersection. Generally, according to this principle, if A1, …, An are finite sets, then |A1 ∪ A2 ∪ ⋯ ∪ An| = Σ|Ai| − Σ|Ai ∩ Aj| + Σ|Ai ∩ Aj ∩ Ak| − ⋯ + (−1)^(n−1) |A1 ∩ ⋯ ∩ An|, where each sum runs over all combinations of distinct indices i < j < k < ⋯. Rule of division The rule of division states that there are n/d ways to do a task if it can be done using a procedure that can be carried out in n ways, and for every way w, exactly d of the n ways correspond to way w. Bijective proof Bijective proofs prove that two sets have the same number of elements by finding a bijective function (one-to-one correspondence) from one set to the other. Double counting Double counting is a technique that equates two expressions that count the size of a set in two ways. Pigeonhole principle The pigeonhole principle states that if a items are each put into one of b boxes, where a > b, then one of the boxes contains more than one item. Using this one can, for example, demonstrate the existence of some element in a set with some specific properties. Method of distinguished element The method of distinguished element singles out a "distinguished element" of a set to prove some result. Generating function Generating functions can be thought of as polynomials with infinitely many terms whose coefficients correspond to terms of a sequence. This new representation of the sequence opens up new methods for finding identities and closed forms pertaining to certain sequences. The (ordinary) generating function of a sequence an is G(x) = a0 + a1x + a2x^2 + a3x^3 + ⋯, the formal power series whose coefficient of x^n is an. Recurrence relation A recurrence relation defines each term of a sequence in terms of the preceding terms. Recurrence relations may lead to previously unknown properties of a sequence, but generally closed-form expressions for the terms of a sequence are more desired. References Mathematical principles
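The inclusion–exclusion and generating-function formulas restored above can be stated compactly in LaTeX; the worked n = 3 count below is an illustrative example added here, not taken from the source:

```latex
\documentclass{article}
\begin{document}
% Inclusion-exclusion for three finite sets:
\[
|A \cup B \cup C| = |A| + |B| + |C|
  - |A \cap B| - |A \cap C| - |B \cap C|
  + |A \cap B \cap C|
\]
% Worked example: integers from 1 to 100 divisible by 2, 3, or 5.
% Singles: 50, 33, 20; pairwise (div. by 6, 10, 15): 16, 10, 6;
% triple (div. by 30): 3.
\[
50 + 33 + 20 - 16 - 10 - 6 + 3 = 74
\]
% The ordinary generating function of a sequence a_n:
\[
G(x) = \sum_{n \ge 0} a_n x^n
\]
\end{document}
```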
Combinatorial principles
[ "Mathematics" ]
710
[ "Mathematical principles", "Discrete mathematics", "Combinatorics" ]
1,588,344
https://en.wikipedia.org/wiki/Equivalence%20point
The equivalence point, or stoichiometric point, of a chemical reaction is the point at which chemically equivalent quantities of reactants have been mixed. For an acid-base reaction the equivalence point is where the moles of acid and the moles of base would neutralize each other according to the chemical reaction. This does not necessarily imply a 1:1 molar ratio of acid:base, merely that the ratio is the same as in the chemical reaction. It can be found by means of an indicator, for example phenolphthalein or methyl orange. The endpoint (related to, but not the same as the equivalence point) refers to the point at which the indicator changes color in a colorimetric titration. Methods to determine the equivalence point Different methods to determine the equivalence point include: pH indicator A pH indicator is a substance that changes color in response to a chemical change. An acid-base indicator (e.g., phenolphthalein) changes color depending on the pH. Redox indicators are also frequently used. A drop of indicator solution is added to the titration at the start; when the color changes the endpoint has been reached; this is an approximation of the equivalence point. Conductance The conductivity of a solution depends on the ions that are present in it. During many titrations, the conductivity changes significantly. (For instance, during an acid-base titration, the H3O+ and OH− ions react to form neutral H2O. This changes the conductivity of the solution.) The total conductance of the solution depends also on the other ions present in the solution (such as counter ions). Not all ions contribute equally to the conductivity; this also depends on the mobility of each ion and on the total concentration of ions (ionic strength). Thus, predicting the change in conductivity is harder than measuring it. Color change In some reactions, the solution changes color without any added indicator. This is often seen in redox titrations, for instance, when the different oxidation states of the product and reactant produce different colors. Precipitation If the reaction forms a solid, then a precipitate will form during the titration. A classic example is the reaction between Ag+ and Cl− to form the very insoluble salt AgCl. Surprisingly, this usually makes it difficult to determine the endpoint precisely. As a result, precipitation titrations often have to be done as back titrations. Isothermal titration calorimeter An isothermal titration calorimeter uses the heat produced or consumed by the reaction to determine the equivalence point. This is important in biochemical titrations, such as the determination of how substrates bind to enzymes. Thermometric titrimetry Thermometric titrimetry is an extraordinarily versatile technique. This is differentiated from calorimetric titrimetry by the fact that the heat of the reaction (as indicated by temperature rise or fall) is not used to determine the amount of analyte in the sample solution. Instead, the equivalence point is determined by the rate of temperature change. Because thermometric titrimetry is a relative technique, it is not necessary to conduct the titration under isothermal conditions, and titrations can be conducted in plastic or even glass vessels, although these vessels are generally enclosed to prevent stray draughts from causing "noise" and disturbing the endpoint. Because thermometric titrations can be conducted under ambient conditions, they are especially well-suited to routine process and quality control in industry.
Depending on whether the reaction between the titrant and analyte is exothermic or endothermic, the temperature will either rise or fall during the titration. When all analyte has been consumed by reaction with the titrant, a change in the rate of temperature increase or decrease reveals the equivalence point and an inflection in the temperature curve can be observed. The equivalence point can be located precisely by employing the second derivative of the temperature curve. The software used in modern automated thermometric titration systems employs sophisticated digital smoothing algorithms so that "noise" resulting from the highly sensitive temperature probes does not interfere with the generation of a smooth, symmetrical second derivative "peak" which defines the endpoint. The technique is capable of very high precision, and coefficients of variance (CVs) of less than 0.1 are common. Modern thermometric titration temperature probes consist of a thermistor which forms one arm of a Wheatstone bridge. Coupled to high resolution electronics, the best thermometric titration systems can resolve temperatures to 10−5 K. Sharp equivalence points have been obtained in titrations where the temperature change during the titration has been as little as 0.001 K. The technique can be applied to essentially any chemical reaction in a fluid where there is an enthalpy change, although reaction kinetics can play a role in determining the sharpness of the endpoint. Thermometric titrimetry has been successfully applied to acid-base, redox, EDTA, and precipitation titrations. Examples of successful precipitation titrations are sulfate by titration with barium ions, phosphate by titration with magnesium in ammoniacal solution, chloride by titration with silver nitrate, nickel by titration with dimethylglyoxime and fluoride by titration with aluminium (as K2NaAlF6). Because the temperature probe does not need to be electrically connected to the solution (as in potentiometric titrations), non-aqueous titrations can be carried out as easily as aqueous titrations. Solutions which are highly colored or turbid can be analyzed by thermometric titrimetry without further sample treatment. The probe is essentially maintenance-free. Using modern, high precision stepper motor driven burettes, automated thermometric titrations are usually complete in a few minutes, making the technique an ideal choice where high laboratory productivity is required. Spectroscopy Spectroscopy can be used to measure the absorption of light by the solution during the titration, if the spectrum of the reactant, titrant or product is known. The relative amounts of the product and reactant can be used to determine the equivalence point. Alternatively, the presence of free titrant (indicating that the reaction is complete) can be detected at very low levels. An example of robust endpoint detectors for etching of semiconductors is EPD-6, a system probing reactions at up to six different wavelengths. Amperometry Amperometry can be used as a detection technique (amperometric titration). The current due to the oxidation or reduction of either the reactants or products at a working electrode will depend on the concentration of that species in solution. The equivalence point can then be detected as a change in the current. This method is most useful when the excess titrant can be reduced, as in the titration of halides with Ag+. (This is handy also in that it ignores precipitates.)
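The thermometric passage above locates the endpoint at the extremum of the second derivative of the temperature curve. A minimal sketch of that idea follows, assuming an already-smoothed temperature-versus-volume series; the function and variable names are invented for the example, and production systems would apply the digital smoothing the text describes before differentiating:

```python
import numpy as np

def endpoint_index(volume, temperature):
    """Return the index where |d2T/dV2| peaks, i.e. the inflection
    in the temperature curve that marks the titration endpoint."""
    dT = np.gradient(temperature, volume)   # dT/dV
    d2T = np.gradient(dT, volume)           # d2T/dV2
    # The kink in T appears as a peak (or trough) in the second
    # derivative; take its largest-magnitude point.
    return int(np.argmax(np.abs(d2T)))

# Hypothetical exothermic titration: temperature rises steadily,
# then flattens once all analyte is consumed near 10 mL.
v = np.linspace(0.0, 20.0, 201)
t = 25.0 + np.where(v < 10.0, 0.2 * v, 2.0)
print(v[endpoint_index(v, t)])  # prints a volume near 10.0
```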
See also Titration References External links Equivalence points of virtual and real acid-base titrations - Software Program Example of robust industrial endpoint detector Graphical method to solve acid-base problems, including titrations Graphic and numerical solver for acid-base problems - Software Program for phone and tablets Titration
Equivalence point
[ "Chemistry" ]
1,531
[ "Instrumental analysis", "Titration" ]
1,588,451
https://en.wikipedia.org/wiki/Screw%20thread
A screw thread is a helical structure used to convert between rotational and linear movement or force. A screw thread is a ridge wrapped around a cylinder or cone in the form of a helix, with the former being called a straight thread and the latter called a tapered thread. A screw thread is the essential feature of the screw as a simple machine and also as a threaded fastener. The mechanical advantage of a screw thread depends on its lead, which is the linear distance the screw travels in one revolution. In most applications, the lead of a screw thread is chosen so that friction is sufficient to prevent linear motion being converted to rotary, that is so the screw does not slip even when linear force is applied, as long as no external rotational force is present. This characteristic is essential to the vast majority of its uses. The tightening of a fastener's screw thread is comparable to driving a wedge into a gap until it sticks fast through friction and slight elastic deformation. Applications Screw threads have several applications: Fastening: Fasteners such as wood screws, plastic screws, machine screws, nuts, and bolts. Connecting threaded pipes and hoses to each other and to caps and fixtures. Gear reduction via worm drives Moving objects linearly by converting rotary motion to linear motion, as in the leadscrew of a jack. Measuring by correlating linear motion to rotary motion (and simultaneously amplifying it), as in a micrometer. Both moving objects linearly and simultaneously measuring the movement, combining the two aforementioned functions, as in a leadscrew of a lathe. In all of these applications, the screw thread has two main functions: It converts rotary motion into linear motion. It prevents linear motion without the corresponding rotation. Design Gender Every matched pair of threads, external and internal, can be described as male and female. Generally speaking, the threads on an external surface are considered male, while the ones on an internal surface are considered female. For example, a screw has male threads, while its matching hole (whether in nut or substrate) has female threads. This property is called gender. Assembling a male-threaded fastener to a female-threaded one is called mating. Handedness The helix of a thread can twist in two possible directions, which is known as handedness. Most threads are oriented so that the threaded item, when seen from a point of view on the axis through the center of the helix, moves away from the viewer when it is turned in a clockwise direction, and moves towards the viewer when it is turned counterclockwise. This is known as a right-handed (RH) thread, because it follows the right-hand grip rule. Threads oriented in the opposite direction are known as left-handed (LH). By common convention, right-handedness is the default handedness for screw threads. Therefore, most threaded parts and fasteners have right-handed threads. Left-handed thread applications include: Where the rotation of a shaft would cause a conventional right-handed nut to loosen rather than to tighten due to applied torque or to fretting induced precession. 
Examples include: The left foot pedal on a bicycle The left grinding wheel on a bench grinder The axle nuts, or less commonly, lug nuts on the left side of some automobiles The securing nut on some circular saw blades – the large torque at startup should tend to tighten the nut The spindle on brushcutter and line trimmer heads, so that the torque tends to tighten rather than loosen the connection The hand-tightened nut holding the fan blade to the motor spindle in many designs of oscillating table fans and floor standing fans In combination with right-hand threads in turnbuckles and clamping studs In some gas supply connections to prevent dangerous misconnections, for example: In gas welding the flammable gas supply uses left-handed threads, while the oxygen supply, if there is one, has a conventional thread The POL valve for LPG cylinders In a situation where neither threaded pipe end can be rotated to tighten or loosen the joint (e.g. in traditional heating pipes running through several rooms in a building). In such a case, the coupling will have one right-handed and one left-handed thread. In some instances, for example early ballpoint pens, to provide a "secret" method of disassembly In artillery projectiles, anything that screws into the projectile must be given consideration as to what will happen when the projectile is fired, e.g., anything that screws into the base from the bottom of the projectile must be left-hand threaded In mechanisms to give a more intuitive action, as: The leadscrew of the cross slide of a lathe to cause the cross slide to move away from the operator when the leadscrew is turned clockwise The depth of cut screw of a "Bailey" (or "Stanley-Bailey") type metal plane (tool) for the blade to move in the direction of a regulating right hand finger Some Edison base lamps and fittings (such as those formerly used on the New York City Subway or the pre-World War I Sprague-Thomson rolling stock of the Paris Metro) have a left-hand thread to deter theft, because they cannot be used in other light fixtures Form The cross-sectional shape of a thread is often called its form or threadform (also spelled thread form). It may be square, triangular, trapezoidal, or other shapes. The terms form and threadform sometimes refer to all design aspects taken together (cross-sectional shape, pitch, and diameters), but commonly refer to the standardized geometry used by the screw. Major categories of threads include machine threads, material threads, and power threads. Most triangular threadforms are based on an isosceles triangle. These are usually called V-threads or vee-threads because of the shape of the letter V. For 60° V-threads, the isosceles triangle is, more specifically, equilateral. For buttress threads, the triangle is scalene. The theoretical triangle is usually truncated to varying degrees (that is, the tip of the triangle is cut short). A V-thread in which there is no truncation (or a minuscule amount considered negligible) is called a sharp V-thread.
Truncation occurs (and is codified in standards) for practical reasons—the thread-cutting or thread-forming tool cannot practically have a perfectly sharp point, and truncation is desirable anyway, because otherwise: The cutting or forming tool's edge will break too easily; The part or fastener's thread crests will have burrs upon cutting, and will be too susceptible to additional future burring resulting from dents (nicks); The roots and crests of mating male and female threads need clearance to ensure that the sloped sides of the V meet properly despite error in pitch diameter and dirt and nick-induced burrs. The point of the threadform adds little strength to the thread. In ball screws, the male-female pairs have bearing balls in between. Roller screws use conventional thread forms and threaded rollers instead of balls. Angle The included angle characteristic of the cross-sectional shape is often called the thread angle. For most V-threads, this is standardized as 60 degrees, but any angle can be used. The cross section to measure this angle lies on a plane which includes the axis of the cylinder or cone on which the thread is produced. Lead, pitch, and starts Lead () and pitch are closely related concepts. They can be confused because they are the same for most screws. Lead is the distance along the screw's axis that is covered by one complete rotation of the screw thread (360°). Pitch is the distance from the crest of one thread to the next one at the same point. Because the vast majority of screw threadforms are single-start threadforms, their lead and pitch are the same. Single-start means that there is only one "ridge" wrapped around the cylinder of the screw's body. Each time that the screw's body rotates one turn (360°), it has advanced axially by the width of one ridge. "Double-start" means that there are two "ridges" wrapped around the cylinder of the screw's body. Each time that the screw's body rotates one turn (360°), it has advanced axially by the width of two ridges. Another way to express this is that lead and pitch are parametrically related, and the parameter that relates them, the number of starts, very often has a value of 1, in which case their relationship becomes equality. In general, lead is equal to pitch times the number of starts. Whereas metric threads are usually defined by their pitch, that is, how much distance per thread, inch-based standards usually use the reverse logic, that is, how many threads occur per a given distance. Thus, inch-based threads are defined in terms of threads per inch (TPI). Pitch and TPI describe the same underlying physical property—merely in different terms. When the inch is used as the unit of measurement for pitch, TPI is the reciprocal of pitch and vice versa. For example, a -20 thread has 20 TPI, which means that its pitch is inch (). As the distance from the crest of one thread to the next, pitch can be compared to the wavelength of a wave. Another wave analogy is that pitch and TPI are inverses of each other in a similar way that period and frequency are inverses of each other. Coarse versus fine Coarse threads are those with larger pitch (fewer threads per axial distance), and fine threads are those with smaller pitch (more threads per axial distance). Coarse threads have a larger threadform relative to screw diameter, where fine threads have a smaller threadform relative to screw diameter. 
This distinction is analogous to that between coarse teeth and fine teeth on a saw or file, or between coarse grit and fine grit on sandpaper. The common V-thread standards (ISO 261 and Unified Thread Standard) include a coarse pitch and a fine pitch for each major diameter. For example, -13 belongs to the UNC series (Unified National Coarse) and -20 belongs to the UNF series (Unified National Fine). Similarly, M10 (10 mm nominal outer diameter) as per ISO 261 has a coarse thread version at 1.5 mm pitch and a fine thread version at 1.25 mm pitch. The term coarse here does not mean lower quality, nor does the term fine imply higher quality. The terms when used in reference to screw thread pitch have nothing to do with the tolerances used (degree of precision) or the amount of craftsmanship, quality, or cost. They simply refer to the size of the threads relative to the screw diameter. Coarse threads are more resistant to stripping and cross threading because they have greater flank engagement. Coarse threads install much faster as they require fewer turns per unit length. Finer threads are stronger as they have a larger stress area for the same diameter thread. Fine threads are less likely to vibrate loose as they have a smaller helix angle and allow finer adjustment. Finer threads develop greater preload with less tightening torque. Diameters There are three characteristic diameters (⌀) of threads: major diameter, minor diameter, and pitch diameter: Industry standards specify minimum (min.) and maximum (max.) limits for each of these, for all recognized thread sizes. The minimum limits for external (or bolt, in ISO terminology), and the maximum limits for internal (nut), thread sizes are there to ensure that threads do not strip at the tensile strength limits for the parent material. The minimum limits for internal, and maximum limits for external, threads are there to ensure that the threads fit together. Major diameter The major diameter of threads is the larger of two extreme diameters delimiting the height of the thread profile, as a cross-sectional view is taken in a plane containing the axis of the threads. For a screw, this is its outside diameter (OD). The major diameter of a nut cannot be directly measured (as it is obstructed by the threads themselves) but it may be tested with go/no-go gauges. The major diameter of external threads is normally smaller than the major diameter of the internal threads, if the threads are designed to fit together. But this requirement alone does not guarantee that a bolt and a nut of the same pitch would fit together: the same requirement must separately be made for the minor and pitch diameters of the threads. Besides providing for a clearance between the crest of the bolt threads and the root of the nut threads, one must also ensure that the clearances are not so excessive as to cause the fasteners to fail. Minor diameter The minor diameter is the lower extreme diameter of the thread. Major diameter minus minor diameter, divided by two, equals the height of the thread. The minor diameter of a nut is its inside diameter. The minor diameter of a bolt can be measured with go/no-go gauges or, directly, with an optical comparator. As shown in the figure at right, threads of equal pitch and angle that have matching minor diameters, with differing major and pitch diameters, may appear to fit snugly, but only do so radially; threads that have only major diameters matching (not shown) could also be visualized as not allowing radial movement. 
The reduced material condition, due to the unused spaces between the threads, must be minimized so as not to overly weaken the fasteners. In order to fit a male thread into the corresponding female thread, the female major and minor diameters must be slightly larger than the male major and minor diameters. However, this excess does not usually appear in tables of sizes. Calipers measure the female minor diameter (inside diameter, ID), which is less than the caliper measurement of the male major diameter (outside diameter, OD). For example, tables of caliper measurements show 0.69 female ID and 0.75 male OD for the standards of "3/4 SAE J512" threads and "3/4-14 UNF JIS SAE-J514 ISO 8434-2". Note the female threads are identified by the corresponding male major diameter (3/4 inch), not by the actual measurement of the female threads. Pitch diameter The pitch diameter (PD, or D2) of a particular thread, internal or external, is the diameter of a cylindrical surface, axially concentric to the thread, which intersects the thread flanks at equidistant points; when viewed in a cross-sectional plane containing the axis of the thread, the distance between these points is exactly one half of the pitch. Equivalently, a line running parallel to the axis and a distance D2 away from it, the "PD line," slices the sharp-V form of the thread, having flanks coincident with the flanks of the thread under test, at exactly 50% of its height. We have assumed that the flanks have the proper shape, angle, and pitch for the specified thread standard. It is generally unrelated to the major (D) and minor (D1) diameters, especially if the crest and root truncations of the sharp-V form at these diameters are unknown. Everything else being ideal, D2, D, & D1, together, would fully describe the thread form. Knowledge of PD determines the position of the sharp-V thread form, the sides of which coincide with the straight sides of the thread flanks: e.g., the crest of the external thread would truncate these sides a radial displacement D − D2 away from the position of the PD line. Provided that there are moderate non-negative clearances between the root and crest of the opposing threads, and everything else is ideal, if the pitch diameters of a screw and nut are exactly matched, there should be no play at all between the two as assembled, even in the presence of positive root-crest clearances. This is the case when the flanks of the threads come into intimate contact with one another, before the roots and crests do, if at all. However, this ideal condition would in practice only be approximated and would generally require wrench-assisted assembly, possibly causing the galling of the threads. For this reason, some allowance, or minimum difference, between the PDs of the internal and external threads has to generally be provided for, to eliminate the possibility of deviations from the ideal thread form causing interference and to expedite hand assembly up to the length of engagement. Such allowances, or fundamental deviations, as ISO standards call them, are provided for in various degrees in corresponding classes of fit for ranges of thread sizes. At one extreme, no allowance is provided by a class, but the maximum PD of the external thread is specified to be the same as the minimum PD of the internal thread, within specified tolerances, ensuring that the two can be assembled, with some looseness of fit still possible due to the margin of tolerance.
A class called interference fit may even provide for negative allowances, where the PD of the screw is greater than the PD of the nut by at least the amount of the allowance. The pitch diameter of external threads is measured by various methods: A dedicated type of micrometer, called a thread mic or pitch mic, which has a V-anvil and a conical spindle tip, contacts the thread flanks for a direct reading. A general-purpose micrometer (flat anvil and spindle) is used over a set of three wires that rest on the thread flanks, and a known constant is subtracted from the reading. (The wires are truly gauge pins, being ground to precise size, although "wires" is their common name.) This method is called the 3-wire method. Sometimes grease is used to hold the wires in place, helping the user to juggle the part, mic, and wires into position. An optical comparator may also be used to determine PD graphically. Classes of fit The way in which male and female fit together, including play and friction, is classified (categorized) in thread standards. Achieving a certain class of fit requires the ability to work within tolerance ranges for dimension (size) and surface finish. Defining and achieving classes of fit are important for interchangeability. Classes include 1, 2, 3 (loose to tight); A (external) and B (internal); and various systems such as H and D limits. Tolerance classes Thread limit Thread limit or pitch diameter limit is a standard used for classifying the tolerance of the thread pitch diameter for taps. For imperial, H or L limits are used which designate how many units of 0.0005 inch over or undersized the pitch diameter is from its basic value, respectively. Thus a tap designated with an H limit of 3, denoted H3, would have a pitch diameter 0.0005 × 3 = 0.0015 inch larger than the base pitch diameter and would thus result in cutting an internal thread with a looser fit than, say, an H2 tap. Metric uses D or DU limits, which is the same system as imperial but uses D or DU designators for over- and undersized taps respectively, and goes by units of . Generally taps come in the range of H1 to H5 and rarely L1. The pitch diameter of a thread is measured where the radial cross section of a single thread equals half the pitch; for example, a 16-pitch thread has a pitch of 1/16 in = 0.0625 in, so its pitch diameter is measured where the radial cross section of a single thread measures 0.03125 in. Interchangeability To achieve a predictably successful mating of male and female threads and assured interchangeability between males and between females, standards for form, size, and finish must exist and be followed. Standardization of threads is discussed below. Thread depth Screw threads are almost never made perfectly sharp (no truncation at the crest or root), but instead are truncated, yielding a final thread depth that can be expressed as a fraction of the pitch value. The UTS and ISO standards codify the amount of truncation, including tolerance ranges. A perfectly sharp 60° V-thread will have a depth of thread ("height" from root to crest) equal to 0.866 of the pitch. This fact is intrinsic to the geometry of an equilateral triangle — a direct result of the basic trigonometric functions. It is independent of measurement units (inch vs mm). However, UTS and ISO threads are not sharp threads. The major and minor diameters delimit truncations on either side of the sharp V. The nominal diameter of Metric (e.g. M8) and Unified (e.g.
Interchangeability To achieve a predictably successful mating of male and female threads and assured interchangeability between males and between females, standards for form, size, and finish must exist and be followed. Standardization of threads is discussed below. Thread depth Screw threads are almost never made perfectly sharp (no truncation at the crest or root), but instead are truncated, yielding a final thread depth that can be expressed as a fraction of the pitch value. The UTS and ISO standards codify the amount of truncation, including tolerance ranges. A perfectly sharp 60° V-thread will have a depth of thread ("height" from root to crest) equal to 0.866 of the pitch. This fact is intrinsic to the geometry of an equilateral triangle, a direct result of the basic trigonometric functions, and is independent of measurement units (inch vs mm). However, UTS and ISO threads are not sharp threads. The major and minor diameters delimit truncations on either side of the sharp V. The nominal diameter of Metric (e.g. M8) and Unified (inch-based) threads is the theoretical major diameter of the male thread, which is truncated (diametrically) by one quarter of the fundamental triangle height (about 0.217 of the pitch) from the dimension over the tips of the "fundamental" (sharp-cornered) triangles. The resulting flats on the crests of the male thread are theoretically one eighth of the pitch wide (expressed with the notation p/8 or 0.125p), although the actual geometry definition has more variables than that. A full (100%) UTS or ISO thread has a height of around 0.65p. Threads can be (and often are) truncated a bit more, yielding thread depths of 60% to 75% of the 0.65p value. For example, a 75% thread sacrifices only a small amount of strength in exchange for a significant reduction in the force required to cut the thread. The result is that tap and die wear is reduced, the likelihood of breakage is lessened, and higher cutting speeds can often be employed. This additional truncation is achieved by using a slightly larger tap drill in the case of female threads, or by slightly reducing the diameter of the threaded area of the workpiece in the case of male threads, the latter effectively reducing the thread's major diameter. In the case of female threads, tap drill charts typically specify sizes that will produce an approximate 75% thread. A 60% thread may be appropriate in cases where high tensile loading will not be expected. In both cases, the pitch diameter is not affected. The balancing of truncation versus thread strength is similar to many engineering decisions involving the strength, weight and cost of material, as well as the cost to machine it.
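The percent-of-thread tap drill selection just described can be sketched with the common machinists' rule of thumb for 60° inch threads: drill = major diameter − (0.01299 × percent of thread)/TPI, where the constant 0.01299 is 1.299/100 and 1.299p is twice the full (100%) thread height of about 0.65p. A minimal illustration, with an example size chosen only to match a familiar chart entry:

```python
def tap_drill(major_dia, tpi, percent=75):
    """Approximate tap drill diameter (inches) for a 60-degree inch
    thread cut to the given percentage of full thread engagement.
    Rule of thumb: drill = major - (0.01299 * percent) / TPI."""
    return major_dia - (0.01299 * percent) / tpi

# Example: 1/4-20 UNC at 75% thread -> ~0.201 in, the familiar #7 drill.
print(round(tap_drill(0.250, 20, 75), 4))   # 0.2013
```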
Taper Tapered threads are used on fasteners and pipe. A common example of a fastener with a tapered thread is a wood screw. The threaded pipes used in some plumbing installations for the delivery of fluids under pressure have a threaded section that is slightly conical. Examples are the NPT and BSP series. The seal provided by a threaded pipe joint is created when a tapered externally threaded end is tightened into an end with internal threads. For most pipe joints, a good seal requires the application of a separate sealant into the joint, such as thread seal tape, or a liquid or paste pipe sealant such as pipe dope. History The screw thread concept seems to have occurred first to Archimedes, who briefly wrote on spirals as well as designed several simple devices applying the screw principle. Leonardo da Vinci understood the screw principle, and left drawings showing how threads could be cut by machine. In the 1500s, screws appeared in German watches, and were used to fasten suits of armor. In 1569, Besson invented the screw-cutting lathe, but the method did not gain traction and screws continued to be made largely by hand for another 150 years. In the 1800s, screw manufacturing began in England during the Industrial Revolution. In those times, there was no such thing as standardization: the bolts made by one manufacturer would not fit the nuts of another. Standardization Standardization of screw threads has evolved since the early nineteenth century to facilitate compatibility between different manufacturers and users. The standardization process is still ongoing; in particular, there are still (otherwise identical) competing metric- and inch-sized thread standards widely used. Standard threads are commonly identified by short letter codes (M, UNC, etc.), which also form the prefix of the standardized designations of individual threads. Additional product standards identify preferred thread sizes for screws and nuts, as well as corresponding bolt head and nut sizes, to facilitate compatibility between spanners (wrenches) and other tools. ISO standard threads The most common threads in use are the ISO metric screw threads (M) for most purposes, and BSP threads (R, G) for pipes. These were standardized by the International Organization for Standardization (ISO) in 1947. Although metric threads were mostly unified in 1898 by the International Congress for the standardization of screw threads, separate metric thread standards were used in France, Germany, and Japan, and the Swiss had a set of threads for watches. Other current standards In particular applications and certain regions, threads other than the ISO metric screw threads remain commonly used, sometimes because of special application requirements, but mostly for reasons of backward compatibility: Unified Thread Standard (UTS), the dominant thread standard used in the United States and Canada, defined in ANSI/ASME B1.1 Unified Inch Screw Threads (UN and UNR Thread Form). In some cases products are still made according to the old American National Standard series, which has slightly different specifications and has been technically obsolete since 1949; the old national standard is compatible with the newer unified standard, but is long out of date. The unified standard includes: Unified Coarse (UNC), the successor to the obsolete National Coarse (NC) thread. Unified Fine (UNF), the successor to the obsolete National Fine (NF) thread. Unified Extra Fine (UNEF) Unified Special (UNS) National pipe thread, used in North America for several purposes: National Pipe Taper (NPT) National Pipe Taper Fuel (NPTF), also known as Dryseal, a better-sealing version of NPT. National Pipe Taper Railing fittings (NPTR) National Pipe Straight Coupling (NPSC) National Pipe Straight Mechanical (NPSM) National Pipe Straight Locknut (NPSL) National Pipe Straight Hose coupling (NPSH) British Standard Whitworth (BSW) and other Whitworth threads, including: British Standard Fine (BSF) Cycle Engineers' Institute (CEI) or British Standard Cycle (BSC) British Standard Pipe (BSP), which exists in taper and non-taper variants and is used for other purposes as well: British Standard Pipe Taper (BSPT) British Association screw threads (BA), used primarily in electronic/electrical equipment and moving-coil meters, and to mount optical lenses British Standard Buttress Threads (BS 1657:1950) British Standard for Spark Plugs (BS 45:1972) British Standard Brass, a fixed-pitch 26 TPI thread Glass Packaging Institute threads (GPI), primarily for glass bottles and vials Power screw threads: Acme thread form Square thread form Buttress thread Royal Microscopical Society (RMS) thread, also known as society thread, a special 0.8-inch diameter × 36 thread-per-inch (TPI) Whitworth thread form used for microscope objective lenses.
Microphone stands: 5/8-inch 27 threads per inch (TPI) Unified Special thread (UNS; USA and the rest of the world) 1/4-inch BSW (not common in the US, but used in the rest of the world) 3/8-inch BSW (not common in the US, but used in the rest of the world) Stage lighting suspension bolts (in some countries only; some have gone entirely metric, others such as Australia have reverted to the BSW threads, or have never fully converted): 3/8-inch BSW for lighter luminaires 1/2-inch BSW for heavier luminaires Tapping screw threads (ST) – ISO 1478 Aerospace inch threads (UNJ) – ISO 3161, with a controlled root radius on male threads for greater fatigue strength and a larger minor diameter on female threads to clear that radius. Aerospace metric threads (MJ) – ISO 5855 Tyre valve threads (V) – ISO 4570 Metal bone screws (HA, HB) – ISO 5835 Panzergewinde (Pg) (German) is an old German 80° thread (DIN 40430) that remained in use until 2000 in some electrical installation accessories in Germany. Fahrradgewinde (Fg) (English: bicycle thread) is a German bicycle thread standard (per DIN 79012 and DIN 13.1) which encompasses many CEI and BSC threads as used on cycles and mopeds everywhere (http://www.fahrradmonteur.de/fahrradgewinde.php) Edison base incandescent light bulb holder screw thread Fire hose connection (NFPA standard 194) Hose Coupling Screw Threads (ANSI/ASME B1.20.7-1991 [R2003]) for garden hoses and accessories Löwenherz thread, a German metric thread used for measuring instruments Sewing machine thread History of standardization The first historically important intra-company standardization of screw threads began with Henry Maudslay around 1800, when the modern screw-cutting lathe made interchangeable V-thread machine screws a practical commodity. During the next 40 years, standardization continued to occur on the intra- and inter-company levels. No doubt many mechanics of the era participated in this zeitgeist; Joseph Clement was one of those whom history has noted. In 1841, Joseph Whitworth created a design that, through its adoption by many British railway companies, became a standard for the United Kingdom and British Empire called British Standard Whitworth. During the 1840s through 1860s, this standard was often used in the United States as well, in addition to myriad intra- and inter-company standards. In April 1864, William Sellers presented a paper to the Franklin Institute in Philadelphia, proposing a new standard to replace the US's poorly standardized screw thread practice. Sellers simplified the Whitworth design by adopting a thread profile of 60° and a flattened tip (in contrast to Whitworth's 55° angle and rounded tip). The 60° angle was already in common use in America, but Sellers's system promised to make it and all other details of threadform consistent. The Sellers thread, easier to produce, became an important standard in the U.S. during the late 1860s and early 1870s, when it was chosen as a standard for work done under U.S. government contracts, and it was also adopted as a standard by highly influential railroad industry corporations such as the Baldwin Locomotive Works and the Pennsylvania Railroad. Other firms adopted it, and it soon became a national standard for the U.S., later becoming generally known as the United States Standard thread (USS thread). Over the next 30 years the standard was further defined and extended and evolved into a set of standards including National Coarse (NC), National Fine (NF), and National Pipe Taper (NPT).
Meanwhile, in Britain, the British Association screw threads were also developed and refined for small instrumentation and electrical equipment. These were based on the metric Thury thread but, like the Whitworth threads, were defined using imperial units. During this era, in continental Europe, the British and American threadforms were well known, but various metric thread standards were also evolving, usually employing 60° profiles. Some of these evolved into national or quasi-national standards. They were mostly unified in 1898 by the International Congress for the standardization of screw threads at Zürich, which defined the new international metric thread standards as having the same profile as the Sellers thread, but with metric sizes. Efforts were made in the early 20th century to convince the governments of the U.S., UK, and Canada to adopt these international thread standards and the metric system in general, but they were defeated with arguments that the capital cost of the necessary retooling would drive some firms from profit to loss and hamper the economy. Sometime between 1912 and 1916, the Society of Automobile Engineers (SAE) created an "SAE series" of screw thread sizes reflecting parentage from earlier USS and American Society of Mechanical Engineers (ASME) standards. During the late 19th and early 20th centuries, engineers found that ensuring the reliable interchangeability of screw threads was a multi-faceted and challenging task that was not as simple as just standardizing the major diameter and pitch for a certain thread. It was during this era that more complicated analyses made clear the importance of variables such as pitch diameter and surface finish. A tremendous amount of engineering work was done throughout World War I and the following interwar period in pursuit of reliable interchangeability. Classes of fit were standardized, and new ways of generating and inspecting screw threads were developed (such as production thread-grinding machines and optical comparators). Therefore, in theory, one might expect that by the start of World War II, the problem of screw thread interchangeability would have already been completely solved. Unfortunately, this proved to be false. Intranational interchangeability was widespread, but international interchangeability was less so. Problems with lack of interchangeability among American, Canadian, and British parts during World War II led to an effort to unify the inch-based standards among these closely allied nations, and the Unified Thread Standard was adopted by the Screw Thread Standardization Committees of Canada, the United Kingdom, and the United States on November 18, 1949, in Washington, D.C., with the hope that they would be adopted universally. (The original UTS standard may be found in ASA (now ANSI) publication Vol. 1, 1949.) UTS consists of Unified Coarse (UNC), Unified Fine (UNF), Unified Extra Fine (UNEF) and Unified Special (UNS). The standard was widely taken up in the UK, although a small number of companies continued to use the UK's own British standards for Whitworth (BSW), British Standard Fine (BSF) and British Association (BA) microscrews. However, internationally, the metric system was eclipsing inch-based measurement units. In 1947, the ISO was founded; and in 1960, the metric-based International System of Units (abbreviated SI from the French Système International) was created.
With continental Europe and much of the rest of the world turning to SI and the ISO metric screw thread, the UK gradually leaned in the same direction. The ISO metric screw thread is now the standard that has been adopted worldwide and is slowly displacing all former standards, including UTS. In the U.S., where UTS is still prevalent, over 40% of products contain at least some ISO metric screw threads. The UK has completely abandoned its commitment to UTS in favour of ISO metric threads, and Canada is in between. Globalization of industries produces market pressure in favor of phasing out minority standards. A good example is the automotive industry; U.S. auto parts factories long ago developed the ability to conform to the ISO standards, and today very few parts for new cars retain inch-based sizes, regardless of being made in the U.S. Even today, over a half century since the UTS superseded the USS and SAE series, companies still sell hardware with designations such as "USS" and "SAE" to convey that it is of inch sizes as opposed to metric. Most of this hardware is in fact made to the UTS, but the labeling and cataloging terminology is not always precise. Engineering drawing In American engineering drawings, ANSI Y14.6 defines standards for indicating threaded parts. Parts are indicated by their nominal diameter (the nominal major diameter of the screw threads), pitch (number of threads per inch), and the class of fit for the thread. For example, ".750-10 UNC-2A" is male (A) with a nominal major diameter of 0.750 inches, 10 threads per inch, and a class-2 fit; ".500-20 UNF-1B" would be female (B) with a 0.500-inch nominal major diameter, 20 threads per inch, and a class-1 fit. An arrow points from this designation to the surface in question. Manufacturing There are many ways to generate a screw thread, including the traditional subtractive types (for example, various kinds of cutting [single-pointing, taps and dies, die heads, milling]; molding; casting [die casting, sand casting]; forming and rolling; grinding; and occasionally lapping to follow the other processes); newer additive techniques; and combinations thereof. Inspection A common inspection point is the straightness of a bolt or screw. This topic comes up often when there are assembly issues with predrilled holes, as the first troubleshooting step is to determine whether the fastener or the hole is at fault. ASME B18.2.9 "Straightness Gage and Gaging for Bolts and Screws" was developed to address this issue. Per the scope of the standard, it describes the gage and procedure for checking bolt and screw straightness at maximum material condition (MMC) and provides default limits when not stated in the applicable product standard. See also Anti-seize compound British Standard Cycle Dryseal Pipe Threads Form Filter thread Metric: M Profile Thread Form National Thread Form Nut (hardware) Tapered thread Tap and die Thread angle Thread pitch gauge Thread protector Thread-locking fluid Notes References External links International Thread Standards ModelFixings – Thread Data NASA RP-1228 Fastener Design Manual Fasteners Screws Threading (manufacturing)
Screw thread
[ "Engineering" ]
7,723
[ "Construction", "Fasteners" ]
1,588,509
https://en.wikipedia.org/wiki/Unimodality
In mathematics, unimodality means possessing a unique mode. More generally, unimodality means there is only a single highest value, somehow defined, of some mathematical object. Unimodal probability distribution In statistics, a unimodal probability distribution or unimodal distribution is a probability distribution which has a single peak. The term "mode" in this context refers to any peak of the distribution, not just to the strict definition of mode which is usual in statistics. If there is a single mode, the distribution function is called "unimodal". If it has more modes it is "bimodal" (2), "trimodal" (3), etc., or in general, "multimodal". Figure 1 illustrates normal distributions, which are unimodal. Other examples of unimodal distributions include the Cauchy distribution, Student's t-distribution, chi-squared distribution and exponential distribution. Among discrete distributions, the binomial distribution and Poisson distribution can be seen as unimodal, though for some parameters they can have two adjacent values with the same probability. Figure 2 and Figure 3 illustrate bimodal distributions. Other definitions Other definitions of unimodality in distribution functions also exist. In continuous distributions, unimodality can be defined through the behavior of the cumulative distribution function (cdf). If the cdf is convex for x < m and concave for x > m, then the distribution is unimodal, m being the mode. Note that under this definition the uniform distribution is unimodal, as well as any other distribution in which the maximum value of the distribution is achieved over a range of values, e.g. the trapezoidal distribution. This definition also allows for a discontinuity at the mode; usually in a continuous distribution the probability of any single value is zero, while this definition allows for a non-zero probability, or an "atom of probability", at the mode. Criteria for unimodality can also be defined through the characteristic function of the distribution or through its Laplace–Stieltjes transform. Another way to define a unimodal discrete distribution is by the occurrence of sign changes in the sequence of differences of the probabilities. A discrete distribution with a probability mass function {p_n} is called unimodal if the sequence {p_{n+1} − p_n} has exactly one sign change (where zeroes do not count).
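This sign-change criterion is easy to check numerically. The following sketch is an illustration only; it treats the mass function as preceded and followed by zeros so that strictly increasing or decreasing sequences also count as unimodal:

```python
def is_unimodal_pmf(probs):
    """Sign-change criterion for a discrete distribution: the nonzero
    differences p[k+1] - p[k] must change sign exactly once.  Padding
    with zeros at both ends makes monotone mass functions (mode at an
    endpoint) count as unimodal as well."""
    padded = [0.0] + list(probs) + [0.0]
    diffs = [b - a for a, b in zip(padded, padded[1:])]
    signs = [1 if d > 0 else -1 for d in diffs if d != 0]
    changes = sum(1 for s, t in zip(signs, signs[1:]) if s != t)
    return changes == 1

print(is_unimodal_pmf([0.1, 0.3, 0.4, 0.2]))        # True  (single peak)
print(is_unimodal_pmf([0.3, 0.1, 0.3, 0.2, 0.1]))   # False (two peaks)
```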
Uses and results One reason for the importance of distribution unimodality is that it allows for several important results. Several inequalities are given below which are only valid for unimodal distributions. Thus, it is important to assess whether or not a given data set comes from a unimodal distribution. Several tests for unimodality are given in the article on multimodal distribution. Inequalities Gauss's inequality A first important result is Gauss's inequality. Gauss's inequality gives an upper bound on the probability that a value lies more than any given distance from its mode. This inequality depends on unimodality. Vysochanskiï–Petunin inequality A second is the Vysochanskiï–Petunin inequality, a refinement of the Chebyshev inequality. The Chebyshev inequality guarantees that in any probability distribution, "nearly all" the values are "close to" the mean value. The Vysochanskiï–Petunin inequality refines this to even nearer values, provided that the distribution function is continuous and unimodal. Further results were shown by Sellke and Sellke. Mode, median and mean Gauss also showed in 1823 that for a unimodal distribution σ ≤ ω ≤ 2σ and |ν − μ| ≤ (3/4)^(1/2) ω, where the median is ν, the mean is μ and ω is the root mean square deviation from the mode. It can be shown for a unimodal distribution that the median ν and the mean μ lie within (3/5)^(1/2) ≈ 0.7746 standard deviations of each other. In symbols, |ν − μ| ≤ (3/5)^(1/2) σ, where | · | is the absolute value. In 2020, Bernard, Kazzi, and Vanduffel generalized the previous inequality by deriving the maximum distance between the symmetric quantile average (q_α + q_{1−α})/2 and the mean μ as a function of the probability level α. The maximum distance is minimized at α = 1/2 (i.e., when the symmetric quantile average equals the median), which indeed motivates the common choice of the median as a robust estimator for the mean. Moreover, when α = 1/2, the bound is equal to (3/5)^(1/2) σ, which is the maximum distance between the median and the mean of a unimodal distribution. A similar relation holds between the median and the mode θ: they lie within 3^(1/2) ≈ 1.732 standard deviations of each other: |ν − θ| ≤ 3^(1/2) σ. It can also be shown that the mean and the mode lie within 3^(1/2) standard deviations of each other: |μ − θ| ≤ 3^(1/2) σ. Skewness and kurtosis Rohatgi and Szekely claimed that the skewness and kurtosis of a unimodal distribution are related by the inequality γ² − κ ≤ 6/5, where κ is the kurtosis and γ is the skewness. Klaassen, Mokveld, and van Es showed that this only applies in certain settings, such as the set of unimodal distributions where the mode and mean coincide. They derived a weaker inequality which applies to all unimodal distributions: γ² − κ ≤ 186/125. This bound is sharp, as it is reached by the equal-weights mixture of the uniform distribution on [0,1] and the discrete distribution at {0}. Unimodal function As the term "modal" applies to data sets and probability distributions, and not in general to functions, the definitions above do not apply. The definition of "unimodal" was extended to functions of real numbers as well. A common definition is as follows: a function f(x) is a unimodal function if for some value m, it is monotonically increasing for x ≤ m and monotonically decreasing for x ≥ m. In that case, the maximum value of f(x) is f(m) and there are no other local maxima. Proving unimodality is often hard. One way consists in using the definition of that property, but it turns out to be suitable for simple functions only. A general method based on derivatives exists, but it does not succeed for every function despite its simplicity. Examples of unimodal functions include quadratic polynomial functions with a negative quadratic coefficient, tent map functions, and more. The above is sometimes referred to as strong unimodality, from the fact that the monotonicity implied is strong monotonicity. A function f(x) is a weakly unimodal function if there exists a value m for which it is weakly monotonically increasing for x ≤ m and weakly monotonically decreasing for x ≥ m. In that case, the maximum value f(m) can be reached for a continuous range of values of x. An example of a weakly unimodal function which is not strongly unimodal is every other row in Pascal's triangle. Depending on context, unimodal function may also refer to a function that has only one local minimum, rather than maximum. For example, local unimodal sampling, a method for doing numerical optimization, is often demonstrated with such a function. It can be said that a unimodal function under this extension is a function with a single local extremum. One important property of unimodal functions is that the extremum can be found using search algorithms such as golden section search, ternary search or successive parabolic interpolation.
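Golden-section search, for instance, needs only function evaluations and no derivatives: at each step it discards the part of the interval that a unimodal function cannot attain its maximum in. A minimal sketch, written for a maximum, with an arbitrary example function:

```python
import math

def golden_section_max(f, a, b, tol=1e-8):
    """Locate the maximizer of a unimodal function f on [a, b] by
    golden-section search: shrink the bracket by the golden ratio,
    keeping the half-interval that must contain the maximum."""
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while b - a > tol:
        if f(c) >= f(d):                     # maximum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                # maximum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

# Example: f(x) = -(x - 2)^2 is unimodal with its maximum at x = 2.
print(round(golden_section_max(lambda x: -(x - 2) ** 2, 0.0, 5.0), 6))  # 2.0
```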
Other extensions A function f(x) is "S-unimodal" (often referred to as an "S-unimodal map") if its Schwarzian derivative is negative for all x ≠ c, where c is the critical point. In computational geometry, if a function is unimodal it permits the design of efficient algorithms for finding the extrema of the function. A more general definition, applicable to a function f(X) of a vector variable X, is that f is unimodal if there is a one-to-one differentiable mapping X = G(Z) such that f(G(Z)) is convex. Usually one would want G(Z) to be continuously differentiable with a nonsingular Jacobian matrix. Quasiconvex functions and quasiconcave functions extend the concept of unimodality to functions whose arguments belong to higher-dimensional Euclidean spaces. See also Bimodal distribution Read's conjecture References Functions and mappings Mathematical relations Theory of probability distributions
Unimodality
[ "Mathematics" ]
1,747
[ "Mathematical analysis", "Functions and mappings", "Predicate logic", "Mathematical objects", "Basic concepts in set theory", "Mathematical relations" ]
1,588,553
https://en.wikipedia.org/wiki/Universal%20integration%20platform
A universal integration platform is a development- and/or configuration-time analog of a universal server. The emphasis on the term "platform" implies a middleware environment from which integration-oriented solutions are derived. Likewise, the term "universal" implies depth and breadth of integration capabilities that transcend disparate operating systems, protocols, APIs, data sources, programming languages, composite processes, discrete services, and monolithic applications. Related Technologies Integration platform Enterprise service bus (ESB) Enterprise information integration (EII) Relevant Protocols WebDAV HTTP SOAP UDDI SMTP POP3 IMAP NNTP m-BizMaker Relevant Data Access APIs ODBC JDBC ADO.NET OLE DB Typical Data Sources SQL XML exposed via URIs Free Text Universal Integration Platform Solutions Virtuoso Universal Server from OpenLink Software Prova Enterprise application integration
Universal integration platform
[ "Technology" ]
176
[ "Computing stubs" ]
1,588,678
https://en.wikipedia.org/wiki/Phenobarbital
Phenobarbital, also known as phenobarbitone or phenobarb, sold under the brand name Luminal among others, is a medication of the barbiturate type. It is recommended by the World Health Organization (WHO) for the treatment of certain types of epilepsy in developing countries. In the developed world, it is commonly used to treat seizures in young children, while other medications are generally used in older children and adults. It is also used for veterinary purposes. It may be administered by slow intravenous infusion (IV infusion), intramuscularly (IM), or orally (by mouth). Subcutaneous administration is not recommended. The IV or IM (injectable) forms may be used to treat status epilepticus if other drugs fail to achieve satisfactory results. Phenobarbital is occasionally used to treat insomnia, anxiety, and benzodiazepine withdrawal (as well as withdrawal from certain other drugs in specific circumstances), and prior to surgery as an anxiolytic and to induce sedation. It usually begins working within five minutes when used intravenously and half an hour when administered orally. Its effects last for between four hours and two days. Potentially serious side effects include a decreased level of consciousness and respiratory depression. There is potential for both abuse and withdrawal following long-term use. It may also increase the risk of suicide. It is pregnancy category D in Australia, meaning that it may cause harm when taken during pregnancy. If used during breastfeeding it may result in drowsiness in the baby. Phenobarbital works by increasing the activity of the inhibitory neurotransmitter GABA. Phenobarbital was discovered in 1912 and is the oldest still commonly used anti-seizure medication. It is on the World Health Organization's List of Essential Medicines. Medical uses Phenobarbital is used in the treatment of all types of seizures, except absence seizures. It is no less effective at seizure control than phenytoin, but phenobarbital is not as well tolerated. Phenobarbital may provide a clinical advantage over carbamazepine for treating partial onset seizures. Carbamazepine may provide a clinical advantage over phenobarbital for generalized onset tonic-clonic seizures. The first-line drugs for treatment of status epilepticus are benzodiazepines, such as lorazepam, clonazepam, midazolam, or diazepam. If these fail, then phenytoin may be used, with phenobarbital being an alternative in the US (favored in infants), but used only third-line in the UK. Failing that, the only treatment is anaesthesia in intensive care. The World Health Organization (WHO) gives phenobarbital a first-line recommendation in the developing world and it is commonly used there. Phenobarbital is the first-line choice for the treatment of neonatal seizures. Concerns that neonatal seizures in themselves could be harmful make most physicians treat them aggressively. No reliable evidence, though, supports this approach. Phenobarbital is sometimes used for alcohol detoxification and benzodiazepine detoxification for its sedative and anti-convulsant properties. The benzodiazepines chlordiazepoxide (Librium) and oxazepam (Serax) have largely replaced phenobarbital for detoxification. Phenobarbital is useful for insomnia and anxiety. Other uses Phenobarbital's sedative and anti-convulsant properties can effectively reduce tremors and seizures associated with abrupt withdrawal from benzodiazepines.
Phenobarbital is occasionally prescribed in low doses to aid in the conjugation of bilirubin in people with Crigler–Najjar syndrome, type II, or in people with Gilbert's syndrome. In infants suspected of neonatal biliary atresia, phenobarbital is used in preparation for a 99mTc-IDA hepatobiliary (HIDA; hepatobiliary 99mTc-iminodiacetic acid) study that differentiates atresia from hepatitis or cholestasis. In massive doses, phenobarbital is prescribed to terminally ill people to allow them to end their life through physician-assisted suicide. Like other barbiturates, phenobarbital can be used recreationally, but this is reported to be relatively infrequent. The synthesis of a photoswitchable analog of phenobarbital (DASA-barbital) has been described for use as a research compound in photopharmacology. Side effects Sedation and hypnosis are the principal side effects (occasionally, they are also the intended effects) of phenobarbital. Central nervous system effects, such as dizziness, nystagmus and ataxia, are also common. In elderly patients, it may cause excitement and confusion, while in children, it may result in paradoxical hyperactivity. Phenobarbital is a cytochrome P450 hepatic enzyme inducer. It binds transcription factor receptors that activate cytochrome P450 transcription, thereby increasing its amount and thus its activity. Caution is to be used with children. Among anti-convulsant drugs, behavioural disturbances occur most frequently with clonazepam and phenobarbital. Contraindications Acute intermittent porphyria, hypersensitivity to any barbiturate, prior dependence on barbiturates, severe respiratory insufficiency (as with chronic obstructive pulmonary disease), severe liver failure, pregnancy, and breastfeeding are contraindications for phenobarbital use. Overdose Phenobarbital causes a depression of the body's systems, mainly the central and peripheral nervous systems. Thus, the main characteristic of phenobarbital overdose is a "slowing" of bodily functions, including decreased consciousness (even coma), bradycardia, bradypnea, hypothermia, and hypotension (in massive overdoses). Overdose may also lead to pulmonary edema and acute renal failure as a result of shock and can result in death. The electroencephalogram (EEG) of a person with phenobarbital overdose may show a marked decrease in electrical activity, to the point of mimicking brain death. This is due to profound depression of the central nervous system and is usually reversible. Treatment of phenobarbital overdose is supportive, and mainly consists of the maintenance of airway patency (through endotracheal intubation and mechanical ventilation), correction of bradycardia and hypotension (with intravenous fluids and vasopressors, if necessary), and removal of as much drug as possible from the body. In very large overdoses, multi-dose activated charcoal is a mainstay of treatment as the drug undergoes enterohepatic recirculation. Urine alkalization (achieved with sodium bicarbonate) enhances renal excretion. Hemodialysis is effective in removing phenobarbital from the body and may reduce its half-life by up to 90%. No specific antidote for barbiturate poisoning is available. Mechanism of action Phenobarbital acts as an allosteric modulator which extends the amount of time the chloride ion channel is open by interacting with GABAA receptor subunits. Through this action, phenobarbital increases the flow of chloride ions into the neuron which decreases the excitability of the post-synaptic neuron.
Hyperpolarizing this post-synaptic membrane leads to a decrease in the general excitatory aspects of the post-synaptic neuron. By making it harder to depolarize the neuron, the threshold for the action potential of the post-synaptic neuron will be increased. Direct blockade of glutamatergic AMPA and kainate receptors are also believed to contribute to the hypnotic/anticonvulsant effect that is observed with phenobarbital. Pharmacokinetics Phenobarbital has an oral bioavailability of about 90%. Peak plasma concentrations (Cmax) are reached eight to 12 hours after oral administration. It is one of the longest-acting barbiturates available – it remains in the body for a very long time (half-life of two to seven days) and has very low protein binding (20 to 45%). Phenobarbital is metabolized by the liver, mainly through hydroxylation and glucuronidation and induces many isozymes of the cytochrome P450 system. Cytochrome P450 2B6 (CYP2B6) is specifically induced by phenobarbital via the CAR/RXR nuclear receptor heterodimer. It is excreted primarily by the kidneys. History The first barbiturate drug, barbital, was synthesized in 1902 by German chemists Emil Fischer and Joseph von Mering and was first marketed as Veronal by Friedr. Bayer et comp. By 1904, several related drugs, including phenobarbital, had been synthesized by Fischer. Phenobarbital was brought to market in 1912 by the drug company Bayer as the brand Luminal. It remained a commonly prescribed sedative and hypnotic until the introduction of benzodiazepines in the 1960s. Phenobarbital's soporific, sedative and hypnotic properties were well known in 1912, but it was not yet known to be an effective anti-convulsant. The young doctor Alfred Hauptmann gave it to his epilepsy patients as a tranquilizer and discovered their seizures were susceptible to the drug. Hauptmann performed a careful study of his patients over an extended period. Most of these patients were using the only effective drug then available, bromide, which had terrible side effects and limited efficacy. On phenobarbital, their epilepsy was much improved: The worst patients had fewer and lighter seizures and some patients became seizure-free. In addition, they improved physically and mentally as bromides were removed from their regimen. Patients who had been institutionalised due to the severity of their epilepsy were able to leave and, in some cases, resume employment. Hauptmann dismissed concerns that its effectiveness in stalling seizures could lead to patients developing a build-up that needed to be "discharged". As he expected, withdrawal of the drug led to an increase in seizure frequency – it was not a cure. The drug was quickly adopted as the first widely effective anti-convulsant, though World War I delayed its introduction in the U.S. In 1939, a German family asked Adolf Hitler to have their disabled son killed; the five-month-old boy was given a lethal dose of Luminal after Hitler sent his own doctor to examine him. A few days later 15 psychiatrists were summoned to Hitler's Chancellery and directed to commence a clandestine program of involuntary euthanasia. In 1940, at a clinic in Ansbach, Germany, around 50 intellectually disabled children were injected with Luminal and killed that way. A plaque was erected in their memory in 1988 in the local hospital at Feuchtwanger Strasse 38, although a newer plaque does not mention that patients were killed using barbiturates on site. Luminal was used in the Nazi children's euthanasia program until at least 1943. 
Phenobarbital was used to treat neonatal jaundice by increasing liver metabolism and thus lowering bilirubin levels. In the 1950s, phototherapy was discovered, and became the standard treatment. Phenobarbital was used for over 25 years as prophylaxis in the treatment of febrile seizures. Although an effective treatment in preventing recurrent febrile seizures, it had no positive effect on patient outcome or risk of developing epilepsy. The treatment of simple febrile seizures with anticonvulsant prophylaxis is no longer recommended. Society and culture Names Phenobarbital is the INN and phenobarbitone is the BAN. Synthesis Barbiturate drugs are obtained via condensation reactions between a derivative of diethyl malonate and urea in the presence of a strong base. The synthesis of phenobarbital uses this common approach as well, but differs in the way in which the malonate derivative is obtained, because aryl halides do not typically undergo nucleophilic substitution in malonic ester synthesis in the same way as aliphatic organosulfates or halocarbons do. To overcome this lack of chemical reactivity, two dominant synthetic approaches using benzyl cyanide as a starting material have been developed: The first of these methods consists of a Pinner reaction of benzyl cyanide, giving phenylacetic acid ethyl ester. Subsequently, this ester undergoes cross Claisen condensation using diethyl oxalate, giving the diethyl ester of phenyloxobutanedioic acid. Upon heating, this intermediate easily loses carbon monoxide, yielding diethyl phenylmalonate. Malonic ester synthesis using ethyl bromide leads to the formation of α-phenyl-α-ethylmalonic ester. Finally, a condensation reaction with urea gives phenobarbital. The second approach utilizes diethyl carbonate in the presence of a strong base to give α-phenylcyanoacetic ester. Alkylation of this ester using ethyl bromide proceeds via a nitrile anion intermediate to give the α-phenyl-α-ethylcyanoacetic ester. This product is then further converted into the 4-imino derivative upon condensation with urea. Finally, acidic hydrolysis of the resulting product gives phenobarbital. A new synthetic route based on diethyl 2-ethyl-2-phenylmalonate and urea has been described. Regulation In the United States, phenobarbital is a Schedule IV non-narcotic (depressant) controlled substance (ACSCN 2285) under the Controlled Substances Act of 1970. Along with a few other barbiturates, and with codeine, dionine, and dihydrocodeine at low concentrations, it has also appeared in exempt prescription products, and it was available in at least one exempt over-the-counter combination drug that is now more tightly regulated for its ephedrine content. In such ephedrine tablets for asthma, the phenobarbitone/phenobarbital is present in subtherapeutic doses that together add up to an effective dose, intended to counter the overstimulation and possible seizures from a deliberate overdose of the ephedrine. Depending on the jurisdiction, these products are now regulated at the federal and state level as a restricted OTC medicine and/or watched precursor; as an uncontrolled but watched or restricted prescription drug and watched precursor; as a Schedule II, III, IV, or V prescription-only controlled substance and watched precursor; or as a Schedule V exempt non-narcotic restricted OTC medicine, which may carry additional regulation at the county, parish, town, city, or district level, for which the pharmacist can also choose not to sell it, and for which photo ID and signing a register are required.
Selected overdoses A mysterious woman, known as the Isdal Woman, was found dead in Bergen, Norway, on 29 November 1970. Her death was caused by some combination of burns, phenobarbital, and carbon monoxide poisoning; many theories about her death have been posited, and it is believed that she may have been a spy. British veterinarian Donald Sinclair, better known as the character Siegfried Farnon in the "All Creatures Great and Small" book series by James Herriot, committed suicide at the age of 84 by injecting himself with an overdose of phenobarbital. Activist Abbie Hoffman also committed suicide by consuming phenobarbital, combined with alcohol, on 12 April 1989; the residue of around 150 pills was found in his body at autopsy. Thirty-nine members of the Heaven's Gate UFO cult committed mass suicide in March 1997 by drinking a lethal dose of phenobarbital and vodka "and then lay down to die" hoping to enter an alien spacecraft. Veterinary uses Phenobarbital is one of the first-line drugs of choice to treat epilepsy in dogs, as well as cats. It is also used to treat feline hyperesthesia syndrome in cats when anti-obsessional therapies prove ineffective. It may also be used to treat seizures in horses when benzodiazepine treatment has failed or is contraindicated. References Barbiturates Anxiolytics CYP3A4 inducers Hypnotics World Health Organization essential medicines IARC Group 2B carcinogens Drugs developed by Bayer Wikipedia medicine articles ready to translate
Phenobarbital
[ "Biology" ]
3,581
[ "Hypnotics", "Behavior", "Sleep" ]
1,588,843
https://en.wikipedia.org/wiki/Monterrey%20International%20Airport
General Mariano Escobedo International Airport, simply known as Monterrey International Airport, is an international airport located in Apodaca, Nuevo León, Mexico, serving Greater Monterrey. It operates flights within Mexico and to the United States, Canada, Latin America, Asia and Europe. The airport serves as the main hub for Viva, Magnicharters, and the regional carrier Aerus. It is also a focus city for Volaris, Aeromexico Connect, and the regional airline TAR Aerolíneas. The airport also serves cargo and charter flights, hosts facilities for Mexican Airspace Navigation Services, and facilitates various tourism-related activities, flight training, and general aviation. Monterrey Airport is operated by Grupo Aeroportuario Centro Norte OMA and is named after General Mariano Escobedo, a prominent military figure born in Nuevo León. In terms of passenger numbers and aircraft movements, Monterrey International Airport ranks as the fourth-busiest airport in Mexico, holding the 12th position in Latin America and the 52nd position in North America. It also stands at the fifth position in terms of cargo traffic in the country. The airport has experienced rapid growth, handling 10,943,186 passengers in 2022 and 13,326,936 passengers in 2023, one of the fastest rates of passenger growth in the country in recent years. History Inaugurated on November 25, 1970, the airport marked its beginning with the landing of a Mexicana de Aviación Boeing 727. Its establishment was prompted by the limitations and safety concerns of the Del Norte International Airport, which hindered further expansion. The initial terminal, now known as Terminal A, served 346,000 passengers in its inaugural year. Responding to the increasing economic activity in Nuevo León, Monterrey Airport underwent a substantial expansion of its terminal building from 1976 to 1982. As part of this development, the Satellite Building was constructed, interconnected with the main terminal via an underground corridor. Over the years, these enhancements have contributed to the airport's role as a key transportation hub in Northern Mexico. In the mid-2000s, Aeroméxico introduced significant international flights. From 2005 to 2009, Monterrey had its first nonstop link to Europe, a flight to Madrid operated with a Boeing 767. Additional European connections included a flight to Rome from 2008 to 2009. Subsequently, in 2014, Monterrey witnessed its inaugural flight to Asia as Aeromexico transferred the stopover of its Mexico City–Tokyo route from Tijuana Airport to Monterrey. A direct flight to Seoul Incheon Airport was also introduced. However, the only remaining overseas destination is the route to Madrid operated by Aeromexico. In July 2022, Vinci Airports acquired a 30% stake in Grupo Aeroportuario Centro Norte OMA, the entity responsible for overseeing 13 airports across Mexico. This strategic move showcased the airport's ongoing evolution within the broader landscape of aviation management. Simultaneously, addressing the escalating demand for air travel, the Monterrey airport initiated a comprehensive renovation and expansion project for Terminal A in November 2019. This multifaceted project, executed in two phases, involves the enlargement of the departures concourse and check-in area, and the construction of Pier 1 with additional boarding gates.
The subsequent phase encompasses the establishment of a new security checkpoint, Pier 2 with supplementary boarding gates, and the expansion of public areas, slated for completion by 2025. Facilities The airport features two runways. The primary runway, designated 11/29, has an asphalt surface and is equipped with an ILS approach system, a VHF omnidirectional radio range (VOR), and a DME station. The other runway, 16/34, also with an asphalt surface, is seldom used. Although the main runway, 11/29, can accommodate larger aircraft like the Boeing 747-400, the airport primarily serves narrow-body aircraft. There are three terminals: Terminal A: 9 contact positions, 12 remote positions Terminal B: 6 contact positions, 7 remote positions Terminal C: 8 remote positions Terminal A Terminal A encompasses check-in facilities, baggage claim, shopping areas, restaurants, customs, airport and airline offices, and various services. The connected satellite building, accessed via an underground corridor, houses VIP lounges, customs and immigration services, and 14 boarding gates. The Satellite building is divided into North and South Concourses, catering to domestic and international flights, respectively. Operational challenges, including delayed flights, stem from a reduced number of gates, jetbridges and hardstands capable of handling large aircraft. Terminals C and B serve as relief systems for Terminal A, and there are plans to remodel and expand the Satellite building, adding new jetbridges and remote hardstands. Passengers in Terminal A can access lounges like the American Express Centurion, Salón Beyond (Citibanamex), and the OMA Premium Lounge on the Ground Floor. Airlines serving Terminal A include Volaris, Magnicharters, Air Canada, American Airlines, American Eagle, Copa Airlines, and United Airlines. Terminal B Opened in September 2010, Terminal B is a two-story facility comprising eight gates, six of which are equipped with jetways, and two apron gates available for use by smaller aircraft. It can handle up to 2 million passengers annually. The terminal provides standard international airport services such as check-in areas, a security checkpoint, a departures concourse, arrivals facilities with baggage claim areas, taxi stands, and car rental services. Terminal B also features multiple VIP lounges, including the Salón Premier of Aeroméxico on the Ground Floor, the American Express Centurión lounge on the landside, and the OMA Premium Lounge. This terminal serves as a hub for SkyTeam, including the services of Aeromexico, Aeromexico Connect and Delta Air Lines. Other airlines serving Terminal B are the regional airlines TAR Aerolíneas and Aerus. Terminal C Opened on November 30, 2006, Terminal C serves as the primary hub for Viva. The terminal, housed in a single-story building, features essential facilities. The departures area includes a check-in area, a security checkpoint, and a departures concourse with amenities such as a duty-free store, an OMA Premium Lounge, and a food court. Services for arriving passengers include customs and immigration facilities, along with car rental services and taxi stands serving both arrival and departure areas. Terminal C is currently grappling with overcrowding issues, largely because Viva Aerobus operates its largest hub from this terminal.
Interterminal Shuttle Free shuttle service is provided between Terminals A, B, and C at the Monterrey Airport from 5:00 to midnight, with an approximate waiting time of 10 minutes. Boarding areas are located at the main entrance of each terminal building. Air Cargo Terminal The recently built Air Cargo Terminal serves courier companies, both national and international, including FedEx, DHL, UPS, and Estafeta. Other Facilities The Airport Boulevard boasts a range of amenities, including hotels, restaurants, and diverse establishments. Notably, Viva Aerobus has its corporate headquarters in the Cargo Zone of Terminal C. Additionally, Grupo Aeroportuario Centro Norte, the company managing the airport, also has its headquarters in the air cargo zone. The airport offers various facilities, including a general aviation terminal with a general aviation platform, a VIP lounge, a pilots' lounge, and a passenger lounge. The airport hosts the Monterrey Area Control Center (ACC), one of four such centers in Mexico, alongside the Mexico City ACC, Mérida ACC, and Mazatlán ACC. Operated by the Mexican Airspace Navigation Services (Servicios a la Navegación en el Espacio Aéreo Mexicano), the Monterrey ACC provides air traffic control services for aircraft within the Monterrey Flight Data Region (FDRG), covering the northeastern region of Mexico. This region shares its boundaries with six other Area Control Centers. It borders the Mazatlán ACC to the west, the Houston ARTCC (KZHU) to the north, the Mexico ACC to the south, and the Mérida ACC to the east. Airlines and destinations The Viva flight to La Paz makes a stopover in Culiacán. Ground transportation Monterrey Airport is located northeast of Downtown Monterrey. The airport is accessible solely by road. Local bus, shuttle, and taxi services, as well as long-distance bus services to various cities in Nuevo León, Coahuila, Tamaulipas, San Luis Potosí, and Texas, are available. The travel time by car is typically 30 minutes, but it can extend to 60 minutes during rush hours. The airport provides extensive short- and long-term parking facilities, and each terminal has multiple taxi and car rental service stands. Local Bus The Ruta Express, a public bus line, operates from the airport to Y-Griega Station on Line 1 of the Monterrey Metro. Grupo Senda, a bus company, offers services to the Y-Griega metro station and San Jerónimo Bus Station, while Noreste provides hourly services from Monterrey Airport to the Central Bus Station. Two bus stops are located at the airport: one between Terminal A and Terminal B and another in front of Terminal C. Tickets can be purchased at information desks in the terminals (130 MXN) or online through the website (110 MXN). The travel time by bus to Monterrey Central Bus Station, situated 3 kilometers northwest of Macroplaza, is approximately 60 minutes. From there, passengers can transfer to the Metro and long-distance bus services. Private Shuttle VivaBus offers shuttle transportation exclusively for Viva Aerobus passengers travelling to the Central Bus Station and Terminal Fierro (near Y-Griega Station). Transporte Aeroméxico provides hourly shuttle services from Terminal B to the Y-Griega metro station, Garza Sada Bus Station, and the Son Mar Hotel (located two blocks from the Central Bus Station). Additionally, Aero Contaxi offers shuttle services from Terminal C to the Y-Griega metro station, Garza Sada Bus Station, and the Central Bus Station.
Long-Distance Bus Various bus companies offer services to nearby cities, including Saltillo. Noreste operates coach buses with direct services to cities in Tamaulipas and Texas, while Senda provides coach buses with direct service to Saltillo, Monclova, Piedras Negras, and Ramos Arizpe in Coahuila; Reynosa and Nuevo Laredo in Tamaulipas; and Matehuala in San Luis Potosí. Taxi Golden offers taxi and van services throughout the metropolitan area, and Suburban allows online reservations for taxis to and from Monterrey City. Airport-exclusive companies such as Taxi Aeropuerto provide services throughout the city. Taxis Aeropuerto Monterrey offers services to and from Monterrey Airport. Taxis Totsa provides taxi services throughout the metropolitan area and nearby municipalities, including Saltillo. TPA offers taxi and van services within the metropolitan area. Accidents and incidents On February 11, 2010, MexicanaClick Flight 7222, operated by Fokker 100 XA-SHJ, suffered an undercarriage malfunction on approach to Quetzalcóatl International Airport, Nuevo Laredo. A low fly-past confirmed that both main gears had not deployed. The aircraft diverted to Monterrey. It was substantially damaged in the landing, having departed the runway and spun through 180°. On April 13, 2010, an AeroUnion (Aerotransporte de Carga Union) Airbus A300B4-200, registration XA-TUE, operating freight flight AeroUnion Flight 302 from Mexico City to Monterrey with five crew, crashed on approach to land on General Mariano Escobedo International Airport's runway 11. The aircraft came to rest on a highway at around 23:30 local time (04:30Z, April 14). All on board died, one person in a truck on the highway was also reported killed, and the airplane was destroyed in a large fire that broke out after the crash. On November 24, 2010, a Mexican Air Force An-32 cargo aircraft crashed when taking off from General Mariano Escobedo International Airport for a flight to Mexico City. All five crew members died. On December 9, 2012, a Learjet 25 carrying Mexican-American singer Jenni Rivera, four other passengers, and two crew crashed seven minutes after take-off, while on its way to Toluca. All seven occupants died. See also List of the busiest airports in Mexico List of airports in Mexico List of airports by ICAO code: M List of busiest airports in North America List of the busiest airports in Latin America Transportation in Mexico Tourism in Mexico Area control center List of area control centers Flight information region References External links Monterrey Airport information at Great Circle Mapper Monterrey International Airport location Grupo Aeroportuario Centro Norte Mexican Air Traffic Control Services Servicios a la Navegación en el Espacio Aéreo Mexicano Airports in Mexico Airports in Nuevo León Airports established in 1970 Transportation in Nuevo León Tourist attractions in Nuevo León Transportation in Monterrey Monterrey Air traffic control centers WAAS reference stations
Monterrey International Airport
[ "Technology" ]
2,681
[ "Global Positioning System", "WAAS reference stations" ]
1,589,032
https://en.wikipedia.org/wiki/Restriction%20%28mathematics%29
In mathematics, the restriction of a function f is a new function, denoted f|A or f↾A, obtained by choosing a smaller domain A for the original function f. The function f is then said to extend f|A. Formal definition Let f : E → F be a function from a set E to a set F. If a set A is a subset of E, then the restriction of f to A is the function f|A : A → F given by f|A(x) = f(x) for x in A. Informally, the restriction of f to A is the same function as f, but is only defined on A. If the function f is thought of as a relation (x, f(x)) on the Cartesian product E × F, then the restriction of f to A can be represented by its graph G(f|A) = {(x, f(x)) ∈ G(f) : x ∈ A} = G(f) ∩ (A × F), where the pairs (x, f(x)) represent ordered pairs in the graph G(f). Extensions A function F is said to be an extension of another function f if whenever x is in the domain of f, then x is also in the domain of F and f(x) = F(x). That is, if dom f ⊆ dom F and F|dom f = f. A linear extension (respectively, continuous extension, etc.) of a function f is an extension of f that is also a linear map (respectively, a continuous map, etc.). Examples The restriction of the non-injective function f : R → R, f(x) = x², to the domain [0, ∞) is the injection x ↦ x². The factorial function is the restriction of the gamma function to the positive integers, with the argument shifted by one: Γ(n + 1) = n! for every positive integer n. Properties of restrictions Restricting a function to its entire domain gives back the original function, that is, f|E = f. Restricting a function twice is the same as restricting it once, that is, if A ⊆ B ⊆ E, then (f|B)|A = f|A. The restriction of the identity function on a set X to a subset A of X is just the inclusion map from A into X. The restriction of a continuous function is continuous. Applications Inverse functions For a function to have an inverse, it must be one-to-one. If a function f is not one-to-one, it may be possible to define a partial inverse of f by restricting the domain. For example, the function f(x) = x² defined on the whole of R is not one-to-one since x² = (−x)² for any x in R. However, the function becomes one-to-one if we restrict to the domain [0, ∞), in which case f⁻¹(y) = √y. (If we instead restrict to the domain (−∞, 0], then the inverse is the negative of the square root of y.) Alternatively, there is no need to restrict the domain if we allow the inverse to be a multivalued function. Selection operators In relational algebra, a selection (sometimes called a restriction to avoid confusion with SQL's use of SELECT) is a unary operation written as σ_{a θ b}(R) or σ_{a θ v}(R), where: a and b are attribute names, θ is a binary operation in the set {<, ≤, =, ≠, ≥, >}, v is a value constant, R is a relation. The selection σ_{a θ b}(R) selects all those tuples in R for which θ holds between the a and the b attribute. The selection σ_{a θ v}(R) selects all those tuples in R for which θ holds between the a attribute and the value v. Thus, the selection operator restricts to a subset of the entire database. The pasting lemma The pasting lemma is a result in topology that relates the continuity of a function with the continuity of its restrictions to subsets. Let A and B be two closed subsets (or two open subsets) of a topological space X such that A ∪ B = X, and let Y also be a topological space. If f : X → Y is continuous when restricted to both A and B, then f is continuous. This result allows one to take two continuous functions defined on closed (or open) subsets of a topological space and create a new one.
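For finite domains, the definitions above can be tried out directly. The following sketch (an illustration only) represents a function by its graph, a Python dict, so that restriction is exactly intersection of the graph with A × F:

```python
def restrict(f, A):
    """Restriction f|A of a finite function represented as a dict
    mapping x to f(x): keep exactly the pairs whose key lies in A."""
    return {x: y for x, y in f.items() if x in A}

square = {-2: 4, -1: 1, 0: 0, 1: 1, 2: 4}    # non-injective on {-2,...,2}
print(restrict(square, {0, 1, 2}))           # {0: 0, 1: 1, 2: 4} -- injective

# Properties: restricting to the whole domain changes nothing, and
# restricting twice equals restricting once to the smaller subset.
assert restrict(square, set(square)) == square
assert restrict(restrict(square, {-1, 0, 1}), {0, 1}) == restrict(square, {0, 1})
```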
The most important condition is that there are restriction morphisms between every pair of objects associated to nested open sets; that is, if $V \subseteq U$, then there is a morphism $\operatorname{res}_{V,U} : F(U) \to F(V)$ satisfying the following properties, which are designed to mimic the restriction of a function:
For every open set $U$ of $X$, the restriction morphism $\operatorname{res}_{U,U} : F(U) \to F(U)$ is the identity morphism on $F(U)$.
If we have three open sets $W \subseteq V \subseteq U$, then the composite $\operatorname{res}_{W,V} \circ \operatorname{res}_{V,U} = \operatorname{res}_{W,U}$.
(Locality) If $(U_i)$ is an open covering of an open set $U$, and if $s, t \in F(U)$ are such that $s\vert_{U_i} = t\vert_{U_i}$ for each set $U_i$ of the covering, then $s = t$; and
(Gluing) If $(U_i)$ is an open covering of an open set $U$, and if for each $i$ a section $s_i \in F(U_i)$ is given such that for each pair $U_i, U_j$ of the covering sets the restrictions of $s_i$ and $s_j$ agree on the overlaps, $s_i\vert_{U_i \cap U_j} = s_j\vert_{U_i \cap U_j}$, then there is a section $s \in F(U)$ such that $s\vert_{U_i} = s_i$ for each $i$.
The collection of all such objects is called a sheaf. If only the first two properties are satisfied, it is a pre-sheaf.

Left- and right-restriction
More generally, the restriction (or domain restriction or left-restriction) $A \triangleleft R$ of a binary relation $R$ between $E$ and $F$ may be defined as a relation having domain $A$, codomain $F$ and graph $G(A \triangleleft R) = \{(x, y) \in G(R) : x \in A\}$. Similarly, one can define a right-restriction or range restriction $R \triangleright B$. Indeed, one could define a restriction to $n$-ary relations, as well as to subsets understood as relations, such as ones of the Cartesian product $E \times F$ for binary relations. These cases do not fit into the scheme of sheaves.

Anti-restriction
The domain anti-restriction (or domain subtraction) of a function or binary relation $R$ (with domain $E$ and codomain $F$) by a set $A$ may be defined as $(E \setminus A) \triangleleft R$; it removes all elements of $A$ from the domain $E$. It is sometimes denoted $A$ ⩤ $R$. Similarly, the range anti-restriction (or range subtraction) of a function or binary relation $R$ by a set $B$ is defined as $R \triangleright (F \setminus B)$; it removes all elements of $B$ from the codomain $F$. It is sometimes denoted $R$ ⩥ $B$.

See also

References

Sheaf theory
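As a concrete complement to the definitions above, restriction and domain anti-restriction for finite functions are easy to express when a function is represented by its graph. The following is a minimal illustrative sketch in Python; the helper names restrict and domain_antirestrict are ours, not standard library functions:

def restrict(f: dict, A: set) -> dict:
    """Restriction f|A: keep only the pairs whose first component lies in A."""
    return {x: y for x, y in f.items() if x in A}

def domain_antirestrict(f: dict, A: set) -> dict:
    """Domain anti-restriction: remove all elements of A from the domain of f."""
    return {x: y for x, y in f.items() if x not in A}

# f is the squaring function on the domain {-2, -1, 0, 1, 2}
f = {x: x**2 for x in range(-2, 3)}

# Restricting to the non-negative part of the domain makes f injective,
# so it can be inverted by swapping pairs, as in the inverse-functions section.
g = restrict(f, {0, 1, 2})
g_inverse = {y: x for x, y in g.items()}

assert restrict(f, set(f)) == f                                    # f|X = f
assert restrict(restrict(f, {0, 1, 2}), {1}) == restrict(f, {1})   # (f|B)|A = f|A
assert g_inverse[4] == 2
assert domain_antirestrict(f, {-2, -1}) == g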
Restriction (mathematics)
[ "Mathematics" ]
1,041
[ "Topology", "Sheaf theory", "Mathematical structures", "Category theory" ]
1,589,043
https://en.wikipedia.org/wiki/Cassini%20and%20Catalan%20identities
Cassini's identity (sometimes called Simson's identity) and Catalan's identity are mathematical identities for the Fibonacci numbers. Cassini's identity, a special case of Catalan's identity, states that for the $n$th Fibonacci number,

$F_{n-1}F_{n+1} - F_n^2 = (-1)^n.$

Note here $F_0$ is taken to be 0, and $F_1$ is taken to be 1.

Catalan's identity generalizes this:

$F_n^2 - F_{n-r}F_{n+r} = (-1)^{n-r}F_r^2.$

Vajda's identity generalizes this:

$F_{n+i}F_{n+j} - F_nF_{n+i+j} = (-1)^nF_iF_j.$

History
Cassini's formula was discovered in 1680 by Giovanni Domenico Cassini, then director of the Paris Observatory, and independently proven by Robert Simson (1753). However, Johannes Kepler presumably knew the identity already in 1608. Catalan's identity is named after Eugène Catalan (1814–1894). It can be found in one of his private research notes, entitled "Sur la série de Lamé" and dated October 1879. However, the identity did not appear in print until December 1886 as part of his collected works. This explains why some give 1879 and others 1886 as the date for Catalan's identity. The Hungarian-British mathematician Steven Vajda (1901–95) published a book on Fibonacci numbers (Fibonacci and Lucas Numbers, and the Golden Section: Theory and Applications, 1989) which contains the identity carrying his name. However, the identity had been published earlier in 1960 by Dustan Everman as problem 1396 in The American Mathematical Monthly, and in 1901 by Alberto Tagiuri in Periodico di Matematica.

Proof of Cassini identity

Proof by matrix theory
A quick proof of Cassini's identity may be given by recognising the left side of the equation as a determinant of a 2×2 matrix of Fibonacci numbers. The result is almost immediate when the matrix is seen to be the $n$th power of a matrix with determinant −1:

$F_{n-1}F_{n+1} - F_n^2 = \det\begin{pmatrix} F_{n+1} & F_n \\ F_n & F_{n-1} \end{pmatrix} = \det\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^n = \left(\det\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}\right)^n = (-1)^n.$

Proof by induction
Consider the induction statement:

$F_{n-1}F_{n+1} - F_n^2 = (-1)^n.$

The base case $n = 1$ is true: $F_0F_2 - F_1^2 = 0 \cdot 1 - 1 = -1 = (-1)^1$. Assume the statement is true for $n$. Then:

$F_nF_{n+2} - F_{n+1}^2 = F_n(F_n + F_{n+1}) - F_{n+1}^2 = F_n^2 - F_{n+1}(F_{n+1} - F_n) = F_n^2 - F_{n+1}F_{n-1} = -(-1)^n = (-1)^{n+1},$

so the statement is true for all positive integers $n$.

Proof of Catalan identity
We use Binet's formula, that $F_n = \frac{\varphi^n - \psi^n}{\sqrt{5}}$, where $\varphi = \frac{1+\sqrt{5}}{2}$ and $\psi = \frac{1-\sqrt{5}}{2}$. Hence, $\varphi + \psi = 1$ and $\varphi\psi = -1$. So,

$5(F_n^2 - F_{n-r}F_{n+r}) = (\varphi^n - \psi^n)^2 - (\varphi^{n-r} - \psi^{n-r})(\varphi^{n+r} - \psi^{n+r})$
$= \varphi^{2n} - 2\varphi^n\psi^n + \psi^{2n} - \left(\varphi^{2n} - \varphi^n\psi^n(\varphi^r\psi^{-r} + \varphi^{-r}\psi^r) + \psi^{2n}\right)$
$= -2(\varphi\psi)^n + (\varphi\psi)^n(\varphi^r\psi^{-r} + \varphi^{-r}\psi^r).$

Using $\varphi\psi = -1$, and again as $\psi^{-1} = -\varphi$ and $\varphi^{-1} = -\psi$,

$= -2(-1)^n + (-1)^n(-1)^r(\varphi^{2r} + \psi^{2r}).$

The Lucas number $L_n$ is defined as $L_n = \varphi^n + \psi^n$, so

$5(F_n^2 - F_{n-r}F_{n+r}) = -2(-1)^n + (-1)^{n+r}L_{2r}.$

Because $L_{2r} = 5F_r^2 + 2(-1)^r$,

$5(F_n^2 - F_{n-r}F_{n+r}) = -2(-1)^n + (-1)^{n+r}\left(5F_r^2 + 2(-1)^r\right) = 5(-1)^{n+r}F_r^2 = 5(-1)^{n-r}F_r^2.$

Cancelling the 5's gives the result.

Notes

References

External links
Proof of Cassini's identity
Proof of Catalan's Identity
Cassini formula for Fibonacci numbers
Fibonacci and Phi Formulae

Mathematical identities
Fibonacci numbers
Articles containing proofs
Giovanni Domenico Cassini
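The identities are also easy to check numerically. The following is a small illustrative sketch in Python (the fib helper is our own throwaway definition, using the article's convention F(0) = 0, F(1) = 1):

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    """Fibonacci numbers with F(0) = 0 and F(1) = 1."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Cassini: F(n-1) * F(n+1) - F(n)^2 == (-1)^n
for n in range(1, 20):
    assert fib(n - 1) * fib(n + 1) - fib(n) ** 2 == (-1) ** n

# Catalan: F(n)^2 - F(n-r) * F(n+r) == (-1)^(n-r) * F(r)^2
for n in range(1, 20):
    for r in range(1, n + 1):
        assert fib(n) ** 2 - fib(n - r) * fib(n + r) == (-1) ** (n - r) * fib(r) ** 2

# Vajda: F(n+i) * F(n+j) - F(n) * F(n+i+j) == (-1)^n * F(i) * F(j)
for n in range(10):
    for i in range(6):
        for j in range(6):
            assert fib(n + i) * fib(n + j) - fib(n) * fib(n + i + j) == (-1) ** n * fib(i) * fib(j)

print("All identities verified on small cases.")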
Cassini and Catalan identities
[ "Mathematics" ]
516
[ "Mathematical theorems", "Recurrence relations", "Fibonacci numbers", "Golden ratio", "Mathematical relations", "Articles containing proofs", "Mathematical identities", "Mathematical problems", "Algebra" ]
1,589,135
https://en.wikipedia.org/wiki/Hyperbolic%20manifold
In mathematics, a hyperbolic manifold is a space where every point looks locally like hyperbolic space of some dimension. They are especially studied in dimensions 2 and 3, where they are called hyperbolic surfaces and hyperbolic 3-manifolds, respectively. In these dimensions, they are important because most manifolds can be made into a hyperbolic manifold by a homeomorphism. This is a consequence of the uniformization theorem for surfaces and the geometrization theorem for 3-manifolds proved by Perelman.

Rigorous definition
A hyperbolic $n$-manifold is a complete Riemannian $n$-manifold of constant sectional curvature $-1$.

Every complete, connected, simply connected manifold of constant sectional curvature $-1$ is isometric to the real hyperbolic space $\mathbb{H}^n$. As a result, the universal cover of any closed manifold $M$ of constant curvature $-1$ is $\mathbb{H}^n$. Thus, every such $M$ can be written as $\mathbb{H}^n/\Gamma$, where $\Gamma$ is a torsion-free discrete group of isometries on $\mathbb{H}^n$. That is, $\Gamma$ is a discrete subgroup of $\mathrm{Isom}(\mathbb{H}^n)$. The manifold has finite volume if and only if $\Gamma$ is a lattice.

Its thick–thin decomposition has a thin part consisting of tubular neighborhoods of closed geodesics and ends which are the product of a Euclidean $(n-1)$-manifold and the closed half-ray. The manifold is of finite volume if and only if its thick part is compact.

Examples
The simplest example of a hyperbolic manifold is hyperbolic space, as each point in hyperbolic space has a neighborhood isometric to hyperbolic space.

A simple non-trivial example, however, is the once-punctured torus. This is an example of an $(\mathrm{Isom}(\mathbb{H}^2), \mathbb{H}^2)$-manifold. This can be formed by taking an ideal rectangle in $\mathbb{H}^2$ – that is, a rectangle where the vertices are on the boundary at infinity, and thus don't exist in the resulting manifold – and identifying opposite sides.

In a similar fashion, we can construct the thrice-punctured sphere by gluing two ideal triangles together. This also shows how to draw curves on the surface – the black line in the diagram becomes the closed curve when the green edges are glued together. As we are working with a punctured sphere, the colored circles in the surface – including their boundaries – are not part of the surface, and hence are represented in the diagram as ideal vertices.

Many knots and links, including some of the simpler knots such as the figure eight knot and the Borromean rings, are hyperbolic, and so the complement of the knot or link in the 3-sphere $S^3$ is a hyperbolic 3-manifold of finite volume.

Important results
For $n > 2$, the hyperbolic structure on a finite-volume hyperbolic $n$-manifold is unique by Mostow rigidity, and so geometric invariants are in fact topological invariants. One of these geometric invariants used as a topological invariant is the hyperbolic volume of a knot or link complement, which can allow us to distinguish two knots from each other by studying the geometry of their respective manifolds.

See also
Hyperbolic 3-manifold
Hyperbolic space
Hyperbolization theorem
Margulis lemma
Normally hyperbolic invariant manifold

References

Hyperbolic geometry
Manifolds
Riemannian manifolds
Hyperbolic manifold
[ "Mathematics" ]
635
[ "Space (mathematics)", "Riemannian manifolds", "Metric spaces", "Topological spaces", "Topology", "Manifolds" ]
1,589,303
https://en.wikipedia.org/wiki/Behavioural%20sciences
Behavioural science is the branch of science concerned with human behaviour. While the term can technically be applied to the study of behaviour amongst all living organisms, it is nearly always used with reference to humans as the primary target of investigation (though animals may be studied in some instances, e.g. with invasive techniques). The behavioural sciences sit between the conventional natural sciences and social studies in terms of scientific rigour. They encompass fields such as psychology, neuroscience, linguistics, and economics.

Scope
The behavioural sciences encompass both natural and social scientific disciplines, including various branches of psychology, neuroscience and biobehavioural sciences, behavioural economics and certain branches of criminology, sociology and political science. This interdisciplinary nature allows behavioural scientists to coordinate findings from psychological experiments, genetics and neuroimaging, self-report studies, interspecies and cross-cultural comparisons, and correlational and longitudinal designs to understand the nature, frequency, mechanisms, causes and consequences of given behaviours.

With respect to applied behavioural science and behavioural insights, the focus is usually narrower, tending to encompass cognitive psychology, social psychology and behavioural economics generally, and invoking other more specific fields (e.g. health psychology) where needed. In applied settings, behavioural scientists exploit their knowledge of cognitive biases, heuristics, and the peculiarities of how decision-making is affected by various factors to develop behaviour change interventions, or to develop policies which 'nudge' people into acting more auspiciously (see Applications below).

Future and emerging techniques
Robila suggests that modern technologies such as artificial intelligence, machine learning and large-scale data could be used to study and understand behavioural patterns on a greater scale, and sees a promising future for them in behavioural-science research and assistance. Developing cutting-edge therapies and interventions with immersive technology such as virtual reality and AI would also benefit behavioural science. These concepts are only a hint of the many paths behavioural science may take in the future.

Applications
Insights from several pure disciplines across the behavioural sciences are explored by various applied disciplines and practised in the context of everyday life and business. Consumer behaviour, for instance, is the study of the decision-making process consumers go through when purchasing goods or services. It studies the way consumers recognise problems and discover solutions. Behavioural science is applied in this study by examining the patterns consumers follow when making purchases, the factors that influenced those decisions, and how to take advantage of these patterns.

Organisational behaviour is the application of behavioural science in a business setting. It studies what motivates employees, how to make them work more effectively, what influences this behaviour, and how to use these patterns to achieve the company's goals. Managers often use organisational behaviour to better lead their employees.

Using insights from psychology and economics, behavioural science can be leveraged to understand how individuals make decisions regarding their health, and ultimately to reduce disease burden through interventions such as loss aversion, framing, defaults, nudges, and more.
Other applied disciplines of behavioural science include operations research and media psychology.

Differentiation from social sciences
Behavioural sciences and social sciences are interconnected fields that both study systematic processes of behaviour, but they differ in their level of scientific analysis of the various dimensions of behaviour.

Behavioural sciences abstract empirical data to investigate the decision processes and communication strategies within and between organisms in a social system. This characteristically involves fields like psychology, social neuroscience, ethology, and cognitive science. In contrast, social sciences provide a perceptive framework to study the processes of a social system through the impacts of social organisation on the structural adjustment of the individual and of groups. They typically include fields like sociology, economics, public health, anthropology, demography, and political science.

Many subfields of these disciplines test the boundaries between behavioural and social sciences. For example, political psychology and behavioural economics use behavioural approaches, despite the predominant focus on systemic and institutional factors in the broader fields of political science and economics.

See also
Behaviour
Human behaviour
Loss aversion
List of academic disciplines
Science
Fields of science
Natural sciences
Social sciences
History of science
History of technology

References

Selected bibliography
George Devereux: From Anxiety to Method in the Behavioral Sciences, The Hague, Paris: Mouton & Co, 1967.
E.D. Klemke, R. Hollinger & A.D. Kline (eds.) (1980). Introductory Readings in the Philosophy of Science. Prometheus Books, New York.
Neil J. Smelser & Paul B. Baltes, eds. (2001). International Encyclopedia of the Social & Behavioral Sciences, 26 v. Oxford: Elsevier.
Mills, J. A. (1998). Control: A History of Behavioral Psychology. New York University Press.

External links
Cognitive science
Behavioural sciences
[ "Biology" ]
977
[ "Behavioural sciences", "Behavior" ]
1,589,445
https://en.wikipedia.org/wiki/Flare%20star
A flare star is a variable star that can undergo unpredictable, dramatic increases in brightness for a few minutes. It is believed that the flares on flare stars are analogous to solar flares in that they are due to the magnetic energy stored in the stars' atmospheres. The brightness increase is across the spectrum, from X-rays to radio waves. Flare activity among late-type stars was first reported by A. van Maanen in 1945, for WX Ursae Majoris and YZ Canis Minoris. However, the best-known flare star is UV Ceti, first observed to flare in 1948. Today similar flare stars are classified as UV Ceti type variable stars (using the abbreviation UV) in variable star catalogs such as the General Catalogue of Variable Stars.

Most flare stars are dim red dwarfs, although recent research indicates that less massive brown dwarfs might also be capable of flaring. The more massive RS Canum Venaticorum variables (RS CVn) are also known to flare, but it is understood that these flares are induced by a companion star in a binary system which causes the magnetic field to become tangled. Additionally, nine stars similar to the Sun had been seen to undergo flare events prior to the flood of superflare data from the Kepler observatory. It has been proposed that the mechanism for this is similar to that of the RS CVn variables, in that the flares are being induced by a companion, namely an unseen Jupiter-like planet in a close orbit.

Stellar flare model
The Sun is known to flare, and solar flares have been studied extensively across the spectrum. Even though the Sun on average shows less variability and weaker flares than other stars similar to it in spectral type, rotation period and age, it is generally thought that stellar flares and solar flares share the same or similar processes. Thus the solar flare model has been used as the framework for understanding other stellar flares. The general idea is that flares are generated through the reconnection of magnetic field lines in the corona.

There are several phases to a flare: the preflare phase, the impulsive phase, the flash phase and the decay phase. These phases have different timescales and different emissions across the spectrum. During the preflare phase, which usually lasts for a few minutes, the coronal plasma slowly heats up to temperatures of tens of millions of kelvins. This phase is mostly visible in soft X-rays and EUV. During the impulsive phase, which lasts for three to ten minutes, a large number of electrons, and sometimes also ions, are accelerated to extremely high energies ranging from keV to MeV. The radiation can be seen as gyrosynchrotron radiation at radio wavelengths and bremsstrahlung radiation at hard X-ray wavelengths. This is the phase in which most of the energy is released. The later flash phase is defined by the rapid increase in Hα emission. The free-streaming particles travel along the magnetic field lines, propagating energy from the corona to the lower chromosphere. The material in the chromosphere is then heated up and expands into the corona. Emission in the flash phase is primarily due to thermal radiation from the heated stellar atmosphere. As the material reaches the corona, the intensive release of energy slows down and cooling starts. During the decay phase, which lasts for one to several hours, the corona returns to its original state.

This is the model for how an isolated star generates flares, but it is not the only way.
Interactions between a star and a companion, or sometimes the environment, can also produce flares. In binary systems such as RS Canum Venaticorum variable stars (RS CVn), flares can be produced through interactions between the magnetic fields of the two bodies in the system. For stars that have an accretion disk, which are most of the time protostars or pre-main-sequence stars, the interaction of the magnetic fields between the star and the disk can also cause flares.

Nearby flare stars
Flare stars are intrinsically faint, but have been found to distances of 1,000 light years from Earth. On April 23, 2014, NASA's Swift satellite detected the strongest, hottest, and longest-lasting sequence of stellar flares ever seen from a nearby red dwarf, DG Canum Venaticorum. The initial blast from this record-setting series of explosions was as much as 10,000 times more powerful than the largest solar flare ever recorded.

Proxima Centauri
The Sun's nearest stellar neighbor, Proxima Centauri, is a flare star that undergoes occasional increases in brightness because of magnetic activity. The star's magnetic field is created by convection throughout the stellar body, and the resulting flare activity generates a total X-ray emission similar to that produced by the Sun.

Wolf 359
The flare star Wolf 359 is another near neighbor (2.39 ± 0.01 parsecs). This star, also known as Gliese 406 and CN Leo, is a red dwarf of spectral class M6.5 that emits X-rays. It is a UV Ceti flare star, and has a relatively high flare rate. Its mean magnetic field strength varies significantly on time scales as short as six hours; by comparison, the magnetic field of the Sun is much weaker on average, although it rises considerably in active sunspot regions.

Barnard's Star
Barnard's Star is the fourth nearest star to the Sun. At 7–12 billion years of age, Barnard's Star is considerably older than the Sun. It was long assumed to be quiescent in terms of stellar activity. However, in 1998, astronomers observed an intense stellar flare, showing that Barnard's Star is a flare star.

EV Lacertae
EV Lacertae is located 16.5 light-years away, and is the nearest star in its constellation. It is a young star, about 300 million years old, and has a strong magnetic field. In 2008, it produced a record-setting flare that was thousands of times more powerful than the largest observed solar flare.

TVLM 513-46546
TVLM 513-46546 is a very low mass M9 flare star, at the boundary between red dwarfs and brown dwarfs. Data from the Arecibo Observatory at radio wavelengths determined that the star flares every 7054 s, with a precision of one one-hundredth of a second.

2MASS J18352154-3123385 A
The more massive member of the binary star 2MASS J1835, an M6.5 star, has strong X-ray activity indicative of a flare star, although it has never been directly observed to flare.

Record-setting flares
The most powerful stellar flare detected, as of December 2005, may have come from the active binary II Peg. Its observation by Swift suggested the presence of hard X-rays in the well-established Neupert effect, as seen in solar flares.

See also

References

External links
UV Ceti and the flare stars, Autumn 2003 Variable Star Of The Season, prepared by Matthew Templeton, AAVSO (www.aavso.org)
Stellar Flares – D. Montes, UCM.

Star types
Flare star
[ "Astronomy" ]
1,495
[ "Star types", "Astronomical classification systems" ]
1,589,448
https://en.wikipedia.org/wiki/Kremer%20prize
The Kremer prizes are a series of monetary awards established in 1959 by the industrialist Henry Kremer.

Royal Aeronautical Society Human Powered Flight Group
The Royal Aeronautical Society's "Man Powered Aircraft Group" was formed in 1959 by the members of the Man Powered Group of the College of Aeronautics at Cranfield when they were invited to join the Society. Its title was changed from "Man" to "Human" in 1988 because of the many successful flights made by female pilots.

Under the auspices of the Society, in 1959 the industrialist Henry Kremer offered the first Kremer prize, of £5,000, for the first human-powered aircraft to fly a figure-of-eight course round two markers half a mile apart. It was a condition that the designer, entrant, pilot, place of construction and flight must all be British. In 1973 Kremer increased the prize to £50,000 and opened it to all nationalities, to stimulate interest.

The first Kremer prize of £50,000 was won on 23 August 1977 by Dr. Paul MacCready when his Gossamer Condor, piloted by Bryan Allen, became the first human-powered aircraft to fly a figure eight around two markers half a mile apart, starting and ending the course above a required minimum height.

The second Kremer prize, of £100,000, was won on 12 June 1979, again by Paul MacCready, when Bryan Allen flew MacCready's Gossamer Albatross from England to France.

A Kremer prize of £20,000 for speed was won in 1984 by a design team from the Massachusetts Institute of Technology for flying their MIT Monarch B craft around a triangular course in under three minutes. Further segments of a total prize pot of £100,000 were to be awarded for every improvement in speed of at least 5%; the next segment was won in the MacCready Bionic Bat with a flight of 163.28 seconds on 18 July 1984, piloted by Parker MacCready. The third segment was won by Holger Rochelt flying Musculair 1, designed by Günther Rochelt. The fourth segment was won on 2 December 1984, with a flight of 143.08 seconds in the MacCready Bionic Bat piloted by Bryan Allen. The fifth and final segment was won with a flight of 122.01 seconds by Holger Rochelt flying Musculair 2, after which the prize competition was withdrawn by the Royal Aeronautical Society on grounds of safety.

There are currently three Kremer prizes that have not yet been awarded, for a total of £150,000:
a 26-mile marathon course in under an hour (£50,000),
a sporting aircraft challenge stressing maneuverability (£100,000),
a local challenge that is limited to youth groups (under 18 years) in the UK.

See also
List of engineering awards
Sikorsky Prize

References

External links
Royal Aeronautical Society Human Powered Flight Group

Human-powered aircraft
Aerospace engineering awards
British science and technology awards
Awards established in 1959
Royal Aeronautical Society
Kremer prize
[ "Engineering" ]
626
[ "Aerospace engineering awards", "Aerospace engineering organizations", "Royal Aeronautical Society", "Aerospace engineering" ]
1,589,554
https://en.wikipedia.org/wiki/Inversion%20of%20control
In software engineering, inversion of control (IoC) is a design principle in which custom-written portions of a computer program receive the flow of control from an external source (e.g. a framework). The term "inversion" is historical: a software architecture with this design "inverts" control as compared to procedural programming. In procedural programming, a program's custom code calls reusable libraries to take care of generic tasks, but with inversion of control, it is the external source or framework that calls the custom code.

Inversion of control has been widely used by application development frameworks since the rise of GUI environments and continues to be used both in GUI environments and in web server application frameworks. Inversion of control makes the framework extensible by the methods defined by the application programmer.

Event-driven programming is often implemented using IoC so that the custom code need only be concerned with the handling of events, while the event loop and dispatch of events/messages is handled by the framework or the runtime environment. In web server application frameworks, dispatch is usually called routing, and handlers may be called endpoints.

Alternative meaning
The phrase "inversion of control" has separately also come to be used in the community of Java programmers to refer specifically to the patterns of dependency injection (passing to objects the services they need) that occur with "IoC containers" in Java frameworks such as the Spring framework. In this different sense, "inversion of control" refers to granting the framework control over the implementations of dependencies that are used by application objects, rather than to the original meaning of granting the framework control flow (control over the time of execution of application code, e.g. callbacks).

Overview
As an example, with traditional programming, the main function of an application might make function calls into a menu library to display a list of available commands and query the user to select one. The library thus would return the chosen option as the value of the function call, and the main function uses this value to execute the associated command. This style was common in text-based interfaces. For example, an email client may show a screen with commands to load new mail, answer the current mail, create new mail, etc., and the program execution would block until the user presses a key to select a command.

With inversion of control, on the other hand, the program would be written using a software framework that knows common behavioral and graphical elements, such as windowing systems, menus, controlling the mouse, and so on. The custom code "fills in the blanks" for the framework, such as supplying a table of menu items and registering a code subroutine for each item, but it is the framework that monitors the user's actions and invokes the subroutine when a menu item is selected. In the mail client example, the framework could follow both the keyboard and mouse inputs and call the command invoked by the user by either means, and at the same time monitor the network interface to find out if new messages arrive and refresh the screen when some network activity is detected. The same framework could be used as the skeleton for a spreadsheet program or a text editor. Conversely, the framework knows nothing about Web browsers, spreadsheets, or text editors; implementing their functionality takes custom code.
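To make the inverted flow of control in the menu example concrete, here is a minimal sketch in Python. All names (MenuFramework, register, run) are invented for the illustration; this is not from any particular framework:

# A toy "framework": it owns the main loop and calls back into custom code.
class MenuFramework:
    def __init__(self):
        self.handlers = {}

    def register(self, command: str, handler):
        """Custom code fills in the blanks by registering handlers."""
        self.handlers[command] = handler

    def run(self):
        """The framework, not the application, drives execution."""
        while True:
            choice = input(f"Choose {sorted(self.handlers)} (or 'quit'): ")
            if choice == "quit":
                break
            handler = self.handlers.get(choice)
            if handler is not None:
                handler()   # inversion of control: the framework calls the custom code
            else:
                print("Unknown command")

# Application-specific code: supplies handlers, then hands over control.
app = MenuFramework()
app.register("load", lambda: print("Loading new mail..."))
app.register("reply", lambda: print("Replying to current mail..."))
app.run()

Compare this with the traditional style, in which the application's own main function would query the user and keep control throughout.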
Inversion of control carries the strong connotation that the reusable code and the problem-specific code are developed independently, even though they operate together in an application. Callbacks, schedulers, event loops, and the template method are examples of design patterns that follow the inversion of control principle, although the term is most commonly used in the context of object-oriented programming. (Dependency injection is an example of the separate, specific idea of "inverting control over the implementations of dependencies" popularised by Java frameworks.)

Inversion of control is sometimes referred to as the "Hollywood Principle: Don't call us, we'll call you".

Background
Inversion of control is not a new term in computer science. Martin Fowler traces the etymology of the phrase back to 1988, but it is closely related to the concept of program inversion described by Michael Jackson in his Jackson Structured Programming methodology in the 1970s. A bottom-up parser can be seen as an inversion of a top-down parser: in the one case, the control lies with the parser, while in the other case, it lies with the receiving application.

The term was used by Michael Mattsson in a thesis (with its original meaning of a framework calling application code instead of vice versa) and was then taken from there by Stefano Mazzocchi and popularized by him in 1999 in a defunct Apache Software Foundation project, Avalon, in which it referred to a parent object passing in a child object's dependencies in addition to controlling execution flow. The phrase was further popularized in 2004 by Robert C. Martin and Martin Fowler, the latter of whom traces the term's origins to the 1980s.

Description
In traditional programming, the flow of the business logic is determined by objects that are statically bound to one another. With inversion of control, the flow depends on the object graph that is built up during program execution. Such a dynamic flow is made possible by object interactions that are defined through abstractions. This run-time binding is achieved by mechanisms such as dependency injection or a service locator. In IoC, the code could also be linked statically during compilation, with the code to execute found by reading its description from external configuration rather than by a direct reference in the code itself.

In dependency injection, a dependent object or module is coupled to the object it needs at run time. Which particular object will satisfy the dependency during program execution typically cannot be known at compile time using static analysis. While described in terms of object interaction here, the principle can apply to other programming methodologies besides object-oriented programming.

In order for the running program to bind objects to one another, the objects must possess compatible interfaces. For example, class A may delegate behavior to interface I which is implemented by class B; the program instantiates A and B, and then injects B into A (a minimal sketch of this wiring follows the list below).

Use
The Mesa programming environment for XDE, 1985
Visual Basic (classic), 1991
HTML DOM events
The Spring Framework
ASP.NET Core
Template method pattern
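The dependency-injection wiring mentioned in the Description section can be sketched in a few lines of Python. The class names A, B, and the interface I come from the paragraph above; the rest of the code is our own illustration, not a prescribed implementation:

from abc import ABC, abstractmethod

class I(ABC):
    """The interface (abstraction) that class A depends on."""
    @abstractmethod
    def greet(self) -> str: ...

class B(I):
    """A concrete implementation of the interface."""
    def greet(self) -> str:
        return "Hello from B"

class A:
    """A is written against the interface I, not against B directly."""
    def __init__(self, dependency: I):
        self.dependency = dependency   # the dependency is injected

    def run(self) -> None:
        print(self.dependency.greet())

# The program (or an IoC container) instantiates A and B,
# then injects B into A; A never names B in its own code.
a = A(B())
a.run()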
Example code

HTML DOM events
Web browsers implement inversion of control for DOM events in HTML. The application developer uses addEventListener to register a callback.

<!doctype html>
<html lang="en">
 <head>
  <meta charset="utf-8">
  <title>DOM Level 2</title>
 </head>
 <body>
  <h1>DOM Level 2 Event handler</h1>
  <p><large><span id="output"></span></large></p>
  <script>
   var registeredListener = function () {
     document.getElementById("output").innerHTML = "<large>The registered listener was called.</large>";
   }
   document.addEventListener("click", registeredListener, true);
   document.getElementById("output").innerHTML = "<large>The event handler has been registered. If you click the page, your web browser will call the event handler.</large>"
  </script>
 </body>
</html>

Web application frameworks
This example code for an ASP.NET Core web application creates a web application host, registers an endpoint, and then passes control to the framework:

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/", () => "Hello World!");

app.Run();

See also
Abstraction layer
Archetype pattern
Asynchronous I/O
Aspect-oriented programming
Callback (computer science)
Closure (computer science)
Continuation
Delegate (CLI)
Dependency inversion principle
Flow-based programming
Implicit invocation
Interrupt handler
Message passing
Monad (functional programming)
Observer pattern
Publish/subscribe
Service locator pattern
Signal (computing)
Software framework
Strategy pattern
User exit
Visitor pattern
XSLT

References

External links
Inversion of Control explanation and implementation example
Inversion of Control

Software architecture
Architectural pattern (computer science)
Programming principles
Component-based software engineering
Software design patterns
Inversion of control
[ "Technology" ]
1,726
[ "Component-based software engineering", "Components" ]
1,589,620
https://en.wikipedia.org/wiki/OxiClean
OxiClean is an American brand of household cleaners, including OxiClean Versatile Stain Remover, which is a laundry additive, spot stain remover, and household cleaner marketed by Church & Dwight. It was formerly owned by Orange Glo International from its introduction in 1997 until it was acquired in 2006.

History
Introduced by Orange Glo International in 1997, OxiClean was marketed in the US and Canada as a "miracle cleanser" through infomercials with Billy Mays, starting in 2000. Church & Dwight acquired the OxiClean brand (along with Orange Glo and several others) through its acquisition of Orange Glo International in 2006; at that point the OxiClean brand expanded into laundry detergent with the introduction of the OxiClean Detergent Ball, followed by OxiClean Liquid Laundry Detergent in 2014. It continued to be endorsed by Mays until his death in 2009; the product is now endorsed by Mays' friend and co-worker Anthony Sullivan. Mays and Sullivan were featured on the show PitchMen on the Discovery Channel, in which the product was featured on several occasions.

Description
One of the active ingredients in OxiClean is sodium percarbonate (2Na2CO3•3H2O2), an adduct of sodium carbonate (Na2CO3) and hydrogen peroxide (H2O2). This releases hydrogen peroxide when dissolved in water. These ingredients break down safely in the environment and leave no toxic byproducts. Related products include OxiClean Laundry Stain Remover, OxiClean MaxForce Spray, OxiClean Power Paks, OxiClean Triple Power Stain Fighter, OxiClean White Revive and OxiClean Baby Stain Soaker.

The Clorox Company has a competing product, Clorox 2, which has similar ingredients but also includes the activator TAED (tetraacetylethylenediamine) to convert the peroxide into peracetic acid (also known as peroxyacetic acid, or PAA). Another competing product, Biz Laundry Booster, has added enzymes to break down organic stains and claims to outperform OxiClean in some situations.

References

External links

Cleaning products
Church & Dwight brands
Infomercials
Cleaning product brands
OxiClean
[ "Chemistry" ]
466
[ "Cleaning products", "Products of chemical industry" ]
1,589,678
https://en.wikipedia.org/wiki/Modprobe
modprobe is a Linux program originally written by Rusty Russell and used to add a loadable kernel module to the Linux kernel or to remove a loadable kernel module from the kernel. It is commonly used indirectly: udev relies upon modprobe to load drivers for automatically detected hardware.

modprobe is distributed as part of the software package "kmod" (maintained by Lucas De Marchi and others). It was previously developed as:
"module-init-tools", for Linux kernel version 2.6 and later (maintained by Jon Masters and others)
"modutils", for use with Linux versions 2.2.x and 2.4.x

Operation
The program offers more full-featured "Swiss-army-knife" functionality than the more basic insmod and rmmod utilities, with the following benefits:
an ability to make more intuitive decisions about which modules to load
awareness of module dependencies, so that when requested to load a module, it adds other required modules first
the resolution of recursive module dependencies as required

If invoked with no switches, the program by default adds/inserts/installs the named module into the kernel. Root privileges are typically required for these changes. Any arguments appearing after the module name are passed to the kernel (in addition to any options listed in the configuration file). In some versions of modprobe, the configuration file is called modprobe.conf, and in others, the equivalent is the collection of files called <modulename> in the /etc/modprobe.d directory.

modprobe looks only in the standard module directories; to install modules from the working directory, insmod is still required. The user can also make a symbolic link of the module to the standard path, so that depmod will find and load it like any other installed module.

Features
The program also has more configuration features than other similar utilities. It is possible to define module aliases, allowing for some automatic loading of modules. When the kernel requires a module, it actually runs modprobe to request it; however, the kernel has a description of only some module properties (for example, a device major number, or the number of a network protocol), and modprobe does the job of translating that to an actual module name via aliases.

This program also has the ability to run programs before or after loading or unloading a given module; for example, setting the mixer right after loading a sound card module, or uploading the firmware to a device immediately prior to enabling it. Although these actions must be implemented by external programs, modprobe takes care of synchronizing their execution with module loading/unloading.

Blacklist
There are cases where two or more modules both support the same devices, or a module invalidly claims to support a device: the blacklist keyword indicates that all of a particular module's internal aliases are to be ignored. There are a couple of ways to blacklist a module, and where this is configured depends on the method used to load it. There are two ways to blacklist a module using modprobe's modprobe.conf system; the first is to use its blacklisting system in /etc/modprobe.d/.
Any filename ending with .conf can be used:

cat /etc/modprobe.d/blacklist.conf
blacklist ieee1394
blacklist ohci1394
blacklist eth1394
blacklist sbp2

An install primitive takes the highest priority in the config file and will be used instead of the blacklisting method above, requiring this second method:

cat /etc/modprobe.d/ieee1394.conf
install ieee1394 /bin/true
install ohci1394 /bin/true
install eth1394 /bin/true
install sbp2 /bin/true

Alternatively, you can modify /etc/modprobe.conf:

alias sub_module /dev/null
alias module_main /dev/null
options module_main needed_option=0

See also
lsmod

References

External links
modprobe man page
modprobe.conf
modules.dep

Command-line software
Linux kernel-related software
Modprobe
[ "Technology" ]
880
[ "Command-line software", "Computing commands" ]
1,589,698
https://en.wikipedia.org/wiki/Beam%20tetrode
A beam tetrode, sometimes called a beam power tube, is a type of vacuum tube or thermionic valve that has two grids and forms the electron stream from the cathode into multiple partially collimated beams. These beams produce a low-potential space-charge region between the anode and the screen grid that returns anode secondary-emission electrons to the anode when the anode potential is less than that of the screen grid. Beam tetrodes are usually used for power amplification, from audio frequency to radio frequency. The beam tetrode produces greater output power than a triode or pentode with the same anode supply voltage. The first beam tetrode marketed was the Marconi N40, introduced in 1935. Beam tetrodes manufactured and used in the 21st century include the 4CX250B, KT66 and variants of the 6L6.

History
In amplifier circuits, the useful anode voltage–anode current region of operation of the conventional tetrode tube was limited by the detrimental effect of secondary emission from the anode at anode potentials less than that of the screen grid. The detrimental effect of anode secondary emission was solved by Philips/Mullard with the introduction of a suppressor grid, which resulted in the pentode design. Since Philips held a patent on this design, other manufacturers were keen to produce pentode-type tubes without infringing the patent.

In the UK, three EMI engineers (Isaac Shoenberg, Cabot Bull and Sidney Rodda) filed a patent on an alternative design in 1933. Their design had the following features (compared to the normal pentode):
The apertures of the control and screen grids were aligned, by winding the grids with the same pitch (the grids of the pentode used different pitches).
Greater distance between the screen grid and the anode than an ordinary tetrode or pentode.
An auxiliary electrode structure at or near cathode potential and substantially outside of the electron stream, to establish a low electrostatic potential region between the screen grid and anode, limit the included angle of the beam and prevent anode secondary electrons outside of the beam region from reaching the screen (the pentode has a suppressor grid in the electron stream).

The design is today known as the beam tetrode, but historically it was also known as a kinkless tetrode, since it had the same number of grids as the conventional tetrode but without the negative-resistance kink in the anode current vs anode voltage characteristic curves of a true tetrode. Some authors, notably outside the United Kingdom, argue that the beam plates constitute a fifth electrode.

The EMI design had the following advantages over the pentode:
The design produced more output power than a similar power pentode.
The transconductance was higher than that of a similar power pentode.
The plate resistance was lower than that of a similar power pentode.
The screen grid current was about 5–10% of the anode current, compared with about 20% for the pentode; thus the beam tetrode was more power-efficient.
The design produced less third-harmonic distortion in class A operation than a comparable power pentode.

The new tube was introduced at the Physical and Optical Societies' Exhibition in January 1935 as the Marconi N40. Around one thousand of the N40 output tetrodes were produced, but the MOV (Marconi-Osram Valve) company, under the joint ownership of EMI and GEC, considered the design too difficult to manufacture due to the need for good alignment of the grid wires. As MOV had a design-share agreement with RCA of America, the design was passed to that company.
RCA had the resources to produce a workable design, which resulted in the 6L6. Not long after, the beam tetrode appeared in a variety of offerings, including the 6V6 in December 1936, the MOV KT66 in 1937 and the KT88 in 1956, designed specifically for audio and highly prized by collectors today. After the Philips patent on the suppressor grid had expired, many beam tetrodes were referred to as "beam power pentodes". In addition, there were some examples of beam tetrodes designed to work in place of pentodes. The ubiquitous EL34, although manufactured by Mullard/Philips and other European manufacturers as a true pentode, was also produced by other manufacturers (namely GE, Sylvania, and MOV) as a beam tetrode instead. The 6CA7 as manufactured by Sylvania and GE is a beam tetrode drop-in replacement for an EL34, and the KT77 is a similar design to the 6CA7 made by MOV.

A beam tetrode family widely used in the US comprised the 25L6, 35L6, and 50L6, and their miniature versions the 50B5 and 50C5. This family is not to be confused with the 6L6, despite similar designations. They were used in millions of All American Five AM radio receivers. Most of these used a transformerless power supply circuit. In American radio receivers with transformer power supplies, built from about 1940–1950, the 6V6, 6V6G, 6V6GT and miniature 6AQ5 beam tetrodes were very commonly used.

In military equipment, the 807 and 1625, with rated anode dissipations of 25 watts and operating from a supply of up to 750 volts, were in widespread use as the final amplifier in radio-frequency transmitters of up to 50 watts output power and in push-pull applications for audio. These tubes were very similar to a 6L6 but had a somewhat higher anode dissipation rating, and the anode was connected to the top cap instead of a pin at the base. Large numbers entered the market after World War II and were used widely by radio amateurs in the USA and Europe through the 1950s and 1960s.

In the 1950s, the ultra-linear audio amplifier circuit was developed for beam tetrodes. This amplifier circuit links the screen grids to taps on the output transformer, and provides reduced intermodulation distortion.

Operation
The beam tetrode eliminates the dynatron region or tetrode kink of the screen grid tube by developing a low-potential space-charge region between the screen grid and anode that returns anode secondary-emission electrons to the anode. The anode characteristic of the beam tetrode is less rounded at lower anode voltages than that of the power pentode, resulting in greater power output and less third-harmonic distortion with the same anode supply voltage.

In beam tetrodes, the apertures of the control grid and the screen grid are aligned. The wires of the screen grid are aligned with those of the control grid so that the screen grid lies in the shadow of the control grid. This reduces the screen grid current, contributing to the tube's greater power conversion efficiency. Alignment of the grid apertures concentrates the electrons into dense beams in the space between the screen grid and the anode, permitting the anode to be placed closer to the screen grid than would be possible without the beam density. The intense negative space charge of these beams, developed when the anode potential is less than that of the screen grid, prevents secondary electrons from the anode from reaching the screen grid.
In receiving-type beam tetrodes, beam-confining plates are introduced outside of the beam region to constrain the electron beams to certain sectors of the anode, which are sections of a cylinder. These beam-confining plates also set up a low electrostatic potential region between the screen grid and anode and return anode secondary electrons from outside of the beam region to the anode.

In beam tetrodes that have complete cylindrical symmetry, a kinkless characteristic can be achieved without the need for beam-confining plates. This form of construction is usually adopted in larger tubes with an anode power rating of 100 W or more. The Eimac 4CX250B (rated at 250 W anode dissipation) is an example of this class of beam tetrode. Note that a radically different approach is taken to the design of the support system for the electrodes in these types. The 4CX250B is described by its manufacturer as a 'radial beam power tetrode', drawing attention to the symmetry of its electrode system.

Beam tetrode application circuits often include components to prevent spurious oscillation, suppress transient voltages and smooth out frequency response. In radio frequency applications, shielding is required between the plate circuit components and grid circuit components.

References

External links
Tube Data Archive, thousands of tube data sheets
Beam tetrode – additional data, information and graphs

Vacuum tubes
Beam tetrode
[ "Physics" ]
1,850
[ "Vacuum tubes", "Vacuum", "Matter" ]
1,589,701
https://en.wikipedia.org/wiki/Nemawashi
Nemawashi () is an informal Japanese business process of laying the foundation for some proposed change or project by talking to the people concerned and gathering support and feedback before a formal announcement. It is considered an important element in any major change in the Japanese business environment before any formal steps are taken. Successful nemawashi enables changes to be carried out with the consent of all sides, avoiding embarrassment.

Nemawashi literally translates as "turning the roots", from ne (, "root") and mawasu (, "to turn something, to put something around something else"). Its original meaning was literal: in preparation for transplanting a tree, one would carefully dig around the tree some time before transplanting, and trim the roots to encourage the growth of smaller roots that will help the tree become established in its new location.

Nemawashi is often cited as an example of a Japanese word which is difficult to translate effectively, because it is tied so closely to Japanese culture itself, although it is often translated as "laying the groundwork".

In Japan, high-ranking people expect to be let in on new proposals prior to an official meeting. If they find out about something for the first time during the meeting, they will feel that they have been ignored, and they may reject it for that reason alone. Thus, it is important to approach these people individually before the meeting. This provides an opportunity to introduce the proposal to them and gauge their reaction. It is also a good chance to hear their input.

The term is associated with forming a consensus, along with ringiseido (which is a more formal process). There is debate over whether nemawashi is truly co-operative, or whether those consulted sometimes have little choice but to agree. The process can be time-consuming.

See also
Japanese management culture
Lobbying
Polder model – Dutch form of consensus building
Toyota Production System

References

External links
Kirai, a geek in Japan: Nemawashi

Japanese words and phrases
Japanese business terms
Economy of Japan
Lean manufacturing
Nemawashi
[ "Engineering" ]
415
[ "Lean manufacturing" ]
1,589,872
https://en.wikipedia.org/wiki/Insulating%20concrete%20form
Insulating concrete form or insulated concrete form (ICF) is a system of formwork for reinforced concrete, usually made with a rigid thermal insulation that stays in place as a permanent interior and exterior substrate for walls, floors, and roofs. The forms are interlocking modular units that are dry-stacked (without mortar) and filled with concrete. The units lock together somewhat like Lego bricks and create a form for the structural walls or floors of a building. ICF construction has become commonplace for both low-rise commercial and high-performance residential construction as more stringent energy-efficiency and natural-disaster-resistant building codes are adopted.

Development
The first expanded polystyrene ICF wall forms were developed in the late 1960s with the expiration of the original patent and the advent of modern foam plastics from BASF. Canadian contractor Werner Gregori filed the first patent for a foam concrete form in 1966, with a block "measuring 16 inches high by 48 inches long with a tongue-and-groove interlock, metal ties, and a waffle-grid core." An early precursor of ICF formwork dates back to 1907, as evidenced by the patent entitled "building-block" by inventor L. R. Franklin. This patent claimed a parallelepiped-shaped brick having a central cylindrical cavity, connected to the upper and lower faces by countersinks.

The adoption of ICF construction has steadily increased since the 1970s, though it was initially hampered by lack of awareness, building codes, and confusion caused by many different manufacturers selling slightly different ICF designs rather than focusing on industry standardization. ICF construction is now part of most building codes and accepted in most jurisdictions in the developed world.

Construction
Insulating concrete forms are manufactured from any of the following materials:
Polystyrene foam (most commonly expanded or extruded)
Polyurethane foam (including soy-based foam)
Cement-bonded wood fiber
Cement-bonded polystyrene beads
Cellular concrete

Reinforcing steel bars (rebar) are usually placed inside the forms before concrete is poured to give the concrete flexural strength, similar to bridges and high-rise buildings made of reinforced concrete. Like other concrete formwork, the forms are filled with concrete in 1-foot to 4-foot high "lifts" to manage the concrete pressure and reduce the risk of blowouts.

After the concrete has cured, the forms are left in place permanently to provide a variety of benefits, depending on materials used:
Thermal insulation
Soundproofing
Good surface-burning characteristics rating
Space to run electrical conduit and plumbing. The form material on either side of the walls can easily accommodate electrical and plumbing installations.
Backing for drywall or other finishes on the interior and stucco, brick, or other siding on the exterior
Improved indoor air quality
Regulated humidity levels and mitigated mold growth (hygric buffer)

Categorization
Insulating concrete forms are commonly categorized in two manners. Organizations whose first concern relates to the concrete classify them first by the shape of the concrete inside the form. Organizations whose first concern relates to the fabrication of the forms classify them first by the characteristics of the forms themselves.

By concrete shape

Flat Wall System
For Flat Wall System ICFs, the concrete has the shape of a flat wall of solid reinforced concrete, similar to the shape of a concrete wall constructed using removable forms.
Grid System

Screen Grid System
For Screen Grid System ICFs, the concrete has the shape of the metal in a screen, with horizontal and vertical channels of reinforced concrete separated by areas of solid form material.

Waffle Grid System
For Waffle Grid System ICFs, the concrete has the shape of a hybrid between Screen Grid and Flat Wall system concrete, with a grid of thicker reinforced concrete and thinner concrete in the center areas where a screen grid would have solid ICF material.

Post and Lintel System
For Post and Lintel System ICFs, the concrete has a horizontal member, called a lintel, only at the top of the wall (horizontal concrete at the bottom of the wall is often present in the form of the building's footer or the lintel of the wall below) and vertical members, called posts, between the lintel and the surface on which the wall is resting.

By form characteristic

Block
The exterior shape of the ICF is similar to that of a concrete masonry unit, although ICF blocks are often larger in size as they are made from a material having a lower specific gravity. Very frequently, the edges of block ICFs are made to interlock, reducing or eliminating the need for the use of a bonding material between the blocks.

Panel
Panel ICFs have the flat rectangular shape of a section of flat wall; they are often the height of the wall and have a width limited by the manipulability of the material at larger sizes and by the general usefulness of the panel size for constructing walls.

Plank
Plank ICFs have the size of Block ICFs in one dimension and Panel ICFs in the other dimension.

Characteristics

Energy efficiency
Minimal, if any, air leaks, which improves comfort and reduces heat loss compared to walls without a solid air barrier
High thermal resistance (R-value), typically above 3 K⋅m2/W (in US customary units: R-17); this results in saving energy compared with uninsulated masonry
Continuous insulation without thermal bridges or "insulation gaps", as is common in framed construction
Thermal mass, when used well and combined with passive solar design, can play an important role in further reductions in energy use, especially in climates where it is common to have outside temperatures swing above inside temperatures during the day and below at night.

Strength
Insulating concrete forms create a structural concrete wall, either monolithic or post and beam, that is up to ten times stronger than wood-framed structures.
Structural integrity for better resistance to forces of nature, compared to framed walls.
The components of ICF systems (both the poured concrete and the material used to make the ICF) do not rot when they get wet.

Sound absorption
ICF walls have much lower rates of acoustic transmission. Standard-thickness ICF walls have shown sound transmission class (STC) ratings between 46 and 72, compared to 36 for standard fiberglass insulation and drywall. The level of sound attenuation achieved is a function of wall thickness, mass, component materials and air tightness.

Fire protection
ICF walls can have four- to six-hour fire resistance ratings and negligible surface-burning properties. The International Building Code (section 2603.5.2) requires plastic foam insulation (e.g. polystyrene foam, polyurethane foam) to be separated from the building interior by a thermal barrier (e.g. drywall), regardless of the fire barrier provided by the central concrete. Forms made from cement-bonded wood fibers, cement-bonded polystyrene beads, or cellular concrete have an inherent fire rating.
Indoor air quality
Because they are generally constructed without a sheet plastic vapor barrier, ICF walls can regulate humidity levels, mitigate the potential for mold and facilitate a more comfortable interior while maintaining high thermal performance. Foams, however, can give off gases, something that is not well studied.

Environmental sensitivity
ICF walls can be made with a variety of recycled materials that can minimize the environmental impact of the building. The large volume of concrete used in ICF walls has been criticized, as concrete production is a large contributor to greenhouse gas emissions.

Vermin
Because the entire interior space of ICF walls is continuously occupied (no gaps as can occur between blown or fiberglass insulation and a wood-frame wall), they pose more difficulty for casual transit by insects and vermin. Additionally, while plastic foam forms can occasionally be tunneled through, the interior concrete wall and the Portland cement of cement-bonded type forms create a much more challenging barrier to insects and vermin than do walls made of wood.

Building design considerations
When designing a building to be constructed with ICF walls, consideration must be given to supporting the weight of any walls not resting directly on other walls or the building's foundation. Consideration must also be given to the understanding that the load-bearing part of an ICF wall is the concrete, which, without special preparations, does not extend in any direction to the edge of the form. For grid and post & lintel systems, the placement of vertical members of the concrete must be organized in such a fashion (e.g., starting at opposite corners or breaks such as doorways and working to meet in unbroken wall) as to properly transfer load from the lintel (or bond beam) to the surface supporting the wall.

In Australia, ICF products are considered to be combustible, as they have not passed AS 1530.1-1994 lab testing. Nevertheless, they have achieved AS 1530.8.1-2007 accreditation for use in some bushfire-prone areas. Their application is limited to low-rise commercial and residential construction.

Building process
ICF construction is less demanding, owing to its modularity. Less-skilled labor can be employed to lay the ICF forms, though careful consideration must be given when pouring the concrete to make sure it consolidates fully and cures evenly without cracking. Unlike traditional wood-beam construction, no additional structural support other than temporary scaffolding is required for openings, doors, windows, or utilities, though modifying the structure after the concrete cures requires special concrete-cutting tools.

Floors and foundations
ICF walls are conventionally placed on a monolithic slab with embedded rebar dowels connecting the walls to the foundation. ICF decking is becoming an increasingly popular addition to general ICF wall construction. ICF decking weighs up to 40% less than standard concrete flooring and provides superior insulation. ICF decking can also be designed in conjunction with ICF walls to form a continuous monolithic structure joined together by rebar. ICF deck roofs are popular in storm-prone areas, but it is harder to build complex roof shapes, and concrete can be poured only up to a point on angled surfaces, often a 7:12 maximum pitch.

Walls
ICF walls are constructed one row at a time, usually starting at the corners and working toward the middle of the walls. End blocks are then cut to fit so as to waste the least material possible.
As the wall rises, blocks are staggered to avoid long vertical seams that can weaken the polystyrene formwork. Structural frames known as bucks are placed around openings to give added strength to the openings and to serve as attachment points for windows and doors. Interior and exterior finishes and facades are affixed directly to the ICF surface or tie ends, depending on the type of ICF. Brick and masonry facades require an extended ledge or shelf angle at the main floor level, but otherwise no modifications are necessary. Interior ICF polystyrene wall surfaces must be covered with drywall panels or other wall coatings. During the first months immediately after construction, minor problems with interior humidity may be evident as the concrete cures, which can damage the drywall. Dehumidification can be accomplished with small residential dehumidifiers or using the building's air conditioning system. Depending on the experience of the contractor and the quality of their work, improperly installed exterior foam insulation can provide easy access for groundwater and insects. To help prevent these problems, some manufacturers make insecticide-treated foam blocks and promote the installation of drainage sheeting and other methods of waterproofing. Drain tiles are installed to carry water away. Plumbing and electrical Plumbing and electrical conduit can be placed inside the forms and poured into place, though settling problems could cause pipes to break, leading to costly repairs. For this reason, plumbing and conduit as well as electrical cables are usually embedded directly into the foam before the wall coverings are applied. A hot knife or electric chainsaw is commonly used to create openings in the foam to lay piping, and electrical cables are inserted into the ICF using a cable punch, while ICFs made from other materials are typically cut or routed with simple carpentry tools. Versions of simple carpentry tools suitable for cement-bonded type forms are made for similar use with autoclaved aerated concrete. Cost The initial cost of using ICFs rather than conventional construction techniques is sensitive to the price of materials and labor, but building using ICF may add 3 to 5 percent to the total purchase price over building using wood frame. In most cases ICF construction will cost about 40% less than conventional (basement) construction because of the labor savings from combining multiple steps into one step. Above grade, ICF construction is typically more expensive, but when adding large openings, ICF construction becomes very cost effective. Large openings in conventional construction require large headers and supporting posts, whereas ICF construction reduces the cost, as only reinforcing steel is needed directly around the opening. ICF construction can allow up to 60% smaller heating and cooling units to service the same floor area, which can cut the cost of the final house by an estimated $0.75 per square foot. The estimated net extra cost can thus range from $0.25 to $3.25 per square foot (a rough worked example of this arithmetic appears below). ICF homes can also qualify for tax credits in some jurisdictions, further lowering the costs. ICF buildings are less expensive over time, as they require less energy to heat and cool the same size space compared to a variety of other common construction methods. Additionally, insurance costs can be much lower, as ICF homes are much less susceptible to damage from earthquakes, floods, hurricanes, fires, and other natural disasters.
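The net-cost figure above follows from simple arithmetic. The sketch below is illustrative only: the gross ICF premium range of $1.00 to $4.00 per square foot is an assumption inferred from the quoted net range and the $0.75 HVAC saving, not a figure stated in the text.

# Illustrative ICF cost arithmetic; the gross premium range is an assumption
# inferred from this section's numbers, not published data.
HVAC_SAVINGS_PER_SQFT = 0.75  # estimated saving from downsized heating/cooling

def net_icf_premium(gross_premium_per_sqft: float) -> float:
    """Net extra cost per square foot after the HVAC saving is applied."""
    return gross_premium_per_sqft - HVAC_SAVINGS_PER_SQFT

for gross in (1.00, 4.00):  # assumed gross ICF premium, $/sq ft
    print(f"gross ${gross:.2f}/sq ft -> net ${net_icf_premium(gross):.2f}/sq ft")
# Prints net $0.25 and $3.25, matching the range quoted above.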
Maintenance and upkeep costs are also lessened, as ICF walls contain no wood that can rot over time or be attacked by insects and rodents. Advantages In seismic and hurricane-prone areas, ICF construction provides strength, impact resistance, durability, excellent sound insulation, and airtightness. ICF construction is ideal in moderate and mixed climates with significant daily temperature variations, in buildings designed to benefit from thermal mass strategies. The insulating R-value alone of ICFs ranges from R-12 to R-28, a good range for walls. The energy savings compared to framed walls are in the range of 50% to 70%. Disadvantages In the United Kingdom, buildings constructed using ICF may be unsuitable for homeowners wishing to free up capital using equity release. See also Polystyrene Reinforced concrete Thermal insulation Energy conversion efficiency Energy conservation References Angeli, C., Building Safety, Efficiency and Cost Savings - Scientific Studies on the ICF Constructive System - Insulating Concrete Form, Youcanprint, 2020. Building engineering Building insulation materials Building materials Building technology Earthquake engineering Structural engineering
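Since the article quotes insulation values in both SI units (K⋅m2/W) and US customary R-values, a minimal conversion sketch may be useful; the factor 5.678 (ft²·°F·h/BTU per K·m²/W) is the standard unit conversion, and the specific values shown are simply the ones quoted above.

# Convert between SI thermal resistance (RSI, K*m^2/W) and US customary R-values.
RSI_TO_US = 5.678263  # (ft^2 * degF * h / BTU) per (K * m^2 / W)

def rsi_to_us(rsi: float) -> float:
    """SI thermal resistance to US customary R-value."""
    return rsi * RSI_TO_US

def us_to_rsi(r_us: float) -> float:
    """US customary R-value to SI thermal resistance."""
    return r_us / RSI_TO_US

print(f"RSI 3.0 -> R-{rsi_to_us(3.0):.0f}")                       # R-17, as quoted earlier
print(f"R-12 -> RSI {us_to_rsi(12):.1f}; R-28 -> RSI {us_to_rsi(28):.1f}")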
Insulating concrete form
[ "Physics", "Engineering" ]
3,019
[ "Structural engineering", "Building engineering", "Architecture", "Construction", "Materials", "Civil engineering", "Earthquake engineering", "Matter", "Building materials" ]
1,589,960
https://en.wikipedia.org/wiki/KT%20%28energy%29
kT (also written as kBT) is the product of the Boltzmann constant, k (or kB), and the temperature, T. This product is used in physics as a scale factor for energy values in molecular-scale systems (sometimes it is used as a unit of energy), as the rates and frequencies of many processes and phenomena depend not on their energy alone, but on the ratio of that energy and kT, that is, on E/kT (see Arrhenius equation, Boltzmann factor). For a system in equilibrium in the canonical ensemble, the probability of the system being in a state with energy E is proportional to e^(−E/kT). More fundamentally, kT is the amount of heat required to increase the thermodynamic entropy of a system by k. In physical chemistry, as kT often appears in the denominator of fractions (usually because of the Boltzmann distribution), sometimes β = 1/kT is used instead of kT, turning e^(−E/kT) into e^(−βE). RT RT is the product of the molar gas constant, R, and the temperature, T. This product is used in physics and chemistry as a scaling factor for energy values at the macroscopic scale (sometimes it is used as a pseudo-unit of energy), as many processes and phenomena depend not on the energy alone, but on the ratio of energy and RT, i.e. E/RT. The SI units for RT are joules per mole (J/mol). It differs from kT only by a factor of the Avogadro constant, NA. The dimension of kT is energy, or ML²T⁻², expressed in SI units as joules (J): kT = RT/NA. References Thermodynamics Statistical mechanics
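As a numerical illustration (an addition, not part of the source article), the following sketch evaluates kT and RT at room temperature using scipy.constants and checks the relation kT = RT/NA:

from scipy.constants import k, R, N_A, eV  # Boltzmann const., molar gas const., Avogadro const., 1 eV in J

T = 298.15  # room temperature, in kelvin

kT = k * T  # joules
RT = R * T  # joules per mole

print(f"kT = {kT:.3e} J = {kT / eV * 1e3:.1f} meV")  # about 25.7 meV
print(f"RT = {RT:.1f} J/mol")                        # about 2478.8 J/mol
print(f"RT / N_A = {RT / N_A:.3e} J")                # equals kT, as expected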
KT (energy)
[ "Physics", "Chemistry", "Mathematics" ]
345
[ "Thermodynamics stubs", "Statistical mechanics stubs", "Thermodynamics", "Statistical mechanics", "Physical chemistry stubs", "Dynamical systems" ]
1,590,049
https://en.wikipedia.org/wiki/List%20of%20TRS-80%20clones
The following is a list of clones of Tandy's TRS-80 Model I and Model III home computers: Aster CT-80 by Aster b.v. DGT-100 and DGT-1000 by Digitus D8000, D8001 and D8002 by Dismac Komtek I by Komtek Technologies Le Guépard by HBN Electronic Sa LNW-80 by LNW Research Max-80 by Lobo Systems Meritum by Mera-Elzab MTI Mod III Plus by Microcomputer Technology Inc. CP-300 and CP-500 by Prológica Pentasonic PROF 80 R1001 by Radionic Sysdata Jr by Sysdata Eletrônica Ltda Video Genie (also known as the "Dick Smith System-80" or the "PMC-80") by EACA Misedo 85 by Montex HT-1080Z School Computer (Híradástechnikai Szövetkezet, Hungary) SpotLight I (스포트라이트I) by Hanguk Sangyeok (한국상역) Stolový Počítač SP830 by ZVT References TRS-80 clones TRS-80 TRS-80 clones
List of TRS-80 clones
[ "Technology" ]
279
[ "Computing-related lists", "Lists of computer hardware" ]
1,590,096
https://en.wikipedia.org/wiki/List%20of%20Apple%20II%20clones
Apple II clones are computers that share functionality with the Apple II line but were not created by Apple. The systems were cloned, both in the United States and abroad, in a similar way to the IBM PC. According to some sources (see below), more than 190 different models of Apple II clones were manufactured. Most could not be legally imported into the United States. Apple sued and sought criminal charges against clone makers in more than a dozen countries. Background Many Apple II clones had fruit-related names, without explicitly stating that they were Apple II clones. An example was Pineapple, which Apple successfully forced to change its name to "Pinecom". Agat was a series of Apple II compatible computers produced in the Soviet Union between 1984 and 1993, widely used in schools in the 1980s. The first mass-produced models, the Agat 4 and Agat 7, had different memory layouts and video modes from the Apple II, which made them only partially compatible with Apple II software. Agats were not direct clones of the Apple II, but rather uniquely designed computers based on the 6502 CPU that emulated the Apple II architecture. This helped developers port Apple II software titles to the Agat. A later model, the Agat 9, had an Apple II compatibility mode out of the box. Soviet engineers and enthusiasts developed thousands of software titles for the Agat, including system software, business applications and educational software. The Bulgarian Pravetz series 8 was an Apple II clone with Cyrillic support. Basis, a German company, created the Basis 108, a clone of the Apple II that included both a 6502 processor and the Zilog Z80, allowing it to run the CP/M operating system as well as most Apple II software. This machine was unusual in that it was housed in a heavy cast-aluminum chassis. The Basis 108 was equipped with built-in Centronics (parallel) and RS232c (serial) ports, as well as the standard six Apple II compatible slots. Unlike the Apple II, it came with a detached full-stroke keyboard (AZERTY/QWERTY) of 100 keys plus 15 function keys and separate numeric and editing keypads. Another European Apple II clone was the Pearcom Pear II, which was larger than the original as it sported not eight but fourteen expansion slots. It also had a numerical keypad. Pearcom initially used a pear-shaped rainbow logo, but stopped after Apple threatened to take legal action. A Bosnian company named IRIS Computers (a subsidiary of ENERGOINVEST, an energy company of Bosnia and Herzegovina and Yugoslavia) produced Apple II clones starting in the early 1980s. Their official brand name was IRIS 8. They were very expensive and hard to obtain and were produced primarily for use in early computerized digital telephone systems and for education. Their use in offices of state companies, R&D labs and in the Yugoslav army was also reported. IRIS 8 machines looked like early IBM PCs, with a separate central unit accompanied by a cooling system and two 5.25-inch disk drives, a monitor, and a keyboard. Compatibility with the original Apple II was complete. Elite high schools in Yugoslavia and especially Bosnia and Herzegovina were equipped with clusters of 8, 16, or 32 IRIS 8 computers connected in a local network administered by an IRIS 16 PC clone. Between 10,000 and 20,000 IRIS 8s are believed to have been produced. An Australian-produced clone of the Apple II was the Medfly, named after the Mediterranean fruit fly that attacks apples.
The Medfly computer featured a faster processor, more memory, detached keyboard, lower and upper case characters, and a built-in disk controller. Until 1992 in Brazil, it was illegal to import microcomputers. Because of that, the illegal cloning industry of Apple II-based computers was strong there. In the early 1980s, there were around 20 different clones of Apple II Plus computers in that country, all of them using illegally copied software and hardware (since the Apple II and II Plus used commonly available TTL integrated circuits). Some of the names include Elppa ("Apple" spelled backwards), Maxtro, Exato MC4000 (by CCE), AP II (by Unitron), and even an "Apple II Plus" (manufactured by a company called Milmar, which was using the name illegally). There were only two clones of the Apple IIe, since it used custom IC chips that could not be copied, and therefore had to be reverse-engineered and developed in-country. These clones were the TK3000 IIe by Microdigital and Exato IIe by CCE. In addition, the Laser IIc was manufactured by Milmar and, despite the name, was a clone of the Apple II Plus, not of the Apple IIc, although it had a design similar to that of the Apple IIc, with an integrated floppy controller and 80-column card, but without an integrated floppy disk drive. The Ace clones from Franklin Computer Corporation were the best known Apple II clones and had the most lasting impact, as Franklin copied Apple's ROMs and software and freely admitted to doing so. Franklin's response was that a computer's ROM was simply a pattern of switches locked into a fixed position, and one cannot copyright a pattern of switches. Apple fought Franklin in court for about five years to get its clones off the market, and was ultimately successful when a court ruled that software stored in ROM was in fact copyrightable in the US. (See Apple Computer, Inc. v. Franklin Computer Corp.) Franklin later released non-infringing but less-compatible clones; these could run ProDOS and AppleWorks and had an Applesoft-like BASIC, but compatibility with other software was hit-or-miss. Apple also challenged VTech's Laser 128 in court. The Laser 128 was an enhanced clone of the Apple IIc first released in 1984. This suit proved less fruitful for Apple, because VTech had reverse-engineered the Monitor ROM rather than copying it and had licensed Applesoft BASIC from its creator, Microsoft. Apple had neglected to obtain exclusive rights to the Applesoft dialect of BASIC from Microsoft; VTech was the first cloner to license it. The Laser 128 proved popular and remained on the market for many years, both in its original form and in accelerated versions that ran faster than 1 MHz. Although it was not fully compatible with the Apple II, it was close, and its popularity ensured that most major developers tested their software on a Laser as well as on genuine Apple machines. Because it was frequently sold via mail order and mass-market retailers such as Sears, the Laser 128 cut into the sales of low-cost competitors such as Commodore Business Machines as much as it did Apple's. While the first Apple II clones were generally exact copies of their Apple counterparts that competed mainly on price, many clones had extra capabilities too. A Franklin model, the Ace 1000, sported a numeric keypad and lower-case long before these features were added to the Apple II line. 
The Laser 128 series is sometimes credited with spurring Apple to release the Apple IIc Plus; the built-in 3-inch drive and accelerated processor were features Laser had pioneered. The Laser 128 also had a IIe-style expansion slot on the side that could be used to add peripheral cards. Bell & Howell, an audiovisual equipment manufacturer whose products (particularly film projectors) were ubiquitous in American schools, offered what appeared at first glance to be an Apple II Plus clone in a distinctive black plastic case. However, these were in fact real Apple II Plus units manufactured by Apple for B&H for a brief period of time. Many schools had a few of these Black Apples in their labs. ITT made the ITT 2020, a licensed Apple II Plus clone, in the UK. It had the same shape as the Apple II but was matte silver (it was sometimes known as the "silver Apple") and was not an exact functional copy. The ITT 2020 produced a PAL video signal for the European market, where the domestic US market used NTSC. Software using the BIOS worked correctly on both the Apple and the ITT, but software written to access the Apple's display hardware directly, bypassing the BIOS, displayed with vertical stripes on the ITT 2020. The Apple II itself was later introduced in the UK, and both the Apple II and the ITT 2020 were sold for a time, the ITT at a lower price. Syscom 2 Inc (from Carson City, NV) created the Syscom 2 Apple II+ clone. The case looked nearly identical. It had 48 KB of RAM and the normal expansion capabilities. These clones also supported lowercase characters, toggled with a ^O keystroke. An unknown company produced a clone called the RX-8800. One new feature it had was a numeric keypad. The SEKON, made in Taiwan, had the same color plastic case as an Apple II, sported 48 KB of RAM as standard, and had a lowercase/uppercase switch located where the power indicator light was typically situated on Apple IIs. Additionally, it featured a 5-amp power supply which supplied ample power for add-on cards. SEKON avoided having its shipments confiscated by US Customs by shipping its computers without ROMs, leaving it to the dealers to populate the boards upon arrival at their stores. Often these machines would boot up with the familiar Apple II logo after the dealers installed EPROMs copied from original Apple ROMs. This allowed users to obtain a fully Apple-compatible clone for usually around US$600, as opposed to US$2500 from Apple. The Norwegian company West Computer AS introduced an Apple II clone, the West PC-800, in 1984. The computer was designed as an alarm center and allowed the use of several CPUs (6502, Z80, 8086, 68000) and operating systems. Expansion cards Although not technically a clone, Quadram produced an add-in ISA card, called the Quadlink, that provided hardware emulation of an Apple II+ for the IBM PC. The card had its own 6502 CPU and dedicated 80 K RAM (64 K for applications, plus 16 K to hold a reverse-engineered Apple ROM image, loaded at boot-time), and installed "between" the PC and its floppy drive(s), color display, and speaker, in a pass-through configuration. This allowed the PC to operate in a dual-boot fashion: when booted through the Quadlink, the PC could run the majority of Apple II software, and read and write Apple-formatted floppies through the standard PC floppy drive. Because it had a dedicated processor, rather than any form of software emulation, this system ran at nearly the same speed as an equivalent Apple machine.
Another company, Diamond Computer Systems, produced a similar series of cards called the Trackstar, which had a pair of 6502 CPUs and ran Apple II software using an Apple-licensed ROM. The original Trackstar (and the "128" and "Plus" models) was Apple II Plus compatible, while the "Trackstar E" was Apple IIe compatible. The original offered 64K of usable Apple II RAM, while the other models offered 128K (192K is on board, with the additional memory reserved for the Trackstar itself). The original Trackstar also contained a Z80 CPU, allowing it to run both Apple DOS and Apple CP/M software; however, the newer Trackstar models did not, and thus dropped CP/M compatibility. The Trackstar also had a connector allowing use of an actual Apple floppy drive, which enhanced its compatibility with software that took advantage of Apple hardware for copy-protection. North American clones United States Albert CompuSource Abacus CompuSource Orange Peel Collins Orange+ Two Formula II kit ("Fully compatible with Apple II+") Franklin Ace series InterTek System IV Laser 128 MicroSCI Havac Micro-Craft Dimension 68000 Sekon Syscom 2 Unitronics Sonic Canada Apco Arcomp Super 400 Super 800 CV-777 Golden II (Spiral) Logistics Arrow 1000 Arrow 2000 Mackintosh Microcom II+ Microcom IIe MIPC O.S. Micro Systems OS-21 OS-22 Orange Computers Orangepeel Peach Microcomputer Brazilian clones CCE MC-4000 - Page in Portuguese (Exato Pro) MC-4000 //e - Page in Portuguese (Exato IIe) Del MC01 - Page in Portuguese (Unreleased Apple II+ clone) Microcraft Craft II Plus Microdigital Microdigital TK2000 Color (not 100% binary-compatible) Microdigital TK2000 II Color (not 100% binary-compatible) Microdigital TK-3000 IIe - Page in Portuguese Microdigital TK-3000 //e System Microdigital TK-3000 //e Compact Micronix Dactron E - Page in Portuguese Polymax Maxxi - Page in Portuguese Spectrum Equipam. Eletrônicos Ind.Com.Ltda Spectrum ED - Page in Portuguese (Apple IIe) Spectrum Microengenho I - Page in Portuguese (Apple II) Spectrum Microengenho II - Page in Portuguese (Apple IIe) Unitron (not to be confused with the Taiwanese Unitron, the makers of the infamous U2000 and the U2200 systems) Unitron APII - Page in Portuguese Unitron APII TI D-8100 (Dismac) Victor do Brasil Eletrônica Ltda Elppa II Plus (1983) Elppa II Plus TS (1983) Elppa Jr. (1984) Micronix Ind. e Com.de Computadores ltda Dactron Dactron E DGT AT (Digitus Ind.e Com.Serv.de Eletrônica Ltda - 1985) DM II (D.M. Eletrônica Ltda - 1983) Link 323 (Link Tecnologia - 1984) Maneger I (Magenex Eletrônica Ltda - 1983) Maxxi (Polymax Sistemas e Periféricos Ltda - 1982) Ômega - Ind e Com. Ltda MC 100 (1983) MC 400 (1984) MG-8065 (Magenex Eletrônica Ltda - 1983) (Milmar Ind. e Com. Ltda) Apple II Plus (1983) Apple II TN-1 (1984) Apple II Senior (1984) Apple II Master (1984) Apple Laser IIc (1985) Chinese clones China China Education Computer CEC-I CEC-M CEC-G CEC-E CEC-2000 Venus II Series (Apple II+ Clone) Venus IIA Venus IIB ChangJiang-I (Apple II+ Clone) DJS-033 Series (Apple II+ Clone) DJS-033e Series (Apple IIe Clone) Hong Kong ACC 8000 (a.k.a.
Accord 8000) Basis Medfly CTC (Computer Technologies Corporation) Wombat Wombat AB Wombat Professional Pineapple Computers Pineapple 48K Color Computer (or "ananas") Pineapple DP-64E Teleco Electronics ATEX 2000 Personal Computer VTech (Video Technology) Laser 128 Laser 3000 Taiwan AP Computer BAT 250 Chia-ma SPS-109 Chin Hsin Industrial RX-8800 Copam Electronics Base 48 Base 64 Base 64A Base 64D Fugu Elite 5 Golden Formosa Microcomputer Golden II Happy Home Computer Co. Multi-System I.H. Panda CAT-100 CAT-200 CAT-400 IMC IMC-320 IMC-480 IMC-640 IMC-640E IMC-2001 (with officially licensed DOS 3.3 from Apple; after a battle in court, IMC Taiwan reached an agreement with Apple to officially license DOS 3.3) IMC Fox IMC Junior IMC Portcom II Lazar II Mitac LIC-2001A/LIC-2001 (Little Intelligent Computer) LIC-3001 (Little Intelligent Computer) Multitech Microprofessor II (MPF II) Microprofessor III (MPF III) Panda 64 Rakoa Computer Rakoa I SMC-II MCAD (Microcomputer Aided Design System) Sages Computer Zeus 2001 Surwave Electronics Amigo 202 Amigo 505 The Jow Dian Enterprise ZD-103 (The ZD 8/16 Personal Computer) Unitron U2000 Unitron U2200 European clones Austria Zema Twin Bulgaria IMKO-1 IMKO-2 Pravetz series 8 Pravetz 82 Pravetz 8A Pravetz 8M Pravetz 8E Pravetz 8C (produced as late as 1994) France 3CI Robot (not an Apple II clone, but supplied with a dedicated cash register for hairdressing salons) TMS Vela (TMS means Troyes Micro Service) Germany Basis Microcomputer GmbH Basis 108 Basis 208 Blaupunkt Blaupunkt Apple II Citron II CSC Euro 1000 CSC Euro Plus CSC Euro Profi CSC Euro Super ComputerTechnik Space 83 ComputerTechnik SK-747/IBS Space-83 Eurocon II Eurocon II+ ITT 2020 (Europlus) Precision Echo Phase II (Basis 108 with a light milk chocolate brown case) Greece Gigatronics KAT Italy Asem AM-64e Selcom Lemon II Staff C1 The Netherlands AVT Electronics AVT Comp 2 Computer Hobbyvereniging Eindhoven CHE-1 Pearcom Pear II Norway West PC-800 Poland Lidia Spain Katson Katson II Yugoslavia Ananas Marta kompjuteri Israel General 48A General 64A RMC Kosmos 285 Spring (sold, inter alia, in Israel) Winner 64K Elite //E East Asian clones Japan Akihabara Japple Honda Computers (also known as the Pete Perkins Apple), which used a custom Vectorio motherboard with a custom user EPROM socket (shown on Thames Television in 1984) Wakou Marvel 2000 Singapore Creative Labs CUBIC-88 Creative Labs CUBIC-99 Lingo 128 Personal Computer South Korea Hyosung PC-8000 Sambo TriGem20 Sambo Busicom SE-6003 E-Haeng Cyborg-3 Zungwon HART Champion-86XT Sanho ACME 2000 Australian clones Dick Smith Cat (VTech Laser 3000) Energy Control 128 (came with a 128k ROM with Forth built in; needed a hardware modification to run ProDOS) Soviet clones Agat Agat-4 Agat-7 Agat-8 Agat-9 Unknown models Bannana Banana CB-777 (confiscated by Apple Computer) CV-777 REON TK 8000 (confiscated by Apple Computer) Other models AES easy3 AMI II Aloha 50 Aton II Bimex BOSS-1 Elppa II General 64 Iris 8 Ivel Z3 Lingo 8 MCP Mango II Mind II Multi-system computer Orange Shuttle (computer) Space II Tiger TC-80A Plug-in Apple II compatibility boards Apple IIe Card (Macintosh LC) Diamond Trackstar (IBM PC) Trackstar Trackstar 128 Trackstar Plus Trackstar E Mimic Systems Spartan (Commodore 64) Quadram Quadlink (IBM PC) Titan III (Apple III) III Plus II III Plus IIe External links epocalc Apple II clones listing References Apple II clones Apple II
List of Apple II clones
[ "Technology" ]
4,022
[ "Computing-related lists", "Lists of computer hardware" ]
1,590,256
https://en.wikipedia.org/wiki/Closeness%20%28mathematics%29
Closeness is a basic concept in topology and related areas in mathematics. Intuitively, we say two sets are close if they are arbitrarily near to each other. The concept can be defined naturally in a metric space where a notion of distance between elements of the space is defined, but it can be generalized to topological spaces where we have no concrete way to measure distances. The closure operator closes a given set by mapping it to a closed set which contains the original set and all points close to it. The concept of closeness is related to limit point. Definition Given a metric space (X, d), a point p is called close or near to a set A if d(p, A) = 0, where the distance between a point and a set is defined as d(p, A) := inf{d(p, a) : a ∈ A}, where inf stands for infimum. Similarly a set B is called close to a set A if d(A, B) = 0, where d(A, B) := inf{d(a, b) : a ∈ A, b ∈ B}. Properties if a point p is close to a set A and a set B, then A and B are close (the converse is not true!). closeness between a point and a set is preserved by continuous functions closeness between two sets is preserved by uniformly continuous functions Closeness relation between a point and a set Let V be some set. A relation between the points of V and the subsets of V is a closeness relation if it satisfies the following conditions: Let A and B be two subsets of V and p a point in V. If p ∈ A then p is close to A. if p is close to A then A is non-empty if p is close to A and A ⊆ B then p is close to B if p is close to A ∪ B then p is close to A or p is close to B if p is close to A and, for every point a ∈ A, a is close to B, then p is close to B. Topological spaces have a closeness relationship built into them: defining a point p to be close to a subset A if and only if p is in the closure of A satisfies the above conditions. Likewise, given a set with a closeness relation, defining a point p to be in the closure of a subset A if and only if p is close to A satisfies the Kuratowski closure axioms. Thus, defining a closeness relation on a set is exactly equivalent to defining a topology on that set. Closeness relation between two sets Let A, B and C be sets. if A and B are close then A ≠ ∅ and B ≠ ∅ if A and B are close then B and A are close if A and B are close and B ⊆ C then A and C are close if A and B ∪ C are close then either A and B are close or A and C are close if A ∩ B ≠ ∅ then A and B are close Generalized definition The closeness relation between a set and a point can be generalized to any topological space. Given a topological space X and a point p, p is called close to a set A if p ∈ cl(A), the closure of A. To define a closeness relation between two sets the topological structure is too weak and we have to use a uniform structure. Given a uniform space, sets A and B are called close to each other if they intersect all entourages, that is, for any entourage U, (A×B)∩U is non-empty. See also Topological space Uniform space References General topology
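A concrete worked example (added here for illustration; it is not part of the original article) on the real line shows both closeness without membership and the failure of the converse noted in the Properties list:

\textbf{Example.} Work in the metric space $(\mathbb{R}, d)$ with $d(x,y) = |x-y|$.
The point $p = 0$ is close to the set $A = (0,1]$, since
$d(0,A) = \inf_{a \in (0,1]} a = 0$, even though $0 \notin A$.
For the failure of the converse of the first property, take
$A = \{\, n : n \ge 2 \,\}$ and $B = \{\, n + 1/n : n \ge 2 \,\}$.
Then $d(A,B) = \inf_{n \ge 2} 1/n = 0$, so $A$ and $B$ are close;
but both sets are closed and disjoint, so there is no point $p$ with
$d(p,A) = d(p,B) = 0$.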
Closeness (mathematics)
[ "Mathematics" ]
564
[ "General topology", "Topology" ]
1,590,295
https://en.wikipedia.org/wiki/Syslog
In computing, syslog is a standard for message logging. It allows separation of the software that generates messages, the system that stores them, and the software that reports and analyzes them. Each message is labeled with a facility code, indicating the type of system generating the message, and is assigned a severity level. Computer system designers may use syslog for system management and security auditing as well as general informational, analysis, and debugging messages. A wide variety of devices, such as printers, routers, and message receivers across many platforms use the syslog standard. This permits the consolidation of logging data from different types of systems in a central repository. Implementations of syslog exist for many operating systems. When operating over a network, syslog uses a client-server architecture where a syslog server listens for and logs messages coming from clients. History Syslog was developed in the 1980s by Eric Allman as part of the Sendmail project. It was readily adopted by other applications and has since become the standard logging solution on Unix-like systems. A variety of implementations also exist on other operating systems and it is commonly found in network devices, such as routers. Syslog originally functioned as a de facto standard, without any authoritative published specification, and many implementations existed, some of which were incompatible. The Internet Engineering Task Force documented the status quo in RFC 3164 in August 2001. It was standardized by RFC 5424 in March 2009. Various companies have attempted to claim patents for specific aspects of syslog implementations. This has had little effect on the use and standardization of the protocol. Message components The information provided by the originator of a syslog message includes the facility code and the severity level. The syslog software adds information to the message header before passing the entry to the syslog receiver. Such components include an originator process ID, a timestamp, and the hostname or IP address of the device. Facility A facility code is used to specify the type of system that is logging the message. Messages with different facilities may be handled differently. The list of available facilities is described by the standard. The mapping between facility code and keyword is not uniform across operating systems and syslog implementations. Severity level The list of severities is also described by the standard. The meaning of severity levels other than Emergency and Debug is relative to the application. For example, if the purpose of the system is to process transactions to update customer account balance information, an error in the final step should be assigned Alert level. However, an error occurring in an attempt to display the ZIP code of the customer may be assigned Error or even Warning level. The server process that handles the display of messages usually includes all lower (more severe) levels when the display of less severe levels is requested. That is, if messages are separated by individual severity, a Warning level entry will also be included when filtering for Notice, Info and Debug messages. Message In RFC 3164, the message component (known as MSG) was specified as having these fields: TAG, which should be the name of the program or process that generated the message, and CONTENT, which contains the details of the message. Described in RFC 5424, "MSG is what was called CONTENT in RFC 3164.
The TAG is now part of the header, but not as a single field. The TAG has been split into APP-NAME, PROCID, and MSGID. This does not totally resemble the usage of TAG, but provides the same functionality for most of the cases." Popular syslog tools such as NXLog, Rsyslog conform to this new standard. The content field should be encoded in a UTF-8 character set and octet values in the traditional ASCII control character range should be avoided. Logger Generated log messages may be directed to various destinations including console, files, remote syslog servers, or relays. Most implementations provide a command line utility, often called logger, as well as a software library, to send messages to the log. To display and monitor the collected logs one needs to use a client application or access the log file directly on the system. The basic command line tools are tail and grep. The log servers can be configured to send the logs over the network (in addition to the local files). Some implementations include reporting programs for filtering and displaying of syslog messages. Network protocol When operating over a network, syslog uses a client-server architecture where the server listens on a well-known or registered port for protocol requests from clients. Historically the most common transport layer protocol for network logging has been User Datagram Protocol (UDP), with the server listening on port 514. Because UDP lacks congestion control mechanisms, Transmission Control Protocol (TCP) port 6514 is used; Transport Layer Security is also required in implementations and recommended for general use. Limitations Since each process, application, and operating system was written independently, there is little uniformity to the payload of the log message. For this reason, no assumption is made about its formatting or contents. A syslog message is formatted (RFC 5424 gives the Augmented Backus–Naur form (ABNF) definition), but its MSG field is not. The network protocol is simplex communication, with no means of acknowledging the delivery to the originator. Outlook Various groups are working on draft standards detailing the use of syslog for more than just network and security event logging, such as its proposed application within the healthcare environment. Regulations, such as the Sarbanes–Oxley Act, PCI DSS, HIPAA, and many others, require organizations to implement comprehensive security measures, which often include collecting and analyzing logs from many different sources. The syslog format has proven effective in consolidating logs, as there are many open-source and proprietary tools for reporting and analysis of these logs. Utilities exist for conversion from Windows Event Log and other log formats to syslog. Managed Security Service Providers attempt to apply analytical techniques and artificial intelligence algorithms to detect patterns and alert customers to problems. Internet standard documents The Syslog protocol is defined by Request for Comments (RFC) documents published by the Internet Engineering Task Force (Internet standards). 
The following is a list of RFCs that define the syslog protocol: RFC 3164, "The BSD syslog Protocol" (obsoleted by RFC 5424), and RFC 5424, "The Syslog Protocol". See also Audit trail Common Log Format Console server Data logging Log management and intelligence Logparser Netconf NXLog Rsyslog Security Event Manager Server log Simple Network Management Protocol (SNMP) syslog-ng Web counter Web log analysis software References External links Internet Engineering Task Force: Datatracker: syslog Working Group (concluded) National Institute of Standards and Technology: "Guide to Computer Security Log Management" (Special Publication 800-92) (white paper) Network Management Software: "Understanding Syslog: Servers, Messages & Security" Paessler IT Explained - Syslog MonitorWare: All about Syslog Internet protocols Internet Standards Network management Log file formats System administration
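As a minimal client-side illustration of the logging flow described above (an added sketch, not from the source; the local address, port, facility, and "myapp" tag are assumptions for a test setup), Python's standard library can emit syslog messages over UDP:

import logging
from logging.handlers import SysLogHandler

# Send messages to a syslog receiver over UDP port 514, the historical default.
handler = SysLogHandler(address=("localhost", 514),
                        facility=SysLogHandler.LOG_USER)
handler.setFormatter(logging.Formatter("myapp: %(message)s"))  # TAG-style prefix

log = logging.getLogger("myapp")
log.addHandler(handler)
log.setLevel(logging.DEBUG)

# Python logging levels map onto syslog severities (error, warning, info, ...).
log.error("transaction failed in final step")       # more severe
log.warning("could not display customer ZIP code")  # less severe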
Syslog
[ "Technology", "Engineering" ]
1,457
[ "Computer networks engineering", "Computer logging", "System administration", "Information systems", "Log file formats", "Network management" ]
1,590,390
https://en.wikipedia.org/wiki/Quasiperiodic%20motion
In mathematics and theoretical physics, quasiperiodic motion is motion on a torus that never comes back to the same point. This behavior can also be called quasiperiodic evolution, dynamics, or flow. The torus may be a generalized torus so that the neighborhood of any point is more than two-dimensional. At each point of the torus there is a direction of motion that remains on the torus. Once a flow on a torus is defined or fixed, it determines trajectories. If the trajectories come back to a given point after a certain time then the motion is periodic with that period; otherwise it is quasiperiodic. The quasiperiodic motion is characterized by a finite set of frequencies which can be thought of as the frequencies at which the motion goes around the torus in different directions. For instance, if the torus is the surface of a doughnut, then there is the frequency at which the motion goes around the doughnut and the frequency at which it goes inside and out. But the set of frequencies is not unique: by redefining the way position on the torus is parametrized, another set of the same size can be generated. These frequencies will be integer combinations of the former frequencies (in such a way that the backward transformation is also an integer combination). To be quasiperiodic, the ratios of the frequencies must be irrational numbers. In Hamiltonian mechanics with n position variables and associated rates of change it is sometimes possible to find a set of n conserved quantities. This is called the fully integrable case. One then has new position variables called action-angle coordinates, one for each conserved quantity, and these action angles simply increase linearly with time. This gives motion on "level sets" of the conserved quantities, resulting in a torus that is an n-manifold, locally having the topology of n-dimensional space. The concept is closely connected to the basic facts about linear flow on the torus. These essentially linear systems and their behaviour under perturbation play a significant role in the general theory of non-linear dynamic systems. Quasiperiodic motion does not exhibit the butterfly effect characteristic of chaotic systems. In other words, starting from a slightly different initial point on the torus results in a trajectory that is always just slightly different from the original trajectory, rather than the deviation becoming large. Rectilinear motion Rectilinear motion along a line in a Euclidean space gives rise to a quasiperiodic motion if the space is turned into a torus (a compact space) by making every point equivalent to any other point situated in the same way with respect to the integer lattice (the points with integer coordinates), so long as the direction cosines of the rectilinear motion form irrational ratios. When the dimension is 2, this means the direction cosines are incommensurable. In higher dimensions it means the direction cosines must be linearly independent over the field of rational numbers. Torus model If we imagine that the phase space is modelled by a torus T (that is, the variables are periodic, like angles), the trajectory of the quasiperiodic system is modelled by a curve on T that wraps around the torus without ever exactly coming back on itself. Assuming the dimension of T is at least two, these can be thought of as one-parameter subgroups of the torus given a group structure (by specifying a certain point as the identity element).
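The torus model can be checked numerically. The following sketch (an illustration added here, not part of the source) samples the linear flow (θ1, θ2) = (ω1t, ω2t) mod 1 and compares how close it returns to its starting point for rational versus irrational frequency ratios:

import numpy as np

def torus_flow(omega1: float, omega2: float, t: np.ndarray) -> np.ndarray:
    """Angles (mod 1) of a linear flow on the 2-torus at times t."""
    return np.stack([(omega1 * t) % 1.0, (omega2 * t) % 1.0], axis=1)

t = np.arange(1, 200_000, dtype=float)  # integer sample times

for omega2, label in [(np.sqrt(2.0), "irrational ratio"),
                      (2.0, "rational ratio")]:
    pts = torus_flow(1.0, omega2, t)
    # Distance to the start (0, 0), accounting for wrap-around on each angle.
    d = np.linalg.norm(np.minimum(pts, 1.0 - pts), axis=1)
    print(f"{label}: closest return to start = {d.min():.2e}")

# The rational case returns exactly (distance 0); the irrational case gets
# arbitrarily close but never exactly repeats, the quasiperiodic hallmark.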
Quasiperiodic functions A quasiperiodic motion can be expressed as a function of time whose value is a vector of "quasiperiodic functions". A quasiperiodic function on the real line is a function obtained from a function on a standard torus T (defined by N angles), by means of a trajectory in the torus in which each angle increases at a constant rate. There are N "internal frequencies", being the rates at which the N angles progress, but as mentioned above the set is not uniquely determined. In many cases the function on the torus can be expressed as a multiple Fourier series. For N equal to 2 this is: F(θ1, θ2) = Σj,k Cj,k exp(i(jθ1 + kθ2)). If the trajectory is θ1 = ω1t, θ2 = ω2t, then the quasiperiodic function is: f(t) = Σj,k Cj,k exp(i(jω1 + kω2)t). This shows that there may be an infinite number of frequencies in the expansion, not multiples of a finite number of frequencies. Depending on which coefficients Cj,k are non-zero, the "internal frequencies" ω1 and ω2 themselves may not contribute terms in this expansion, even if one uses an alternative set of internal frequencies such as ω1 + ω2 and ω1 − ω2. If the Cj,k are non-zero only when the ratio j/k is some specific constant, then the function is actually periodic rather than quasiperiodic. See Kronecker's theorem for the geometric and Fourier theory attached to the number of modes. The closure of (the image of) any one-parameter subgroup in T is a subtorus of some dimension d. In that subtorus the result of Kronecker applies: there are d real numbers, linearly independent over the rational numbers, that are the corresponding frequencies. In the quasiperiodic case, where the image is dense, a result can be proved on the ergodicity of the motion: for any measurable subset A of T (for the usual probability measure), the average proportion of time spent by the motion in A is equal to the measure of A. Terminology and history The theory of almost periodic functions is, roughly speaking, for the same situation but allowing T to be a torus with an infinite number of dimensions. The early discussion of quasi-periodic functions, by Ernest Esclangon following the work of Piers Bohl, in fact led to a definition of almost-periodic function, the terminology of Harald Bohr. Ian Stewart wrote that the default position of classical celestial mechanics, at this period, was that motions that could be described as quasiperiodic were the most complex that occurred. For the Solar System, that would apparently be the case if the gravitational attractions of the planets to each other could be neglected: but that assumption turned out to be the starting point of complex mathematics. The research direction begun by Andrei Kolmogorov in the 1950s led to the understanding that quasiperiodic flow on phase space tori could survive perturbation. NB: The concept of quasiperiodic function, for example the sense in which theta functions and the Weierstrass zeta function in complex analysis are said to have quasi-periods with respect to a period lattice, is something distinct from this topic. References See also Quasiperiodicity Kolmogorov–Arnold–Moser theorem Dynamical systems
Quasiperiodic motion
[ "Physics", "Mathematics" ]
1,346
[ "Mechanics", "Dynamical systems" ]
1,590,591
https://en.wikipedia.org/wiki/Glycopeptide%20antibiotic
Glycopeptide antibiotics are a class of drugs of microbial origin that are composed of glycosylated cyclic or polycyclic nonribosomal peptides. Significant glycopeptide antibiotics include the anti-infective antibiotics vancomycin, teicoplanin, telavancin, ramoplanin, avoparcin, decaplanin, corbomycin and complestatin, as well as the antitumor antibiotic bleomycin. Vancomycin is used if infection with methicillin-resistant Staphylococcus aureus (MRSA) is suspected. Mechanism and classification Some members of this class of drugs inhibit the synthesis of cell walls in susceptible microbes by inhibiting peptidoglycan synthesis. The core class (including vancomycin) binds to acyl-D-alanyl-D-alanine in lipid II, preventing the addition of new units to the peptidoglycan. Within this core class, one may distinguish multiple generations: the first generation includes vancomycin and teicoplanin, while the semisynthetic second generation includes lipoglycopeptides such as telavancin, oritavancin and dalbavancin. The extra lipophilicity not only enhances lipid II binding but also creates a second mechanism of action whereby the antibiotic dissolves into the membrane and makes it more permeable. Corbomycin and complestatin are structurally and ancestrally related to vancomycin, but they work by inhibiting autolysins through binding to peptidoglycan, thereby preventing cell division; neither is an approved drug. Ramoplanin, although a "glycopeptide" in the literal sense, has a quite different structural core. It not only binds to lipid II but also attacks MurG and transglycosylases (glycosyltransferases), which polymerize amino acid/sugar building blocks into peptidoglycan. It has been described as a "first-in-class" antibiotic, representing the glycolipodepsipeptide antibiotics. Bleomycin also has a different core. Its mode of action is also unrelated to the cell wall, instead causing DNA damage in tumor cells. Use Due to their toxicity, the use of first-generation glycopeptide antibiotics is restricted to patients who are critically ill, who have a demonstrated hypersensitivity to the β-lactams, or who are infected with β-lactam-resistant species, as in the case of methicillin-resistant Staphylococcus aureus. These antibiotics are effective principally against Gram-positive cocci. First-generation examples exhibit a narrow spectrum of action and are bactericidal against most susceptible organisms, but only bacteriostatic against the enterococci. Some tissues are not penetrated very well by glycopeptides, and they do not penetrate into the cerebrospinal fluid. The second-generation glycopeptides, or "lipoglycopeptides", have better binding to lipid II due to their lipophilic moieties, expanding the antibacterial spectrum. Telavancin also has a hydrophilic moiety attached to enhance tissue distribution and reduce nephrotoxicity. History Vancomycin was isolated in 1953 and used clinically by 1958, while teicoplanin was discovered in 1978 and became clinically available in 1984. Telavancin is a semi-synthetic lipoglycopeptide derivative of vancomycin approved by the FDA in 2009. Teicoplanin has historically been more widely marketed - and thus more used - in Europe compared to the U.S. It has more fatty acid chains than vancomycin and is considered to be 50 to 100 times more lipophilic. Teicoplanin also has an increased half-life compared to vancomycin, as well as having better tissue penetration. It can be two to four times more active than vancomycin, but this depends upon the organism.
Teicoplanin is more acidic, forming water-soluble salts, so it can be given intramuscularly. Teicoplanin is much better at penetrating leukocytes and phagocytes than vancomycin. Since 2002, isolates of vancomycin-resistant Staphylococcus aureus (VRSA) have been found in the USA and other countries. Glycopeptides have typically been considered the last effective line of defense for cases of MRSA; however, several newer classes of antibiotics have proven to have activity against MRSA, including linezolid of the oxazolidinone class (in 2000) and daptomycin of the lipopeptide class (in 2003). Research Several derivatives of vancomycin are currently being developed, including oritavancin and dalbavancin (both lipoglycopeptides). Possessing longer half-lives than vancomycin, these newer candidates may demonstrate improvements over vancomycin due to less frequent dosing and activity against vancomycin-resistant bacteria. Administration Vancomycin is usually given intravenously, as an infusion, and can cause tissue necrosis and phlebitis at the injection site if given too rapidly. Pain at the site of injection is indeed a common adverse event. One of the side effects is red man syndrome, an idiosyncratic reaction to rapid bolus administration that is caused by histamine release. Other side effects of vancomycin are nephrotoxicity, including kidney failure and interstitial nephritis; blood disorders, including neutropenia; and deafness, which is reversible once therapy has stopped. Over 90% of the dose is excreted in the urine; therefore, there is a risk of accumulation in patients with renal impairment, and therapeutic drug monitoring (TDM) is recommended. Oral preparations of vancomycin are available; however, they are not absorbed from the lumen of the gut, so they are of no use in treating systemic infections. The oral preparations are formulated for the treatment of infections within the gastrointestinal tract, such as those caused by Clostridioides difficile. References Carbohydrate chemistry Antimicrobial peptides
Glycopeptide antibiotic
[ "Chemistry" ]
1,325
[ "Glycopeptides", "Glycopeptide antibiotics", "Carbohydrate chemistry", "nan", "Chemical synthesis", "Glycobiology" ]