| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
65,977,551 | https://en.wikipedia.org/wiki/Uwu | uwu, also stylized UwU, is an emoticon representing a cute face. The u characters represent closed eyes, while the w represents a cat mouth. It is used to express various warm, happy, or affectionate feelings.
Usage and variants
The emoticon uwu is often used to denote cuteness (kawaii), happiness, or tenderness. Excessive usage of the emoticon can also have the intended effect of annoying its recipient. It is popularly used in the furry fandom.
The emoticon also has a more surprised and sometimes allusive variant, owo (also stylized OwO; also associated with the furry fandom and often the response "what's this?"), that may also denote cuteness, as well as curiosity and perplexity. owo gained popularity in 2018; as opposed to uwu, the o characters represent open eyes. It is also sometimes used for trolling. Another variant, TwT, is often used to symbolize crying, with each T representing a closed eye with tears streaming down.
History
The emoticon uwu is known to date back as far as April 11, 2000, when it was used by furry artist Ghislain Deslierres in a post on the furry art site VCL (Vixen Controlled Library). A 2005 anime fanfiction contained another early use of the word. The origin of the term is unknown, with many people believing it to originate in Internet chat rooms. By 2014, the emoticon had spread across the Internet into Tumblr, becoming an Internet subculture.
The word uwu is included in the Royal Spanish Academy's word observatory, defined as an "emoticon used to show happiness or tenderness".
Notable uses
In 2018, the official Twitter account tweeted "uwu" in response to a tweet by an artist.
In 2020, the U.S. Army Esports Twitter account tweeted "uwu" in reply to a tweet by Discord, which was met by significant backlash from Twitter users. This event culminated in a trend of attempting to get banned from the U.S. Army Esports Discord server as quickly as possible, with a common technique being to link to the Wikipedia article on war crimes committed by the United States.
See also
List of emoticons
Notes
References
External links
Emoticons
Furry fandom
Internet memes introduced in 2000
2000 neologisms
Internet slang | Uwu | Mathematics | 506 |
2,916,615 | https://en.wikipedia.org/wiki/Force%20field%20%28chemistry%29 | In the context of chemistry, molecular physics, physical chemistry, and molecular modelling, a force field is a computational model that is used to describe the forces between atoms (or collections of atoms) within molecules or between molecules as well as in crystals. Force fields are a variety of interatomic potentials. More precisely, the force field refers to the functional form and parameter sets used to calculate the potential energy of a system on the atomistic level. Force fields are usually used in molecular dynamics or Monte Carlo simulations. The parameters for a chosen energy function may be derived from classical laboratory experiment data, calculations in quantum mechanics, or both. Force fields utilize the same concept as force fields in classical physics, with the main difference being that the force field parameters in chemistry describe the energy landscape on the atomistic level. From a force field, the acting forces on every particle are derived as a gradient of the potential energy with respect to the particle coordinates.
A large number of different force field types exist today (e.g. for organic molecules, ions, polymers, minerals, and metals). Depending on the material, different functional forms are usually chosen for the force fields since different types of atomistic interactions dominate the material behavior.
There are various criteria that can be used for categorizing force field parametrization strategies. An important distinction is between 'component-specific' and 'transferable' force fields. For a component-specific parametrization, the considered force field is developed solely for describing a single given substance (e.g. water). For a transferable force field, all or some parameters are designed as building blocks that become transferable and applicable to different substances (e.g. methyl groups in alkane transferable force fields). A different important distinction addresses the physical structure of the models: All-atom force fields provide parameters for every type of atom in a system, including hydrogen, while united-atom interatomic potentials treat the hydrogen and carbon atoms in methyl groups and methylene bridges as one interaction center. Coarse-grained potentials, which are often used in long-time simulations of macromolecules such as proteins, nucleic acids, and multi-component complexes, sacrifice chemical detail for higher computing efficiency.
Force fields for molecular systems
The basic functional form of potential energy for modeling molecular systems includes intramolecular interaction terms for interactions of atoms that are linked by covalent bonds and intermolecular (i.e. nonbonded, also termed noncovalent) terms that describe the long-range electrostatic and van der Waals forces. The specific decomposition of the terms depends on the force field, but a general form for the total energy in an additive force field can be written as

$E_{\text{total}} = E_{\text{bonded}} + E_{\text{nonbonded}}$

where the components of the covalent and noncovalent contributions are given by the following summations:

$E_{\text{bonded}} = E_{\text{bond}} + E_{\text{angle}} + E_{\text{dihedral}}$

$E_{\text{nonbonded}} = E_{\text{electrostatic}} + E_{\text{van der Waals}}$
The bond and angle terms are usually modeled by quadratic energy functions that do not allow bond breaking. A more realistic description of a covalent bond at higher stretching is provided by the more expensive Morse potential. The functional form for dihedral energy varies from one force field to another. Additional "improper torsional" terms may be added to enforce the planarity of aromatic rings and other conjugated systems, as may "cross-terms" that describe the coupling of different internal variables, such as angles and bond lengths. Some force fields also include explicit terms for hydrogen bonds.
The nonbonded terms are the most computationally intensive. A popular choice is to limit interactions to pairwise energies. The van der Waals term is usually computed with a Lennard-Jones potential or the Mie potential, and the electrostatic term with Coulomb's law. However, both can be buffered or scaled by a constant factor to account for electronic polarizability. A large number of force fields based on this or similar energy expressions have been proposed in the past decades for modeling different types of materials such as molecular substances, metals, and glasses; see below for a comprehensive list of force fields.
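As an illustration of how such an additive energy expression is evaluated, the sketch below combines harmonic bond stretching with 12-6 Lennard-Jones and Coulomb nonbonded terms (angle and dihedral terms are omitted for brevity). The function, unit conventions, and any parameter values fed to it are assumptions for illustration, not the energy expression of any particular published force field.

```python
import numpy as np

# Toy additive force-field energy: harmonic bonds + 12-6 Lennard-Jones + Coulomb.
# All parameter values used with this function are hypothetical and illustrative.

def total_energy(coords, bonds, charges, lj_params, coulomb_const=332.06):
    """coords: (N, 3) positions in angstrom; bonds: list of (i, j, k_bond, r0);
    charges: partial charges in units of e; lj_params: per-atom (epsilon, sigma).
    coulomb_const ~ 332.06 converts e^2/angstrom to kcal/mol."""
    e_bond = 0.0
    bonded_pairs = set()
    for i, j, k_bond, r0 in bonds:
        r = np.linalg.norm(coords[i] - coords[j])
        e_bond += k_bond * (r - r0) ** 2              # harmonic bond stretching
        bonded_pairs.add((min(i, j), max(i, j)))

    e_lj = e_coul = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            if (i, j) in bonded_pairs:                # exclude 1-2 pairs from nonbonded terms
                continue
            r = np.linalg.norm(coords[i] - coords[j])
            eps = np.sqrt(lj_params[i][0] * lj_params[j][0])   # Lorentz-Berthelot mixing
            sig = 0.5 * (lj_params[i][1] + lj_params[j][1])
            e_lj += 4 * eps * ((sig / r) ** 12 - (sig / r) ** 6)
            e_coul += coulomb_const * charges[i] * charges[j] / r
    return e_bond + e_lj + e_coul
```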
Bond stretching
As it is rare for bonds to deviate significantly from their equilibrium values, the most simplistic approaches utilize a Hooke's law formula:

$E_{\text{bond}} = k_{ij}\,(l_{ij} - l_{0,ij})^2$

where $k_{ij}$ is the force constant, $l_{ij}$ is the bond length, and $l_{0,ij}$ is the value for the bond length between atoms $i$ and $j$ when all other terms in the force field are set to 0. The term $l_{0,ij}$ is at times differently defined or taken at different thermodynamic conditions.
The bond stretching constant can be determined from the experimental infrared spectrum, Raman spectrum, or high-level quantum-mechanical calculations. The constant determines vibrational frequencies in molecular dynamics simulations. The stronger the bond is between atoms, the higher is the value of the force constant, and the higher the wavenumber (energy) in the IR/Raman spectrum.
Though the formula of Hooke's law provides a reasonable level of accuracy at bond lengths near the equilibrium distance, it is less accurate as one moves away. In order to model the Morse curve better one could employ cubic and higher powers. However, for most practical applications these differences are negligible, and inaccuracies in predictions of bond lengths are on the order of the thousandth of an angstrom, which is also the limit of reliability for common force fields. A Morse potential can be employed instead to enable bond breaking and higher accuracy, even though it is less efficient to compute. For reactive force fields, bond breaking and bond orders are additionally considered.
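The difference between the harmonic and Morse descriptions can be illustrated with a short numerical comparison. The parameters below are hypothetical (loosely inspired by a light diatomic) and are chosen so that both curves share the same curvature at the equilibrium distance; the harmonic energy keeps rising at large stretch, while the Morse energy levels off at the dissociation energy, allowing bond breaking.

```python
import numpy as np

# Hypothetical illustrative parameters (not taken from any published force field).
r0 = 0.74      # equilibrium bond length, angstrom
k = 350.0      # harmonic force constant, kcal/(mol*angstrom^2), for E = k*(r - r0)^2
D_e = 104.0    # Morse well depth (dissociation energy), kcal/mol
a = np.sqrt(k / D_e)   # chosen so the Morse curve matches the curvature of k*(r - r0)^2 at r0

def harmonic(r):
    return k * (r - r0) ** 2

def morse(r):
    return D_e * (1.0 - np.exp(-a * (r - r0))) ** 2

for r in (0.70, 0.74, 0.80, 1.0, 1.5, 3.0):
    print(f"r = {r:4.2f} A   harmonic = {harmonic(r):8.2f}   morse = {morse(r):7.2f}  kcal/mol")
# Near r0 the two nearly coincide; far from r0 the harmonic term diverges while
# the Morse term approaches D_e.
```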
Electrostatic interactions
Electrostatic interactions are represented by a Coulomb energy, which utilizes atomic charges to represent chemical bonding ranging from covalent to polar covalent and ionic bonding. The typical formula is the Coulomb law:

$E_{\text{Coulomb}} = \frac{1}{4\pi\varepsilon_0}\,\frac{q_i q_j}{r_{ij}}$

where $r_{ij}$ is the distance between two atoms $i$ and $j$, and $q_i$ and $q_j$ are their partial charges. The total Coulomb energy is a sum over all pairwise combinations of atoms and usually excludes pairs of directly bonded atoms (1-2 interactions) as well as atoms separated by two bonds (1-3 interactions).
Atomic charges can make dominant contributions to the potential energy, especially for polar molecules and ionic compounds, and are critical for simulating the geometry, interaction energy, and reactivity. The assignment of charges usually uses some heuristic approach, with different possible solutions.
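As a concrete, heavily simplified illustration of such a heuristic, the sketch below assigns charges by electronegativity equalization: charges are chosen to minimize a quadratic energy in the atomic charges subject to a fixed total charge. The electronegativity and hardness values and the bare 1/r coupling are assumptions for illustration; production schemes (QEq, ESP/RESP fitting, etc.) are considerably more involved.

```python
import numpy as np

# Highly simplified electronegativity-equalization (QEq-like) charge assignment.
# chi (electronegativities) and hardness (diagonal self-interaction) are hypothetical
# values in eV; the off-diagonal coupling is taken as a bare Coulomb term 14.4/r_ij
# (eV, r in angstrom). Production schemes shield this coupling at short range and
# fit chi/hardness carefully; this is only meant to illustrate the idea.

def assign_charges(coords, chi, hardness, total_charge=0.0):
    n = len(chi)
    J = np.diag(np.asarray(hardness, dtype=float))
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            J[i, j] = J[j, i] = 14.4 / r
    # Minimize E(q) = chi.q + 0.5*q.J.q subject to sum(q) = total_charge:
    # stationarity gives J q - mu * 1 = -chi, plus the charge-conservation row.
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = J
    A[:n, n] = -1.0
    A[n, :n] = 1.0
    b = np.concatenate([-np.asarray(chi, dtype=float), [total_charge]])
    return np.linalg.solve(A, b)[:n]

coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.5]])
print(assign_charges(coords, chi=[5.0, 8.0], hardness=[14.0, 16.0]))  # ~ [+0.28, -0.28]
```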
Force fields for crystal systems
Atomistic interactions in crystal systems deviate significantly from those in molecular systems, such as those of organic molecules. In crystal systems, multi-body interactions in particular are important and cannot be neglected if high accuracy of the force field is the aim. For crystal systems with covalent bonding, bond order potentials are usually used, e.g. Tersoff potentials. For metal systems, embedded atom potentials are usually used. For metals, so-called Drude model potentials have also been developed, which describe a form of attachment of electrons to nuclei.
Parameterization
In addition to the functional form of the potentials, a force field consists of the parameters of these functions. Together, they specify the interactions on the atomistic level. The parametrization, i.e. determining the parameter values, is crucial for the accuracy and reliability of the force field. Different parametrization procedures have been developed for different substances, e.g. metals, ions, and molecules, and different parametrization strategies are usually used for different material types. In general, two main approaches can be distinguished: using data or information from the atomistic level, e.g. from quantum mechanical calculations or spectroscopic data, or using data on macroscopic properties, e.g. the hardness or compressibility of a given material. Often a combination of these routes is used. Hence, one way or the other, force field parameters are always determined in an empirical way. Nevertheless, the term 'empirical' is often used in the context of force field parameters when macroscopic material property data was used for the fitting. Experimental data (microscopic and macroscopic) used for the fit include, for example, the enthalpy of vaporization, enthalpy of sublimation, dipole moments, and various spectroscopic properties such as vibrational frequencies. Often, for molecular systems, quantum mechanical calculations in the gas phase are used for parametrizing intramolecular interactions, while intermolecular dispersive interactions are parametrized using macroscopic properties such as liquid densities. The assignment of atomic charges often follows quantum mechanical protocols with some heuristics, which can lead to significant deviations in representing specific properties.
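A minimal 'bottom-up' parametrization step might look like the sketch below: given relative energies from a (here entirely hypothetical) quantum-mechanical bond-stretch scan, a harmonic force constant and equilibrium bond length are extracted by least-squares fitting. Real parametrization workflows fit many coupled parameters simultaneously against much larger quantum-mechanical and experimental data sets.

```python
import numpy as np

# Hypothetical "QM" bond-stretch scan: distances (angstrom) and relative energies (kcal/mol).
r_scan = np.array([1.00, 1.05, 1.09, 1.13, 1.18, 1.25])
e_scan = np.array([ 3.1,  0.9,  0.1,  0.4,  1.8,  5.0])

# Fit E(r) = k*(r - r0)^2 by fitting a quadratic a*r^2 + b*r + c and converting:
# completing the square gives k = a and r0 = -b / (2a).
a, b, c = np.polyfit(r_scan, e_scan, 2)
k_fit = a                      # force constant in kcal/(mol*angstrom^2)
r0_fit = -b / (2 * a)          # equilibrium bond length in angstrom
print(f"fitted k = {k_fit:.1f} kcal/mol/A^2, r0 = {r0_fit:.3f} A")
```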
A large number of workflows and parametrization procedures have been employed in the past decades, using different data and optimization strategies for determining the force field parameters. They differ significantly, which is partly due to the different focuses of different developments. The parameters for molecular simulations of biological macromolecules such as proteins, DNA, and RNA were often derived or transferred from observations on small organic molecules, which are more accessible for experimental studies and quantum calculations.
Atom types are defined for different elements as well as for the same element in sufficiently different chemical environments. For example, an oxygen atom in water and an oxygen atom in a carbonyl functional group are classified as different force field atom types. Typical molecular force field parameter sets include values for atomic mass, atomic charge, and Lennard-Jones parameters for every atom type, as well as equilibrium values of bond lengths, bond angles, and dihedral angles. The bonded terms refer to pairs, triplets, and quadruplets of bonded atoms, and include values for the effective spring constant of each potential.
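Conceptually, such a parameter set can be pictured as a small database keyed by atom types and by tuples of atom types. The schematic below uses hypothetical values that are not taken from any published force field.

```python
# Schematic force-field parameter set keyed by atom types; all values are hypothetical.
force_field = {
    "atom_types": {
        # type:        (mass / amu, charge / e, lj_epsilon / kcal/mol, lj_sigma / angstrom)
        "OW":          (15.999, -0.80, 0.15, 3.15),   # water oxygen
        "O_carbonyl":  (15.999, -0.50, 0.21, 2.96),   # same element, different chemical environment
        "HW":          ( 1.008,  0.40, 0.00, 0.00),
        "C":           (12.011,  0.50, 0.09, 3.40),
    },
    "bonds": {       # (type_i, type_j): (k / kcal/mol/A^2, r0 / angstrom)
        ("OW", "HW"):         (450.0, 0.96),
        ("C", "O_carbonyl"):  (570.0, 1.23),
    },
    "angles": {      # (type_i, type_j, type_k): (k_theta / kcal/mol/rad^2, theta0 / degrees)
        ("HW", "OW", "HW"):   (55.0, 104.5),
    },
    "dihedrals": {},  # (i, j, k, l): (barrier, periodicity, phase) entries would go here
}
```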
Heuristic force field parametrization procedures have been very successful for many years, but have recently been criticized, since they are usually not fully automated and are therefore subject to some subjectivity of the developers, which also brings problems regarding the reproducibility of the parametrization procedure.
Efforts to provide open source codes and methods include openMM and openMD. The use of semi-automation or full automation, without input from chemical knowledge, is likely to increase inconsistencies at the level of atomic charges and in the assignment of the remaining parameters, and is likely to dilute the interpretability and performance of the parameters.
Force field databases
A large number of force fields have been published in the past decades, mostly in scientific publications. In recent years, some databases have attempted to collect, categorize, and make force fields digitally available. Different databases focus on different types of force fields. For example, the openKim database focuses on interatomic functions describing the individual interactions between specific elements. The TraPPE database focuses on transferable force fields of organic molecules (developed by the Siepmann group). The MolMod database focuses on molecular and ionic force fields (both component-specific and transferable).
Transferability and mixing function types
Functional forms and parameter sets have been defined by the developers of interatomic potentials and feature variable degrees of self-consistency and transferability. When functional forms of the potential terms vary or are mixed, the parameters from one interatomic potential function typically cannot be used together with another interatomic potential function. In some cases, modifications can be made with minor effort, for example, between 9-6 Lennard-Jones potentials and 12-6 Lennard-Jones potentials. Transfers from Buckingham potentials to harmonic potentials, or from embedded atom models to harmonic potentials, on the contrary, would require many additional assumptions and may not be possible.
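As an example of the kind of 'minor effort' conversion mentioned above, the sketch below writes both the 12-6 and 9-6 Lennard-Jones forms in terms of the well depth epsilon and the position of the minimum r_min, so that a pair interaction can be carried over by keeping epsilon and r_min fixed; only the relation between r_min and the zero-crossing distance sigma differs between the two forms. The numerical parameters are hypothetical.

```python
# Convert between 12-6 and 9-6 Lennard-Jones descriptions of the same pair interaction
# by keeping the well depth (eps) and the position of the minimum (r_min) fixed.

def lj_12_6(r, eps, r_min):
    return eps * ((r_min / r) ** 12 - 2 * (r_min / r) ** 6)

def lj_9_6(r, eps, r_min):
    return eps * (2 * (r_min / r) ** 9 - 3 * (r_min / r) ** 6)

# If parameters are given as (eps, sigma) with sigma the zero-crossing distance,
# the conversion from sigma to r_min differs between the two functional forms:
def r_min_from_sigma_12_6(sigma):
    return 2 ** (1.0 / 6.0) * sigma            # about 1.122 * sigma

def r_min_from_sigma_9_6(sigma):
    return (3.0 / 2.0) ** (1.0 / 3.0) * sigma  # about 1.145 * sigma

eps, sigma = 0.2, 3.4   # hypothetical 12-6 parameters (kcal/mol, angstrom)
r_min = r_min_from_sigma_12_6(sigma)
print(lj_12_6(r_min, eps, r_min), lj_9_6(r_min, eps, r_min))  # both equal -eps at the minimum
```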
In many cases, force fields can be straightforwardly combined. Yet, often, additional specifications and assumptions are required.
Limitations
All interatomic potentials are based on approximations and experimental data, and are therefore often termed empirical. Depending on the force field, performance ranges from higher accuracy than density functional theory (DFT) calculations, with access to systems and time scales millions of times larger, to little better than random guesses. The use of accurate representations of chemical bonding, combined with reproducible experimental data and validation, can lead to lasting interatomic potentials of high quality with many fewer parameters and assumptions in comparison to DFT-level quantum methods.
Possible limitations include atomic charges, also called point charges. Most force fields rely on point charges to reproduce the electrostatic potential around molecules, which works less well for anisotropic charge distributions. The remedy is that point charges have a clear interpretation, and virtual electrons can be added to capture essential features of the electronic structure, such as additional polarizability in metallic systems to describe the image potential, internal multipole moments in π-conjugated systems, and lone pairs in water. Electronic polarization of the environment may be better included by using polarizable force fields or a macroscopic dielectric constant. However, application of a single value of the dielectric constant is a coarse approximation in the highly heterogeneous environments of proteins, biological membranes, minerals, or electrolytes.
All types of van der Waals forces are also strongly environment-dependent because these forces originate from interactions of induced and "instantaneous" dipoles (see Intermolecular force). The original Fritz London theory of these forces applies only in a vacuum. A more general theory of van der Waals forces in condensed media was developed by A. D. McLachlan in 1963 and included the original London approach as a special case. The McLachlan theory predicts that van der Waals attractions in media are weaker than in vacuum and follow the like dissolves like rule, which means that different types of atoms interact more weakly than identical types of atoms. This is in contrast to the combinatorial rules or the Slater-Kirkwood equation applied in the development of classical force fields. The combinatorial rules state that the interaction energy of two dissimilar atoms (e.g., C...N) is an average of the interaction energies of the corresponding identical atom pairs (i.e., C...C and N...N). According to McLachlan's theory, the interactions of particles in media can even be fully repulsive, as observed for liquid helium; however, the lack of vaporization and the presence of a freezing point contradict a theory of purely repulsive interactions. Measurements of attractive forces between different materials (Hamaker constant) have been explained by Jacob Israelachvili. For example, "the interaction between hydrocarbons across water is about 10% of that across vacuum". Such effects are represented in molecular dynamics through pairwise interactions that are spatially more dense in the condensed phase relative to the gas phase and are reproduced once the parameters for all phases are validated to reproduce chemical bonding, density, and cohesive/surface energy.
Limitations have been strongly felt in protein structure refinement. The major underlying challenge is the huge conformation space of polymeric molecules, which grows beyond current computational feasibility when molecules contain more than ~20 monomers. Participants in the Critical Assessment of protein Structure Prediction (CASP) did not try to refine their models to avoid "a central embarrassment of molecular mechanics, namely that energy minimization or molecular dynamics generally leads to a model that is less like the experimental structure". Force fields have been applied successfully for protein structure refinement in different X-ray crystallography and NMR spectroscopy applications, especially using the program XPLOR. However, the refinement is driven mainly by a set of experimental constraints, and the interatomic potentials serve mainly to remove interatomic hindrances. The results of calculations were practically the same with rigid sphere potentials implemented in the program DYANA (calculations from NMR data), or with programs for crystallographic refinement that use no energy functions at all. These shortcomings are related to interatomic potentials and to the inability to sample the conformation space of large molecules effectively. The development of parameters to tackle such large-scale problems therefore also requires new approaches. A specific problem area is homology modeling of proteins. Meanwhile, alternative empirical scoring functions have been developed for ligand docking, protein folding, homology model refinement, computational protein design, and modeling of proteins in membranes.
It was also argued that some protein force fields operate with energies that are irrelevant to protein folding or ligand binding. The parameters of protein force fields reproduce the enthalpy of sublimation, i.e., the energy of evaporation of molecular crystals. However, protein folding and ligand binding are thermodynamically closer to crystallization, or liquid-solid transitions, as these processes represent freezing of mobile molecules in condensed media. Thus, free energy changes during protein folding or ligand binding are expected to represent a combination of an energy similar to the heat of fusion (energy absorbed during melting of molecular crystals), a conformational entropy contribution, and solvation free energy. The heat of fusion is significantly smaller than the enthalpy of sublimation. Hence, the potentials describing protein folding or ligand binding need more consistent parameterization protocols, e.g., as described for IFF. Indeed, the energies of H-bonds in proteins are ~ -1.5 kcal/mol when estimated from protein engineering or alpha helix to coil transition data, but the same energies estimated from the sublimation enthalpy of molecular crystals were -4 to -6 kcal/mol, which is related to re-forming existing hydrogen bonds rather than forming hydrogen bonds from scratch. The depths of modified Lennard-Jones potentials derived from protein engineering data were also smaller than in typical potential parameters and followed the like dissolves like rule, as predicted by McLachlan theory.
Force fields available in literature
Different force fields are designed for different purposes:
Classical
AMBER (Assisted Model Building and Energy Refinement) – widely used for proteins and DNA.
CFF (Consistent Force Field) – a family of force fields adapted to a broad variety of organic compounds, includes force fields for polymers, metals, etc. CFF was developed by Arieh Warshel, Lifson, and coworkers as a general method for unifying studies of energies, structures, and vibration of general molecules and molecular crystals. The CFF program, developed by Levitt and Warshel, is based on the Cartesian representation of all the atoms, and it served as the basis for many subsequent simulation programs.
CHARMM (Chemistry at HARvard Molecular Mechanics) – originally developed at Harvard, widely used for both small molecules and macromolecules
COSMOS-NMR – hybrid QM/MM force field adapted to various inorganic compounds, organic compounds, and biological macromolecules, including semi-empirical calculation of atomic charges and NMR properties. COSMOS-NMR is optimized for NMR-based structure elucidation and implemented in the COSMOS molecular modelling package.
CVFF – also used broadly for small molecules and macromolecules.
ECEPP – first force field for polypeptide molecules - developed by F.A. Momany, H.A. Scheraga and colleagues. ECEPP was developed specifically for the modeling of peptides and proteins. It uses fixed geometries of amino acid residues to simplify the potential energy surface. Thus, the energy minimization is conducted in the space of protein torsion angles. Both MM2 and ECEPP include potentials for H-bonds and torsion potentials for describing rotations around single bonds. ECEPP/3 was implemented (with some modifications) in Internal Coordinate Mechanics and FANTOM.
GROMOS (GROningen MOlecular Simulation) – a force field that comes as part of the GROMOS software, a general-purpose molecular dynamics computer simulation package for the study of biomolecular systems. GROMOS force field A-version has been developed for application to aqueous or apolar solutions of proteins, nucleotides, and sugars. A B-version to simulate gas phase isolated molecules is also available.
IFF (Interface Force Field) – covers metals, minerals, 2D materials, and polymers. It uses 12-6 LJ and 9-6 LJ interactions. IFF was developed for compounds across the periodic table. It assigns consistent charges, utilizes standard conditions as a reference state, reproduces structures, energies, and energy derivatives, and quantifies limitations for all included compounds. The Interface force field (IFF) assumes one single energy expression for all compounds across the periodic table (with 9-6 and 12-6 LJ options). IFF is in most parts non-polarizable, but also comprises polarizable parts, e.g. for some metals (Au, W) and pi-conjugated molecules.
MMFF (Merck Molecular Force Field) – developed at Merck for a broad range of molecules.
MM2 was developed by Norman Allinger mainly for conformational analysis of hydrocarbons and other small organic molecules. It is designed to reproduce the equilibrium covalent geometry of molecules as precisely as possible. It implements a large set of parameters that is continuously refined and updated for many different classes of organic compounds (MM3 and MM4).
OPLS (Optimized Potential for Liquid Simulations) (variants include OPLS-AA, OPLS-UA, OPLS-2001, OPLS-2005, OPLS3e, OPLS4) – developed by William L. Jorgensen at the Yale University Department of Chemistry.
QCFF/PI – a general force field for conjugated molecules.
UFF (Universal Force Field) – A general force field with parameters for the full periodic table up to and including the actinoids, developed at Colorado State University. The reliability is known to be poor due to lack of validation and interpretation of the parameters for nearly all claimed compounds, especially metals and inorganic compounds.
Polarizable
Several force fields explicitly capture polarizability, where a particle's effective charge can be influenced by electrostatic interactions with its neighbors. Core-shell models are common, which consist of a positively charged core particle, representing the polarizable atom, and a negatively charged particle attached to the core atom through a spring-like harmonic oscillator potential. Recent examples include polarizable models with virtual electrons that reproduce image charges in metals and polarizable biomolecular force fields.
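A minimal core-shell (Drude-like) picture can be sketched as follows: the shell charge is displaced against a harmonic spring until the spring force balances the electrostatic force of the local field, which yields an induced dipole and an effective polarizability of q²/k. The numerical values below are assumptions for illustration only, not parameters of any specific polarizable force field.

```python
import numpy as np

# Minimal core-shell (Drude-like) polarization sketch: one polarizable site in an
# external field E_ext. The shell carries charge q_shell and is tied to the core
# by a harmonic spring with force constant k_spring; values are hypothetical.
q_shell = -1.0       # elementary charges
k_spring = 1000.0    # kcal/(mol*angstrom^2)
E_ext = np.array([0.0, 0.0, 0.05])   # field in kcal/(mol*angstrom*e), illustrative

# At mechanical equilibrium the spring force balances the electrostatic force:
#   k_spring * d = q_shell * E_ext   ->   displacement d, induced dipole mu = q_shell * d
d = q_shell * E_ext / k_spring
mu_induced = q_shell * d
alpha = q_shell ** 2 / k_spring      # the model's polarizability, alpha = q^2 / k
print("displacement:", d, "induced dipole:", mu_induced, "polarizability:", alpha)
```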
AMBER – polarizable force field developed by Jim Caldwell and coworkers.
AMOEBA (Atomic Multipole Optimized Energetics for Biomolecular Applications) – force field developed by Pengyu Ren (University of Texas at Austin) and Jay W. Ponder (Washington University). AMOEBA force field is gradually moving to more physics-rich AMOEBA+.
CHARMM – polarizable force field developed by S. Patel (University of Delaware) and C. L. Brooks III (University of Michigan). Based on the classical Drude oscillator developed by Alexander MacKerell (University of Maryland, Baltimore) and Benoit Roux (University of Chicago).
CFF/ind and ENZYMIX – The first polarizable force field which has subsequently been used in many applications to biological systems.
COSMOS-NMR (Computer Simulation of Molecular Structure) – developed by Ulrich Sternberg and coworkers. Hybrid QM/MM force field enables explicit quantum-mechanical calculation of electrostatic properties using localized bond orbitals with fast BPT formalism. Atomic charge fluctuation is possible in each molecular dynamics step.
DRF90 – developed by P. Th. van Duijnen and coworkers.
NEMO (Non-Empirical Molecular Orbital) – procedure developed by Gunnar Karlström and coworkers at Lund University (Sweden)
PIPF – The polarizable intermolecular potential for fluids is an induced point-dipole force field for organic liquids and biopolymers. The molecular polarization is based on Thole's interacting dipole (TID) model and was developed by Jiali Gao's research group at the University of Minnesota.
Polarizable Force Field (PFF) – developed by Richard A. Friesner and coworkers.
SP-basis Chemical Potential Equalization (CPE) – approach developed by R. Chelli and P. Procacci.
PHAST – polarizable potential developed by Chris Cioce and coworkers.
ORIENT – procedure developed by Anthony J. Stone (Cambridge University) and coworkers.
Gaussian Electrostatic Model (GEM) – a polarizable force field based on Density Fitting developed by Thomas A. Darden and G. Andrés Cisneros at NIEHS; and Jean-Philip Piquemal at Paris VI University.
Atomistic Polarizable Potential for Liquids, Electrolytes, and Polymers (APPLE&P), developed by Oleg Borogin, Dmitry Bedrov and coworkers, which is distributed by Wasatch Molecular Incorporated.
Polarizable procedure based on the Kim-Gordon approach developed by Jürg Hutter and coworkers (University of Zürich)
GFN-FF (Geometry, Frequency, and Noncovalent Interaction Force-Field) – a completely automated partially polarizable generic force-field for the accurate description of structures and dynamics of large molecules across the periodic table developed by Stefan Grimme and Sebastian Spicher at the University of Bonn.
WASABe v1.0 PFF (for Water, orgAnic Solvents, And Battery electrolytes) – an isotropic atomic dipole polarizable force field for accurate description of battery electrolytes in terms of thermodynamic and dynamic properties at high lithium salt concentrations in sulfonate solvent, developed by Oleg Starovoytov.
XED (eXtended Electron Distribution) - a polarizable force-field created as a modification of an atom-centered charge model, developed by Andy Vinter. Partially charged monopoles are placed surrounding atoms to simulate more geometrically accurate electrostatic potentials at a fraction of the expense of using quantum mechanical methods. Primarily used by software packages supplied by Cresset Biomolecular Discovery.
Reactive
EVB (Empirical valence bond) – reactive force field introduced by Warshel and coworkers for use in modeling chemical reactions in different environments. The EVB facilitates calculating activation free energies in condensed phases and in enzymes.
ReaxFF – reactive force field (interatomic potential) developed by Adri van Duin, William Goddard and coworkers. It is slower than classical MD (by about 50x), needs parameter sets with specific validation, and has no validation for surface and interfacial energies. Parameters are non-interpretable. It can be used for atomistic-scale dynamical simulations of chemical reactions. Parallelized ReaxFF allows reactive simulations on >>1,000,000 atoms on large supercomputers.
Coarse-grained
DPD (Dissipative particle dynamics) – a method commonly applied in chemical engineering. It is typically used for studying the hydrodynamics of various simple and complex fluids which require consideration of time and length scales larger than those accessible to classical molecular dynamics. The potential was originally proposed by Hoogerbrugge and Koelman, with later modifications by Español and Warren. The current state of the art was well documented in a CECAM workshop in 2008. Recently, work has been undertaken to capture some of the chemical subtleties relevant to solutions. This has led to work considering automated parameterisation of the DPD interaction potentials against experimental observables.
MARTINI – a coarse-grained potential developed by Marrink and coworkers at the University of Groningen, initially developed for molecular dynamics simulations of lipids, later extended to various other molecules. The force field applies a mapping of four heavy atoms to one CG interaction site and is parameterized with the aim of reproducing thermodynamic properties.
SAFT – A top-down coarse-grained model developed in the Molecular Systems Engineering group at Imperial College London fitted to liquid phase densities and vapor pressures of pure compounds by using the SAFT equation of state.
SIRAH – a coarse-grained force field developed by Pantano and coworkers of the Biomolecular Simulations Group, Institut Pasteur of Montevideo, Uruguay; developed for molecular dynamics of water, DNA, and proteins. Freely available for the AMBER and GROMACS packages.
VAMM (Virtual atom molecular mechanics) – a coarse-grained force field developed by Korkut and Hendrickson for molecular mechanics calculations such as large scale conformational transitions based on the virtual interactions of C-alpha atoms. It is a knowledge based force field and formulated to capture features dependent on secondary structure and on residue-specific contact information in proteins.
Machine learning
MACE (Multi Atomic Cluster Expansion) is a highly accurate machine learning force field architecture that combines the rigorous many-body expansion of the total potential energy with rotationally equivariant representations of the system.
ANI (Artificial Narrow Intelligence) is a transferable neural network potential, built from atomic environment vectors, and able to provide DFT accuracy in terms of energies.
FFLUX (originally QCTFF) – a set of trained Kriging models which operate together to provide a molecular force field trained on Atoms in molecules or Quantum chemical topology energy terms including electrostatic, exchange and electron correlation.
TensorMol – a mixed model in which a neural network provides a short-range potential, whilst more traditional potentials add screened long-range terms.
Δ-ML – not a force field method but a model that adds learnt correctional energy terms to approximate and relatively computationally cheap quantum chemical methods in order to provide the accuracy of a higher-order, more computationally expensive quantum chemical model.
SchNet – a neural network utilising continuous-filter convolutional layers to predict chemical properties and potential energy surfaces.
PhysNet is a Neural Network-based energy function to predict energies, forces and (fluctuating) partial charges.
Water
The set of parameters used to model water or aqueous solutions (basically a force field for water) is called a water model. Many water models have been proposed; some examples are TIP3P, TIP4P, SPC, flexible simple point charge water model (flexible SPC), ST2, and mW. Other solvents and methods of solvent representation are also applied within computational chemistry and physics; these are termed solvent models.
Modified amino acids
Forcefield_PTM – An AMBER-based forcefield and webtool for modeling common post-translational modifications of amino acids in proteins developed by Chris Floudas and coworkers. It uses the ff03 charge model and has several side-chain torsion corrections parameterized to match the quantum chemical rotational surface.
Forcefield_NCAA - An AMBER-based forcefield and webtool for modeling common non-natural amino acids in proteins in condensed-phase simulations using the ff03 charge model. The charges have been reported to be correlated with hydration free energies of corresponding side-chain analogs.
Other
LFMM (Ligand Field Molecular Mechanics) - functions for the coordination sphere around transition metals based on the angular overlap model (AOM). Implemented in the Molecular Operating Environment (MOE) as DommiMOE and in Tinker
VALBOND - a function for angle bending that is based on valence bond theory and works for large angular distortions, hypervalent molecules, and transition metal complexes. It can be incorporated into other force fields such as CHARMM and UFF.
See also
References
Further reading
Intermolecular forces
Molecular physics
Molecular modelling | Force field (chemistry) | Physics,Chemistry,Materials_science,Engineering | 6,397 |
21,965,993 | https://en.wikipedia.org/wiki/Stable%20cell | In cellular biology, stable cells are cells that multiply only when needed. They spend most of the time in the quiescent G0 phase of the cell cycle but can be stimulated to enter the cell cycle when needed. Examples include the liver, the proximal tubules of the kidney and endocrine glands.
See also
Labile cells, which multiply constantly throughout life
Permanent cells, which don't have the ability to divide
Cell biology | Stable cell | Biology | 93 |
75,192,485 | https://en.wikipedia.org/wiki/Amylin%20receptor | The amylin receptors (AMYRs) are heterodimers of the calcitonin receptor that are bound to by amylin with high affinity and consist of AMY1, AMY2, and AMY3. Amylin mimetics that are agonists at the amylin receptors are being developed as therapies for diabetes and obesity, and one, pramlintide, has been FDA approved. The AMY1 receptor may be activated by both amylin and the calcitonin gene-related peptide (CGRP) and could play a role in the effects of CGRP receptor antagonists developed for migraine. Dual agonists of the amylin and calcitonin receptors (DACRAs) are under development for obesity. Amylin and its receptors are believed to play a role in Alzheimer's disease.
References
Receptor heteromers | Amylin receptor | Chemistry | 179 |
64,777,898 | https://en.wikipedia.org/wiki/Kersti%20Hermansson | Kersti Hermansson (born in 1951) is a Professor for Inorganic Chemistry at Uppsala University.
Education and professional career
She did her PhD on "The Electron Distribution in the Bound Water Molecule" in 1984. From 1984 to 1986, she held a postdoctoral fellowship from the Swedish Research Council with Dr. E. Clementi at IBM-Kingston, USA. From 1986 to 1988, she was a Högskolelektor (senior lecturer) in Inorganic Chemistry at Uppsala University. In 1988, she became a docent of Inorganic Chemistry at Uppsala University. In 1996, she became a Biträdande professor (associate professor). Since 2000, she has been a professor of Inorganic Chemistry at Uppsala University. During this time (2008-2013), she was also a part-time guest professor at KTH Stockholm.
Research
Her research focuses on condensed-matter chemistry including the investigation of chemical bonding and development of quantum chemical methods.
Awards
She has received several prizes and honours for her research:
"Letterstedska priset" from the Swedish Royal Academy of Sciences (KVA) (1987)
"Oskarspriset" from Uppsala University (1988)
"Norblad-Ekstrand" medal in gold from the Swedish Chemical Society (2003)
Member of Kungl. Vetenskapssamhället (Academia regia scientiarum Upsaliensis, KVSU), Uppsala (since 1988)
Member of Royal Society of Science (since 2002)
Member of Royal Swedish Academy of Sciences (since 2007)
Adjunct professor at the Kasetsart University, Bangkok (2005)
Honorary guest professor at the Department of Ion Physics and Applied Physics, Innsbruck University (since June 2009)
References
Academic staff of Uppsala University
Quantum chemistry
Living people
Swedish Royal Academies
Kersti Hermansson
IBM Fellows
1951 births | Kersti Hermansson | Physics,Chemistry | 350 |
2,940,678 | https://en.wikipedia.org/wiki/Backhousia%20citriodora | Backhousia citriodora, commonly known as lemon myrtle, lemon scented myrtle or lemon scented ironwood, is a flowering plant in the family Myrtaceae. It is native to the subtropical rainforests of central and south-eastern Queensland, Australia, with a natural distribution from Mackay to Brisbane.
Description and ecology
The species can reach in height, but is often smaller. The leaves are evergreen, opposite, lanceolate, long and broad, glossy green, with an entire margin. The flowers are creamy-white, in diameter, produced in clusters at the ends of the branches from summer through to autumn. After petal fall, the calyx is persistent.
A significant fungal pathogen, myrtle rust (Uredo rangelii) was detected in lemon myrtle plantations in January 2011. Myrtle rust severely damages new growth and threatens lemon myrtle production.
Etymology
Lemon myrtle was given the botanical name Backhousia citriodora by Ferdinand von Mueller in 1853 after his friend, the English botanist, James Backhouse.
The common name reflects the strong lemon smell of the crushed leaves. 'Lemon scented myrtle' was the primary common name until the shortened trade name, 'lemon myrtle', was created by the native foods industry to market the leaf for culinary use. Lemon myrtle is now the more common name for the plant and its products.
Lemon myrtle is sometimes confused with 'lemon ironbark', which is Eucalyptus staigeriana. Other common names are sweet verbena tree, lemon scented verbena (not to be confused with lemon verbena), and sweet verbena myrtle.
Uses
History
Aboriginal Australians have long used lemon myrtle, both in cuisine and as a healing plant. The oil has the highest citral purity; typically higher than lemongrass. It is also considered to have a "cleaner and sweeter" aroma than comparable sources of citral–lemongrass and Litsea cubeba. In 1888, Bertram first isolated the essential oil from B. citriodora. In 1925, it was found to be significantly germicidal, and it was later shown to be antimicrobial.
In the 1940s, Tarax was the first company to use B. citriodora oil as a lemon flavouring, during World War II. In 1989, B. citriodora was investigated as a potential leaf spice and commercial crop by Peter Hardwick, who commissioned the Wollongbar Agricultural Institute to analyse B. citriodora selections using gas chromatography. In 2001, a standard for Oil of B. citriodora was established by the Essential Oils Unit, Wollongbar, and Standards Australia.
Culinary
Lemon myrtle is one of the well known bushfood flavours and is sometimes referred to as the "Queen of the lemon herbs". The leaf is often used as dried flakes, or in the form of an encapsulated flavour essence for enhanced shelf-life. It has a range of uses, such as lemon myrtle flakes in shortbread; flavouring in pasta; whole leaf with baked fish; infused in macadamia or vegetable oils; and made into tea, including tea blends. It can also be used as a lemon flavour replacement in milk-based foods, such as cheesecake, lemon flavoured ice-cream and sorbet without the curdling problem associated with lemon fruit acidity.
Backhousia citriodora has two essential oil chemotypes. The citral chemotype is more prevalent and is cultivated in Australia for flavouring and essential oil. Citral as an isolate in steam distilled lemon myrtle oil is typically 90–98%, and oil yield 1–3% from fresh leaf. The citronellal chemotype is uncommon, and can be used as an insect repellent. The dried leaf has free radical scavenging ability.
Antimicrobial
Lemon myrtle essential oil possesses antimicrobial properties; however, the undiluted essential oil is toxic to human cells in vitro. When diluted to approximately 1%, absorption through the skin and subsequent damage is thought to be minimal. Lemon myrtle oil has a high Rideal–Walker coefficient, a measure of antimicrobial potency. Use of lemon myrtle oil as a treatment for skin lesions caused by molluscum contagiosum virus (MCV), a disease typically affecting children and immuno-compromised patients, has been investigated. Nine of sixteen patients who were treated with 10% strength lemon myrtle oil showed a significant improvement, compared to none in the control group. A study in 2003 which investigated the effectiveness of different preparations of lemon myrtle against bacteria and fungi concluded that the plant had potential as an antiseptic or as a surface disinfectant, or as an anti-microbial food additive. The oil is a popular ingredient in health care and cleaning products, especially soaps, lotions, skin-whitening preparations and shampoos.
Cultivation
Lemon myrtle is a cultivated ornamental plant. It can be grown from tropical to warm temperate climates, and may handle cooler districts provided it can be protected from frost when young. In cultivation it rarely exceeds about and usually has a dense canopy. The principal attraction to gardeners is the lemon smell, which perfumes both the leaves and flowers of the tree. Lemon myrtle is a hardy plant, which tolerates all but the poorest drained soils. It can be slow growing but responds well to slow-release fertilisers.
Seedling lemon myrtles go through a shrubby, slow juvenile growth stage before developing a dominant trunk. Lemon myrtle can also be propagated from cuttings, but is slow to strike. A study into the plant growing adventitious roots found that "actively growing axillary buds, wide stems and mature leaves" are good indicators that a cutting will take root successfully and survive. A further study on temperature recommended glasshouses for growing cuttings throughout the year. Growing cuttings from mature trees bypasses the shrubby juvenile stage. Cutting propagation is also used to provide a consistent product in commercial production.
In plantation cultivation the tree is typically maintained as a shrub by regular harvesting from the top and sides. Mechanical harvesting is used in commercial plantations. It is important to retain some lower branches when pruning for plant health. The harvested leaves are dried for leaf spice, or distilled for the essential oil.
The majority of commercial lemon myrtle is grown in Queensland and the north coast of New South Wales, Australia.
A 2009 study has suggested that drying lemon myrtle leaves at higher temperatures improves the citral content of the dried leaves, but discolours the leaves more.
See also
Citral
Lemon verbena
References
Further reading
APNI Australian Plant Name Index
External links
Australian Bushfood and Native Medicine Forum
Broad range of lemon myrtle products and recipes
Lemon Myrtle from Vic Cherikoff
citriodora
Flora of Queensland
Myrtales of Australia
Trees of Australia
Bushfood
Crops originating from Australia
Medicinal plants of Australia
Essential oils
Taxa named by Ferdinand von Mueller | Backhousia citriodora | Chemistry | 1,423 |
31,453,721 | https://en.wikipedia.org/wiki/MagicWB | MagicWB is a third-party Workbench enhancer for AmigaOS. It was developed in 1992-1997 by Martin Huttenloher.
History
The idea to enhance Workbench arose when the author got bored with the gray and abstract icons provided by Commodore. The original Amiga icons could use only four colours, and even those were scarcely used. The background patterns supplied with the operating system for Workbench were also minimal. This gave rise to the desire to develop a complete package enhancing the Workbench look with a full set of icons, background patterns, and new fonts.
Features
The original Amiga icon sets supported only four colours. MagicWB extended the icon palette to 8 colours, allowing more colourful icons. MagicWB grew so popular that it became a de facto standard for many major third-party packages developed for the Amiga. One of those is MUI, which used the MagicWB palette extensively in its GUI widget library.
The design style in MagicWB was XEN. The package includes 9 replacement fonts (Topaz, XEN and Courier) and new background patterns for the Workbench desktop.
Other Workbench enhancers
The MagicWB package grew popular, but many users found the restriction to 8 fixed colours too limiting. A competing Workbench enhancer, NewIcons, was developed to allow more colourful icons. This eventually led to GlowIcons and finally to true colour PNG icons.
References
See also
AmigaOS
Computer icons | MagicWB | Technology | 295 |
1,226,666 | https://en.wikipedia.org/wiki/Adams%20operation | In mathematics, an Adams operation, denoted ψk for natural numbers k, is a cohomology operation in topological K-theory, or any allied operation in algebraic K-theory or other types of algebraic construction, defined on a pattern introduced by Frank Adams. The basic idea is to implement some fundamental identities in symmetric function theory, at the level of vector bundles or other representing object in more abstract theories.
Adams operations can be defined more generally in any λ-ring.
Adams operations in K-theory
Adams operations ψk on K theory (algebraic or topological) are characterized by the following properties.
ψk are ring homomorphisms.
ψk(l) = l^k if l is the class of a line bundle.
ψk are functorial.
The fundamental idea is that for a vector bundle V on a topological space X, there is an analogy between Adams operators and exterior powers, in which
ψk(V) is to Λk(V)
as
the power sum Σ αk is to the k-th elementary symmetric function σk
of the roots α of a polynomial P(t). (Cf. Newton's identities.) Here Λk denotes the k-th exterior power. From classical algebra it is known that the power sums are certain integral polynomials Qk in the σk. The idea is to apply the same polynomials to the Λk(V), taking the place of σk. This calculation can be defined in a K-group, in which vector bundles may be formally combined by addition, subtraction and multiplication (tensor product). The polynomials here are called Newton polynomials (not, however, the Newton polynomials of interpolation theory).
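The construction can be made concrete with a short symbolic computation: Newton's identities give the polynomials Qk expressing the power sums in the elementary symmetric functions (the σk above), and substituting the exterior power Λk(V) for σk yields ψk(V). The sketch below uses SymPy to generate the first few Qk; the variable names are illustrative.

```python
import sympy as sp

# Newton's identities: the power sum p_k is a polynomial Q_k in the elementary
# symmetric functions e_1, ..., e_k (the sigma_k of the text), via the recursion
#     p_k = e_1*p_{k-1} - e_2*p_{k-2} + ... + (-1)**(k-1) * k * e_k.
# Replacing e_i by the exterior power Lambda^i(V) then yields psi^k(V) in K-theory.
K = 4
e = sp.symbols(f"e1:{K + 1}")   # e[0] = e_1, ..., e[K-1] = e_K
p = {}
for k in range(1, K + 1):
    p[k] = sp.expand(
        sum((-1) ** (i - 1) * e[i - 1] * p[k - i] for i in range(1, k))
        + (-1) ** (k - 1) * k * e[k - 1]
    )
    print(f"Q_{k} =", p[k])
# Expected output (up to term ordering):
#   Q_1 = e1
#   Q_2 = e1**2 - 2*e2
#   Q_3 = e1**3 - 3*e1*e2 + 3*e3
#   Q_4 = e1**4 - 4*e1**2*e2 + 2*e2**2 + 4*e1*e3 - 4*e4
```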
Justification of the expected properties comes from the line bundle case, where V is a Whitney sum of line bundles. In this special case the result of any Adams operation is naturally a vector bundle, not a linear combination of ones in K-theory. Treating the line bundle direct factors formally as roots is something rather standard in algebraic topology (cf. the Leray–Hirsch theorem). In general a mechanism for reducing to that case comes from the splitting principle for vector bundles.
Adams operations in group representation theory
The Adams operation has a simple expression in group representation theory. Let G be a group and ρ a representation of G with character χ. The representation ψk(ρ) has character

$\chi_{\psi^k(\rho)}(g) = \chi(g^k).$
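For a representation given by explicit matrices this formula can be checked numerically: the character of Λk(ρ) is the k-th elementary symmetric function of the eigenvalues of ρ(g), and applying the Newton polynomial Qk recovers the trace of ρ(g^k). The sketch below uses an arbitrary permutation matrix as an illustrative choice.

```python
import numpy as np

# Check chi_{psi^2(rho)}(g) = chi(g^2) for an explicit matrix representation:
# rho(g) is taken to be a permutation matrix (an arbitrary illustrative choice).
P = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 1.]])            # g swaps the first two basis vectors

eigs = np.linalg.eigvals(P)
e1 = eigs.sum()                          # character of rho        = Lambda^1
e2 = sum(eigs[i] * eigs[j]               # character of Lambda^2(rho)
         for i in range(3) for j in range(i + 1, 3))
psi2 = e1 ** 2 - 2 * e2                  # Newton polynomial Q_2(e1, e2)
print(round(psi2.real, 10), np.trace(P @ P))   # both equal chi(g^2) = 3
```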
References
Algebraic topology
Symmetric functions | Adams operation | Physics,Mathematics | 484 |
14,557,753 | https://en.wikipedia.org/wiki/Delaware%20Biotechnology%20Institute | The Delaware Biotechnology Institute (DBI) at the University of Delaware is a partnership among government, academia and industry with an aim to establish Delaware as a notable hub for biotechnology and life sciences.
Adjacent to the University of Delaware main campus, DBI's research facility is located in the Delaware Technology Park. The DBI laboratory houses more than 180 faculty and students.
Research at the Delaware Biotechnology Institute has application in agriculture, environmental science, and human health, featuring leading-edge work in bioinformatics, genomics and small RNA biology, materials science, molecular medicine and proteomics.
Partner Institutions
University of Delaware
Delaware State University
Delaware Technical & Community College
Wesley College
References
External links
Delaware Biotechnology Institute official site
University of Delaware
Biotechnology organizations
2001 establishments in Delaware
Organizations established in 2001 | Delaware Biotechnology Institute | Engineering,Biology | 157 |
422,887 | https://en.wikipedia.org/wiki/Labor%20camp | A labor camp (or labour camp, see spelling differences) or work camp is a detention facility where inmates are forced to engage in penal labor as a form of punishment. Labor camps have many common aspects with slavery and with prisons (especially prison farms). Conditions at labor camps vary widely depending on the operators. Convention no. 105 of the United Nations International Labour Organization (ILO), adopted internationally on 27 June 1957, intended to abolish camps of forced labor.
In the 20th century, a new category of labor camps developed for the imprisonment of millions of people who were not criminals per se, but political opponents (real or imagined) and various so-called undesirables under communist and fascist regimes.
Precursors
Early-modern states could exploit convicts by combining prison and useful work in manning their galleys.
This became the sentence of many Christian captives in the Ottoman Empire and of Calvinists (Huguenots) in pre-Revolutionary France.
20th century
Albania
Allies of World War II
The Allies of World War II operated a number of work camps after the war. At the Yalta Conference in 1945, it was agreed that German forced labor was to be utilized as reparations. The majority of the camps were in the Soviet Union, but more than one million Germans were forced to work in French coal-mines and British agriculture, as well as 500,000 in US-run Military Labor Service Units in occupied Germany itself. See Forced labor of Germans after World War II.
Bulgaria
Burma
According to the New Statesman, the Burmese military government operated, from 1962 to 2011, about 91 labour camps for political prisoners.
China
The anti-communist Kuomintang operated various camps between 1938 and 1949, including the Northwestern Youth Labor Camp for young activists and students.
The Chinese Communist Party has operated many labor camps for some crimes at least since taking power in 1949. Many leaders of China were put into labor camps after purges, including Deng Xiaoping and Liu Shaoqi. May Seventh Cadre Schools are an example of Cultural Revolution-era labor camps.
Cuba
Beginning in November 1965, people classified as "against the government" were summoned to work camps referred to as "Military Units to Aid Production" (UMAP).
Czechoslovakia
After the communists took over Czechoslovakia in 1948, many forced labor camps were created. The inmates included political prisoners, clergy, kulaks, Boy Scout leaders and many other groups of people that were considered enemies of the state. About half of the prisoners worked in the uranium mines. These camps lasted until 1961.
Also between 1950 and 1954 many men were considered "politically unreliable" for compulsory military service, and were conscripted to labour battalions (Czech: Pomocné technické prapory (PTP)) instead.
Communist Hungary
Following sentencing, political prisoners were interned. To serve this purpose, a large number of internment camps (e.g., in Kistarcsa, Recsk (Recsk forced labor camp), Tiszalök, Kazincbarcika and, according to the latest research, in Bernátkút and Sajóbábony) were placed under the supervision of the State Protection Authority. The most notorious of these camps were in Recsk, Kistarcsa, Tiszalök and Kazincbarcika.
Italian Libya
During the colonisation of Libya, the Italians deported most of the Libyan population of Cyrenaica to concentration camps and used the survivors, in semi-slave conditions, to build the coastal road and new agricultural projects.
Germany
During World War II the Nazis operated several categories of Arbeitslager (Labor Camps) for different categories of inmates. The largest number of them held Jewish civilians forcibly abducted in the occupied countries (see Łapanka) to provide labor in the German war industry, repair bombed railroads and bridges or work on farms. By 1944, 19.9% of all workers were foreigners, either civilians or prisoners of war.
The Nazis employed many slave laborers. They also operated concentration camps, some of which provided free forced labor for industrial and other jobs while others existed purely for the extermination of their inmates. A notable example is the Mittelbau-Dora labor camp complex that serviced the production of the V-2 rocket. See List of German concentration camps for more.
The Nazi camps played a key role in the extermination of millions. The phrase Arbeit macht frei ("Work makes one free") has become a symbol of the Holocaust.
Imperial Japan
During the early 20th century, the Empire of Japan used the forced labor of millions of civilians from conquered countries and prisoners of war, especially during the Second Sino-Japanese War and the Pacific War, on projects such as the Death Railway. Hundreds of thousands of people died as a direct result of the overwork, malnutrition, preventable disease and violence which were commonplace on these projects.
North Korea
North Korea is known to operate six camps with prison-labor colonies for political criminals (Kwan-li-so). The total number of prisoners in these colonies is 150,000 to 200,000. Once condemned as a political criminal in North Korea, the defendant and his/or her family are incarcerated for life in one of the camps without trial and cut off from all outside contact.
See also: North Korean prison system
Romania
Russia and the Soviet Union
Imperial Russia operated a system of remote Siberian forced labor camps as part of its regular judicial system, called katorga.
The Soviet Union took over the already extensive katorga system and expanded it immensely, eventually organizing the Gulag to run the camps. In 1954, a year after Stalin's death, the new Soviet government of Nikita Khrushchev began to release political prisoners and close down the camps. By the end of the 1950s, virtually all "corrective labor camps" were reorganized, mostly into the system of corrective labor colonies. Officially, the Gulag was terminated by the MVD order 20 of January 25, 1960.
During the period of Stalinism, the Gulag labor camps in the Soviet Union were officially called "corrective labor camps". The term "labor colony", or more exactly "corrective labor colony" (abbr. ИТК), was also in use, most notably for the camps holding underage (16 years or younger) convicts and captured besprizorniki (street children, literally "children without family care"). After the reformation of the camps into the Gulag, the term "corrective labor colony" essentially encompassed labor camps.
Russian Federation
Sweden
14 labor camps were operated by the Swedish state during World War II. The majority of internees were communists, but radical social democrats, syndicalists, anarchists, trade unionists, anti-fascists and other "unreliable elements" of Swedish society, as well as German dissidents and deserters from the Wehrmacht, were also interned. The internees were placed in the labor camps indefinitely, without trial, and without being informed of the accusations made against them. Officially, the camps were called "labor companies" (Swedish: arbetskompanier). The system was established by the Royal Board of Social Affairs and sanctioned by the third cabinet of Per Albin Hansson, a grand coalition which included all parties represented in the Swedish Riksdag, with the notable exception of the Communist Party of Sweden.
After the war, many former camp inmates had difficulty finding a job, since they had been branded as "subversive elements".
Turkey
United States
During the United States occupation of Haiti, the United States Marine Corps and their Gendarmerie of Haiti subordinates enforced a corvée system upon Haitians. The corvée resulted in the deaths of hundreds, and possibly thousands, of Haitians, with Haitian American academic Michel-Rolph Trouillot estimating that about 5,500 Haitians died in labor camps. In addition, Roger Gaillard writes that some Haitians were killed while fleeing the camps or for failing to work satisfactorily.
Vietnam
Yugoslavia
The Goli Otok prison camp for political opponents ran from 1946 to 1956.
21st century
China
The Standing Committee of the National People's Congress of the People's Republic of China, which closed on December 28, 2013, passed a decision on abolishing the legal provisions on reeducation through labor. However, penal labor allegedly continues to exist in Xinjiang internment camps.
North Korea
North Korea is known to operate six camps with prison-labor colonies for political criminals (Kwan-li-so). The total number of prisoners in these colonies is 150,000 to 200,000. Once condemned as a political criminal in North Korea, the defendant and their family are incarcerated for life in one of the camps without trial, and are cut off from all outside contact.
United States
In 1997, a United States Army document was developed that "provides guidance on establishing prison camps on [US] Army installations."
See also
Chain gang
Civilian Inmate Labor Program
Extermination through labor
Penal colony
References
External links
Labor camps
Prison camps
Total institutions | Labor camp | Biology | 1,839 |
38,584,702 | https://en.wikipedia.org/wiki/Flights%20%28rotary%20dryer%29 | Flights, also commonly referred to as "material lifters" or "shovelling plates" are used in rotary dryers and rotary coolers to shower material through the process gas stream. Fixed to the interior of the rotary drum, these fin-like structures scoop material up from the material bed at the bottom of the drum and shower it through the gas stream as the drum rotates. This showering creates a curtain of material spanning the width of the drum, helping to maximize the efficiency of heat transfer.
Depending on the needs of the material and the process, a variety of flight designs and placement patterns are used in order to create a maximum efficiency curtain while still retaining the integrity of the product.
References
Liquid-solid separation | Flights (rotary dryer) | Chemistry | 146 |
859,878 | https://en.wikipedia.org/wiki/Panharmonicon | The Panharmonicon was a musical instrument invented in 1805 by Johann Nepomuk Mälzel, a contemporary and friend of Beethoven. Beethoven composed his piece "Wellington's Victory" (Op. 91) to be played on Mälzel's mechanical orchestral organ and also to commemorate Arthur Wellesley's victory over the French at the Battle of Vitoria in 1813. It was one of the first automatic playing machines, similar to the later Orchestrion.
The Panharmonicon could imitate many orchestral instruments as well as sounds like gunfire and cannon shots. One instrument was destroyed in the Landesgewerbemuseum in Stuttgart during an air raid in World War II. Friedrich Kaufmann copied this automatic playing machine in 1808, and his family produced Orchestrions from that time on.
One of Mälzel's Panharmonicons was sent to Boston in 1811 and was exhibited there and then in New York City and other cities.
Mälzel toured with this instrument in the United States from February 7, 1826, until his death in 1838.
In 1817 Flight & Robson in London built a similar automatic instrument called Apollonicon, advised by the blind organist John Purkis, who had previously written and arranged music for the Panharmonicon.
In 1821 Dietrich Nikolaus Winkel copied some features of the Panharmonicon in Amsterdam for his instrument, the Componium, which was also capable of aleatoric composition.
In 1823, William M. Goodrich copied Mälzel's Panharmonicon in Boston, MA.
References
Hans-W. Schmitz: Johann Nepomuk Mälzel und das Panharmonicon. Von den Anfängen der Orchestermaschinen. In: Das Mechanische Musikinstrument, 7. Jahrgang, No. 19, März 1981
External links
Mechanical Music Digest Archives
Ludwig Van Beethoven Tripod Website
Mad About Beethoven
Synthmuseum.com
Mechanical musical instruments
Keyboard instruments | Panharmonicon | Physics,Technology | 401 |
391,832 | https://en.wikipedia.org/wiki/Cobordism | In mathematics, cobordism is a fundamental equivalence relation on the class of compact manifolds of the same dimension, set up using the concept of the boundary (French bord, giving cobordism) of a manifold. Two manifolds of the same dimension are cobordant if their disjoint union is the boundary of a compact manifold one dimension higher.
The boundary of an (n + 1)-dimensional manifold W is an n-dimensional manifold ∂W that is closed, i.e., with empty boundary. In general, a closed manifold need not be a boundary: cobordism theory is the study of the difference between all closed manifolds and those that are boundaries. The theory was originally developed by René Thom for smooth manifolds (i.e., differentiable), but there are now also versions for piecewise linear and topological manifolds.
A cobordism between manifolds M and N is a compact manifold W whose boundary is the disjoint union of M and N: ∂W = M ⊔ N.
Cobordisms are studied both for the equivalence relation that they generate, and as objects in their own right. Cobordism is a much coarser equivalence relation than diffeomorphism or homeomorphism of manifolds, and is significantly easier to study and compute. It is not possible to classify manifolds up to diffeomorphism or homeomorphism in dimensions ≥ 4 – because the word problem for groups cannot be solved – but it is possible to classify manifolds up to cobordism. Cobordisms are central objects of study in geometric topology and algebraic topology. In geometric topology, cobordisms are intimately connected with Morse theory, and -cobordisms are fundamental in the study of high-dimensional manifolds, namely surgery theory. In algebraic topology, cobordism theories are fundamental extraordinary cohomology theories, and categories of cobordisms are the domains of topological quantum field theories.
Definition
Manifolds
Roughly speaking, an n-dimensional manifold M is a topological space locally (i.e., near each point) homeomorphic to an open subset of Euclidean space R^n. A manifold with boundary is similar, except that a point of M is allowed to have a neighborhood that is homeomorphic to an open subset of the half-space {(x1, ..., xn) ∈ R^n : xn ≥ 0}.
Those points without a neighborhood homeomorphic to an open subset of Euclidean space are the boundary points of M; the boundary of M is denoted by ∂M. Finally, a closed manifold is, by definition, a compact manifold without boundary (∂M = ∅).
Cobordisms
An (n + 1)-dimensional cobordism is a quintuple (W; M, N, i, j) consisting of an (n + 1)-dimensional compact differentiable manifold with boundary, W; closed n-manifolds M, N; and embeddings i: M → ∂W, j: N → ∂W with disjoint images such that ∂W = i(M) ⊔ j(N).
The terminology is usually abbreviated to (W; M, N). M and N are called cobordant if such a cobordism exists. All manifolds cobordant to a fixed given manifold M form the cobordism class of M.
Every closed manifold M is the boundary of the non-compact manifold M × [0, 1); for this reason we require W to be compact in the definition of cobordism. Note however that W is not required to be connected; as a consequence, if M = ∂W1 and N = ∂W2, then M and N are cobordant.
Examples
The simplest example of a cobordism is the unit interval I = [0, 1]. It is a 1-dimensional cobordism between the 0-dimensional manifolds {0}, {1}. More generally, for any closed manifold M, (M × I; M × {0}, M × {1}) is a cobordism from M × {0} to M × {1}.
If M consists of a circle, and N of two circles, M and N together make up the boundary of a pair of pants W (see the figure at right). Thus the pair of pants is a cobordism between M and N. A simpler cobordism between M and N is given by the disjoint union of three disks.
The pair of pants is an example of a more general cobordism: for any two -dimensional manifolds , , the disjoint union is cobordant to the connected sum The previous example is a particular case, since the connected sum is isomorphic to The connected sum is obtained from the disjoint union by surgery on an embedding of in , and the cobordism is the trace of the surgery.
Terminology
An n-manifold M is called null-cobordant if there is a cobordism between M and the empty manifold; in other words, if M is the entire boundary of some (n + 1)-manifold. For example, the circle is null-cobordant since it bounds a disk. More generally, an n-sphere is null-cobordant since it bounds an (n + 1)-disk. Also, every orientable surface is null-cobordant, because it is the boundary of a handlebody. On the other hand, the 2n-dimensional real projective space is a (compact) closed manifold that is not the boundary of a manifold, as is explained below.
The general bordism problem is to calculate the cobordism classes of manifolds subject to various conditions.
Null-cobordisms with additional structure are called fillings. Bordism and cobordism are used by some authors interchangeably; others distinguish them. When one wishes to distinguish the study of cobordism classes from the study of cobordisms as objects in their own right, one calls the equivalence question bordism of manifolds, and the study of cobordisms as objects cobordisms of manifolds.
The term bordism comes from French , meaning boundary. Hence bordism is the study of boundaries. Cobordism means "jointly bound", so M and N are cobordant if they jointly bound a manifold; i.e., if their disjoint union is a boundary. Further, cobordism groups form an extraordinary cohomology theory, hence the co-.
Variants
The above is the most basic form of the definition. It is also referred to as unoriented bordism. In many situations, the manifolds in question are oriented, or carry some other additional structure referred to as G-structure. This gives rise to "oriented cobordism" and "cobordism with G-structure", respectively. Under favourable technical conditions these form a graded ring called the cobordism ring , with grading by dimension, addition by disjoint union and multiplication by cartesian product. The cobordism groups are the coefficient groups of a generalised homology theory.
When there is additional structure, the notion of cobordism must be formulated more precisely: a G-structure on W restricts to a G-structure on M and N. The basic examples are G = O for unoriented cobordism, G = SO for oriented cobordism, and G = U for complex cobordism using stably complex manifolds. Many more are detailed by Robert E. Stong.
In a similar vein, a standard tool in surgery theory is surgery on normal maps: such a process changes a normal map to another normal map within the same bordism class.
Instead of considering additional structure, it is also possible to take into account various notions of manifold, especially piecewise linear (PL) and topological manifolds. This gives rise to bordism groups , which are harder to compute than the differentiable variants.
Surgery construction
Recall that in general, if X, Y are manifolds with boundary, then the boundary of the product manifold is .
Now, given a manifold M of dimension n = p + q and an embedding define the n-manifold
obtained by surgery, via cutting out the interior of and gluing in along their boundary
The trace of the surgery
defines an elementary cobordism (W; M, N). Note that M is obtained from N by surgery on This is called reversing the surgery.
Every cobordism is a union of elementary cobordisms, by the work of Marston Morse, René Thom and John Milnor.
Examples
As per the above definition, a surgery on the circle consists of cutting out a copy of S^0 × D^1 and gluing in D^1 × S^0. The pictures in Fig. 1 show that the result of doing this is either (i) S^1 again, or (ii) two copies of S^1.
For surgery on the 2-sphere, there are more possibilities, since we can start by cutting out either S^0 × D^2 or S^1 × D^1.
Morse functions
Suppose that f is a Morse function on an (n + 1)-dimensional manifold, and suppose that c is a critical value with exactly one critical point in its preimage. If the index of this critical point is p + 1, then the level-set N := f−1(c + ε) is obtained from M := f−1(c − ε) by a p-surgery. The inverse image W := f−1([c − ε, c + ε]) defines a cobordism (W; M, N) that can be identified with the trace of this surgery.
Geometry, and the connection with Morse theory and handlebodies
Given a cobordism (W; M, N) there exists a smooth function f : W → [0, 1] such that f−1(0) = M, f−1(1) = N. By general position, one can assume f is Morse and such that all critical points occur in the interior of W. In this setting f is called a Morse function on a cobordism. The cobordism (W; M, N) is a union of the traces of a sequence of surgeries on M, one for each critical point of f. The manifold W is obtained from M × [0, 1] by attaching one handle for each critical point of f.
The Morse/Smale theorem states that for a Morse function on a cobordism, the flowlines of f′ give rise to a handle presentation of the triple (W; M, N). Conversely, given a handle decomposition of a cobordism, it comes from a suitable Morse function. In a suitably normalized setting this process gives a correspondence between handle decompositions and Morse functions on a cobordism.
History
Cobordism had its roots in the (failed) attempt by Henri Poincaré in 1895 to define homology purely in terms of manifolds. Poincaré simultaneously defined both homology and cobordism, which are not the same, in general. See Cobordism as an extraordinary cohomology theory for the relationship between bordism and homology.
Bordism was explicitly introduced by Lev Pontryagin in geometric work on manifolds. It came to prominence when René Thom showed that cobordism groups could be computed by means of homotopy theory, via the Thom complex construction. Cobordism theory became part of the apparatus of extraordinary cohomology theory, alongside K-theory. It performed an important role, historically speaking, in developments in topology in the 1950s and early 1960s, in particular in the Hirzebruch–Riemann–Roch theorem, and in the first proofs of the Atiyah–Singer index theorem.
In the 1980s the category with compact manifolds as objects and cobordisms between these as morphisms played a basic role in the Atiyah–Segal axioms for topological quantum field theory, which is an important part of quantum topology.
Categorical aspects
Cobordisms are objects of study in their own right, apart from cobordism classes. Cobordisms form a category whose objects are closed manifolds and whose morphisms are cobordisms. Roughly speaking, composition is given by gluing together cobordisms end-to-end: the composition of (W; M, N) and (W ′; N, P) is defined by gluing the right end of the first to the left end of the second, yielding (W ′ ∪N W; M, P). A cobordism is a kind of cospan: M → W ← N. The category is a dagger compact category.
A topological quantum field theory is a monoidal functor from a category of cobordisms to a category of vector spaces. That is, it is a functor whose value on a disjoint union of manifolds is equivalent to the tensor product of its values on each of the constituent manifolds.
In low dimensions, the bordism question is relatively trivial, but the category of cobordism is not. For instance, the disk bounding the circle corresponds to a nullary (0-ary) operation, while the cylinder corresponds to a 1-ary operation and the pair of pants to a binary operation.
Unoriented cobordism
The set of cobordism classes of closed unoriented n-dimensional manifolds is usually denoted by 𝔑_n (rather than the more systematic Ω_n^O); it is an abelian group with the disjoint union as operation. More specifically, if [M] and [N] denote the cobordism classes of the manifolds M and N respectively, we define [M] + [N] = [M ⊔ N]; this is a well-defined operation which turns 𝔑_n into an abelian group. The identity element of this group is the class [∅] consisting of all closed n-manifolds which are boundaries. Further we have [M] + [M] = 0 for every M since M ⊔ M = ∂(M × [0, 1]). Therefore, 𝔑_n is a vector space over Z/2, the field with two elements. The cartesian product of manifolds defines a multiplication [M][N] = [M × N], so
𝔑_* = ⊕_{n ≥ 0} 𝔑_n
is a graded algebra, with the grading given by the dimension.
The cobordism class of a closed unoriented n-dimensional manifold M is determined by the Stiefel–Whitney characteristic numbers of M, which depend on the stable isomorphism class of the tangent bundle. Thus if M has a stably trivial tangent bundle then [M] = 0. In 1954 René Thom proved
𝔑_* = Z/2[x_i | i ≥ 2, i ≠ 2^j − 1],
the polynomial algebra with one generator x_i in each dimension i ≠ 2^j − 1. Thus two unoriented closed n-dimensional manifolds M, N are cobordant, if and only if for each collection (i_1, …, i_k) of k-tuples of integers such that i_1 + ⋯ + i_k = n the Stiefel-Whitney numbers are equal
⟨w_{i_1}(M) ⋯ w_{i_k}(M), [M]⟩ = ⟨w_{i_1}(N) ⋯ w_{i_k}(N), [N]⟩,
with w_i the ith Stiefel-Whitney class and [M] ∈ H_n(M; Z/2) the Z/2-coefficient fundamental class.
For even i it is possible to choose x_i = [RP^i], the cobordism class of the i-dimensional real projective space.
The low-dimensional unoriented cobordism groups are
𝔑_0 = Z/2, 𝔑_1 = 0, 𝔑_2 = Z/2, 𝔑_3 = 0, 𝔑_4 = Z/2 ⊕ Z/2, 𝔑_5 = Z/2.
This shows, for example, that every 3-dimensional closed manifold is the boundary of a 4-manifold (with boundary).
The Euler characteristic modulo 2 of an unoriented manifold M is an unoriented cobordism invariant. This is implied by the equation
χ(∂W) = (1 − (−1)^{dim W}) χ(W)
for any compact manifold with boundary W.
Therefore, χ mod 2 : 𝔑_n → Z/2 is a well-defined group homomorphism. For example, for any i_1, ..., i_k ≥ 1,
χ(RP^{2i_1} × ⋯ × RP^{2i_k}) = 1.
In particular such a product of real projective spaces is not null-cobordant. The mod 2 Euler characteristic map χ : 𝔑_{2i} → Z/2 is onto for all i and a group isomorphism for i = 1.
Moreover, because of χ(M × N) = χ(M) χ(N), these group homomorphisms assemble into a homomorphism of graded algebras.
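As a worked illustration (added here; not part of the article's own text), the invariance of the Euler characteristic mod 2 already rules out the simplest even-dimensional projective space from being a boundary:

```latex
% Illustrative example (not from the article's own text): the mod 2 Euler
% characteristic obstructs RP^2 from bounding. If \mathbb{RP}^2 = \partial W
% for some compact 3-manifold W, then
\[
  \chi(\mathbb{RP}^2) \;=\; \chi(\partial W)
  \;=\; \bigl(1 - (-1)^{\dim W}\bigr)\,\chi(W)
  \;=\; 2\,\chi(W) \;\equiv\; 0 \pmod 2 ,
\]
% which contradicts $\chi(\mathbb{RP}^2) = 1$; hence $\mathbb{RP}^2$ is not
% null-cobordant, and neither is any product of even-dimensional real
% projective spaces.
```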
Cobordism of manifolds with additional structure
Cobordism can also be defined for manifolds that have additional structure, notably an orientation. This is made formal in a general way using the notion of X-structure (or G-structure). Very briefly, the normal bundle ν of an immersion of M into a sufficiently high-dimensional Euclidean space gives rise to a map from M to the Grassmannian, which in turn is a subspace of the classifying space of the orthogonal group: ν: M → Gr(n, n + k) → BO(k). Given a collection of spaces and maps Xk → Xk+1 with maps Xk → BO(k) (compatible with the inclusions BO(k) → BO(k+1), an X-structure is a lift of ν to a map . Considering only manifolds and cobordisms with X-structure gives rise to a more general notion of cobordism. In particular, Xk may be given by BG(k), where G(k) → O(k) is some group homomorphism. This is referred to as a G-structure. Examples include G = O, the orthogonal group, giving back the unoriented cobordism, but also the subgroup SO(k), giving rise to oriented cobordism, the spin group, the unitary group U(k), and the trivial group, giving rise to framed cobordism.
The resulting cobordism groups are then defined analogously to the unoriented case. They are denoted by Ω_*^G.
Oriented cobordism
Oriented cobordism is the one of manifolds with an SO-structure. Equivalently, all manifolds need to be oriented and cobordisms (W, M, N) (also referred to as oriented cobordisms for clarity) are such that the boundary (with the induced orientations) is ∂W = M ⊔ (−N), where −N denotes N with the reversed orientation. For example, the boundary of the cylinder M × I is ∂(M × I) = M ⊔ (−M): both ends have opposite orientations. It is also the correct definition in the sense of extraordinary cohomology theory.
Unlike in the unoriented cobordism group, where every element is two-torsion, 2M is not in general an oriented boundary, that is, 2[M] ≠ 0 when considered in Ω_n^{SO}.
The oriented cobordism groups are given modulo torsion by
Ω_*^{SO} ⊗ Q = Q[y_4, y_8, y_12, …],
the polynomial algebra generated by the oriented cobordism classes
y_{4i} = [CP^{2i}]
of the complex projective spaces (Thom, 1952). The oriented cobordism group Ω_*^{SO} is determined by the Stiefel–Whitney and Pontrjagin characteristic numbers (Wall, 1960). Two oriented manifolds are oriented cobordant if and only if their Stiefel–Whitney and Pontrjagin numbers are the same.
The low-dimensional oriented cobordism groups are:
Ω_0^{SO} = Z, Ω_1^{SO} = Ω_2^{SO} = Ω_3^{SO} = 0, Ω_4^{SO} = Z, Ω_5^{SO} = Z/2.
The signature of an oriented 4i-dimensional manifold M is defined as the signature of the intersection form on H^{2i}(M; Z) and is denoted by σ(M). It is an oriented cobordism invariant, which is expressed in terms of the Pontrjagin numbers by the Hirzebruch signature theorem.
For example, for any i_1, ..., i_k ≥ 1,
σ(CP^{2i_1} × ⋯ × CP^{2i_k}) = 1.
The signature map σ : Ω_{4i}^{SO} → Z is onto for all i ≥ 1, and an isomorphism for i = 1.
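The following is a hedged illustration, added here rather than taken from the article, of the lowest-dimensional case of the signature theorem just mentioned:

```latex
% Dimension-4 case of the Hirzebruch signature theorem (illustrative addition):
% for a closed oriented 4-manifold M,
\[
  \sigma(M) \;=\; \tfrac{1}{3}\, p_1[M] ,
\]
% consistent with $\sigma(\mathbb{CP}^2) = 1$ and $p_1[\mathbb{CP}^2] = 3$.
% Since both sides are oriented cobordism invariants, the signature map
% $\sigma : \Omega_4^{SO} \to \mathbb{Z}$ is detected by the single Pontrjagin
% number $p_1$, matching the statement that it is an isomorphism for $i = 1$.
```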
Cobordism as an extraordinary cohomology theory
Every vector bundle theory (real, complex etc.) has an extraordinary cohomology theory called K-theory. Similarly, every cobordism theory ΩG has an extraordinary cohomology theory, with homology ("bordism") groups and cohomology ("cobordism") groups for any space X. The generalized homology groups are covariant in X, and the generalized cohomology groups are contravariant in X. The cobordism groups defined above are, from this point of view, the homology groups of a point: . Then is the group of bordism classes of pairs (M, f) with M a closed n-dimensional manifold M (with G-structure) and f : M → X a map. Such pairs (M, f), (N, g) are bordant if there exists a G-cobordism (W; M, N) with a map h : W → X, which restricts to f on M, and to g on N.
An n-dimensional manifold M has a fundamental homology class [M] ∈ Hn(M) (with coefficients in Z/2 in general, and in Z in the oriented case), defining a natural transformation
which is far from being an isomorphism in general.
The bordism and cobordism theories of a space satisfy the Eilenberg–Steenrod axioms apart from the dimension axiom. This does not mean that the groups can be effectively computed once one knows the cobordism theory of a point and the homology of the space X, though the Atiyah–Hirzebruch spectral sequence gives a starting point for calculations. The computation is only easy if the particular cobordism theory reduces to a product of ordinary homology theories, in which case the bordism groups are the ordinary homology groups
This is true for unoriented cobordism. Other cobordism theories do not reduce to ordinary homology in this way, notably framed cobordism, oriented cobordism and complex cobordism. The last-named theory in particular is much used by algebraic topologists as a computational tool (e.g., for the homotopy groups of spheres).
Cobordism theories are represented by Thom spectra MG: given a group G, the Thom spectrum is composed from the Thom spaces MGn of the standard vector bundles over the classifying spaces BGn. Note that even for similar groups, Thom spectra can be very different: MSO and MO are very different, reflecting the difference between oriented and unoriented cobordism.
From the point of view of spectra, unoriented cobordism is a product of Eilenberg–MacLane spectra – MO = H(π∗(MO)) – while oriented cobordism is a product of Eilenberg–MacLane spectra rationally, and at 2, but not at odd primes: the oriented cobordism spectrum MSO is rather more complicated than MO.
Other results
In 1959, C.T.C. Wall proved that two manifolds are cobordant if and only if their Pontrjagin numbers and Stiefel numbers are the same.
See also
h-cobordism
Link concordance
List of cohomology theories
Symplectic filling
Cobordism hypothesis
Cobordism ring
Timeline of bordism
Notes
References
John Frank Adams, Stable homotopy and generalised homology, Univ. Chicago Press (1974).
Sergei Novikov, Methods of algebraic topology from the point of view of cobordism theory, Izv. Akad. Nauk SSSR Ser. Mat. 31 (1967), 855–951.
Lev Pontryagin, Smooth manifolds and their applications in homotopy theory American Mathematical Society Translations, Ser. 2, Vol. 11, pp. 1–114 (1959).
Daniel Quillen, On the formal group laws of unoriented and complex cobordism theory Bull. Amer. Math. Soc., 75 (1969) pp. 1293–1298.
Douglas Ravenel, Complex cobordism and stable homotopy groups of spheres, Acad. Press (1986).
Yuli B. Rudyak, On Thom spectra, orientability, and (co)bordism, Springer (2008).
Robert E. Stong, Notes on cobordism theory, Princeton Univ. Press (1968).
René Thom, Quelques propriétés globales des variétés différentiables, Commentarii Mathematici Helvetici 28, 17-86 (1954).
External links
Bordism on the Manifold Atlas.
B-Bordism on the Manifold Atlas.
Differential topology
Algebraic topology
Surgery theory | Cobordism | Mathematics | 4,797 |
608,935 | https://en.wikipedia.org/wiki/Marginalia | Marginalia (or apostils) are marks made in the margins of a book or other document. They may be scribbles, comments, glosses (annotations), critiques, doodles, drolleries, or illuminations.
Biblical manuscripts
Biblical manuscripts have notes in the margin, for liturgical use. Numbers of texts' divisions are given at the margin (, Ammonian Sections, Eusebian Canons). There are some scholia, corrections and other notes usually made later by hand in the margin. Marginalia may also be of relevance because many ancient or medieval writers of marginalia may have had access to other relevant texts that, although they may have been widely copied at the time, have since then been lost due to wars, prosecution, or censorship. As such, they might give clues to an earlier, more widely known context of the extant form of the underlying text than is currently appreciated. For this reason, scholars of ancient texts usually try to find as many still existing manuscripts of the texts they are researching, because the notes scribbled in the margin might contain additional clues to the interpretation of these texts.
History
The scholia on classical manuscripts are the earliest known form of marginalia.
In Europe, before the invention of the printing press, books were copied by hand, originally onto vellum and later onto paper. Paper was expensive and vellum was much more expensive. A single book cost as much as a house. Books, therefore, were long-term investments expected to be handed down to succeeding generations. Readers commonly wrote notes in the margins of books in order to enhance the understanding of later readers. Of the 52 extant manuscript copies of Lucretius' "De rerum natura" (On the Nature of Things) available to scholars, all but three contain marginal notes.
The practice of writing in the margins of books gradually declined over several centuries after the invention of the printing press. Printed books gradually became much less expensive, so they were no longer regarded as long-term assets to be improved for succeeding generations. The first Gutenberg Bible was printed in the 1450s. Hand annotations occur in most surviving books through the end of the 1500s. Marginalia did not become unusual until sometime in the 1800s.
Fermat's claim, written around 1637, of a proof of Fermat's last theorem too big to fit in the margin is the most famous mathematical marginal note. Voltaire, in the 1700s, annotated books in his library so extensively that his annotations have been collected and published. The first recorded use of the word marginalia is in 1819 in Blackwood's Magazine. From 1845 to 1849 Edgar Allan Poe titled some of his reflections and fragmentary material "Marginalia". Five volumes of Samuel T. Coleridge's marginalia have been published. Beginning in the 1990s, attempts have been made to design and market e-book devices permitting a limited form of marginalia.
Some famous marginalia were serious works, or drafts thereof, written in margins due to scarcity of paper. Voltaire composed in book margins while in prison, and Sir Walter Raleigh wrote a personal statement in margins just before his execution.
Recent studies
Marginalia can add to or detract from the value of an association copy of a book, depending on the author of the marginalia and on the book.
Catherine C. Marshall, doing research on the future of user interface design, has studied the phenomenon of user annotation of texts. She discovered that in several university departments, students would scour the piles of textbooks at used book dealers for consistently annotated copies. The students had a good appreciation for their predecessors' distillation of knowledge. In recent years, the marginalia left behind by university students as they engage with library textbooks has also been a topic of interest to sociologists looking to understand the experience of being a university student.
The former Moscow correspondent of The Financial Times, John Lloyd, has stated that he was shown Stalin's copy of Machiavelli's The Prince, with marginal comments.
American poet Billy Collins has explored the phenomenon of annotation within his poem titled "Marginalia".
A study on medieval and Renaissance manuscripts where snails are depicted on marginalia shows that these illustrations are a comic relief due to the similarity between the armor of knights and the shell of snails.
Writers known for their marginalia
David Foster Wallace
Edgar Allan Poe
Herman Melville
Isaac Newton
John Adams
Machiavelli
Mark Twain
Michel de Montaigne
Oscar Wilde
Pierre de Fermat
Samuel T. Coleridge
Sylvia Plath
Hester Thrale Piozzi
Voltaire
See also
Annotation, often in the form of a margin note but written by another hand.
Interpolation (manuscripts)
References
Other resources
Alston, R. C. Books with Manuscript: A short title catalog of Books with Manuscript Notes in the British Library. London: British Library, 1994.
Camille, M. (1992). Image on the edge: the margins of medieval art. Harvard University Press.
Coleridge, S. T. Marginalia, Ed. George Walley and H. J. Jackson. The Collected works of Samuel Taylor Coleridge 12. Bolligen Series 75. 5 vols. Princeton University Press, 1980-.
Jackson, H. J. Marginalia: Readers writing in Books, New Haven: Yale University Press, 2001. N.B: one of the first books on this subject
Screti, Z. (2024). Finding the Marginal in Marginalia: The Importance of Including Marginalia Descriptions in Catalog Entries. Collections, 20(1), 122-141.
Spedding, P., & Tankard, P. (2021). Marginal notes: social reading and the literal margins. Palgrave Macmillan.
External links
Barry Brahier, 2006 (University of Minnesota).
Book design
Book collecting
Writing | Marginalia | Engineering | 1,203 |
27,461,561 | https://en.wikipedia.org/wiki/Theories%20of%20cloaking | Theories of cloaking discusses various theories based on science and research, for producing an electromagnetic cloaking device. Theories presented employ transformation optics, event cloaking, dipolar scattering cancellation, tunneling light transmittance, sensors and active sources, and acoustic cloaking.
A cloaking device is one where the purpose of the transformation is to hide something, so that a defined region of space is invisibly isolated from passing electromagnetic fields (see Metamaterial cloaking) or sound waves. Objects in the defined location are still present, but incident waves are guided around them without being affected by the object itself. Along with this basic "cloaking device", other related concepts have been proposed in peer reviewed, scientific articles, and are discussed here. Naturally, some of the theories discussed here also employ metamaterials, either electromagnetic or acoustic, although often in a different manner than the original demonstration and its successor, the broad-band cloak.
The first electromagnetic cloak
The first electromagnetic cloaking device was produced in 2006, using gradient-index metamaterials. This has led to the burgeoning field of transformation optics (and now transformation acoustics), where the propagation of waves is precisely manipulated by controlling the behaviour of the material through which the light (sound) is travelling.
Ordinary spatial cloaking
Waves and the host material in which they propagate have a symbiotic relationship: both act on each other. A simple spatial cloak relies on fine tuning the properties of the propagation medium in order to direct the flow smoothly around an object, like water flowing past a rock in a stream, but without reflection, or without creating turbulence. Another analogy is that of a flow of cars passing a symmetrical traffic island – the cars are temporarily diverted, but can later reassemble themselves into a smooth flow that holds no information about whether the traffic island was small or large, or whether flowers or a large advertising billboard might have been planted on it.
Although both analogies given above have an implied direction (that of the water flow, or of the road orientation), cloaks are often designed so as to be isotropic, i.e. to work equally well for all orientations. However, they do not need to be so general, and might only work in two dimensions, as in the original electromagnetic demonstration, or only from one side, as for the so-called carpet cloak.
Spatial cloaks have other characteristics: whatever they contain can (in principle) be kept invisible forever, since an object inside the cloak may simply remain there. Signals emitted by the objects inside the cloak that are not absorbed can likewise be trapped forever by its internal structure. If a spatial cloak could be turned off and on again at will, the objects inside would then appear and disappear accordingly.
Space-time cloaking
The event cloak is a means of manipulating electromagnetic radiation in space and time in such a way that a certain collection of happenings, or events, is concealed from distant observers. Conceptually, a safecracker can enter a scene, steal the cash and exit, whilst a surveillance camera records the safe door locked and undisturbed all the time. The concept utilizes the science of metamaterials in which light can be made to behave in ways that are not found in naturally occurring materials.
The event cloak works by designing a medium in which different parts of the light illuminating a certain region can be either slowed or accelerated. A leading portion of the light is accelerated so that it arrives before the events occur, whilst a trailing part is slowed and arrives too late. After their occurrence, the light is reformed by slowing the leading part and accelerating the trailing part. The distant observer only sees a continuous illumination, whilst the events that occurred during the dark period of the cloak's operation remain undetected. The concept can be related to traffic flowing along a highway: at a certain point some cars are accelerated up, whilst the ones behind are slowed. The result is a temporary gap in the traffic allowing a pedestrian to cross. After this, the process can be reversed so that the traffic resumes its continuous flow without a gap. Regarding the cars as light particles (photons), the act of the pedestrian crossing the road is never suspected by the observer down the highway, who sees an uninterrupted and unperturbed flow of cars.
For absolute concealment, the events must be non-radiating. If they do emit light during their occurrence (e.g. by fluorescence), then this light is received by the distant observer as a single flash.
Applications of the Event Cloak include the possibility to achieve `interrupt-without-interrupt' in data channels that converge at a node. A primary calculation can be temporarily suspended to process priority information from another channel. Afterwards the suspended channel can be resumed in such a way as to appear as though it was never interrupted.
The idea of the event cloak was first proposed by a team of researchers at Imperial College London (UK) in 2010, and published in the Journal of Optics. An experimental demonstration of the basic concept using nonlinear optical technology has been presented in a preprint on the Cornell physics arXiv. This uses time lenses to slow down and speed up the light, and thereby improves on the original proposal from McCall et al. which instead relied on the nonlinear refractive index of optical fibres. The experiment claims a cloaked time interval of about 10 picoseconds, but that extension into the nanosecond and microsecond regimes should be possible.
An event cloaking scheme that requires a single dispersive medium (instead of two successive media with opposite dispersion) has also been proposed based on accelerating wavepackets. The idea is based on modulating a part of a monochromatic light wave with a discontinuous nonlinear frequency chirp so that two opposite accelerating caustics are created in space–time as the different frequency components propagate at different group velocities in the dispersive medium. Due to the structure of the frequency chirp, the expansion and contraction of the time gap happen continuously in the same medium thus creating a biconvex time gap that conceals the enclosed events.
Anomalous localized resonance cloaking
In 2006, the same year as the first metamaterial cloak, another type of cloak was proposed. This type of cloaking exploits resonance of light waves while matching the resonance of another object. In particular a particle placed near a superlens would appear to disappear as the light surrounding the particle resonates as the same frequency as the superlens. The resonance would effectively cancel out the light reflecting from the particle, rendering the particle electromagnetically invisible.
Cloaking objects at a distance
In 2009, a passive cloaking device was designed to be an 'external invisibility device' that leaves the concealed object out in the open so that it can 'see' its surroundings. This is based on the premise that cloaking research has not adequately provided a solution to an inherent problem: because no electromagnetic radiation can enter or leave the cloaked space, the concealed object is left without the ability to see, or otherwise detect, anything outside the cloaked space.
Such a cloaking device is also capable of ‘cloaking’ only parts of an object, such as opening a virtual peep hole on a wall so as to see the other side.
The traffic analogy used above for the spatial cloak can be adapted (albeit imperfectly) to describe this process. Imagine that a car has broken down in the vicinity of the roundabout, and is disrupting the traffic flow, causing cars to take different routes or creating a traffic jam. This exterior cloak corresponds to a carefully misshapen roundabout which manages to cancel or counteract the effect of the broken down car – so that as the traffic flow departs, there is again no evidence in it of either the roundabout or of the broken down car.
Plasmonic cover
The plasmonic cover, mentioned alongside metamaterial covers (see plasmonic metamaterials), theoretically utilizes plasmonic resonance effects to reduce the total scattering cross section of spherical and cylindrical objects. These are lossless metamaterial covers near their plasma resonance which could possibly induce a dramatic drop in the scattering cross section, making these objects nearly “invisible” or “transparent” to an outside observer. Low loss, even no-loss, passive covers might be utilized that do not require high dissipation, but rely on a completely different mechanism.
Materials with either negative or low-value constitutive parameters are required for this effect. Certain metals near their plasma frequency, or metamaterials with negative parameters, could fill this need. For example, several noble metals achieve this requirement because of their electrical permittivity at infrared or visible wavelengths, with relatively low loss.
Currently only microscopically small objects could possibly appear transparent.
These materials are further described as homogeneous, isotropic metamaterial covers near the plasma frequency that dramatically reduce the fields scattered by a given object. Furthermore, these do not require any absorptive process, any anisotropy or inhomogeneity, nor any interference cancellation.
The "classical theory" of metamaterial covers works with light of only one specific frequency.
More recent research by Kort-Kamp et al., which won the 2013 "School on Nonlinear Optics and Nanophotonics" prize, shows that it is possible to tune the metamaterial to different light frequencies.
Tunneling light transmission cloak
As implied by the nomenclature, this is a type of light transmission. Transmission of light (EM radiation) through an object such as a metallic film occurs with the assistance of tunnelling between resonating inclusions. This effect can be created by embedding a periodic configuration of dielectrics in a metal, for example. In the resulting transmission peaks, interactions between the dielectrics and interference effects cause mixing and splitting of resonances. With an effective permittivity close to unity, the results can be used to propose a method for rendering the resulting materials invisible.
More research in cloaking technology
There are other proposals for use of the cloaking technology.
In 2007, cloaking with metamaterials was reviewed and deficiencies were presented, along with theoretical solutions that could improve the capability to cloak objects. Later in 2007, a mathematical improvement in the cylindrical shielding to produce an electromagnetic "wormhole" was analyzed in three dimensions. The electromagnetic wormhole, an optical (not gravitational) device derived from cloaking theories, has potential applications for advancing some current technology.
Other advances may be realized with an acoustic superlens. In addition, acoustic metamaterials have realized negative refraction for sound waves. Possible advances could be enhanced ultrasound scans, sharpening sonic medical scans, seismic maps with more detail, and buildings no longer susceptible to earthquakes. Underground imaging may be improved with finer details. The acoustic superlens, acoustic cloaking, and acoustic metamaterials translates into novel applications for focusing, or steering, sonic waves.
Acoustic cloaking technology could be used to stop a sonar-using observer from detecting the presence of an object that would normally be detectable as it reflects or scatters sound waves. Ideally, the technology would encompass a broad spectrum of vibrations on a variety of scales. The range might be from miniature electronic or mechanical components up to large earthquakes. Although most progress has been made on mathematical and theoretical solutions, a laboratory metamaterial device for evading sonar has recently been demonstrated. It can be applied to sound frequencies from 40 to 80 kHz.
Waves also apply to bodies of water. A theory has been developed for a cloak that could "hide", or protect, man-made platforms, ships, and natural coastlines from destructive ocean waves, including tsunamis.
See also
Chirality (electromagnetism)
Invisibility
Metamaterial absorber
Metamaterial antennas
Negative index metamaterials
Nonlinear metamaterials
Photonic metamaterials
Photonic crystal
Seismic metamaterials
Split-ring resonator
Tunable metamaterials
Books
Metamaterials Handbook
Metamaterials: Physics and Engineering Explorations
References
Metamaterials
Theoretical physics | Theories of cloaking | Physics,Materials_science,Engineering | 2,494 |
248,717 | https://en.wikipedia.org/wiki/Typical%20set | In information theory, the typical set is a set of sequences whose probability is close to two raised to the negative power of the entropy of their source distribution. That this set has total probability close to one is a consequence of the asymptotic equipartition property (AEP) which is a kind of law of large numbers. The notion of typicality is only concerned with the probability of a sequence and not the actual sequence itself.
This has great use in compression theory as it provides a theoretical means for compressing data, allowing us to represent any sequence Xn using nH(X) bits on average, and, hence, justifying the use of entropy as a measure of information from a source.
The AEP can also be proven for a large class of stationary ergodic processes, allowing the typical set to be defined in more general cases.
Additionally, the typical set concept is foundational in understanding the limits of data transmission and error correction in communication systems. By leveraging the properties of typical sequences, efficient coding schemes like Shannon's source coding theorem and channel coding theorem are developed, enabling near-optimal data compression and reliable transmission over noisy channels.
(Weakly) typical sequences (weak typicality, entropy typicality)
If a sequence x1, ..., xn is drawn from an independent identically-distributed random variable (IID) X defined over a finite alphabet 𝒳, then the typical set, Aε(n), is defined as those sequences which satisfy:
2^{−n(H(X) + ε)} ≤ p(x1, x2, ..., xn) ≤ 2^{−n(H(X) − ε)},
where
H(X) = −Σ_{x ∈ 𝒳} p(x) log2 p(x)
is the information entropy of X. The probability above need only be within a factor of 2^{nε}. Taking the logarithm on all sides and dividing by −n, this definition can be equivalently stated as
H(X) − ε ≤ −(1/n) log2 p(x1, x2, ..., xn) ≤ H(X) + ε.
For an i.i.d. sequence, since
p(x1, x2, ..., xn) = ∏_{i=1}^{n} p(xi),
we further have
−(1/n) log2 p(x1, x2, ..., xn) = −(1/n) ∑_{i=1}^{n} log2 p(xi).
By the law of large numbers, for sufficiently large n
−(1/n) ∑_{i=1}^{n} log2 p(xi) ≈ H(X).
Properties
An essential characteristic of the typical set is that, if one draws a large number n of independent random samples from the distribution X, the resulting sequence (x1, x2, ..., xn) is very likely to be a member of the typical set, even though the typical set comprises only a small fraction of all the possible sequences. Formally, given any ε > 0, one can choose n such that:
The probability of a sequence from X being drawn from Aε(n) is greater than 1 − ε, i.e. Pr[(x1, x2, ..., xn) ∈ Aε(n)] ≥ 1 − ε
If the distribution over 𝒳 is not uniform, then the fraction of sequences that are typical is
|Aε(n)| / |𝒳|^n ≈ 2^{nH(X)} / 2^{n log2 |𝒳|} = 2^{−n(log2 |𝒳| − H(X))} → 0
as n becomes very large, since H(X) < log2 |𝒳|, where |𝒳| is the cardinality of 𝒳.
For a general stochastic process {X(t)} with AEP, the (weakly) typical set can be defined similarly with p(x1, x2, ..., xn) replaced by p(x0τ) (i.e. the probability of the sample limited to the time interval [0, τ]), n being the degree of freedom of the process in the time interval and H(X) being the entropy rate. If the process is continuous valued, differential entropy is used instead.
Example
Counter-intuitively, the most likely sequence is often not a member of the typical set. For example, suppose that X is an i.i.d Bernoulli random variable with p(0) = 0.1 and p(1) = 0.9. In n independent trials, since p(1) > p(0), the most likely sequence of outcomes is the sequence of all 1's, (1,1,...,1). Here the entropy of X is H(X) = 0.469, while
−(1/n) log2 p(1, 1, ..., 1) = −log2 0.9 = 0.152.
So this sequence is not in the typical set because its average logarithmic probability cannot come arbitrarily close to the entropy of the random variable X no matter how large we take the value of n.
For Bernoulli random variables, the typical set consists of sequences with average numbers of 0s and 1s in n independent trials. This is easily demonstrated: if p(1) = p and p(0) = 1 − p, then for n trials with m 1's, we have
−(1/n) log2 p(x1, x2, ..., xn) = −(m/n) log2 p − ((n − m)/n) log2 (1 − p).
The average number of 1's in a sequence of Bernoulli trials is m = np. Thus, we have
−(1/n) log2 p(x1, x2, ..., xn) = −p log2 p − (1 − p) log2 (1 − p) = H(X).
For this example, if n = 10, then the typical set consists of all sequences that have a single 0 in the entire sequence. In case p(0) = p(1) = 0.5, every possible binary sequence belongs to the typical set.
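The following minimal Python sketch (an added illustration; the tolerance ε = 0.1 and all identifiers are assumptions, not taken from the article) enumerates every length-10 sequence from this Bernoulli(0.9) source and tests weak typicality directly from the definition above:

```python
from itertools import product
from math import log2

p, n, eps = 0.9, 10, 0.1                      # Bernoulli source and tolerance (assumed values)
H = -(p * log2(p) + (1 - p) * log2(1 - p))    # entropy H(X) ~ 0.469 bits/symbol

def is_weakly_typical(seq):
    """True if the per-symbol log-probability of seq is within eps of H(X)."""
    logp = sum(log2(p) if s == 1 else log2(1 - p) for s in seq)
    return abs(-logp / n - H) <= eps

typical = [s for s in product((0, 1), repeat=n) if is_weakly_typical(s)]
prob_typical = sum(p ** sum(s) * (1 - p) ** (n - sum(s)) for s in typical)

print(len(typical), "of", 2 ** n, "sequences are typical")        # 10 of 1024
print("total probability of the typical set:", round(prob_typical, 3))
print("all-ones sequence typical?", is_weakly_typical((1,) * n))  # False
```

With these parameters, exactly the ten sequences containing a single 0 pass the test, while the all-ones sequence, despite being the single most likely outcome, does not, matching the discussion above.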
Strongly typical sequences (strong typicality, letter typicality)
If a sequence x1, ..., xn is drawn from some specified joint distribution defined over a finite or an infinite alphabet , then the strongly typical set, Aε,strong(n) is defined as the set of sequences which satisfy
where N(xi) is the number of occurrences of a specific symbol xi in the sequence.
It can be shown that strongly typical sequences are also weakly typical (with a different constant ε), and hence the name. The two forms, however, are not equivalent. Strong typicality is often easier to work with in proving theorems for memoryless channels. However, as is apparent from the definition, this form of typicality is only defined for random variables having finite support.
Jointly typical sequences
Two sequences x^n and y^n are jointly ε-typical if the pair (x^n, y^n) is ε-typical with respect to the joint distribution p(x, y) and both x^n and y^n are ε-typical with respect to their marginal distributions p(x) and p(y). The set of all such pairs of sequences (x^n, y^n) is denoted by Aε(n)(X, Y). Jointly ε-typical n-tuple sequences are defined similarly.
Let X̃^n and Ỹ^n be two independent sequences of random variables with the same marginal distributions p(x) and p(y). Then for any ε > 0, for sufficiently large n, jointly typical sequences satisfy the following properties:
P[(X^n, Y^n) ∈ Aε(n)(X, Y)] → 1 as n → ∞
|Aε(n)(X, Y)| ≤ 2^{n(H(X, Y) + ε)}
P[(X̃^n, Ỹ^n) ∈ Aε(n)(X, Y)] ≤ 2^{−n(I(X; Y) − 3ε)}
Applications of typicality
Typical set encoding
In information theory, typical set encoding encodes only the sequences in the typical set of a stochastic source with fixed-length block codes. Since the size of the typical set is about 2^{nH(X)}, only about nH(X) bits are required for the coding, while at the same time ensuring that the chance of encoding error is limited to ε. Asymptotically, it is, by the AEP, lossless and achieves the minimum rate equal to the entropy rate of the source.
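A hedged sketch of such an encoder follows (an added illustration; the helper names and the reuse of the n = 10, p = 0.9 example are assumptions, not from the article): typical sequences are indexed with fixed-length codewords, and atypical ones are treated as an encoding error, which the AEP makes rare for large n.

```python
from math import ceil, log2

def build_codebook(typical_sequences):
    """Assign each typical sequence a fixed-length binary index."""
    code_len = max(1, ceil(log2(len(typical_sequences))))
    return {seq: i for i, seq in enumerate(typical_sequences)}, code_len

def encode(seq, index, code_len):
    # Sequences outside the typical set cause an encoding error; by the AEP
    # this happens with probability at most eps for sufficiently large n.
    if seq not in index:
        return None
    return format(index[seq], f"0{code_len}b")

# Toy usage with the n = 10, p = 0.9 example above: the typical set is the ten
# sequences containing exactly one 0, so 4-bit codewords suffice per block,
# on the order of n*H(X) ~ 4.7 bits instead of the raw 10 bits.
typical = [tuple(0 if i == j else 1 for i in range(10)) for j in range(10)]
index, code_len = build_codebook(typical)
print(code_len, encode(typical[3], index, code_len))   # 4 0011
```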
Typical set decoding
In information theory, typical set decoding is used in conjunction with random coding to estimate the transmitted message as the one with a codeword that is jointly ε-typical with the observation, i.e. ŵ is declared to be the transmitted message if there is a unique ŵ such that (x^n(ŵ), y^n) ∈ Aε(n)(X, Y),
where ŵ, x^n(ŵ), y^n are the message estimate, the codeword of message ŵ, and the observation respectively. Aε(n)(X, Y) is defined with respect to the joint distribution p(x)p(y|x), where p(y|x) is the transition probability that characterizes the channel statistics, and p(x) is some input distribution used to generate the codewords in the random codebook.
Universal null-hypothesis testing
Universal channel code
See also
Asymptotic equipartition property
Source coding theorem
Noisy-channel coding theorem
References
C. E. Shannon, "A Mathematical Theory of Communication", Bell System Technical Journal, vol. 27, pp. 379–423, 623-656, July, October, 1948
David J. C. MacKay. Information Theory, Inference, and Learning Algorithms Cambridge: Cambridge University Press, 2003.
Information theory
Probability theory | Typical set | Mathematics,Technology,Engineering | 1,484 |
74,838,640 | https://en.wikipedia.org/wiki/TERN-501 | TERN-501 is a selective thyromimetic drug that is being developed for the treatment of non-alcoholic fatty liver disease.
References
Thyroid hormone receptor beta agonists
Chloroarenes
Oxadiazoles
Pyridazines
Anilides
Isopropyl compounds
Experimental drugs developed for non-alcoholic fatty liver disease | TERN-501 | Chemistry | 69 |
74,919,811 | https://en.wikipedia.org/wiki/Sabine%20Hadida | Sabine Hadida is a pharmacologist and senior vice president at Vertex Pharmaceuticals. She works at Vertex's cystic fibrosis research center in San Diego. She was awarded the Breakthrough Prize in Life Sciences in 2024.
Education
Hadida has a bachelor's degree, master's degree, and Ph.D. in pharmacy from the University of Barcelona, Spain. She worked as a postdoctoral fellow at the University of Pittsburgh studying fluorous chemistry.
Career
At Vertex Pharmaceuticals, Hadida led the chemistry team to work on drug treatments for cystic fibrosis and pain.
Awards and honors
Hadida has authored over 30 peer-reviewed scientific articles and holds over 60 U.S. patents. She is the recipient of the 2022 Drug Hunter Award, the 2019 Distinguished Scientist Award from the American Chemical Society, San Diego Chapter, and the 2013 American Chemical Society Heroes of Chemistry award.
In September 2023, she received the 2024 Breakthrough Prize in Life Sciences alongside Paul Negulescu and Frederick Van Goor, for developing treatment for cystic fibrosis.
References
Living people
Medical researchers
Pharmacologists
Women pharmacologists
Year of birth missing (living people)
Women medical researchers | Sabine Hadida | Chemistry | 240 |
3,045,014 | https://en.wikipedia.org/wiki/Pleasure%20principle%20%28psychology%29 | In Freudian psychoanalysis, the pleasure principle () is the instinctive seeking of pleasure and avoiding of pain to satisfy biological and psychological needs. Specifically, the pleasure principle is the animating force behind the id.
Precursors
Epicurus in the ancient world, and later Jeremy Bentham, laid stress upon the role of pleasure in directing human life, the latter stating: "Nature has placed mankind under the governance of two sovereign masters, pain and pleasure".
Freud's most immediate predecessor and guide however was Gustav Theodor Fechner and his psychophysics.
Freudian developments
Freud used the idea that the mind seeks pleasure and avoids pain in his Project for a Scientific Psychology of 1895, as well as in the theoretical portion of The Interpretation of Dreams of 1900, where he termed it the 'unpleasure principle'.
In the Two Principles of Mental Functioning of 1911, contrasting it with the reality principle, Freud spoke for the first time of "the pleasure-unpleasure principle, or more shortly the pleasure principle". In 1923, linking the pleasure principle to the libido he described it as the watchman over life; and in Civilization and Its Discontents of 1930 he still considered that "what decides the purpose of life is simply the programme of the pleasure principle".
While on occasion Freud wrote of the near omnipotence of the pleasure principle in mental life, elsewhere he referred more cautiously to the mind's strong (but not always fulfilled) tendency towards the pleasure principle.
Two principles
Freud contrasted the pleasure principle with the counterpart concept of the reality principle, which describes the capacity to defer gratification of a desire when circumstantial reality disallows its immediate gratification. In infancy and early childhood, the id rules behavior by obeying only the pleasure principle. People at that age only seek immediate gratification, aiming to satisfy cravings such as hunger and thirst, and at later ages the id seeks out sex.
Maturity is learning to endure the pain of deferred gratification. Freud argued that "an ego thus educated has become 'reasonable'; it no longer lets itself be governed by the pleasure principle, but obeys the reality principle, which also, at bottom, seeks to obtain pleasure, but pleasure which is assured through taking account of reality, even though it is pleasure postponed and diminished".
The beyond
In his book Beyond the Pleasure Principle, published in 1920, Freud considered the possibility of "the operation of tendencies beyond the pleasure principle, that is, of tendencies more primitive than it and independent of it". By examining the role of repetition compulsion in potentially over-riding the pleasure principle, Freud ultimately developed his opposition between Libido, the life instinct, and the death drive.
See also
References
External links
Pleasure/unpleasure principle
Psychoanalytic terminology
Motivation
Positive psychology
Pleasure
Energy and instincts
Freudian psychology | Pleasure principle (psychology) | Biology | 584 |
15,487,064 | https://en.wikipedia.org/wiki/S-TADIL%20J | S-TADIL J, or Satellite TADIL J, is a real-time Beyond Line-of-Sight (BLOS) Tactical Digital Information Link (TADIL) supporting the exchange of the same J Series message set that is implemented on Link-16 via the Joint Tactical Information Distribution System (JTIDS). S-TADIL J provides for robust continuous connectivity between Navy ships that are beyond JTIDS line-of-sight (LOS) transmission range. S-TADIL J is designed to support and significantly improve long-range TADIL connectivity between widely dispersed fleet operational forces. With the deployment of S-TADIL J, operational units will have three possible data link paths that can be used to support multi-ship data link-coordinated operations. S-TADIL J supports the same levels of surveillance and weapon coordination data exchange provided by Link-11 and Link-16. The TADIL J message standard is implemented on S-TADIL J to provide for the same level of information content as Link-16.
Change of terminology
In the US, the term Tactical Digital Information Link (TADIL) is obsolete (per DISA guidance) and is now more commonly known as Tactical Data Link (TDL).
See also
Tactical Data Links (TDLs)
Standard Interface for Multiple Platform Evaluation (SIMPLE), allows (Beyond Line of Sight) transmission of M-Series and J-Series messages over IP-based protocols.
Joint Range Extension Applications Protocol (JREAP), allows transmission of M-Series and J-Series messages over long-distance networks.
External links
Federation of American Scientists article: Tactical Digital Information Links (TADIL)
Military radio systems
Military equipment of NATO
Military communications | S-TADIL J | Engineering | 345 |
59,202,539 | https://en.wikipedia.org/wiki/%CE%92-Isophorone | β-Isophorone is an organic compound with the formula (CH3)3C6H7O. Classified as a β,γ-unsaturated ketone, it is an isomer of and common impurity in the major industrial intermediate α-isophorone, which is produced from acetone. Like the alpha isomer, beta-isophorone is a colorless liquid.
See also
Phorone
References
Ketones
Ketone solvents
Cyclohexenes | Β-Isophorone | Chemistry | 104 |
6,014,225 | https://en.wikipedia.org/wiki/Pressure%20drop | Pressure drop (often abbreviated as "dP" or "ΔP") is defined as the difference in total pressure between two points of a fluid carrying network. A pressure drop occurs when frictional forces, caused by the resistance to flow, act on a fluid as it flows through a conduit (such as a channel, pipe, or tube). This friction converts some of the fluid's hydraulic energy to thermal energy (i.e., internal energy). Since the thermal energy cannot be converted back to hydraulic energy, the fluid experiences a drop in pressure, as is required by conservation of energy.
The main determinants of resistance to fluid flow are fluid velocity through the pipe and fluid viscosity. Pressure drop increases proportionally to the frictional shear forces within the piping network. A piping network containing a high relative roughness rating as well as many pipe fittings and joints, tube convergence, divergence, turns, surface roughness, and other physical properties will affect the pressure drop. High flow velocities or high fluid viscosities result in a larger pressure drop across a pipe section, valve, or elbow joint. Low velocity will result in less (or no) pressure drop. The fluid may also be biphasic as in pneumatic conveying with a gas and a solid; in this case, the friction of the solid must also be taken into consideration for calculating the pressure drop.
Applications
Fluid in a system will always flow from a region of higher pressure to a region of lower pressure, assuming it has a path to do so. All things being equal, a higher pressure drop will lead to a higher flow (except in cases of choked flow).
The pressure drop of a given system will determine the amount of energy needed to convey fluid through that system. For example, a larger pump could be required to move a set amount of water through smaller-diameter pipes (with higher velocity and thus higher pressure drop) as compared to a system with larger-diameter pipes (with lower velocity and thus lower pressure drop).
Calculation of pressure drop
Pressure drop is related inversely to pipe diameter to the fifth power. For example, halving a pipe's diameter would increase the pressure drop by a factor of 32 (e.g. from 2 psi to 64 psi), assuming no change in flow.
Pressure drop in piping is directly proportional to the length of the piping—for example, a pipe with twice the length will have twice the pressure drop, given the same flow rate. Piping fittings (such as elbow and tee joints) generally lead to greater pressure drop than straight pipe. As such, a number of correlations have been developed to calculate equivalent length of fittings.
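Both scalings follow from the Darcy–Weisbach equation (listed below) when the friction factor is treated as constant. As a rough numerical sketch (the friction factor, pipe dimensions and flow rate here are illustrative assumptions, not values from any standard), the following Python function evaluates the Darcy–Weisbach pressure drop for a straight pipe and reproduces the factor-of-32 increase when the diameter is halved at a fixed volumetric flow rate:

```python
import math

def darcy_weisbach_dp(f_d, length, diameter, rho, flow):
    """Pressure drop [Pa] over a straight pipe via the Darcy-Weisbach equation.

    f_d      : Darcy friction factor (dimensionless, taken as a given constant here)
    length   : pipe length [m]
    diameter : inner pipe diameter [m]
    rho      : fluid density [kg/m^3]
    flow     : volumetric flow rate [m^3/s]
    """
    velocity = flow / (math.pi * diameter**2 / 4)          # mean flow velocity [m/s]
    return f_d * (length / diameter) * rho * velocity**2 / 2

# Illustrative case: water (1000 kg/m^3) in a 50 m pipe at 5 L/s
dp_100mm = darcy_weisbach_dp(0.02, 50.0, 0.10, 1000.0, 0.005)
dp_50mm = darcy_weisbach_dp(0.02, 50.0, 0.05, 1000.0, 0.005)
print(dp_100mm, dp_50mm, dp_50mm / dp_100mm)   # ratio is 2**5 = 32
```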
Certain valves are provided with an associated flow coefficient, commonly known as Cv or Kv. The flow coefficient relates pressure drop, flow rate, and specific gravity for a given valve.
Many empirical calculations exist for calculation of pressure drop, including:
Darcy–Weisbach equation, to calculate pressure drop in a pipe
Hagen–Poiseuille equation
See also
ΔP
head loss
References
External links
Mechanics
Fluid dynamics | Pressure drop | Physics,Chemistry,Engineering | 634 |
26,628,083 | https://en.wikipedia.org/wiki/Omega-categorical%20theory | In mathematical logic, an omega-categorical theory is a theory that has exactly one countably infinite model up to isomorphism. Omega-categoricity is the special case κ = ℵ0 = ω of κ-categoricity, and omega-categorical theories are also referred to as ω-categorical. The notion is most important for countable first-order theories.
Equivalent conditions for omega-categoricity
Many conditions on a theory are equivalent to the property of omega-categoricity. In 1959 Erwin Engeler, Czesław Ryll-Nardzewski and Lars Svenonius each proved several of them independently. Despite this, the literature still widely refers to the Ryll-Nardzewski theorem as a name for these conditions. The conditions included with the theorem vary between authors.
Given a countable complete first-order theory T with infinite models, the following are equivalent:
The theory T is omega-categorical.
Every countable model of T has an oligomorphic automorphism group (that is, there are finitely many orbits on M^n for every n).
Some countable model of T has an oligomorphic automorphism group.
The theory T has a model which, for every natural number n, realizes only finitely many n-types, that is, the Stone space Sn(T) is finite.
For every natural number n, T has only finitely many n-types.
For every natural number n, every n-type is isolated.
For every natural number n, up to equivalence modulo T there are only finitely many formulas with n free variables, in other words, for every n, the nth Lindenbaum–Tarski algebra of T is finite.
Every model of T is atomic.
Every countable model of T is atomic.
The theory T has a countable atomic and saturated model.
The theory T has a saturated prime model.
Examples
The theory of any countably infinite structure which is homogeneous over a finite relational language is omega-categorical. More generally, the theory of the Fraïssé limit of any uniformly locally finite Fraïssé class is omega-categorical. Hence, the following theories are omega-categorical:
The theory of dense linear orders without endpoints (Cantor's isomorphism theorem)
The theory of the Rado graph
The theory of infinite linear spaces over any finite field
The theory of atomless Boolean algebras
Notes
References
Model theory
Mathematical theorems | Omega-categorical theory | Mathematics | 499 |
70,193,166 | https://en.wikipedia.org/wiki/Michner%20Plating%20site | The Michner Plating Co.–Mechanic Street Site, as dubbed by the EPA, is a 140,000-square-foot industrial complex that sits upstream of the Grand River in Jackson, Michigan, on the corner of N. Mechanic Street and W. Trail Street.
Architecture
The first portion of the main structure was completed between 1907 and 1910, three stories, and made out of reinforced concrete. During the site's main expansion in 1920, a new brick-wall, steel-framed, wood-floored building was annexed onto this. A new basement with access doors was added along with that initial expansion movement.
Following this, two new similarly structured buildings were constructed. These extensions used large attic spaces to trap heat in winter and shingles to absorb heat in summer. The architecture modernized the plant for agriculture, allowing seeds on the third floor to stay warm in winter during packaging, when the produce was exposed. New boilers were added and placed in the northern basement.
After being converted to a plating facility, the open floor plan was used for machinery, mostly the ceiling-mounted conveyor line which wrapped around the first and second floors. New fiberglass was installed over the building's windows, originally to prevent metal degradation from UV light.
Isbell's Seeds
The very first industrial buildings on this site were constructed in the late 19th and early 20th century; these companies were Weeks Drug & Chemical, Lewis Blessings Cigar & Paper Box, and Novelty Manufacturing.
By 1920 Lewis Blessings and Weeks Drug & Chemicals' property on Mechanic Street had been acquired by S.M. Isbell Seed Co.
Isbell Seed was Michigan's biggest supplier of agricultural produce at the time, specifically beans. After acquiring the buildings, the S.M. Isbell Company proceeded with the demolition of multiple structures on site, excluding Weeks's three-story storage building. Isbell Seed expanded the Weeks storage building until 1930, when Isbell evidently fell victim to the Great Depression.
The original Isbell signs painted on the building's facade have stood the test of time and can still be seen today.
Michner acquisition
In 1935 the three-story complex was purchased by Joseph Michner, who founded Michner Plating Corp. there. The company manufactured and plated automobile parts, particularly seat belts and other small parts. Engraving, heat treating, chrome, and electroplating also made up the bulk of manufacturing that occurred at Michner Plating Co.
The Michner Plating Corporation renovated the Isbell buildings for plating, repainting them with the Michner name, and continued the expansion of the buildings. The company also bought the former Novelty Mfg. site, allowing it to expand on the north end of the site.
Michner Adjusts Properties for Reuse:
1936: New chrome line building with conveyor access to the former Isbell structures
1940s: Northern loading area restructured with more coverage and access
1962: Two floor office building and loading room, under new management of Walter Michner
1963: Former Isbell Complex sold to SalCo Engineering & Manufacturing
1965: Nickel & zinc plating lines expansion, heat treatment machine installed
Later years
After the 1963 site split, Michner Plating Co. began receiving numerous OSHA violation notices for its Mechanic St. site over its handling and disposal of chemicals. In 2007 Michner ceased operations at the Mechanic St. site because of high operating costs and poor returns, even as demand for plating continued to increase. This resulted in the relocation to their Angling Rd. property, where Jason Michner would take over company operations. The building's vacancy allowed Michner to salvage metals by cutting pipes, which sparked a small fire in the unused, wood-beam building. Michner Plating Co. later entered foreclosure in 2013 due to $1.6 million in unpaid back taxes.
SalCo relocated services to Micor Drive in 2015 following the EPA's investigation into the site; the investigation concluded with the discovery of over '1,100 drums, vats, totes, and other containers potentially containing cyanide, zinc cyanide, nickel chloride, chromic acid, hydrogen peroxide, sulfuric acid, ignitable wastes, reactive wastes and other chemicals'. Groundwater and soil tests were also performed across the site for PFAS originating from a flooded basement used to store industrial materials, a hazard due to its proximity to the Grand River and private wells; the site achieved a Hazardous Ranking System score of 39.12.
In 2016 the US Nuclear Regulatory Commission inquired about former-tenant Novelty Manufacturing's business in radium isotopes, used in their foot-warmer contraption, which could potentially harm the fragile environment around the facility. In 2018, after performing a radiological survey, the NRC informed the EPA that no such radium isotope contamination had been found during the survey.
Current status
During the late 2010s the abandoned buildings became a popular spot to vandalize and explore. Due to Jackson's graffiti ordinance, none of the graffiti has been painted over since the site was obtained by Jackson County. The complex is popular among local underground photographers, and some intruders have even brought grills into the buildings.
In 2021 a portion of the tar concrete roof in the north building collapsed due to severe weathering and previous neglect. The southern portion previously occupied by SalCo took on flooding, asbestos-insulated piping was damaged, and bricks fell. The entire Michner Plating Site remains unsafe for industrial use, but has been considered for partial redevelopment given its strategic location.
The EPA resumed cleanup in 2021, and designated Michner Plating as a Superfund Site; citing numerous barrels containing industrial contaminants, buried within the building's foundation. Jackson County is developing a plan to demolish the building for site reuse by December 2022. All entry points in both buildings have been sealed for the cleanup process to prevent interruption.
References
Brick buildings and structures in the United States
Buildings and structures completed in 1965
Buildings and structures in Jackson County, Michigan
Mill architecture
Superfund sites in Michigan
Unused buildings in Michigan | Michner Plating site | Engineering | 1,254 |
343,257 | https://en.wikipedia.org/wiki/List%20of%20craters%20on%20Mars%3A%20A%E2%80%93G | This is a partial list of craters on Mars. There are hundreds of thousands of impact craters on Mars, but only some of them have names. This list here only contains named Martian craters starting with the letter A – G (see also lists for H – N and O – Z).
Large Martian craters (greater than 60 kilometers in diameter) are named after famous scientists and science fiction authors; smaller ones (less than 60 km in diameter) get their names from towns on Earth. Craters cannot be named for living people, and small crater names are not intended to be commemorative – that is, a small crater isn't actually named after a specific town on Earth, but rather its name comes at random from a pool of terrestrial place names, with some exceptions made for craters near landing sites. Latitude and longitude are given as planetographic coordinates with west longitude.
A
B
C
D
E
F
G
See also
List of catenae on Mars
List of craters on Mars
List of mountains on Mars
References
External links
USGS: Martian system nomenclature
USGS: Mars Nomenclature: Craters
Mars: A–G | List of craters on Mars: A–G | Astronomy | 221 |
25,672,752 | https://en.wikipedia.org/wiki/Quantum%20inverse%20scattering%20method | In quantum physics, the quantum inverse scattering method (QISM), similar to the closely related algebraic Bethe ansatz, is a method for solving integrable models in 1+1 dimensions, introduced by Leon Takhtajan and L. D. Faddeev in 1979.
It can be viewed as a quantized version of the classical inverse scattering method pioneered by Norman Zabusky and Martin Kruskal used to investigate the Korteweg–de Vries equation and later other integrable partial differential equations. In both, a Lax matrix features heavily and scattering data is used to construct solutions to the original system.
While the classical inverse scattering method is used to solve integrable partial differential equations which model continuous media (for example, the KdV equation models shallow water waves), the QISM is used to solve many-body quantum systems, sometimes known as spin chains, of which the Heisenberg spin chain is the best-studied and most famous example. These are typically discrete systems, with particles fixed at different points of a lattice, but limits of results obtained by the QISM can give predictions even for field theories defined on a continuum, such as the quantum sine-Gordon model.
Discussion
The quantum inverse scattering method relates two different approaches:
the Bethe ansatz, a method of solving integrable quantum models in one space and one time dimension.
the inverse scattering transform, a method of solving classical integrable differential equations of the evolutionary type.
This method led to the formulation of quantum groups, in particular the Yangian. The center of the Yangian, given by the quantum determinant plays a prominent role in the method.
An important concept in the inverse scattering transform is the Lax representation. The quantum inverse scattering method starts by the quantization of the Lax representation and reproduces the results of the Bethe ansatz. In fact, it allows the Bethe ansatz to be written in a new form: the algebraic Bethe ansatz. This led to further progress in the understanding of quantum integrable systems, such as the quantum Heisenberg model, the quantum nonlinear Schrödinger equation (also known as the Lieb–Liniger model or the Tonks–Girardeau gas) and the Hubbard model.
The theory of correlation functions was developed, relating determinant representations, descriptions by differential equations and the Riemann–Hilbert problem. Asymptotics of correlation functions which include space, time and temperature dependence were evaluated in 1991.
Explicit expressions for the higher conservation laws of the integrable models were obtained in 1989.
Essential progress was achieved in study of ice-type models: the bulk free energy of the
six vertex model depends on boundary conditions even in the thermodynamic limit.
Procedure
The steps can be summarized as follows:
Take an R-matrix which solves the Yang–Baxter equation (a numerical check of this equation for a simple R-matrix is sketched after this list).
Take a representation of an algebra satisfying the RTT relations.
Find the spectrum of the generating function of the centre of this algebra.
Find correlators.
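As a concrete illustration of step 1, the sketch below (our own minimal example, not part of the standard presentation of the method) verifies numerically that the simplest rational R-matrix, R(u) = u·I + P with P the permutation operator on C² ⊗ C², satisfies the Yang–Baxter equation R12(u−v) R13(u) R23(v) = R23(v) R13(u) R12(u−v) on C² ⊗ C² ⊗ C²:

```python
import numpy as np

I2 = np.eye(2)
# Permutation (swap) operator on C^2 (x) C^2
P = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]], dtype=float)

def R(u):
    """Rational R-matrix R(u) = u*I + P acting on C^2 (x) C^2."""
    return u * np.eye(4) + P

def embed(mat, sites):
    """Embed a two-site operator into the given pair of sites of C^2 (x) C^2 (x) C^2."""
    P23 = np.kron(I2, P)                     # swaps tensor factors 2 and 3
    if sites == (1, 2):
        return np.kron(mat, I2)
    if sites == (2, 3):
        return np.kron(I2, mat)
    if sites == (1, 3):
        return P23 @ np.kron(mat, I2) @ P23  # conjugate the (1,2) embedding by the swap
    raise ValueError(sites)

u, v = 0.7, -1.3
lhs = embed(R(u - v), (1, 2)) @ embed(R(u), (1, 3)) @ embed(R(v), (2, 3))
rhs = embed(R(v), (2, 3)) @ embed(R(u), (1, 3)) @ embed(R(u - v), (1, 2))
print(np.allclose(lhs, rhs))   # True: the Yang-Baxter equation holds
```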
References
Exactly solvable models
Quantum mechanics | Quantum inverse scattering method | Physics | 629 |
9,701,718 | https://en.wikipedia.org/wiki/Dice-S%C3%B8rensen%20coefficient | The Dice-Sørensen coefficient (see below for other names) is a statistic used to gauge the similarity of two samples. It was independently developed by the botanists Lee Raymond Dice and Thorvald Sørensen, who published in 1945 and 1948 respectively.
Name
The index is known by several other names, especially Sørensen–Dice index, Sørensen index and Dice's coefficient. Other variations include the "similarity coefficient" or "index", such as Dice similarity coefficient (DSC). Common alternate spellings for Sørensen are Sorenson, Soerenson and Sörenson, and all three can also be seen with the –sen ending (the Danish letter ø is phonetically equivalent to the German/Swedish ö, which can be written as oe in ASCII).
Other names include:
F1 score
Czekanowski's binary (non-quantitative) index
Measure of genetic similarity
Zijdenbos similarity index, referring to a 1994 paper of Zijdenbos et al.
Formula
Sørensen's original formula was intended to be applied to discrete data. Given two sets, X and Y, it is defined as
DSC = 2 |X ∩ Y| / (|X| + |Y|)
where |X| and |Y| are the cardinalities of the two sets (i.e. the number of elements in each set).
The Sørensen index equals twice the number of elements common to both sets divided by the sum of the number of elements in each set. Equivalently, the index is the size of the intersection as a fraction of the average size of the two sets.
When applied to Boolean data, using the definition of true positive (TP), false positive (FP), and false negative (FN), it can be written as
DSC = 2TP / (2TP + FP + FN).
It is different from the Jaccard index which only counts true positives once in both the numerator and denominator. DSC is the quotient of similarity and ranges between 0 and 1. It can be viewed as a similarity measure over sets.
Similarly to the Jaccard index, the set operations can be expressed in terms of vector operations over binary vectors a and b:
s_v = 2 |a · b| / (|a|² + |b|²)
which gives the same outcome over binary vectors and also gives a more general similarity metric over vectors in general terms.
For sets X and Y of keywords used in information retrieval, the coefficient may be defined as twice the shared information (intersection) over the sum of cardinalities: DSC = 2 |X ∩ Y| / (|X| + |Y|).
When taken as a string similarity measure, the coefficient may be calculated for two strings, x and y, using bigrams as follows:
s = 2 nt / (nx + ny)
where nt is the number of character bigrams found in both strings, nx is the number of bigrams in string x and ny is the number of bigrams in string y. For example, to calculate the similarity between:
night
nacht
We would find the set of bigrams in each word:
{ni,ig,gh,ht}
{na,ac,ch,ht}
Each set has four elements, and the intersection of these two sets has only one element: ht.
Inserting these numbers into the formula, we calculate s = (2 · 1) / (4 + 4) = 0.25.
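A minimal Python sketch of both forms of the coefficient (the function names are ours, purely for illustration) reproduces the worked example above and the conversion to the Jaccard index discussed below:

```python
def dice(x: set, y: set) -> float:
    """Sørensen-Dice coefficient of two sets: 2|X ∩ Y| / (|X| + |Y|)."""
    return 2 * len(x & y) / (len(x) + len(y))

def bigrams(s: str) -> set:
    """Set of character bigrams of a string, e.g. 'night' -> {'ni','ig','gh','ht'}."""
    return {s[i:i + 2] for i in range(len(s) - 1)}

def dice_strings(x: str, y: str) -> float:
    """Bigram-based string similarity."""
    return dice(bigrams(x), bigrams(y))

print(dice_strings("night", "nacht"))   # 0.25, as in the worked example above

s = dice({1, 2, 3}, {2, 3, 4})          # 2*2 / (3+3) = 0.666...
j = s / (2 - s)                         # corresponding Jaccard index J = S/(2-S) = 0.5
print(s, j)
```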
Continuous Dice Coefficient
Source:
For a discrete (binary) ground truth and continuous measures in the interval [0,1], the following formula can be used:
Where and
c can be computed as follows:
If which means no overlap between A and B, c is set to 1 arbitrarily.
Difference from Jaccard
This coefficient is not very different in form from the Jaccard index. In fact, both are equivalent in the sense that given a value for the Sørensen–Dice coefficient S, one can calculate the respective Jaccard index value J and vice versa, using the equations J = S / (2 − S) and S = 2J / (1 + J).
Since the Sørensen–Dice coefficient does not satisfy the triangle inequality, it can be considered a semimetric version of the Jaccard index.
The function ranges between zero and one, like Jaccard. Unlike Jaccard, the corresponding difference function
d = 1 − DSC = 1 − 2 |X ∩ Y| / (|X| + |Y|)
is not a proper distance metric as it does not satisfy the triangle inequality. The simplest counterexample of this is given by the three sets {a}, {b} and {a, b}. We have d({a}, {b}) = 1 and d({a}, {a, b}) = d({b}, {a, b}) = 1/3. To satisfy the triangle inequality, the sum of any two sides must be greater than or equal to that of the remaining side. However, d({a}, {a, b}) + d({a, b}, {b}) = 2/3, which is less than d({a}, {b}) = 1.
Applications
The Sørensen–Dice coefficient is useful for ecological community data (e.g. Looman & Campbell, 1960). Justification for its use is primarily empirical rather than theoretical (although it can be justified theoretically as the intersection of two fuzzy sets). As compared to Euclidean distance, the Sørensen distance retains sensitivity in more heterogeneous data sets and gives less weight to outliers. Recently the Dice score (and its variations, e.g. logDice taking a logarithm of it) has become popular in computer lexicography for measuring the lexical association score of two given words.
logDice is also used as part of the Mash Distance for genome and metagenome distance estimation
Finally, Dice is used in image segmentation, in particular for comparing algorithm output against reference masks in medical applications.
Abundance version
The expression is easily extended to abundance instead of presence/absence of species. This quantitative version is known by several names:
Quantitative Sørensen–Dice index
Quantitative Sørensen index
Quantitative Dice index
Bray–Curtis similarity (1 minus the Bray-Curtis dissimilarity)
Czekanowski's quantitative index
Steinhaus index
Pielou's percentage similarity
1 minus the Hellinger distance
Proportion of specific agreement or positive agreement
See also
Correlation
F1 score
Jaccard index
Hamming distance
Mantel test
Morisita's overlap index
Overlap coefficient
Renkonen similarity index
Tversky index
Universal adaptive strategy theory (UAST)
References
External links
Information retrieval evaluation
String metrics
Measure theory
Similarity measures | Dice-Sørensen coefficient | Physics | 1,221 |
52,848,120 | https://en.wikipedia.org/wiki/Clebsch%20representation | In physics and mathematics, the Clebsch representation of an arbitrary three-dimensional vector field v is:
v = ∇φ + ψ ∇χ,
where the scalar fields φ, ψ and χ are known as Clebsch potentials or Monge potentials, named after Alfred Clebsch (1833–1872) and Gaspard Monge (1746–1818), and ∇ is the gradient operator.
Background
In fluid dynamics and plasma physics, the Clebsch representation provides a means to overcome the difficulties to describe an inviscid flow with non-zero vorticity – in the Eulerian reference frame – using Lagrangian mechanics and Hamiltonian mechanics. At the critical point of such functionals the result is the Euler equations, a set of equations describing the fluid flow. Note that the mentioned difficulties do not arise when describing the flow through a variational principle in the Lagrangian reference frame. In case of surface gravity waves, the Clebsch representation leads to a rotational-flow form of Luke's variational principle.
For the Clebsch representation to be possible, the vector field has (locally) to be bounded, continuous and sufficiently smooth. For global applicability has to decay fast enough towards infinity. The Clebsch decomposition is not unique, and (two) additional constraints are necessary to uniquely define the Clebsch potentials. Since is in general not solenoidal, the Clebsch representation does not in general satisfy the Helmholtz decomposition.
Vorticity
The vorticity ω is equal to
ω = ∇ × v = ∇ × (∇φ + ψ ∇χ) = ∇ψ × ∇χ,
with the last step due to the vector calculus identity ∇ × (ψ ∇χ) = ∇ψ × ∇χ + ψ ∇ × (∇χ) = ∇ψ × ∇χ. So the vorticity ω is perpendicular to both ∇ψ and ∇χ, while further the vorticity does not depend on φ.
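A quick symbolic check of these statements (an illustrative sketch with arbitrarily chosen potentials, not taken from the article) can be done with SymPy: the curl of the Clebsch form reduces to ∇ψ × ∇χ, with φ dropping out entirely.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# Arbitrary smooth example potentials (purely illustrative)
phi = sp.sin(x) * y + z**2
psi = x * y * z
chi = sp.cos(y) + x**2

def grad(f):
    return [sp.diff(f, s) for s in (x, y, z)]

def curl(F):
    Fx, Fy, Fz = F
    return [sp.diff(Fz, y) - sp.diff(Fy, z),
            sp.diff(Fx, z) - sp.diff(Fz, x),
            sp.diff(Fy, x) - sp.diff(Fx, y)]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

# Clebsch representation v = grad(phi) + psi * grad(chi)
v = [gp + psi * gc for gp, gc in zip(grad(phi), grad(chi))]

# curl(v) should equal grad(psi) x grad(chi), independent of phi
diff = [sp.simplify(c - w) for c, w in zip(curl(v), cross(grad(psi), grad(chi)))]
print(diff)   # [0, 0, 0]
```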
Notes
References
Vector calculus
Fluid dynamics
Plasma theory and modeling | Clebsch representation | Physics,Chemistry,Engineering | 344 |
54,295,335 | https://en.wikipedia.org/wiki/Magnadur | Magnadur is a sintered barium ferrite, specifically BaFe12O19 in an anisotropic form. It is used for making permanent magnets. The material was invented by Mullard and was used initially particularly for focussing rings on cathode-ray tubes. Magnadur magnets retain their magnetism well, and are often used in education. Magnadur can also be used in DC motors.
Physical characteristics
Remanence 0.9 T
Coercivity 110 kA/m
Maximum energy product 20 kJ/m³ at 86 kA/m
References
Ferromagnetic materials | Magnadur | Physics,Chemistry | 123 |
37,806,728 | https://en.wikipedia.org/wiki/HD%205608 | HD 5608 is an orange-hued star in the northern constellation of Andromeda with one known planet, HD 5608 b. It is a dim star near the lower limit of visibility to the naked eye, having an apparent visual magnitude of +5.98. The distance to HD 5608, as estimated from an annual parallax shift of , is . It is moving closer to the Earth with a heliocentric radial velocity of −23 km/s, and is expected to make its closest approach in 1.285 million years when it comes to within .
This is a K-type subgiant star on the red giant branch track with a stellar classification of K0 IV. It has 1.5 times the mass of the Sun and, at the age of three billion years, has expanded to five times the Sun's radius. It is radiating 13 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 4,897 K. It has a higher than solar metallicity – a term astronomers use to describe the abundance of elements other than hydrogen and helium.
HD 5608 has a co-moving companion, HD 5608 B, at an angular separation of , which has been directly imaged. The physical separation of the pair is calculated as or , depending on the assumptions. It has an H band magnitude difference of 9.40 with the primary and an estimated mass of . A second companion at a separation of is a background star. This companion star has since been characterized by radial velocity and astrometry in addition to imaging.
Planetary companion
In 2012, the Okayama Planet Search Program reported the detection of a substellar companion in orbit around HD 5608, based upon Doppler measurements between 2003 and 2011 from the Okayama observatory in Kurashiki. These showed a linear trend indicating the existence of a distant companion. The data showed an additional periodicity of around 766 days. This object shows a minimum mass of , a semimajor axis of , and an eccentricity of 0.19. The high eccentricity of this planet could have been induced by the low mass companion star HD 5608 B via the Kozai mechanism.
References
K-type subgiants
Planetary systems with one confirmed planet
Binary stars
Andromeda (constellation)
0275
BD+33 0140
005608
004552 | HD 5608 | Astronomy | 484 |
5,458,288 | https://en.wikipedia.org/wiki/Horse%20Tamers | The colossal pair of marble "Horse Tamers"—often identified as Castor and Pollux—have stood since antiquity near the site of the Baths of Constantine on the Quirinal Hill, Rome. Napoleon's agents wanted to include them among the classical booty removed from Rome after the 1797 Treaty of Tolentino, but they were too large to be buried or to be moved very far. They are fourth-century Roman copies of Greek originals. They gave to the Quirinal its medieval name of Monte Cavallo, which lingered into the nineteenth century. Their coarseness has been noted, while the vigor—notably that of the horses—has been admired. The Colossi of the Quirinal are the original exponents of this theme of dominating power, which has appealed to powerful patrons since the seventeenth century, from Marly-le-Roi to Saint Petersburg.
The huge sculptures were noted in the medieval guidebook for pilgrims, Mirabilia Urbis Romae. Their ruinous bases still bore inscriptions OPUS FIDIÆ and OPUS PRAXITELIS, hopeful attributions that must have dated from Late Antiquity (Haskell and Penny 1981, p 136). The Mirabilia confidently reported that these were "the names of two seers who had arrived in Rome under Tiberius, naked, to tell the 'bare truth' that the princes of the world were like horses which had not yet been mounted by a true king."
Between 1589 and 1591, Sixtus V had them restored and set on new pedestals flanking a fountain, another engineering triumph for Domenico Fontana, who had moved and re-erected the obelisk in Piazza San Pietro. In 1783-86 they were re-set at an angle, and an obelisk, which had recently been found at the Mausoleum of Augustus, was re-erected between them. (The present granite basin, which had served for watering cattle in the Roman Forum was set between them in 1818.)
An interpretation of their subject as Alexander and Bucephalus was proposed in 1558 by Onofrio Panvinio, who suggested that Constantine had removed them from Alexandria, where they would have referred to the familiar legend of the city's founder. This became a popular alternative to their identification as the Dioscuri. According to a story long repeated by popular guides, they were created by Phidias and Praxiteles competing for fame, despite these two long preceding Alexander.
Other works
About 1560 a second pair of colossal marble figures accompanied by horses were unearthed and set up on either side of the entrance to the Campidoglio.
The fame of the Horse Tamers recommended them for other situations where the ruling of base natures by higher nature was iconographically desirable. The Marly Horses made by Guillaume Coustou the Elder for Louis XV at Marly-le-Roi were re-set triumphantly in Paris at the time of the French Revolution, flanking the entrance to the Champs-Elysées. In the 1640s, bronze replicas were to flank the entrance to the Louvre: moulds were taken for the purpose, but the project foundered. Paolo Triscornia carved what seem to have been the first full-scale replicas of the groups for the entrance of the Manège (the riding school of the royal guards) in St. Petersburg (Haskell and Penny p 139). The standing of the heroic nudes had risen with the new approach to Antiquity of Neoclassicism: Sir Richard Westmacott was commissioned to cast a full-scale bronze of the "Phidias" figure, supplied with a shield and sword, as a tribute to the Duke of Wellington; it was erected at Hyde Park Corner opposite the Iron Duke's London residence Apsley House, where some French affected to think it was the Duke himself, stark naked. Christian Friedrich Tieck placed copies of the figures, in cast iron, atop Karl Friedrich Schinkel's Altes Museum, Berlin. In St Petersburg, the Anichkov Bridge has four colossal bronze Horse Tamer sculptures by Baron Peter Klodt von Urgensburg. In Brooklyn's Prospect Park, at the Ocean Parkway ("Park Circle") entrance, stands a pair of bronze Horse Tamers sculptures (1899) by Frederick MacMonnies, installed as the newly combined City of New York was spreading across the Long Island landscape.
Notes
References
See also
4th-century Roman sculptures
Roman copies of Greek sculptures
Horses in art
Outdoor sculptures in Rome
Castor and Pollux
Pope Sixtus V
Alexander the Great in art | Horse Tamers | Astronomy | 933 |
9,024,703 | https://en.wikipedia.org/wiki/List%20of%20UN%20numbers%201701%20to%201800 | UN numbers from UN1701 to UN1800 as assigned by the United Nations Committee of Experts on the Transport of Dangerous Goods are as follows:
UN 1701 to UN 1800
n.o.s. = not otherwise specified, meaning a collective entry to which substances, mixtures, solutions or articles may be assigned if (a) they are not mentioned by name in the 3.2 Dangerous Goods List and (b) they exhibit chemical, physical and/or dangerous properties corresponding to the Class, classification code, packing group and the name and description of the n.o.s. entry.
See also
Lists of UN numbers
References
External links
ADR Dangerous Goods, cited on 7 May 2015.
UN Dangerous Goods List from 2015, cited on 7 May 2015.
UN Dangerous Goods List from 2013, cited on 7 May 2015.
Lists of UN numbers | List of UN numbers 1701 to 1800 | Chemistry,Technology | 166 |
28,165,618 | https://en.wikipedia.org/wiki/Super%20LCD | Super LCD (SLCD) is a display technology used by numerous manufacturers for mobile device displays. It is mostly used by HTC, though Super LCD panels are actually produced by S-LCD Corporation.
Super LCD differs from a regular LCD in that it does not have an air gap between the outer glass and the display element. This produces less glare and makes the user feel "closer" to the display itself. Super LCD's benefits also include lower power consumption and improved outdoor visibility. Super LCD has been succeeded by the newer Super LCD2 displays.
Some manufacturers have moved back to SLCD displays because of the expense and limited production capacity of AMOLED displays. SLCD screens have remained popular because power consumption is comparatively lower when displaying lighter colors, such as the white background found on most internet pages. While AMOLED technology generally displays darker blacks and more saturated colours, making videos and images appear more vibrant, SLCD technology avoids the need for PenTile subpixel formations (which use a larger shared blue subpixel to avoid fading over time) and thus renders sharper detail, making text and videos appear clearer.
Also known as: "S-LCD", "S-LCD2", "SLCD", "SLCD2", "Super LCD2"
External links
AMOLED vs LCD - Differences explained on Android Authority
Smartphones
Display technology | Super LCD | Engineering | 281 |
529,449 | https://en.wikipedia.org/wiki/Audio%20control%20surface | In the domain of digital audio, a control surface is a human interface device (HID) that allows the user to control a digital audio workstation or other digital audio application. Generally, a control surface will contain one or more controls that can be assigned to parameters in the software, allowing tactile control of the software. As digital audio software is complex and can play any number of functions in the audio chain, control surfaces can be used to control many aspects of music production, including virtual instruments, samplers, signal processors, mixers, DJ software, and music sequencers.
A control surface is a physical interface, often resembling an analog mixing console, used to manage a DAW (digital audio workstation), plug-ins, and other audio software. Control surfaces come in many configurations, ranging from a single fader, such as the Sparrow 3x100mm, to a large advanced console such as the Avid S6. Control surfaces often feature faders, knobs (rotary encoders), and buttons that can be assigned to parameters in the software. Other control surfaces are designed to give a musician control over the sequencer while recording, and thus provide transport controls (remote control of record, playback and song position). Control surfaces are often incorporated into MIDI controllers to give the musician more control over an instrument. Control surfaces with motorized faders can read and write mix automation.
The control surface connects to the host computer via many different interfaces. MIDI was the first major interface created for this purpose, although many devices now use USB, FireWire, or Ethernet.
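As an illustration of the MIDI case, the short sketch below (using the third-party mido library; the control-to-parameter mapping is an assumption for illustration, not tied to any particular device) reads the control change messages that a typical fader or knob on a control surface sends and converts them to a normalized parameter value:

```python
import mido  # third-party MIDI library (pip install mido python-rtmidi)

# Open the system's default MIDI input; a specific control surface can be
# selected by name with mido.open_input("<port name>").
with mido.open_input() as port:
    for msg in port:                        # blocks, yielding messages as they arrive
        if msg.type == 'control_change':    # faders and knobs usually send CC messages
            value = msg.value / 127.0       # map the 7-bit CC value (0-127) to 0.0-1.0
            print(f"channel {msg.channel}  CC {msg.control}  ->  {value:.3f}")
```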
History
The history of mixing control surfaces began in the 1950s with the Altec Revocon remote mixing controller for sound reinforcement. It was the first piece of equipment that allowed a sound engineer to control a backstage or booth mixer, using motorized controls on the mixer, from anywhere in the audience or venue. Soon after, in the 1960s, Fairchild introduced the Integra Control Surface mixers. These were the first mixers to incorporate channel strips resembling the fader channels of today. The Integra Control Surface controlled a rack mixer using LDRs (light-dependent resistors) and reed relays. The next major development happened another decade later, in the 1970s, when motorized faders (flying faders) were invented. This allowed the integration of mix automation capabilities into consoles, letting the position of the physical faders correspond with automation data. For example, if a fade-out was needed, the automation data would move the fader from its current position down to 0 over the specified time period. This technology was expensive at first, but the products improved through the 1980s, allowing the cost to decrease and making them more commonplace.
Robert Moog introduced the first MIDI keyboards in 1982 which are control surfaces. Shortly after, Sequential Circuits released the Prophet 600 which was a keyboard with a MIDI interface as well as Roland releasing the MPU-401 which offered software that allowed the device to communicate directly with a PC. Both were released in 1983.
Types of Controls on Control Surfaces
Source:
Non-Motorized Faders
Traditional slider-style level controls. They work well in a live production environment, when listening to others while creating music, and when providing playback to DJs during looping. They are not recommended for mixing with DAWs due to the lack of data output.
Motorized Faders
The faders are motorized for playback of a mix. They represent the physical position of a channel's level at a specific position in a mix's timeline. You can create new data by grabbing it during the mix and it will overwrite what is there.
Buttons
Often will have an LED to indicate on or off. They can indicate channel mutes, solo sends, talkback, or perform navigation within a menu.
Transport Controls
These are the buttons resembling those on a conventional tape machine. They allow you to play, stop, and fast-forward.
Display/Meters
Mimics the meters on screen on the DAW's computer display. It may show different settings, levels, routing, time code, and other information.
Examples
Smart AV Tango - A hybrid controller with a 22" touch screen, compatible with major DAWs for MAC & PC.
M-Audio ProjectMix - Control surface that can control many different applications.
Mackie Control - Serves a similar purpose as the ProjectMix.
References
External links
An introduction to control surfaces
Audio electronics | Audio control surface | Engineering | 893 |
28,738,842 | https://en.wikipedia.org/wiki/C16H18N2O3 | The molecular formula C16H18N2O3 (molar mass: 286.331 g/mol) may refer to:
Cromakalim
Difenoxuron
Molecular formulas | C16H18N2O3 | Physics,Chemistry | 40 |
35,857,329 | https://en.wikipedia.org/wiki/Steiner%20point%20%28triangle%29 | In triangle geometry, the Steiner point is a particular point associated with a triangle. It is a triangle center and it is designated as the center X(99) in Clark Kimberling's Encyclopedia of Triangle Centers. Jakob Steiner (1796–1863), Swiss mathematician, described this point in 1826. The point was given Steiner's name by Joseph Neuberg in 1886.
Definition
The Steiner point is defined as follows. (This is not the way in which Steiner defined it.)
Let ABC be any given triangle. Let O be the circumcenter and K be the symmedian point of triangle ABC. The circle with OK as diameter is the Brocard circle of triangle ABC. The line through O perpendicular to the line BC intersects the Brocard circle at another point A'. The line through O perpendicular to the line CA intersects the Brocard circle at another point B'. The line through O perpendicular to the line AB intersects the Brocard circle at another point C'. (The triangle A'B'C' is the Brocard triangle of triangle ABC.) Let LA be the line through A' parallel to the line BC, LB be the line through B' parallel to the line CA and LC be the line through C' parallel to the line AB. Then the three lines LA, LB and LC are concurrent. The point of concurrency is the Steiner point of triangle ABC.
In the Encyclopedia of Triangle Centers the Steiner point is defined as follows;
Let be any given triangle. Let be the circumcenter and be the symmedian point of triangle . Let be the reflection of the line in the line , be the reflection of the line in the line and be the reflection of the line in the line . Let the lines and intersect at , the lines and intersect at and the lines and intersect at . Then the lines , and are concurrent. The point of concurrency is the Steiner point of triangle .
Trilinear coordinates
The trilinear coordinates of the Steiner point are given below:
bc / (b² − c²) : ca / (c² − a²) : ab / (a² − b²)
Properties
The Steiner circumellipse of triangle , also called the Steiner ellipse, is the ellipse of least area that passes through the vertices , and . The Steiner point of triangle lies on the Steiner circumellipse of triangle .
Canadian mathematician Ross Honsberger stated the following as a property of Steiner point: The Steiner point of a triangle is the center of mass of the system obtained by suspending at each vertex a mass equal to the magnitude of the exterior angle at that vertex. The center of mass of such a system is in fact not the Steiner point, but the Steiner curvature centroid, which has the trilinear coordinates . It is the triangle center designated as X(1115) in Encyclopedia of Triangle Centers.
The Simson line of the Steiner point of a triangle ABC is parallel to the line OK, where O is the circumcenter and K is the symmedian point of triangle ABC.
Tarry point
The Tarry point of a triangle is closely related to the Steiner point of the triangle. Let ABC be any given triangle. The point on the circumcircle of triangle ABC diametrically opposite to the Steiner point of triangle ABC is called the Tarry point of triangle ABC. The Tarry point is a triangle center and it is designated as the center X(98) in the Encyclopedia of Triangle Centers. The trilinear coordinates of the Tarry point are given below:
sec(A + ω) : sec(B + ω) : sec(C + ω)
where ω is the Brocard angle of triangle ABC, defined by
cot ω = cot A + cot B + cot C.
Similar to the definition of the Steiner point, the Tarry point can be defined as follows:
Let ABC be any given triangle. Let A'B'C' be the Brocard triangle of triangle ABC. Let LA be the line through A' perpendicular to the line BC, LB be the line through B' perpendicular to the line CA and LC be the line through C' perpendicular to the line AB. Then the three lines LA, LB and LC are concurrent. The point of concurrency is the Tarry point of triangle ABC.
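For readers who want to experiment numerically, the sketch below (the sample triangle and helper names are ours, purely illustrative) builds the Steiner point of a triangle from the trilinear coordinates quoted above and checks that it lies on the Steiner circumellipse and on the circumcircle, so that the Tarry point is its antipode:

```python
import numpy as np

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
a, b, c = np.linalg.norm(B - C), np.linalg.norm(C - A), np.linalg.norm(A - B)

def trilinear_to_cartesian(x, y, z):
    """Trilinears x : y : z  ->  barycentrics a*x : b*y : c*z  ->  Cartesian point."""
    u, v, w = a * x, b * y, c * z
    return (u * A + v * B + w * C) / (u + v + w)

# Steiner point X(99): trilinears bc/(b^2 - c^2) : ca/(c^2 - a^2) : ab/(a^2 - b^2)
tri = (b * c / (b**2 - c**2), c * a / (c**2 - a**2), a * b / (a**2 - b**2))
S = trilinear_to_cartesian(*tri)

# Circumcenter O (equidistant from the vertices) and circumradius R
M = 2 * np.array([B - A, C - A])
O = np.linalg.solve(M, np.array([B @ B - A @ A, C @ C - A @ A]))
R = np.linalg.norm(A - O)

# Check 1: S lies on the Steiner circumellipse (barycentric equation uv + vw + wu = 0)
u, v, w = np.array([a, b, c]) * np.array(tri)
u, v, w = np.array([u, v, w]) / (u + v + w)            # normalized barycentrics of S
print(np.isclose(u * v + v * w + w * u, 0.0))          # True

# Check 2: S lies on the circumcircle, and the Tarry point is its antipode
print(np.isclose(np.linalg.norm(S - O), R))            # True
tarry = 2 * O - S
```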
References
Triangle centers | Steiner point (triangle) | Physics,Mathematics | 748 |
10,271,359 | https://en.wikipedia.org/wiki/Implicit%20cognition | Implicit cognition refers to cognitive processes that occur outside conscious awareness or conscious control. This includes domains such as learning, perception, or memory which may influence a person's behavior without their conscious awareness of those influences.
Overview
Implicit cognition is everything one does and learns unconsciously or without any awareness that one is doing it. An example of implicit cognition could be when a person first learns to ride a bike: at first they are aware that they are learning the required skills. After having stopped for many years, when the person starts to ride the bike again they do not have to relearn the motor skills required, as their implicit knowledge of the motor skills takes over and they can just start riding the bike as if they had never stopped. In other words, they do not have to think about the actions that they are performing in order to ride the bike. It can be seen from this example that implicit cognition is involved with many of the different mental activities and everyday situations in people's daily lives. There are many processes in which implicit memory works, which include learning, our social cognition, and our problem-solving skills.
History
Implicit cognition was first described as early as 1649 by Descartes in his Passions of the Soul. He wrote that unpleasant childhood experiences can remain imprinted in a child's brain until its death without any conscious memory of them remaining. Even though this idea was never accepted by any of his peers, in 1704 Gottfried Wilhelm Leibniz in his New Essays Concerning Human Understanding stressed the importance of unconscious perceptions, which he said were ideas that we are not consciously aware of yet still influence people's behavior. He claimed that people carry residual effects of prior impressions without any remembrance of them. In 1802 the French philosopher Maine de Biran, in his The Influence of Habit on the Faculty of Thinking, was the first person after Leibniz to systematically discuss implicit memory, stating that after enough repetition a habit can become automatic or be completed without any conscious awareness. In 1870 Ewald Hering said that it was essential to consider unconscious memory, which is involved in involuntary recall and in the development of automatic and unconscious habitual actions.
Assessment
One of the most popular metrics utilizing implicit cognition is the implicit-association test. The IAT is designed to detect unconscious associations between concepts, making it a useful assessment in the field of social psychology. A controversial application of the IAT is the assessment of implicit stereotypes, such as associations between particular racial categories and stereotypes about those groups. There is significant academic and popular debate regarding its validity, reliability, and usefulness in assessing implicit bias.
Implicit learning
Implicit learning starts in early childhood. Children are not able to learn the formal grammar and rules of a language until about the age of seven, yet they can learn to talk by the age of four. One of the ways that this is possible is through implicit learning and association. Children learn their first language from what they hear when listening to adults and through their own talking activities. This shows that the way children learn language involves implicit learning.
Studies on implicit learning
A study was conducted with amnesiac patients in an attempt to demonstrate that amnesiac patients who were unable to learn a list of words or pictures when their performance was tested were nevertheless able to complete fragmented words and incomplete pictures. This was found to be true, as the patients performed better when asked to complete words or pictures. A possible explanation is that implicit memory is less susceptible to brain damage than explicit memory. In one case, a 54-year-old man with bitemporal damage, worse on the right side, had a hard time remembering things from his own life as well as famous events and names; yet he was able to perform within normal limits on a word completion task involving famous names and on judgments of famous faces. This is a prime example that implicit memory can be less vulnerable to brain damage.
A famous study investigated identification under blindsight in individuals who had suffered damage to one half of the visual cortex and were blind in the opposite half of the visual field. It was discovered that when objects or pictures were shown to these blind areas, the participants said that they saw no object or picture, but when asked to guess, a certain number were able to identify the stimulus as either a cross or a circle at a considerably higher rate than would have been expected by chance. The reason this happens is that the information can be processed through the first three stages of selection, organization, and interpretation or comprehension of the perceptual cycle but fails only at the last stage of retention and memory, where the identified image enters awareness. Thus stimuli can enter implicit memory even when people are unable to consciously perceive them.
Implicit social relations
Implicit cognition also plays a role in social cognition. People tend to see objects and individuals as more encouraging or acceptable the more often they are exposed to them. An example is the false-fame effect. Graf and Masson (1993) conducted a study in which they showed participants a list with both famous and non-famous names. Initially, participants were able to recall the famous names better than the non-famous names, but after a delay of about 24 hours they began to associate the non-famous names with famous people. This supports implicit cognition because the participants began to unconsciously associate the non-famous names with famous people.
Although the process is unconscious, implicit cognition influences how people view each other as well as their interactions with one another. People tend to view those who look alike as belonging together or to similar groups and associate them with the social groups that existed in their high school years. These groups represented different relations between the students and were made up of students who were perceived as having similarities among each other.
A study was conducted to see how much distance participants put between individuals in given circumstances. The participants were asked to place figures of individuals where they thought the figures should be standing. It was found that people typically place men and women close to each other, forming little families from the figures of a woman, a man, and children. The participants did the same when asked to show friends or acquaintances: the two figures were placed relatively close to one another, whereas figures representing strangers were placed far apart. There are two sides to this view of social relations: liking relations, where the ultimate goal is to be together, and disliking relations, where the goal is separation from the person. An example could be a person walking down a hallway who sees someone they know and like; they are likely to wave and say hello. If, on the other hand, the person they see is someone they dislike, the response will be the opposite, as they try to avoid the other person or get away as quickly as possible, showing the separation between the two. There are two views within the theory of social relations: one is that people mainly seek dominance over those around them, while the other is that people mainly see relations in terms of belonging or not belonging, liking or disliking one another. It is observed that males mainly seek dominance over one another, being competitive and looking to outdo each other, whereas females tend to frame their social views and values more in terms of belonging or liking, that is, their closeness to one another. Implicit cognition involves not only how people view each other but also how they view themselves. Our self-image is constructed from what others see of us as much as from our own views, and from the times we compare ourselves to other people. This ties in with implicit cognition because people do all of these things unconsciously, without being aware that they are making these judgments. Men do not consciously seek to be dominant over one another, just as women do not consciously arrange their social views or values in terms of closeness. These are things people do without conscious knowledge of their actions, which connects them with implicit cognition.
Implicit attitudes
Implicit attitudes (also called automatic attitudes) are mental evaluations that occur without the awareness of the person.
Although there is debate about whether these can be measured fully, such attitudes have been assessed with the implicit association test (IAT). The test claims to measure people's implicit associations with certain groups or races, but the controversy lies in whether it actually predicts people's future behaviors. Some claim that the IAT does predict whether someone will act differently toward a certain group; others believe there is not enough evidence to be sure this will happen.
It is not well known how these develop. Many believe that they come from past experiences: pleasant or unpleasant experiences can influence how a person's attitudes toward a specific thing are formed. This explanation implies that attitudes could be unpleasant if the previous experience was also unpleasant; they can also be formed by experiences in the early stages of life. Another possible explanation is that implicit attitudes can stem from affective experiences; there is evidence that the amygdala is involved in affective or emotional reactions to stimuli. A third explanation involves cultural biases: a study by Greenwald, McGhee, and Schwartz (1998) showed that in-group bias was more prevalent when the in-group was more in tune with their ancestral culture (for example, knowing the language).
Evidence suggests that early and affective experiences might affect implicit attitudes and associations more than the other explanations provided.
Implicit behaviors
There are scenarios in which we act on something and only later think back on how we might have handled it differently. That is implicit cognition coming into play: the mind draws on ethical judgments and similar past situations when interacting with a certain thought. Implicit cognition and its automated thought process allow a person to decide something on impulse. It is often defined as an involuntary process in which tasks proceed largely outside of consciousness. Many factors influence behaviors and thought processes, such as social learning, stigmas, and the two major aspects of implicit and explicit cognition. Implicit cognition, on the one hand, is obtained through social aspects and association, while explicit cognition is gained through propositional attitudes or beliefs about certain thoughts. Implicit cognition can incorporate a mixture of attention, goals, self-association, and at times even motivational processes. Researchers have used different methods to test these theories of behavior's correlation with implicit cognition. The Implicit Association Test (IAT) is a widely used method, according to Fazio & Olsen (2003) and Richetin & Richardson (2008). Since its publication roughly ten years earlier, it has been widely used in research on implicit attitudes. Implicit cognition is a process based on automatic mental interpretations: it is what a person really thinks, yet is not consciously aware of. Behavior is then affected, usually negatively; both theoretical and empirical reasons suggest that automatic cognitive processes contribute to aggressive behaviors.
Impulse behaviors are often created without awareness. Negativity is a characteristic often linked to implicit cognition since it is an automated response. Explicit cognition is rarely used when trying to discover the behavior behind a single thought process. Researchers again use IATs to determine one's thoughts and how a person incorporates these automatic processes; the findings suggest that implicit cognition may direct which behaviors a person chooses when facing extreme stimuli. For example, death can be perceived as positive, negative, or a combination of the two, and depending on its attributes it can involve a general perspective or a "me" attribute. Research has implied that implicit association with death and/or suicide initiates a final process when deciding how to cope with these extreme measures. Self-harm is another characteristic associated with implicit cognition, because although we may think of it consciously, it is controlled subconsciously. IATs showed a stronger correlation between implicit cognition and death/suicide than with self-harm. The idea of pain may influence a person to think twice, while suicide may seem quick; thus the automatic process shows how closely this negative behavior and implicit cognition go hand in hand. Automated processes do not allow a person to make a thoroughly conscious choice, thereby exerting a negative influence on behavior. Another negative behavior that can be associated with implicit cognition is depression. Whether a person takes a positive or negative outlook on a given situation can determine whether that person becomes associated with depression. It is easier to identify an implicit mindset precisely because it operates outside of awareness. Implicit processes are considered critical in determining a person's reactions to a certain schema, and implicit cognition often has an immediate affective influence on a person's reaction. Implicit cognitions also involve negative schemas that include hidden cognitive frameworks and the activation of stress. Awareness is often misinterpreted, and implicit cognition emerges because of these negative schemas. Behaviors that emerge through implicit cognition involve a variety of addictive behaviors, problematic thinking, depression, aggression, suicide, death, and other negative factors. Certain life situations, whether stressful, sudden, or anything along these lines, add to this schema, and in them aspects of implicit cognition are used and evaluated.
Implicit cognition can also be associated with mental illness and the way thoughts are processed. Automatic stigmas and attitudes may anticipate other cognitive and behavioral tendencies. A person with mental illness may show a guilt-related, self-associated personality. Because of these associations, the illness may be managed outside one's own control and awareness, showing how implicit cognition is affected. However, a dual process can be assessed within implicit and explicit cognition. Agreement between the two thought processes may be an issue: the explicit may not be in contact with the implicit, thereby causing more of a problem. Mental illness can involve both implicit and explicit attitudes; however, implicit self-concepts carry more negative consequences when dealing with mental illness. Many implicit problems happen to be associated with alcohol, although that is not the focus here in describing the relation between mental processes and implicit cognition. The mental illness most widely linked with implicit cognition is schizophrenia. Since a person with this illness has trouble detecting what is real and what is not, implicit memory is often studied in these patients. However, since it cannot really be determined whether the illness is emotional, mental, or a combination of both, some aspects of it are usually exercised uninterrupted and unconsciously. Because schizophrenia is widely varied and has different characteristics, we cannot precisely measure the outcome of implicit cognition.
Definition
Implicit cognition refers to perceptual, memory, comprehension, and performance processes that occur outside conscious awareness. For example, when a patient is discharged after surgery, the lingering effects of anesthesia can cause abnormal behaviors without any conscious awareness. According to Wiers et al. (2006), some scholars argue implicit cognition is misinterpreted and could be used to improve behaviors, while others highlight its dangers. Research studies have shown implicit cognition to be a strong predictor of several problems such as substance abuse, misconduct, and mental disorders. These inherent thoughts are influenced by early adolescent experiences, particularly negative cultural influences. Adolescents who experience a difficult childhood early on develop low levels of self-esteem, so the cognition to act dangerously develops without awareness. Research on implicit cognition has begun to grow, especially in relation to mental disorders.
In mental disorders
Schemas are the frameworks individuals use to make sense of their surroundings. This cognition happens through an explicit process of routinely recalling an item or an implicit process that lies outside conscious control. A recent study suggests individuals who have experienced a difficult upbringing develop schemata of fear and anxiety and will react almost immediately when they feel threatened. People who are anxious predominantly focus on any peril-related stimuli because they are hyper-vigilant. For example, when an anxious individual is about to cross the street as a car approaches a stop sign, that person will automatically assume the driver will not stop. This is recognition of threat through a semantic process that occurs instantaneously. Ambiguous cues are viewed as a threat when there is no relevant knowledge to make sense of them; people have a difficult time understanding such cues and respond negatively. This kind of behavior can explain how implicit cognition may be an influence on pathological anxiety.
Psychotic patients who have low self-esteem are thought to be prone to more serious illness. This idea was examined from both implicit and explicit perspectives by measuring the self-esteem of patients with paranoia and depression. Previous research suggests that negative implicit cognition is not a symptom of depression and paranoia but an antecedent of their onset. Current research proposes that high implicit self-esteem is linked to less paranoia. It is therefore important for patients who have low self-esteem to be more open about these situations. Another study found a substantial association between adverse self-questioning in implicit cognition and depression. People who do not think highly of themselves are more likely to be depressed because of this involuntary implicit learning.
Implicit cognition is another influential predictor for bipolar disorder and unipolar disorder. Research proposes that patients with bipolar disorder show more frequent implicit depressive self-referencing than unipolar patients. Implicit cognition plays a strong role in patients with both bipolar and unipolar disorder. These patients have dysfunctional self-schemata, which are viewed as a vulnerability for potential illnesses. Patients who have this vulnerability usually do not seek mental health assistance, which can make the condition more problematic to treat later. Bipolar disorder patients with low implicit self-esteem are more defensive, an unconscious manic-defensive reaction when they feel threatened in any way.
Since the growing research on implicit cognition is associated with abnormalities, researchers attempted to find a connection between implicit neuroticism and schizophrenia. Indeed, there was a correlation; participants with schizophrenia were high in implicit neuroticism and low in implicit extraversion when compared with mentally healthy people. Participants were given questionnaires that asked personality questions such as "I enjoy being the center of attention". Implicit cognition is linked with low levels of extraversion because these participants are known to avoid coping. Schizophrenia patients and healthy individuals differ in the associative representations they hold of themselves with respect to neuroticism. People with schizophrenia rely on implicit learning, an errorless learning style in which they do not take feedback from others.
Research on suicide can be a difficult process because suicidal patients are commonly covert about their intentions in order to avoid hospitalization. A self-related implicit association task was applied in one experiment to detect warning signs in people who might attempt suicide. This study found that patients who were released from mental hospitals showed significant implicit associations with suicide. The Implicit Association Task would predict whether a patient was likely to attempt suicide depending on their responses. An individual's implicit cognition may lead to behavior intended to best cope with stress; this behavior may be suicide, substance abuse, or even violence. However, an implicit association with death indicates those most at risk of attempting suicide, because such individuals see it as the best solution for ending their stress.
See also
Alief (mental state)
Consciousness
Implicit assumption
Implicit attitude
Implicit memory
Implicit stereotypes
Relational frame theory
Response priming
Subliminal stimuli
Notes
References
Ione, Amy. Implicit Cognition and Consciousness in Scientific Speculation and Development (Retrieved January 30, 2008)
External links
Project Implicit
ContextualPsychology.org
Behavioural sciences
Cognitive biases | Implicit cognition | Biology | 3,988 |
9,928,955 | https://en.wikipedia.org/wiki/Rastra | Rastra is a registered trade name for a particular insulating concrete form (ICF) construction system, for which Rastra coined the name insulating compound concrete form (ICCF), used to make walls for buildings. It is one of the earliest such products, first patented in Austria in 1965. Rastra has been in production since 1972 and is composed of concrete and Thastyron, a mixture of plastic foam and cementitious binder containing eighty-five percent recycled post-consumer polystyrene waste, which is molded into blocks and panels.
Production
Rastra is sustainable in its production because no energy is used in the curing process, and only one to three kilowatt-hours (kWh) are required to make each block of more than 10 sq ft. After the blocks are trimmed to exact size, the remaining debris is recycled to create new blocks. No byproducts considered a burden to the environment are released in the production process.
Building
Rastra blocks come in different sizes and can be easily cut with woodworking tools to form the desired shape. The blocks are commonly attached together with clamps or glue, forming a grid-like system of cavities inside. Rebar is then run through the grid, which is then filled with concrete.
History
Polystyrene concrete was invented in 1960. BASF, a German chemical conglomerate, originally created this product, but found no successful applications. An Austrian-Swiss-based company modified the product and created what is known as Rastra.
Fire rating
As a thermal barrier, Rastra has a four-hour fire rating with no flame spread and no smoke development. A five-hour fire endurance test of a ten-inch-thick wall with temperatures exceeding two thousand degrees Fahrenheit on the face of the wall showed that the wall did not conduct heat. This lowers the risk of health hazards during a fire and also makes building repairs easier afterwards.
Physical properties
Thastyron has a compressive strength of 56 pounds per square inch (psi) and a tensile strength of 43 psi. Rastra has a low toxicity level and is highly resistant to frost, fungus, and mildew. Its sound insulation is greater than 50 decibels (dB).
Insulation
As a thermal insulator, Rastra keeps a room at a constant temperature and evens out temperature changes, both of which can lower energy use. It also has a low heat penetration depth, meaning the wall surface stays at a nearly constant temperature.
References
Building materials
Concrete | Rastra | Physics,Engineering | 511 |
31,473,944 | https://en.wikipedia.org/wiki/Zinag | Zinag is an alloy of three metals (zinc, aluminium and silver). The composition of the alloy gives it excellent mechanical and anticorrosive properties. It is a low-density alloy that can be used in many sectors, such as the automotive, medical, aerospace and construction industries. The silver provides superplasticity, which allows the alloy to be deformed without losing its mechanical properties.
This alloy can be used in different processes, such as Zinagizado and the production of metal foams.
References
Further reading
Zinc alloys
Aluminium compounds
Silver compounds | Zinag | Chemistry | 117 |
591,513 | https://en.wikipedia.org/wiki/Optical%20cavity | An optical cavity, resonating cavity or optical resonator is an arrangement of mirrors or other optical elements that confines light waves similarly to how a cavity resonator confines microwaves. Optical cavities are a major component of lasers, surrounding the gain medium and providing feedback of the laser light. They are also used in optical parametric oscillators and some interferometers. Light confined in the cavity reflects multiple times, producing modes with certain resonance frequencies. Modes can be decomposed into longitudinal modes that differ only in frequency and transverse modes that have different intensity patterns across the cross section of the beam. Many types of optical cavities produce standing wave modes.
Different resonator types are distinguished by the focal lengths of the two mirrors and the distance between them. Flat mirrors are not often used because of the difficulty of aligning them to the needed precision. The geometry (resonator type) must be chosen so that the beam remains stable, i.e. the size of the beam does not continually grow with multiple reflections. Resonator types are also designed to meet other criteria such as a minimum beam waist or having no focal point (and therefore no intense light at a single point) inside the cavity.
Optical cavities are designed to have a large Q factor, meaning a beam undergoes many oscillation cycles with little attenuation. In the regime of high Q values, this is equivalent to the frequency line width being small compared to the resonant frequency of the cavity.
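By way of illustration (this example is not part of the original article), the following Python sketch estimates the linewidth and Q factor of a simple two-mirror cavity from the standard Fabry–Pérot relations, assuming mirror transmission is the only loss; the cavity length, mirror reflectivities, wavelength and function name are arbitrary choices for the example.

```python
import math

c = 299_792_458.0  # speed of light in vacuum, m/s

def fabry_perot_q(length, refl1, refl2, wavelength):
    """Estimate linewidth and Q factor of a two-mirror cavity using the
    standard Fabry-Perot relations (mirror transmission as the only loss)."""
    nu0 = c / wavelength                          # resonant optical frequency
    fsr = c / (2 * length)                        # free spectral range
    finesse = math.pi * (refl1 * refl2) ** 0.25 / (1 - math.sqrt(refl1 * refl2))
    linewidth = fsr / finesse                     # FWHM of one resonance
    q_factor = nu0 / linewidth                    # Q = nu0 / delta-nu
    return fsr, finesse, linewidth, q_factor

# Example: 10 cm cavity, 99.5 % reflective mirrors, 1064 nm light
fsr, finesse, dnu, q = fabry_perot_q(0.10, 0.995, 0.995, 1064e-9)
print(f"FSR {fsr/1e9:.2f} GHz, finesse {finesse:.0f}, "
      f"linewidth {dnu/1e6:.2f} MHz, Q {q:.2e}")
```

For these example numbers the linewidth is a few megahertz against an optical frequency near 3 × 10^14 Hz, giving a Q of order 10^8, which is the sense in which the line width is "small compared to the resonant frequency".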
Resonator modes
Light confined in a resonator will reflect multiple times from the mirrors, and due to the effects of interference, only certain patterns and frequencies of radiation will be sustained by the resonator, with the others being suppressed by destructive interference. In general, radiation patterns which are reproduced on every round-trip of the light through the resonator are the most stable. These are known as the modes of the resonator.
Resonator modes can be divided into two types: longitudinal modes, which differ in frequency from each other; and transverse modes, which may differ in both frequency and the intensity pattern of the light. The basic, or fundamental transverse mode of a resonator is a Gaussian beam.
Resonator types
The most common types of optical cavities consist of two facing plane (flat) or spherical mirrors. The simplest of these is the plane-parallel or Fabry–Pérot cavity, consisting of two opposing flat mirrors. While simple, this arrangement is rarely used in large-scale lasers due to the difficulty of alignment; the mirrors must be aligned parallel within a few seconds of arc, or "walkoff" of the intracavity beam will result in it spilling out of the sides of the cavity. However, this problem is much reduced for very short cavities with a small mirror separation distance (L < 1 cm). Plane-parallel resonators are therefore commonly used in microchip and microcavity lasers and semiconductor lasers. In these cases, rather than using separate mirrors, a reflective optical coating may be directly applied to the laser medium itself. The plane-parallel resonator is also the basis of the Fabry–Pérot interferometer.
For a resonator with two mirrors with radii of curvature R1 and R2, there are a number of common cavity configurations. If the two radii are equal to half the cavity length (R1 = R2 = L / 2), a concentric or spherical resonator results. This type of cavity produces a diffraction-limited beam waist in the centre of the cavity, with large beam diameters at the mirrors, filling the whole mirror aperture. Similar to this is the hemispherical cavity, with one plane mirror and one mirror of radius equal to the cavity length.
A common and important design is the confocal resonator, with mirrors of equal radii to the cavity length (R1 = R2 = L). This design produces the smallest possible beam diameter at the cavity mirrors for a given cavity length, and is often used in lasers where the purity of the transverse mode pattern is important.
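As a minimal numerical sketch (not from the article, and assuming the textbook Gaussian-beam result that a symmetric confocal cavity has a Rayleigh range equal to half the mirror spacing), the following estimates the waist at the cavity centre and the spot size on the mirrors; the 30 cm length and 633 nm wavelength are arbitrary example values.

```python
import math

def confocal_waist(length, wavelength):
    """Waist at the centre and spot size on the mirrors of a symmetric
    confocal cavity (R1 = R2 = L), taking the Rayleigh range as L / 2."""
    rayleigh = length / 2
    w0 = math.sqrt(wavelength * rayleigh / math.pi)  # waist at cavity centre
    w_mirror = w0 * math.sqrt(2)                     # spot size on each mirror
    return w0, w_mirror

w0, wm = confocal_waist(0.30, 633e-9)   # 30 cm cavity, HeNe wavelength
print(f"waist {w0*1e3:.3f} mm, mirror spot {wm*1e3:.3f} mm")
# roughly 0.17 mm at the centre and 0.25 mm on the mirrors
```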
A concave-convex cavity has one convex mirror with a negative radius of curvature. This design produces no intracavity focus of the beam, and is thus useful in very high-power lasers where the intensity of the light might be damaging to the intracavity medium if brought to a focus.
Less common resonator types include optical ring resonators and whispering-gallery mode resonators, in which a resonance is formed by waves moving in a closed loop rather than reflecting between two mirrors.
Stability
Only certain ranges of values for R1, R2, and L produce stable resonators in which periodic refocussing of the intracavity beam is produced. If the cavity is unstable, the beam size will grow without limit, eventually growing larger than the size of the cavity mirrors and being lost. By using methods such as ray transfer matrix analysis, it is possible to calculate a stability criterion:

0 ≤ (1 − L/R1)(1 − L/R2) ≤ 1

Values which satisfy the inequality correspond to stable resonators.
The stability can be shown graphically by defining a stability parameter, g, for each mirror:

g1 = 1 − L/R1,  g2 = 1 − L/R2,

and plotting g1 against g2 as shown. Areas bounded by the line g1 g2 = 1 and the axes are stable. Cavities at points exactly on the line are marginally stable; small variations in cavity length can cause the resonator to become unstable, and so lasers using these cavities are in practice often operated just inside the stability line.
A simple geometric statement describes the regions of stability: A cavity is stable if the line segments between the mirrors and their centers of curvature overlap, but one does not lie entirely within the other.
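To make the criterion concrete, here is a minimal Python sketch (an illustration added here, not part of the article) that evaluates the two stability parameters and tests the inequality for several of the mirror configurations described above; the numerical lengths are arbitrary example values.

```python
import math

def cavity_stability(r1, r2, length):
    """Stability parameters g1, g2 of a two-mirror cavity and whether it
    satisfies the geometric stability condition 0 <= g1*g2 <= 1.
    Use math.inf for a flat mirror."""
    g1 = 1.0 - length / r1
    g2 = 1.0 - length / r2
    return g1, g2, 0.0 <= g1 * g2 <= 1.0

# Confocal (R1 = R2 = L): g1 = g2 = 0, on the boundary of stability
print(cavity_stability(0.5, 0.5, 0.5))
# Concentric (R1 = R2 = L/2): g1 = g2 = -1, also marginally stable
print(cavity_stability(0.25, 0.25, 0.5))
# Plane-parallel (flat mirrors): g1 = g2 = 1, marginally stable
print(cavity_stability(math.inf, math.inf, 0.5))
# A clearly unstable example: short cavity with two convex mirrors
print(cavity_stability(-0.2, -0.2, 0.5))
```

The confocal, concentric and plane-parallel cases sit exactly on the stability boundary, which is why practical designs are usually detuned slightly inside it, as noted above.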
In the confocal cavity, if a ray is deviated from its original direction in the middle of the cavity, its displacement after reflecting from one of the mirrors is larger than in any other cavity design. This prevents amplified spontaneous emission and is important for designing high power amplifiers with good beam quality.
Practical resonators
If the optical cavity is not empty (e.g., a laser cavity which contains the gain medium), the value of L needs to be adjusted to account for the index of refraction of the medium. Optical elements such as lenses placed in the cavity alter the stability and mode size. In addition, for most gain media, thermal and other inhomogeneities create a variable lensing effect in the medium, which must be considered in the design of the laser resonator.
Practical laser resonators may contain more than two mirrors; three- and four-mirror arrangements are common, producing a "folded cavity". Commonly, a pair of curved mirrors form one or more confocal sections, with the rest of the cavity being quasi-collimated and using plane mirrors. The shape of the laser beam depends on the type of resonator: The beam produced by stable, paraxial resonators can be well modeled by a Gaussian beam. In special cases the beam can be described as a single transverse mode and the spatial properties can be well described by the Gaussian beam, itself. More generally, this beam may be described as a superposition of transverse modes. Accurate description of such a beam involves expansion over some complete, orthogonal set of functions (over two-dimensions) such as Hermite polynomials or the Ince polynomials. Unstable laser resonators on the other hand, have been shown to produce fractal shaped beams.
Some intracavity elements are usually placed at a beam waist between folded sections. Examples include acousto-optic modulators for cavity dumping and vacuum spatial filters for transverse mode control. For some low power lasers, the laser gain medium itself may be positioned at a beam waist. Other elements, such as filters, prisms and diffraction gratings often need large quasi-collimated beams.
These designs allow compensation of the cavity beam's astigmatism, which is produced by Brewster-cut elements in the cavity. A Z-shaped arrangement of the cavity also compensates for coma while the 'delta' or X-shaped cavity does not.
Out-of-plane resonators lead to rotation of the beam profile and greater stability. The heat generated in the gain medium leads to frequency drift of the cavity; the frequency can therefore be actively stabilized by locking it to an unpowered reference cavity. Similarly, the pointing stability of a laser may be further improved by spatial filtering with an optical fibre.
Alignment
Precise alignment is important when assembling an optical cavity. For best output power and beam quality, optical elements must be aligned such that the path followed by the beam is centered through each element.
Simple cavities are often aligned with an alignment laser—a well-collimated visible laser that can be directed along the axis of the cavity. Observation of the path of the beam and its reflections from various optical elements allows the elements' positions and tilts to be adjusted.
More complex cavities may be aligned using devices such as electronic autocollimators and laser beam profilers.
Optical delay lines
Optical cavities can also be used as multipass optical delay lines, folding a light beam so that a long path-length may be achieved in a small space. A plane-parallel cavity with flat mirrors produces a flat zigzag light path, but as discussed above, these designs are very sensitive to mechanical disturbances and walk-off. When curved mirrors are used in a nearly confocal configuration, the beam travels on a circular zigzag path. The latter is called a Herriott-type delay line. A fixed insertion mirror is placed off-axis near one of the curved mirrors, and a mobile pickup mirror is similarly placed near the other curved mirror. A flat linear stage with one pickup mirror is used in case of flat mirrors and a rotational stage with two mirrors is used for the Herriott-type delay line.
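As a simple worked example (an illustration, not taken from the article), the following sketch computes the folded path length and the corresponding time delay of a multipass cell, approximating each pass as one mirror-to-mirror spacing; the 0.5 m spacing and 30 passes are arbitrary example values.

```python
c = 299_792_458.0  # speed of light, m/s

def multipass_delay(mirror_spacing, n_passes):
    """Folded path length and time delay of a multipass delay line,
    approximating each pass as one mirror-to-mirror spacing."""
    path = mirror_spacing * n_passes
    return path, path / c

path, delay = multipass_delay(0.5, 30)   # 0.5 m cell, 30 passes
print(f"path {path:.1f} m, delay {delay*1e9:.1f} ns")  # 15.0 m, about 50 ns
```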
The rotation of the beam inside the cavity alters the polarization state of the beam. To compensate for this, a single-pass delay line is also needed, made of either three or two mirrors in a 3D or 2D retro-reflection configuration respectively, mounted on top of a linear stage. To adjust for beam divergence, a second carriage on the linear stage carrying two lenses can be used. The two lenses act as a telescope, producing a flat phase front of a Gaussian beam on a virtual end mirror.
See also
Optical feedback
Multiple-prism grating laser oscillator (or Multiple-prism grating laser cavity)
Coupled mode theory
Vertical-cavity surface-emitting laser
References
Further reading
Koechner, William. Solid-state laser engineering, 2nd ed. Springer Verlag (1988).
An excellent two-part review of the history of optical cavities:
Cavity, optical
Laser science | Optical cavity | Materials_science,Engineering | 2,222 |
2,936,393 | https://en.wikipedia.org/wiki/Contact%20explosive | A contact explosive is a chemical substance that explodes violently when it is exposed to a relatively small amount of energy (e.g. friction, pressure, sound, light). Though different contact explosives have varying amounts of energy sensitivity, they are all much more sensitive relative to other kinds of explosives. Contact explosives are a part of a group of explosives called primary explosives, which are also very sensitive to stimuli but not to the degree of contact explosives. The extreme sensitivity of contact explosives is due to either chemical composition, bond type, or structure.
Types
Common contact explosives include nitrogen triiodide, silver fulminate, acetone peroxide, nitroglycerin, lead azide, dry picric acid, and various flash powders, several of which are discussed in the sections below.
Reasons for instability
Composition
Presence of nitrogen
Explosives that are nitrogen-based are incredibly volatile due to the stability of nitrogen in its diatomic state, N2. Most organic explosives are explosive because they contain nitrogen. They are defined as nitro compounds.
Nitro compounds are explosive because although the diatomic form of nitrogen is very stable—that is, the triple bond that holds N2 together is very strong, and therefore has a great deal of bond energy—the nitro compounds themselves are unstable, as the bonds between nitrogen atoms and other atoms in nitro compounds are weak by comparison. Therefore, little energy is required to overcome these weak bonds, but a great deal of energy is released in the exothermic process in which the strong triple bonds in N2 are formed. The rapidity of the reaction, due to the weakness of the bonds in nitro compounds, and the high quantity of overall energy released, due to the much higher strength of the triple bonds, produce the explosive qualities of these compounds.
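As a rough worked comparison (using approximate textbook average bond enthalpies, which are not taken from this article): breaking two weak N–N single bonds costs about 2 × 160 kJ/mol = 320 kJ/mol, whereas forming one N≡N triple bond releases about 945 kJ/mol, so on the order of 600 kJ/mol is released for every mole of N2 produced in this idealized comparison. This energy imbalance is why reactions that convert singly bonded nitrogen into N2 are so strongly exothermic and so rapid.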
Oxidizer and fuel
Some contact explosives contain both an oxidizer and a fuel in their composition. Chemicals like gasoline, a fuel, burn rather than explode because they must come into contact with atmospheric oxygen for combustion. However, if the compound already contains both the oxidant and the fuel, it produces a much faster and more violent reaction.
Bonds and structure
The structures and bonds that make up a contact explosive contribute to its instability. Covalent compounds with a very unequal sharing of electrons can fall apart easily and explosively. Nitrogen triiodide is a perfect example of this property. The three large iodine atoms attach themselves to one small nitrogen atom, so the atoms are held together by very weak bonds. Each weak bond is like a thread just waiting to break; any small amount of applied energy cuts this thread and releases the iodine and nitrogen atoms to react, allowing the reaction to occur quickly and release a large amount of energy.
The shape of the contact explosive molecule plays a role in its instability as well. Using nitrogen triiodide as an example again, its pyramidal shape forces the three iodine atoms to be incredibly close to each other. The shape further strains the already weak bonds that holds together this molecule.
Uses
Contact explosives are used in a variety of fields.
Military
Militaries use a variety of contact explosives in combat. Some can be manufactured into different types of bombs, tactical grenades, and even explosive bullets. Dry picric acid, which is more powerful than TNT, was used in blasting charges and artillery shells. Many contact explosives are used in detonators. For devices based on secondary explosives, a contact explosive in the detonator sets off an energetic chain reaction that eventually detonates the secondary explosive.
Compounds like lead azide are used to manufacture bullets that explode into shrapnel on impact.
Flash powders are used in a variety of military and police tactical pyrotechnics. Stun grenades, flash bangs, and flares all use flash powder to create bright, flashing lights and loud noise that disorients the enemy.
On the other hand, many of these cheap, volatile contact explosives are also used in improvised explosive devices (IEDs) created by terrorists and suicide bombers. For example, acetone peroxide passes through explosive detectors and is incredibly powerful, unstable, and deadly. Evidence for the instability of these IEDs lies in the multiple reports of premature or accidental IED explosions. However, when these explosives are used as intended, they have devastating consequences. The July 7, 2005, London bombings, the 2015 Paris attacks, and the 2016 Brussels bombings all used explosives that contained acetone peroxide.
Medicine
Angina pectoris, a symptom of ischaemic heart disease, is treated with nitroglycerin. Nitroglycerin is known as a vasodilator. Vasodilators work by relaxing the heart's blood vessels so the heart does not need to work as hard. Picric acid specifically has been used for burn treatment and as an antiseptic.
Theatrical/fireworks
The same flash powder used for military tactical pyrotechnics can also be used for several theatrical special effects. They are used to produce loud, bright flashes of light for effect. Though some flash powders are too volatile and dangerous to be safely used, there are milder compounds that are still incorporated into performances today.
Silver fulminate is used to make noise-makers, small contact poppers, and several other novelty fireworks. It is most widely used in bang snaps. In these small explosives, a minuscule amount of silver fulminate is encased in gravel and cigarette paper. Even with this small amount of silver fulminate, it produces a loud, sharp bang.
See also
Shock sensitivity
References
External links
List of shock-sensitive materials
Explosives | Contact explosive | Chemistry | 1,129 |
17,920,440 | https://en.wikipedia.org/wiki/Slave%20boson | The slave boson method is a technique for dealing with models of strongly correlated systems, providing a method to second-quantize valence fluctuations within a restrictive manifold of states.
In the 1960s the physicist John Hubbard introduced an operator, now named the "Hubbard operator", to describe the creation of an electron within a restrictive manifold of valence configurations. Consider for example a rare earth or actinide ion in which strong Coulomb interactions restrict the charge fluctuations to two valence states, such as the Ce4+ (4f0) and Ce3+ (4f1) configurations of a mixed-valence cerium compound. The corresponding quantum states of these two configurations are the singlet state $|0\rangle$ and the magnetic state $|\sigma\rangle$, where $\sigma = \pm\tfrac{1}{2}$ is the spin. The fermionic Hubbard operators that link these states are then

$$X_{\sigma 0} = |\sigma\rangle\langle 0|, \qquad X_{0\sigma} = |0\rangle\langle\sigma|.$$
The algebra of operators is closed by introducing the two bosonic operators

$$X_{00} = |0\rangle\langle 0|, \qquad X_{\sigma\sigma'} = |\sigma\rangle\langle\sigma'|.$$
Together, these operators satisfy the graded Lie algebra

$$[X_{ab}, X_{cd}]_{\pm} = X_{ad}\,\delta_{bc} \pm X_{cb}\,\delta_{ad},$$

where $[A, B]_{\pm} = AB \pm BA$ and the sign is chosen to be negative, unless both $X_{ab}$ and $X_{cd}$ are fermions, in which case it is positive. The Hubbard operators are the generators of the supergroup SU(2|1). This non-canonical algebra means that these operators do not satisfy Wick's theorem, which prevents a conventional diagrammatic or field theoretic treatment.
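As an illustrative check (added here, not part of the original article), the following Python sketch represents the Hubbard operators as matrices on the three local states |0⟩, |↑⟩ and |↓⟩ and verifies the graded Lie algebra above, using the anticommutator whenever both operators are fermionic. The single-ion matrix representation suffices for this on-site algebra; the indexing of the states and the helper names are choices made for the example.

```python
import numpy as np
from itertools import product

states = [0, 1, 2]          # 0 = empty |0>, 1 = |up>, 2 = |down>

def hubbard(a, b):
    """Hubbard operator X_ab = |a><b| as a 3x3 matrix."""
    m = np.zeros((3, 3))
    m[a, b] = 1.0
    return m

def is_fermionic(a, b):
    # X_ab changes the local fermion number iff exactly one index is the empty state
    return (a == 0) != (b == 0)

# Verify [X_ab, X_cd]_± = X_ad δ_bc ± X_cb δ_ad, with the anticommutator (+)
# used only when both operators are fermionic.
for a, b, c, d in product(states, repeat=4):
    sign = +1 if (is_fermionic(a, b) and is_fermionic(c, d)) else -1
    lhs = hubbard(a, b) @ hubbard(c, d) + sign * hubbard(c, d) @ hubbard(a, b)
    rhs = hubbard(a, d) * (b == c) + sign * hubbard(c, b) * (a == d)
    assert np.allclose(lhs, rhs)
print("graded Lie algebra verified for all 81 index combinations")
```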
In 1983 Piers Coleman introduced the slave boson formulation of the Hubbard operators, which enabled valence fluctuations to be treated within a field-theoretic approach. In this approach, the spinless configuration of the ion is represented by a spinless "slave boson" $|0\rangle = b^{\dagger}|\Omega\rangle$, whereas the magnetic configuration $|\sigma\rangle = f^{\dagger}_{\sigma}|\Omega\rangle$ is represented by an Abrikosov slave fermion. From these considerations, it is seen that the Hubbard operators can be written as

$$X_{\sigma 0} = f^{\dagger}_{\sigma} b$$

and

$$X_{0\sigma} = b^{\dagger} f_{\sigma}.$$

This factorization of the Hubbard operators faithfully preserves the graded Lie algebra. Moreover, the Hubbard operators so written commute with the conserved quantity

$$Q = b^{\dagger} b + \sum_{\sigma} f^{\dagger}_{\sigma} f_{\sigma}.$$

In Hubbard's original approach, $Q = 1$, but by generalizing this quantity to larger values, higher irreducible representations of SU(2|1) are generated.
The slave boson representation can be extended from two-component to $N$-component fermions, where the spin index $\sigma$ runs over $N$ values. By allowing $N$ to become large, while maintaining the ratio $q = Q/N$, it is possible to develop a controlled large-$N$ expansion.
The slave boson approach has since been widely applied to strongly correlated electron systems, and has proven useful in developing the resonating valence bond theory (RVB) of high temperature superconductivity and the understanding of heavy fermion compounds.
Bibliography
Condensed matter physics | Slave boson | Physics,Chemistry,Materials_science,Engineering | 516 |
884,375 | https://en.wikipedia.org/wiki/Ely%20Cathedral | Ely Cathedral, formally the Cathedral Church of the Holy and Undivided Trinity of Ely, is an Anglican cathedral in the city of Ely, Cambridgeshire, England.
The cathedral can trace its origin to the abbey founded in Ely in 672 by St Æthelthryth (also called Etheldreda). The earliest parts of the present building date to 1083, and it was granted cathedral status in 1109. Until the Reformation, the cathedral was dedicated to St Etheldreda and St Peter, at which point it was refounded as the Cathedral Church of the Holy and Undivided Trinity of Ely. It is the cathedral of the Diocese of Ely, which covers most of Cambridgeshire and western Norfolk, Essex, and Bedfordshire. It is the seat of the Bishop of Ely and a suffragan bishop, the Bishop of Huntingdon.
Architecturally, Ely Cathedral is outstanding both for its scale and stylistic details. Having been built in a monumental Romanesque style, the galilee porch, lady chapel and choir were rebuilt in an exuberant Decorated Gothic. Its most notable feature is the central octagonal tower, with lantern above, which provides a unique internal space and, along with the West Tower, dominates the surrounding landscape.
The cathedral is a major tourist destination, receiving around 250,000 visitors per year, and sustains a daily pattern of morning and evening services.
Anglo-Saxon abbey
Ely Abbey was founded in 672, by Æthelthryth (St Etheldreda), a daughter of Anna, King of East Anglia. It was a mixed community of men and women. Later accounts suggest her three successor abbesses were also members of the East Anglian Royal family. In later centuries, the depredations of Viking raids may have resulted in its destruction, or at least the loss of all records. It is possible that some monks provided a continuity through to its refoundation in 970, under the Rule of St Benedict. The precise siting of Æthelthryth's original monastery is not known. The presence of her relics, bolstered by the growing body of literature on her life and miracles, was a major driving force in the success of the refounded abbey. The church building of 970 was within or near the nave of the present building, and was progressively demolished from 1102 alongside the construction of the Norman church. The obscure Ermenilda of Ely also became an abbess sometime after her husband, Wulfhere of Mercia, died in 675.
Present-day church
The cathedral is built from stone quarried from Barnack in Northamptonshire (bought from Peterborough Abbey, whose lands included the quarries, for 8,000 eels a year), with decorative elements carved from Purbeck Marble and local clunch. The plan of the building is cruciform (cross-shaped), with an additional transept at the western end. The total length is , and the nave at over long remains one of the longest in Britain. The west tower is high. The unique Octagon 'Lantern Tower' is wide and is high. Internally, from the floor to the central roof boss the lantern is high. The cathedral is known locally as "the ship of the Fens", because of its prominent position above the surrounding flat landscape.
Norman abbey church
Having a pre-Norman history spanning 400 years and a re-foundation in 970, Ely over the course of the next hundred years had become one of England's most successful Benedictine abbeys, with a famous saint, treasures, library, book production of the highest order and lands exceeded only by Glastonbury. However the imposition of Norman rule was particularly problematic at Ely. Newly arrived Normans such as Picot of Cambridge were taking possession of abbey lands, there was appropriation of daughter monasteries such as Eynesbury by French monks, and interference by the Bishop of Lincoln was undermining its status. All this was exacerbated when, in 1071, Ely became a focus of English resistance, through such people as Hereward the Wake, culminating in the Siege of Ely, for which the abbey suffered substantial fines.
Under the Normans almost every English cathedral and major abbey was rebuilt from the 1070s onwards. If Ely was to maintain its status then it had to initiate its own building work, and the task fell to Abbot Simeon. He was the brother of Walkelin, the then Bishop of Winchester, and had himself been the prior at Winchester Cathedral when the rebuilding began there in 1079. In 1083, a year after Simeon's appointment as abbot of Ely, and when he was 90 years old, building work began. The years since the conquest had been turbulent for the Abbey, but the unlikely person of an aged Norman outsider effectively took sides with the Ely monks, reversed the decline in the abbey's fortunes, and found the resources, administrative capacity, identity and purpose to begin a mighty new building.
The design had many similarities to Winchester, a cruciform plan with central crossing tower, aisled transepts, a three-storey elevation and a semi-circular apse at the east end. It was one of the largest buildings under construction north of the Alps at the time. The first phase of construction took in the eastern arm of the church, and the north and south transepts. However, a significant break in the way the masonry is laid indicates that, with the transepts still unfinished, there was an unplanned halt to construction that lasted several years. It would appear that when Abbott Simeon died in 1093, an extended interregnum caused all work to cease. The administration of Ranulf Flambard may have been to blame. He illegally kept various posts unfilled, including that of Abbot of Ely, so he could appropriate the income. In 1099 he got himself appointed Bishop of Durham, in 1100 Abbot Richard was appointed to Ely and building work resumed. It is Abbot Richard who asserted Ely's independence from the Diocese of Lincoln, and pressed for it to be made a diocese in its own right, with the abbey church as its cathedral. Although Abbot Richard died in 1107, his successor Hervey le Breton was able to achieve this and become the first Bishop of Ely in 1109. This period at the start of the twelfth century was when Ely re-affirmed its link with its Anglo-Saxon past. The struggle for independence coincided with the period when resumption of building work required the removal of the shrines from the old building and the translation of the relics into the new church. This appears to have allowed, in the midst of a Norman-French hierarchy, an unexpectedly enthusiastic development of the cult of these pre-Norman saints and benefactors.
The Norman east end and the whole of the central area of the crossing are now entirely gone, but the architecture of the transepts survives in a virtually complete state, to give a good impression of how it would have looked. Massive walls pierced by Romanesque arches would have formed aisles running around all sides of the choir and transepts. Three tiers of archways rise from the arcaded aisles. Galleries with walkways could be used for liturgical processions, and above that is the Clerestory with a passage within the width of the wall.
Construction of the nave was underway from around 1115, and roof timbers dating to 1120 suggest that at least the eastern portion of the nave roof was in place by then. The great length of the nave required that it was tackled in phases and after completing four bays, sufficient to securely buttress the crossing tower and transepts, there was a planned pause in construction. By 1140 the nave had been completed together with the western transepts and west tower up to triforium level, in the fairly plain early Romanesque style of the earlier work. Another pause now occurred, for over 30 years, and when it resumed, the new mason found ways to integrate the earlier architectural elements with the new ideas and richer decorations of early Gothic.
The West Tower
The half-built west tower and upper parts of the two western transepts were completed under Bishop Geoffrey Ridel (1174–89), to create an exuberant west front, richly decorated with intersecting arches and complex mouldings. The new architectural details were used systematically to the higher storeys of the tower and transepts. Rows of trefoil heads and use of pointed instead of semicircular arches, results in a west front with a high level of orderly uniformity.
Originally the west front had transepts running symmetrically either side of the west tower. Stonework details on the tower show that an octagonal tower was part of the original design, although the current western octagonal tower was installed in 1400. Numerous attempts were made, during all phases of its construction to correct problems from subsidence in areas of soft ground at the western end of the cathedral. In 1405–1407, to cope with the extra weight from the octagonal tower, four new arches were added at the west crossing to strengthen the tower. The extra weight of these works may have added to the problem, as at the end of the fifteenth century the north-west transept collapsed. A great sloping mass of masonry was built to buttress the remaining walls, which remain in their broken-off state on the north side of the tower.
Galilee Porch
The Galilee Porch is now the principal entrance into the cathedral for visitors. Its original liturgical functions are unclear, but its location at the west end meant it may have been used as a chapel for penitents, a place where liturgical processions could gather, or somewhere the monks could hold business meetings with women, who were not permitted into the abbey. It also has a structural role in buttressing the west tower. The walls stretch over two storeys, but the upper storey now has no roof, it having been removed early in the nineteenth century. Its construction dating is also uncertain. Records suggest it was initiated by Bishop Eustace (1197–1215), and it is a notable example of Early English Gothic style. But there are doubts about just how early, especially as Eustace had taken refuge in France in 1208, and had no access to his funds for the next 3 years. George Gilbert Scott argued that details of its decoration, particularly the 'syncopated arches' and the use of Purbeck marble shafts, bear comparison with St Hugh's Choir, Lincoln Cathedral, and the west porch at St Albans, which both predate Eustace, whereas the foliage carvings and other details offer a date after 1220, suggesting it could be a project taken up, or re-worked by Bishop Hugh of Northwold.
Presbytery and East end
The first major reworking of an element of the Norman building was undertaken by Hugh of Northwold (bishop 1229–54). The eastern arm had been only four bays, running from the choir (then located at the crossing itself) to the high altar and the shrine to Etheldreda. In 1234 Northwold began an eastward addition of six further bays, which were built over 17 years, in a richly ornamented style with extensive use of Purbeck marble pillars and foliage carvings. It was built using the same bay dimensions, wall thicknesses and elevations as the Norman parts of the nave, but with an Early English Gothic style that makes it 'the most refined and richly decorated English building of its period'. St Etheldreda's remains were translated to a new shrine immediately east of the high altar within the new structure, and on completion of these works in 1252 the cathedral was reconsecrated in the presence of King Henry III and Prince Edward. As well as a greatly expanded presbytery, the new east end had the effect of inflating still further the significance of St Etheldreda's shrine. Surviving fragments of the shrine pedestal suggest its decoration was similar to the interior walls of the Galilee porch. The relics of the saints Wihtburh, Seaxburh (sisters of St Etheldreda) and Ermenilda (daughter of St Seaxburh of Ely) would also have been accommodated, and the new building provided much more space for pilgrims to visit the shrines, via a door in the North Transept. The presbytery has subsequently been used for the burials and memorials of over 100 individuals connected with the abbey and cathedral.
Lady Chapel
In 1321, under the sacrist Alan of Walsingham, work began on a large free-standing Lady Chapel, linked to the north aisle of the chancel by a covered walkway. The chapel is long and wide, and was built in an exuberant 'Decorated' Gothic style over the course of the next 30 years. Masons and finances were unexpectedly required for the main church from 1322, which must have slowed the progress of the chapel. The north and south wall each have five bays, comprising large traceried windows separated by pillars each of which has eight substantial niches and canopies which once held statues.
Below the window line, and running round three sides of the chapel is an arcade of richly decorated 'nodding ogees', with Purbeck marble pillars, creating scooped out seating booths. There are three arches per bay plus a grander one for each main pillar, each with a projecting pointed arch covering a subdividing column topped by a statue of a bishop or king. Above each arch is a pair of spandrels containing carved scenes which create a cycle of 93 carved relief sculptures of the life and miracles of the Virgin Mary. The carvings and sculptures would all have been painted. The window glass would all have been brightly coloured with major schemes perhaps of biblical narratives, of which a few small sections have survived. At the reformation, the edict to remove images from the cathedral was carried out very thoroughly by Bishop Thomas Goodrich. The larger statues have gone. The relief scenes were built into the wall, so each face or statue was individually hacked off, but leaving many finely carved details, and numerous puzzles as to what the original scenes showed. After the reformation it was redeployed as the parish church (Holy Trinity) for the town, a situation which continued up to 1938.
In 2000 a life-size statue of the Virgin Mary by David Wynne was installed above the lady chapel altar. The statue was criticised by local people and the cathedral dean said he had been inundated with letters of complaint.
Octagon
The central octagonal tower, with its vast internal open space and its pinnacles and lantern above, forms the most distinctive and celebrated feature of the cathedral. However, what Pevsner describes as Ely's 'greatest individual achievement of architectural genius' came about through a disaster at the centre of the cathedral. On the night of 12–13 February 1322, possibly as a result of digging foundations for the Lady Chapel, the Norman central crossing tower collapsed. Work on the Lady Chapel was suspended as attention transferred to dealing with this disaster. Instead of being replaced by a new tower on the same ground plan, the crossing was enlarged to an octagon, removing all four of the original tower piers and absorbing the adjoining bays of the nave, chancel and transepts to define an open area far larger than the square base of the original tower. The construction of this unique and distinctive feature was overseen by Alan of Walsingham. The extent of his influence on the design continues to be a matter of debate, as are the reasons such a radical step was taken. Mistrust of the soft ground under the failed tower piers may have been a major factor in moving all the weight of the new tower further out.
The large stone octagonal tower, with its eight internal archways, leads up to timber vaulting that appears to allow the large glazed timber lantern to balance on its slender struts. The roof and lantern are actually held up by a complex timber structure above the vaulting which could not be built in this way today because there are no trees big enough. The central lantern, also octagonal in form, but with angles offset from the great Octagon, has panels showing pictures of musical angels, which can be opened, with access from the Octagon roof-space, so that real choristers can sing from on high. More wooden vaulting forms the lantern roof. At the centre is a wooden boss carved from a single piece of oak, showing Christ in Majesty. The elaborate joinery and timberwork was brought about by William Hurley, master carpenter in the royal service.
It is unclear what damage was caused to the Norman chancel by the fall of the tower, but the three remaining bays were reconstructed under Bishop John Hotham (1316–1337) in an ornate Decorated style with flowing tracery. Structural evidence shows that this work was a remodelling rather than a total rebuilding. New choirstalls with carved misericords and canopy work were installed beneath the octagon, in a similar position to their predecessors. Work was resumed on the Lady Chapel, and the two westernmost bays of Northwold's presbytery were adapted by unroofing the triforia so as to enhance the lighting of Etheldreda's shrine. Starting at about the same time the remaining lancet windows of the aisles and triforia of the presbytery were gradually replaced by broad windows with flowing tracery. At the same period extensive work took place on the monastic buildings, including the construction of the elegant chapel of Prior Crauden.
Chantry Chapels
In the late fifteenth and early sixteenth centuries elaborate chantry chapels were inserted in the easternmost bays of the presbytery aisles, on the north for Bishop John Alcock (1486–1500) and on the south for Bishop Nicholas West (1515–33).
John Alcock was born in around 1430, the son of a Hull merchant, but achieved high office in both church and state. Amongst his many duties and posts he was given charge of Edward IV's sons, who became known as the Princes in the Tower. That Alcock faithfully served Edward IV and his sons as well as Henry VII adds to the mystery of how their fate was kept secret. Appointed bishop of Rochester and then Worcester by Edward IV, he was also declared 'Lord President of Wales' in 1476. On Henry VII's victory over Richard III in 1485, Alcock became interim Lord Chancellor and in 1486 was appointed Bishop of Ely. As early as 1476 he had endowed a chantry for his parents at Hull, but the resources Ely put at his disposal allowed him to found Jesus College, Cambridge and build his own fabulous chantry chapel in an ornate style. The statue niches with their architectural canopies are crammed so chaotically together that some of the statues never got finished as they were so far out of sight. Others, although completed, were overlooked by the destructions of the reformation, and survived when all the others were destroyed. The extent to which the chapel is squashed in, despite cutting back parts of the Norman walls, raises the possibility that the design, and perhaps even some of the stonework, was done with a more spacious bay at Worcester in mind. On his death in 1500 he was buried within his chapel.
Nicholas West had studied at Cambridge, Oxford and Bologna, had been a diplomat in the service of Henry VII and Henry VIII, and became Bishop of Ely in 1515. For the remaining 19 years of his life he 'lived in greater splendour than any other prelate of his time, having more than a hundred servants.' He was able to build the magnificent Chantry chapel at the south-east corner of the presbytery, panelled with niches for statues (which were destroyed or disfigured just a few years later at the reformation), and with fan tracery forming the ceiling, and West's tomb on the south side.
In 1771 the chapel was also used to house the bones of seven Saxon 'benefactors of the church'. These had been translated from the old Saxon Abbey into the Norman building, and had been placed in a wall of the choir when it stood in the Octagon. When the choir stalls were moved, their enclosing wall was demolished, and the bones of Wulfstan (died 1023), Osmund of Sweden, Athelstan of Elmham, Ælfwine of Elmham, Ælfgar of Elmham, Eadnoth of Dorchester and Byrhtnoth, eorldorman of Essex, were found, and relocated into West's chapel. Also sharing Nicholas West's chapel, against the east wall, is the tomb memorial to the bishop Bowyer Sparke, who died in 1836.
Dissolution and Reformation
On 18 November 1539 the royal commissioners took possession of the monastery and all its possessions, and for nearly two years its future hung in the balance as Henry VIII and his advisers considered what role, if any, Cathedrals might play in the emerging Protestant church. On 10 September 1541 a new charter was granted to Ely, at which point Robert Steward, the last prior, was re-appointed as the first dean, who, with eight prebendaries formed the dean and chapter, the new governing body of the cathedral. Under Bishop Thomas Goodrich's orders, first the shrines to the Anglo-Saxon saints were destroyed, and as iconoclasm increased, nearly all the stained glass and much of the sculpture in the cathedral was destroyed or defaced during the 1540s. In the Lady Chapel the free-standing statues were destroyed and all 147 carved figures in the frieze of St Mary were decapitated, as were the numerous sculptures on West's chapel. The Cathedrals were eventually spared on the basis of three useful functions: propagation of true worship of God, educational activity, and care of the poor. To this end, vicars choral, lay clerks and boy choristers were all appointed (many having previously been members of the monastic community), to assist in worship. A grammar school with 24 scholars was established in the monastic buildings, and in the 1550s plate and vestments were sold to buy books and establish a library. The passageway running to the Lady Chapel was turned into an almshouse for six bedemen. The Lady Chapel itself was handed over to the town as Holy Trinity Parish Church in 1566, replacing a very unsatisfactory lean-to structure that stood against the north wall of the nave. Many of the monastic buildings became the houses of the new Cathedral hierarchy, although others were demolished. Much of the Cathedral itself had little purpose. The whole East end was used simply as a place for burials and memorials. The cathedral was damaged in the Dover Straits Earthquake of 6 April 1580, where stones fell from the vaulting.
Difficult as the sixteenth century had been for the cathedral, it was the period of the Commonwealth that came nearest to destroying both the institution and the buildings. Throughout the 1640s, with Oliver Cromwell's army occupying the Isle of Ely, a puritanical regime of worship was imposed. Bishop Matthew Wren was arrested in 1642 and spent the next 18 years in the Tower of London. That no significant destruction of images occurred during the Civil War and the Commonwealth would appear to be because it had been done so thoroughly 100 years before. In 1648 parliament encouraged the demolition of the buildings, so that the materials could be sold to pay for 'relief of sick and maimed soldiers, widows and children'. That this did not happen, and that the building suffered nothing worse than neglect, may have been due to protection by Oliver Cromwell, although the uncertainty of the times, and apathy rather than hostility to the building may have been as big a factor.
Restoration
When Charles II was invited to return to Britain, alongside the political restoration there began a process of re-establishing the Church of England. Matthew Wren, whose high church views had kept him in prison throughout the period of the Commonwealth, was able to appoint a new cathedral chapter. The dean, by contrast was appointed by the crown. The three big challenges for the new hierarchy were to begin repairs on the neglected buildings, to re-establish Cathedral services, and to recover its lands, rights and incomes. The search for lost deeds and records to establish their rights took over 20 years but most of the rights to the dispersed assets appear to have been regained.
In the 1690s a number of very fine baroque furnishings were introduced, notably a marble font (for many years kept in St Peter’s Church, Prickwillow) and an organ case mounted on the Romanesque pulpitum (the stone screen dividing the nave from the liturgical choir) with trumpeting angels and other embellishments. In 1699 the north-west corner of the north transept collapsed and had to be rebuilt. The works included the insertion of a fine classical doorway in the north face. Christopher Wren has sometimes been associated with this feature, and he may have been consulted by Robert Grumbold, the mason in charge of the project. Grumbold had worked with Wren on Trinity College Library in Cambridge a few years earlier, and Wren would have been familiar with the Cathedral through his uncle Matthew Wren, bishop from 1638 to 1667. He was certainly among the people with whom the dean (John Lambe 1693–1708) discussed the proposed works during a visit to London. The damaged transept took from 1699 to 1702 to rebuild, and with the exception of the new doorway, the works faithfully re-instated the Romanesque walls, windows, and detailing. This was a landmark approach in the history of restoration.
Bentham and Essex
Two people stand out in Ely Cathedral's eighteenth-century history, one a minor canon and the other an architectural contractor. James Bentham (1709–1794), building on the work of his father Samuel, studied the history of both the institution and architecture of the cathedral, culminating in 1771 with his publication of The History and Antiquities of the Conventual and Cathedral Church of Ely. He sought out original documents to provide definitive biographical lists of abbots, priors, deans and bishops, alongside a history of the abbey and cathedral, and was able to set out the architectural development of the building with detailed engravings and plans. These plans, elevations and sections had been surveyed by the architect James Essex (1722–1784), who by this means was able to both highlight the poor state of parts of the building, and understand its complex interdependencies.
The level of expertise that Bentham and Essex brought to the situation enabled a well-prioritised series of repairs and sensitive improvements to be proposed that occupied much of the later eighteenth century. Essex identified the decay of the octagon lantern as the starting point of a major series of repairs, and was appointed in 1757 to oversee the work. 400 years of weathering and decay may have removed many of the gothic features, and shortage of funds allied to a Georgian suspicion of ornament resulted in plain and pared down timber and leadwork on the lantern. He was then able to move on to re-roof the entire eastern arm and restore the eastern gable which had been pushed outwards some .
Bentham and Essex were both enthusiastic proponents of a longstanding plan to relocate the 14th-century choir stalls from under the octagon. With the octagon and east roof dealt with, the scheme was embarked on in 1769, with Bentham, still only a minor canon, appointed as clerk of works. By moving the choir stalls to the far east end of the cathedral, the octagon became a spacious public area for the first time, with vistas to east and west and views of the octagon vaulting. They also removed the Romanesque pulpitum and put in a new choir screen two bays east of the octagon, surmounted by the 1690s organ case. Despite their antiquarian interests, Bentham and Essex appear to have dismantled the choir stalls with alarming lack of care, and saw no problem in clearing away features at the east end, and removing the pulpitum and medieval walls surrounding the choir stalls. The north wall turned out to incorporate the bones of seven 'Saxon worthies' which would have featured on the pilgrim route into the pre-Reformation cathedral. The bones were rehoused in Bishop West's Chapel. The choir stalls, with their misericords were however retained, and the restoration as a whole was relatively sympathetic by the standards of the period.
The Victorians
The next major period of restoration began in the 1840s and much of the oversight was the responsibility of Dean George Peacock (1839–58). In conjunction with the Cambridge Professor Robert Willis, he undertook thorough investigations into the structure, archaeology and artistic elements of the building, and made a start on what became an extensive series of refurbishments by restoring the south-west transept. This had been used as a 'workshop', and by stripping out more recent material and restoring the Norman windows and arcading, they set a pattern that would be adopted in much of the Victorian period works. In 1845, by which time the cathedral had works underway in many areas, a visiting architect, George Basevi, who was inspecting the west tower, tripped, and fell 36 feet to his death. He was given a burial in the north choir aisle. Works at this time included cleaning back thick layers of limewash, polishing pillars of Purbeck marble, painting and gilding roof bosses and corbels in the choir, and a major opening up of the West tower. A plaster vault was removed that had been put in only 40 years before, and the clock and bells were moved higher. The addition of iron ties and supports allowed removal of vast amounts of infill that was supposed to strengthen the tower, but had simply added more weight and compounded the problems.
George Gilbert Scott
George Gilbert Scott was, by 1847, emerging as a successful architect and keen exponent of the Gothic Revival. He was brought in, as a professional architect to bolster the enthusiastic amateur partnership of Peacock and Willis, initially in the re-working of the fourteenth-century choir stalls. The stalls had been at the east end for 80 years; Scott oversaw their move back towards the Octagon, but this time they remained within the eastern arm, keeping the open space of the Octagon clear. This was Scott's first cathedral commission. He went on to work on a new carved wooden screen and brass gates, moved the high altar two bays westwards, and installed a lavishly carved and ornamented alabaster reredos carved by Rattee and Kett, a new font for the south-west transept, a new Organ case and later a new pulpit, replacing the neo-Norman pulpit designed by John Groves in 1803. In 1876 Scott's designs for the octagon lantern parapet and pinnacles were implemented, returning it to a form which, to judge from pre-Essex depictions, seems to be genuinely close to the original. Various new furnishings replaced the baroque items installed in the 1690s.
Stained glass
In 1845 Edward Sparke, son of the bishop Bowyer Sparke, and himself a canon, spearheaded a major campaign to re-glaze the cathedral with coloured glass. At that time there was hardly any medieval glass (mostly a few survivals in the Lady Chapel) and not much of post-reformation date. An eighteenth-century attempt to get James Pearson to produce a scheme of painted glass had produced only one window and some smaller fragments. With the rediscovery of staining techniques, and the renewed enthusiasm for stained glass that swept the country as the nineteenth century progressed, almost all areas of the cathedral received new glazing. Under Sparke's oversight, money was found from donors, groups, bequests, even gifts by the artists themselves, and by Edward Sparke himself. A wide variety of designers and manufacturers were deliberately used, to help find the right firm to fill the great lancets at the east end. In the event, it was William Wailes who undertook this in 1857, having already begun the four windows of the octagon, as well as contributions to the south west transept, south aisle and north transept. Other windows were by the Gérente brothers, William Warrington, Alexander Gibbs, Clayton and Bell, Ward and Nixon, Hardman & Co., and numerous other individuals and firms from England and France.
A timber boarded ceiling was installed in the nave and painted with scenes from the Old and New Testaments, first by Henry Styleman Le Strange and then, after Le Strange's death in 1862, completed by Thomas Gambier Parry, who also repainted the interior of the octagon.
A further major programme of structural restoration took place between 1986 and 2000 under Deans William Patterson (1984–90) and Michael Higgins (1991–2003), directed by successive Surveyors to the Fabric, initially Peter Miller and from 1994 Jane Kennedy. Much of this restoration work was carried out by Rattee and Kett. In 2000 a Processional Way was built, restoring the direct link between the north choir aisle and the Lady Chapel.
In 1972, the Stained Glass Museum was established to preserve windows from churches across the country that were being closed by redundancy. It opened to the public in 1979 in the north triforium of Ely Cathedral and following an appeal, an improved display space was created in the south triforium opening in 2000. Besides rescued pieces, the collection includes examples from Britain and abroad that have been donated or purchased through bequests, or are on loan from the Victoria and Albert Museum, the Royal Collection, and Friends of Friendless Churches.
Religious community
Ely has been an important centre of Christian worship since the seventh century AD. Most of what is known about its history before the Norman Conquest comes from Bede's Historia ecclesiastica gentis Anglorum written early in the eighth century and from the Liber Eliensis, an anonymous chronicle written at Ely some time in the twelfth century, drawing on Bede for the very early years, and covering the history of the community until the twelfth century. According to these sources the first Christian community here was founded by Æthelthryth (romanised as "Etheldreda"), daughter of the Anglo-Saxon King Anna of East Anglia, who was born at Exning near Newmarket. She may have acquired land at Ely from her first husband Tondberht, described by Bede as a "prince" of the South Gyrwas. After the end of her second marriage to Ecgfrith, a prince of Northumbria, in 673 she set up and ruled as abbess a dual monastery at Ely for men and for women. When she died, a shrine was built there to her memory. This monastery is recorded as having been destroyed in about 870 in the course of Danish invasions. However, while the lay settlement of the time would have been a minor one, it is likely that a church survived there until its refoundation in the tenth century. The history of the religious community during that period is unclear, but accounts of the refoundation in the tenth century suggest that there had been an establishment of secular priests.
In the course of the revival of the English church under Dunstan, Archbishop of Canterbury, and Aethelwold, Bishop of Winchester, Ely Abbey was reestablished in 970 as a community of Benedictine monks. This was one of a wave of monastic refoundations which locally included Peterborough and Ramsey (see English Benedictine Reform). Ely became one of the leading Benedictine houses in late Anglo-Saxon England. Following the Norman conquest of England in 1066 the abbey allied itself with the local resistance to Norman rule led by Hereward the Wake. The new regime having established control of the area, after the death of the abbot Thurstan, a Norman successor Theodwine was installed. In 1109 Ely attained cathedral status with the appointment of Hervey le Breton as bishop of the new diocese which was taken out of the very large diocese of Lincoln. This involved a division of the monastic property between the bishopric and the monastery, whose establishment was reduced from 70 to 40 monks, headed by a prior; the bishop being titular abbot. From 1216 the cathedral priory was part of the Canterbury Province of the English Benedictine Congregation, an umbrella chapter made up of the abbots and priors of the Benedictine houses of England, remaining so until the dissolution.
In 1539, during the Dissolution of the Monasteries, Ely Cathedral Priory surrendered to Henry VIII's commissioners. The cathedral was refounded by royal charter in 1541 with the former prior Robert Steward as dean and the majority of the former monks as prebendaries and minor canons, supplemented by Matthew Parker, later Archbishop of Canterbury, and Richard Cox, later Bishop of Ely. With a brief interruption from 1649 to 1660 during the Commonwealth, when all cathedrals were abolished, this foundation has continued in its essentials to the twenty-first century, with a reduced number of residentiary canons now supplemented by a number of lay canons appointed under a Church Measure of 1999.
As with other cathedrals, Ely's pattern of worship centres around the Opus Dei, the daily programme of services drawing significantly on the Benedictine tradition. It also serves as the mother church of the diocese and ministers to a substantial local congregation. At the Dissolution the veneration of St Etheldreda was suppressed, her shrine in the cathedral was destroyed, and the dedication of the cathedral to her and St Peter was replaced by the present dedication to the Holy and Undivided Trinity. Since 1873 the practice of honouring her memory has been revived, and annual festivals are celebrated, commemorating events in her life and the successive "translations" – removals of her remains to new shrines – which took place in subsequent centuries.
Dean and chapter
Dean – Mark Bonney (since 22 September 2012 installation)
Precentor – James Garrard (since 29 November 2008 installation)
Canon residentiary – James Reveley
Canon residentiary and (Diocesan) Initial Ministerial Education (IME) co-ordinator – Jessica Martin (since 10 September 2016 installation)
Burials
The burials below are listed in date order
Æthelthryth – Abbess of Ely in 679. The shrine was destroyed in 1541; her relics are alleged to be in St Etheldreda's Church, Ely Place, London and St Etheldreda's Roman Catholic Church, Ely
Seaxburh – Abbess of Ely in about 699
Wihtburh – possible sister of Æthelthryth, founder and abbess of convent in Dereham. Died 743 and buried in the cemetery of Ely Abbey, reinterred in her church in Dereham 798, remains stolen in 974 and buried in Ely Abbey
Byrhtnoth – patron of Ely Abbey, died leading Anglo-Saxon forces at the Battle of Maldon in 991
Eadnoth the Younger – Abbot of Ramsey, Bishop of Dorchester, killed in 1016 fighting against Cnut, his body was seized and hidden by Ely monks and subsequently venerated as Saint Eadnoth the Martyr
Wulfstan II – Archbishop of York (1002–1023), he died in York but according to his wishes he was buried in the monastery of Ely. Miracles are ascribed to his tomb by the Liber Eliensis
Alfred Aetheling – son of the English king Æthelred the Unready (1012–1037)
Hervey le Breton – First Bishop of Ely (1109–1131)
Nigel – Bishop of Ely (1133–1169), may have been buried here
Geoffrey Ridel – the nineteenth Lord Chancellor of England and Bishop of Ely (1173–1189)
Eustace – Bishop of Ely (1197–1215), also the twenty-third Lord Chancellor of England and Lord Keeper of the Great Seal. Buried near the altar of St Mary
John of Fountains – Bishop of Ely (1220–1225), "in the pavement" near the high altar
Geoffrey de Burgo – Bishop of Ely (1225–1228), buried in north choir but no surviving tomb or monument has been identified as his
Hugh of Northwold – Bishop of Ely (1229–1254), buried next to a shrine to St Etheldreda in the presbytery that he built, his tomb was moved to the north choir aisle but the location of his remains is unclear
William of Kilkenny – Lord Chancellor of England and Bishop of Ely (1254–1256), his heart was buried here, having died in Spain on a diplomatic mission for the king
Hugh de Balsham – Bishop of Ely (1256–1286), founder of Peterhouse, his tomb has not been firmly identified
John Kirkby – Lord High Treasurer of England and Bishop of Ely (1286–1290), a marble tomb slab located in the north choir aisle may possibly be from his tomb
William of Louth – Bishop of Ely (1290–1298), his elaborate tomb is near the entrance to the Lady Chapel in the south choir aisle
John Hotham – Chancellor of the Exchequer, Lord High Treasurer, Lord Chancellor and Bishop of Ely (1316–1337), died after two years of paralysis
John Barnet – Bishop of Ely (1366–1373)
Louis II de Luxembourg – Cardinal, Archbishop of Rouen and Bishop of Ely (1437–1443). He is not known to have ever visited the cathedral; after his death at Hatfield his bowels were interred in the church there, his heart at Rouen and his body at Ely on the south side of the Presbytery
John Tiptoft – 1st Earl of Worcester ('The Butcher of England') (1427–1470), in a large tomb in the South Choir Aisle
William Grey – Lord High Treasurer of England and Bishop of Ely (1454–1478)
John Alcock – Lord Chancellor of England and Bishop of Ely (1486–1500), in the Alcock Chantry
Richard Redman – Bishop of Ely (1501–1505)
Nicholas West – Bishop of Ely (1515–1534), buried in the Bishop West Chantry Chapel, which he built, at the eastern end of the South Choir Aisle
Thomas Goodrich – Bishop of Ely (1534–1554), buried in the South Choir
Robert Steward – First Dean of Ely (1541–1557)
Richard Cox – Bishop of Ely (1559–1581), buried in a tomb over which the choir box was built
Martin Heton – Bishop of Ely (1599–1609)
Humphrey Tyndall – Dean of Ely (1591–1614)
Henry Caesar – Dean of Ely (1614–1636)
Benjamin Lany – Bishop of Ely (1667–1675)
Peter Gunning – Bishop of Ely (1675–1684)
Simon Patrick – Bishop of Ely (1691–1707)
William Marsh – Gentleman of Ely (1642–1708), marble mural erected above the entrance to the Lady Chapel.
John Moore – Bishop of Ely (1707–1714)
William Fleetwood – Bishop of Ely (1714–1723), in the north chancel aisle
Robert Moss – Dean of Ely (1713–1729)
Thomas Green – Bishop of Ely (1723–1738)
Robert Butts – Bishop of Ely (1738–1748)
Matthias Mawson – Bishop of Ely (1754–1771)
Edmund Keene – Bishop of Ely (1771–1781), in the Bishop West Chantry Chapel (his wife, Mary, was buried in the south side of the choir)
Bowyer Sparke – Bishop of Ely (1812–1836), in the Bishop West Chantry Chapel
George Basevi – Architect. Died 1845, aged 51, after falling through an opening in the floor of the old bell chamber of the west tower of Ely Cathedral while inspecting repairs. Buried in North Choir Aisle under a monumental brass
Joseph Allen – Bishop of Ely (1836–1845)
William Hodge Mill – (1792–1853) the first principal of Bishop's College, Calcutta, and later Regius Professor of Hebrew at Cambridge and Canon at Ely Cathedral
James Woodford – Bishop of Ely (1873–1885), in Matthew Wren's chapel on the south side of the choir
Harry Legge-Bourke – died 1973 while Member of Parliament for the Isle of Ely
Music
The cathedral retains six professional adult lay clerks who sing in the Cathedral Choir along with boy and girl choristers aged 7 to 13 who receive choristerships funded by the cathedral to attend the King's Ely school as day or boarding pupils. From 2021, boy and girl choristers sing an equal number of services and receive an equal scholarship towards school fees at King's Ely. The Director of Music leads the boy choristers, and the girl choristers are led by Sarah MacDonald.
The Octagon Singers and Ely Imps are voluntary choirs of local adults and children respectively.
Organ
Details of the organ from the National Pipe Organ Register
Organists
The following is a list of organists recorded since the cathedral was refounded in 1541 following the Second Act of Dissolution. Where not directly appointed as Organist, the position is inferred by virtue of their appointment as Master of the Choristers, or most recently as Director of Music.
Stained Glass Museum
The south triforium is home to the Stained Glass Museum, a collection of stained glass from the thirteenth century to the present that is of national importance and includes works from notable contemporary artists including Ervin Bossanyi.
In popular culture
The cathedral was the subject of a watercolour by J. M. W. Turner, in about 1796.
The cathedral appears on the horizon in the cover photo of Pink Floyd's 1994 album The Division Bell, and in the music video of a single from that album, "High Hopes".
Pink Floyd's David Gilmour recorded orchestral and choral parts for his 2024 album Luck and Strange at the cathedral.
The covers of a number of John Rutter's choral albums feature an image of the cathedral, a reference to early recordings of his music being performed and recorded in the Lady chapel.
Direct references to the cathedral appear in the children's book Tom's Midnight Garden by Philippa Pearce. A full-length movie with the same title was released in 1999.
A section of the film Elizabeth: The Golden Age was filmed at the cathedral in June 2006.
Filming for The Other Boleyn Girl took place at the cathedral in August 2007.
Parts of Marcus Sedgwick's 2000 novel Floodland take place at the cathedral after the sea has consumed the land around it, turning Ely into an island.
Direct references to Ely Cathedral are made in Jill Dawson's 2006 novel Watch Me Disappear.
A week's filming took place in November 2009 at the cathedral, when it substituted for Westminster Abbey in The King's Speech.
In April 2013 Mila Kunis was at the cathedral filming Jupiter Ascending.
In 2013, in the movie Snowpiercer, the west tower appeared in a collection of frozen ruined man-made structures in the dystopian future when a view of the outside world was briefly shown as the train Snowpiercer was encircling the globe.
The film Assassin's Creed shot scenes in Ely Cathedral in July 2013.
The film Macbeth used the cathedral for filming in February and March 2014.
In 2016 the cathedral was substituted for Westminster Abbey again in the Netflix original series The Crown.
Shooting for the 2023 film Maestro took place at the cathedral between October 20 and 22, 2022.
See also
References
Further reading
W. E. Dickson. Ely Cathedral (Isbister & Co., 1897).
Richard John King. Handbook to the Cathedrals of England – Vol. 3, (John Murray, 1862).
D. J. Stewart. On the architectural history of Ely cathedral (J. Van Voorst, 1868).
Peter Meadows and Nigel Ramsay, eds., A History of Ely Cathedral (The Boydell Press, 2003).
Lynne Broughton, Interpreting Ely Cathedral (Ely Cathedral Publications, 2008).
John Maddison, Ely Cathedral: Design and Meaning (Ely Cathedral Publications, 2000).
Janet Fairweather, trans., Liber Eliensis: A History of the Isle of Ely from the Seventh Century to the Twelfth Compiled by a Monk of Ely in the Twelfth Century (The Boydell Press, 2005).
Peter Meadows, ed., Ely: Bishops and Diocese, 1109–2009 (The Boydell Press, 2010).
External links
Descriptive tour of Ely Cathedral
The Stained Glass Museum at Ely Cathedral
A history of the choristers of Ely Cathedral
Flickr images tagged Ely Cathedral
Discussion of the lady chapel by Janina Ramirez and Will Shank: Art Detective Podcast, 20 Feb 2017
Anglican cathedrals in England
Buildings and structures in Cambridgeshire
Ely, Cambridgeshire
Monasteries in Cambridgeshire
Anglo-Saxon monastic houses
English churches with Norman architecture
English Gothic architecture in Cambridgeshire
Tourist attractions in Cambridgeshire
Benedictine monasteries in England
Pre-Reformation Roman Catholic cathedrals
Grade I listed cathedrals
Grade I listed churches in Cambridgeshire
Museums in Cambridgeshire
Art museums and galleries in Cambridgeshire
Glass museums and galleries
Edward Blore buildings
Burial sites of the House of Wuffingas
Basilicas (Church of England)
Burial sites of the House of Luxembourg
ja:イーリー#イーリー大聖堂 | Ely Cathedral | Materials_science,Engineering | 10,061 |
7,569,885 | https://en.wikipedia.org/wiki/Artificial%20lift | Artificial lift is the use of artificial means to increase the flow of liquids, such as crude oil or water, from a production well. Generally this is achieved by the use of a mechanical device inside the well (known as pump or velocity string) or by decreasing the weight of the hydrostatic column by injecting gas into the liquid some distance down the well. A newer method called Continuous Belt Transportation (CBT) uses an oil absorbing belt to extract from marginal and idle wells. Artificial lift is needed in wells when there is insufficient pressure in the reservoir to lift the produced fluids to the surface, but often used in naturally flowing wells (which do not technically need it) to increase the flow rate above what would flow naturally. The produced fluid can be oil, water or a mix of oil and water, typically mixed with some amount of gas.
Usage
Any liquid-producing reservoir will have a 'reservoir pressure': some level of energy or potential that will force fluid (liquid, gas or both) to areas of lower energy or potential. The concept is similar to that of water pressure in a municipal water system. As soon as the pressure inside a production well is decreased below the reservoir pressure, the reservoir will act to fill the well back up, just like opening a valve on a water system. Depending on the depth of the reservoir and density of the fluid, the reservoir may or may not have enough potential to push the fluid to the surface - a deeper well or a heavier mixture results in a higher pressure requirement.
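A rough feel for this requirement comes from the hydrostatic pressure of the fluid column. The relation below is a simplified estimate (it ignores friction and any free gas in the column) and is given here only as background, not as part of any cited design method:

\[
P_{\text{hydrostatic}} = \rho \, g \, h
\]

For example, a column of water (ρ ≈ 1,000 kg/m3) in a 1,500 m well exerts roughly 1,000 × 9.81 × 1,500 ≈ 14.7 MPa (about 2,100 psi) at the bottom of the well; the reservoir must supply at least this pressure for the liquid to reach the surface unaided.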
Technologies
Hydraulic pumping systems
Hydraulic pumping systems transmit energy to the bottom of the well by means of pressurized power fluid that flows down in the wellbore tubular to a subsurface pump. There are at least three types of hydraulic subsurface pump:
a reciprocating piston pump, where one side is powered by the (injected) drive fluid while the other side pumps the produced fluids to surface
a jet pump, where the (injected) drive fluid passes through a nozzle-throat venturi combination, mixes with produced fluids and by the venturi effect creates a high pressure at the discharge side of the pump.
a hydraulically driven downhole turbine (HSP), whereby the downhole drive motor is a turbine, mechanically connected to the impeller-pump section which pumps the fluid.
These systems are very versatile and have been used from shallow depths (1,000 ft) to deeper wells (18,000 ft), and from low-rate wells producing tens of barrels per day to wells producing in excess of 20,000 bbl (3,200 m3) per day. In most cases the drive (injected) fluid can be water or produced fluids (oil/water mix). Certain chemicals can be mixed in with the injected fluid to help control corrosion, paraffin and emulsion problems. Hydraulic pumping systems are also suitable for deviated wells where conventional pumps such as the rod pump are not feasible.
Like all systems, these systems have their operating envelopes, though with hydraulic pumps these are often misunderstood by designers. Some types of hydraulic pumps may be sensitive to solids, while jet pumps, for example, can pump solids volume fractions of more than 50%. They are considered the least efficient lift method, though this differs between the different types of hydraulic pumps, and when full system losses are considered the differences in many installations are negligible.
The life-cycle cost of these systems is similar to other types of artificial lift when appropriately designed, bearing in mind that they are typically low maintenance, with jet pumps for instance having slightly higher operating (energy) costs with substantially lower purchase cost and virtually no repair cost.
ESP
Electric Submersible Pumps (ESP) consist of a downhole pump (a series of centrifugal pumps), an electrical motor which transforms the electrical power into kinetic energy to turn the pump, a separator or protector to prevent produced fluids from entering the electrical motor, and an electric power cable that connects the motor to the surface control panel. ESP is a very versatile artificial lift method and can be found in operating environments all over the world. They can handle a very wide range of flow rates (from 200 to per day) and lift requirements (from virtually zero to 10,000 ft (3,000 m) of lift). They can be modified to handle contaminants commonly found in oil, aggressive corrosive fluids such as H2S and CO2, and exceptionally high downhole temperatures. Increasing water cut has been shown to have no significant detrimental effect on the ESP performance. It is possible to locate them in vertical, deviated, or horizontal wells, but it is recommended to deploy them in a straight section of casing for optimum run life performance.
Although latest developments are aimed to enhance the ESP capabilities to handle gas and sand, they still need more technological development to avoid gas locks and internal erosion. Until recently, ESPs have come with an often prohibitive price tag due to the cost of deployment which can be in excess of $20,000.
Various tools such as Automatic Diverter Valves (ADV), SandCats and other Tubing String and Pump Tools enhance the performance of the ESP. The majority of systems deployed in today's market are Dual ESP Systems which is a simple arrangement of two ESPs in the same well. This delivers a complete downhole system booster or back up - downtime is minimal, workovers cost less and there are savings in other operational areas. ESP Dual Systems bring a significant enhancement of well profitability.
Gas Lift
Gas lift is another widely used artificial lift method. As the name denotes, gas is injected into the tubing to reduce the weight of the hydrostatic column, thus reducing the back pressure and allowing the reservoir pressure to push the mixture of produced fluids and gas up to the surface. Gas lift can be deployed in a wide range of well conditions. Gas lift copes well with abrasive elements and sand, and the cost of workover is minimal.
Gas lifted wells are equipped with side pocket mandrels and gas lift injection valves. This arrangement allows deeper gas injection in the tubing. The gas lift system has some disadvantages: there has to be a source of gas, and flow assurance problems such as hydrate formation can be triggered by the gas lift.
This uses the injection of gas into the fluid stream, which reduces the fluid density and lowers the bottomhole pressure. As the gas rises, the bubbles help to push the oil ahead. The degree of the effect depends on whether the gas flow is continuous or intermittent. The gas can be injected at a single point below the fluid or may be supplemented by multipoint injection. An intermitter at the surface controls the timing of the gas injection. The mechanisms are either pressure or fluid operated: they may be throttling valves or casing pressure operated valves.
Fluid operated valves require a rise in tubing pressure to open and drop to close.
A throttling pressure valve is opened by casing pressure build up and closed by casing pressure drop.
Conventional gas lift valves are attached to gas lift mandrels and wire line retrievable gas lift valves which are set in side pocket mandrels.
Rod pumps
Rod pumps are long slender cylinders with both fixed and moveable elements inside. The pump is designed to be inserted inside the tubing of a well and its main purpose is to gather fluids from beneath it and lift them to the surface. The most important components are: the barrel, valves (traveling and fixed) and the piston. It also has another 18 to 30 components which are called "fittings".
Components
Every part of the pump is important for its correct operation. The most commonly used parts are described below:
Barrel: The barrel is a long cylinder, which can be from 10 to long, with a diameter of to . After experience with several materials for its construction, the American Petroleum Institute (API) standardized the use of two materials or compositions for this part: carbon steel and brass, both with an inside coating of chrome. The advantage of brass against the harder carbon steel is its 100% resistance to corrosion.
Piston/Plunger: This is a nickel-metal sprayed steel cylinder that goes inside the barrel. Its main purpose is to create a sucking effect that lifts the fluids beneath it and then, with the help of the valves, take the fluids above it, progressively, out of the well. It achieves this with a reciprocating up and down movement.
Valves: The valves have two components - the seat and the ball - which create a complete seal when closed. The most commonly used seats are made of carbon nitride and the ball is often made of silicon nitride. In the past, balls of iron, ceramic and titanium were used. Titanium balls are still being used but only where crude oil is extremely dense and/or the quantity of fluid to be lifted is large. The most common configuration of a rod pump requires two valves, called the traveling valve and the fixed (or static or standing) valve.
Piston rod: This is a rod that connects the piston with the outside of the pump. Its main purpose is to transfer the up/down reciprocating energy produced by the "Nodding Donkey" (pumping unit) installed above ground.
Fittings: The rest of the parts of the pump are called fittings and are, basically, small pieces designed to keep everything held together in the right place. Most of these parts are designed to let the fluids pass uninterrupted.
Filter/Strainer: The job of the filter, as implied, is to stop large fragments of rock, rubber or any other garbage that might be loose in the well from being sucked into the pump. There are several types of filters, with the most common being an iron cylinder with enough holes in it to permit the entrance of the amount of fluid the pump needs.
Sub-Surface Pumping
The sub-surface pump displaces the fluid at the bottom of the well thus lowering the bottom hole pressure.
The movement of the plunger and the traveling valve helps to create a low pressure, thus moving fluid up the well. The traveling valve is opened on the downstroke and closed on the upstroke. It is on the upstroke that it carries the fluid up the well. The sucker rod is usually 25 ft long. There are three types of pumping units: Class I, Mark II, or air balanced. By changing the stroke length or the pump rate the production rate can be changed.
The production measured in barrels per day can be calculated with the following formula: P = S × N × C, where P = production in barrels per day, S = downhole stroke length (inches), N = number of strokes per minute, and C = a constant derived from the following:
Plunger Diameter = Constant "C"
1 1/16" = 0.132
1 1/4" = 0.182
1 1/2" = 0.262
1 3/4" = 0.357
2" = 0.468
2 1/4" = 0.590
2 1/2" = 0.728
2 3/4" = 0.881
3 1/4" = 1.231
3 3/4" = 1.639
For an online calculator: Don-Nan Sucker Rod Pump Production Calculator (bpd)
Production at 100% is theoretical. 80% is a more realistic production calculation.
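The relation above is straightforward to evaluate in a short script. The sketch below simply encodes the plunger-diameter constants listed in this article; the function name and the 80% efficiency default are illustrative assumptions rather than part of any standard.

```python
# Minimal sketch of the P = S x N x C relation described above.

# Plunger-diameter constants "C" as listed in this article.
PLUNGER_CONSTANTS = {
    '1 1/16"': 0.132,
    '1 1/4"': 0.182,
    '1 1/2"': 0.262,
    '1 3/4"': 0.357,
    '2"': 0.468,
    '2 1/4"': 0.590,
    '2 1/2"': 0.728,
    '2 3/4"': 0.881,
    '3 1/4"': 1.231,
    '3 3/4"': 1.639,
}

def rod_pump_production(stroke_in, strokes_per_min, plunger, efficiency=0.80):
    """Estimated production in barrels per day.

    stroke_in       -- downhole stroke length S, in inches
    strokes_per_min -- pumping speed N, in strokes per minute
    plunger         -- plunger diameter key from PLUNGER_CONSTANTS
    efficiency      -- fraction of the theoretical rate actually realised
    """
    c = PLUNGER_CONSTANTS[plunger]
    theoretical_bpd = stroke_in * strokes_per_min * c  # P = S x N x C
    return theoretical_bpd * efficiency

# Example: 86-inch stroke, 8 strokes per minute, 1 1/2" plunger
print(round(rod_pump_production(86, 8, '1 1/2"'), 1))  # ~144.2 bbl/day at 80% efficiency
```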
Hybrid Gas Lift and Rod Pump
A new technology has recently been developed which combines gas lift with a rod pump, dedicating a separate tubing string in the wellbore to each lift method. This technique is designed specifically to lift wells with the unique geometry of horizontal/deviated wellbores, and also vertical wells that have deep or very long perforated intervals or too high a gas–liquid ratio (GLR) for conventional artificial lift methods. In this design, the rod pump is placed in the vertical portion of the well above the deviated or perforated interval, while relatively low-pressure, low-volume gas is used to lift reservoir liquids from the deviated or extended perforated interval to above the rod pump. Once the liquids are raised above the pump, they become trapped above a packer and then enter the pump chamber where they are transported to the surface.
This design overcomes high maintenance costs, gas interference issues, and depth limitations of installing conventional pumping systems into the deviated or extended perforated intervals and also overcomes the significant back pressure exerted on the reservoir by conventional gas lift.
PCP
Progressing Cavity Pumps (PCP) are also widely applied in the oil industry. The PCP consists of a stator and a rotor. The rotor is rotated using either a top side motor or a bottom hole motor. The rotation creates sequential cavities, and the produced fluids are pushed to the surface.
The PCP is a flexible system with a wide range of applications in terms of rate and depth. PCPs offer outstanding resistance to abrasives and solids, but they are restricted in setting depth and temperature. Some components of the produced fluids, such as aromatics, can also deteriorate the stator's elastomer.
Rodless Pumping
These can be either hydraulic or electric submersible. The hydraulic type uses high pressure power fluid to operate a downhole fluid engine. The engine in turn drives a piston that moves the fluid to the surface. The power fluid system can be either open or closed, depending on whether the power fluid can be mixed with well fluid. This type of system usually has above-ground power fluid pumps and a reservoir. The electric submersible is another type of rodless pumping system. This uses an electric pump submerged in the well and connected to a series of transformers and control equipment that power and control the pumping rate. In this system the electric motor is isolated from the oil by a protector. The fluid intake, which is before the pump mechanism, has a gas separator; the junction box on the surface also helps to dissipate any gas that may have come up the power lines.
Essentially the rod and rodless pumping mechanisms help to achieve the fluid movement by reducing the bottom hole pressure by displacing the fluid above it all by mechanical means.
Another method is the plunger lift mechanism which utilizes the tubing string as the barrel. It uses gas to power a plunger.
There are several variations of these methods that can be used. They include; jet pumping involving a hydraulic pump and nozzle that transfers fluid momentum directly to the producing fluid or chamber lift which is a modified gas lift mechanism that has no back pressure. There are also modified rod pumping design units that use either a winch or pneumatic mechanism to work.
Continuous Belt Transportation
This method uses an oil absorbing continuous belt to transport heavy oil as an alternative to pumping. A single sided “O” shape belt driven by a Moebius surface unit cycles continuously to the underground unit, below the static level, capturing the oil and transporting up to the surface unit for collection. The oleophilic properties of the belt ensure that sand, paraffin, and most of the water are not captured.
Due to its relatively low rate of oil capture (below 130 barrels per day at a maximum depth of 4,000 meters) and very low cost of operation, this method is used primarily in stripper, marginal, idle, and abandoned wells. The optimal targets for CBT are reservoirs with medium, heavy and very heavy oil, at a maximum temperature of 130 °C. High-volume, light oil wells are not suitable for this method.
See also
Plunger lift
References
"Upwing Energy: Artificial Lift for Natural Gas"
"Your one source of ESP system and HPS in North America"
"NOV - Artificial Lift Page"
" Schlumberger Page on Artificial Lift" Accessed Jan 24 2007
Petroleum Engineering Handbook Bradley H, Society of Petroleum Engineers, Richardson, TX, U.S.A, 1987
External links
Defining Artificial Lift
Pumps
Petroleum production
Oil wells | Artificial lift | Physics,Chemistry | 3,269 |
4,168,493 | https://en.wikipedia.org/wiki/Chromium%28II%29%20chloride | Chromium(II) chloride describes inorganic compounds with the formula CrCl2(H2O)n. The anhydrous solid is white when pure; however, commercial samples are often grey or green. It is hygroscopic and readily dissolves in water to give bright blue, air-sensitive solutions of the tetrahydrate Cr(H2O)4Cl2. Chromium(II) chloride has no commercial uses but is used on a laboratory scale for the synthesis of other chromium complexes.
Synthesis
CrCl2 is produced by reducing chromium(III) chloride either with hydrogen at 500 °C:
2CrCl3 + H2 → 2CrCl2 + 2HCl
or by electrolysis.
On the laboratory scale, LiAlH4, zinc, and related reductants produce chromous chloride from chromium(III) precursors:
4 CrCl3 + LiAlH4 → 4 CrCl2 + LiCl + AlCl3 + 2 H2
2 CrCl3 + Zn → 2 CrCl2 + ZnCl2
CrCl2 can also be prepared by treating a solution of chromium(II) acetate with hydrogen chloride:
Cr2(OAc)4 + 4 HCl → 2 CrCl2 + 4 AcOH
Treatment of chromium powder with concentrated hydrochloric acid gives a blue hydrated chromium(II) chloride, which can be converted to a related acetonitrile complex.
Cr + nH2O + 2HCl → CrCl2(H2O)n + H2
Structure and properties
Anhydrous CrCl2 is white; however, commercial samples are often grey or green. It crystallizes in the Pnnm space group, an orthorhombically distorted variant of the rutile structure, making it isostructural with calcium chloride. The Cr centres are octahedral, distorted by the Jahn–Teller effect.
The hydrated derivative, CrCl2(H2O)4, forms monoclinic crystals with the P21/c space group. The molecular geometry is approximately octahedral consisting of four short Cr—O bonds (2.078 Å) arranged in a square planar configuration and two longer Cr—Cl bonds (2.758 Å) in a trans configuration.
Reactions
The reduction potential for Cr3+ + e− ⇄ Cr2+ is −0.41 V. Since the reduction potential of H+ to H2 in acidic conditions is +0.00 V, the chromous ion has sufficient potential to reduce acids to hydrogen, although this reaction does not occur without a catalyst.
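The arithmetic behind this statement can be made explicit. The following is a standard electrochemical estimate, not a calculation taken from the sources of this article:

\[
E^{\circ}_{\text{cell}} = E^{\circ}_{\mathrm{H^{+}/H_{2}}} - E^{\circ}_{\mathrm{Cr^{3+}/Cr^{2+}}} = 0.00\ \mathrm{V} - (-0.41\ \mathrm{V}) = +0.41\ \mathrm{V}
\]

\[
\Delta G^{\circ} = -nFE^{\circ}_{\text{cell}} = -(1)(96\,485\ \mathrm{C\,mol^{-1}})(0.41\ \mathrm{V}) \approx -40\ \mathrm{kJ}
\]

per mole of Cr2+ oxidised, so the reduction of H+ by the chromous ion is thermodynamically favourable even though, as noted, it does not proceed without a catalyst.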
Organic chemistry
Chromium(II) chloride is used as precursor to other inorganic and organometallic chromium complexes. Alkyl halides and nitroaromatics are reduced by CrCl2. The moderate electronegativity of chromium and the range of substrates that CrCl2 can accommodate make organochromium reagents very synthetically versatile. It is a reagent in the Nozaki-Hiyama-Kishi reaction, a useful method for preparing medium-size rings. It is also used in the Takai olefination to form vinyl iodides from aldehydes in the presence of iodoform.
References
Chromium(II) compounds
Chlorides
Metal halides
Reducing agents | Chromium(II) chloride | Chemistry | 710 |
46,876,332 | https://en.wikipedia.org/wiki/S5.92 | The S5.92 is a Russian rocket engine, currently used on the Fregat upper stage. It burns a hypergolic mixture of unsymmetrical dimethylhydrazine (UDMH) fuel with dinitrogen tetroxide (N2O4) oxidizer in the gas-generator cycle.
Design
The S5.92 has two throttle settings. The highest produces of thrust, a specific impulse of 327 seconds, and a 3-second ignition transient. The lower throttle level produces of thrust, specific impulse of 316 seconds, and a 2.5 second ignition transient. It is rated for 50 ignitions, and 300 days between ignitions.
History
It was originally designed by the famous A.M. Isayev Chemical Engineering Design Bureau, for the two spacecraft of the Phobos program. While the Mars missions were unsuccessful, the spacecraft manufacturer, NPO Lavochkin, found a market niche for the technology. Thus, the engine was adapted for use on the optional Fregat upper stage of the Soyuz and Zenit launch vehicles.
See also
Fregat - The upper stage that is powered by the S5.92.
Soyuz - A medium lift rocket that uses the Fregat stage.
Zenit-3F - A heavy lift rocket that uses the Fregat stage.
References
External links
KB KhIMMASH Official Page (in Russian)
NPO Lavochkin Fregat Page (in Russian)
Rocket engines of Russia
Rocket engines of the Soviet Union
Rocket engines using hypergolic propellant
Rocket engines using the gas-generator cycle
KB KhimMash rocket engines | S5.92 | Astronomy | 334 |
2,132,322 | https://en.wikipedia.org/wiki/Stokes%20operators | The Stokes operators are the quantum mechanical operators corresponding to the classical Stokes parameters. These matrix operators are identical to the Pauli matrices.
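For reference, the Pauli matrices referred to above are the standard 2 × 2 Hermitian matrices (general background rather than material from the reference below):

\[
\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad
\sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad
\sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.
\]

In one common two-mode (quantum optics) convention the Stokes operators are obtained by sandwiching these matrices, together with the 2 × 2 identity for the total photon number, between the creation and annihilation operators of two orthogonal polarisation modes; ordering and normalisation conventions vary between authors.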
External links
Stokes operators, angular momentum and radiation phase.
Quantum mechanics | Stokes operators | Physics | 41 |
50,396,981 | https://en.wikipedia.org/wiki/Cellosaurus | Cellosaurus is an online knowledge base on cell lines, which attempts to document all cell lines used in biomedical research. It is provided by the Swiss Institute of Bioinformatics (SIB). It is an ELIXIR Core Data Resource as well as an IRDiRC's Recognized Resource. It is the contributing resource for cell lines on the Resource Identification Portal. As of December 2022, it contains information for more than 144,000 cell lines.
Its scope includes immortalised cell lines, naturally immortal cell lines (example: embryonic stem cells) and finite life cell lines when those are distributed and used widely. The Cellosaurus provides a wealth of manually curated information; for each cell line it lists a recommended name, synonyms and the species of origin. Other types of information include standardised disease terminology (for cancer or genetic disorder cell lines), the transformant used to immortalise a cell line, transfected or knocked-out genes, microsatellite instability, doubling time, gender and age of donor (patient or animal), important sequence variations, web links, publication references and cross-references to close to 100 different databases, ontologies, cell collections and other relevant resources.
Since many cell lines used in research have been misidentified or contaminated, the Cellosaurus keeps track of problematic cell lines, including all those listed in the International Cell Line Authentication Committee (ICLAC) tables. For human as well as some dog cell lines, it provides short tandem repeat (STR) profile information. Since July 2018, cell lines in the Cellosaurus are represented as items in Wikidata. In March 2020, the Cellosaurus created a page containing cell line information relevant to SARS-CoV-2 in response to the COVID-19 pandemic.
The Cellosaurus is widely recognized as an authoritative source for cell line information, providing unique identifiers and serving as a source of curated information.
References
External links
Cellosaurus
Introductory video on the Cellosaurus
GitHub directory of Cellosaurus
Record in FAIRsharing.org
Record in Identifiers.org
Molecular biology
Biological databases
Cell biology
Cell culture | Cellosaurus | Chemistry,Biology | 435 |
13,745,959 | https://en.wikipedia.org/wiki/HD%20173780 | HD 173780 is a single star in the northern constellation Lyra, near the southern constellation border with Hercules. It is an orange-hued star that is faintly visible to the naked eye with an apparent visual magnitude of +4.84. This object is located at a distance of approximately 237 light years from the Sun based on parallax, but is drifting closer with a radial velocity of −17 km/s.
This is an aging giant star with a stellar classification of K3III. It is a red clump giant, indicating it is on the horizontal branch and is generating energy through the fusion of helium at its core. The star is 2.4 billion years old with 1.7 times the mass of the Sun. With the supply of hydrogen exhausted at its core, it has expanded to 16 times the radius of the Sun. The star is radiating 92 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of 4,468 K.
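These figures are mutually consistent under the Stefan–Boltzmann law; the following is a consistency check taking the solar effective temperature as about 5,772 K, not an additional measurement:

\[
\frac{L}{L_{\odot}} = \left(\frac{R}{R_{\odot}}\right)^{2} \left(\frac{T_{\mathrm{eff}}}{T_{\odot}}\right)^{4} \approx 16^{2} \left(\frac{4468}{5772}\right)^{4} \approx 92 .
\]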
References
K-type giants
Horizontal-branch stars
Lyra
Durchmusterung objects
173780
092088
7064 | HD 173780 | Astronomy | 227 |
25,106,789 | https://en.wikipedia.org/wiki/Winkler%20vine | The Winkler Vine was an example of large-vine grape culture. The vine was named after Albert J. Winkler, Chair of the Department of Viticulture and Enology (1935–1957) at the University of California, Davis. Planted in 1979, the Winkler vine was a Vitis vinifera cv. Mission, grafted onto a Vitis rupestris St George rootstock. A great deal of research has been carried out on pruning, training and vine size by researchers such as Ravaz, Winkler, Shaulis, Koblet, Howell, Carbonneau, Smart and Clingeleffer. Minimal pruning is a pruning/training system adapted from the principles of big-vine theory.
History
The Mission grape cultivar was originally brought to South America from Spain by Catholic missionaries and was introduced into California in the 18th century. Historically Mission was the most widely planted cultivar in California, up to the 1850s.
The Winkler Vine was trained on a 60 × 60 ft steel arbour, covering 1/12 of an acre, and was capable of producing over a tonne of fruit. The vine was damaged by a tractor 'early in its life' and carried a large canker; eventually it became infected with the wood rot Eutypa, causing its death in spring 2008.
Big vine viticulture
The vine was the classic example of big-vine culture, where vines are trained and pruned to allow the plant to express its natural vigour, rather than being forced into a restricted trellis system by heavy pruning. Eventually a 'big vine' will reach its maximum size and yield compensation will result in 'balanced' cropping. Big-vine production of grapes is not economically feasible (due to the time taken to establish the plant); however it demonstrates the natural ability of a vine to produce in a sustainable manner (yield vs. fruit ripeness vs. carbohydrate storage).
References
2008 endings
Individual plants
University of California, Davis campus
Viticulture | Winkler vine | Biology | 420 |
56,315,819 | https://en.wikipedia.org/wiki/Arithmetic%20billiards | In recreational mathematics, arithmetic billiards provide a geometrical method to determine the least common multiple and the greatest common divisor of two natural numbers. It makes use of reflections inside a rectangle which has sides with length of the two given numbers. This is an easy example of trajectory analysis used in dynamical billiards.
Arithmetic billiards can be used to show how two numbers interact. Drawing squares of length and width one (unit squares) within the rectangle allows a reader to read off information about the two numbers. If the two integers are coprime, the billiard path passes through every unit square within the rectangle.
Properties
Arithmetic billiards is the name given to a method of finding both the least common multiple (LCM) and the greatest common divisor (GCD) of two integers geometrically. It is named this way because the path used resembles the movement of a billiard ball.
A rectangle is drawn with a base equal to the larger number and a height equal to the smaller number. A path is started at the bottom left corner at a 45° angle and continues until it hits a side of the rectangle. Every time the path hits a side, it reflects with the same angle (the path makes either a left or a right 90° turn). Eventually (i.e. after a finite number of reflections) the path hits a corner and there it stops.
If one side length divides the other, the path is a zigzag consisting of one or more segments.
Else, the path has self-intersections and consists of segments of various lengths in two orthogonal directions.
In general, the path is the intersection of the rectangle with a grid of squares (oriented at 45° with respect to the rectangle sides).
Features
For the rectangle, shaped like a billiard table, we take a and b as the side lengths. It can be divided into a × b unit squares. The least common multiple lcm(a, b) is the number of unit squares crossed by the arithmetic billiard path or, equivalently, the length of the path divided by √2.
Suppose that none of the two side lengths divides the other. Then the first segment of the arithmetic billiard path contains the point of self-intersection which is closest to the starting point. The greatest common divisor gcd(a, b) is the number of unit squares crossed by the first segment of the path up to that point of self-intersection. The path goes through each unit square if and only if a and b are coprime integers – that is, they have a GCD of 1.
The number of bouncing points for the arithmetic billiard path on the two sides of length a equals a/gcd(a, b) − 1, and similarly for the two sides of length b. In particular, if a and b are coprime, then the total number of contact points between the path and the perimeter of the rectangle (i.e. the bouncing points plus the starting and ending corners) equals a + b.
The ending corner of the path is opposite to the starting corner if and only if a and b are exactly divisible by the same power of two (for example, if they are both odd); otherwise it is one of the two adjacent corners, according to whether a or b has more factors of two in its prime factorisation. If the starting and the ending corner are opposite, the path is point symmetric with respect to the center of the rectangle; otherwise it is symmetric with respect to the perpendicular bisector of the side connecting the starting and the ending corner.
The contact points between the arithmetic billiard path and the rectangle perimeter are evenly distributed: the distance along the perimeter (i.e. possibly going around the corner) between two such neighbouring points equals 2 gcd(a, b). Set coordinates in the rectangle such that the starting point is (0, 0) and the opposite corner is (a, b). Then any point on the arithmetic billiard path which has integer coordinates has the property that the sum of the coordinates is even (the parity cannot change by moving along diagonals of unit squares). The points of self-intersection of the path, the bouncing points, and the starting and ending corners are exactly the points in the rectangle whose coordinates are multiples of gcd(a, b) and such that the sum of the coordinates is an even multiple of gcd(a, b).
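These statements can be checked numerically. The sketch below is illustrative code (the function name is arbitrary): it traces the path square by square in an a × b rectangle and confirms that the number of unit squares crossed equals the least common multiple.

```python
# Trace the 45-degree billiard path in an a x b rectangle and count the unit squares it crosses.
from math import gcd

def billiard_square_count(a, b):
    x, y = 0, 0        # current lattice point on the path
    dx, dy = 1, 1      # current diagonal direction
    squares = set()    # unit squares crossed, identified by their lower-left corner
    while True:
        squares.add((x + (dx - 1) // 2, y + (dy - 1) // 2))
        x, y = x + dx, y + dy
        if (x, y) in {(0, 0), (a, 0), (0, b), (a, b)}:
            break      # the path stops when it reaches a corner
        if x in (0, a):
            dx = -dx   # bounce off a vertical side
        if y in (0, b):
            dy = -dy   # bounce off a horizontal side
    return len(squares)

a, b = 15, 40
assert billiard_square_count(a, b) == a * b // gcd(a, b)  # lcm(15, 40) = 120
print(billiard_square_count(a, b))  # 120
```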
Proof
There are a few ways to show a proof of arithmetic billiards.
Consider a square with side lcm(a, b). By displaying multiple copies of the original rectangle (with mirror symmetry) we can visualise the arithmetic billiard path as a diagonal of that square. In other words, we can think of reflecting the rectangle rather than the path segments.
It is convenient to rescale the rectangle, dividing a and b by their greatest common divisor, an operation which does not alter the geometry of the path (e.g. the number of bouncing points).
The motion of the path is “time reversible”, meaning that if the path is currently traversing one particular unit square
(in a particular direction), then there is absolutely no doubt from which unit square and from which direction it just came.
One generalisation
If we allow the starting point of the path to be any point in the rectangle with integer coordinates, then there are also periodic paths unless the rectangle sides are coprime. The length of any periodic path equals 2 · lcm(a, b) · √2.
Usage
Arithmetic billiards have been discussed as mathematical puzzles by both Hugo Steinhaus and Martin Gardner, and are sometimes used by teachers to demonstrate the GCD and LCM. They are sometimes referred to by the name 'Paper Pool', after the common version of billiards called pool. They have been used as a source of questions in mathematical circles.
External links
A website version drawing the lines themed around billiards
References
Arithmetic dynamics
Cue sports
Geometry | Arithmetic billiards | Mathematics | 1,141 |
50,484,294 | https://en.wikipedia.org/wiki/Garden%20waste%20dumping | Garden waste, or green waste dumping is the act of discarding or depositing garden waste somewhere it does not belong.
Garden waste is the accumulated plant matter from gardening activities which involve cutting or removing vegetation, i.e. mowing the lawn, weed removal, hedge trimming or pruning, and consists of lawn clippings, leaf matter, wood and soil.
The composition and volume of garden waste can vary from season to season and location to location. A study in Aarhus, Denmark, found that on average, garden waste generation per person ranged between 122 kg to 155 kg per year.
Garden waste may be used to create compost or mulch, which can be used as a soil conditioner, adding valuable nutrients and building humus. The creation of compost requires a balance between, nitrogen, carbon, moisture and oxygen. Without the ideal balance, plant matter may take a long time to break down, drawing nitrogen from other sources, reducing nitrogen availability to existing vegetation which requires it for growth.
The risks of dumping garden waste are that it may contain seeds and plant parts that can grow (propagules), and that it can increase fire fuel loads, disrupt visual amenity, and accrue economic costs associated with the removal of waste as well as with the mitigation of associated impacts such as weed control and forest fires.
Cause
There are strong links between weed invasion of natural areas and the proximity and density of housing. The size and duration of the community have a direct relation to the density of weed infestation. Of the various means by which migration of exotic species from gardens takes place, such as vegetative dispersal of runners and wind-borne and fallen seed, garden waste dumping can play a significant role. One North German study found that approximately 29% of the problematic population of Fallopia originated from garden waste. Of a population of Heracleum mantegazzianum, 18% was found by Schepker to have been generated by garden waste (as cited by Kowarik & von der Lippe, 2008, pp. 24–25).
An Australian government publication suggests that some of the main reasons for the dumping of garden waste can be attributed to a lack of care for the environment, convenience, or a reluctance to pay for the correct collection or disposal of the waste (Environmental Protection Agency [EPA], 2013). People dump garden waste to avoid disposal fees at landfill sites or because they do not want to spend the time or effort disposing of or recycling their waste properly. This activity is carried out by people in all parts of the community, from householders to businesses, such as professional landscapers and gardeners.
The spread of exotic vegetation can out-compete locally endemic vegetation, altering the composition and structure of an ecosystem.
Dumping of garden waste in particular facilitates the spread of exotic vegetation into forest remnants via the introduction of seeds and propagules contained within the garden waste. Common selection criteria for home gardeners when choosing plants are often based on ease of propagation, suitability to local environmental conditions and novelty. These specific chosen characteristics increase the chance of plant parts and seeds that are introduced into forested areas becoming a problem.
The three major causes of animal habitat degradation are; the disruption or loss of ecosystem functions, the loss of food resources and loss of resident species. Non-native invaders can cause extinctions of vulnerable native species through competition, pest and disease transportation and habitat and ecosystem alteration.
The dumping of garden waste in nature reserves surrounding and near urban areas increases the risk of fires. The dumped garden waste will eventually dry out, creating fuel that adds to the already fallen debris load on which a fire can thrive and spread. Garden waste can spread weeds and these weeds build fuel for fires. Dumped garden waste can also facilitate higher rates of erosion by smothering natural vegetation cover. With no root systems for stabilisation the topsoil is vulnerable to erosion (Ritter, J. 2015). This can add higher levels of sediment, contributing to the siltation of creeks and waterways.
If plant matter gets into waterways it can create reduced oxygen levels through the process of decomposition of green waste such as lawn clippings. This directly upsets the quality of water, affecting fish and aquatic wildlife.
This dumping of green waste can also lead to the blocking of drainage systems; directly through the build-up of plant debris, and indirectly through the spread of invasive plant species that colonise wet areas, reducing and or changing the flow of waterways. This change in flow, including path and velocity, can alter hydrological cycles, affecting frequency and intensity of floods.
Impact
Green and garden waste has a direct impact on the visual aesthetics of land and can often attract further illegal dumping.
Increased fire risk
Dumping garden waste in nature reserves and parks surrounding and near urban areas can directly and indirectly affect the existing flora and fauna, as well as human life, through the increased risk of fires. The dumped garden waste will eventually dry, creating additional fuel that adds to already fallen debris on which a fire can thrive and spread. Garden waste can spread weeds and these weeds also build fuel for fires. Fires may also spread to suburban areas, where people can lose their homes, suffer injury or death from smoke or burns, and incur economic losses such as lost income and clean-up costs. Fires can lead to an overall loss of habitat and biodiversity.
Threat to biodiversity
The invasion of exotic plant species into remnant native forests is a threat to biodiversity. Some impacts of habitat degradation include: native animals, insects and birds becoming vulnerable and put at risk; loss of food sources for native wildlife; disruption of native plant–animal relationships, i.e. pollination and seed dispersal; and disconnection of plant–host relationships. Highly adaptive plants chosen for their ease of cultivation outcompete more specialised species.
Weed invasion of a forest system can change the processes of plant succession (the system of one species replacing another due to disturbance factors), the composition of the plant community and the composition and availability of nutrients. The change in forest composition can lead to loss of unique plant species.
When a habitat is destroyed, the plants, animals, and other organisms that occupied the habitat have a reduced carrying capacity, so that populations decline and extinction becomes a threat. Many endemic organisms have very specific requirements for their survival that can only be found within a certain ecosystem. The term 'hotspot' is used to describe areas featuring exceptional concentrations of endemic species and facing a high potential of habitat degradation. The 25 most significant hotspots contain the habitats of 133,149 plant species (44% of all plant species worldwide) and 9,645 vertebrate species (35% of all vertebrate species worldwide). These endemics are confined to an expanse of 2.1 million square kilometers (1.4% of the land surface); the hotspots, having lost 88% of their primary vegetation, formerly occupied 17.4 million square kilometers, or 11.8% of the land surface.
The recruitment of alien invasive species may lead to a homogenisation of landscapes. Although newly introduced species may increase biodiversity within subregions, the displacement of existing plant species may reduce biodiversity on a global scale.
When population-level properties indicating superior competitive ability of invading species are examined, 13–24 of the species studied (42–77%) qualify, with the majority showing traits capable of modifying natural systems at both the ecosystem and the community/population scale.
Waterways quality
The dumping of green waste such as lawn clippings can block drainage systems, polluting the water and thereby affecting water quality and the health of aquatic plants and animals. Dumped garden waste can add high levels of sediment, reducing the light available for photosynthesis. Dumping can also block waterways and roads, cause flooding, and facilitate higher rates of erosion by smothering natural vegetation cover.
Causes / stakeholders
Illegal dumping is carried out by all types of people in all parts of the community, from householders to businesses and other organizations. Addressing the motivations behind dumping enables strategies to be developed that deal with the root causes, rather than the results, of illegal dumping.
Some of the main reasons for this careless disregard for waste can be put down to sheer convenience, a lack of care for the environment, and a reluctance to pay for the correct collection or disposal of the waste. Monitoring of illegally dumped garden waste by the community and by industry can inform effective tactics to combat illegal dumping. People dump waste illegally to avoid disposal fees at landfill sites or because they do not want to spend the time or effort disposing of or recycling their waste properly. Alligator weed (Alternanthera philoxeroides (Mart.) Griseb.) is an introduced weed originating from South America that has created major problems throughout Australia since its introduction into the country. It has the potential to severely affect aquatic and terrestrial biodiversity and to cause considerable social and economic costs, particularly in aquatic situations.
Mitigation
Education on the value of biodiversity and the negative effects of weed invasion has been identified as a key element in changing current trends. Specific education campaigns on the risks of dumping garden waste could be targeted at high-risk societal groups, such as residents of housing in close proximity to reserves, as well as members of gardening communities and plant sellers.
Restricting the selection of garden species in new housing developments adjacent to reserves may reduce the effects of illegal dumping, thereby reducing the requirement for, and associated cost of, weed management. Habitat for wildlife can be created by planting native plants, making a water source available, and providing shelter and places to raise young. Healthy ecosystems are necessary for the survival and health of all organisms, and there are a number of ways to reduce negative impacts on the environment. Cultivation of native plant species may benefit not only native plant populations but also native animal populations. For example, Sears & Anderson suggest that native bird species diversity in Australia and North America tends to match the volume and diversity of native vegetation. Crisp also notes that the percentage of native insect species in a fauna has been found to be consistent with the percentage of native plant species.
Composting recycles nutrients back into soils, as does mulching the garden with leaves and clippings (BMCC, n.d.).
Fostering an appreciation of local natural environmental features and plant species may also help mitigate the issue, as may the restriction of highly invasive plant species through international policy.
Green waste bins provided by some councils or shires and emptied via curbside collection can also be used (BMCC, n.d.). The addition of facilities for waste disposal could further improve the issue (DECC, 2008). Mitigation may involve governments running campaigns that show people how to dispose of waste legally and that publicise the consequences of disposing of it illegally. One way Australian governments are addressing the problem is by increasing fines in conjunction with better law enforcement. In Australia, fines can be up to $1,000,000, and offences can also incur imprisonment. The Protection of the Environment Operations Act imposes penalties for offences including polluting waters with waste, polluting land, illegally dumping waste, or using land as an illegal waste facility.
Australia
A new section of the POEO Act (the Protection of the Environment Operations Act 1997) now imposes further penalties for offences including polluting waters with waste, polluting land, illegally dumping waste, or using land as an illegal waste facility (Parrino, Maysaa, Kaoutarani & Salam, 2014). Communities are encouraged to report illegal dumping. In accordance with the NSW Illegal Dumping Strategy 2014–16, heavy fines and a maximum jail sentence of two years can be handed down to repeat offenders.
References
Biodegradable waste management
Gardening
Biological contamination
Litter | Garden waste dumping | Chemistry | 2,375 |
24,406,065 | https://en.wikipedia.org/wiki/C11H13F3N2 | {{DISPLAYTITLE:C11H13F3N2}}
The molecular formula C11H13F3N2 (molar mass: 230.23 g/mol, exact mass: 230.1031 u) may refer to:
Trifluoromethylphenylpiperazine
1-(4-(Trifluoromethyl)phenyl)piperazine | C11H13F3N2 | Chemistry | 87 |
2,883,528 | https://en.wikipedia.org/wiki/Tennelec | Tennelec was a US electronics company founded in the early 1960s by Edward Fairstein in Oak Ridge, Tennessee. The company came to prominence producing instrumentation for nuclear studies and later, programmable scanning radios.
The TC-200 amplifier was a successful early design that established Tennelec as a leader in the nuclear instrumentation field. Following on the heels of the TC-200 success, the company developed additional components necessary for precise nuclear measurement, including detectors and particle counters.
Tennelec also manufactured innovative scanning radios in the 1970s. The first programmable radio scanner was the Memoryscan from Tennelec Commercial Products Division, introduced in 1974, and later known as the Memoryscan 1 (model MS-1). This was followed by a slightly improved model, the Memoryscan 2 (model MS-2). Prior to the MS-1 and MS-2, scanners were "programmed" by inserting a series of hand-cut crystals tuned for different frequencies. The scanner would then switch between the frequencies, stopping when the user pressed a switch. With the MS-1 and MS-2, the user selected up to 16 frequencies they wanted to monitor by setting them up using binary codes entered via two pushbuttons on the front panel. Sixteen toggle switches allowed the user to select which frequencies were of interest at any given time. The system could cycle through the selected frequencies until stopped. The advantage was that the system could be set up to monitor different sets of frequencies, e.g., police one night, fire departments the next.
The first scanner allowing direct entry of decimal frequencies on a keypad was the Tennelec MCP-1. The scanner was released at the Winter 1976 Consumer Electronics Show in Chicago. The system was a hit and was soon picked up by Radio Shack. To help users get started, Radio Shack also purchased thousands of copies of Police Call, a guide to various radio frequencies.
Partly due to poor quality control of their scanner line, Tennelec filed for bankruptcy soon after introducing their latest radio models. By this point other manufacturers, in particular SBE, Regency, and Electra, had already introduced their own programmable models.
Tennelec, still located in Oak Ridge, TN, is now a division of Canberra Industries, which is owned by Areva.
See also
Scanner (radio)
Consumer Electronics Show
References
External links
Defunct electronics companies of the United States
Companies based in Tennessee
Radio manufacturers | Tennelec | Engineering | 492 |
1,513,917 | https://en.wikipedia.org/wiki/Potassium%20selective%20electrode | Potassium selective electrodes are a type of ion selective electrode used in biochemical and biophysical research, where measurements of potassium concentration in an aqueous solution are required, usually on a real time basis.
These electrodes are typical ion exchange resin membrane electrodes, using valinomycin, a potassium ionophore, as the ion carrier in the membrane to provide the potassium specificity.
This type of ion-selective electrode is subject to interference from (in declining order of magnitude) rubidium, caesium, ammonium, sodium, calcium, magnesium, and lithium. The most significant interference with measurement of potassium concentration is from the ammonium ion, which in practice is a problem where the ammonium concentration is approximately equal to or greater than the potassium concentration. Although sodium is usually present in high concentrations in biological preparations, the degree of interference is low enough to represent an error on the order of only 0.05 parts per million for the normal range of sodium concentration, requiring reduction of sodium only for measurements of very low potassium concentrations. Although the interference from rubidium or caesium is strong enough to require that these ions be present in much lower concentration than the potassium to be measured, this is not usually a problem in most experiments. Interference from calcium, magnesium, or lithium, on the other hand, is weak enough that their presence in normal concentrations is also usually not a problem.
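The selectivity behaviour described above is commonly quantified with the Nicolsky–Eisenman extension of the Nernst equation. The general form below is a standard textbook expression rather than one taken from a specific valinomycin-membrane electrode:

```latex
E \;=\; E^{0} \;+\; \frac{RT}{z_{\mathrm{K^{+}}} F}\,
    \ln\!\left( a_{\mathrm{K^{+}}} \;+\; \sum_{j} K^{\mathrm{pot}}_{\mathrm{K^{+}},\,j}\;
    a_{j}^{\,z_{\mathrm{K^{+}}}/z_{j}} \right)
```

Here the a terms are ion activities, the z terms are ion charges, R, T, and F have their usual meanings, and K^pot is the empirical selectivity coefficient for interfering ion j; the interference ranking quoted above corresponds to the relative sizes of these coefficients.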
Further reading
Ionophores for potassium-selective electrodes
Potassium ion-selective electrodes
Electrodes | Potassium selective electrode | Chemistry | 303 |
27,378,535 | https://en.wikipedia.org/wiki/Chemical%20oscillator | In chemistry, a chemical oscillator is a complex mixture of reacting chemical compounds in which the concentration of one or more components exhibits periodic changes. They are a class of reactions that serve as an example of non-equilibrium thermodynamics with far-from-equilibrium behavior. The reactions are theoretically important in that they show that chemical reactions do not have to be dominated by equilibrium thermodynamic behavior.
In cases where one of the reagents has a visible color, periodic color changes can be observed. Examples of oscillating reactions are the Belousov–Zhabotinsky reaction (BZ reaction), the Briggs–Rauscher reaction, and the Bray–Liebhafsky reaction.
History
The earliest scientific evidence that such reactions can oscillate was met with extreme scepticism. In 1828, G.T. Fechner published a report of oscillations in a chemical system. He described an electrochemical cell that produced an oscillating current. In 1899, W. Ostwald observed that the rate of chromium dissolution in acid periodically increased and decreased. Both of these systems were heterogeneous and it was believed then, and through much of the last century, that homogeneous oscillating systems were nonexistent. While theoretical discussions date back to around 1910, the systematic study of oscillating chemical reactions and of the broader field of non-linear chemical dynamics did not become well established until the mid-1970s.
Theory
Chemical systems cannot oscillate about a position of final equilibrium because such an oscillation would violate the second law of thermodynamics. For a thermodynamic system which is not at equilibrium, this law requires that the system approach equilibrium and not recede from it. For a closed system at constant temperature and pressure, the thermodynamic requirement is that the Gibbs free energy must decrease continuously and not oscillate. However it is possible that the concentrations of some reaction intermediates oscillate, and also that the rate of formation of products oscillates.
Theoretical models of oscillating reactions have been studied by chemists, physicists, and mathematicians. In an oscillating system the energy-releasing reaction can follow at least two different pathways, and the reaction periodically switches from one pathway to another. One of these pathways produces a specific intermediate, while another pathway consumes it. The concentration of this intermediate triggers the switching of pathways. When the concentration of the intermediate is low, the reaction follows the producing pathway, leading then to a relatively high concentration of intermediate. When the concentration of the intermediate is high, the reaction switches to the consuming pathway.
Different theoretical models for this type of reaction have been created, including the Lotka-Volterra model, the Brusselator and the Oregonator. The latter was designed to simulate the Belousov-Zhabotinsky reaction.
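As an illustration of how a two-pathway model of this kind produces sustained oscillations, the following minimal Python sketch integrates the Brusselator rate equations; the parameter values are illustrative and simply chosen so that the oscillation condition B > 1 + A² holds.

```python
from scipy.integrate import solve_ivp

# Illustrative parameters; the Brusselator oscillates when B > 1 + A**2.
A, B = 1.0, 3.0

def brusselator(t, z):
    """Rate equations for the two intermediate concentrations x and y."""
    x, y = z
    dxdt = A + x * x * y - (B + 1.0) * x
    dydt = B * x - x * x * y
    return [dxdt, dydt]

# From an arbitrary starting point the trajectory settles onto a limit cycle,
# i.e. the intermediate concentrations keep oscillating instead of reaching a fixed point.
sol = solve_ivp(brusselator, (0.0, 50.0), [1.0, 1.0], max_step=0.05)
print(sol.y[0][-5:])  # the last few values of x show the ongoing oscillation
```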
Types
Belousov–Zhabotinsky (BZ) reaction
A Belousov–Zhabotinsky reaction is one of several oscillating chemical systems, whose common element is the inclusion of bromine and an acid. An essential aspect of the BZ reaction is its so-called "excitability"—under the influence of stimuli, patterns develop in what would otherwise be a perfectly quiescent medium. Some clock reactions such as the Briggs–Rauscher reactions and the BZ using the chemical ruthenium bipyridyl as catalyst can be excited into self-organising activity through the influence of light.
Boris Belousov first noted, sometime in the 1950s, that in a mix of potassium bromate, cerium(IV) sulfate, propanedioic acid (another name for malonic acid) and citric acid in dilute sulfuric acid, the ratio of concentration of the cerium(IV) and cerium(III) ions oscillated, causing the colour of the solution to oscillate between a yellow solution and a colorless solution. This is due to the cerium(IV) ions being reduced by propanedioic acid to cerium(III) ions, which are then oxidized back to cerium(IV) ions by bromate(V) ions.
Briggs–Rauscher reaction
The Briggs–Rauscher oscillating reaction is one of a small number of known oscillating chemical reactions. It is especially well suited for demonstration purposes because of its visually striking color changes: the freshly prepared colorless solution slowly turns an amber color, suddenly changing to a very dark blue. This slowly fades to colorless and the process repeats, about ten times in the most popular formulation.
Bray–Liebhafsky reaction
The Bray–Liebhafsky reaction is a chemical clock first described by W. C. Bray in 1921 with the oxidation of iodine to iodate:
5 H2O2 + I2 → 2 IO3− + 2 H+ + 4 H2O
and the reduction of iodate back to iodine:
5 H2O2 + 2 IO3− + 2 H+ → I2 + 5 O2 + 6 H2O
See also
Catalytic oscillator
Mercury beating heart
Blue bottle experiment
Clock reactions
References
External links
Video of BZ reaction
History of oscillating reactions
Non-equilibrium thermodynamics
Chemistry classroom experiments
Chemical reactions
Clock reactions | Chemical oscillator | Chemistry,Mathematics | 1,105 |
48,321,657 | https://en.wikipedia.org/wiki/Hygrophorus%20erubescens | Hygrophorus erubescens, commonly known as the blotched woodwax or pink waxcap, is an agaric fungus native to Scandinavia, Japan, Central Europe, Great Britain and North America.
Taxonomy
Swedish mycologist Elias Magnus Fries described it as Agaricus erubescens in his 1821 work Systema Mycologicum. The species name is derived from the Latin erubescens, meaning "reddening" or "blushing". It became Hygrophorus erubescens with the raising of Hygrophorus to genus rank. Common names include blotched woodwax, and pink waxcap.
The species is classified in the subsection Pudorini of genus Hygrophorus, along with the closely related species H. pudorinus and H. purpurascens.
Description
The fruit body (mushroom) is a fair size, with a light pink to white cap that can be dotted with darker pink or red marks and that bruises yellow. The colour is darker in the cap centre. Convex and flattening with age, the cap often has a boss and an inrolled margin when young. Its surface is slimy or sticky. The white gills are adnate to somewhat decurrent, becoming pale pink as they mature. The stipe is tall and wide. The spore print is white and the oval spores measure 6.5–11 x 4.5–6.5 micrometres. The mushroom has no strong odor or taste, though the former is sometimes described as pleasant.
The species is inedible.
Similar species
The similar-looking Hygrophorus russula can be distinguished by its more crowded gills and preference for hardwood forests, and H. purpurascens has a partial veil. H. capreolaris is more evenly red in color, and does not stain yellow. H. amarus has a bitter-tasting cap and somewhat yellowish gills.
Habitat and distribution
Hygrophorus erubescens fruits from August to October in coniferous forests, particularly spruce, on chalky soils. The mushrooms are found singly or sometimes in large troops. The range in North America is from the Rocky Mountains to the West Coast and Tennessee north to the Great Lakes region and southern parts of Canada. The fungus is classified as extinct in the British Mycological Society's 2006 list of threatened fungi, as it has not been documented in Great Britain since 1878. It is found across Scandinavia, and has been recorded fruiting at high altitudes in alpine-subalpine regions of Russia, and mountainous parts of Central Europe. The species has been found in the East and Middle Black Sea regions of Turkey. In Japan, it is most common in coniferous woods, and has been recorded from Hokkaido and Honshu.
See also
List of Hygrophorus species
References
External links
Fungi described in 1821
Fungi of Europe
Fungi of Japan
Fungi of North America
Fungi of Western Asia
erubescens
Taxa named by Elias Magnus Fries
Fungus species | Hygrophorus erubescens | Biology | 629 |
43,727,603 | https://en.wikipedia.org/wiki/Quantum%20excitation%20%28accelerator%20physics%29 | Quantum excitation is the effect in circular accelerators or storage rings whereby the discreteness of photon emission causes the charged particles (typically electrons) to undergo a random walk or diffusion process.
Mechanism
An electron moving through a magnetic field emits radiation called synchrotron radiation. The expected amount of radiation can be calculated using the classical power. Considering quantum mechanics, however, this radiation is emitted in discrete packets of photons. For this description, the distribution of the number of emitted photons and also the energy spectrum for the electron should be determined instead.
In particular, the normalized power spectrum emitted by a charged particle moving in a bending magnet is given by

$S(\xi) = \frac{9\sqrt{3}}{8\pi}\,\xi \int_{\xi}^{\infty} K_{5/3}(\bar{\xi})\,d\bar{\xi},$

where $\xi$ is the photon energy expressed in units of the critical energy and $K_{5/3}$ is a modified Bessel function of the second kind.
This result was originally derived by Dmitri Ivanenko and Arseny Sokolov and independently by Julian Schwinger in 1949.
Dividing this power spectrum by the photon energy yields the normalized photon flux, $S(\xi)/\xi$. Integrated over all photon energies, this flux is finite:

$\int_{0}^{\infty} \frac{S(\xi)}{\xi}\,d\xi = \frac{15\sqrt{3}}{8}.$
The fact that the photon flux integral is finite implies discrete photon emission. It is a Poisson process, with mean emission rate

$\dot{\mathcal{N}} = \frac{15\sqrt{3}}{8}\,\frac{P_{\gamma}}{u_{c}},$

where $P_{\gamma}$ is the classical radiated power and $u_{c}$ the critical photon energy.
For a travelled distance $L$ at a speed close to $c$ ($\beta \approx 1$), the average number of photons emitted by the particle can be expressed as

$\mathcal{N} = \frac{5}{2\sqrt{3}}\,\alpha\,\gamma\,\frac{L}{\rho},$

where $\alpha$ is the fine-structure constant, $\gamma$ the Lorentz factor, and $\rho$ the bending radius. The probability that $n$ photons are emitted over $L$ is

$P(n) = \frac{\mathcal{N}^{n}\,e^{-\mathcal{N}}}{n!}.$
The photon number curve and the power spectrum curve intersect at the critical energy

$u_{c} = \frac{3}{2}\,\frac{\hbar c\,\gamma^{3}}{\rho},$

where $\gamma = E/(mc^{2})$, $E$ is the total energy of the charged particle, $\rho$ is the radius of curvature, $r_{e}$ the classical electron radius, $mc^{2}$ the particle rest mass energy, $\hbar$ the reduced Planck constant, and $c$ the speed of light.
The mean of the quantum energy, $\langle u \rangle = \frac{8}{15\sqrt{3}}\,u_{c}$, impacts mainly the radiation damping. However, the perturbation of the particle motion (diffusion) is mainly governed by the variance of the quantum energy and leads to an equilibrium emittance. The diffusion coefficient at a given position is determined by the mean square photon energy emitted per unit time at that position, $\dot{\mathcal{N}}\langle u^{2}\rangle$.
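As a numerical illustration, the short sketch below evaluates the standard synchrotron-radiation expressions quoted above (the critical energy and the mean photon number, in the form given by Sands) for a hypothetical 3 GeV electron beam on a 10 m bending radius; the beam parameters are purely illustrative.

```python
import math

ALPHA = 7.2973525693e-3      # fine-structure constant
HBAR = 1.054571817e-34       # reduced Planck constant, J*s
C = 2.99792458e8             # speed of light, m/s
EV = 1.602176634e-19         # J per eV
ME_C2_EV = 0.51099895e6      # electron rest energy, eV

def lorentz_gamma(energy_gev):
    return energy_gev * 1.0e9 / ME_C2_EV

def critical_energy_kev(energy_gev, rho_m):
    """u_c = (3/2) * hbar * c * gamma**3 / rho, converted to keV."""
    gamma = lorentz_gamma(energy_gev)
    return 1.5 * HBAR * C * gamma**3 / rho_m / EV / 1.0e3

def mean_photons(energy_gev, rho_m, path_m):
    """N = 5 / (2*sqrt(3)) * alpha * gamma * L / rho, the mean photon count over L."""
    gamma = lorentz_gamma(energy_gev)
    return 5.0 / (2.0 * math.sqrt(3.0)) * ALPHA * gamma * path_m / rho_m

energy, rho = 3.0, 10.0                               # GeV and metres, illustrative
print(critical_energy_kev(energy, rho))               # about 6 keV
print(mean_photons(energy, rho, 2 * math.pi * rho))   # about 390 photons per turn
```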
Further reading
For an early analysis of the effect of quantum excitation on electron beam dynamics in storage rings, see the article by Matt Sands.
References
Accelerator physics | Quantum excitation (accelerator physics) | Physics | 404 |
12,284 | https://en.wikipedia.org/wiki/Grimoire | A grimoire () (also known as a book of spells, magic book, or a spellbook) is a textbook of magic, typically including instructions on how to create magical objects like talismans and amulets, how to perform magical spells, charms, and divination, and how to summon or invoke supernatural entities such as angels, spirits, deities, and demons. In many cases, the books themselves are believed to be imbued with magical powers. The only contents found in a grimoire would be information on spells, rituals, the preparation of magical tools, and lists of ingredients and their magical correspondences. In this manner, while all books on magic could be thought of as grimoires, not all magical books should be thought of as grimoires.
While the term grimoire is originally European—and many Europeans throughout history, particularly ceremonial magicians and cunning folk, have used grimoires—the historian Owen Davies has noted that similar books can be found all around the world, ranging from Jamaica to Sumatra. He also noted that in this sense, the world's first grimoires were created in Europe and the ancient Near East.
Etymology
The etymology of grimoire is unclear. It is most commonly believed that the term grimoire originated from the Old French word grammaire 'grammar', which had initially been used to refer to all books written in Latin. By the 18th century, the term had gained its now common usage in France and had begun to be used to refer purely to books of magic. Owen Davies presumed this was because "many of them continued to circulate in Latin manuscripts".
However, the term grimoire later developed into a figure of speech among the French indicating something that was hard to understand. In the 19th century, with the increasing interest in occultism among the British following the publication of Francis Barrett's The Magus (1801), the term entered English in reference to books of magic.
History
Ancient period
The earliest known written magical incantations come from ancient Mesopotamia (modern Iraq), where they have been found inscribed on cuneiform clay tablets that archaeologists excavated from the city of Uruk and dated to between the 5th and 4th centuries BC. The ancient Egyptians also employed magical incantations, which have been found inscribed on amulets and other items. The Egyptian magical system, known as heka, was greatly altered and expanded after the Macedonians, led by Alexander the Great, invaded Egypt in 332 BC.
Under the next three centuries of Hellenistic Egypt, the Coptic writing system evolved, and the Library of Alexandria was opened. This likely had an influence upon books of magic, with the trend on known incantations switching from simple health and protection charms to more specific things, such as financial success and sexual fulfillment. Around this time the legendary figure of Hermes Trismegistus developed as a conflation of the Egyptian god Thoth and the Greek Hermes; this figure was associated with writing and magic and, therefore, of books on magic.
The ancient Greeks and Romans believed that books on magic were invented by the Persians. The 1st-century AD writer Pliny the Elder stated that magic had been first discovered by the ancient philosopher Zoroaster around the year 647 BC but that it was only written down in the 5th century BC by the magician Osthanes. His claims are not, however, supported by modern historians.
The ancient Jewish people were often viewed as being knowledgeable in magic, which, according to legend, they had learned from Moses, who had learned it in Egypt. Among many ancient writers, Moses was seen as an Egyptian rather than a Jew. Two manuscripts likely dating to the 4th century, both of which purport to be the legendary eighth Book of Moses (the first five being the initial books in the Biblical Old Testament), present him as a polytheist who explained how to conjure gods and subdue demons.
Meanwhile, there is definite evidence of grimoires being used by certain—particularly Gnostic—sects of early Christianity. In the Book of Enoch found within the Dead Sea Scrolls, for instance, there is information on astrology and the angels. In possible connection with the Book of Enoch, the idea of Enoch and his great-grandson Noah having some involvement with books of magic given to them by angels continued through to the medieval period.
Israelite King Solomon was a Biblical figure associated with magic and sorcery in the ancient world. The 1st-century Romano-Jewish historian Josephus mentioned a book circulating under the name of Solomon that contained incantations for summoning demons and described how a Jew called Eleazar used it to cure cases of possession. The book may have been the Testament of Solomon but was more probably a different work. The pseudepigraphic Testament of Solomon is one of the oldest magical texts. It is a Greek manuscript attributed to Solomon and was likely written in either Babylonia or Egypt sometime in the first five centuries AD; over 1,000 years after Solomon's death.
The work tells of the building of The Temple and relates that construction was hampered by demons until the archangel Michael gave the King a magical ring. The ring, engraved with the Seal of Solomon, had the power to bind demons from doing harm. Solomon used it to lock demons in jars and commanded others to do his bidding, although eventually, according to the Testament, he was tempted into worshiping "false gods", such as Moloch, Baal, and Rapha. Subsequently, after losing favour with God, King Solomon wrote the work as a warning and a guide to the reader.
When Christianity became the dominant faith of the Roman Empire, the early Church frowned upon the propagation of books on magic, connecting it with paganism, and burned books of magic. The New Testament records that after the unsuccessful exorcism by the seven sons of Sceva became known, many converts decided to burn their own magic and pagan books in the city of Ephesus; this advice was adopted on a large scale after the Christian ascent to power.
Medieval period
In the medieval period, the production of grimoires continued in Christendom, as well as amongst Jews and the followers of the newly founded Islamic faith. As the historian Owen Davies noted, "while the [Christian] Church was ultimately successful in defeating pagan worship it never managed to demarcate clearly and maintain a line of practice between religious devotion and magic." The use of such books on magic continued. In Christianised Europe, the Church divided books of magic into two kinds: those that dealt with "natural magic" and those that dealt in "demonic magic".
The former was acceptable because it was viewed as merely taking note of the powers in nature that were created by God; for instance, the Anglo-Saxon leechbooks, which contained simple spells for medicinal purposes, were tolerated. Demonic magic was not acceptable, because it was believed that such magic did not come from God, but from the Devil and his demons. These grimoires dealt in such topics as necromancy, divination and demonology. Despite this, "there is ample evidence that the mediaeval clergy were the main practitioners of magic and therefore the owners, transcribers, and circulators of grimoires," while several grimoires were attributed to Popes.
One such Arabic grimoire devoted to astral magic, the 10th-century Ghâyat al-Hakîm, was later translated into Latin and circulated in Europe during the 13th century under the name of the Picatrix. However, not all such grimoires of this era were based upon Arabic sources. The 13th-century Sworn Book of Honorius, for instance, was (like the ancient Testament of Solomon before it) largely based on the supposed teachings of the Biblical king Solomon and included ideas such as prayers and a ritual circle, with the mystical purpose of having visions of God, Hell, and Purgatory and gaining much wisdom and knowledge as a result. Another was the Hebrew Sefer Raziel Ha-Malakh, translated in Europe as the Liber Razielis Archangeli.
A later book also claiming to have been written by Solomon was originally written in Greek during the 15th century, where it was known as the Magical Treatise of Solomon or the Little Key of the Whole Art of Hygromancy, Found by Several Craftsmen and by the Holy Prophet Solomon. In the 16th century, this work had been translated into Latin and Italian, being renamed the Clavicula Salomonis, or the Key of Solomon.
In Christendom during the medieval age, grimoires were written that were attributed to other ancient figures, thereby supposedly giving them a sense of authenticity because of their antiquity. The German abbot and occultist Trithemius (1462–1516) supposedly had a Book of Simon the Magician, based upon the New Testament figure of Simon Magus.
Similarly, it was commonly believed by medieval people that other ancient figures, such as the poet Virgil, astronomer Ptolemy, and philosopher Aristotle, had been involved in magic, and grimoires claiming to have been written by them were circulated. However, there were those who did not believe this; for instance, the Franciscan friar Roger Bacon (c. 1214–94) stated that books falsely claiming to be by ancient authors "ought to be prohibited by law."
Early modern period
As the early modern period commenced in the late 15th century, many changes began to shock Europe that would have an effect on the production of grimoires. Historian Owen Davies classed the most important of these as the Protestant Reformation, and subsequent Catholic Counter-Reformation; The Witch-hunts, and the advent of printing. The Renaissance saw the continuation of interest in magic that had been found in the Medieval period, and in this period, there was an increased interest in Hermeticism among occultists and ceremonial magicians in Europe, largely fueled by the 1471 translation of the ancient Corpus hermeticum into Latin by Marsilio Ficino (1433–99).
Alongside this, there was a rise in interest in the Jewish mysticism known as the Kabbalah, which was spread across the continent by Pico della Mirandola and Johannes Reuchlin. The most important magician of the Renaissance was Heinrich Cornelius Agrippa (1486–1535), who widely studied occult topics and earlier grimoires and eventually published his own, the Three Books of Occult Philosophy, in 1533. A similar figure was the Swiss magician known as Paracelsus (1493–1541), who published Of the Supreme Mysteries of Nature, in which he emphasised the distinction between good and bad magic. A third such individual was Johann Georg Faust, upon whom several pieces of later literature were written, such as Christopher Marlowe's Doctor Faustus, that portrayed him as consulting with demons.
The idea of demonology had remained strong in the Renaissance, and several demonological grimoires were published, including The Fourth Book of Occult Philosophy, which falsely claimed to having been authored by Cornelius Agrippa, and the Pseudomonarchia Daemonum, which listed 69 demons. To counter this, the Roman Catholic Church authorised the production of many works of exorcism, the rituals of which were often very similar to those of demonic conjuration. Alongside these demonological works, grimoires on natural magic continued to be produced, including Magia Naturalis, written by Giambattista Della Porta (1535–1615).
Iceland held magical traditions in regional work as well, most remarkably the Galdrabók, where numerous symbols of mystic origin are dedicated to the practitioner. These pieces give a perfect fusion of Germanic pagan and Christian influence, seeking splendid help from the Norse gods and referring to the titles of demons.
The advent of printing in Europe meant that books could be mass-produced for the first time and could reach an ever-growing literate audience. Among the earliest books to be printed were magical texts. The nóminas were one example, consisting of prayers to the saints used as talismans. It was particularly in Protestant countries, such as Switzerland and the German states, which were not under the domination of the Roman Catholic Church, where such grimoires were published.
Despite the advent of print, however, handwritten grimoires remained highly valued, as they were believed to contain inherent magical powers, and they continued to be produced. With increasing availability, people lower down the social scale and women began to have access to books on magic; this was often incorporated into the popular folk magic of the average people and, in particular, that of the cunning folk, who were professionally involved in folk magic. These works left Europe and were imported to the parts of Latin America controlled by the Spanish and Portuguese empires and the parts of North America controlled by the British and French empires.
Throughout this period, the Inquisition, a Roman Catholic organisation, had organised the mass suppression of peoples and beliefs that they considered heretical. In many cases, grimoires were found in the heretics' possessions and destroyed. In 1599, the church published the Indexes of Prohibited Books, in which many grimoires were listed as forbidden, including several mediaeval ones, such as the Key of Solomon, which were still popular.
In Christendom, there also began to develop a widespread fear of witchcraft, which was believed to be Satanic in nature. The subsequent hysteria, known as The Witch-hunts, caused the death of around 40,000 people, most of whom were women. Sometimes, those found with grimoires—particularly demonological ones—were prosecuted and dealt with as witches but, in most cases, those accused had no access to such books. Iceland—which had a relatively high literacy rate—proved an exception to this, with a third of the 134 witch trials held involving people who had owned grimoires. By the end of the Early Modern period, and the beginning of the Enlightenment, many European governments brought in laws prohibiting many superstitious beliefs in an attempt to bring an end to the Witch Hunts; this would invariably affect the release of grimoires.
Meanwhile, Hermeticism and the Kabbalah would influence the creation of a mystical philosophy known as Rosicrucianism, which first appeared in the early 17th century, when two pamphlets detailing the existence of the mysterious Rosicrucian group were published in Germany. These claimed that Rosicrucianism had originated with a Medieval figure known as Christian Rosenkreuz, who had founded the Brotherhood of the Rosy Cross; however, there was no evidence for the existence of Rosenkreuz or the Brotherhood.
18th and 19th centuries
The 18th century saw the rise of the Enlightenment, a movement devoted to science and rationalism, predominantly amongst the ruling classes. However, amongst much of Europe, belief in magic and witchcraft persisted, as did the witch trials in certain areas. Governments tried to crack down on magicians and fortune tellers, particularly in France, where the police viewed them as social pests who took money from the gullible, often in a search for treasure. In doing so, they confiscated many grimoires.
Beginning in the 17th century, a new, ephemeral form of printed literature developed in France; the Bibliothèque bleue. Many grimoires published through this circulated among a growing percentage of the populace; in particular, the Grand Albert, the Petit Albert (1782), the Grimoire du Pape Honorius, and the Enchiridion Leonis Papae. The Petit Albert contained a wide variety of magic; for instance, dealing in simple charms for ailments, along with more complex things, such as the instructions for making a Hand of Glory.
In the late 18th and early 19th centuries, following the French Revolution of 1789, a hugely influential grimoire was published under the title of the Grand Grimoire, which was considered particularly powerful, because it involved conjuring and making a pact with the devil's chief minister, Lucifugé Rofocale, to gain wealth from him. A new version of this grimoire was later published under the title of the Dragon rouge and was available for sale in many Parisian bookstores. Similar books published in France at this time included the Black Pullet and the Grimoirium Verum. The Black Pullet, probably authored in late-18th-century Rome or France, differs from the typical grimoires in that it does not claim to be a manuscript from antiquity, but told by a man who was a member of Napoleon's armed expeditionary forces in Egypt.
The widespread availability of printed grimoires in France—despite the opposition of both the rationalists and the church—soon spread to neighbouring countries, such as Spain and Germany. In Switzerland, Geneva was commonly associated with the occult at the time, particularly by Catholics, because it had been a stronghold of Protestantism. Many of those interested in the esoteric traveled from Roman Catholic nations to Switzerland to purchase grimoires or to study with occultists. Soon, grimoires appeared that involved Catholic saints; one example that appeared during the 19th century, and became relatively popular—particularly in Spain—was the Libro de San Cipriano, or The Book of St. Ciprian, which falsely claimed to date from c. 1000. As with most grimoires of this period, it dealt with (among other things) how to discover treasure.
In Germany, with the increased interest in folklore during the 19th century, many historians took an interest in magic and in grimoires. Several published extracts of such grimoires in their own books on the history of magic, thereby helping to further propagate them. Perhaps the most notable of these was the Protestant pastor Georg Conrad Horst (1779–1832) who, from 1821 to 1826, published a six-volume collection of magical texts in which he studied grimoires as a peculiarity of the Medieval mindset.
Another scholar of the time interested in grimoires, the antiquarian bookseller Johann Scheible first published the Sixth and Seventh Books of Moses; two influential magical texts that claimed to have been written by the ancient Jewish figure Moses. The Sixth and Seventh Books of Moses were among the works which later spread to the countries of Scandinavia, where—in Danish and Swedish—grimoires were known as black books and were commonly found among members of the army.
In Britain, new grimoires continued to be produced throughout the 18th century, such as Ebenezer Sibly's A New and Complete Illustration of the Celestial Science of Astrology. In the last decades of that century, London experienced a revival of interest in the occult which was further propagated by Francis Barrett's publication of The Magus in 1801. The Magus contained many things taken from older grimoires—particularly those of Cornelius Agrippa—and, while not achieving initial popularity upon release, it gradually became an influential text.
One of Barrett's pupils, John Parkin, created his own handwritten grimoire The Grand Oracle of Heaven, or, The Art of Divine Magic, although it was never published, largely because Britain was at war with France, and grimoires were commonly associated with the French. The only writer to publish British grimoires widely in the early 19th century was Robert Cross Smith, who released The Philosophical Merlin (1822) and The Astrologer of the Nineteenth Century (1825), but neither sold well.
In the late 19th century, several of these texts (including The Book of Abramelin and the Key of Solomon) were reclaimed by para-Masonic magical organisations, such as the Hermetic Order of the Golden Dawn and Ordo Templi Orientis.
20th and 21st centuries
The Secret Grimoire of Turiel claims to have been written in the 16th century, but no copy older than 1927 has been produced.
A modern grimoire, the Simon Necronomicon, takes its name from a fictional book of magic in the stories of H. P. Lovecraft which was inspired by Babylonian mythology and the Ars Goetia—one of the five books that make up The Lesser Key of Solomon—concerning the summoning of demons. The Azoëtia of Andrew D. Chumbley has been described by Gavin Semple as a modern grimoire.
The neopagan religion of Wicca publicly appeared in the 1940s, and Gerald Gardner introduced the Book of Shadows as a Wiccan grimoire.
The term grimoire commonly serves as an alternative name for a spell book or tome of magical knowledge in fantasy fiction and role-playing games. The most famous fictional grimoire is the Necronomicon, a creation of H. P. Lovecraft.
See also
Table of magical correspondences, a type of reference work used in ceremonial magic
Cyprianus, a name for Scandinavian grimoires
Codex
Key of Solomon
Lesser Key of Solomon
Manuscript
References
Bibliography
External links
Internet Sacred Text Archives: Grimoires
Digitized Grimoires
Scandinavian folklore
Fiction about magic
Magic (supernatural)
Magic items
Non-fiction genres
Religious objects | Grimoire | Physics | 4,300 |
18,195,426 | https://en.wikipedia.org/wiki/Nigerose | Nigerose, also known as sakebiose, is an unfermentable sugar obtained by partial hydrolysis of nigeran, a polysaccharide found in black mold, but is also readily extracted from the dextrans found in rice molds and many other fermenting microorganisms, such as L. mesenteroides. It is a disaccharide made of two glucose residues, connected with a 1->3 link. It is a product of the caramelization of glucose.
References
Disaccharides | Nigerose | Chemistry | 115 |
17,909,998 | https://en.wikipedia.org/wiki/Gallopamil | Gallopamil (INN) is an L-type calcium channel blocker that is an analog of verapamil. It is used in the treatment of abnormal heart rhythms.
Synthesis
The alkylation of 3,4,5-trimethoxyphenylacetonitrile (1) with isopropyl chloride (2), using sodium amide as base, gives the intermediate nitrile (3). A second alkylation with a specific alkyl chloride (4) yields gallopamil.
References
Calcium channel blockers
Pyrogallol ethers
Nitriles
Phenethylamines
Isopropyl compounds | Gallopamil | Chemistry | 138 |
55,724,853 | https://en.wikipedia.org/wiki/Emery%20Worldwide%20Airlines%20Flight%2017 | Emery Worldwide Airlines Flight 17 was a regularly scheduled United States domestic cargo flight, flying from Reno, Nevada to Dayton, Ohio with an intermediate stopover at Rancho Cordova, California. On February 16, 2000, the DC-8-71F operating the flight crashed onto an automobile salvage yard shortly after taking off from Sacramento Mather Airport, resulting in the deaths of all three crew members on board. The crew reported control problems during takeoff and attempted unsuccessfully to return to Mather airport.
Aircraft and crew
The aircraft involved in the accident was a 1968-built Douglas DC-8-71, registration N8079U. It was operated by United Airlines (1968–1990) and Líneas Aéreas Paraguayas (1990–1994), and was later modified for service as a freighter before being sold. From March 1994, N8079U was operated by Emery Worldwide Airlines; it had accumulated about 84,447 flight hours in 33,395 flight cycles. In July 1983, the Pratt & Whitney JT3D engines had been replaced with CFM International CFM56 engines to upgrade the aircraft from a 60-series to a 70-series aircraft.
The flight crew consisted of Captain Kevin Stables (43), who had logged 13,329 flight hours and 2,128 hours in type; First Officer George Land (35), who had logged 4,511 flight hours and 2,080 in type; and Flight Engineer Russell Hicks (38), who had logged 9,775 flight hours and 675 in type.
Accident
The flight was a regular domestic cargo flight from Reno–Tahoe International Airport (RNO) to James M. Cox Dayton International Airport (DAY) with an intermediate stopover at Sacramento Mather Airport in Rancho Cordova, California. The flight was operated by Emery Worldwide Airlines – then a major cargo airline in the U.S. – using a McDonnell Douglas DC-8-71F with the three crew members on board.
After completing the taxi checklist, the crew members initiated the before-take-off checklist at around 19:47 local time. They then advised local traffic that they were going to initiate the take-off from runway 22L. The crew members were later cleared for take-off. The crew applied a continuous nose-down input during the take-off roll.
As the aircraft reached its V1 speed, the captain called "rotate". The pitch then increased from 0.2 to 5.3°. Data from the control column indicated that the crew was still applying forward (nose-down) input, but the nose continued to rise from 14.5 to 17.4° as the crew added more force to the control column. The aircraft reached V2 and began to lift off.
Immediately after the aircraft lifted off from the runway, the aircraft entered a left turn and the first officer quickly stated that Flight 17 would like to return to Sacramento. The engine's speed began to decrease and the stick shaker activated for the first time. The captain declared an emergency on Flight 17, believing a load shift had occurred. The aircraft began to move erratically, and the elevator deflection and the bank angle began to decrease and increase, respectively. The aircraft began to descend.
The captain repeated the emergency declaration as the engine's speed began to increase. At the time, the aircraft was descending with a steepening bank of 11°. The crew then added power and the aircraft began to climb again. As the aircraft continued to climb, the bank angle began to increase to the left. The captain stated that Flight 17 "has an extreme CG problem."
The aircraft then continued to fly in a northwesterly heading. The crew was trying to stabilize the aircraft as it began to sway to the left and to the right. The ground proximity warning system (GPWS) then started to sound. At 19:51, the aircraft's left wing contacted a concrete and steel support column for an overhang attached to a two-story building, located adjacent to the southeast edge of the salvage yard. The DC-8 then crashed onto the salvage yard, touching off "a hellish scene of smoke, flames and exploding cars [that] could be seen for miles". All three crew members on board were killed.
Investigation
An investigation by the National Transportation Safety Board (NTSB) revealed that during the aircraft's rotation, a control rod to the right elevator control tab detached, causing a loss of pitch control. The NTSB further found that an incorrect maintenance procedure, which was implemented by Emery Worldwide Airlines, introduced an incorrect torque-loading on the bolts that were supposed to connect the control rod. The NTSB released its final report in 2003, three years after the accident. The report stated that the crash of Flight 17 was caused by the detachment of the right elevator control tab. The disconnection was caused by the failure to properly secure and inspect the attachment bolt.
The NTSB then added: "The safety issues discussed in this report include DC-8 elevator position indicator installation and usage, adequacy of DC-8 maintenance work cards (required inspection items), and DC-8 elevator control tab design. Safety recommendations are addressed to the Federal Aviation Administration".
Fifteen recommendations were issued by the NTSB. One of these was to evaluate every DC-8 on U.S. soil to prevent further crashes that could be caused by the disconnection of the right elevator tab. The Federal Aviation Administration subsequently found more than 100 maintenance violations by the airline, including one that caused another accident on April 26, 2001.
Emery Worldwide Airlines had its entire fleet grounded on August 13, 2001, and it ceased operations permanently on December 5, 2001.
Dramatization
The crash of Emery Worldwide Airlines Flight 17 was featured in the first episode of the 18th season in the Canadian documentary show Mayday, also known as Air Disasters in the United States and as Air Crash Investigation in Europe and the rest of the world. The episode was titled "Nuts and Bolts".
See also
Trans International Airlines Flight 863 – another accident involving a DC-8 freighter and problems with the right elevator
References
External links
Cockpit Voice Recorder transcript and accident summary
Airliner accidents and incidents in California
Aviation accidents and incidents in the United States in 2000
Aviation accidents and incidents in 2000
2000 in California
February 2000 events in the United States
Accidents and incidents involving the Douglas DC-8
Airliner accidents and incidents caused by mechanical failure
Accidents and incidents involving cargo aircraft | Emery Worldwide Airlines Flight 17 | Materials_science | 1,313 |
75,767,608 | https://en.wikipedia.org/wiki/Sulcus%20primigenius | The (Latin for "initial furrow") was the ancient Roman ritual of plowing the boundary of a new cityparticularly formal coloniesprior to distributing its lots or erecting its walls. The Romans considered the ritual extremely ancient, believing their own founder Romulus had introduced it from the Etruscans, who had also fortified most of their cities. The ritual had the function of rendering the course of the city wall sacrosanct but, owing to the necessity of some profane traffic such as the removal of corpses to graveyards, the city gates were left exempted from the ritual.
Ritual
According to surviving classical sources, the ritual needed to occur on an auspicious day of the Roman calendar, further confirmed by augury or similar consultation of omens. The magistrate or other official in charge of the ceremony personally set a bronze plowshare on a wooden ard, which was then attached to a yoked pair of cattle. All literary sources state that the team should consist of a cow on the left and a bull on the right, driven counterclockwise so that the cow was to the inside and the bull to the outside, although surviving numismatic evidence appears to show only bulls or standard oxen instead. The ritual was solemn enough that it needed to be performed togate and with covered head but, as it required the use of both hands, the magistrate's toga was worn wrapped tightly and cinched in Gabine style. In this manner, the magistrate whipped the cattle around the entire course of the future city walls. All of the clods of earth raised by the plow were supposed to fall to the inside, which was accomplished by keeping the plow crooked and by men following the magistrate and plow. This procedure simultaneously established an initial city wall from the clods and its protective ditch from the furrow itself. This course was considered sacred and inviolable, which required that the plow be lifted across the locations of the future city gates so that it would be religiously permissible to enter and leave the town, particularly with profane cargo such as corpses or waste. The cattle were sacrificed at the end of the procedure. The city wall was subsequently raised over the earth beside the furrow, whose inner boundary set the outer limits for subsequent auspices performed by the city.
In Latin, the verb used to describe performing this ritual meant "to trace". The Romans considered it an inheritance from Etruscan religion, meaning that it was presumably included among the sections on the founding of cities in the now-lost Books of Ritual. For the Romans, the ritual was the essential establishment of a city, to the point that Roman law held as late as Justinian that the furrow of the plow was the formal delimitation of a city's territory. In like manner, plows were used to deconsecrate walls, undoing any former ritual and removing any religious stigma from their destruction.
Rome
Plutarch relates the Roman legend that Romulus was guided in the foundation of Rome by Etruscan priests. The day (the 30th of an early Roman month, a new moon) was supposedly marked by a conjunction of the sun and moon producing an eclipse, although modern scholars consider this a mistaken backward application of celestial tables of Plutarch's time and no actual eclipse occurred within a century of the suggested date. After creating a circular pit or trench, Romulus had the city's initial settlers throw soil from their homelands into it along with representative sacrifices of the necessities and luxuries of settled life. Plutarch places this in a valley at the Comitium, although most accounts placed Romulus's settlement on the Palatine Hill. Romulus then plowed the sulcus primigenius, establishing Rome's quadrangular first walls and initial sacred boundary. In his discussion of Claudius's later expansion of the pomerium, Tacitus relates that his own belief was that Romulus's furrow and Rome's initial boundary, though unmarked by the 1st century when he was writing, had included the Altar of Hercules in the Forum Boarium and then ran east along the base of the Palatine to the Altar of Consus before turning north to include the Curia Hostilia and the shrine of the Lares Praestites at the Regia and ending at the Forum Romanum; this is only two sides of the course but, since he ascribes the inclusion of the Forum and the Capitol to Titus Tatius, it presumably would have run along the other two sides of the Palatine. (Lanciani notes several problems with this proposed course, which in the archaic period would have probably run through marshland.) Dionysius of Halicarnassus, possibly overstating the point, states that Romulus's furrow was continuous rather than leaving the necessary spaces at the wall's gates. Dionysius then states that Romulus offered sacrifices and provided public games. Before the settlers could enter the city and build their houses, he lit fires before their tents, which they leapt over to expiate any previous guilt or offense and to purify themselves. They then offered their own sacrifices, each as well as they were able. (Against this, Plutarch held the Parilia festival was long kept without any sacrifice at all to commemorate the sanctity of the event of the city's founding.) When the city's walls were later expanded by Rome's kings and under the Republic, the formal sacred boundary was marked with boundary stones. Varro noted the same had been done at Aricia.
Other settlements
The Romans thought many of the Latin towns had been established by the same ritual and used it for all of their formal colonies. Under influence from the Etruscans and Greeks, such colonies were typically established with Hippodamian grids or similar centuriation, meaning their walls' gates were typically placed at each end of major thoroughfares known as the cardo and the decumanus. The walls frequently varied from perfect squares or rectangles, however, owing to local topography.
The sulcus primigenius was a common reverse type for coins issued by the colonies, often appearing with their first issues but sometimes continuing in use for centuries thereafter. The typical form was to show a magistrate goading a team of oxen with a raised whip. The design was sometimes localized through the inclusion of legionary vexillas or adjusting the cattle to reflect the size of local livestock. Nearly 30 examples of such issues are known, ranging from Iulia Constantia Zilil in Mauretania to Rhesaina in Mesopotamia.
Literature
In Vergil's Aeneid, the hero Aeneas sees the Carthaginians following the ritual and later lays out Lavinium in Italy with his own plow.
As noted by Varro, Pomponius, Isidore, and St. Augustine, the Romans generally derived the etymology of urbs ("city") itself from orbis ("sphere") with regard to the ritual furrow established at its creation.
See also
Agriculture in ancient Rome
Ancient Roman defensive walls
Glossary of Roman religion
References
Citations
Ancient Roman city planning
Roman agriculture
Roman law
Topography of the ancient city of Rome
Ancient Roman religious practices
Ancient Roman architecture
Ancient Roman geography
Urban geography
Urban design
City founding
Religious rituals
Animal festival or ritual
State ritual and ceremonies
Rituals attending construction | Sulcus primigenius | Engineering | 1,568 |
63,719,093 | https://en.wikipedia.org/wiki/Bachelier%20model | The Bachelier model is a model of an asset price under Brownian motion presented by Louis Bachelier on his PhD thesis The Theory of Speculation (Théorie de la spéculation, published 1900). It is also called "Normal Model" equivalently (as opposed to "Log-Normal Model" or "Black-Scholes Model"). One early criticism of the Bachelier model is that the probability distribution which he chose to use to describe stock prices allowed for negative prices. (His doctoral dissertation was graded down because of that feature.) The (much) later Black-Scholes-(Merton) Model addresses that issue by positing stock prices as following a log-normal distribution which does not allow negative values. This in turn, implies that returns follow a normal distribution.
On April 8, 2020, the CME Group posted the note CME Clearing Plan to Address the Potential of a Negative Underlying in Certain Energy Options Contracts, saying that once a price threshold was crossed, it would change its standard energy options model from one based on Geometric Brownian Motion and the Black–Scholes model to the Bachelier model. On April 20, 2020, oil futures reached negative values for the first time in history, and the Bachelier model took on an important role in option pricing and risk management.
The European analytic formula for this model based on a risk neutral argument is derived in Analytic Formula for the European Normal Black Scholes Formula (Kazuhiro Iwasawa, New York University, December 2, 2001).
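The closed-form call price under the model is short enough to state in code. The following Python sketch is illustrative only and is not taken from the paper cited above; it assumes a forward price F, a strike K, a "normal" volatility sigma quoted in price units per square root of a year, a time to expiry T in years, and a discount factor df, all of which are example parameters rather than values from the article:

from math import erf, exp, pi, sqrt

def norm_pdf(x):
    # Standard normal density.
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def norm_cdf(x):
    # Standard normal cumulative distribution function via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bachelier_call(F, K, sigma, T, df=1.0):
    # Discounted expected payoff E[max(F_T - K, 0)] when F_T is normally
    # distributed around F with standard deviation sigma * sqrt(T).
    if sigma <= 0.0 or T <= 0.0:
        return df * max(F - K, 0.0)
    s = sigma * sqrt(T)
    d = (F - K) / s
    return df * ((F - K) * norm_cdf(d) + s * norm_pdf(d))

# An at-the-money option reduces to df * sigma * sqrt(T / (2 * pi)).
print(bachelier_call(F=100.0, K=100.0, sigma=20.0, T=0.25))  # about 3.99

Because the price distribution is normal, the same expression works unchanged for negative forwards or strikes, which is what made the model attractive once negative prices became possible.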
The implied volatility under the Bachelier model can be obtained by an accurate numerical approximation.
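As a hedged illustration of that numerical approach (reusing the bachelier_call function from the sketch above and using plain bisection rather than any particular published approximation), the implied normal volatility can be recovered from a quoted price as follows:

def implied_normal_vol(price, F, K, T, df=1.0, lo=1e-12, hi=1e4, tol=1e-12):
    # Bisection on sigma: the Bachelier call price is increasing in sigma,
    # so a simple bracketing search converges to the implied normal volatility.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bachelier_call(F, K, mid, T, df) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

quoted = bachelier_call(F=100.0, K=105.0, sigma=20.0, T=0.25)
print(implied_normal_vol(quoted, F=100.0, K=105.0, T=0.25))  # about 20.0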
For an extensive review of the Bachelier model, see the review paper A Black-Scholes User's Guide to the Bachelier Model, which summarizes the results on volatility conversion, risk management, stochastic volatility, and barrier options pricing to facilitate the model transition. The paper also connects the Black-Scholes and Bachelier models by using the displaced Black-Scholes model as a model family.
References
Energy economics
Finance theories
Financial models
Options (finance) | Bachelier model | Environmental_science | 433 |
42,434,747 | https://en.wikipedia.org/wiki/Game%20design | Game design is the process of creating and shaping the mechanics, systems, rules, and gameplay of a game. Game design processes apply to board games, card games, dice games, casino games, role-playing games, sports, war games, or simulation games. In Elements of Game Design, game designer Robert Zubek defines game design by breaking it down into three elements:
Game mechanics and systems, which are the rules and objects in the game.
Gameplay, which is the interaction between the player and the mechanics and systems. In Chris Crawford on Game Design, the author summarizes gameplay as "what the player does".
Player experience, which is how users feel when they are playing the game.
In academic research, game design falls within the field of game studies (not to be confused with game theory, which studies strategic decision making, primarily in non-game situations).
Process of design
Game design is part of a game's development from concept to final form. Typically, the development process is iterative, with repeated phases of testing and revision. During revision, additional design or re-design may be needed.
Development team
Game designer
A game designer (or inventor) is a person who invents a game's concept, central mechanisms, rules, and themes. Game designers may work alone or in teams.
Game developer
A game developer is a person who fleshes out the details of a game's design, oversees its testing, and revises the game in response to player feedback.
Often game designers also do development work on the same project. However, some publishers commission extensive development of games to suit their target audience after licensing a game from a designer. For larger games, such as collectible card games, designers and developers work in teams with separate roles.
Game artist
A game artist creates visual art for games. Game artists are often vital to role-playing games and collectible card games.
Many graphic elements of games are created by the designer when producing a prototype of the game, revised by the developer based on testing, and then further refined by the artist and combined with artwork as a game is prepared for publication or release.
Concept
A game concept is an idea for a game, briefly describing its core play mechanisms, objectives, themes, and who the players represent.
A game concept may be pitched to a game publisher in a similar manner as film ideas are pitched to potential film producers. Alternatively, game publishers holding a game license to intellectual property in other media may solicit game concepts from several designers before picking one to design a game.
Design
During design, a game concept is fleshed out. Mechanisms are specified in terms of components (boards, cards, tokens, etc.) and rules. The play sequence and possible player actions are defined, as well as how the game starts, ends, and win conditions (if any).
Prototypes and play testing
A game prototype is a draft version of a game used for testing. Uses of prototyping include exploring new game design possibilities and technologies.
Play testing is a major part of game development. During testing, players play the prototype and provide feedback on its gameplay, the usability of its components, the clarity of its goals and rules, ease of learning, and entertainment value. During testing, various balance issues may be identified, requiring changes to the game's design. The developer then revises the design, components, presentation, and rules before testing it again. Later testing may take place with focus groups to test consumer reactions before publication.
History
Folk process
Many games have ancient origins and were not designed in the modern sense, but gradually evolved over time through play. The rules of these games were not codified until early modern times and their features gradually developed and changed through the folk process. For example, sports (see history of sports), gambling, and board games are known, respectively, to have existed for at least nine thousand, six thousand, and four thousand years. Tabletop games played today whose descent can be traced from ancient times include chess, go, pachisi, mancala, and pick-up sticks. These games are not considered to have had a designer or been the result of a contemporary design process.
After the rise of commercial game publishing in the late 19th century, many games that had formerly evolved via folk processes became commercial properties, often with custom scoring pads or preprepared material. For example, the similar public domain games Generala, Yacht, and Yatzy led to the commercial game Yahtzee in the mid-1950s.
Today, many commercial games, such as Taboo, Balderdash, Pictionary, or Time's Up!, are descended from traditional parlour games. Adapting traditional games to become commercial properties is an example of game design. Similarly, many sports, such as soccer and baseball, are the result of folk processes, while others were designed, such as basketball, invented in 1891 by James Naismith.
New media
The first games in a new medium are frequently adaptations of older games. Later games often exploit the distinctive properties of a new medium. Adapting older games and creating original games for new media are both examples of game design.
Technological advances have provided new media for games throughout history. For example, accurate topographic maps produced as lithographs and provided free to Prussian officers helped popularize wargaming. Cheap bookbinding (printed labels wrapped around cardboard) led to mass-produced board games with custom boards. Inexpensive (hollow) lead figurine casting contributed to the development of miniature wargaming. Cheap custom dice led to poker dice. Flying discs led to Ultimate frisbee.
Purposes
Games can be designed for entertainment, education, exercise or experimental purposes. Additionally, elements and principles of game design can be applied to other interactions, in the form of gamification. Games have historically inspired seminal research in the fields of probability, artificial intelligence, economics, and optimization theory. Applying game design to itself is a current research topic in metadesign.
Educational purposes
By learning through play children can develop social and cognitive skills, mature emotionally, and gain the self-confidence required to engage in new experiences and environments. Key ways that young children learn include playing, being with other people, being active, exploring and new experiences, talking to themselves, communicating with others, meeting physical and mental challenges, being shown how to do new things, practicing and repeating skills, and having fun.
Play develops children's content knowledge and provides children the opportunity to develop social skills, competencies, and disposition to learn. Play-based learning is based on a Vygotskian model of scaffolding where the teacher pays attention to specific elements of the play activity and provides encouragement and feedback on children's learning. When children engage in real-life and imaginary activities, play can be challenging in children's thinking. To extend the learning process, sensitive intervention can be provided with adult support when necessary during play-based learning.
Design issues by game type
Different types of games pose specific game design issues.
Board games
Board game design is the development of rules and presentational aspects of a board game. When a player takes part in a game, it is the player's self-subjection to the rules that create a sense of purpose for the duration of the game. Maintaining the players' interest throughout the gameplay experience is the goal of board game design. To achieve this, board game designers emphasize different aspects such as social interaction, strategy, and competition, and target players of differing needs by providing for short versus long-play, and luck versus skill. Beyond this, board game design reflects the culture in which the board game is produced.
The most ancient board games known today are over 5000 years old. They are frequently abstract in character and their design is primarily focused on a core set of simple rules. Of those that are still played today, games like go, mancala, and chess have gone through many presentational and/or rule variations. In the case of chess, for example, new variants are developed constantly, to focus on certain aspects of the game, or just for variation's sake.
Traditional board games date from the nineteenth and early twentieth century. Whereas ancient board game design was primarily focused on rules alone, traditional board games were often influenced by Victorian mores. Academic (e.g. history and geography) and moral didacticism were important design features for traditional games, and Puritan associations between dice and the Devil meant that early American game designers eschewed their use in board games entirely. Even traditional games that did use dice, like Monopoly (based on the 1906 The Landlord's Game), were rooted in educational efforts to explain political concepts to the masses. By the 1930s and 1940s, board game design began to emphasize amusement over education, and characters from comic strips, radio programmes, and (in the 1950s) television shows began to be featured in board game adaptations.
Recent developments in modern board game design can be traced to the 1980s in Germany, and have led to the increased popularity of "German-style board games" (also known as "Eurogames" or "designer games"). The design emphasis of these board games is to give players meaningful choices. This is manifested by eliminating elements like randomness and luck to be replaced by skill, strategy, and resource competition, by removing the potential for players to fall irreversibly behind in the early stages of a game, and by reducing the number of rules and possible player options to produce what Alan R. Moon has described as "elegant game design". The concept of elegant game design has been identified by The Boston Globe's Leon Neyfakh as related to Mihaly Csikszentmihalyi's concept of "flow" from his 1990 book Flow: The Psychology of Optimal Experience.
Modern technological advances have had a democratizing effect on board game production, with services like Kickstarter providing designers with essential startup capital and tools like 3D printers facilitating the production of game pieces and board game prototypes (Hesse, Monica. "Rolling the dice on a jolly good pastime". The Washington Post, 29 August 2011). A modern adaptation of figure games is the miniature wargame, such as Warhammer 40,000.
Card games
Card games can be designed as gambling games, such as Poker, or simply for fun, such as Go Fish. As cards are typically shuffled and revealed gradually during play, most card games involve randomness, either initially or during play, and hidden information, such as the cards in a player's hand.
How players play their cards, revealing information and interacting with previous plays as they do so, is central to card game design. In partnership card games, such as Bridge, rules limiting communication between players on the same team become an important part of the game design. This idea of limited communication has been extended to cooperative card games, such as Hanabi.
Dice games
Dice games differ from card games in that each throw of the dice is an independent event, whereas the odds of a given card being drawn are affected by all the previous cards drawn or revealed from a deck. For this reason, dice game design often centers around forming scoring combinations and managing re-rolls, either by limiting their number, as in Yahtzee, or by introducing a press-your-luck element, as in Can't Stop.
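The difference between independent dice throws and dependent card draws can be made concrete with a little arithmetic. The short Python sketch below is a worked illustration only and is not drawn from any of the games mentioned:

from fractions import Fraction

# Dice: each throw is independent, so the chance of a six is always 1/6,
# no matter what has been rolled before.
p_six_any_throw = Fraction(1, 6)

# Cards: drawing without replacement changes the odds as the deck shrinks.
p_ace_full_deck = Fraction(4, 52)         # 1/13 with all four aces in 52 cards
p_ace_after_three_gone = Fraction(1, 49)  # one ace left among 49 remaining cards

print(p_six_any_throw, p_ace_full_deck, p_ace_after_three_gone)  # 1/6 1/13 1/49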
Casino games
Casino game design can entail the creation of an entirely new casino game, the creation of a variation on an existing casino game, or the creation of a new side bet on an existing casino game.
Casino game mathematician Michael Shackleford has noted that it is much more common for casino game designers today to make successful variations than entirely new casino games. Gambling columnist John Grochowski points to the emergence of community-style slot machines in the mid-1990s, for example, as a successful variation on an existing casino game type.
Unlike the majority of other games which are designed primarily in the interest of the player, one of the central aims of casino game design is to optimize the house advantage and maximize revenue from gamblers. Successful casino game design works to provide entertainment for the player and revenue for the gambling house.
To maximise player entertainment, casino games are designed with simple easy-to-learn rules that emphasize winning (i.e. whose rules enumerate many victory conditions and few loss conditions), and that provide players with a variety of different gameplay postures (e.g. card hands). Player entertainment value is also enhanced by providing gamblers with familiar gaming elements (e.g. dice and cards) in new casino games.
To maximise success for the gambling house, casino games are designed to be easy for croupiers to operate and for pit managers to oversee.
The two most fundamental rules of casino game design are that the games must be non-fraudable (including being as nearly as possible immune from advantage gambling) and that they must mathematically favor the house. Shackleford suggests that the optimum casino game design should give the house an edge smaller than 5%.
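As a hedged, simplified illustration of what such an edge means, the house edge of a plain win/lose wager is the expected player loss per unit bet. The roulette figures below are standard textbook numbers used only as an example; they are not drawn from Shackleford's work:

from fractions import Fraction

def house_edge(p_win, net_payout):
    # Expected loss per unit wagered for a bet that pays net_payout on a win
    # and loses the stake otherwise.
    return -(p_win * net_payout - (1 - p_win))

# Even-money bet on single-zero roulette: 18 winning pockets out of 37.
print(float(house_edge(Fraction(18, 37), 1)))  # about 0.027, i.e. 2.7%
# The same bet on double-zero roulette: 18 winning pockets out of 38.
print(float(house_edge(Fraction(18, 38), 1)))  # about 0.0526, i.e. 5.26%

By this measure the double-zero wheel slightly exceeds the 5% guideline quoted above, while the single-zero wheel falls well under it.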
Tabletop role-playing games
The design of tabletop role-playing games typically requires the establishment of setting, characters, and gameplay rules or mechanics. After a role-playing game is produced, additional design elements are often devised by the players themselves. In many instances, for example, character creation is left to the players.
Early role-playing game theories developed on indie role-playing game design forums in the early 2000s.
Game studies
Game design is a topic of study in the academic field of game studies. Game studies is a discipline that deals with the critical study of games, game design, players, and their role in society and culture. Prior to the late-twentieth century, the academic study of games was rare and limited to fields such as history and anthropology. As the video game revolution took off in the early 1980s, so did academic interest in games, resulting in a field that draws on diverse methodologies and schools of thought.
Social scientific approaches have concerned themselves with the question of, "What do games do to people?" Using tools and methods such as surveys, controlled laboratory experiments, and ethnography, researchers have investigated the impacts that playing games have on people and the role of games in everyday life.
Humanities approaches have concerned themselves with the question of, "What meanings are made through games?" Using tools and methods such as interviews, ethnographies, and participant observation, researchers have investigated the various roles that games play in people's lives and the meanings players assign to their experiences.
From within the game industry, central questions include, "How can we create better games?" and, "What makes a game good?" "Good" can be taken to mean different things, including providing an entertaining experience, being easy to learn and play, being innovative, educating the players, and/or generating novel experiences.
See also
Gamification
Play (activity)
Video game design
Notes
References
Game theory
Leisure activities | Game design | Mathematics,Engineering | 3,060 |
15,759,971 | https://en.wikipedia.org/wiki/Mamadsho%20Ilolov | Mamadsho Ilolovich Ilolov is President of the Tajik Academy of Sciences, a former member of the Tajikistan Parliament and former Minister of Labor and Social Protection of Population of the Republic of Tajikistan, and former rector of Khorugh State University.
Ilolov was born on March 14, 1948, in GBAO, Republic of Tajikistan. In 2007 he was awarded the degrees of PhD and Grand PhD and a Certificate of Full Professor from the World Information Distributed University, a known diploma mill. He is currently the President of the Academy of Sciences of the Republic of Tajikistan.
In January 2008, Ilolov held a press conference in Dushanbe to announce the creation of a new nanotechnology branch of the Tajik Academy of Science.
Research fields
Differential Equations, Optimal Control Theory, History of Mathematics;
Education
Doctor of Physical and Mathematical Sciences (Grand PhD) of Mathematics, Kyiv 1992,
PhD on Mathematics, Institute of Mathematics, Kiev, 1980,
Diploma, Voronezh State University, Voronezh 1970;
Awards
Medal "Pushkin", awarded by the President of Russian Federation, 2007,
1st Degree order "Sharaf", awarded by the President of the Republic of Tajikistan, 2004
Memberships
Academician of the Academy of Sciences of Tajikistan, 2005,
Foreign Member of the Academy of Sciences of Kazakhstan, 2005,
Corresponding Member of the Academy of Sciences of the Republic of Tajikistan, 1997,
Member of the American Mathematical Society, 1980;
Professional experience
Academy of Sciences of the Republic of Tajikistan, President, 2005–present;
Ministry of Labor and Social Protection of Population of the Republic of Tajikistan, Minister, 2003–2005,
Parliament of the Republic of Tajikistan, Chairman of Committee on Social Policy, 1995–2003,
Khorugh State University, Rector, 1992–1995,
Ministry of Education of Tajikistan, Head of Department, 1991–1992,
Tajik State National University, Assistant of Professor, Professor, 1970–1991;
Personal life
Ilolov Mamadsho and Nazirova Guldavlat married in February 1983, and in December 1983 the couple's first daughter, Purnur Ilolova, was born in Dushanbe, Tajikistan. Their son, Ahmadsho, was born in Khorog, Tajikistan in 1985. In 1988, their second daughter Sadaf was born in Dushanbe.
Publications
Books - 3, scientific papers - 75
External links
World Information Distributed University awarded Ilolov with Grand PhD
Info on Academy of Sciences of Tajikistan on Interacademies website
Ilolov in an International Conference
References
1948 births
Ethnic Tajik people
Living people
Soviet mathematicians
Nanotechnologists
21st-century Tajikistani scientists
Members of the Assembly of Representatives (Tajikistan)
Members of the Tajik Academy of Sciences
Voronezh State University alumni
Recipients of the Medal of Pushkin
Academic staff of Tajik National University
Tajik people from the Soviet Union | Mamadsho Ilolov | Materials_science | 593 |
11,660,809 | https://en.wikipedia.org/wiki/Mobile-ITX | Mobile-ITX is the smallest (as of 2009) x86-compliant motherboard form factor, presented by VIA Technologies in December 2009. The motherboard size (CPU module) is . There are no computer ports on the CPU module, so an I/O carrier board is necessary. The design is intended for medical, transportation and military embedded markets.
History
The Mobile-ITX form factor was announced by VIA Technologies at Computex in June 2007. The motherboard size of the first prototypes was . The design was intended for ultra-mobile computing devices such as smartphones or UMPCs.
The prototype boards shown to date include a x86-compliant 1 GHz VIA C7-M processor, 256 or 512 megabytes of RAM, a modified version of the VIA CX700 chipset (called the CX700S), an interface for a cellular radio module (demonstration boards contain a CDMA radio), a DC-DC electrical converter, and various connecting interfaces.
At the announcement, an ultra-mobile PC reference design was shown running Windows XP Embedded.
Notes and references
External links
Mobile-ITX Specification
Motherboard form factors
IBM PC compatibles
Mobile computers | Mobile-ITX | Technology | 240 |
5,831,822 | https://en.wikipedia.org/wiki/Texas%20Institute%20for%20Genomic%20Medicine | The Texas A&M Institute for Genomic Medicine (TIGM) is a research institute of Texas A&M AgriLife Research. It was founded in 2005 under a $50 million award from the Texas Enterprise Fund to accelerate the pace of medical discoveries and foster the development of the biotechnology industry in Texas.
TIGM helps researchers gain faster access to the genetically engineered knockout mice used in medical research. TIGM owns and maintains the world's largest library of embryonic stem cells for C57BL/6 mice. In addition, TIGM has contracted access to the world's largest library of genetically modified 129 mouse cells. The Institute headquarters and laboratory facilities are based on the main campus of Texas A&M University in College Station, Texas.
References
External links
Texas Institute for Genomic Medicine Homepage
Biotechnology organizations
Genomics organizations
Organizations established in 2005
Organizations based in Texas | Texas Institute for Genomic Medicine | Engineering,Biology | 176 |
28,752,783 | https://en.wikipedia.org/wiki/PlusCal | PlusCal (formerly called +CAL) is a formal specification language created by Leslie Lamport, which transpiles to TLA+. In contrast to TLA+'s action-oriented focus on distributed systems, PlusCal most resembles an imperative programming language and is better suited to specifying sequential algorithms. PlusCal was designed to replace pseudocode, retaining its simplicity while providing a formally defined and verifiable language. A one-bit clock is written in PlusCal as follows:
--fair algorithm OneBitClock {
  \* The clock's single bit, initially either 0 or 1.
  variable clock \in {0, 1};
  {
    \* Flip the bit forever.
    while (TRUE) {
      if (clock = 0)
        clock := 1
      else
        clock := 0
    }
  }
}
See also
FizzBee
TLA+
Pseudocode
References
External links
PlusCal tools and documentation are found on the PlusCal Algorithm Language page.
Formal methods
Formal specification languages
Algorithm description languages
Microsoft Research | PlusCal | Technology,Engineering | 184 |
76,117,622 | https://en.wikipedia.org/wiki/Welding%20table | A welding table is a type of workbench used for holding workpieces during welding. They are made of fireproof and electrically conductive materials, and often have good possibilities for clamping workpieces down, providing increased stability, precision and security. In addition to a welding machine and personal protective equipment, they are often used together with accessories such as measuring tools, magnets and angles. Some welders build their own welding tables.
Fire safety
They are often made of steel, and some welding tables have a zinc plating to prevent slag from sticking to the table. The table can withstand high temperatures and splashes of hot slag, unlike a wooden table, which can catch fire more easily. This reduces the risk of fire.
Grounding
They are often connected to ground to prevent voltage leaks during work and to protect the welder from electrical shocks, but also to prevent radio noise from the welding process which otherwise can affect nearby electronics.
Ergonomics
Most tables have an adjustable height so that the welder can sit or stand in a comfortable and ergonomic position.
Some also have the possibility for the work surface to be tilted or rotated so that the workpiece can get the desired position and orientation (pose).
Mass, size and mobility
The space available and the size of the things to be welded will dictate the need. Some tables are very heavy and solid, and can withstand heavy workpieces, while other models are lighter and can only withstand lighter workpieces.
Stationary tables
Some tables are heavy, tough and stable, and are meant to be stationary in one place in a workshop, and are capable of withstanding loads from very heavy objects. A permanent welding table usually has a length of at least 2 meters.
Mobile tables
Some tables are portable so that they can be moved around the workshop as needed, for example in the form of mobile trolley tables with lockable wheels. There has been a trend with smaller tables that can be put together to form a longer work surface depending on the shape and size of what is to be welded, following a similar principle as a sawhorse. Some tables are extendable so that the size can be reduced or increased as necessary.
See also
Workbench (woodworking)
References
Welding
Workbenches | Welding table | Engineering | 461 |
31,478,429 | https://en.wikipedia.org/wiki/Centre%20for%20Gene%20Regulation%20and%20Expression | The Centre for Gene Regulation and Expression, located within the School of Life Sciences, University of Dundee, is a research facility working in the field of gene expression and chromosome biology. Previously part of the Dundee Biocentre and receiving significant Wellcome Trust funding from 1995 onwards, it was awarded Wellcome Trust Centre status in 2008. Professor Tom Owen-Hughes is the centre's director.
The centre aims to enhance our understanding of how genes are regulated at both the single cell and whole organism level. Researchers use a wide range of advanced techniques, including live cell fluorescent imaging and mass spectrometry-based proteomics, to explore the functions of key proteins and molecular mechanisms in cell biology.
Research and discoveries
Live cell imaging and proteomic studies have allowed researchers at the centre to gain fresh understanding of protein function and cell behaviour.
The centre is studying many aspects of the cell cycle, including the way in which chromosomes replicate and separate during cell division and how DNA damage is detected. Failure of these events can lead to major faults within a genome, potentially leading to the rise of cancerous cells. The centre is also investigating how DNA is tightly wound and compacted so that it can fit into the nuclei of eukaryotic cells, as well as the protein-DNA complexes that are involved in this packaging. The controlled unravelling of DNA is an important step in the regulation of gene function.
Researchers
Angus Lamond, a Wellcome Principal Research Fellow, studies the composition and function of organelles and multiprotein complexes found within the nucleus. This work is helping to explain how a cell's nucleus is organised, an area that has particular importance to human diseases such as inherited genetic conditions which can have modified or disrupted organelles.
Jason Swedlow is a Wellcome Trust Senior Research Fellow and investigates how chromosomes are separated during cell division. The driving force behind this process are strands known as microtubules, which pull the chromosomes apart. His work looks at specialised structures known as kinetochores and the mechanisms which monitor the correct attachment of microtubules to the chromosomes.
Tomo Tanaka studies the processes by which eukaryotic cells maintain their genetic integrity. His group use budding yeast to study chromosome duplication and segregation. By understanding the processes that occur during cell division, it is hoped that a better knowledge will be gained of human diseases such as cancer which are often characterised by chromosome instability.
See also
Wellcome Trust
Wellcome Centre for Human Genetics
Wellcome Trust Centre for Neuroimaging
Wellcome Trust Centre for Stem Cell Research
Wellcome Trust Centre for Cell-Matrix Research
References
External links
Centre for Gene Regulation and Expression website
Wellcome Trust website
Biological research institutes in the United Kingdom
Gene expression
Genetics in the United Kingdom
Research institutes in Scotland
Centre for Gene Regulation and Expression
Wellcome Trust
Genetics or genomics research institutions | Centre for Gene Regulation and Expression | Chemistry,Biology | 582 |
6,436,309 | https://en.wikipedia.org/wiki/Master%20of%20Physics | A Master of Physics honours (or MPhys (Hons)) degree is a specific master's degree for courses in the field of physics.
United Kingdom
In England and Wales, the MPhys is an undergraduate award available after pursuing a four-year course of study at a university. In Scotland the course has a five-year duration. In some universities, the degree has the variant abbreviation MSci. These are taught courses, with a research element in the final year — this can vary from a small component to an entire year working with a research group — and are not available as postgraduate qualifications in most cases, although depending on institution the final year can be considered as approximately equivalent to an MSc.
Structure
In terms of course structure, MPhys degrees usually follow the pattern familiar from bachelor's degrees with lectures, laboratory work, coursework and exams each year. Usually one, or more commonly two, substantial projects are to be completed in the fourth year which may well have research elements. At the end of the second or third years, there is usually a threshold of academic performance in examinations to be reached to allow progression into the final year. Final results are, in most cases, awarded on the standard British undergraduate degree classification scale, although some universities award something structurally similar to 'Distinction', 'Merit', 'Pass' or 'Fail', as this is often the way that taught postgraduate master's degrees are classified.
Degree schemes
It is usual for there to be some variation in the MPhys schemes, to allow for students to study the area of physics which most interests them. For example, Lancaster University's physics department offer the following schemes:
MPhys Physics
MPhys Physics, Astrophysics and Cosmology
MPhys Physics with Particle Physics and Cosmology
MPhys Physics with Space Science
MPhys Physics with Biomedical Physics
MPhys Theoretical Physics
MPhys Theoretical Physics with Mathematics
These schemes will usually incorporate the same core modules with additional scheme specific modules. Students tend to take all the same core modules during their first year and start to specialise in their second year. In some cases, optional modules can be taken from other schemes.
See also
British degree abbreviations
Bachelor's degrees
Master's degrees
References
Physics
Physics education | Master of Physics | Physics | 450 |
42,156,215 | https://en.wikipedia.org/wiki/Guardian%20%28polymer%29 | Guardian is the trademark name of a polymer originally manufactured by Securency International, a joint venture between the Reserve Bank of Australia and Innovia Films Ltd. The latter completed acquisition of the former's stake in 2013.
Its production involves gravity feeding a molten polymer, composed of extruded polypropylene and other polyolefins, through a four-storey chamber. This creates sheets of the substrate used as the base material by many central banks in the printing of polymer banknotes.
Production
Polypropylene is processed to create pellets. These pellets are extruded from a core extruder in conjunction with polyolefin pellets from two "skin layer" extruders, and are combined into a molten polymer. This consists of a 37.5 μm thick polypropylene sheet sandwiched between two 0.1 μm polyolefin sheets, creating a thin film 37.7 μm thick.
The molten polymer undergoes snap cooling as it passes by gravity feeding through a brass mandrel, which imparts on the thin film many properties, including its transparency. The cast tube material is then reheated and blown into a large bubble using air pressure and temperature. At the base of the four-storey chamber convergence rollers collapse the tube into a flat sheet consisting of two layers of the thin film. This creates the base biaxially-oriented polypropylene substrate of 75.4 μm thickness, called ClarityC by Innovia Films.
The base substrate is slit as it exits the convergence rollers. Four thick layers of (usually white) opacifier are applied to the substrate, two on the upper surface and two on the lower surface. A mask prevents the deposition of the opacifier on parts of the substrate that are intended to remain transparent. These overcoat layers protect the substrate from soiling and impart on it its characteristic texture, and increase the overall thickness to 87.5 μm. The resulting product is the Guardian substrate.
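The layer thicknesses quoted above can be cross-checked with simple arithmetic. The Python sketch below only restates the figures already given; the per-layer opacifier thickness is an implied value derived from them, not one stated in any source:

# Figures from the description above, in micrometres.
core = 37.5             # polypropylene core of the thin film
skin = 0.1              # each of the two polyolefin skin layers
film = core + 2 * skin  # 37.7: one blown film
base = 2 * film         # 75.4: two films collapsed into the base substrate
total = 87.5            # finished Guardian substrate
opacifier_total = total - base
print(round(film, 1), round(base, 1), round(opacifier_total / 4, 3))
# 37.7 75.4 3.025  -> roughly 3 micrometres per opacifier layer (implied)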
The opacifier conversion phase involves the use of resin and solvents, creating volatile organic compounds (VOCs) as by-products that are combusted in a thermal oxidizer. The resulting polymer substrate then passes through a rotary printing press using chrome-plated copper cylinders. After printing, the holographic security foil is incorporated into the base substrate. This is then cut into sheets and transported to the banknote printing companies in wooden boxes as a secure shipment.
Properties
Guardian is a non-porous and non-fibrous substrate. Because of this, it is "impervious to water and other liquids", and so remains clean for longer than a paper substrate. It is difficult to initiate a tear on the substrate, which has higher tear initiation resistance than paper.
Polymer banknotes
Guardian is used in the printing of polymer banknotes by many central banks.
It is the base material used for currencies printed by numerous central banks; notable issues include the following.
In 1993, the Bank of Indonesia issued a commemorative banknote and the Central Bank of Kuwait issued a د.ك1 banknote. In 1998, the Bank Negara Malaysia issued a commemorative banknote, and the Central Bank of Sri Lanka issued a commemorative Rs200 banknote. In 1999, the Northern Bank of Northern Ireland issued a commemorative banknote, and the Central Bank of the Republic of China in Taiwan issued a commemorative banknote. In 2000, the Central Bank of Brazil issued a commemorative banknote and the People's Bank of China issued a commemorative ¥100 banknote. In 2001, the Central Bank of Solomon Islands issued a commemorative SI$2 banknote. In 2009, the Bank of Mexico issued a commemorative $100 banknote.
Notes
References
Further reading
Polymers
Brand name materials | Guardian (polymer) | Chemistry,Materials_science | 762 |
857,836 | https://en.wikipedia.org/wiki/List%20of%20Saturn-crossing%20minor%20planets | A Saturn-crosser is a minor planet whose orbit crosses that of Saturn. The known numbered Saturn-crossers (as of 2005) are listed below. There is only one inner-grazer (944 Hidalgo) and no outer-grazers or co-orbitals known; most if not all of the crossers are centaurs. is a damocloid.
Notes: † inner-grazer.
944 Hidalgo †
2060 Chiron
5145 Pholus
5335 Damocles
8405 Asbolus
20461 Dioretsa
31824 Elatus
32532 Thereus
37117 Narcissus
52872 Okyrhoe
60558 Echeclus
See also
List of centaurs (small Solar System bodies)
List of Mercury-crossing minor planets
List of Venus-crossing minor planets
List of Earth-crossing minor planets
List of Mars-crossing minor planets
List of Jupiter-crossing minor planets
List of Uranus-crossing minor planets
List of Neptune-crossing minor planets
Saturn
Saturn-crossing
Saturn-crossing
Minor planet
Solar System | List of Saturn-crossing minor planets | Astronomy | 219 |
44,663,481 | https://en.wikipedia.org/wiki/Seabed%20tractor | A seabed tractor is a type of remotely operated underwater vehicle. They can be used for submarine cable laying or burial of cables or pipelines. This type of vehicle consists of a tracked Crawler excavator device, configured for the task. It is controlled from the vessel by an umbilical cable. Operating seabed tractors is similar to operating Remotely operated underwater vehicles. The seabed tractor operator drives the unit as if on board, using cameras on the unit for visual feedback.
A typical operation involves lowering the seabed tractor onto the seabed over the pipeline when the location has been confirmed. The weight of the tractor remains mostly on the gantry, being controlled by heave-compensation gear on the gantry so that only about 40 tonnes of the weight rests on the seabed. The position reference of the vessel is then transferred to the trim-cube sensors on the seabed tractor support wires, which should remain vertical. The position of the vessel is controlled by the movements of the tractor, with the trim-cube sensors feeding back wire angle data to the dynamic positioning (DP) system, which corrects the position of the vessel to keep the tractor wires vertical. The DP system would be configured with the centre of rotation located on the trencher. Heading can thus be adjusted according to the environment or any other constraints.
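The feedback loop described above can be sketched abstractly. The Python fragment below is purely illustrative: a toy proportional correction rather than an actual dynamic positioning implementation, with the wire length and gain as invented example values:

import math

def vessel_correction(wire_angle_deg, wire_length_m, gain=0.5):
    # Horizontal offset between vessel and tractor implied by a non-vertical
    # support wire, scaled by a gain to give the position correction that the
    # DP system would request to bring the wire back to vertical.
    offset = wire_length_m * math.sin(math.radians(wire_angle_deg))
    return gain * offset

# A 2 degree wire angle on a 100 m support wire implies roughly 3.5 m of offset;
# this toy controller asks for about half of that per update cycle.
print(round(vessel_correction(2.0, 100.0), 2))  # 1.75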
When laying a cable, the cable itself is a hazard as it can get caught on the umbilical cable. Seabed tractors are usually somewhat slower and less agile than other ROUVs. They can be used together with another remotely operated underwater vehicle to enhance the overview and to survey the progress and performance.
References
Remotely operated underwater vehicles
Tractors | Seabed tractor | Engineering | 340 |
20,596,907 | https://en.wikipedia.org/wiki/Jollo | Jollo was an online machine translation service where users could instantly translate texts into 23 languages, request human translations from a community of volunteers around the world, and compare the correctness of several leading machine translation websites. It was discontinued in 2012.
System
Jollo was a free Web 2.0 website that attempted to improve the way in which people translate online through the use of existing machine translation websites and a community of volunteers who corrected and rated translations. The system relied on a methodology similar to computer-assisted translation to ensure translation quality, and featured a public translation memory that recorded past translations.
Jollo received some notable media attention, including in The Daily Telegraph. According to the blog KillerStartups, Jollo combined the benefits of the speed of machine translations and human reviews to ensure translation quality. According to Jeffrey Hill from The English Blog, the community features made Jollo an interesting alternative to other online translation services.
Development
The Jollo website was classified as beta. It was developed using LAMP and was praised for its colorful graphics and simple user interface.
Jollo offered a simple web-based API that could be used for translations. For example, the URL: http://www.jollo.com/translate.php?st=I%20love%20you&sl=en&tl=zh was used to translate the sentence "I love you" from English into Chinese.
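While the service was live, a request against that URL pattern could have been issued from a few lines of Python. The sketch below is illustrative only: it reuses the parameter names (st, sl, tl) visible in the example URL above, and it would no longer return anything since the site was discontinued in 2012:

from urllib.parse import urlencode
from urllib.request import urlopen

def jollo_translate(text, source_lang, target_lang):
    # Build the same kind of GET request as the example URL in the text.
    query = urlencode({"st": text, "sl": source_lang, "tl": target_lang})
    url = "http://www.jollo.com/translate.php?" + query
    with urlopen(url) as response:  # would fail today: the site is offline
        return response.read().decode("utf-8", errors="replace")

# Equivalent to the example URL above:
# jollo_translate("I love you", "en", "zh")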
References
External links
Archive of the Jollo page
Defunct websites
Machine translation
Internet properties disestablished in 2012 | Jollo | Technology | 306 |
61,606,868 | https://en.wikipedia.org/wiki/NGC%204316 | NGC 4316 is an edge-on spiral galaxy located about 70 million light-years away in the constellation Virgo. It was discovered by astronomer Wilhelm Tempel on March 17, 1882. NGC 4316 is a member of the Virgo Cluster and is classified as LINER and as a Seyfert galaxy.
The galaxy has undergone ram-pressure stripping in the past.
On February 28, 2003 a type II supernova known as SN 2003bk was discovered in NGC 4316.
References
External links
Spiral galaxies
LINER galaxies
Seyfert galaxies
Virgo Cluster
Virgo (constellation)
4316
040119
Astronomical objects discovered in 1882
07447 | NGC 4316 | Astronomy | 135 |
61,717,760 | https://en.wikipedia.org/wiki/Harzia%20acremonioides | Harzia acremonioides is a species of seed-borne fungus that occurs in the soil. It is classified in the family Ceratostomataceae under the genus Harzia. As of 1974, the genus Harzia contained three accepted species: H. acremonioides, H. verrucose, and H. velatea. Within the genus, H. acremonioides is one of the most common species and can be found in all climate regions around the world.
History and taxonomy
The species was first named Acremonium atrum in 1837 by Corda. In 1871, Harz named it Monosporium acremonioides. In 1886, the species received its well-known name Acremoniella atra (Corda) Sacc. However, Holubová-Jechová pointed out that "atrum" usually refers to a black fungus, which does not match the species' appearance, so the legitimate name of the species was later replaced by Harzia acremonioides (Harz), given by Costantin in 1888. The species was named Monopodium uredopsis by Delacroix in 1890, and in 1904 it was named Eidamia acremonioides by Lindau, who placed the species under a new genus named Eidamia, but the legitimate name of the species remains unchanged.
The genus name Eidamia was in honour of Michael Emil Eduard Eidam (1845–1901), a German apothecary and botanist (mycologist) from Breslau.
Growth and morphology
Members of the genus Harzia have a hyaline mycelium, brown thick-walled blastoconidia, and hyaline conidiophores. As a member of the genus, H. acremonioides has spores that are large, one-celled, cinnamon brown or golden brown, ovoid to subglobose, and thick-walled; they are usually smooth-walled, though sometimes slightly wrinkled, and they tend to vary in size.
H. acremonioides reproduces asexually. At 20 °C on MEA, its colonies can reach about 3.3 cm in diameter in about five days, producing almost smooth-walled, obovoid conidia measuring 20–30 × 15–20 μm.
Growth of H. acremonioides can be obtained on potato mush agar, potato glucose agar, potato extract agar, and rice; slight growth can be obtained in solutions of sucrose and maltose and on a synthetic nutrient agar. The growth of H. acremonioides on rice is colored brown, and when the species is grown on potato mush agar at 20 °C, firm growth of mycelium is obtained, producing megaspores.
Physiology
For H. acremonioides, the production of conidia and the color of the megaspores are affected by temperature. At 20 °C, optimum growth is obtained in all the culture media employed, with numerous conidia and brown megaspores formed; at 30 °C, very little growth is obtained, no conidia are observed, and the majority of the megaspores are hyaline; on synthetic nutrient agar, no growth occurs at 25 or 30 °C.
In addition to temperature, the presence of other fungi may also have decided effects on the growth of H. acremonioides. When H. acremonioides is isolated from seeds, the fungus is always associated with another fungus named Alternaria tenuis auct. In the presence of Alternaria tenuis auct., H. acremonioides grows rapidly, producing abundant spores and mycelium, whereas in pure culture it grows much more slowly and produces less mycelium and fewer spores.
Habitat and ecology
H. acremonioides is classified as a plant-pathogenic fungus; however, it does not appear to be of serious pathological significance.
Generally, the species has been regarded as a saprotroph. It can acquire nutrients directly from wild and cultivated plants, including Pteridium aquilinum, beet, alfalfa, Heracleum sphondylium, Scrophularia nodosa, Urtica sp., Rumex acetosella, ryegrass and Chrysanthemum cinerariifolium.
H. acremonioides has been found widely distributed on various substrata. It has been reported from peat bogs in Ireland; soil in the Netherlands, Germany, Canada, the United States, the British Isles, Australia, Papua, Mozambique, Sierra Leone, Rhodesia and Kenya; karst caves in the USSR and Yugoslavia; and coniferous forests in Japan and Hungary, whereas the main recorded substrates are the seeds of different plants, often in association with Alternaria alternata. The species has been isolated from seeds from the Netherlands, Denmark, British Columbia, and Ontario, and the seeds associated with the species include Allium cepa L. (onion), Beta vulgaris L. (beet), Daucus carota L. var. sativa DC. (carrot), clover, peas, wheat, grass, cotton, radish, timothy and sorghum. It has also been isolated from rotting stems and leaves of clover, radish, Betula alba, tomato, beans, corn, Salsola kali and graminaceous plants.
Mycoparasitism
According to Urbasch's research in 1986, H. acremonioides can act as a biotrophic parasite of the species Stemphylium botryosum, using lobed contact cells that function as appressoria, and it causes little damage to the host. However, H. acremonioides can still grow without host fungi.
Besides its ability to parasitize Stemphylium botryosum, H. acremonioides can also invade the sclerotia of some species, resulting in a drastic reduction in the number of viable sclerotia. To control H. acremonioides, pycnidial dust can be used as a seed dressing to protect the seeds.
References
Coronophorales
Fungus species | Harzia acremonioides | Biology | 1,340 |
33,720,269 | https://en.wikipedia.org/wiki/Pyrosoma%20atlanticum | Pyrosoma atlanticum is a pelagic species of marine colonial tunicate in the class Thaliacea found in temperate waters worldwide. The name of the genus comes from the Greek words pyros meaning 'fire' and soma meaning 'body', referring to the bright bioluminescence sometimes emitted. The specific epithet atlanticum refers to the Atlantic Ocean, from where the first specimen of the species was collected for scientific description; it was described in 1804 by François Péron, a French naturalist.
Description
A colony of P. atlanticum is cylindrical and can grow up to long and wide. The constituent zooids form a rigid tube, which may be pale pink, yellowish, or bluish. One end of the tube is narrower and is closed, while the other is open and has a strong diaphragm. The outer surface or test is gelatinised and dimpled with backward-pointing, blunt processes. The individual zooids are up to long and have a broad, rounded branchial sac with gill slits. Along the side of the branchial sac runs the endostyle, which produces mucus filters. Water is moved through the gill slits into the centre of the cylinder by cilia pulsating rhythmically. Plankton and other food particles are caught in mucus filters in the processes as the colony is propelled through the water. P. atlanticum is bioluminescent and can generate a brilliant blue-green light when stimulated.
Distribution and habitat
P. atlanticum is found in temperate waters in all the world's oceans, usually between 50°N and 50°S. It is most plentiful at depths below 250 m (800 ft). Colonies are pelagic and move through the water column. They undergo a large diurnal migration, rising toward the surface in the evening and descending around dawn. Large colonies may rise through a vertical distance of 760 m (2,500 ft) daily, and even small colonies a few millimetres long can cover vertical distances of 90 m (300 ft).
Biology
A study in the Indian Ocean comparing different zooplankton organisms found that colonies of P. atlanticum were the most efficient grazers of particles above 10 μm in diameter, catching a higher proportion of the particles than other grazers. This implies the species uses high biomass intake as a strategy, rather than investing in energy-conservation mechanisms.
Growth occurs by new rings of zooids being budded off around the edge of the elongating colony. A pair of luminescent organs is on either side of the inlet siphon of each zooid. When stimulated, these turn on and off, causing rhythmic flashing. No neural pathway runs between the zooids, but each responds to the light produced by other individuals, and even by light from other nearby colonies.
P. atlanticum remains as one of the least studied planktonic grazers, according to a 2021 study. In the study, the researchers took samples of the pyrosome's microbiome. The results of the study found that a possible source of bioluminescence in P. atlanticum is the abundance of Photobacterium in its microbiome. However, there is still debate, as a 2020 study found a potential endogenous pyrosome luciferase in the organism's transcriptome homologous to Renilla luciferase (RLuc). Further study of the luciferase showed that it reacted with coelenterazine to produce light, much like RLuc.
Ecology
Five specimens of the penaeid shrimp Funchalia were found living inside colonies of P. atlanticum. Other amphipods also lived there, including the hyperiids Phronima and Phronimella spp.
Predators of P. atlanticum include various bony fishes, such as the spiky oreo, the big-eyed cardinalfish, and the pelagic butterfish, dolphins, and whales such as the sperm whale and giant beaked whale.
Synonyms
The following synonyms have been noted:
Dipleurosoma ellipticum Brooks, 1906 – genus transfer and junior synonym
Pyrosoma atlanticum dipleurosoma Metcalf & Hopkins, 1919 – junior synonym
Pyrosoma atlanticum echinatum Metcalf & Hopkins, 1919 – junior synonym
Pyrosoma atlanticum f. elegans Lesueur, 1815 – junior synonym
Pyrosoma atlanticum hawaiiense Metcalf & Hopkins, 1919 – junior synonym
Pyrosoma atlanticum intermedium Metcalf & Hopkins, 1919 – junior synonym
Pyrosoma atlanticum paradoxum Metcalf & Hopkins, 1919 – junior synonym
Pyrosoma atlanticum triangulum Neumann, 1913 – junior synonym
Pyrosoma atlanticum var. giganteum Lesueur, 1815 – junior synonym
Pyrosoma atlanticum var. levatum Seeliger, 1895 – junior synonym
Pyrosoma atlanticum var. tuberculosum Seeliger, 1895 – junior synonym
Pyrosoma benthica Monniot C. & Monniot F., 1966 – junior synonym
Pyrosoma elegans Lesueur, 1813 – junior synonym
Pyrosoma ellipticum (Brooks, 1906) – junior synonym
Pyrosoma giganteum Lesueur, 1815 – junior synonym
Pyrosoma giganteum var. atlanticum Péron, 1804 – status change
Pyrosoma rufum Quoy & Gaimard, 1824 – junior synonym
Pyrosoma triangulum Neumann, 1909 – junior synonym
See also
Jelly-falls
References
External links
Pyrosomatidae
Fauna of the Atlantic Ocean
Fauna of the Indian Ocean
Fauna of the Pacific Ocean
Bioluminescent animals
Animals described in 1804
Taxa named by François Péron
Colonial animals | Pyrosoma atlanticum | Biology | 1,189 |
7,074,727 | https://en.wikipedia.org/wiki/Mesoporous%20silicate | Mesoporous silicates are silicates containing pores with diameters in the mesopore range, roughly 2–50 nm.
Background
Porous inorganic solids have found great utility as catalysts and sorption media because of their large internal surface area, i.e. the presence of voids of controllable dimensions at the atomic, molecular, and nanometer scales. With increasing environmental concerns worldwide, nanoporous materials have become more important and useful for the separation of polluting species and the recovery of useful ones. In recent years there has been great progress in applying environmentally friendly zeolites in heterogeneous reaction catalysis. The reason for their success is related to their specific features in converting molecules having kinetic diameter below 1 nm, but they become inadequate when reactants with sizes above the dimensions of the pores have to be processed. Research efforts to synthesize zeolites with larger pore diameter, high structural stability and catalytic activity have not given the expected results yet.
Characteristics
The discovery of a new family of mesoporous molecular sieves in the early 1990s by Kuroda et al., known as KSW-1 and FSM-16, and by ExxonMobil, called M41S, opened new possibilities to prepare catalysts for reactions of relatively large molecules. The silicate wall of the pores is amorphous. Mesoporous silicates, such as MCM-41 and SBA-15 (the most common mesoporous silicates), are porous silicates with huge surface areas (normally ≥1000 m2/g), large pore sizes (2 nm ≤ size ≤ 20 nm) and ordered arrays of cylindrical mesopores with very regular pore morphology. The large surface areas of these solids increase the probability that a reactant molecule in solution will come into contact with the catalyst surface and react. The large pore size and ordered pore morphology allow one to be sure that the reactant molecules are small enough to diffuse into the pores.
See also
High-performance liquid chromatography
Gas-liquid chromatography
Silica gel
Nanotechnology
References
Silicates
Catalysts
Silicate | Mesoporous silicate | Chemistry,Materials_science,Engineering | 439 |
2,030,477 | https://en.wikipedia.org/wiki/Connections%20%28British%20TV%20series%29 | Connections is a science education television series created, written, and presented by British science historian James Burke. The series was produced and directed by Mick Jackson of the BBC Science and Features Department and first aired in 1978 (UK) and 1979 (US). It took an interdisciplinary approach to the history of science and invention, and demonstrated how various discoveries, scientific achievements, and historical world events were built from one another successively in an interconnected way to bring about particular aspects of modern technology. The series was noted for Burke's crisp and enthusiastic presentation (and dry humour), historical re-enactments, and intricate working models.
The popular success of the series led to the production of The Day the Universe Changed (1985), a similar programme, but showing a more linear history of several important scientific developments and their more philosophic impact on Western civilisation.
Years later, the success in syndication led to three sequels. Connections2 (1994) and Connections3 (1997) were made for TLC. In November 2023, the six-episode series Connections with James Burke premièred on Curiosity Stream, again with Burke as the on-screen presenter.
In 2004, KCSM-TV produced a program called Re-Connections, consisting of an interview of Burke and highlights of the original series, for the 25th anniversary of the first broadcast in the US on PBS.
Content
Connections explores an "Alternative View of Change" (the subtitle of the series) that rejects the conventional linear and teleological view of historical progress. Burke contends that one cannot consider the development of any particular piece of the modern world in isolation. Rather, the entire gestalt of the modern world is the result of a web of interconnected events, each one consisting of a person or group acting for reasons of their own motivations (e.g., profit, curiosity, religion) with no concept of the final, modern result to which the actions of either them or their contemporaries would lead. The interplay of the results of these isolated events is what drives history and innovation, and is also the main focus of the series and its sequels.
To demonstrate this view, Burke begins each episode with a particular event or innovation in the past (usually ancient or medieval times) and traces the path from that event through a series of seemingly unrelated connections to a fundamental and essential aspect of the modern world. For example, the episode "The Long Chain" traces the invention of plastics from the development of the fluyt, a type of Dutch cargo ship.
Burke also explores three corollaries to his initial thesis. The first is that, if history is driven by individuals who act only on what they know at the time, and not because of any idea as to where their actions will eventually lead, then predicting the future course of technological progress is merely conjecture. Therefore, if we are astonished by the connections Burke is able to weave among past events, then we will be equally surprised to what the events of today eventually will lead, especially events of which we were not even aware at the time.
The second and third corollaries are explored most in the introductory and concluding episodes, and they represent the downside of an interconnected history. If history progresses because of the synergistic interaction of past events and innovations, then as history does progress, the number of these events and innovations increases. This increase in possible connections causes the process of innovation to not only continue, but also to accelerate. Burke poses the question of what happens when this rate of innovation, or more importantly "change" itself, becomes too much for the average person to handle, and what this means for individual power, liberty, and privacy.
Lastly, if the entire modern world is built from these interconnected innovations, all increasingly maintained and improved by specialists who required years of training to gain their expertise, what chance does the average citizen without this extensive training have in making an informed decision on practical technological issues, such as the building of nuclear power plants or the funding of controversial projects such as stem cell research? Furthermore, if the modern world is increasingly interconnected, what happens when one of those nodes collapses? Does the entire system follow suit?
Episodes
Series 1 (1978)
The original 1978 Connections documentary television series of ten episodes was created, written, and presented by science historian James Burke and had a companion book (Connections, based on the series). The companion book was published around the time the middle of the series was airing, so it was likely written in parallel with the series and edited after production. The very popular book was re-released in a 1995 edition, again in 1998, and again in 2007 in both hardcover and softcover editions. Since the television series varied in content with each production run and release, the companion volumes (as the number of different ISBNs suggests) are also likely to differ from one another. The 1978 book deviates from the lighter coverage of the episodes, treating some topics and details in greater depth and somewhat more broadly.
Series 2 (1994)
Released as Connections2.
Series 3 (1997)
Released as Connections3.
Series 4 (2023)
Released as Connections with James Burke on 9 November 2023 on Curiosity Stream.
Related works
The first series received a companion book, also by Burke. The first three Connections series have been released in their entirety as DVD box sets in the US. The ten episodes of series one were released in Europe (Region 2) on 6 February 2017.
Burke also wrote a recurring column for Scientific American which explored the history of science and ideas, going back and forth through time explaining things on the way and, generally, coming back to the starting point. The columns were collated into a book in 1995.
Burke produced another documentary series called The Day the Universe Changed in 1985, which explored man's concept of how the universe worked in a manner similar to the original Connections.
Richard Hammond's Engineering Connections, shown on BBC2, follows a similar format, as does Latif Nasser's Connected TV series shown on Netflix.
In video games
Connections, a Myst-style computer game with James Burke and others providing video footage and voice acting, was released in 1995. It was a runner-up for Computer Gaming World's award for the best "Classics/Puzzles" game of 1995, which ultimately went to You Don't Know Jack. The editors wrote of Connections, "That you enjoy yourself so much you hardly realize that you're learning is a tribute to the design."
A clip from the episode "Yesterday, Tomorrow and You" appears in the 2016 video game The Witness.
See also
References
External links
Partial series 1–3 on the Internet Archive
Complete Series 1–3 on the Internet Archive
TV Cream on Connections
Ars Technica chats with James Burke
1970s British documentary television series
1978 British television series debuts
1978 British television series endings
BBC television documentaries about history
Documentary television series about science
Films directed by Mick Jackson
British historical television series
History of technology
TLC (TV network) original programming
Transdisciplinarity | Connections (British TV series) | Technology | 1,440 |
2,902,981 | https://en.wikipedia.org/wiki/Windows%20Aero | Windows Aero (a backronym for Authentic, Energetic, Reflective, and Open) is the design language introduced in the Microsoft Windows Vista operating system. The changes introduced by Windows Aero encompassed many elements of the Windows interface, including a new visual style with an emphasis on animation, glass, and translucency, as well as interface guidelines for the phrasing and tone of instructions and other text in applications. New cursors and sounds based on Windows Aero design principles were also introduced.
Windows Aero was used as the design language of Windows Vista and Windows 7. The flat design-based Metro design language was introduced on Windows 8, although aspects of the design and features promoted as part of Aero on Windows Vista and 7 have been retained in later versions of Windows (barring design changes to comply with Metro, MDL2, or Fluent).
Features
Windows Aero is the first major revision to Microsoft's user design guidelines for Microsoft Windows since Windows 95, covering aesthetics, common controls such as buttons and radio buttons, task dialogs, wizards, common dialogs, control panels, icons, fonts, user notifications, and the "tone" of text used.
Windows Aero theme
On Windows Vista and Windows 7 computers that meet certain hardware and software requirements, the Windows Aero theme is used by default, primarily incorporating various animation and transparency effects into the desktop using hardware acceleration and the Desktop Window Manager (DWM). In the "Personalize" section added to Control Panel of Windows Vista, users can customize the "glass" effects to either be opaque or transparent, and change the color it is tinted. Enabling Windows Aero also enables other new features, including an enhanced Alt-Tab menu and taskbar thumbnails with live previews of windows, and "Flip 3D", a window switching mechanism which cascades windows with a 3D effect.
Windows 7 features refinements in Windows Aero, including larger window buttons by default (minimize, maximize, close and query), revised taskbar thumbnails, the ability to manipulate windows by dragging them to the top or sides of the screen (to the side to make it fill half the screen, and to the top to maximize), the ability to hide all windows by hovering the Show Desktop button on the taskbar, and the ability to minimize all other windows by shaking one.
Use of DWM, and by extension the Windows Aero theme, requires a video card with 128MB of graphics memory (or at least 64MB of video RAM and 1GB of system RAM for on-board graphics) supporting pixel shader 2.0, and with WDDM-compatible drivers. Windows Aero is also not available in Windows 7 Starter, only available to a limited extent on Windows Vista Home Basic, and is automatically disabled if a user is detected to be running a non-genuine copy of Windows. Windows Server 2008 and Windows Server 2008 R2 also support Windows Aero as part of the "Desktop Experience" component, which is disabled by default.
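Whether composition (and therefore the Aero glass appearance) is currently active can be queried through the documented DwmIsCompositionEnabled function exported by dwmapi.dll. The following is a minimal sketch using Python's ctypes, assuming it is run on Windows Vista or later; on Windows 8 and newer the call reports composition as always enabled.
import ctypes  # standard library; the call below is Windows-only

enabled = ctypes.c_int(0)  # receives a Win32 BOOL
hresult = ctypes.windll.dwmapi.DwmIsCompositionEnabled(ctypes.byref(enabled))
if hresult == 0:  # S_OK
    print("DWM composition active:", bool(enabled.value))
else:
    print("DwmIsCompositionEnabled failed with HRESULT", hex(hresult))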
Aero Wizards
Wizard 97 had been the prevailing standard for wizard design, visual layout, and functionality used in Windows 98 through to Windows Server 2003, as well as most Microsoft products in that time frame. Aero Wizards are the replacement for Wizard 97, incorporating visual updates to match the aesthetics of the rest of Aero, as well as changing the interaction flow.
More specifically:
To increase the efficiency of the wizard, the "Welcome" pages in Wizard 97 are no longer used. (A precursor to this change was implied in a number of wizards in products such as SQL Server 2005 where a check-box was added to welcome pages, allowing a user to disable the welcome page in future uses of the wizard.)
Aero Wizards can be resized, whereas the Wizard 97 guidelines defined exact sizes for wizard window and content sizes.
The purpose of Aero Wizards is more clearly stated at the top.
A new kind of control called a "Command link" provides a single-click operation to choose from a short list of options.
The notion of "Commit pages" is introduced, where it is made clear that the next step will be the actual process that the wizard is being used to enact. If no follow-up information needs to be communicated, these are the last pages in a wizard. Typically a commit page has a button at the bottom-right that is labeled with the action to be taken, such as "Create account".
The "Back" button has moved to the top-left corner of the wizard window and matches the visual style of the back button in other Vista applications. This is done to give more focus to the commit choices. The "Next" button is only shown on pages where it is necessary.
At the end of a wizard, a "Follow-up page" can be used to direct the user to related tasks that they may be interested in after completing the wizard. For example, a follow-up for a CD burning wizard may present options like "Duplicate this disc" and "Make a disc label".
Notifications
Notifications allow an application or operating system component with an icon in the notification area to create a pop-up window with some information about an event or problem. These windows, first introduced in Windows 2000 and known colloquially as "balloons", are similar in appearance to the speech balloons that are commonly seen in comics. Balloons were often criticized in prior versions of Windows due to their intrusiveness, especially with regard to how they interacted with full-screen applications such as games (the entire application was minimized as the bubble came up). Notifications in Aero aim to be less intrusive by gradually fading in and out, and not appearing at all if a full-screen application or screensaver is being displayed—in these cases, notifications are queued until an appropriate time. Larger icons and multiple font sizes and colors are also introduced with Aero's notification windows.
Font
The Segoe UI typeface is the default font for Aero with languages that use Latin, Greek, and Cyrillic character sets. The default font size is also increased from 8pt to 9pt to improve readability. In the Segoe UI typeface prior to Windows 8, the numeral zero ("0") is narrow, while capital letter "O" is wider (Windows 8's Segoe UI keeps this difference), and numeral one ("1") has a top hook, while capital letter "I" has equal crown and base (Windows 8's "1" has no base, and the "I" does not have a crown or base).
Icons
Aero's base icons were designed by The Iconfactory, which had previously designed Windows XP icons.
Phrasing tone
The Vista User Experience Guidelines also address the issue of "tone" in the writing of text used with the Aero user interface. Prior design guidelines from Microsoft had not done much to address the issue of how user interface text is phrased, and as such, the way that information and requests are presented to the user had not been consistent between parts of the operating system.
The guidelines for Vista and its applications suggest messages that present technically accurate advice concisely, objectively, and positively, and assume an intelligent user motivated to solve a particular problem. Specific advice includes the use of the second person and the active voice (e.g. "Print the photos on your camera") and avoidance of words like "please", "sorry" and "thank you".
History
Windows Vista
The Aero interface was unveiled for Windows Vista as a complete redesign of the Windows interface, replacing Windows XP's "Luna" theme. Until the release of Windows Vista Beta 1 in July 2005, little had been shown of Aero in public or leaked builds, with alpha builds containing interim designs such as "Plex".
Windows Aero incorporated the following features in Windows Vista.
Windows Aero theme: The main component of Aero, it is the successor of Windows XP's "Luna" and changes the look and feel of graphical control elements, including but not limited to buttons, checkboxes, radio buttons, menus, progress bars and default Windows icons. Even message boxes are changed.
Windows Flip improvements: Windows Flip (Alt+Tab) in Windows Vista now shows a live preview of each open window instead of the application icons.
Windows Flip 3D: Windows Flip 3D (Windows key+Tab) renders live images of open windows, allowing one to switch between them while displaying them in a three-dimensional view.
Taskbar live thumbnails – Hovering over the taskbar button of a window displays a preview of that window in the taskbar.
Desktop Window Manager (DWM) – Due to the significant impact of the new changes on hardware and performance, Desktop Window Manager was introduced to achieve hardware acceleration, transferring the duty of UI rendering from CPU to graphic subsystem. DWM in Windows Vista required compatible hardware.
Task Dialogs: Dialog boxes meant to help communicate with the user and receive simple user input. Task Dialogs are more complex than traditional message boxes that only bear a message and a set of command buttons. Task Dialogs may have expandable sections, hyperlinks, checkboxes, progress bars and graphical elements.
Windows 7
Windows Aero is revised in Windows 7, with many UI changes, such as a more touch friendly interface, and many new visual effects and features including pointing device gestures:
Aero Peek: Hovering over a taskbar thumbnail shows a preview of the entire window. Aero Peek is also available through the "Show desktop" button at the right end of the taskbar, which makes all open windows transparent for a quick view of the desktop. A similar feature was patented during Windows Vista development.
Aero Shake: Quickly dragging a window back and forth minimizes all other windows. Shaking it again restores them.
Aero Snap: Dragging a window to the right or left side of the desktop causes the window to fill the respective half of the screen. Snapping a window to the top of the screen maximizes it. Windows can be resized by stretching them to touch the top or bottom of the screen, which maximizes their vertical size while retaining their width; such windows can then slide horizontally if moved by the title bar, or be pulled off the edge, which returns the window to its original height.
Touchscreen gestures and support for high DPI on displays were added.
Title bars of maximized windows remain transparent instead of becoming opaque.
The outline of non-maximized windows is completely white, rather than having a cyan outline on the right side and bottom.
The window color is now affected by a multiplication blending mode. While the amount of blending cannot be adjusted by the user manually through normal means, the amount of color multiplication is adjusted in conjunction when adjusting color intensity. The higher the color intensity, the lower amount of color multiplication and vice versa. This also results in window colors that are not as vivid at lower intensities on darker backgrounds as they would be in Windows Vista.
The taskbar was redesigned to automatically group windows by application, and only show their icon by default. When hovering over the taskbar button of an open program, the button glows the dominant RGB color of its icon, with the effect following the mouse cursor.
Progress indicators are present in taskbar buttons. For example, downloading a file in a web browser can cause the button to fill with color as the operation progresses.
In later versions of Windows
Some of the features introduced in Aero remain in modified forms in later versions of Windows.
Windows 8
While pre-release versions of Windows 8 used an updated version of Windows Aero with a flatter, squared look, the Glass theme was replaced prior to release by a new flat design theme based on Metro design language. Transparency effects were removed from the interface, aside from the taskbar, which maintained transparency but no longer has a blur effect.
Flip 3D is removed; its Windows key+Tab shortcut was changed to switch between the desktop and Windows Store apps.
Windows 10
The OS initially maintained an updated version of Metro, but began to increasingly reinstate detailed lighting effects (including those that follow the cursor) and Aero glass-like transparency via its migration to Fluent Design System as a successor to Metro.
Virtual desktops have been added via "Task View". This takes over the Windows key+Tab keyboard shortcut.
Window snapping has been extended to allow applications to be snapped to quadrants of the screen. When a window is snapped to half of the screen, the user is prompted to select a second window to occupy the other half of the screen.
Aero Shake is now referred to as "Title bar window shake".
Windows 11
Predefined "snap layouts" can be activated by hovering over the maximize/restore button on a titlebar. Sets of windows formed using snap layouts ("snap groups") can be minimized and restored from the taskbar as a group.
Some window title bars are now transparent and blurred using the Mica material new to Windows 11. Rather than applying a real-time blurring effect, Mica uses only a blurred image of the user's desktop background as the texture of those title bars.
Legacy
Retrospectively, a design style, Internet aesthetic and UI/UX design trend based on Windows Aero called Frutiger Aero has been identified, which was popular from roughly 2004 to 2013. It is characterized by modern and organic themes associated with nature, glass, water and air. The name was coined by Sofi Lee in 2017, as a combination of Aero and the Frutiger typeface, which was popular with corporate materials of the time.
See also
Aqua (user interface)
Compiz
Compositing window manager
Desktop Window Manager
Development of Windows 7
Development of Windows Vista
Features new to Windows 7
Features new to Windows Vista
Kwin
References
External links
Windows 7 Aero Peek Feature
Microsoft Docs - Win32 Apps UX Guidelines
2006 software
3D GUIs
Design language
Graphical user interfaces
Windows components
Windows Vista
Windows 7 | Windows Aero | Engineering | 2,835 |
48,740,815 | https://en.wikipedia.org/wiki/Non-surveyable%20proof | In the philosophy of mathematics, a non-surveyable proof is a mathematical proof that is considered infeasible for a human mathematician to verify and so of controversial validity. The term was coined by Thomas Tymoczko in 1979 in criticism of Kenneth Appel and Wolfgang Haken's computer-assisted proof of the four color theorem, and has since been applied to other arguments, mainly those with excessive case splitting and/or with portions dispatched by a difficult-to-verify computer program. Surveyability remains an important consideration in computational mathematics.
Tymoczko's argument
Tymoczko argued that three criteria determine whether an argument is a mathematical proof:
Convincingness, which refers to the proof's ability to persuade a rational prover of its conclusion;
Surveyability, which refers to the proof's accessibility for verification by members of the human mathematical community; and
Formalizability, which refers to the proof's appealing to only logical relationships between concepts to substantiate its argument.
In Tymoczko's view, the Appel–Haken proof failed the surveyability criterion by substituting experiment for deduction.
Without surveyability, a proof may serve its first purpose of convincing a reader of its result and yet fail at its second purpose of enlightening the reader as to why that result is true—it may play the role of an observation rather than of an argument.
This distinction is important because it means that non-surveyable proofs expose mathematics to a much higher potential for error. Convincingness may also suffer, especially when non-surveyability is due to the use of a computer program (which may have bugs), and most especially when that program is not published, a point Tymoczko himself stressed.
Counterarguments to Tymoczko's claims of non-surveyability
Tymoczko's view is contested, however, by arguments that difficult-to-survey proofs are not necessarily as invalid as impossible-to-survey proofs.
Paul Teller claimed that surveyability was a matter of degree and reader-dependent, not something a proof does or does not have. As proofs are not rejected when students have trouble understanding them, Teller argues, neither should proofs be rejected (though they may be criticized) simply because professional mathematicians find the argument hard to follow. (Teller disagreed with Tymoczko's assessment that "[The Four-Color Theorem] has not been checked by mathematicians, step by step, as all other proofs have been checked. Indeed, it cannot be checked that way.")
An argument along similar lines is that case splitting is an accepted proof method, and the Appel–Haken proof is only an extreme example of case splitting.
Countermeasures against non-surveyability
On the other hand, Tymoczko's point that proofs must be at least possible to survey and that errors in difficult-to-survey proofs are less likely to fall to scrutiny is generally not contested; instead methods have been suggested to improve surveyability, especially of computer-assisted proofs. Among early suggestions was that of parallelization: the verification task could be split across many readers, each of which could survey a portion of the proof. But modern practice, as made famous by Flyspeck, is to render the dubious portions of a proof in a restricted formalism and then verify them with a proof checker that is available itself for survey. Indeed, the Appel–Haken proof has been thus verified.
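As a toy illustration of the kind of statement such proof checkers accept (a minimal sketch in Lean 4, not drawn from the Flyspeck development or from the four color theorem verification), a formal proof is written so that every inference can be checked mechanically:
-- A trivially small formal proof: commutativity of addition on the natural numbers.
-- The checker accepts it only if the term really proves the stated proposition.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b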
Nonetheless, automated verification has yet to see widespread adoption.
References
Mathematical proofs
Proof theory
Automated theorem proving | Non-surveyable proof | Mathematics | 744 |
66,516,263 | https://en.wikipedia.org/wiki/Ernest%20Rabinowicz | Ernest Rabinowicz (1927-2006) was an American mechanical engineer. He was known for his work in tribology at Massachusetts Institute of Technology (MIT).
Education
Rabinowicz received his bachelor's degree in physics from the University of Cambridge in 1947. In 1950, he obtained a doctor of philosophy in physical chemistry from the University of Cambridge under the supervision of David Tabor.
Research
Rabinowicz joined the research staff at MIT in 1950, becoming a professor in 1967. He was best known for his research on the friction and wear of materials. He also wrote several popular science articles on tribology topics in Scientific American. His seminal book "Friction and Wear of Materials" was first published in 1965. According to Google Scholar, his work has been cited on more than 10,000 occasions.
As well as teaching students at MIT, Rabinowicz developed and taught a popular extension course on Friction and Wear to many engineers and scientists from industry. His lectures and teaching videos reached a wide audience and made him one of the best known teachers of tribology in the United States, if not the world.
Awards
Rabinowicz received the three most prestigious tribology awards. In 1985, he received the Mayo D. Hersey Award from the American Society of Mechanical Engineers. In 1988, he received the International Award from the Society of Tribologists and Lubrication Engineers. In 1998, he was awarded the Tribology Gold Medal by the Institution of Mechanical Engineers.
References
American mechanical engineers
Tribologists
Alumni of the University of Cambridge
1927 births
2006 deaths | Ernest Rabinowicz | Materials_science | 322 |
1,481,340 | https://en.wikipedia.org/wiki/Scott%20Kim | Scott Kim is an American puzzle and video game designer, artist, and author of Korean descent. He started writing an occasional "Boggler" column for Discover magazine in 1990, and became an exclusive columnist in 1999, and created hundreds of other puzzles for magazines such as Scientific American and Games, as well as thousands of puzzles for computer games. He was the holder of the Harold Keables chair at Iolani School in 2008.
Kim was born in 1955 in Washington, D.C., and grew up in Rolling Hills Estates, California. He had an early interest in mathematics, education, and art, and attended Stanford University, receiving a BA in music and a PhD in Computers and Graphic Design under Donald Knuth. In 1981, he created a book called Inversions, featuring words that can be read in more than one way. His first puzzles appeared in Scientific American in Martin Gardner's "Mathematical Games" column, and he said that the column inspired his own career as a puzzle designer.
Kim is one of the best-known masters of the art of ambigrams.
Kim designed logos for Silicon Graphics, Inc., GOES, The Hackers Conference, the Computer Game Developers Conference, and Dylan.
Kim is a regular speaker on puzzle design, such as at the International Game Developers Conference and Casual Games Conference. His wife, Amy Jo Kim, is the author of Community Building on the Web.
He lives in Burlingame, California with his wife Amy Jo Kim, son Gabriel and daughter Lila Rose.
Works
Inversions, 1981, Byte Books, a book of 60 original ambigrams
"Letterforms & Illusion", 1989, W. H. Freeman & Co., created with Robin Samelson, accompanies the book, Inversions.
Quintapaths, 1969 (tiling puzzle), published by Kadon since 1999.
Heaven and Earth, Buena Vista / Disney (computer game)
Obsidian, SegaSoft (computer game)
MetaSquares, 1996 (computer game, created with Kai Krause, Phil Clevenger, and Ian Gilman).
The Next Tetris, Hasbro Interactive, PlayStation
Railroad Rush Hour, ThinkFun (toy)
Charlie Blast's Territory (Nintendo 64 game)
The NewMedia Puzzle Workout - collection of Kim's magazine puzzles
Scott Kim's Puzzle Box (monthly Shockwave puzzles at JuniorNet.com)
Brainteasers, Mind Benders, Games, Word Searches, Puzzlers, Mazes & More Calendar 2007, Workman Publishing Company
Contributed works
Harry Abrams. Escher Interactive (computer game)
Elonka Dunin, The Mammoth Book of Secret Code Puzzles, 2006, Constable & Robinson
Popcap Games, Bejeweled 2, design of Puzzle Mode puzzles
References
Neurology Now. Biographical article from 2009.
Scott Kim: Puzzlemaster - Kim's website
Dan Burstein, Secrets of Angels & Demons, 2004, CDS Books.
Discover magazine "Boggler" column. (archived copies, 1999-2002)
Susan Lammers, Programmers at Work (Microsoft Press, 1986), 272-285. Interview with Kim.
TED Talks video - at the Entertainment Gathering Conference 2008
1955 births
American male writers
American writers of Korean descent
American video game designers
Living people
Puzzle designers
Recreational mathematicians
Mathematics popularizers
Toy inventors
Stanford University alumni
People from Rolling Hills Estates, California | Scott Kim | Mathematics | 685 |
29,247 | https://en.wikipedia.org/wiki/Sulfuric%20acid | Sulfuric acid (American spelling and the preferred IUPAC name) or sulphuric acid (Commonwealth spelling), known in antiquity as oil of vitriol, is a mineral acid composed of the elements sulfur, oxygen, and hydrogen, with the molecular formula H₂SO₄. It is a colorless, odorless, and viscous liquid that is miscible with water.
Pure sulfuric acid does not occur naturally due to its strong affinity to water vapor; it is hygroscopic and readily absorbs water vapor from the air. Concentrated sulfuric acid is a strong oxidant with powerful dehydrating properties, making it highly corrosive towards other materials, from rocks to metals. Phosphorus pentoxide is a notable exception in that it is not dehydrated by sulfuric acid but, to the contrary, dehydrates sulfuric acid to sulfur trioxide. Upon addition of sulfuric acid to water, a considerable amount of heat is released; thus, the reverse procedure of adding water to the acid is generally avoided since the heat released may boil the solution, spraying droplets of hot acid during the process. Upon contact with body tissue, sulfuric acid can cause severe acidic chemical burns and secondary thermal burns due to dehydration. Dilute sulfuric acid is substantially less hazardous without the oxidative and dehydrating properties; though, it is handled with care for its acidity.
Many methods for its production are known, including the contact process, the wet sulfuric acid process, and the lead chamber process. Sulfuric acid is also a key substance in the chemical industry. It is most commonly used in fertilizer manufacture but is also important in mineral processing, oil refining, wastewater treating, and chemical synthesis. It has a wide range of end applications, including in domestic acidic drain cleaners, as an electrolyte in lead-acid batteries, as a dehydrating compound, and in various cleaning agents.
Sulfuric acid can be obtained by dissolving sulfur trioxide in water.
Physical properties
Grades of sulfuric acid
Although nearly 100% sulfuric acid solutions can be made, the subsequent loss of SO₃ at the boiling point brings the concentration to 98.3% acid. The 98.3% grade, which is more stable in storage, is the usual form of what is described as "concentrated sulfuric acid". Other concentrations are used for different purposes. Some common concentrations are:
"Chamber acid" and "tower acid" were the two concentrations of sulfuric acid produced by the lead chamber process, chamber acid being the acid produced in the lead chamber itself (<70% to avoid contamination with nitrosylsulfuric acid) and tower acid being the acid recovered from the bottom of the Glover tower. They are now obsolete as commercial concentrations of sulfuric acid, although they may be prepared in the laboratory from concentrated sulfuric acid if needed. In particular, "10 M" sulfuric acid (the modern equivalent of chamber acid, used in many titrations), is prepared by slowly adding 98% sulfuric acid to an equal volume of water, with good stirring: the temperature of the mixture can rise to 80 °C (176 °F) or higher.
Sulfuric acid
Sulfuric acid contains not only H₂SO₄ molecules, but is actually an equilibrium mixture of many other related chemical species.
Sulfuric acid is a colorless oily liquid, and has a vapor pressure of <0.001 mmHg at 25 °C and 1 mmHg at 145.8 °C, and 98% sulfuric acid has a vapor pressure of <1 mmHg at 40 °C.
In the solid state, sulfuric acid is a molecular solid that forms monoclinic crystals with nearly trigonal lattice parameters. The structure consists of layers parallel to the (010) plane, in which each molecule is connected by hydrogen bonds to two others. Hydrates are known for n = 1, 2, 3, 4, 6.5, and 8, although most intermediate hydrates are stable against disproportionation.
Polarity and conductivity
Anhydrous H₂SO₄ is a very polar liquid, having a dielectric constant of around 100. It has a high electrical conductivity, a consequence of autoprotolysis, i.e. self-protonation:
2 H₂SO₄ ⇌ H₃SO₄⁺ + HSO₄⁻
The equilibrium constant for autoprotolysis (25 °C) is:
Kap = [H₃SO₄⁺][HSO₄⁻] = 2.7 × 10⁻⁴
The corresponding equilibrium constant for water, Kw, is 10⁻¹⁴, a factor of 10¹⁰ (10 billion) smaller.
In spite of the viscosity of the acid, the effective conductivities of the H₃SO₄⁺ and HSO₄⁻ ions are high due to an intramolecular proton-switch mechanism (analogous to the Grotthuss mechanism in water), making sulfuric acid a good conductor of electricity. It is also an excellent solvent for many reactions.
Chemical properties
Acidity
The hydration reaction of sulfuric acid is highly exothermic.
As indicated by its acid dissociation constant, sulfuric acid is a strong acid:
H₂SO₄ → H⁺ + HSO₄⁻   Ka1 = 1000 (pKa1 = −3)
The product of this ionization is HSO₄⁻, the bisulfate anion. Bisulfate is a far weaker acid:
HSO₄⁻ ⇌ H⁺ + SO₄²⁻   Ka2 = 0.01 (pKa2 = 2)
The product of this second dissociation is SO₄²⁻, the sulfate anion.
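Because the second dissociation is incomplete, the hydrogen ion concentration of a dilute solution is governed by the bisulfate equilibrium. A short calculation (a sketch in Python using the rounded Ka2 value quoted above and ignoring activity effects) estimates the pH of a nominal 0.01 M solution:
import math

c = 0.010    # total sulfuric acid concentration, mol/L
ka2 = 0.01   # second dissociation constant as quoted above

# First dissociation treated as complete, so before the second step
# [H+] = c and [HSO4-] = c.  Let x = [SO4^2-] formed by the second step:
#   Ka2 = (c + x) * x / (c - x)   =>   x**2 + (c + ka2) * x - ka2 * c = 0
b = c + ka2
x = (-b + math.sqrt(b * b + 4 * ka2 * c)) / 2

h_plus = c + x
print(round(x, 4))                    # ~0.0041 mol/L of sulfate
print(round(-math.log10(h_plus), 2))  # pH ~1.85, lower than the 2.0 expected from the first step alone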
Dehydration
Concentrated sulfuric acid has a powerful dehydrating property, removing water (H₂O) from other chemical compounds such as table sugar (sucrose) and other carbohydrates, to produce carbon, steam, and heat. Dehydration of table sugar (sucrose) is a common laboratory demonstration. The sugar darkens as carbon is formed, and a rigid column of black, porous carbon called a carbon snake may emerge.
Similarly, mixing starch into concentrated sulfuric acid gives elemental carbon and water. The effect of this can also be seen when concentrated sulfuric acid is spilled on paper. Paper is composed of cellulose, a polysaccharide related to starch. The cellulose reacts to give a burnt appearance in which the carbon appears much like soot that results from fire.
Although less dramatic, the action of the acid on cotton, even in diluted form, destroys the fabric.
The reaction with copper(II) sulfate can also demonstrate the dehydration property of sulfuric acid. The blue crystals change into white powder as water is removed.
Reactions with salts
Sulfuric acid reacts with most bases to give the corresponding sulfate or bisulfate.
Aluminium sulfate, also known as paper maker's alum, is made by treating bauxite with sulfuric acid:
Al₂O₃ + 3 H₂SO₄ → Al₂(SO₄)₃ + 3 H₂O
Sulfuric acid can also be used to displace weaker acids from their salts. Reaction with sodium acetate, for example, displaces acetic acid, CH₃COOH, and forms sodium bisulfate:
H₂SO₄ + CH₃COONa → NaHSO₄ + CH₃COOH
Similarly, treating potassium nitrate with sulfuric acid produces nitric acid. Sulfuric acid reacts with sodium chloride, and gives hydrogen chloride gas and sodium bisulfate:
NaCl + H₂SO₄ → NaHSO₄ + HCl
When combined with nitric acid, sulfuric acid acts both as an acid and a dehydrating agent, forming the nitronium ion , which is important in nitration reactions involving electrophilic aromatic substitution. This type of reaction, where protonation occurs on an oxygen atom, is important in many organic chemistry reactions, such as Fischer esterification and dehydration of alcohols.
When allowed to react with superacids, sulfuric acid can act as a base and can be protonated, forming the H₃SO₄⁺ ion. Salts of H₃SO₄⁺ have been prepared (e.g. trihydroxyoxosulfonium hexafluoroantimonate(V), [H₃SO₄][SbF₆]) using the following reaction in liquid HF:
The above reaction is thermodynamically favored due to the high bond enthalpy of the Si–F bond in the side product. Protonation using simply fluoroantimonic acid, however, has met with failure, as pure sulfuric acid undergoes self-ionization to give H₃SO₄⁺ and HSO₄⁻ ions:
2 H₂SO₄ ⇌ H₃SO₄⁺ + HSO₄⁻
which prevents the conversion of H₂SO₄ to H₃SO₄⁺ by the HF/SbF₅ system.
Reactions with metals
Even diluted sulfuric acid reacts with many metals via a single displacement reaction, like other typical acids, producing hydrogen gas and salts (the metal sulfate). It attacks reactive metals (metals at positions above copper in the reactivity series) such as iron, aluminium, zinc, manganese, magnesium, and nickel. For example, with iron:
Fe + H₂SO₄ → FeSO₄ + H₂
Concentrated sulfuric acid can serve as an oxidizing agent, releasing sulfur dioxide; hot concentrated acid, for example, oxidizes copper:
Cu + 2 H₂SO₄ → CuSO₄ + SO₂ + 2 H₂O
Lead and tungsten, however, are resistant to sulfuric acid.
Reactions with carbon and sulfur
Hot concentrated sulfuric acid oxidizes carbon (as bituminous coal) and sulfur:
C + 2 H₂SO₄ → CO₂ + 2 SO₂ + 2 H₂O
S + 2 H₂SO₄ → 3 SO₂ + 2 H₂O
Electrophilic aromatic substitution
Benzene and many derivatives undergo electrophilic aromatic substitution with sulfuric acid to give the corresponding sulfonic acids:
C₆H₆ + H₂SO₄ → C₆H₅SO₃H + H₂O
Sulfur–iodine cycle
Sulfuric acid can be used to produce hydrogen from water:
I₂ + SO₂ + 2 H₂O → 2 HI + H₂SO₄   (120 °C, Bunsen reaction)
2 H₂SO₄ → 2 SO₂ + 2 H₂O + O₂   (830 °C)
2 HI → I₂ + H₂   (320 °C)
The compounds of sulfur and iodine are recovered and reused, hence the process is called the sulfur–iodine cycle. This process is endothermic and must occur at high temperatures, so energy in the form of heat has to be supplied. The sulfur–iodine cycle has been proposed as a way to supply hydrogen for a hydrogen-based economy. It is an alternative to electrolysis, and does not require hydrocarbons like current methods of steam reforming. But note that all of the available energy in the hydrogen so produced is supplied by the heat used to make it.
Occurrence
Sulfuric acid is rarely encountered naturally on Earth in anhydrous form, due to its great affinity for water. Dilute sulfuric acid is a constituent of acid rain, which is formed by atmospheric oxidation of sulfur dioxide in the presence of water – i.e. oxidation of sulfurous acid. When sulfur-containing fuels such as coal or oil are burned, sulfur dioxide is the main byproduct (besides the chief products carbon oxides and water).
Sulfuric acid is formed naturally by the oxidation of sulfide minerals, such as pyrite:
2 FeS₂ + 7 O₂ + 2 H₂O → 2 Fe²⁺ + 4 SO₄²⁻ + 4 H⁺
The resulting highly acidic water is called acid mine drainage (AMD) or acid rock drainage (ARD).
The Fe²⁺ can be further oxidized to Fe³⁺:
4 Fe²⁺ + O₂ + 4 H⁺ → 4 Fe³⁺ + 2 H₂O
The Fe³⁺ produced can be precipitated as the hydroxide or hydrous iron oxide:
Fe³⁺ + 3 H₂O → Fe(OH)₃ + 3 H⁺
The iron(III) ion ("ferric iron") can also oxidize pyrite:
FeS₂ + 14 Fe³⁺ + 8 H₂O → 15 Fe²⁺ + 2 SO₄²⁻ + 16 H⁺
When iron(III) oxidation of pyrite occurs, the process can become rapid. pH values below zero have been measured in ARD produced by this process.
ARD can also produce sulfuric acid at a slower rate, so that the acid neutralizing capacity (ANC) of the aquifer can neutralize the produced acid. In such cases, the total dissolved solids (TDS) concentration of the water can be increased from the dissolution of minerals from the acid-neutralization reaction with the minerals.
Sulfuric acid is used as a defense by certain marine species, for example, the phaeophyte alga Desmarestia munda (order Desmarestiales) concentrates sulfuric acid in cell vacuoles.
Stratospheric aerosol
In the stratosphere, the atmosphere's second layer that is generally between 10–50 km above Earth's surface, sulfuric acid is formed by the oxidation of volcanic sulfur dioxide by the hydroxyl radical:
SO₂ + OH· → HOSO₂·
HOSO₂· + O₂ → HO₂· + SO₃
SO₃ + H₂O → H₂SO₄
Because sulfuric acid reaches supersaturation in the stratosphere, it can nucleate aerosol particles and provide a surface for aerosol growth via condensation and coagulation with other water-sulfuric acid aerosols. This results in the stratospheric aerosol layer.
Extraterrestrial sulfuric acid
The permanent Venusian clouds produce a concentrated acid rain, as the clouds in the atmosphere of Earth produce water rain. Jupiter's moon Europa is also thought to have an atmosphere containing sulfuric acid hydrates.
Manufacturing
Sulfuric acid is produced from sulfur, oxygen and water via the conventional contact process (DCDA) or the wet sulfuric acid process (WSA).
Contact process
In the first step, sulfur is burned to produce sulfur dioxide:
S + O₂ → SO₂
The sulfur dioxide is oxidized to sulfur trioxide by oxygen in the presence of a vanadium(V) oxide catalyst. This reaction is reversible and the formation of the sulfur trioxide is exothermic:
2 SO₂ + O₂ ⇌ 2 SO₃
The sulfur trioxide is absorbed into 97–98% H₂SO₄ to form oleum (H₂S₂O₇), also known as fuming sulfuric acid or pyrosulphuric acid. The oleum is then diluted with water to form concentrated sulfuric acid:
SO₃ + H₂SO₄ → H₂S₂O₇
H₂S₂O₇ + H₂O → 2 H₂SO₄
Directly dissolving SO₃ in water, called the "wet sulfuric acid process", is rarely practiced because the reaction is extremely exothermic, resulting in a hot aerosol of sulfuric acid that requires condensation and separation.
Wet sulfuric acid process
In the first step, sulfur is burned to produce sulfur dioxide:
S + O₂ → SO₂   (−297 kJ/mol)
or, alternatively, hydrogen sulfide (H₂S) gas is incinerated to SO₂ gas:
2 H₂S + 3 O₂ → 2 SO₂ + 2 H₂O   (−1036 kJ/mol)
The sulfur dioxide is then oxidized to sulfur trioxide using oxygen with vanadium(V) oxide as catalyst:
2 SO₂ + O₂ ⇌ 2 SO₃   (−198 kJ/mol) (reaction is reversible)
The sulfur trioxide is hydrated into sulfuric acid:
SO₃ + H₂O → H₂SO₄(g)   (−101 kJ/mol)
The last step is the condensation of the sulfuric acid to liquid 97–98% H₂SO₄:
H₂SO₄(g) → H₂SO₄(l)   (−69 kJ/mol)
Other methods
Burning sulfur together with saltpeter (potassium nitrate, KNO₃), in the presence of steam, has been used historically. As saltpeter decomposes, it oxidizes the sulfur to SO₃, which combines with water to produce sulfuric acid.
Prior to 1900, most sulfuric acid was manufactured by the lead chamber process. As late as 1940, up to 50% of sulfuric acid manufactured in the United States was produced by chamber process plants.
A wide variety of laboratory syntheses are known, and typically begin from sulfur dioxide or an equivalent salt. In the metabisulfite method, hydrochloric acid reacts with metabisulfite to produce sulfur dioxide vapors. The gas is bubbled through nitric acid, which will release brown/red vapors of nitrogen dioxide as the reaction proceeds. The completion of the reaction is indicated by the ceasing of the fumes. This method conveniently does not produce an inseparable mist.
Alternatively, sulfur dioxide can be dissolved in an aqueous solution of an oxidizing metal salt such as copper(II) or iron(III) chloride:
2 FeCl₃ + 2 H₂O + SO₂ → 2 FeCl₂ + H₂SO₄ + 2 HCl
Two less well-known laboratory methods of producing sulfuric acid, albeit in dilute form and requiring some extra effort in purification, rely on electrolysis. A solution of copper(II) sulfate can be electrolyzed with a copper cathode and platinum/graphite anode to give spongy copper at the cathode and oxygen gas at the anode. The solution of dilute sulfuric acid indicates completion of the reaction when it turns from blue to clear (production of hydrogen at the cathode is another sign):
2 CuSO₄ + 2 H₂O → 2 Cu + 2 H₂SO₄ + O₂
More costly, dangerous, and troublesome is the electrobromine method, which employs a mixture of sulfur, water, and hydrobromic acid as the electrolyte. The sulfur is pushed to the bottom of the container under the acid solution. The copper cathode and platinum/graphite anode are then used, with the cathode near the surface and the anode positioned at the bottom of the electrolyte, to apply the current. This may take longer and emits toxic bromine/sulfur-bromide vapors, but the reactant acid is recyclable. Overall, only the sulfur and water are converted to sulfuric acid and hydrogen (omitting losses of acid as vapors):
(electrolysis of aqueous hydrogen bromide)
(initial tribromide production, eventually reverses as depletes)
(bromine reacts with sulfur to form disulfur dibromide)
(oxidation and hydration of disulfur dibromide)
Uses
World production in the year 2004 was about 180 million tonnes, with the following geographic distribution: Asia 35%, North America (including Mexico) 24%, Africa 11%, Western Europe 10%, Eastern Europe and Russia 10%, Australia and Oceania 7%, South America 7%. Most of this amount (≈60%) is consumed for fertilizers, particularly superphosphates, ammonium phosphate and ammonium sulfates. About 20% is used in the chemical industry for production of detergents, synthetic resins, dyestuffs, pharmaceuticals, petroleum catalysts, insecticides and antifreeze, as well as in various processes such as oil well acidizing, aluminium reduction, paper sizing, and water treatment. About 6% of uses are related to pigments and include paints, enamels, printing inks, coated fabrics and paper, while the rest is dispersed into a multitude of applications such as production of explosives, cellophane, acetate and viscose textiles, lubricants, non-ferrous metals, and batteries.
Industrial production of chemicals
The dominant use for sulfuric acid is in the "wet method" for the production of phosphoric acid, used for manufacture of phosphate fertilizers. In this method, phosphate rock is used, and more than 100 million tonnes are processed annually. This raw material is shown below as fluorapatite, though the exact composition may vary. This is treated with 93% sulfuric acid to produce calcium sulfate, hydrogen fluoride (HF) and phosphoric acid. The HF is removed as hydrofluoric acid. The overall process can be represented as:
Ca₅F(PO₄)₃ + 5 H₂SO₄ + 10 H₂O → 5 CaSO₄·2H₂O + HF + 3 H₃PO₄
Ammonium sulfate, an important nitrogen fertilizer, is most commonly produced as a byproduct from coking plants supplying the iron and steel making plants. Reacting the ammonia produced in the thermal decomposition of coal with waste sulfuric acid allows the ammonia to be crystallized out as a salt (often brown because of iron contamination) and sold into the agro-chemicals industry.
Sulfuric acid is also important in the manufacture of dyestuffs solutions.
Industrial cleaning agent
Sulfuric acid is used in steelmaking and other metallurgical industries as a pickling agent for removal of rust and fouling. Used acid is often recycled using a spent acid regeneration (SAR) plant. These plants combust spent acid with natural gas, refinery gas, fuel oil or other fuel sources. This combustion process produces gaseous sulfur dioxide () and sulfur trioxide () which are then used to manufacture "new" sulfuric acid.
Hydrogen peroxide () can be added to sulfuric acid to produce piranha solution, a powerful but very toxic cleaning solution with which substrate surfaces can be cleaned. Piranha solution is typically used in the microelectronics industry, and also in laboratory settings to clean glassware.
Catalyst
Sulfuric acid is used for a variety of other purposes in the chemical industry. For example, it is the usual acid catalyst for the conversion of cyclohexanone oxime to caprolactam, used for making nylon. It is used for making hydrochloric acid from salt via the Mannheim process. Much is used in petroleum refining, for example as a catalyst for the reaction of isobutane with isobutylene to give isooctane, a compound that raises the octane rating of gasoline (petrol). Sulfuric acid is also often used as a dehydrating or oxidizing agent in industrial reactions, such as the dehydration of various sugars to form solid carbon.
Electrolyte
Sulfuric acid acts as the electrolyte in lead–acid batteries (lead-acid accumulator):
At anode: Pb + SO₄²⁻ → PbSO₄ + 2 e⁻
At cathode: PbO₂ + SO₄²⁻ + 4 H⁺ + 2 e⁻ → PbSO₄ + 2 H₂O
Overall: Pb + PbO₂ + 2 H₂SO₄ → 2 PbSO₄ + 2 H₂O
Domestic uses
Sulfuric acid at high concentrations is frequently the major ingredient in domestic acidic drain cleaners which are used to remove lipids, hair, tissue paper, etc. Similar to their alkaline versions, such drain openers can dissolve fats and proteins via hydrolysis. Moreover, as concentrated sulfuric acid has a strong dehydrating property, it can remove tissue paper via dehydrating process as well. Since the acid may react with water vigorously, such acidic drain openers should be added slowly into the pipe to be cleaned.
History
Vitriols
The study of vitriols (hydrated sulfates of various metals forming glassy minerals from which sulfuric acid can be derived) began in ancient times. Sumerians had a list of types of vitriol that they classified according to the substances' color. Some of the earliest discussions on the origin and properties of vitriol are in the works of the Greek physician Dioscorides (first century AD) and the Roman naturalist Pliny the Elder (23–79 AD). Galen also discussed its medical use. Metallurgical uses for vitriolic substances were recorded in the Hellenistic alchemical works of Zosimos of Panopolis, in the treatise Phisica et Mystica, and the Leyden papyrus X. Medieval Islamic alchemists like the authors writing under the name of Jabir ibn Hayyan (died c. 806 – c. 816 AD, known in Latin as Geber), Abu Bakr al-Razi (865 – 925 AD, known in Latin as Rhazes), Ibn Sina (980 – 1037 AD, known in Latin as Avicenna), and Muhammad ibn Ibrahim al-Watwat (1234 – 1318 AD) included vitriol in their mineral classification lists.
Jabir ibn Hayyan, Abu Bakr al-Razi, Ibn Sina, et al.
The Jabirian authors and al-Razi experimented extensively with the distillation of various substances, including vitriols. In one recipe recorded in his Kitāb al-Asrār ('Book of Secrets'), al-Razi may have created sulfuric acid without being aware of it.
In an anonymous Latin work variously attributed to Aristotle (under the title 'Book of Aristotle'), to al-Razi (under the title 'Great Light of Lights'), or to Ibn Sina, the author speaks of an 'oil' obtained through the distillation of iron(II) sulfate (green vitriol), which was likely 'oil of vitriol' or sulfuric acid. The work refers multiple times to Jabir ibn Hayyan's Seventy Books, one of the few Arabic Jabir works that were translated into Latin. The author of the version attributed to al-Razi also refers to it as his own work, showing that he erroneously believed it to be a work by al-Razi. There are several indications that the anonymous work was an original composition in Latin, although according to one manuscript it was translated by a certain Raymond of Marseilles, meaning that it may also have been a translation from the Arabic.
According to Ahmad Y. al-Hassan, three recipes for sulfuric acid occur in an anonymous Garshuni manuscript containing a compilation taken from several authors and dating from before . One of them runs as follows:
The water of vitriol and sulphur which is used to irrigate the drugs: yellow vitriol three parts, yellow sulphur one part, grind them and distil them in the manner of rose-water.
A recipe for the preparation of sulfuric acid is mentioned in an Arabic treatise falsely attributed to the Shi'i Imam Ja'far al-Sadiq (died 765). Julius Ruska dated this treatise to the 13th century, but according to Ahmad Y. al-Hassan it likely dates from an earlier period:
Then distil green vitriol in a cucurbit and alembic, using medium fire; take what you obtain from the distillate, and you will find it clear with a greenish tint.
Vincent of Beauvais, Albertus Magnus, and pseudo-Geber
Sulfuric acid was called 'oil of vitriol' by medieval European alchemists because it was prepared by roasting iron(II) sulfate or green vitriol in an iron retort. The first allusions to it in works that are European in origin appear in the thirteenth century AD, as for example in the works of Vincent of Beauvais, in the Compositum de Compositis ascribed to Albertus Magnus, and in pseudo-Geber's Summa perfectionis.
Producing sulfuric acid from sulfur
A method of producing oleum sulphuris per campanam, or "oil of sulfur by the bell", was known by the 16th century: it involved burning sulfur under a glass bell in moist weather (or, later, under a moistened bell). However, it was very inefficient (according to Gesner, only a small fraction of the sulfur was converted into acid), and the resulting product was contaminated by sulfurous acid (or rather, a solution of sulfur dioxide), so most alchemists (including, for example, Isaac Newton) did not consider it equivalent to the "oil of vitriol".
In the 17th century, Johann Rudolf Glauber discovered that adding saltpeter (potassium nitrate, KNO₃) significantly improves the output, also replacing moisture with steam. As saltpeter decomposes, it oxidizes the sulfur to SO₃, which combines with water to produce sulfuric acid. In 1736, Joshua Ward, a London pharmacist, used this method to begin the first large-scale production of sulfuric acid.
Lead chamber process
In 1746 in Birmingham, John Roebuck adapted this method to produce sulfuric acid in lead-lined chambers, which were stronger, less expensive, and could be made larger than the previously used glass containers. This process allowed the effective industrialization of sulfuric acid production. After several refinements, this method, called the lead chamber process or "chamber process", remained the standard for sulfuric acid production for almost two centuries.
Distillation of pyrite
Sulfuric acid created by John Roebuck's process approached a 65% concentration. Later refinements to the lead chamber process by French chemist Joseph Louis Gay-Lussac and British chemist John Glover improved concentration to 78%. However, the manufacture of some dyes and other chemical processes require a more concentrated product. Throughout the 18th century, this could only be made by dry distilling minerals in a technique similar to the original alchemical processes. Pyrite (iron disulfide, FeS₂) was heated in air to yield iron(II) sulfate, FeSO₄, which was oxidized by further heating in air to form iron(III) sulfate, Fe₂(SO₄)₃, which, when heated to 480 °C, decomposed to iron(III) oxide and sulfur trioxide, which could be passed through water to yield sulfuric acid in any concentration. However, the expense of this process prevented the large-scale use of concentrated sulfuric acid.
Contact process
In 1831, British vinegar merchant Peregrine Phillips patented the contact process, which was a far more economical process for producing sulfur trioxide and concentrated sulfuric acid. Today, nearly all of the world's sulfuric acid is produced using this method.
In the early to mid 19th century "vitriol" plants existed, among other places, in Prestonpans in Scotland, Shropshire and the Lagan Valley in County Antrim, Northern Ireland, where it was used as a bleach for linen. Early bleaching of linen was done using lactic acid from sour milk but this was a slow process and the use of vitriol sped up the bleaching process.
Safety
Laboratory hazards
Sulfuric acid is capable of causing very severe burns, especially when it is at high concentrations. In common with other corrosive acids and alkali, it readily decomposes proteins and lipids through amide and ester hydrolysis upon contact with living tissues, such as skin and flesh. In addition, it exhibits a strong dehydrating property on carbohydrates, liberating extra heat and causing secondary thermal burns. Accordingly, it rapidly attacks the cornea and can induce permanent blindness if splashed onto eyes. If ingested, it damages internal organs irreversibly and may even be fatal. Personal protective equipment should hence always be used when handling it. Moreover, its strong oxidizing property makes it highly corrosive to many metals and may extend its destruction on other materials. Because of such reasons, damage posed by sulfuric acid is potentially more severe than that by other comparable strong acids, such as hydrochloric acid and nitric acid.
Sulfuric acid must be stored carefully in containers made of nonreactive material (such as glass). Solutions equal to or stronger than 1.5 M are labeled "CORROSIVE", while solutions greater than 0.5 M but less than 1.5 M are labeled "IRRITANT". However, even the normal laboratory "dilute" grade (approximately 1 M, 10%) will char paper if left in contact for a sufficient time.
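The labelling thresholds quoted above amount to a simple decision rule. The sketch below merely restates them in Python as an illustration, not an official classification scheme:
def hazard_label(molarity):
    """Return the label implied by the concentration thresholds described above."""
    if molarity >= 1.5:
        return "CORROSIVE"
    if molarity > 0.5:
        return "IRRITANT"
    return "below the labelled thresholds"

print(hazard_label(18.4))  # CORROSIVE (concentrated acid)
print(hazard_label(1.0))   # IRRITANT
print(hazard_label(0.1))   # below the labelled thresholds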
The standard first aid treatment for acid spills on the skin is, as for other corrosive agents, irrigation with large quantities of water. Washing is continued for at least ten to fifteen minutes to cool the tissue surrounding the acid burn and to prevent secondary damage. Contaminated clothing is removed immediately and the underlying skin washed thoroughly.
Dilution hazards
Preparation of diluted acid can be dangerous due to the heat released in the dilution process. To avoid splattering, the concentrated acid is usually added to water and not the other way around. A saying used to remember this is "Do like you oughta, add the acid to the water". Water has a higher heat capacity than the acid, and so a vessel of cold water will absorb heat as acid is added.
Also, because the acid is denser than water, it sinks to the bottom. Heat is generated at the interface between acid and water, which is at the bottom of the vessel. Acid will not boil, because of its higher boiling point. Warm water near the interface rises due to convection, which cools the interface, and prevents boiling of either acid or water.
In contrast, addition of water to concentrated sulfuric acid results in a thin layer of water on top of the acid. Heat generated in this thin layer of water can boil, leading to the dispersal of a sulfuric acid aerosol, or worse, an explosion.
Preparation of solutions greater than 6 M (35%) in concentration is dangerous, unless the acid is added slowly enough to allow the mixture sufficient time to cool. Otherwise, the heat produced may be sufficient to boil the mixture. Efficient mechanical stirring and external cooling (such as an ice bath) are essential.
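The quantities involved can be estimated with a short calculation. The density and purity figures below are typical handbook values assumed for illustration rather than values taken from this article:
density_g_per_ml = 1.84   # assumed density of ~98% sulfuric acid
mass_fraction = 0.98      # assumed mass fraction of H2SO4
molar_mass = 98.08        # g/mol

stock_molarity = density_g_per_ml * 1000 * mass_fraction / molar_mass
print(round(stock_molarity, 1))      # ~18.4 mol/L for the concentrated acid

# Volume of concentrated acid needed per litre of a 6 M solution:
target_molarity = 6.0
print(round(target_molarity / stock_molarity * 1000))  # ~326 mL, added slowly to the water

# A 1:1 dilution of the stock gives roughly half the stock molarity (~9.2 mol/L),
# consistent with the "10 M" chamber-acid equivalent described in the grades section above.
print(round(stock_molarity / 2, 1))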
Reaction rates double for about every 10-degree Celsius increase in temperature. Therefore, the reaction will become more violent as dilution proceeds, unless the mixture is given time to cool. Adding acid to warm water will cause a violent reaction.
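Under that rule of thumb (a rough simplification of Arrhenius behaviour rather than an exact law), the expected speed-up for a given temperature rise can be estimated as follows:
def rate_factor(delta_t_celsius):
    """Approximate speed-up, assuming the rate doubles for every 10 °C rise."""
    return 2 ** (delta_t_celsius / 10)

print(rate_factor(10))  # 2.0
print(rate_factor(30))  # 8.0 - a 30 °C rise makes the reaction roughly eight times as fast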
On a laboratory scale, sulfuric acid can be diluted by pouring concentrated acid onto crushed ice made from de-ionized water. The ice melts in an endothermic process while dissolving the acid. The amount of heat needed to melt the ice in this process is greater than the amount of heat evolved by dissolving the acid so the solution remains cold. After all the ice has melted, further dilution can take place using water.
Industrial hazards
Sulfuric acid is non-flammable.
The main occupational risks posed by this acid are skin contact leading to burns (see above) and the inhalation of aerosols. Exposure to aerosols at high concentrations leads to immediate and severe irritation of the eyes, respiratory tract and mucous membranes: this ceases rapidly after exposure, although there is a risk of subsequent pulmonary edema if tissue damage has been more severe. At lower concentrations, the most commonly reported symptom of chronic exposure to sulfuric acid aerosols is erosion of the teeth, found in virtually all studies: indications of possible chronic damage to the respiratory tract are inconclusive as of 1997. Repeated occupational exposure to sulfuric acid mists may increase the chance of lung cancer by up to 64 percent. In the United States, the permissible exposure limit (PEL) for sulfuric acid is fixed at 1 mg/m3: limits in other countries are similar. There have been reports of sulfuric acid ingestion leading to vitamin B12 deficiency with subacute combined degeneration. The spinal cord is most often affected in such cases, but the optic nerves may show demyelination, loss of axons and gliosis.
Legal restrictions
International commerce of sulfuric acid is controlled under the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances, 1988, which lists sulfuric acid under Table II of the convention as a chemical frequently used in the illicit manufacture of narcotic drugs or psychotropic substances.
See also
Aqua regia
Diethyl ether – also known as "sweet oil of vitriol"
Piranha solution
Sulfur oxoacid
Sulfuric acid poisoning
References
External links
Sulfuric acid at The Periodic Table of Videos (University of Nottingham)
NIOSH Pocket Guide to Chemical Hazards
CDC – Sulfuric Acid – NIOSH Workplace Safety and Health Topic
Calculators: surface tensions, and densities, molarities and molalities of aqueous sulfuric acid
Acid catalysts
Alchemical substances
Cleaning products
Dehydrating agents
Equilibrium chemistry
Household chemicals
Chalcogen oxoacids
Inorganic solvents
Mineral acids
Oxidizing acids
Photographic chemicals
Sulfates
Sulfur oxoacids
E-number additives | Sulfuric acid | Chemistry | 7,020 |
1,553,972 | https://en.wikipedia.org/wiki/Video%20production | Video production is the process of producing video content. It is the equivalent of filmmaking, but with video recorded either as analog signals on videotape, digitally on videotape, or as computer files stored on optical discs, hard drives, SSDs, magnetic tape or memory cards instead of film stock.
Television broadcast
Two styles of producing video are ENG (Electronic news-gathering) and EFP (Electronic field production).
Video production for distance education
Video production for distance education is the process of capturing, editing, and presenting educational material specifically for use in on-line education. Teachers integrate best-practice teaching techniques to create scripts, organize content, capture video footage, edit the footage using computer-based video editing software, and deliver the final educational material over the Internet. It differs from other types of video production in at least three ways:
It augments traditional teaching tools used in on-line educational programs.
It may incorporate motion video with sound, computer animations, stills, and other digital media.
Capture of content may include use of cell phone integrated cameras and extend to commercial high-definition Broadcast quality cameras.
Webcasting is also being used in education for distance learning projects; one innovative use was the DiveLive programs.
Internet video production
Increasing internet speeds, the transition from physical formats such as tape to file-based digital media, and the availability of cloud-based video services have increased the use of the internet to provide services that were previously delivered on-premises in commercial content creation, for example video editing. In some cases the lower cost of equivalent services in the cloud has driven adoption, and in others the greater scope for collaboration and the time savings have.
Individual Internet marketing videos are primarily produced in-house and by small media agencies, while a large volume of videos are produced by big media companies, crowdsourced production marketplaces, or in scalable video production platforms.
See also
B-roll
List of video topics
Television studies
References
External links
Broadcast engineering
Film and video technology
Television terminology
Articles containing video clips | Video production | Engineering | 397 |
903,303 | https://en.wikipedia.org/wiki/Diazotroph | Diazotrophs are bacteria and archaea that fix atmospheric nitrogen gas (N2) into bioavailable forms such as ammonia.
A diazotroph is a microorganism that is able to grow without external sources of fixed nitrogen. Examples of organisms that do this are rhizobia, Frankia, and Azospirillum. All diazotrophs contain iron-molybdenum or iron-vanadium nitrogenase systems. Two of the most studied systems are those of Klebsiella pneumoniae and Azotobacter vinelandii. These systems are studied because of their genetic tractability and their fast growth.
Etymology
The word diazotroph is derived from the words diazo ("di" = two + "azo" = nitrogen), meaning "dinitrogen (N2)", and troph, meaning "pertaining to food or nourishment"; in summary, dinitrogen-utilizing. The word azote means nitrogen in French and was coined by the French chemist and biologist Antoine Lavoisier, who saw it as the part of air which cannot sustain life.
Types
Diazotrophs are scattered across Bacteria taxonomic groups (as well as a couple of Archaea). Even within a species that can fix nitrogen there may be strains that do not. Fixation is shut off when other sources of nitrogen are available, and, for many species, when oxygen is at high partial pressure. Bacteria have different ways of dealing with the debilitating effects of oxygen on nitrogenases, listed below.
Free-living diazotrophs
Anaerobes—these are obligate anaerobes that cannot tolerate oxygen even if they are not fixing nitrogen. They live in habitats low in oxygen, such as soils and decaying vegetable matter. Clostridium is an example. Sulphate-reducing bacteria are important in ocean sediments (e.g. Desulfovibrio), and some archaeal methanogens, like Methanococcus, fix nitrogen in muds, animal intestines and anoxic soils.
Facultative anaerobes—these species can grow either with or without oxygen, but they only fix nitrogen anaerobically. Often, they respire oxygen as rapidly as it is supplied, keeping the amount of free oxygen low. Examples include Klebsiella pneumoniae, Paenibacillus polymyxa, Bacillus macerans, and Escherichia intermedia.
Aerobes—these species require oxygen to grow, yet their nitrogenase is still debilitated if exposed to oxygen. Azotobacter vinelandii is the most studied of these organisms. It uses very high respiration rates, and protective compounds, to prevent oxygen damage. Many other species also reduce the oxygen levels in this way, but with lower respiration rates and lower oxygen tolerance.
Oxygenic photosynthetic bacteria (cyanobacteria) generate oxygen as a by-product of photosynthesis, yet some are able to fix nitrogen as well. These are colonial bacteria that have specialized cells (heterocysts) that lack the oxygen generating steps of photosynthesis. Examples are Anabaena cylindrica and Nostoc commune. Other cyanobacteria lack heterocysts and can fix nitrogen only in low light and oxygen levels (e.g. Plectonema). Some cyanobacteria, including the highly abundant marine taxa Prochlorococcus and Synechococcus do not fix nitrogen, whilst other marine cyanobacteria, such as Trichodesmium and Cyanothece, are major contributors to oceanic nitrogen fixation.
Anoxygenic photosynthetic bacteria do not generate oxygen during photosynthesis, having only a single photosystem which cannot split water. Nitrogenase is expressed under nitrogen limitation. Normally, the expression is regulated via negative feedback from the produced ammonium ion, but in the absence of N2, the product is not formed, and the by-product H2 continues unabated (see biohydrogen). Example species: Rhodobacter sphaeroides, Rhodopseudomonas palustris, Rhodobacter capsulatus.
Symbiotic diazotrophs
Rhizobia—these are the species that associate with legumes, plants of the family Fabaceae. Oxygen is bound to leghemoglobin in the root nodules that house the bacterial symbionts, and supplied at a rate that will not harm the nitrogenase.
Frankias—'actinorhizal' nitrogen fixers. The bacteria also infect the roots, leading to the formation of nodules. Actinorhizal nodules consist of several lobes; each lobe has a similar structure to a lateral root. Frankia is able to colonize the cortical tissue of nodules, where it fixes nitrogen. Actinorhizal plants and Frankias also produce haemoglobins, although their role is less well established than for rhizobia. Although at first it appeared that they inhabit sets of unrelated plants (alders, Australian pines, California lilac, bog myrtle, bitterbrush, Dryas), revisions to the phylogeny of angiosperms show a close relatedness of these species and the legumes. These findings suggest that the repeated appearance of the trait reflects reuse of shared ancestral machinery rather than independent origins. In other words, an ancient gene (from before the angiosperms and gymnosperms diverged) that is unused in most species was reawakened and reused in these species.
Cyanobacteria—there are also symbiotic cyanobacteria. Some associate with fungi as lichens, with liverworts, with a fern, and with a cycad. These do not form nodules (indeed most of the plants do not have roots). Heterocysts exclude the oxygen, as discussed above. The fern association is important agriculturally: the water fern Azolla harbouring Anabaena is an important green manure for rice culture.
Association with animals—although diazotrophs have been found in many animal guts, there is usually sufficient ammonia present to suppress nitrogen fixation. Termites on a low nitrogen diet allow for some fixation, but the contribution to the termite's nitrogen supply is negligible. Shipworms may be the only species that derive significant benefit from their gut symbionts.
Cultivation
Under laboratory conditions, extra nitrogen sources are not needed to grow free-living diazotrophs. Carbon sources (such as sucrose or glucose) and a small amount of inorganic salts are added to the medium. Free-living diazotrophs can directly use atmospheric nitrogen (N2). However, when cultivating symbiotic diazotrophs such as rhizobia, it is necessary to add nitrogen, because rhizobia and other symbiotic nitrogen-fixing bacteria cannot use molecular nitrogen (N2) in free-living form and only fix nitrogen during symbiosis with a host plant.
Application
Biofertilizer
Diazotroph fertilizer is a kind of biofertilizer that uses nitrogen-fixing microorganisms to convert molecular nitrogen (N2) into ammonia, a form of nitrogen available for crops to use. These nitrogen nutrients can then be used in protein synthesis by the plants. This whole process of nitrogen fixation by diazotrophs is called biological nitrogen fixation. The biochemical reaction can be carried out under normal temperature and pressure conditions, so it does not require the extreme conditions and specific catalysts of industrial fertilizer production. Producing available nitrogen in this way can therefore be cheap, clean and efficient, making nitrogen-fixing bacterial fertilizer an ideal and promising biofertilizer.
Since ancient times, people have grown leguminous crops to make the soil more fertile. The reason is that the roots of leguminous crops are symbiotic with rhizobia (a kind of diazotroph), which can be considered a natural biofertilizer providing available nitrogen in the soil. Other crops grown after the legumes are harvested (which need not themselves be leguminous) can then use the nitrogen remaining in the soil and grow better.
Diazotroph biofertilizers used today include Rhizobium, Azotobacter, Azospirillum and blue-green algae (cyanobacteria). These fertilizers are widely used and have entered industrial production. The nitrogen-fixing biofertilizers on the market can be divided into liquid and solid fertilizers. Most are produced by liquid fermentation; after fermentation, the liquid culture can be packaged as a liquid fertilizer, or adsorbed onto sterilized peat and other carrier adsorbents to form a solid microbial fertilizer. These nitrogen-fixing fertilizers have been reported to increase the production of cotton, rice, wheat, peanuts, rape, corn, sorghum, potatoes, tobacco, sugarcane and various vegetables.
Importance
Among diazotrophic organisms, symbiotic associations greatly exceed the free-living species, with the exception of cyanobacteria.
Biologically available nitrogen such as ammonia is the primary limiting factor for life on Earth, and diazotrophs play an important role in the Earth's nitrogen cycle. In terrestrial ecosystems, diazotrophs fix N2 from the atmosphere and provide available nitrogen to primary producers; the nitrogen is then transferred to higher trophic levels and to human beings. The formation and storage of nitrogen are all influenced by this transformation process. The available nitrogen fixed by diazotrophs is also environmentally sustainable and can reduce the use of synthetic fertilizer, an important topic in agricultural research.
In marine ecosystems, prokaryotic phytoplankton (such as cyanobacteria) are the main nitrogen fixers, and the nitrogen is then consumed by higher trophic levels. The fixed N released from these organisms is a component of ecosystem N inputs, and it is also important for the coupled carbon cycle: a greater oceanic inventory of fixed N may increase primary production and the export of organic carbon to the deep ocean.
References
External links
Marine Nitrogen Fixation - The Basics (USC Capone Lab)
Azotobacter
Rhizobia
Frankia & Actinorhizal Plants
Nitrogen cycle
Environmental microbiology | Diazotroph | Chemistry,Environmental_science | 2,237 |
8,596,033 | https://en.wikipedia.org/wiki/Moravian%20star | A Moravian star is an illuminated decoration used during the Christian liturgical seasons of Advent, Christmas, and Epiphany representing the Star of Bethlehem pointing towards the infant Jesus.
The Moravian star is popular in places where there are Moravian Christian congregations worldwide. The stars take their English name from the Moravian Church, which originated in Moravia. In Germany, they are known as Herrnhut stars, named after the Moravian Mother Community in Saxony, Germany, where they were first commercially produced. With the rise of ecumenism, the use of the Moravian star has spread beyond the Moravian Church to other Christian denominations, such as the Lutheran Church and Catholic Church, as well as the Methodist Church.
History
The first Moravian star is known to have originated in the 1830s at the Moravian Boys' School in Niesky, Germany, as a geometry lesson or project. The first mention is of a 110-point star for the 50th anniversary of the Paedagogium (classical school for boys) in Niesky. Around 1880, Peter Verbeek, an alumnus of the school, began making the stars and their instructions available for sale through his bookstore. His son Harry went on to found the Herrnhut Star Factory, which was the main source of stars until World War I. Although heavily damaged at the end of World War II, the Star Factory resumed manufacturing them. Briefly taken over by the government of East Germany in the 1950s, the factory was returned to the Moravian Church-owned Abraham Dürninger Company, which continues to make the stars in Herrnhut. Other star-making companies and groups have sprung up since then. Some Moravian congregations have congregation members who build and sell the stars as fund raisers.
Cultural and Religious importance
The star was soon adopted throughout the Moravian Church as an Advent symbol.
Moravian stars continue to be displayed during Advent, Christmas, and Epiphany throughout the world, even in areas without a significant Moravian Church presence. Many Moravian households display their stars year-round. The stars are often seen in Moravian nativity and Christmas village displays as a representation of the Star of Bethlehem pointing towards the infant Jesus. They are properly displayed from the first Sunday in Advent (the fourth Sunday before Christmas) until the Festival of Epiphany (January 6). Large Advent stars shine in the dome of the Frauenkirche in Dresden and over the altar of the Thomaskirche in Leipzig. A Moravian star, one of the largest in the world, sits atop the North Tower of Atrium Health Wake Forest Baptist during the Advent and Christmas seasons. The city of Winston-Salem, North Carolina, which traces its origins to the Moravian settlement of Salem, founded in 1766, uses the Moravian star as its official Christmas street decoration. Another star sits under Wake Forest University's Wait Chapel during the Advent and Christmas seasons as well.
The use of the stars during the Advent, Christmas, and Epiphany seasons is also a tradition in the West Indies, Greenland, Suriname, Labrador, Central America, South and East Africa, Ladakh in India, and in parts of Scandinavia: wherever the Moravian Church has sent missionaries.
In 2020, as the world descended into the COVID-19 pandemic, Moravians began to rehang their stars as a sign of love, hope, and peace during dark times. On March 27, 2020, Atrium Health Wake Forest Baptist reinstalled its 31-foot star atop the North Tower.
Types of stars
The original Moravian star as manufactured in Herrnhut since 1897 exists only in a 26-point form, composed of eighteen square and eight triangular cone-shaped points. The 26th point is missing and used for mounting. This shape is technically known as a Kleetope of a rhombicuboctahedron. Each face of the geometric solid in the middle, the rhombicuboctahedron, serves as the base for one of the pyramid augmentations or starburst points. This is the most commonly seen and most widely available form of Christmas star.
Other forms of Christmas star exist, which differ from the original Herrnhut Moravian star. No matter how many points a star has, it has a symmetrical shape based on polyhedra. There are other stars with 20, 32, 50, 62 and 110 points that are commonly hand-made. The variety comes from various ways of forming the polyhedron that provides a base for the points—using an octagonal face instead of a square face, for example. The common original Herrnhut Moravian star becomes a 50-point star when the squares and triangles that normally make up the faces of the polyhedron become octagons and hexagons. This leaves a 4-sided trapezoidal hole in the corners of the faces which is then filled with an irregular four sided point. These 4-sided points form a "starburst" in the midst of an otherwise regular 26-point star.
Froebel stars, which are paper decorations made from four folded strips of paper, are sometimes inaccurately also called Moravian stars, among many other names.
See also
Christingle
Illumination (decoration)
Lovefeast
Nativity scene
References
External links
Make a German Froebel Star Ornament
Make a Froebel Star
Make a Moravian Star
Hardwood Moravian Stars
Christmas decorations
Christmas in Germany
Traditions of the Moravian Church
Herrnhut
Niesky
Advent
Star symbols | Moravian star | Mathematics | 1,093 |
69,544,961 | https://en.wikipedia.org/wiki/SHIELDS | The Spatial Heterodyne Interferometric Emission Line Dynamics Spectrometer (SHIELDS) mission is intended to study light from interstellar particles that have drifted into the Solar System in order to learn about the nearest reaches of interstellar space. The purpose of the mission is to acquire a spatial map of scattered solar ultraviolet emission from interplanetary hydrogen that has crossed and been modified by the ion pile-up along the outer edge of the heliosphere. SHIELDS was successfully launched by NASA on April 19, 2021, from the White Sands Missile Range in New Mexico. Flown aboard a sounding rocket, the mission is very short: the instrument stays in space for only a few minutes.
See also
List of NASA missions
References
Space probes launched in 2021
NASA space probes | SHIELDS | Astronomy | 153 |
72,252,379 | https://en.wikipedia.org/wiki/Blind%20polytope | In geometry, a Blind polytope is a convex polytope composed of regular polytope facets.
The category was named after the German couple Gerd and Roswitha Blind, who described them in a series of papers beginning in 1979.
It generalizes the set of semiregular polyhedra and Johnson solids to higher dimensions.
Uniform cases
The set of convex uniform 4-polytopes (also called semiregular 4-polytopes) is completely known, with nearly all of its members grouped by their Wythoff constructions, sharing symmetries of the convex regular 4-polytopes and prismatic forms.
The sets of convex uniform 5-polytopes, uniform 6-polytopes, uniform 7-polytopes, etc. are largely enumerated as Wythoff constructions, but are not known to be complete.
Other cases
Pyramidal forms: (4D)
(Tetrahedral pyramid, ( ) ∨ {3,3}, a tetrahedron base, and 4 tetrahedral sides, a lower symmetry name of regular 5-cell.)
Octahedral pyramid, ( ) ∨ {3,4}, an octahedron base, and 8 tetrahedra sides meeting at an apex.
Icosahedral pyramid, ( ) ∨ {3,5}, an icosahedron base, and 20 tetrahedra sides.
Bipyramid forms: (4D)
Tetrahedral bipyramid, { } + {3,3}, a tetrahedron center, and 8 tetrahedral cells on two sides.
(Octahedral bipyramid, { } + {3,4}, an octahedron center, and 8 tetrahedral cells on each of two sides, a lower symmetry name of the regular 16-cell.)
Icosahedral bipyramid, { } + {3,5}, an icosahedron center, and 40 tetrahedral cells on two sides.
Augmented forms: (4D)
Rectified 5-cell augmented with one octahedral pyramid, adding one vertex for 11 total. It retains 5 tetrahedral cells, reduced to 4 octahedral cells and adds 8 new tetrahedral cells.
Convex Regular-Faced Polytopes
Blind polytopes are a subset of convex regular-faced polytopes (CRF).
This much larger set allows CRF 4-polytopes to have Johnson solids as cells, as well as regular and semiregular polyhedral cells.
For example, a cubic bipyramid has 12 square pyramid cells.
References
External links
Blind polytope
Convex regular-faced polytopes
Polytopes | Blind polytope | Mathematics | 553 |
20,811,122 | https://en.wikipedia.org/wiki/PowerLab | PowerLab (before 1998 was referred to as MacLab) is a data acquisition system developed by ADInstruments comprising hardware and software and designed for use in life science research and teaching applications. It is commonly used in physiology, pharmacology, biomedical engineering, sports/exercise studies and psychophysiology laboratories to record and analyse physiological signals from human or animal subjects or from isolated organs. The system consists of an input device connected to a Microsoft Windows or Mac OS computer using a USB cable and LabChart software which is supplied with the PowerLab and provides the recording, display and analysis functions. The use of PowerLab and supplementary ADInstruments products have been demonstrated on the Journal of Visualised Experiments.
The original MacLab unit was developed in the late 1980s to run with only Macintosh computers to perform computer-based data acquisition and analysis. The MacLab product range was renamed "PowerLab" in 1997 to reflect the cross-platform nature of the system.
The PowerLab system is essentially a peripheral device designed to perform various functions needed for data acquisition, signal conditioning and pre-processing. Versatile display options and analysis functions are complemented by the ability to export data to other software (such as Microsoft Excel).
How is data acquired?
Detected external signals are converted into analog electrical signals
Signals are amplified and filtered to remove unwanted frequencies or noise
Analog signal is multiplexed to an analog-to-digital converter
The digitized signal is transmitted to the computer using USB connection
Software receives, displays, analyses and records data in real time
PowerLab Models
Current
/35 Series for research: Released in 2011. Includes PowerLab 4/35, PowerLab 8/35 and PowerLab 16/35.
/30 Series for research: Released in 2004. Includes PowerLab 4/30, PowerLab 8/30 and PowerLab 16/30.
/26 Series for teaching: Released in 2007. Includes PowerLab 2/26, PowerLab 4/26, PowerLab26T
/T Series for teaching: Released in 2007. Includes PowerLab 15T
Previous
Original MacLab: First released November 1988. Includes MacLab/4 and MacLab/8.
E series: First released October 1992. Includes MacLab /2e, MacLab /4e, MacLab /8e, MacLab /200, PowerLab /200, MacLab /400, PowerLab /400, PowerLab /410, PowerLab /415, and PowerLab /800
S series: First released August 1994. For SCSI connection only and includes MacLab /4S, PowerLab /4S, PowerLab /8S and PowerLab /16S.
SP series: First released May 1999. For SCSI and USB connections and includes PowerLab /4SP, PowerLab /8SP, PowerLab /16SP, and PowerLab 4ST.
/20 series: First released July 2000. USB connections only and includes PowerLab 2/20, PowerLab 4/20, and PowerLab 4/20T.
/T series: First released September 2002. PowerLab 10T for teaching.
/25 series: First released September 2003. Requires high speed USB 2.0 connections and includes PowerLab 2/25, PowerLab 4/25, and PowerLab 4/25T.
Software for PowerLab
LabChart
Formerly known as Chart. The software functions like a traditional multi-channel chart recorder, XY plotter, polygraph and digital voltmeter. It is compatible with both Windows and Macintosh operating systems. The software has hardware settings control, performs analysis in real-time and offline without the loss of raw data, procedure automation via editable macros, and multiple block samplings for the recording and settings of different signals within one file. Large specialised add-ons called Modules provide data acquisition and analysis features for specific applications such as ECG, blood pressure, cardiac output, HRV etc. Smaller software plugins provide additional and specialized functionality to LabChart. Extensions perform functions such as file translations into other formats (including PVAN and Igor Pro) and specialist analysis functions (for specific research areas such as spirometry and ventricular pressure). The last version of LabChart6 (version 6.1.3) was released in January 2009.
In April 2009, LabChart 7 was released and incorporates the features of a multi-channel digital oscilloscope that allows recording and averaging of up to sixteen signals in real time. Latest version of LabChart7 is version 7.0.
LabChart 8 is also now available.
LabTutor
Software provides a range of hands-on laboratory background for students that includes experimental background & protocols, data acquisition & analysis, and report generation within one interface. The software and accompanying PowerLab hardware is configured for immediate use with step by step instructions designed to maximize student productivity by applying independent learning techniques to a suite of human and animal physiological experiments. Recently, LabAuthor software was released to provide educators the ability to design or edit existing LabTutor experiments and tailor the experiments to suit their practical classes without the need of programming or HTML skills.
Scope
Records and analyzes high frequency signals that are time-locked to a stimulus. The display allows computer screen to act as an oscilloscope and XY plotter
Limitations
The PowerLab messaging protocol is not publicly available and there is no public API for traditional programming languages such as C.
References
Cross-platform software
Plotting software
Data analysis software
Data collection in research
Life sciences industry | PowerLab | Biology | 1,123 |
53,156,368 | https://en.wikipedia.org/wiki/Beijing%E2%80%93Washington%20hotline | The Beijing–Washington hotline is a system that allows direct communication between the leaders of the United States and China. This hotline was established in November 2007, when both countries announced that they would set up a military hotline to avoid misunderstanding between their militaries during any moments of crisis in the Pacific.
History
Discussions to set up a Beijing–Washington hotline started during a meeting between Chinese president Hu Jintao and U.S. President George W. Bush in April 2006.
On 5 November 2007, U.S. Defense Secretary Robert M. Gates told reporters that he and Chinese Defense Minister Cao Gangchuan formally agreed to set up the dedicated 24-hour phone line in Beijing. According to a report, China's Defense Ministry long resisted the idea of a direct line until June 2007, when General Zhang Qinsheng stated that China was ready to proceed with the establishment at the Shangri-La Dialogue security conference in Singapore. After a meeting in February 2008, China and the United States officially signed an agreement to set up a military hotline between the United States Department of Defense and the Ministry of National Defense of the People's Republic of China. The hotline was set up on 10 April 2008.
The Beijing–Washington hotline uses different procedures compared to the Moscow–Washington hotline which was set up after the Cuban Missile Crisis. The 2008 agreement with China arranges a call to be put through to the Zhongnanhai telecommunications directorate, which has discretion on whether to forward the call to the foreign affairs section of the Department of Defense or the PLA's command headquarters in West Beijing. Furthermore, in protest against US actions, the Chinese have cut off the hotline twice for extended time periods.
In September 2015, Chinese leader Xi Jinping and U.S. President Barack Obama announced agreements on a new military hotline to reduce the risks of accidental escalations between the two countries.
In May 2021, Kurt Campbell, the US policy coordinator for the Indo-Pacific, said that China had been reluctant to use the hotline, describing it as "the couple of times we've used it, just rung in an empty room for hours upon hours".
In February 2023, China's Minister of Defense, Wei Fenghe, declined to respond to a call from U.S. Defense Secretary Lloyd Austin regarding a balloon over the Beaufort Sea in the vicinity of Deadhorse, Alaska, during the 2023 Chinese balloon incident. The hotline was restored after President Biden's meeting with Xi Jinping at the 2023 APEC summit.
Space hotline
In November 2015, the U.S. and China set up a so-called space hotline, allowing both nations to easily share information about activities in space and help their space and military agencies to discuss "potential collisions, approaches, or tests" to prevent misunderstanding or miscommunication from escalating to a dangerous situation. According to U.S. Assistant Secretary Of State Frank Rose, an urgent safety mechanism was required due to the growing amount of potentially lethal space debris in orbit, as well as numerous undisclosed military satellite launches. The link was established amid tensions due to China ramping up tests of weapons designed to target the orbital networks upon which almost all US high-tech military capabilities depend.
Cyber hotline
In November 2011, an editorial in the China Daily called for closer communication through a cyber red phone, especially in cases of an emergency concerning matters of cyberwarfare.
See also
Moscow–Washington hotline
Seoul–Pyongyang hotline
Islamabad–New Delhi hotline
China–United States relations
References
Communication circuits
China–United States relations
Military communications of the United States
Military communications of China
Hotlines between countries | Beijing–Washington hotline | Engineering | 749 |
27,440,875 | https://en.wikipedia.org/wiki/G%C3%ABzim%20Bo%C3%A7ari | Gëzim Boçari (born 25 March 1949) is an Albanian professor of pharmacology and a politician.
Gëzim Boçari is head of the pharmacology sector of the medicine faculty at the University of Tirana. He is one of the writers of medicine and pharmacology textbooks of Albanian universities. In the 2009 parliamentary elections for Albania he was the head candidate of the coalition of the Pole of Freedom () in the district of Vlorë.
External links
Interview of Gëzim Boçari
Sources
20th-century Albanian politicians
21st-century Albanian politicians
20th-century Albanian writers
21st-century Albanian writers
Albanian health professionals
Albanian scientists
Pharmacologists
Living people
Academic staff of the University of Tirana
People from Vlorë
1949 births | Gëzim Boçari | Chemistry | 151 |
4,834,091 | https://en.wikipedia.org/wiki/Kuhn%20length | The Kuhn length is a theoretical treatment, developed by Werner Kuhn, in which a real polymer chain is considered as a collection of N Kuhn segments, each with a Kuhn length b. The Kuhn segments can be thought of as being freely jointed with each other: each segment in a freely jointed chain can randomly orient in any direction without the influence of any forces, independent of the directions taken by other segments. Instead of considering a real chain consisting of n bonds with fixed bond angles, torsion angles, and bond lengths, Kuhn considered an equivalent ideal chain with N connected segments, now called Kuhn segments, that can orient in any random direction.
The length of a fully stretched chain is Rmax = Nb for the Kuhn segment chain. In the simplest treatment, such a chain follows the random walk model, where each step taken in a random direction is independent of the directions taken in the previous steps, forming a random coil. The mean-square end-to-end distance for a chain satisfying the random walk model is ⟨R²⟩ = Nb².
Since the space occupied by a segment in the polymer chain cannot be taken by another segment, a self-avoiding random walk model can also be used. The Kuhn segment construction is useful in that it allows complicated polymers to be treated with simplified models as either a random walk or a self-avoiding walk, which can simplify the treatment considerably.
For an actual homopolymer chain (consisting of identical repeat units) with n bonds of length l, a fixed bond angle θ, and a dihedral angle energy potential, the mean-square end-to-end distance can be obtained as
⟨R²⟩ = nl² [(1 − cos θ)/(1 + cos θ)] [(1 + ⟨cos φ⟩)/(1 − ⟨cos φ⟩)],
where ⟨cos φ⟩ is the average cosine of the dihedral angle φ.
The fully stretched (all-trans) length is Rmax = nl sin(θ/2). By equating the two expressions for the mean-square end-to-end distance (⟨R²⟩ = Nb²) and the two expressions for the fully stretched length (Rmax = Nb) from the actual chain and the equivalent chain with Kuhn segments, the number of Kuhn segments N = Rmax²/⟨R²⟩ and the Kuhn segment length b = ⟨R²⟩/Rmax can be obtained.
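As a minimal illustration of this bookkeeping, the Python sketch below evaluates the relations given above for an assumed set of chain parameters. The function name and the polyethylene-like numbers (bond length 0.154 nm, bond angle parameter 112°, ⟨cos φ⟩ = 0.4) are illustrative assumptions only, not fitted or recommended values.

import math

def kuhn_parameters(n_bonds, bond_length, bond_angle_deg, mean_cos_phi):
    """Estimate the Kuhn length b and number of Kuhn segments N
    by equating the real-chain and Kuhn-chain expressions above."""
    theta = math.radians(bond_angle_deg)
    # Mean-square end-to-end distance of the hindered-rotation chain
    r2 = (n_bonds * bond_length**2
          * (1 - math.cos(theta)) / (1 + math.cos(theta))
          * (1 + mean_cos_phi) / (1 - mean_cos_phi))
    # Fully stretched (all-trans) length
    r_max = n_bonds * bond_length * math.sin(theta / 2)
    b = r2 / r_max            # from N*b^2 = <R^2> and N*b = Rmax
    n_kuhn = r_max**2 / r2
    return b, n_kuhn

b, n_kuhn = kuhn_parameters(n_bonds=1000, bond_length=0.154,
                            bond_angle_deg=112.0, mean_cos_phi=0.4)
print(round(b, 3), "nm,", round(n_kuhn), "Kuhn segments")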
For worm-like chain, Kuhn length equals two times the persistence length.
References
Polymer chemistry
Polymer physics | Kuhn length | Chemistry,Materials_science,Engineering | 401 |
9,020,211 | https://en.wikipedia.org/wiki/Chungah | A chungah is an obsolete unit of volume used in India, approximately equal to 1/6 of an imperial gallon (0.758 litres). After metrication in the mid-20th century the unit became obsolete.
See also
List of customary units of measurement in South Asia
References
Units of volume
Customary units in India
Obsolete units of measurement | Chungah | Mathematics | 70 |
62,726,859 | https://en.wikipedia.org/wiki/Institut%20oc%C3%A9anographique | The Institut océanographique is an ocean education organization based in Monaco. The institute manages two ocean museums (in Monaco and Paris) and lobbies globally for the preservation of the oceans' ecology.
History
The Institut océanographique was founded in 1906 by Albert I, Prince of Monaco (the International Hydrographic Organization was launched in Monaco in 1921).
In 1957, Jacques Cousteau was appointed director of the Institut océanographique. In 1961, the Institut océanographique reached an agreement with the International Atomic Energy Agency to relocate the International Laboratory of Marine Radioactivity in Monaco.
In 1998, the Institut océanographique organized the second International Aquariology Congress (the first edition was held in 1958). The European Union of Aquarium Curators was created during this event.
The institute implemented the "Monaco Blue Initiative", a global initiative focused on deep sea biodiversity and large marine species, and backed by UNESCO. In 2016, hundreds of cubic meters of archives belonging to the Institut were found in the Schœlcher campus of the University of the French West Indies (Martinique). In 2019, the Institut océanographique invested 5 million euros in the opening of a care center for marine species, especially sick or injured turtles.
Locations
Oceanographic Museum of Monaco
Institut océanographique de Paris
References
External links
Official website
Oceanography
Oceanographic organizations
Organisations based in Monaco | Institut océanographique | Physics,Environmental_science | 280 |
1,786,719 | https://en.wikipedia.org/wiki/Step-growth%20polymerization | In polymer chemistry, step-growth polymerization refers to a type of polymerization mechanism in which bi-functional or multifunctional monomers react to form first dimers, then trimers, longer oligomers and eventually long chain polymers. Many naturally-occurring and some synthetic polymers are produced by step-growth polymerization, e.g. polyesters, polyamides, polyurethanes, etc. Due to the nature of the polymerization mechanism, a high extent of reaction is required to achieve high molecular weight. The easiest way to visualize the mechanism of a step-growth polymerization is a group of people reaching out to hold their hands to form a human chain—each person has two hands (= reactive sites). There is also the possibility of having more than two reactive sites on a monomer: in this case, branched polymers are produced.
IUPAC has deprecated the term step-growth polymerization, and recommends use of the terms polyaddition (when the propagation steps are addition reactions and molecules are not evolved during these steps) and polycondensation (when the propagation steps are condensation reactions and molecules are evolved during these steps).
Historical aspects
Most natural polymers being employed at early stage of human society are of condensation type. The synthesis of first truly synthetic polymeric material, bakelite, was announced by Leo Baekeland in 1907, through a typical step-growth polymerization fashion of phenol and formaldehyde.
The pioneer of synthetic polymer science, Wallace Carothers, developed a new means of making polyesters through step-growth polymerization in the 1930s as a research group leader at DuPont. It was the first reaction designed and carried out with the specific purpose of creating high molecular weight polymer molecules, as well as the first polymerization reaction whose results had been predicted by scientific theory. Carothers developed a series of mathematical equations to describe the behavior of step-growth polymerization systems, which are still known as the Carothers equations today. In collaboration with the physical chemist Paul Flory, he developed theories that describe further mathematical aspects of step-growth polymerization, including kinetics, stoichiometry, and molecular weight distribution. Carothers is also well known for his invention of Nylon.
Condensation polymerization
"Step growth polymerization" and condensation polymerization are two different concepts, not always identical. In fact polyurethane polymerizes with addition polymerization (because its polymerization produces no small molecules), but its reaction mechanism corresponds to a step-growth polymerization.
The distinction between "addition polymerization" and "condensation polymerization" was introduced by Wallace Carothers in 1929, and refers to the type of products, respectively:
a polymer only (addition)
a polymer and a molecule with a low molecular weight (condensation)
The distinction between "step-growth polymerization" and "chain-growth polymerization" was introduced by Paul Flory in 1953, and refers to the reaction mechanisms, respectively:
by functional groups (step-growth polymerization)
by free-radical or ion (chain-growth polymerization)
Differences from chain-growth polymerization
This technique is usually compared with chain-growth polymerization to show its characteristics.
Classes of step-growth polymers
Classes of step-growth polymers are:
Polyester has high glass transition temperature Tg and high melting point Tm, good mechanical properties to about 175 °C, good resistance to solvent and chemicals. It can exist as fibers and films. The former is used in garments, felts, tire cords, etc. The latter appears in magnetic recording tape and high grade films.
Polyamide (nylon) has good balance of properties: high strength, good elasticity and abrasion resistance, good toughness, favorable solvent resistance. The applications of polyamide include: rope, belting, fiber cloths, thread, substitute for metal in bearings, jackets on electrical wire.
Polyurethane can exist as elastomers with good abrasion resistance, hardness, good resistance to grease and good elasticity, as fibers with excellent rebound, as coatings with good resistance to solvent attack and abrasion and as foams with good strength, good rebound and high impact strength.
Polyurea shows high Tg, fair resistance to greases, oils, and solvents. It can be used in truck bed liners, bridge coating, caulk and decorative designs.
Polysiloxane, siloxane-based polymers available in a wide range of physical states—from liquids to greases, waxes, resins, and rubbers. Due to perfect thermal stability (thanks to silicon, Si) uses of this material include antifoam and release agents, gaskets, seals, cable and wire insulation, hot liquids and gas conduits, etc.
Polycarbonates are transparent, self-extinguishing materials. They possess properties like crystalline thermoplasticity, high impact strength, good thermal and oxidative stability. They can be used in machinery, auto-industry, and medical applications. For example, the cockpit canopy of F-22 Raptor is made of high optical quality polycarbonate.
Polysulfides have outstanding oil and solvent resistance, good gas impermeability, good resistance to aging and ozone. However, it smells bad, and it shows low tensile strength as well as poor heat resistance. It can be used in gasoline hoses, gaskets and places that require solvent resistance and gas resistance.
Polyether shows good thermoplastic behavior, water solubility, generally good mechanical properties, moderate strength and stiffness. It is applied in sizing for cotton and synthetic fibers, stabilizers for adhesives, binders, and film formers in pharmaceuticals.
Phenol formaldehyde resin (bakelite) have good heat resistance, dimensional stability as well as good resistance to most solvents. It also shows good dielectric properties. This material is typically used in molding applications, electrical, radio, televisions and automotive parts where their good dielectric properties are of use. Some other uses include: impregnating paper, varnishes, decorative laminates for wall coverings.
Polytriazole polymers are produced from monomers which bear both an alkyne and an azide functional group. The monomer units are linked to each other by a 1,2,3-triazole group, which is produced by the 1,3-dipolar cycloaddition, also called the azide-alkyne Huisgen cycloaddition. These polymers can take on the form of a strong resin or a gel. With oligopeptide monomers containing a terminal alkyne and a terminal azide, the resulting clicked peptide polymer will be biodegradable due to the action of endopeptidases on the oligopeptide unit.
Branched polymers
A monomer with functionality of 3 or more will introduce branching in a polymer and will ultimately form a cross-linked macrostructure or network even at low fractional conversion. The point at which a tree-like topology transits to a network is known as the gel point because it is signalled by an abrupt change in viscosity. One of the earliest so-called thermosets is known as bakelite. It is not always water that is released in step-growth polymerization: in acyclic diene metathesis or ADMET dienes polymerize with loss of ethene.
Kinetics
The kinetics and rates of step-growth polymerization can be described using a polyesterification mechanism. The simple esterification is an acid-catalyzed process in which protonation of the acid is followed by interaction with the alcohol to produce an ester and water. However, there are a few assumptions needed with this kinetic model. The first assumption is water (or any other condensation product) is efficiently removed. Secondly, the functional group reactivities are independent of chain length. Finally, it is assumed that each step only involves one alcohol and one acid.
With these assumptions, a general rate law for polyesterification can be written in terms of the functional group concentration c (= [COOH] = [OH]),
−dc/dt = kcⁿ,
where n is the reaction order; the degree of polymerization as a function of time then follows by integrating this expression.
Self-catalyzed polyesterification
If no acid catalyst is added, the reaction will still proceed, because the acid can act as its own catalyst. The rate of condensation at any time t can then be derived from the rate of disappearance of -COOH groups:
−d[COOH]/dt = k[COOH]²[OH].
The second-order term in [COOH] arises from its use as a catalyst, and k is the rate constant. For a system with equivalent quantities of acid and glycol, the functional group concentration can be written simply as a single concentration c = [COOH] = [OH], so that
−dc/dt = kc³.
After integration and substitution from the Carothers equation (Xn = 1/(1 − p)), the final form is the following:
Xn² = 1/(1 − p)² = 1 + 2kc0²t.
For a self-catalyzed system, the number average degree of polymerization (Xn) grows proportionally with t^(1/2).
External catalyzed polyesterification
The uncatalyzed reaction is rather slow, and a high Xn is not readily attained. In the presence of a catalyst (present at essentially constant concentration, so that it can be absorbed into the rate constant), there is an acceleration of the rate, and the kinetic expression is altered to
−d[COOH]/dt = k′[COOH][OH],
which is kinetically first order in each functional group. Hence, with c = [COOH] = [OH],
−dc/dt = k′c²,
and integration gives finally
Xn = 1/(1 − p) = 1 + k′c0t.
For an externally catalyzed system, the number average degree of polymerization grows proportionally with t.
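To make the contrast between the two cases concrete, the short Python sketch below simply evaluates the integrated expressions given above. The function names, the initial functional group concentration, and the rate constants are arbitrary illustrative assumptions, not data for any particular monomer pair.

import math

def xn_self_catalyzed(c0, k, t):
    # Integrated third-order rate law: Xn^2 = 1 + 2*k*c0^2*t
    return math.sqrt(1.0 + 2.0 * k * c0**2 * t)

def xn_externally_catalyzed(c0, k_prime, t):
    # Integrated second-order rate law: Xn = 1 + k'*c0*t
    return 1.0 + k_prime * c0 * t

c0 = 5.0                   # mol/L of -COOH (= -OH) groups, assumed
k, k_prime = 1e-6, 1e-5    # rate constants in consistent units, assumed
for t in (0, 10_000, 100_000, 1_000_000):   # time in seconds
    print(t,
          round(xn_self_catalyzed(c0, k, t), 1),
          round(xn_externally_catalyzed(c0, k_prime, t), 1))

The square-root growth in the self-catalyzed case is the practical reason an external catalyst is normally used when high molecular weights are needed.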
Molecular weight distribution in linear polymerization
The product of a polymerization is a mixture of polymer molecules of different molecular weights. For theoretical and practical reasons it is of interest to discuss the distribution of molecular weights in a polymerization. The molecular weight distribution (MWD) had been derived by Flory by a statistical approach based on the concept of equal reactivity of functional groups.
Probability
Step-growth polymerization is a random process so we can use statistics to calculate the probability of finding a chain with x-structural units ("x-mer") as a function of time or conversion.
x AA + x BB → AA-(BB-AA)_(x−1)-BB
x AB → A-(B-A)_(x−1)-B
The probability that an 'A' functional group has reacted is the extent of reaction, p.
The probability of finding an 'A' group still unreacted is therefore 1 − p.
Combining the two, the probability of finding a chain with exactly x − 1 reacted A groups followed by one unreacted A group is
Px = p^(x−1) (1 − p),
where Px is the probability of finding a chain that is x units long and has an unreacted 'A'. As x increases, the probability decreases.
Number fraction distribution
The number fraction distribution is the fraction of x-mers in any system and equals the probability of finding such a chain in solution:
nx = Nx / N = p^(x−1) (1 − p),
where N is the total number of polymer molecules present in the reaction and Nx is the number of x-mers.
Weight fraction distribution
The weight fraction distribution is the fraction of x-mers in a system expressed as a mass fraction:
wx = x Nx Mo / (No Mo) = x Nx / No.
Notes:
Mo is the molar mass of the repeat unit,
No is the initial number of monomer molecules,
and N is the total number of polymer molecules (equal to the number of unreacted 'A' functional groups).
Substituting Nx = N nx and, from the Carothers equation, N = No(1 − p),
we can now obtain:
wx = x (1 − p)² p^(x−1).
PDI
The polydispersity index (PDI) is a measure of the distribution of molecular mass in a given polymer sample, defined as PDI = Mw/Mn.
For step-growth polymerization, the Carothers equation and the distributions above can be used to substitute and rearrange this formula into PDI = 1 + p.
Therefore, in step-growth polymerization, when p = 1, the PDI = 2.
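The distribution and PDI expressions above can be checked with a few lines of code. The Python sketch below simply evaluates the most probable (Flory) distribution for an assumed extent of reaction; it is an illustration of the formulas, not a simulation of a real polymerization, and the chosen p is arbitrary.

def flory_distribution(p, x_max):
    # Number and weight fractions of x-mers for the most probable distribution
    n_frac = [(1 - p) * p**(x - 1) for x in range(1, x_max + 1)]
    w_frac = [x * (1 - p)**2 * p**(x - 1) for x in range(1, x_max + 1)]
    return n_frac, w_frac

p = 0.95                    # assumed extent of reaction
xn = 1 / (1 - p)            # number-average degree of polymerization = 20
xw = (1 + p) / (1 - p)      # weight-average degree of polymerization = 39
n_frac, w_frac = flory_distribution(p, x_max=5)
print([round(v, 4) for v in n_frac])
print([round(v, 4) for v in w_frac])
print("PDI =", xw / xn)     # equals 1 + p = 1.95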
Molecular weight control in linear polymerization
Need for stoichiometric control
There are two important aspects with regard to the control of molecular weight in polymerization. In the synthesis of polymers, one is usually interested in obtaining a product of very specific molecular weight, since the properties of the polymer will usually be highly dependent on molecular weight. Molecular weights higher or lower than the desired weight are equally undesirable. Since the degree of polymerization is a function of reaction time, the desired molecular weight can be obtained by quenching the reaction at the appropriate time. However, the polymer obtained in this manner is unstable in that it leads to changes in molecular weight because the ends of the polymer molecule contain functional groups that can react further with each other.
This situation is avoided by adjusting the concentrations of the two monomers so that they are slightly nonstoichiometric. One of the reactants is present in slight excess. The polymerization then proceeds to a point at which one reactant is completely used up and all the chain ends possess the same functional group of the group that is in excess. Further polymerization is not possible, and the polymer is stable to subsequent molecular weight changes.
Another method of achieving the desired molecular weight is by addition of a small amount of monofunctional monomer, a monomer with only one functional group. The monofunctional monomer, often referred to as a chain stopper, controls and limits the polymerization of bifunctional monomers because the growing polymer yields chain ends devoid of functional groups and therefore incapable of further reaction.
Quantitative aspects
To properly control the polymer molecular weight, the stoichiometric imbalance of the bifunctional monomer or the monofunctional monomer must be precisely adjusted. If the nonstoichiometric imbalance is too large, the polymer molecular weight will be too low. It is important to understand the quantitative effect of the stoichiometric imbalance of reactants on the molecular weight. Also, this is necessary in order to know the quantitative effect of any reactive impurities that may be present in the reaction mixture either initially or that are formed by undesirable side reactions. Impurities with A or B functional groups may drastically lower the polymer molecular weight unless their presence is quantitatively taken into account.
More usefully, a precisely controlled stoichiometric imbalance of the reactants in the mixture can provide the desired result. For example, an excess of diamine over an acid chloride would eventually produce a polyamide with two amine end groups incapable of further growth when the acid chloride was totally consumed. This can be expressed in an extension of the Carothers equation as
Xn = (1 + r) / (1 + r − 2rp),
where r is the ratio of the number of molecules of the reactants,
r = NAA / NBB,
where NBB is the molecule in excess (so that r ≤ 1).
The equation above can also be used for a monofunctional additive, in which case
r = NAA / (NBB + 2NB),
where NB is the number of monofunctional molecules added. The coefficient of 2 in front of NB is required since one B molecule has the same quantitative effect as one excess B-B molecule.
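The practical consequence, namely how strongly a small excess of one monomer caps the attainable molecular weight, can be seen by evaluating the extended Carothers equation above for a few assumed values of r and p, as in the Python sketch below; the chosen numbers are purely illustrative.

def xn_imbalance(r, p):
    # Extended Carothers equation for stoichiometric ratio r (<= 1) and conversion p
    return (1 + r) / (1 + r - 2 * r * p)

p = 0.999                          # near-complete conversion of the limiting groups
for r in (1.0, 0.999, 0.99, 0.95):
    print(r, round(xn_imbalance(r, p), 1))
# At complete conversion (p = 1) the limit is Xn = (1 + r)/(1 - r),
# so even a 1% excess of one monomer (r = 0.99) caps Xn near 199.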
Multi-chain polymerization
A monomer with functionality 3 has 3 functional groups which participate in the polymerization. This will introduce branching in a polymer and may ultimately form a cross-linked macrostructure. The point at which this three-dimensional network is formed is known as the gel point, signaled by an abrupt change in viscosity.
A more general functionality factor fav is defined for multi-chain polymerization as the average number of functional groups present per monomer unit. For a system containing N0 molecules initially and equivalent numbers of the two functional groups A and B, the total number of functional groups is N0fav.
The modified Carothers equation is then
Xn = 2 / (2 − p fav),
where p equals
p = (2/fav)(1 − 1/Xn).
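A common use of this expression is to estimate the conversion at which gelation occurs, i.e. where the number-average degree of polymerization diverges. The Python sketch below applies the modified Carothers equation above to an assumed triol/diacid mixture purely as an illustration; the function names and quantities are hypothetical.

def average_functionality(components):
    # components: list of (moles, functionality) pairs
    total_groups = sum(n * f for n, f in components)
    total_moles = sum(n for n, _ in components)
    return total_groups / total_moles

def carothers_gel_point(f_av):
    # Conversion at which Xn = 2/(2 - p*f_av) diverges
    return 2.0 / f_av

# Assumed example: 2 mol of a triol (f = 3) with 3 mol of a diacid (f = 2)
f_av = average_functionality([(2, 3), (3, 2)])             # = 2.4
print(round(f_av, 2), round(carothers_gel_point(f_av), 2))  # 2.4, 0.83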
Advances in step-growth polymers
The driving force in designing new polymers is the prospect of replacing other materials of construction, especially metals, by using lightweight and heat-resistant polymers. The advantages of lightweight polymers include high strength and solvent and chemical resistance, contributing to a variety of potential uses, such as electrical and engine parts on automotive and aircraft components, coatings on cookware, and coatings and circuit boards for electronic and microelectronic devices. Polymer chains based on aromatic rings are desirable due to high bond strengths and rigid polymer chains. High molecular weight and crosslinking are desirable for the same reason. Strong dipole-dipole interactions, hydrogen bonding and crystallinity also improve heat resistance. To obtain the desired mechanical strength, sufficiently high molecular weights are necessary; however, decreased solubility is a problem. One approach to solving this problem is to introduce flexibilizing linkages, such as isopropylidene or C=O groups, into the rigid polymer chain by using an appropriate monomer or comonomer. Another approach involves the synthesis of reactive telechelic oligomers containing functional end groups capable of reacting with each other; polymerization of the oligomer gives a higher molecular weight, a process referred to as chain extension.
Aromatic polyether
The oxidative coupling polymerization of many 2,6-disubstituted phenols, using a catalytic complex of a cuprous salt and an amine, forms aromatic polyethers, commercially referred to as poly(p-phenylene oxide) or PPO. Neat PPO has little commercial use due to its high melt viscosity; its available products are blends of PPO with high-impact polystyrene (HIPS).
Polyethersulfone
Polyethersulfone (PES) is also referred to as polyetherketone, polysulfone. It is synthesized by nucleophilic aromatic substitution between aromatic dihalides and bisphenolate salts. Polyethersulfones are partially crystalline, highly resistant to a wide range of aqueous and organic environment. They are rated for continuous service at temperatures of 240-280 °C. The polyketones are finding applications in areas like automotive, aerospace, electrical-electronic cable insulation.
Aromatic polysulfides
Poly(p-phenylene sulfide) (PPS) is synthesized by the reaction of sodium sulfide with p-dichlorobenzene in a polar solvent such as 1-methyl-2-pyrrolidinone (NMP). It is inherently flame-resistant and stable toward organic and aqueous conditions; however, it is somewhat susceptible to oxidants. Applications of PPS include automotive, microwave oven component, coating for cookware when blend with fluorocarbon polymers and protective coatings for valves, pipes, electromotive cells, etc.
Aromatic polyimide
Aromatic polyimides are synthesized by the reaction of dianhydrides with diamines, for example, pyromellitic anhydride with p-phenylenediamine. It can also be accomplished using diisocyanates in place of diamines. Solubility considerations sometimes suggest use of the half acid-half ester of the dianhydride, instead of the dianhydride itself. Polymerization is accomplished by a two-stage process due to the insolubility of polyimides. The first stage forms a soluble and fusible high-molecular-weight poly(amic acid) in a polar aprotic solvent such as NMP or N,N-dimethylacetamide. The poly(amic acid) can then be processed into the desired physical form of the final polymer product (e.g., film, fiber, laminate, coating) which is insoluble and infusible.
Telechelic oligomer approach
The telechelic oligomer approach applies the usual polymerization chemistry except that a monofunctional reactant is included to stop the reaction at the oligomer stage, generally in the 50-3000 molecular weight range. The monofunctional reactant not only limits polymerization but end-caps the oligomer with functional groups capable of subsequent reaction to achieve curing of the oligomer. Functional groups like alkyne, norbornene, maleimide, nitrile, and cyanate have been used for this purpose. Maleimide and norbornene end-capped oligomers can be cured by heating. Alkyne, nitrile, and cyanate end-capped oligomers can undergo cyclotrimerization yielding aromatic structures.
See also
Conducting polymer
Fire-safe polymers
Liquid crystal polymer
Random graph theory of gelation
Thermosetting plastic
References
External links
Crosslinking
Polymerization reactions | Step-growth polymerization | Chemistry,Materials_science | 4,121 |
3,986,852 | https://en.wikipedia.org/wiki/Logical%20truth | Logical truth is one of the most fundamental concepts in logic. Broadly speaking, a logical truth is a statement which is true regardless of the truth or falsity of its constituent propositions. In other words, a logical truth is a statement which is not only true, but one which is true under all interpretations of its logical components (other than its logical constants). Thus, logical truths such as "if p, then p" can be considered tautologies. Logical truths are thought to be the simplest case of statements which are analytically true (or in other words, true by definition). All of philosophical logic can be thought of as providing accounts of the nature of logical truth, as well as logical consequence.
Logical truths are generally considered to be necessarily true. This is to say that they are such that no situation could arise in which they could fail to be true. The view that logical statements are necessarily true is sometimes treated as equivalent to saying that logical truths are true in all possible worlds. However, the question of which statements are necessarily true remains the subject of continued debate.
Treating logical truths, analytic truths, and necessary truths as equivalent, logical truths can be contrasted with facts (which can also be called contingent claims or synthetic claims). Contingent truths are true in this world, but could have turned out otherwise (in other words, they are false in at least one possible world). Logically true propositions such as "If p and q, then p" and "All married people are married" are logical truths because they are true due to their internal structure and not because of any facts of the world (whereas "All married people are happy", even if it were true, could not be true solely in virtue of its logical structure).
Rationalist philosophers have suggested that the existence of logical truths cannot be explained by empiricism, because they hold that it is impossible to account for our knowledge of logical truths on empiricist grounds. Empiricists commonly respond to this objection by arguing that logical truths (which they usually deem to be mere tautologies), are analytic and thus do not purport to describe the world. The latter view was notably defended by the logical positivists in the early 20th century.
Logical truths and analytic truths
Logical truths, being analytic statements, do not contain any information about any matters of fact. Other than logical truths, there is also a second class of analytic statements, typified by "no bachelor is married". The characteristic of such a statement is that it can be turned into a logical truth by substituting synonyms for synonyms salva veritate. "No bachelor is married" can be turned into "no unmarried man is married" by substituting "unmarried man" for its synonym "bachelor".
In his essay Two Dogmas of Empiricism, the philosopher W. V. O. Quine called into question the distinction between analytic and synthetic statements. It was this second class of analytic statements that led him to note that the concept of analyticity itself stands in need of clarification, because it seems to depend on the concept of synonymy, which itself requires clarification. In his conclusion, Quine rejects the view that logical truths are necessary truths. Instead he posits that the truth-value of any statement can be changed, including that of logical truths, given a re-evaluation of the truth-values of every other statement in one's complete theory.
Truth values and tautologies
Considering different interpretations of the same statement leads to the notion of truth value. On the simplest approach, a statement may be "true" in one case but "false" in another. In one sense of the term, a tautology is any formula or proposition which turns out to be true under every possible interpretation of its terms (also called a valuation or assignment, depending on the context). This is synonymous with logical truth.
However, the term tautology is also commonly used to refer to what could more specifically be called truth-functional tautologies. Whereas a tautology or logical truth in general is true solely because of the logical terms it contains (e.g. "every", "some", and "is"), a truth-functional tautology is true specifically because of the logical connectives it contains (e.g. "or", "and", and "nor"). Not all logical truths are tautologies of this kind.
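Because a truth-functional tautology must come out true under every assignment of truth values to its propositional variables, it can be checked mechanically by enumerating all such assignments. The short Python sketch below is purely illustrative (the helper is_tautology and the lambda encodings are ad hoc, not part of any standard library); it confirms that "if p and q, then p" is a truth-functional tautology while "p or q" is not:

from itertools import product

def is_tautology(formula, variables):
    # A formula is a truth-functional tautology when it evaluates to True
    # under every assignment of truth values to its variables.
    for values in product([True, False], repeat=len(variables)):
        valuation = dict(zip(variables, values))
        if not formula(valuation):
            return False
    return True

# "(p and q) -> p" is true under all four valuations, hence a tautology.
print(is_tautology(lambda v: not (v["p"] and v["q"]) or v["p"], ["p", "q"]))  # True
# "p or q" is false when p and q are both false, hence not a tautology.
print(is_tautology(lambda v: v["p"] or v["q"], ["p", "q"]))  # False

Such brute-force enumeration grows exponentially with the number of variables, so it is practical only for small formulas.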
Logical truth and logical constants
Logical constants, including logical connectives and quantifiers, can all be reduced conceptually to logical truth. For instance, two or more statements are logically incompatible if, and only if, their conjunction is logically false. One statement logically implies another when it is logically incompatible with the negation of the other. A statement is logically true if, and only if, its opposite is logically false. The opposite statements must contradict one another. In this way all logical connectives can be expressed in terms of preserving logical truth. The logical form of a sentence is determined by its semantic or syntactic structure and by the placement of logical constants. Logical constants determine whether a statement is a logical truth when they are combined with a language that limits its meaning. Therefore, until it is determined how to distinguish all logical constants regardless of their language, it is impossible to know the complete truth of a statement or argument.
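Stated symbolically (an illustrative restatement of the reductions just described, not notation used in the source text):

\[
\begin{aligned}
A \text{ and } B \text{ are logically incompatible} &\iff A \land B \text{ is logically false},\\
A \text{ logically implies } B &\iff A \land \neg B \text{ is logically false},\\
A \text{ is logically true} &\iff \neg A \text{ is logically false}.
\end{aligned}
\]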
Logical truth and rules of inference
The concept of logical truth is closely connected to the concept of a rule of inference.
Logical truth and logical positivism
Logical positivism was a movement in the early 20th century that tried to reduce the reasoning processes of science to pure logic. Among other things, the logical positivists claimed that any proposition that is not empirically verifiable is neither true nor false, but nonsense.
Non-classical logics
Non-classical logic is the name given to formal systems which differ in a significant way from standard logical systems such as propositional and predicate logic. There are several ways in which this is done, including by way of extensions, deviations, and variations. The aim of these departures is to make it possible to construct different models of logical consequence and logical truth.
See also
Contradiction
False (logic)
Logical truth table, a mathematical table used in logic
Satisfiability
Tautology (logic) (for symbolism of logical truth)
Theorem
Validity
References
External links
Philosophical logic
Concepts in logic
Truth
Philosophy of logic
ca:Valor vertader | Logical truth | Mathematics | 1,340 |
32,526,740 | https://en.wikipedia.org/wiki/Promin | Promin, or sodium glucosulfone is a sulfone drug that was investigated for the treatment of malaria, tuberculosis and leprosy. It is broken down in the body to dapsone, which is the therapeutic form.
History
The first synthesis of Promin is sometimes credited to Edward Tillitson and Benjamin F. Tullar of Parke, Davis, & Co. pharmaceuticals in August 1937. However, although Parke-Davis did in fact synthesize the compound, it seems certain that they were not the first; in cooperation with J. Wittmann, Emil Fromm synthesized various sulfone compounds in 1908, including both dapsone and some of its derivatives, such as promin. Fromm and Wittmann, however, were engaged in chemical rather than medical work, and no one investigated the medical value of such compounds until some decades afterwards. The medical evaluation of sulfones was inspired by the discovery of the unprecedented value of synthetic compounds such as sulfonamides in treating microbial diseases. Early investigations yielded disappointing results, but promin and dapsone subsequently proved valuable in treating mycobacterial diseases. They were the first treatments to show more than a glimmer of hope of controlling such infections.
Initially, Promin appeared to be safer than dapsone, so it was further investigated at the Mayo Clinic as a treatment for tuberculosis in a guinea pig model. Because it was already known that leprosy and tuberculosis were both caused by mycobacteria (Mycobacterium leprae and Mycobacterium tuberculosis, respectively), Guy Henry Faget of the National Leprosarium in Carville, Louisiana, requested information about the drug from Parke-Davis. They, in turn, informed him of the work being done on leprosy in rats by Edmund Cowdry at the Washington University School of Medicine. His successful results, published in 1941, convinced Faget to begin human studies with both promin and sulfoxone sodium, a related compound from Abbott Laboratories. The initial trials were conducted on six volunteers and were then expanded and replicated in different locations. Despite severe side effects that caused the initial tests to be suspended temporarily, the drug was shown to be effective. This breakthrough was reported worldwide and led to a reduction in the stigma attached to leprosy and, in turn, to better treatment of patients, who at the time were still referred to as "inmates" and forbidden from using public transport.
Pharmacology
Promin is heat-stable and water-soluble, and can therefore be heat-sterilized. It can be injected intravenously, and is available in ampules.
Beyond its solubility, however, promin was later found to have no real advantages over the simpler compound dapsone (which is administered in tablet form). Promin and other sulfones also cannot be used as substitutes for dapsone when intolerance develops, since such intolerance is a general reaction to sulfones rather than one specific to dapsone.
Today, the drugs of choice for treating leprosy are dapsone, rifampicin and clofazimine.
References
Anilines
Benzosulfones
Dihydropteroate synthetase inhibitors
Antileprotic drugs
Prodrugs | Promin | Chemistry | 682 |