Columns: id (int64, 39 – 79M) · url (string, lengths 31–227) · text (string, lengths 6–334k) · source (string, lengths 1–150) · categories (list, lengths 1–6) · token_count (int64, 3 – 71.8k) · subcategories (list, lengths 0–30)
11,866,408
https://en.wikipedia.org/wiki/Cyclic%20executive
A cyclic executive is an alternative to a real-time operating system. It is a form of cooperative multitasking, in which there is only one task. The sole task is typically realized as an infinite loop in main(), e.g. in C. The basic scheme is to cycle through a repeating sequence of activities at a set frequency (also known as a time-triggered cyclic executive). For example, consider an embedded system designed to monitor a temperature sensor and update an LCD display. The LCD may need to be written twenty times a second (i.e., every 50 ms). If the temperature sensor must be read every 100 ms for other reasons, we might construct a loop of the following appearance:

int main(void)
{
    while (1)
    {
        // This loop is designed to take 100 ms, meaning
        // all steps add up to 100 ms.
        // Since this is demo code and we don't know how long
        // tempRead or lcdWrite take to execute, we assume
        // they take zero time.
        // As a result, the delays are responsible for the task scheduling / timing.

        // Read temp once per cycle (every 100 ms)
        currTemp = tempRead();

        // Write to LCD twice per cycle (every 50 ms)
        lcdWrite(currTemp);
        delay(50);
        lcdWrite(currTemp);
        delay(50);

        // Now 100 ms (delay(50) + delay(50) + tempRead + lcdWrite + lcdWrite)
        // has passed, so we repeat the cycle.
    }
}

The outer 100 ms cycle is called the major cycle. In this case, there is also an inner minor cycle of 50 ms. In this first example the distinction between the outer and inner cycles is not obvious. We can use a counting mechanism to clarify the major and minor cycles:

int main(void)
{
    unsigned int i = 0;

    while (1)
    {
        // This loop is designed to take 50 ms.
        // Since this is demo code and we don't know how long
        // tempRead or lcdWrite take to execute, we assume
        // they take zero time.
        // Since we only want tempRead to execute every 100 ms, we use
        // an if statement to check whether a counter is odd or even,
        // and decide whether to execute tempRead.

        // Read temp every other cycle (every 100 ms)
        if ((i % 2) == 0)
        {
            currTemp = tempRead();
        }

        // Write to LCD once per cycle (every 50 ms)
        lcdWrite(currTemp);
        delay(50);
        i++;

        // Now 50 ms has passed so we repeat the cycle.
    }
}

See also: Arduino (a popular example of this paradigm), Event loop, Preemption (computing). References. Operating system technology. Concurrent computing
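For larger systems, the same idea is often expressed in a table-driven form, where each minor-cycle slot lists the activities to run. The sketch below is illustrative only: it assumes the same hypothetical helpers as the examples above (tempRead, lcdWrite, delay), treats each activity's execution time as negligible, and is not part of the original article text.

#include <stddef.h>

#define MINOR_MS 50   /* minor cycle length in ms */
#define SLOTS     2   /* minor cycles per major cycle (100 ms / 50 ms) */

/* Hypothetical hardware helpers, as assumed in the examples above. */
extern int  tempRead(void);
extern void lcdWrite(int value);
extern void delay(unsigned int ms);

static int currTemp;

typedef void (*task_fn)(void);

static void taskTemp(void) { currTemp = tempRead(); }   /* runs every 100 ms */
static void taskLcd(void)  { lcdWrite(currTemp); }      /* runs every 50 ms  */

/* Schedule table: the tasks to run in each 50 ms slot of the 100 ms major cycle. */
static task_fn schedule[SLOTS][2] = {
    { taskTemp, taskLcd },   /* slot 0: read temperature, then update LCD */
    { taskLcd,  NULL    },   /* slot 1: update LCD only */
};

int main(void)
{
    unsigned int slot = 0;

    while (1)
    {
        for (size_t j = 0; j < 2; j++)
        {
            if (schedule[slot][j] != NULL)
            {
                schedule[slot][j]();   /* run each activity scheduled in this slot */
            }
        }
        delay(MINOR_MS);               /* pad the slot out to the 50 ms minor cycle */
        slot = (slot + 1) % SLOTS;     /* advance to the next minor cycle */
    }
}

The table makes the major/minor-cycle structure explicit; adding or re-timing an activity means editing the table rather than the loop body.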
Cyclic executive
[ "Technology" ]
616
[ "Computing platforms", "Concurrent computing", "IT infrastructure" ]
11,867,217
https://en.wikipedia.org/wiki/ErbB
The ErbB family of proteins contains four receptor tyrosine kinases, structurally related to the epidermal growth factor receptor (EGFR), its first discovered member. In humans, the family includes Her1 (EGFR, ErbB1), Her2 (ErbB2), Her3 (ErbB3), and Her4 (ErbB4). The gene symbol, ErbB, is derived from the name of a viral oncogene to which these receptors are homologous: erythroblastic leukemia viral oncogene. Insufficient ErbB signaling in humans is associated with the development of neurodegenerative diseases, such as multiple sclerosis and Alzheimer's disease, while excessive ErbB signaling is associated with the development of a wide variety of types of solid tumor. ErbB protein family signaling is important for development. For example, ErbB-2 and ErbB-4 knockout mice die at midgestation with deficient cardiac function associated with a lack of myocardial ventricular trabeculation, and they display abnormal development of the peripheral nervous system. ErbB-3 receptor mutant mice have less severe defects in the heart and thus are able to survive longer through embryogenesis. Lack of Schwann cell maturation leads to degeneration of motor and sensory neurons. ErbB-1 and ErbB-2 are found in many human cancers, and their excessive signaling may be a critical factor in the development and malignancy of these tumors. Family members The ErbB protein family consists of 4 members: ErbB-1, also named epidermal growth factor receptor (EGFR); ErbB-2, also named HER2 in humans and neu in rodents; ErbB-3, also named HER3; and ErbB-4, also named HER4. v-ErbBs are homologous to EGFR, but lack sequences within the ligand binding ectodomain. Structure All four ErbB receptor family members are nearly identical in structure, each being a single-chain modular glycoprotein. This structure is made up of an extracellular region (ectodomain or ligand binding region) that contains approximately 620 amino acids, a single membrane-spanning region containing approximately 23 residues, and an intracellular cytoplasmic tyrosine kinase domain containing up to approximately 540 residues. The extracellular region of each family member is made up of 4 subdomains, L1, CR1, L2, and CR2, where "L" signifies a leucine-rich repeat domain and "CR" a cysteine-rich region; the CR domains contain disulfide modules, 8 in the CR1 domain and 7 in the CR2 domain. These subdomains are shown in blue (L1), green (CR1), yellow (L2), and red (CR2) in the figure below. These subdomains are also referred to as domains I-IV, respectively. The intracellular/cytoplasmic region of the ErbB receptor consists mainly of three subdomains: a juxtamembrane segment of approximately 40 residues, a kinase domain containing approximately 260 residues, and a C-terminal domain of 220-350 amino acid residues that becomes activated via phosphorylation of its tyrosine residues and mediates interactions with other ErbB proteins and downstream signaling molecules. The figure below shows the three-dimensional structure of the ErbB family proteins, using the PDB files 1NQL (ErbB-1), 1S78 (ErbB-2), 1M6B (ErbB-3) and 2AHX (ErbB-4). ErbB and Kinase activation The four members of the ErbB protein family are capable of forming homodimers, heterodimers, and possibly higher-order oligomers upon activation by a subset of potential growth factor ligands. There are 11 growth factors that activate ErbB receptors.
The ability ('+') or inability ('-') of each growth factor to activate each of the ErbB receptors is shown in the table below. Dimerization occurs after a ligand binds to the extracellular domain of an ErbB monomer; the resulting monomer-monomer interaction activates the activation loop in the kinase domain, which in turn drives transphosphorylation of specific tyrosines in the kinase domain of the ErbB intracellular part. It is a complex process due to the domain specificity and nature of the members of the ErbB family. Notably, ErbB1 and ErbB4 are the two most studied and the only fully intact members of the ErbB family, forming functional intracellular tyrosine kinases. ErbB2 has no known binding ligand and ErbB3 lacks an active kinase domain, which makes this pair prone to forming heterodimers and sharing each other's active domains to accomplish transphosphorylation of the tyrosine kinases. The tyrosine residues that are mainly trans- or auto-phosphorylated are Y992, Y1045, Y1068, Y1148, and Y1173 in the tail region of the ErbB monomer. Activation of the kinase domain in the ErbB dimer requires an asymmetric kinase-domain dimer of the two monomers, with an intact asymmetric (N-lobe to C-lobe) interface between the adjoining monomers. Activation of the tyrosine kinase domain leads to the activation of a whole range of downstream signaling pathways, such as PLCγ, ERK 1/2, p38 MAPK, and PI3-K/Akt, within the cell. When not bound to a ligand, the extracellular regions of ErbB1, ErbB3, and ErbB4 are found in a tethered conformation in which a 10-amino-acid-long dimerization arm is unable to mediate monomer-monomer interactions. In contrast, in ligand-bound ErbB-1 and unliganded ErbB-2, the dimerization arm becomes untethered and exposed at the receptor surface, making monomer-monomer interactions and dimerization possible. The consequence of ectodomain dimerization is the positioning of two cytoplasmic domains such that transphosphorylation of specific tyrosine, serine, and threonine amino acids can occur within the cytoplasmic domain of each ErbB. At least 10 specific tyrosines, 7 serines, and 2 threonines have been identified within the cytoplasmic domain of ErbB-1 that may become phosphorylated, and in some cases de-phosphorylated (e.g., Tyr 992), upon receptor dimerization. Although a number of potential phosphorylation sites exist, upon dimerization only one, or much more rarely two, of these sites are phosphorylated at any one time. Role in cancer Phosphorylated tyrosine residues act as binding sites for intracellular signal activators such as Ras. The Ras-Raf-MAPK pathway is a major signalling route for the ErbB family, as is the PI3-K/AKT pathway, both of which lead to increased cell proliferation and inhibition of apoptosis. Genetic Ras mutations are infrequent in breast cancer but Ras may be pathologically activated in breast cancer by overexpression of ErbB receptors. Activation of the receptor tyrosine kinases generates a signaling cascade where the Ras GTPase proteins are activated to a GTP-bound state. The RAS pathway can couple with the mitogen-activated protein kinase pathway or a number of other possible effectors. The PI3K/Akt pathway is dysregulated in many human tumors because of mutations altering proteins in the pathway. In relation to breast tumors, somatic activating mutations in Akt and the p110α subunit of PI3K have been detected in 3–5% and 20–25% of primary breast tumors, respectively.
Many breast tumors also have lower levels of PTEN, which is a lipid phosphatase that dephosphorylates phosphatidylinositol (3,4,5)-trisphosphate, thereby reversing the action of PI3K. EGFR has been found to be overexpressed in many cancers such as gliomas and non-small-cell lung carcinoma. Drugs such as panitumumab, cetuximab, gefitinib, erlotinib, afatinib, and lapatinib are used to inhibit it. Cetuximab is a chimeric human-murine immunoglobulin G1 mAb that binds EGFR with high affinity and promotes EGFR internalization. It has recently been shown that acquired resistance to cetuximab and gefitinib can be linked to hyperactivity of ErbB-3. This is linked to an acquired overexpression of c-MET, which phosphorylates ErbB-3, which in turn activates the AKT pathway. Panitumumab is a human mAb with high EGFR affinity that blocks ligand binding and induces EGFR internalization. Panitumumab efficacy has been tested in clinical trials in a variety of advanced cancers, including renal carcinoma and metastatic colorectal cancer. ErbB2 overexpression can occur in breast, ovarian, bladder, and non-small-cell lung carcinoma, as well as several other tumor types. Trastuzumab (Herceptin) selectively binds to the extracellular domain of ErbB-2 receptors and inhibits downstream signal cascades. This leads to decreased proliferation of tumor cells. Trastuzumab targets tumor cells and causes apoptosis through the immune system by promoting antibody-dependent cellular cytotoxicity. Two-thirds of women respond to trastuzumab. Although Herceptin works well in most breast cancer cases, it has not yet been elucidated why some HER2-positive breast cancers do not respond well. Research suggests that estrogen receptor-positive breast cancers with a low FISH test ratio are less likely to respond to this drug. ErbB expression has also been linked to cutaneous squamous cell carcinoma (cSCC) development, where over-expression of these receptors has been found in cSCC tumors. In a study conducted by Cañueto et al. (2017), ErbB over-expression in tumors was linked to lymph node and metastasis stage progression in cSCC. References Tyrosine kinase receptors Oncogenes Human genes
ErbB
[ "Chemistry" ]
2,284
[ "Tyrosine kinase receptors", "Signal transduction" ]
11,868,019
https://en.wikipedia.org/wiki/Algebraic%20Logic%20Functional%20programming%20language
Algebraic Logic Functional (ALF) programming language combines functional and logic programming techniques. Its foundation is Horn clause logic with equality, which consists of predicates and Horn clauses for logic programming, and functions and equations for functional programming. ALF was designed to be a genuine integration of both programming paradigms, and thus any functional expression can be used in a goal literal and arbitrary predicates can occur in conditions of equations. ALF's operational semantics is based on the resolution rule to solve literals and narrowing to evaluate functional expressions. To reduce the number of possible narrowing steps, a leftmost-innermost basic narrowing strategy is used which, it is claimed, can be efficiently implemented. Terms are simplified by rewriting before a narrowing step is applied and equations are rejected if the two sides have different constructors at the top. Rewriting and rejection are supposed to result in a large reduction of the search tree and produce an operational semantics that is more efficient than Prolog's resolution strategy. Similarly to Prolog, ALF uses a backtracking strategy corresponding to a depth-first search in the derivation tree. The ALF system was designed to be an efficient implementation of the combination of resolution, narrowing, rewriting, and rejection. ALF programs are compiled into instructions of an abstract machine, which is based on the Warren Abstract Machine (WAM) with several extensions to implement narrowing and rewriting. In the current ALF implementation, programs of this abstract machine are executed by an emulator written in C. In the Carnegie Mellon University Artificial Intelligence Repository, ALF is included as an AI programming language, more specifically as a functional/logic programming language under the Prolog implementations. A user manual describing the language and the use of the system is available. The ALF System runs on Unix and is available under a custom proprietary software license that grants the right to use for "evaluation, research and teaching purposes" but not commercial or military use. References External links Publications of Michael Hanus, including many articles relevant to the design and theory of ALF Information about getting and installing the ALF system Functional logic programming languages Programming languages created in the 1990s
Algebraic Logic Functional programming language
[ "Technology" ]
420
[ "Computing stubs", "Computer science", "Computer science stubs" ]
11,869,020
https://en.wikipedia.org/wiki/European%20Chemicals%20Bureau
The European Chemicals Bureau (ECB) was the focal point for the data and assessment procedure on dangerous chemicals within the European Union (EU). The ECB was located in Ispra, Italy, within the Joint Research Centre (JRC) of the European Commission. In 2008 the ECB completed its mandate. Some of its activities were taken over by the European Chemicals Agency (ECHA); others remained within the Joint Research Centre. The history of the ECB has been published as a JRC technical report. Mission The mission of the former European Chemicals Bureau (ECB) was to provide scientific and technical support for the conception, development, implementation and monitoring of EU policies on chemicals and consumer products. It co-ordinated the EU risk assessment programmes that covered the risks posed by existing substances and new substances to workers, consumers and the environment. It also developed guidance documents and tools in support of the REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals) Regulation, the Testing Methods Regulation, the Globally Harmonised System of Classification and Labelling of Chemicals (GHS), the notification of new substances, the information exchange on import and export of dangerous substances, the development and harmonisation of testing methods and the authorisation of biocides. Biocides The Biocides Work Area provided scientific and technical support for the approval of active substances in biocidal products as laid down in Directive 98/8/EC (Biocidal Products Directive, BPD) concerning the placing of biocidal products on the market. Currently, these tasks are dealt with by the biocides group within the IHCP. From 2013, coinciding with the coming-into-force of a new Biocidal Products Regulation (BPR), the European Chemicals Agency (ECHA) took over the biocides programme. Existing Chemicals The "Existing Chemicals" Work Area provided technical and scientific support to the European Commission concerning the data collection, priority setting, and risk assessment steps of Council Regulation (EEC) 793/93. New Chemicals The "New Chemicals" Work Area included: Co-ordination of the EU notification scheme and risk assessment for new chemical substances (Directive 67/548/EEC including Annexes VII and VIII, Directive 93/67/EEC). Management of the New Chemicals Database (NCD) maintained in a security area with authorised access only. Preparation of the European List of Notified Chemical Substances (ELINCS). Supervision of Technical and Scientific Meetings (TSMs) and Working Group Meetings allowing Member State Competent Authorities to discuss issues arising from the implementation of the Directives. ESIS The European chemical Substances Information System (ESIS) is an IT system that provides information on chemicals in different lists.
The ESIS database includes the following elements (please note that since 2008, the databases marked with ++ have been taken over by the European Chemicals Agency (ECHA), which will also ensure further updates): EINECS (European Inventory of Existing Commercial chemical Substances); ++ELINCS (European List of Notified Chemical Substances); NLP (No-Longer Polymers); BPD (Biocidal Products Directive) active substances; ++PBT (Persistent, bioaccumulative, and toxic) or vPvB (very Persistent and very Bioaccumulative); ++CLP/GHS (Classification, Labelling and Packaging of substances and mixtures), CLP implements the Globally harmonised System (GHS); ++HPVCs (High Production Volume Chemicals) and LPVCs (Low Production Volume Chemicals), including EU Producers/Importers lists; ++IUCLID Chemical Data Sheets, OECD-IUCLID Export Files, EUSES Export Files; ++Priority Lists, Risk Assessment process and tracking system in relation to Council Regulation (EEC) 793/93 also known as Existing Substances Regulation (ESR). See also REACH IUCLID BPD References External links Classification & Labelling on the ECHA web site Test Methods on the ECHA web site ESIS (European chemical Substances Information System) Environmental law in the European Union Cheminformatics European Union and science and technology Chemical safety Regulation of chemicals in the European Union
European Chemicals Bureau
[ "Chemistry" ]
864
[ "Chemical accident", "Regulation of chemicals in the European Union", "Regulation of chemicals", "Computational chemistry", "nan", "Cheminformatics", "Chemical safety" ]
1,551,777
https://en.wikipedia.org/wiki/Chemical%20space
Chemical space is a concept in cheminformatics referring to the property space spanned by all possible molecules and chemical compounds adhering to a given set of construction principles and boundary conditions. It contains millions of compounds which are readily accessible and available to researchers. It is a library used in the method of molecular docking. Theoretical spaces A chemical space often referred to in cheminformatics is that of potential pharmacologically active molecules. Its size is estimated to be on the order of 10^60 molecules. There are no rigorous methods for determining the precise size of this space. The assumptions used for estimating the number of potential pharmacologically active molecules, however, use the Lipinski rules, in particular the molecular weight limit of 500. The estimate also restricts the chemical elements used to carbon, hydrogen, oxygen, nitrogen and sulfur. It further makes the assumption of a maximum of 30 atoms to stay below 500 daltons, allows for branching and a maximum of 4 rings, and arrives at an estimate of 10^63. This number is often misquoted in subsequent publications to be the estimated size of the whole organic chemistry space, which would be much larger if the halogens and other elements were included. In addition to the drug-like space and lead-like space that are, in part, defined by Lipinski's rule of five, the concept of known drug space (KDS), which is defined by the molecular descriptors of marketed drugs, has also been introduced. KDS can be used to help predict the boundaries of chemical spaces for drug development by comparing the structure of the molecules that are undergoing design and synthesis to the molecular descriptor parameters that are defined by the KDS. Empirical spaces As of October 2024, 219 million molecules had been assigned a Chemical Abstracts Service (CAS) Registry Number. ChEMBL Database version 33 records biological activities for 2,431,025 distinct molecules. Chemical libraries used for laboratory-based screening for compounds with desired properties are examples of real-world chemical libraries of small size (a few hundred to hundreds of thousands of molecules). Generation Systematic exploration of chemical space is possible by creating in silico databases of virtual molecules, which can be visualized by projecting the multidimensional property space of molecules in lower dimensions. Generation of chemical spaces may involve creating stoichiometric combinations of electrons and atomic nuclei to yield all possible topology isomers for the given construction principles. In cheminformatics, software programs called structure generators are used to generate the set of all chemical structures adhering to given boundary conditions. Constitutional isomer generators, for example, can generate all possible constitutional isomers of a given gross molecular formula. In the real world, chemical reactions allow us to move in chemical space. The mapping between chemical space and molecular properties is often not unique, meaning that there can be very different molecules exhibiting very similar properties. Materials design and drug discovery both involve the exploration of chemical space. See also Cheminformatics Drug design Sequence space (evolution) Molecule mining References Cheminformatics Computational chemistry
Chemical space
[ "Chemistry" ]
622
[ "Theoretical chemistry", "Computational chemistry", "nan", "Cheminformatics" ]
1,551,797
https://en.wikipedia.org/wiki/Texas%20Medication%20Algorithm%20Project
The Texas Medication Algorithm Project (TMAP) is a decision-tree medical algorithm, the design of which was based on the expert opinions of mental health specialists. It has provided and rolled out a set of psychiatric management guidelines for doctors treating certain mental disorders within Texas' publicly funded mental health care system, along with manuals relating to each of them. The algorithms commence after diagnosis and cover pharmacological treatment (hence "Medication Algorithm"). History TMAP was initiated in the fall of 1997 and the initial research covered around 500 patients. TMAP arose from a collaboration that began in 1995 between the Texas Department of Mental Health and Mental Retardation (TDMHMR), pharmaceutical companies, and the University of Texas Southwestern. The research was supported by the National Institute of Mental Health, the Robert Wood Johnson Foundation, the Meadows Foundation, the Lightner-Sams Foundation, the Nanny Hogan Boyd Charitable Trust, TDMHMR, the Center for Mental Health Services, the Department of Veterans Affairs, the Health Services Research and Development Research Career Scientist Award, the United States Pharmacopoeia Convention Inc. and Mental Health Connections. Numerous companies that invent and develop antipsychotic medications provided use of their medications and furnished funding for the project. Companies did not participate in the production of the guidelines. In 2004, TMAP was mentioned as an example of a successful project in a paper regarding the implementation of mental health screening programs throughout the United States by President George W. Bush's New Freedom Commission on Mental Health, which sought to expand the program federally. The President had previously been Governor of Texas during the period when TMAP was implemented. Similar programs have been implemented in about a dozen states, according to a 2004 report in the British Medical Journal. Similar algorithms with similar prescribing advice have been produced elsewhere, for instance at the Maudsley Hospital, London. References External links MentalHealthCommission.gov - President's New Freedom Commission on Mental Health (official US government website) Health informatics Treatment of mental disorders Drugs in the United States Mental disorders screening and assessment tools
Texas Medication Algorithm Project
[ "Biology" ]
430
[ "Health informatics", "Medical technology" ]
1,551,873
https://en.wikipedia.org/wiki/ABC%20transporter
The ABC transporters, or ATP-binding cassette (ABC) transporters, are a transport system superfamily that is one of the largest and possibly one of the oldest gene families. It is represented in all extant phyla, from prokaryotes to humans. ABC transporters belong to translocases. ABC transporters often consist of multiple subunits, one or two of which are transmembrane proteins and one or two of which are membrane-associated AAA ATPases. The ATPase subunits utilize the energy of adenosine triphosphate (ATP) binding and hydrolysis to provide the energy needed for the translocation of substrates across membranes, either for uptake or for export of the substrate. Most of the uptake systems also have an extracytoplasmic receptor, a solute binding protein. Some homologous ATPases function in non-transport-related processes such as translation of RNA and DNA repair. ABC transporters are considered to be an ABC superfamily based on the similarities of the sequence and organization of their ATP-binding cassette (ABC) domains, even though the integral membrane proteins appear to have evolved independently several times, and thus comprise different protein families. Like the ABC exporters, it is possible that the integral membrane proteins of ABC uptake systems also evolved at least three times independently, based on their high resolution three-dimensional structures. ABC uptake porters take up a large variety of nutrients, biosynthetic precursors, trace metals and vitamins, while exporters transport lipids, sterols, drugs, and a large variety of primary and secondary metabolites. Some of these exporters in humans are involved in tumor resistance, cystic fibrosis and a range of other inherited human diseases. High-level expression of the genes encoding some of these exporters in both prokaryotic and eukaryotic organisms (including humans) results in the development of resistance to multiple drugs such as antibiotics and anti-cancer agents. Hundreds of ABC transporters have been characterized from both prokaryotes and eukaryotes. ABC genes are essential for many processes in the cell, and mutations in human genes cause or contribute to several human genetic diseases. Forty-eight ABC genes have been reported in humans. Among these, many have been characterized and shown to be causally related to diseases present in humans such as cystic fibrosis, adrenoleukodystrophy, Stargardt disease, drug-resistant tumors, Dubin–Johnson syndrome, Byler's disease, progressive familial intrahepatic cholestasis, X-linked sideroblastic anemia, ataxia, and persistent hyperinsulinemic hypoglycemia. ABC transporters are also involved in multiple drug resistance, and this is how some of them were first identified. When the ABC transport proteins are overexpressed in cancer cells, they can export anticancer drugs and render tumors resistant. Function ABC transporters utilize the energy of ATP binding and hydrolysis to transport various substrates across cellular membranes. They are divided into three main functional categories. In prokaryotes, importers mediate the uptake of nutrients into the cell. The substrates that can be transported include ions, amino acids, peptides, sugars, and other molecules that are mostly hydrophilic. The membrane-spanning region of the ABC transporter protects hydrophilic substrates from the lipids of the membrane bilayer, thus providing a pathway across the cell membrane. Eukaryotes do not possess any importers.
Exporters or effluxers, which are present both in prokaryotes and eukaryotes, function as pumps that extrude toxins and drugs out of the cell. In gram-negative bacteria, exporters transport lipids and some polysaccharides from the cytoplasm to the periplasm. The third subgroup of ABC proteins does not function in transport, but is rather involved in translation and DNA repair processes. Prokaryotic Bacterial ABC transporters are essential for cell viability, virulence, and pathogenicity. Iron ABC uptake systems, for example, are important effectors of virulence. Pathogens use siderophores, such as enterobactin, to scavenge iron that is in complex with high-affinity iron-binding proteins or erythrocytes. These are high-affinity iron-chelating molecules that are secreted by bacteria and reabsorb iron into iron-siderophore complexes. The chvE-gguAB gene in Agrobacterium tumefaciens encodes glucose and galactose importers that are also associated with virulence. Transporters are vital for cell survival, functioning as protein systems that counteract any undesirable change occurring in the cell. For instance, a potentially lethal increase in osmotic strength is counterbalanced by activation of osmosensing ABC transporters that mediate uptake of solutes. Other than functioning in transport, some bacterial ABC proteins are also involved in the regulation of several physiological processes. In bacterial efflux systems, certain substances that need to be extruded from the cell include surface components of the bacterial cell (e.g. capsular polysaccharides, lipopolysaccharides, and teichoic acid), proteins involved in bacterial pathogenesis (e.g. hemolysin, heme-binding protein, and alkaline protease), heme, hydrolytic enzymes, S-layer proteins, competence factors, toxins, antibiotics, bacteriocins, peptide antibiotics, drugs and siderophores. They also play important roles in biosynthetic pathways, including extracellular polysaccharide biosynthesis and cytochrome biogenesis. Eukaryotic Although most eukaryotic ABC transporters are effluxers, some are not directly involved in transporting substrates. In the cystic fibrosis transmembrane conductance regulator (CFTR) and in the sulfonylurea receptor (SUR), ATP hydrolysis is associated with the regulation of opening and closing of ion channels carried by the ABC protein itself or other proteins. Human ABC transporters are involved in several diseases that arise from polymorphisms in ABC genes and rarely due to complete loss of function of single ABC proteins. Such diseases include Mendelian diseases and complex genetic disorders such as cystic fibrosis, adrenoleukodystrophy, Stargardt disease, Tangier disease, immune deficiencies, progressive familial intrahepatic cholestasis, Dubin–Johnson syndrome, pseudoxanthoma elasticum, persistent hyperinsulinemic hypoglycemia of infancy due to focal adenomatous hyperplasia, X-linked sideroblastosis and anemia, age-related macular degeneration, familial hypoapoproteinemia, retinitis pigmentosa, cone-rod dystrophy, and others. The human ABCB (MDR/TAP) family is responsible for multiple drug resistance (MDR) against a variety of structurally unrelated drugs. ABCB1 or MDR1 P-glycoprotein is also involved in other biological processes for which lipid transport is the main function.
It is found to mediate the secretion of the steroid aldosterone by the adrenals, and its inhibition blocked the migration of dendritic immune cells, possibly related to the outward transport of the lipid platelet activating factor (PAF). It has also been reported that ABCB1 mediates transport of cortisol and dexamethasone, but not of progesterone, in ABCB1-transfected cells. MDR1 can also transport cholesterol, short-chain and long-chain analogs of phosphatidylcholine (PC), phosphatidylethanolamine (PE), phosphatidylserine (PS), sphingomyelin (SM), and glucosylceramide (GlcCer). Multispecific transport of diverse endogenous lipids through the MDR1 transporter can possibly affect the transbilayer distribution of lipids, in particular of species normally predominant on the inner plasma membrane leaflet such as PS and PE. More recently, ABC-transporters have been shown to exist within the placenta, indicating they could play a protective role for the developing fetus against xenobiotics. Evidence has shown that placental expression of the ABC-transporters P-glycoprotein (P-gp) and breast cancer resistance protein (BCRP) is increased in preterm compared to term placentae, with P-gp expression further increased in preterm pregnancies with chorioamnionitis. To a lesser extent, increasing maternal BMI was also associated with increased placental ABC-transporter expression, but only at preterm. Structure All ABC transport proteins share a structural organization consisting of four core domains. These domains consist of two trans-membrane (T) domains and two cytosolic (A) domains. The two T domains alternate between an inward and outward facing orientation, and the alternation is powered by the hydrolysis of adenosine triphosphate or ATP. ATP binds to the A subunits and it is then hydrolyzed to power the alternation, but the exact process by which this happens is not known. The four domains can be present in four separate polypeptides, which occur mostly in bacteria, or present in one or two multi-domain polypeptides. When all four domains are contained in a single multi-domain polypeptide, it can be referred to as a full transporter; when a polypeptide contains only one T domain and one A domain, it can be referred to as a half transporter. The T domains are each built of typically 10 membrane spanning alpha helices, through which the transported substance can cross the plasma membrane. Also, the structure of the T domains determines the specificity of each ABC protein. In the inward facing conformation, the binding site on the A domain is open directly to the surrounding aqueous solutions. This allows hydrophilic molecules to enter the binding site directly from the aqueous environment. In addition, a gap in the protein is accessible directly from the hydrophobic core of the inner leaflet of the membrane bilayer. This allows hydrophobic molecules to enter the binding site directly from the inner leaflet of the phospholipid bilayer. After the ATP powered move to the outward facing conformation, molecules are released from the binding site and allowed to escape into the exoplasmic leaflet or directly into the extracellular medium. The common feature of all ABC transporters is that they consist of two distinct domains, the transmembrane domain (TMD) and the nucleotide-binding domain (NBD). The TMD, also known as membrane-spanning domain (MSD) or integral membrane (IM) domain, consists of alpha helices embedded in the membrane bilayer.
It recognizes a variety of substrates and undergoes conformational changes to transport the substrate across the membrane. The sequence and architecture of TMDs is variable, reflecting the chemical diversity of substrates that can be translocated. The NBD or ATP-binding cassette (ABC) domain, on the other hand, is located in the cytoplasm and has a highly conserved sequence. The NBD is the site for ATP binding. In most exporters, the N-terminal transmembrane domain and the C-terminal ABC domains are fused as a single polypeptide chain, arranged as TMD-NBD-TMD-NBD. An example is the E. coli hemolysin exporter HlyB. Importers have an inverted organization, that is, NBD-TMD-NBD-TMD, where the ABC domain is N-terminal whereas the TMD is C-terminal, such as in the E. coli MacB protein responsible for macrolide resistance. The structural architecture of ABC transporters consists minimally of two TMDs and two NBDs. Four individual polypeptide chains including two TMD and two NBD subunits, may combine to form a full transporter such as in the E. coli BtuCD importer involved in the uptake of vitamin B12. Most exporters, such as in the multidrug exporter Sav1866 from Staphylococcus aureus, are made up of a homodimer consisting of two half transporters or monomers of a TMD fused to a nucleotide-binding domain (NBD). A full transporter is often required to gain functionality. Some ABC transporters have additional elements that contribute to the regulatory function of this class of proteins. In particular, importers have a high-affinity binding protein (BP) that specifically associates with the substrate in the periplasm for delivery to the appropriate ABC transporter. Exporters do not have the binding protein but have an intracellular domain (ICD) that joins the membrane-spanning helices and the ABC domain. The ICD is believed to be responsible for communication between the TMD and NBD. Transmembrane domain (TMD) Most transporters have transmembrane domains that consist of a total of 12 α-helices with 6 α-helices per monomer. Since TMDs are structurally diverse, some transporters have varying number of helices (between six and eleven). The TM domains are categorized into three distinct sets of folds: type I ABC importer, type II ABC importer and ABC exporter folds. The classification of importer folds is based on detailed characterization of the sequences. The type I ABC importer fold was originally observed in the ModB TM subunit of the molybdate transporter. This diagnostic fold can also be found in the MalF and MalG TM subunits of MalFGK2 and the Met transporter MetI. In the MetI transporter, a minimal set of 5 transmembrane helices constitute this fold while an additional helix is present for both ModB and MalG. The common organization of the fold is the "up-down" topology of the TM2-5 helices that lines the translocation pathway and the TM1 helix wrapped around the outer, membrane-facing surface and contacts the other TM helices. The type II ABC importer fold is observed in the twenty TM helix-domain of BtuCD and in Hi1471, a homologous transporter from Haemophilus influenzae. In BtuCD, the packing of the helices is complex. The noticeable pattern is that the TM2 helix is positioned through the center of the subunit where it is surrounded in close proximity by the other helices. Meanwhile, the TM5 and TM10 helices are positioned in the TMD interface. 
The membrane spanning region of ABC exporters is organized into two "wings" that are composed of helices TM1 and TM2 from one subunit and TM3-6 of the other, in a domain-swapped arrangement. A prominent pattern is that helices TM1-3 are related to TM4-6 by an approximate twofold rotation around an axis in the plane of the membrane. The exporter fold is originally observed in the Sav1866 structure. It contains 12 TM helices, 6 per monomer. Nucleotide-binding domain (NBD) The ABC domain consists of two domains, the catalytic core domain similar to RecA-like motor ATPases and a smaller, structurally diverse α-helical subdomain that is unique to ABC transporters. The larger domain typically consists of two β-sheets and six α helices, where the catalytic Walker A motif (GXXGXGKS/T where X is any amino acid) or P-loop and Walker B motif (ΦΦΦΦD, of which Φ is a hydrophobic residue) is situated. The helical domain consists of three or four helices and the ABC signature motif, also known as LSGGQ motif, linker peptide or C motif. The ABC domain also has a glutamine residue residing in a flexible loop called Q loop, lid or γ-phosphate switch, that connects the TMD and ABC. The Q loop is presumed to be involved in the interaction of the NBD and TMD, particularly in the coupling of nucleotide hydrolysis to the conformational changes of the TMD during substrate translocation. The H motif or switch region contains a highly conserved histidine residue that is also important in the interaction of the ABC domain with ATP. The name ATP-binding cassette is derived from the diagnostic arrangement of the folds or motifs of this class of proteins upon formation of the ATP sandwich and ATP hydrolysis. ATP binding and hydrolysis Dimer formation of the two ABC domains of transporters requires ATP binding. It is generally observed that the ATP bound state is associated with the most extensive interface between ABC domains, whereas the structures of nucleotide-free transporters exhibit conformations with greater separations between the ABC domains. Structures of the ATP-bound state of isolated NBDs have been reported for importers including HisP, GlcV, MJ1267, E. coli MalK (E.c.MalK), T. litoralis MalK (TlMalK), and exporters such as TAP, HlyB, MJ0796, Sav1866, and MsbA. In these transporters, ATP is bound to the ABC domain. Two molecules of ATP are positioned at the interface of the dimer, sandwiched between the Walker A motif of one subunit and the LSGGQ motif of the other. This was first observed in Rad50 and reported in structures of MJ0796, the NBD subunit of the LolD transporter from Methanococcus jannaschii and E.c.MalK of a maltose transporter. These structures were also consistent with results from biochemical studies revealing that ATP is in close contact with residues in the P-loop and LSGGQ motif during catalysis. Nucleotide binding is required to ensure the electrostatic and/or structural integrity of the active site and contribute to the formation of an active NBD dimer. Binding of ATP is stabilized by the following interactions: (1) ring-stacking interaction of a conserved aromatic residue preceding the Walker A motif and the adenosine ring of ATP, (2) hydrogen-bonds between a conserved lysine residue in the Walker A motif and the oxygen atoms of the β- and γ-phosphates of ATP and coordination of these phosphates and some residues in the Walker A motif with Mg2+ ion, and (3) γ-phosphate coordination with side chain of serine and backbone amide groups of glycine residues in the LSGGQ motif. 
In addition, a residue that suggests the tight coupling of ATP binding and dimerization, is the conserved histidine in the H-loop. This histidine contacts residues across the dimer interface in the Walker A motif and the D loop, a conserved sequence following the Walker B motif. The enzymatic hydrolysis of ATP requires proper binding of the phosphates and positioning of the γ-phosphate to the attacking water. In the nucleotide binding site, the oxygen atoms of the β- and γ-phosphates of ATP are stabilized by residues in the Walker A motif and coordinate with Mg2+. This Mg2+ ion also coordinates with the terminal aspartate residue in the Walker B motif through the attacking H2O. A general base, which may be the glutamate residue adjacent to the Walker B motif, glutamine in the Q-loop, or a histidine in the switch region that forms a hydrogen bond with the γ-phosphate of ATP, is found to catalyze the rate of ATP hydrolysis by promoting the attacking H2O. The precise molecular mechanism of ATP hydrolysis is still controversial. Mechanism of transport ABC transporters are active transporters, that is, they use energy in the form of adenosine triphosphate (ATP) to translocate substrates across cell membranes. These proteins harness the energy of ATP binding and/or hydrolysis to drive conformational changes in the transmembrane domain (TMD) and consequently transport molecules. ABC importers and exporters have a common mechanism for transporting substrates. They are similar in their structures. The model that describes the conformational changes associated with the binding of the substrate is the alternating-access model. In this model, the substrate binding site alternates between outward- and inward-facing conformations. The relative binding affinities of the two conformations for the substrate largely determines the net direction of transport. For importers, since translocation is directed from the periplasm to the cytoplasm, the outward-facing conformation has higher binding affinity for the substrate. In contrast, the substrate binding affinity in exporters is greater in the inward-facing conformation. A model that describes the conformational changes in the nucleotide-binding domain (NBD) as a result of ATP binding and hydrolysis is the ATP-switch model. This model presents two principal conformations of the NBDs: formation of a closed dimer upon binding two ATP molecules and dissociation to an open dimer facilitated by ATP hydrolysis and release of inorganic phosphate (Pi) and adenosine diphosphate (ADP). Switching between the open and closed dimer conformations induces conformational changes in the TMD resulting in substrate translocation. The general mechanism for the transport cycle of ABC transporters has not been fully elucidated, but substantial structural and biochemical data has accumulated to support a model in which ATP binding and hydrolysis is coupled to conformational changes in the transporter. The resting state of all ABC transporters has the NBDs in an open dimer configuration, with low affinity for ATP. This open conformation possesses a chamber accessible to the interior of the transporter. The transport cycle is initiated by binding of substrate to the high-affinity site on the TMDs, which induces conformational changes in the NBDs and enhances the binding of ATP. Two molecules of ATP bind, cooperatively, to form the closed dimer configuration. 
The closed NBD dimer induces a conformational change in the TMDs such that the TMD opens, forming a chamber with an opening opposite to that of the initial state. The affinity of the substrate to the TMD is reduced, thereby releasing the substrate. Hydrolysis of ATP follows, and the sequential release of Pi and then ADP restores the transporter to its basal configuration. Although a common mechanism has been suggested, the order of substrate binding, nucleotide binding and hydrolysis, and conformational changes, as well as the interactions between the domains, is still debated. Several groups studying ABC transporters have differing assumptions on the driving force of transporter function. It is generally assumed that ATP hydrolysis provides the principal energy input or "power stroke" for transport and that the NBDs operate alternately and are possibly involved in different steps in the transport cycle. However, recent structural and biochemical data shows that ATP binding, rather than ATP hydrolysis, provides the "power stroke". It may also be that since ATP binding triggers NBD dimerization, the formation of the dimer may represent the "power stroke." In addition, some transporters have NBDs that do not have similar abilities in binding and hydrolyzing ATP, and the fact that the interface of the NBD dimer consists of two ATP binding pockets suggests a concurrent function of the two NBDs in the transport cycle. Some evidence that ATP binding is indeed the power stroke of the transport cycle has been reported. It has been shown that ATP binding induces changes in the substrate-binding properties of the TMDs. The affinity of ABC transporters for substrates has been difficult to measure directly, and indirect measurements, for instance through stimulation of ATPase activity, often reflect other rate-limiting steps. Recently, direct measurement of vinblastine binding to permeability glycoprotein (P-glycoprotein) in the presence of nonhydrolyzable ATP analogs, e.g. 5'-adenylyl-β-γ-imidodiphosphate (AMP-PNP), showed that ATP binding, in the absence of hydrolysis, is sufficient to reduce substrate-binding affinity. Also, ATP binding induces substantial conformational changes in the TMDs. Spectroscopic, protease accessibility and crosslinking studies have shown that ATP binding to the NBDs induces conformational changes in multidrug resistance-associated protein-1 (MRP1), HisPMQ, LmrA, and Pgp. Two-dimensional crystal structures of AMP-PNP-bound Pgp showed that the major conformational change during the transport cycle occurs upon ATP binding and that subsequent ATP hydrolysis introduces more limited changes. Rotation and tilting of transmembrane α-helices may both contribute to these conformational changes. Other studies have focused on confirming that ATP binding induces NBD closed dimer formation. Biochemical studies of intact transport complexes suggest that the conformational changes in the NBDs are relatively small. In the absence of ATP, the NBDs may be relatively flexible, but this does not involve a major reorientation of the NBDs with respect to the other domains. ATP binding induces a rigid body rotation of the two ABC subdomains with respect to each other, which allows the proper alignment of the nucleotide in the active site and interaction with the designated motifs. There is strong biochemical evidence that binding of two ATP molecules can be cooperative, that is, ATP must bind to the two active site pockets before the NBDs can dimerize and form the closed, catalytically active conformation.
ABC importers Most ABC transporters that mediate the uptake of nutrients and other molecules in bacteria rely on a high-affinity solute binding protein (BP). BPs are soluble proteins located in the periplasmic space between the inner and outer membranes of gram-negative bacteria. Gram-positive microorganisms lack a periplasm such that their binding protein is often a lipoprotein bound to the external face of the cell membrane. Some gram-positive bacteria have BPs fused to the transmembrane domain of the transporter itself. The first successful x-ray crystal structure of an intact ABC importer is the molybdenum transporter (ModBC-A) from Archaeoglobus fulgidus. Atomic-resolution structures of three other bacterial importers, E. coli BtuCD, E. coli maltose transporter (MalFGK2-E), and the putative metal-chelate transporter of Haemophilus influenzae, HI1470/1, have also been determined. The structures provided detailed pictures of the interaction of the transmembrane and ABC domains as well as revealed two different conformations with an opening in two opposite directions. Another common feature of importers is that each NBD is bound to one TMD primarily through a short cytoplasmic helix of the TMD, the "coupling helix". This portion of the EAA loop docks in a surface cleft formed between the RecA-like and helical ABC subdomains and lies approximately parallel to the membrane bilayer. Large ABC importers The BtuCD and HI1470/1 are classified as large (Type II) ABC importers. The transmembrane subunit of the vitamin B12 importer, BtuCD, contains 10 TM helices and the functional unit consists of two copies each of the nucleotide binding domain (NBD) and transmembrane domain (TMD). The TMD and NBD interact with one another via the cytoplasmic loop between two TM helices and the Q loop in the ABC. In the absence of nucleotide, the two ABC domains are folded and the dimer interface is open. A comparison of the structures with (BtuCDF) and without (BtuCD) binding protein reveals that BtuCD has an opening that faces the periplasm whereas in BtuCDF, the outward-facing conformation is closed to both sides of the membrane. The structures of BtuCD and the BtuCD homolog, HI1470/1, represent two different conformational states of an ABC transporter. The predicted translocation pathway in BtuCD is open to the periplasm and closed at the cytoplasmic side of the membrane while that of HI1470/1 faces the opposite direction and open only to the cytoplasm. The difference in the structures is a 9° twist of one TM subunit relative to the other. Small ABC importers Structures of the ModBC-A and MalFGK2-E, which are in complex with their binding protein, correspond to small (Type I) ABC importers. The TMDs of ModBC-A and MalFGK2-E have only six helices per subunit. The homodimer of ModBC-A is in a conformation in which the TM subunits (ModB) orient in an inverted V-shape with a cavity accessible to the cytoplasm. The ABC subunits (ModC), on the other hand, are arranged in an open, nucleotide-free conformation, in which the P-loop of one subunit faces but is detached from the LSGGQ motif of the other. The binding protein ModA is in a closed conformation with substrate bound in a cleft between its two lobes and attached to the extracellular loops of ModB, wherein the substrate is sitting directly above the closed entrance of the transporter. The MalFGK2-E structure resembles the catalytic transition state for ATP hydrolysis. 
It is in a closed conformation where it contains two ATP molecules, sandwiched between the Walker A and B motifs of one subunit and the LSGGQ motif of the other subunit. The maltose binding protein (MBP or MalE) is docked on the periplasmic side of the TM subunits (MalF and MalG) and a large, occluded cavity can be found at the interface of MalF and MalG. The arrangement of the TM helices is in a conformation that is closed toward the cytoplasm but with an opening that faces outward. The structure suggests a possibility that MBP may stimulate the ATPase activity of the transporter upon binding. Mechanism of transport for importers The mechanism of transport for importers supports the alternating-access model. The resting state of importers is inward-facing, where the nucleotide binding domain (NBD) dimer interface is held open by the TMDs and facing outward but occluded from the cytoplasm. Upon docking of the closed, substrate-loaded binding protein towards the periplasmic side of the transmembrane domains, ATP binds and the NBD dimer closes. This switches the resting state of transporter into an outward-facing conformation, in which the TMDs have reoriented to receive substrate from the binding protein. After hydrolysis of ATP, the NBD dimer opens and substrate is released into the cytoplasm. Release of ADP and Pi reverts the transporter into its resting state. The only inconsistency of this mechanism to the ATP-switch model is that the conformation in its resting, nucleotide-free state is different from the expected outward-facing conformation. Although that is the case, the key point is that the NBD does not dimerize unless ATP and binding protein is bound to the transporter. ABC exporters Prokaryotic ABC exporters are abundant and have close homologues in eukaryotes. This class of transporters is studied based on the type of substrate that is transported. One class is involved in the protein (e.g. toxins, hydrolytic enzymes, S-layer proteins, lantibiotics, bacteriocins, and competence factors) export and the other in drug efflux. ABC transporters have gained extensive attention because they contribute to the resistance of cells to antibiotics and anticancer agents by pumping drugs out of the cells. A common mechanism is the overexpression of ABC exporters like P-glycoprotein (P-gp/ABCB1), multidrug resistance-associated protein 1 (MRP1/ABCC1), and breast cancer resistance protein (BCRP/ABCG2) in cancer cells that limit the exposure to anticancer drugs. In gram-negative organisms, ABC transporters mediate secretion of protein substrates across inner and outer membranes simultaneously without passing through the periplasm. This type of secretion is referred to as type I secretion, which involves three components that function in concert: an ABC exporter, a membrane fusion protein (MFP), and an outer membrane factor (OMF). An example is the secretion of hemolysin (HlyA) from E. coli where the inner membrane ABC transporter HlyB interacts with an inner membrane fusion protein HlyD and an outer membrane facilitator TolC. TolC allows hemolysin to be transported across the two membranes, bypassing the periplasm. Bacterial drug resistance has become an increasingly major health problem. One of the mechanisms for drug resistance is associated with an increase in antibiotic efflux from the bacterial cell. Drug resistance associated with drug efflux, mediated by P-glycoprotein, was originally reported in mammalian cells. 
In bacteria, Levy and colleagues presented the first evidence that antibiotic resistance was caused by active efflux of a drug. P-glycoprotein is the best-studied efflux pump and as such has offered important insights into the mechanism of bacterial pumps. Although some exporters transport a specific type of substrate, most transporters extrude a diverse class of drugs with varying structure. These transporters are commonly called multi-drug resistant (MDR) ABC transporters and sometimes referred to as "hydrophobic vacuum cleaners". Human ABCB1/MDR1 P-glycoprotein P-glycoprotein (3.A.1.201.1) is a well-studied protein associated with multi-drug resistance. It belongs to the human ABCB (MDR/TAP) family and is also known as ABCB1 or MDR1 Pgp. MDR1 consists of a functional monomer with two transmembrane domains (TMD) and two nucleotide-binding domains (NBD). This protein can transport mainly cationic or electrically neutral substrates as well as a broad spectrum of amphiphilic substrates. The structure of the full-size ABCB1 monomer was obtained in the presence and absence of nucleotide using electron cryo-crystallography. Without the nucleotide, the TMDs are approximately parallel and form a barrel surrounding a central pore, with the opening facing towards the extracellular side of the membrane and closed at the intracellular face. In the presence of the nonhydrolyzable ATP analog AMP-PNP, the TMDs undergo a substantial reorganization, with three clearly segregated domains. A central pore, which is enclosed between the TMDs, is slightly open towards the intracellular face with a gap between two domains allowing access of substrate from the lipid phase. Substantial repacking and possible rotation of the TM helices upon nucleotide binding suggests a helix rotation model for the transport mechanism. Plant transporters The genome of the model plant Arabidopsis thaliana encodes 120 ABC proteins, compared to the 50-70 ABC proteins encoded by the human and fruit fly (Drosophila melanogaster) genomes. Plant ABC proteins are categorized in 13 subfamilies on the basis of size (full, half or quarter), orientation, and overall amino acid sequence similarity. Multidrug resistant (MDR) homologs, also known as P-glycoproteins, represent the largest subfamily in plants with 22 members and the second largest overall ABC subfamily. The B subfamily of plant ABC transporters (ABCBs) is characterized by its localization to the plasma membrane. Plant ABCB transporters are characterized by heterologously expressing them in Escherichia coli, Saccharomyces cerevisiae, Schizosaccharomyces pombe (fission yeast), and HeLa cells to determine substrate specificity. Plant ABCB transporters have been shown to transport the phytohormone indole-3-acetic acid (IAA), also known as auxin, an essential regulator of plant growth and development. The directional polar transport of auxin mediates plant environmental responses through processes such as phototropism and gravitropism. Two of the best-studied auxin transporters, ABCB1 and ABCB19, have been characterized as primary auxin exporters. Other ABCB transporters, such as ABCB4, participate in both the export and import of auxin. At low intracellular auxin concentrations, ABCB4 imports auxin until a certain threshold is reached, at which point it reverses function and only exports auxin. Sav1866 The first high-resolution structure reported for an ABC exporter was that of Sav1866 (3.A.1.106.2) from Staphylococcus aureus. Sav1866 is a homolog of multidrug ABC transporters.
It shows significant sequence similarity to human ABC transporters of subfamily B, which includes MDR1 and TAP1/TAP2. The ATPase activity of Sav1866 is known to be stimulated by cancer drugs such as doxorubicin, vinblastine and others, which suggests similar substrate specificity to P-glycoprotein and therefore a possible common mechanism of substrate translocation. Sav1866 is a homodimer of half transporters, and each subunit contains an N-terminal TMD with six helices and a C-terminal NBD. The NBDs are similar in structure to those of other ABC transporters, in which the two ATP binding sites are formed at the dimer interface between the Walker A motif of one NBD and the LSGGQ motif of the other. The ADP-bound structure of Sav1866 shows the NBDs in a closed dimer and the TM helices split into two "wings" oriented towards the periplasm, forming the outward-facing conformation. Each wing consists of helices TM1-2 from one subunit and TM3-6 from the other subunit. It contains long intracellular loops (ICLs or ICDs) connecting the TMDs; these loops extend beyond the lipid bilayer into the cytoplasm and interact with the NBDs. Whereas the importers contain a short coupling helix that contacts a single NBD, Sav1866 has two intracellular coupling helices, one (ICL1) contacting the NBDs of both subunits and the other (ICL2) interacting with only the opposite NBD subunit. MsbA MsbA (3.A.1.106.1) is a multi-drug resistant (MDR) ABC transporter and possibly a lipid flippase. It is an ATPase that transports lipid A, the hydrophobic moiety of lipopolysaccharide (LPS), a glucosamine-based saccharolipid that makes up the outer monolayer of the outer membranes of most gram-negative bacteria. Lipid A is an endotoxin, and so loss of MsbA from the cell membrane, or mutations that disrupt transport, result in the accumulation of lipid A in the inner cell membrane, leading to cell death. MsbA is a close bacterial homolog of P-glycoprotein (Pgp) by protein sequence homology and has overlapping substrate specificities with the MDR-ABC transporter LmrA from Lactococcus lactis. MsbA from E. coli is 36% identical to the NH2-terminal half of human MDR1, suggesting a common mechanism for transport of amphipathic and hydrophobic substrates. The MsbA gene encodes a half transporter that contains a transmembrane domain (TMD) fused with a nucleotide-binding domain (NBD). It is assembled as a homodimer with a total molecular mass of 129.2 kDa. MsbA contains 6 TMDs on the periplasmic side, an NBD located on the cytoplasmic side of the cell membrane, and an intracellular domain (ICD) bridging the TMD and NBD. This conserved helix, extending from the TMD segments into or near the active site of the NBD, is largely responsible for crosstalk between TMD and NBD. In particular, ICD1 serves as a conserved pivot about which the NBD can rotate, allowing the NBD to dissociate and dimerize during ATP binding and hydrolysis. Previously published (and now retracted) X-ray structures of MsbA were inconsistent with the bacterial homolog Sav1866. The structures were reexamined and found to contain an error in the assignment of the handedness, resulting in incorrect models of MsbA. Recently, the errors have been rectified and new structures have been reported. The resting state of E. coli MsbA exhibits an inverted "V" shape with a chamber accessible to the interior of the transporter, suggesting an open, inward-facing conformation. 
The dimer contacts are concentrated between the extracellular loops and while the NBDs are ≈50Å apart, the subunits are facing each other. The distance between the residues in the site of the dimer interface have been verified by cross-linking experiments and EPR spectroscopy studies. The relatively large chamber allows it to accommodate large head groups such as that present in lipid A. Significant conformational changes are required to move the large sugar head groups across the membrane. The difference between the two nucleotide-free (apo) structures is the ≈30° pivot of TM4/TM5 helices relative to the TM3/TM6 helices. In the closed apo state (from V. cholerae MsbA), the NBDs are aligned and although closer, have not formed an ATP sandwich, and the P loops of opposing monomers are positioned next to one another. In comparison to the open conformation, the dimer interface of the TMDs in the closed, inward-facing conformation has extensive contacts. For both apo conformations of MsbA, the chamber opening is facing inward. The structure of MsbA-AMP-PNP (5'-adenylyl-β-γ-imidodiphosphate), obtained from S. typhimurium, is similar to Sav1866. The NBDs in this nucleotide-bound, outward-facing conformation, come together to form a canonical ATP dimer sandwich, that is, the nucleotide is situated in between the P-loop and LSGGQ motif. The conformational transition from MsbA-closed-apo to MsbA-AMP-PNP involves two steps, which are more likely concerted: a ≈10° pivot of TM4/TM5 helices towards TM3/TM6, bringing the NBDs closer but not into alignment followed by tilting of TM4/TM5 helices ≈20° out of plane. The twisting motion results in the separation of TM3/TM6 helices away from TM1/TM2 leading to a change from an inward- to an outward- facing conformation. Thus, changes in both the orientation and spacing of the NBDs dramatically rearrange the packing of transmembrane helices and effectively switch access to the chamber from the inner to the outer leaflet of the membrane. The structures determined for MsbA is basis for the tilting model of transport. The structures described also highlight the dynamic nature of ABC exporters as also suggested by fluorescence and EPR studies. Recent work has resulted in the discovery of MsbA inhibitors. Mechanism of transport for exporters ABC exporters have a transport mechanism that is consistent with both the alternating-access model and ATP-switch model. In the apo states of exporters, the conformation is inward-facing and the TMDs and NBDs are relatively far apart to accommodate amphiphilic or hydrophobic substrates. For MsbA, in particular, the size of the chamber is large enough to accommodate the sugar groups from lipopolysaccharides (LPS). As has been suggested by several groups, binding of substrate initiates the transport cycle. The "power stroke", that is, ATP binding that induces NBD dimerization and formation of the ATP sandwich, drives the conformational changes in the TMDs. In MsbA, the sugar head groups are sequestered within the chamber during the "power stroke". The cavity is lined with charged and polar residues that are likely solvated creating an energetically unfavorable environment for hydrophobic substrates and energetically favorable for polar moieties in amphiphilic compounds or sugar groups from LPS. Since the lipid cannot be stable for a long time in the chamber environment, lipid A and other hydrophobic molecules may "flip" into an energetically more favorable position within the outer membrane leaflet. 
The "flipping" may also be driven by the rigid-body shearing of the TMDs while the hydrophobic tails of the LPS are dragged through the lipid bilayer. Repacking of the helices switches the conformation into an outward-facing state. ATP hydrolysis may widen the periplasmic opening and push the substrate towards the outer leaflet of the lipid bilayer. Hydrolysis of the second ATP molecule and release of Pi separate the NBDs, followed by restoration of the resting state, opening the chamber towards the cytoplasm for another cycle. Role in multidrug resistance ABC transporters are known to play a crucial role in the development of multidrug resistance (MDR). In MDR, patients on medication eventually develop resistance not only to the drug they are taking but also to several different types of drugs. This is caused by several factors, one of which is increased expulsion of the drug from the cell by ABC transporters. For example, the ABCB1 protein (P-glycoprotein) functions in pumping tumor suppression drugs out of the cell. Pgp, also called MDR1 or ABCB1, is the prototype of ABC transporters and also the most extensively studied gene. Pgp is known to transport organic cationic or neutral compounds. A few ABCC family members, also known as MRPs, have also been demonstrated to confer MDR to organic anion compounds. The most-studied member of the ABCG family is ABCG2, also known as BCRP (breast cancer resistance protein), which confers resistance to most topoisomerase I or II inhibitors such as topotecan, irinotecan, and doxorubicin. It is unclear exactly how these proteins can translocate such a wide variety of drugs; however, one model (the hydrophobic vacuum cleaner model) states that, in P-glycoprotein, the drugs are bound indiscriminately from the lipid phase based on their hydrophobicity. The discovery of the first eukaryotic ABC transporter protein came from studies on tumor cells and cultured cells that exhibited resistance to several drugs with unrelated chemical structures. These cells were shown to express elevated levels of a multidrug-resistance (MDR) transport protein, which was originally called P-glycoprotein (P-gp) but is also referred to as multidrug resistance protein 1 (MDR1) or ABCB1. This protein uses ATP hydrolysis, just like the other ABC transporters, to export a large variety of drugs from the cytosol to the extracellular medium. In multidrug-resistant cells, the MDR1 gene is frequently amplified. This results in a large overproduction of the MDR1 protein. The substrates of mammalian ABCB1 are primarily planar, lipid-soluble molecules with one or more positive charges. All of these substrates compete with one another for transport, suggesting that they bind to the same or overlapping sites on the protein. Many of the drugs that are transported out by ABCB1 are small, nonpolar drugs that diffuse from the extracellular medium into the cytosol, where they block various cellular functions. Drugs such as colchicine and vinblastine, which block assembly of microtubules, freely cross the membrane into the cytosol, but the export of these drugs by ABCB1 reduces their concentration in the cell. Therefore, a higher concentration of the drugs is required to kill cells that express ABCB1 than cells that do not express the gene. Other ABC transporters that contribute to multidrug resistance are ABCC1 (MRP1) and ABCG2 (breast cancer resistance protein). 
To solve the problems associated with multidrug resistance by MDR1, different types of drugs can be used or the ABC transporters themselves must be inhibited. For other types of drugs to work, they must bypass the resistance mechanism, which is the ABC transporter. To do this, other anticancer drugs can be utilized, such as alkylating drugs (cyclophosphamide), antimetabolites (5-fluorouracil), and anthracycline-modified drugs (annamycin and doxorubicin-peptide). These drugs would not function as substrates of ABC transporters and would thus not be transported. The other option is to use a combination of ABC inhibitory drugs and anticancer drugs at the same time. This would reverse the resistance to the anticancer drugs so that they could function as intended. The substances that reverse the resistance to anticancer drugs are called chemosensitizers. Reversal of multidrug resistance Drug resistance is a common clinical problem that occurs in patients with infectious diseases and in patients with cancer. Prokaryotic and eukaryotic microorganisms as well as neoplastic cells are often found to be resistant to drugs. MDR is frequently associated with overexpression of ABC transporters. Inhibition of ABC transporters by low-molecular-weight compounds has been extensively investigated in cancer patients; however, the clinical results have been disappointing. Recently, various RNAi strategies have been applied to reverse MDR in different tumor models; this technology is effective in reversing ABC-transporter-mediated MDR in cancer cells and is therefore a promising strategy for overcoming MDR by gene therapeutic applications. RNAi technology could also be considered for overcoming MDR in infectious diseases caused by microbial pathogens. Physiological role In addition to conferring MDR in tumor cells, ABC transporters are also expressed in the membranes of healthy cells, where they facilitate the transport of various endogenous substances, as well as of substances foreign to the body. For instance, ABC transporters such as Pgp, the MRPs and BCRP limit the absorption of many drugs from the intestine, and pump drugs from the liver cells to the bile as a means of removing foreign substances from the body. A large number of drugs are either transported by ABC transporters themselves or affect the transport of other drugs. The latter scenario can lead to drug-drug interactions, sometimes resulting in altered effects of the drugs. Methods to characterize ABC transporter interactions There are a number of assay types that allow the detection of ABC transporter interactions with endogenous and xenobiotic compounds. These assays range in complexity from relatively simple membrane assays, like the vesicular transport assay and the ATPase assay, to more complex cell-based assays, up to intricate in vivo detection methodologies. Membrane assays The vesicular transport assay detects the translocation of molecules by ABC transporters. Membranes prepared under suitable conditions contain inside-out oriented vesicles with the ATP binding site and substrate binding site of the transporter facing the buffer outside. Substrates of the transporter are taken up into the vesicles in an ATP-dependent manner. Rapid filtration using glass fiber filters or nitrocellulose membranes is used to separate the vesicles from the incubation solution, and the test compound trapped inside the vesicles is retained on the filter. The quantity of the transported unlabelled molecules is determined by HPLC, LC/MS, or LC/MS/MS. 
Alternatively, the compounds are radiolabeled, fluorescent, or carry a fluorescent tag so that the radioactivity or fluorescence retained on the filter can be quantified. Various types of membranes from different sources (e.g. insect cells, transfected or selected mammalian cell lines) are used in vesicular transport studies. Membranes are commercially available or can be prepared from various cells or even tissues, e.g. liver canalicular membranes. This assay type has the advantage of measuring the actual disposition of the substrate across the cell membrane. Its disadvantage is that compounds with medium-to-high passive permeability are not retained inside the vesicles, making direct transport measurements with this class of compounds difficult to perform. The vesicular transport assay can be performed in an "indirect" setting, where interacting test drugs modulate the transport rate of a reporter compound. This assay type is particularly suitable for the detection of possible drug-drug interactions and drug-endogenous substrate interactions. It is not sensitive to the passive permeability of the compounds and therefore detects all interacting compounds. Yet, it does not provide information on whether the compound tested is an inhibitor of the transporter or a substrate of the transporter inhibiting its function in a competitive fashion. A typical example of an indirect vesicular transport assay is the detection of the inhibition of taurocholate transport by ABCB11 (BSEP). Whole cell based assays Efflux transporter-expressing cells actively pump substrates out of the cell, which results in a lower rate of substrate accumulation, lower intracellular concentration at steady state, or a faster rate of substrate elimination from cells loaded with the substrate. Transported radioactive substrates or labeled fluorescent dyes can be directly measured, or, in an indirect set-up, the modulation of the accumulation of a probe substrate (e.g. fluorescent dyes like rhodamine 123, or calcein) can be determined in the presence of a test drug. Calcein-AM, a highly permeable derivative of calcein, readily penetrates into intact cells, where the endogenous esterases rapidly hydrolyze it to the fluorescent calcein. In contrast to calcein-AM, calcein has low permeability and therefore gets trapped in the cell and accumulates. As calcein-AM is an excellent substrate of the MDR1 and MRP1 efflux transporters, cells expressing MDR1 and/or MRP1 transporters pump the calcein-AM out of the cell before esterases can hydrolyze it. This results in a lower cellular accumulation rate of calcein. The higher the MDR activity in the cell membrane, the less calcein is accumulated in the cytoplasm. In MDR-expressing cells, the addition of an MDR inhibitor or an MDR substrate in excess dramatically increases the rate of calcein accumulation. Activity of the multidrug transporter is reflected by the difference between the amounts of dye accumulated in the presence and the absence of inhibitor. Using selective inhibitors, the transport activity of MDR1 and MRP1 can be easily distinguished. This assay can be used to screen drugs for transporter interactions, and also to quantify the MDR activity of cells. The calcein assay is the proprietary assay of SOLVO Biotechnology. Subfamilies Mammalian subfamilies There are 49 known ABC transporters present in humans, which are classified into seven families by the Human Genome Organization. A full list of human ABC transporters is given in the external links below. 
ABCA The ABCA subfamily is composed of 12 full transporters split into two subgroups. The first subgroup consists of seven genes that map to six different chromosomes. These are ABCA1, ABCA2, ABCA3, ABCA4, ABCA7, ABCA12, and ABCA13. The other subgroup consists of ABCA5, ABCA6, ABCA8, ABCA9, and ABCA10. All of subgroup 2 is organized into a head-to-tail gene cluster on chromosome 17q24. Genes in this second subgroup are distinguished from ABCA1-like genes by having 37-38 exons as opposed to the 50 exons in ABCA1. The ABCA1 subgroup is implicated in the development of genetic diseases. In recessive Tangier disease, the ABCA1 protein is mutated. Also, ABCA4 maps to a region of chromosome 1p21 that contains the gene for Stargardt's disease. This gene is found to be highly expressed in rod photoreceptors and is mutated in Stargardt's disease, recessive retinitis pigmentosa, and the majority of recessive cone-rod dystrophies. ABCB The ABCB subfamily is composed of four full transporters and seven half transporters. This is the only human subfamily to have both half and full types of transporters. ABCB1 was discovered as a protein overexpressed in certain drug-resistant tumor cells. It is expressed primarily in the blood–brain barrier and liver and is thought to be involved in protecting cells from toxins. Cells that overexpress this protein exhibit multi-drug resistance. ABCC Subfamily ABCC contains thirteen members, and nine of these transporters are referred to as the Multidrug Resistance Proteins (MRPs). The MRP proteins are found throughout nature and mediate many important functions. They are known to be involved in ion transport, toxin secretion, and signal transduction. Of the nine MRP proteins, four of them (MRP4, 5, 8, and 9; ABCC4, 5, 11, and 12) have a typical ABC structure with four domains, comprising two membrane-spanning domains, with each spanning domain followed by a nucleotide-binding domain. These are referred to as short MRPs. The remaining five MRPs (MRP1, 2, 3, 6, and 7; ABCC1, 2, 3, 6 and 10) are known as long MRPs and feature an additional fifth domain at their N terminus. CFTR, the transporter involved in the disease cystic fibrosis, is also considered part of this subfamily. Cystic fibrosis occurs upon mutation and loss of function of CFTR. The sulfonylurea receptors (SUR), involved in insulin secretion, neuronal function, and muscle function, are also part of this family of proteins. Mutations in SUR proteins are a potential cause of neonatal diabetes mellitus. SUR is also the binding site for drugs such as sulfonylureas and potassium-channel openers such as diazoxide. ABCD The ABCD subfamily consists of four genes that encode half transporters expressed exclusively in the peroxisome. ABCD1 is responsible for the X-linked form of adrenoleukodystrophy (ALD), a disease characterized by neurodegeneration and adrenal deficiency that typically begins in late childhood. The cells of ALD patients feature accumulation of unbranched saturated fatty acids, but the exact role of ABCD1 in the process is still undetermined. In addition, the functions of the other ABCD genes have yet to be determined, but they are thought to exert related functions in fatty acid metabolism. ABCE and ABCF Both of these subgroups are composed of genes that have ATP binding domains closely related to those of other ABC transporters, but these genes do not encode transmembrane domains. 
ABCE consists of only one member, OABP or ABCE1, which is known to recognize certain oligoadenylates produced in response to certain viral infections. Each member of the ABCF subgroup consists of a pair of ATP binding domains. ABCG Six half transporters with ATP binding sites at the N terminus and transmembrane domains at the C terminus make up the ABCG subfamily. This orientation is the opposite of that of all other ABC genes. There are only 5 ABCG genes in the human genome, but there are 15 in the Drosophila genome and 10 in yeast. The ABCG2 gene was discovered in cell lines selected for high-level resistance to mitoxantrone and no expression of ABCB1 or ABCC1. ABCG2 can export anthracycline anticancer drugs, as well as topotecan, mitoxantrone, or doxorubicin as substrates. Chromosomal translocations have been found to cause the ABCG2 amplification or rearrangement found in resistant cell lines. Cross-species subfamilies The following classification system for transmembrane solute transporters has been constructed in the TCDB. Three families of ABC exporters are defined by their evolutionary origins. ABC1 exporters evolved by intragenic triplication of a 2 TMS precursor (TMS = transmembrane segment; a "2 TMS" protein has 2 transmembrane segments) to give 6 TMS proteins. ABC2 exporters evolved by intragenic duplication of a 3 TMS precursor, and ABC3 exporters evolved from a 4 TMS precursor which duplicated either extragenically, to give two 4 TMS proteins, both required for transport function, or intragenically, to give 8 or 10 TMS proteins. The 10 TMS proteins appear to have two extra TMSs between the two 4 TMS repeat units. Most uptake systems (all except 3.A.1.21) are of the ABC2 type, divided into type I and type II by the way they handle nucleotides. A special subfamily of ABC2 importers, called ECF transporters, uses a separate subunit for substrate recognition. 
ABC1 (): 3.A.1.106 The Lipid Exporter (LipidE) Family 3.A.1.108 The β-Glucan Exporter (GlucanE) Family 3.A.1.109 The Protein-1 Exporter (Prot1E) Family 3.A.1.110 The Protein-2 Exporter (Prot2E) Family 3.A.1.111 The Peptide-1 Exporter (Pep1E) Family 3.A.1.112 The Peptide-2 Exporter (Pep2E) Family 3.A.1.113 The Peptide-3 Exporter (Pep3E) Family 3.A.1.117 The Drug Exporter-2 (DrugE2) Family 3.A.1.118 The Microcin J25 Exporter (McjD) Family 3.A.1.119 The Drug/Siderophore Exporter-3 (DrugE3) Family 3.A.1.123 The Peptide-4 Exporter (Pep4E) Family 3.A.1.127 The AmfS Peptide Exporter (AmfS-E) Family 3.A.1.129 The CydDC Cysteine Exporter (CydDC-E) Family 3.A.1.135 The Drug Exporter-4 (DrugE4) Family 3.A.1.139 The UDP-Glucose Exporter (U-GlcE) Family (UPF0014 Family) 3.A.1.201 The Multidrug Resistance Exporter (MDR) Family (ABCB) 3.A.1.202 The Cystic Fibrosis Transmembrane Conductance Exporter (CFTR) Family (ABCC) 3.A.1.203 The Peroxysomal Fatty Acyl CoA Transporter (P-FAT) Family (ABCD) 3.A.1.206 The a-Factor Sex Pheromone Exporter (STE) Family (ABCB) 3.A.1.208 The Drug Conjugate Transporter (DCT) Family (ABCC) (Dębska et al., 2011) 3.A.1.209 The MHC Peptide Transporter (TAP) Family (ABCB) 3.A.1.210 The Heavy Metal Transporter (HMT) Family (ABCB) 3.A.1.212 The Mitochondrial Peptide Exporter (MPE) Family (ABCB) 3.A.1.21 The Siderophore-Fe3+ Uptake Transporter (SIUT) Family ABC2 ( [partial]): 3.A.1.101 The Capsular Polysaccharide Exporter (CPSE) Family 3.A.1.102 The Lipooligosaccharide Exporter (LOSE) Family 3.A.1.103 The Lipopolysaccharide Exporter (LPSE) Family 3.A.1.104 The Teichoic Acid Exporter (TAE) Family 3.A.1.105 The Drug Exporter-1 (DrugE1) Family 3.A.1.107 The Putative Heme Exporter (HemeE) Family 3.A.1.115 The Na+ Exporter (NatE) Family 3.A.1.116 The Microcin B17 Exporter (McbE) Family 3.A.1.124 The 3-component Peptide-5 Exporter (Pep5E) Family 3.A.1.126 The β-Exotoxin I Exporter (βETE) Family 3.A.1.128 The SkfA Peptide Exporter (SkfA-E) Family 3.A.1.130 The Multidrug/Hemolysin Exporter (MHE) Family 3.A.1.131 The Bacitracin Resistance (Bcr) Family 3.A.1.132 The Gliding Motility ABC Transporter (Gld) Family 3.A.1.133 The Peptide-6 Exporter (Pep6E) Family 3.A.1.138 The Unknown ABC-2-type (ABC2-1) Family 3.A.1.141 The Ethyl Viologen Exporter (EVE) Family (DUF990 Family; ) 3.A.1.142 The Glycolipid Flippase (G.L.Flippase) Family 3.A.1.143 The Exoprotein Secretion System (EcsAB(C)) 3.A.1.144: Functionally Uncharacterized ABC2-1 (ABC2-1) Family 3.A.1.145: Peptidase Fused Functionally Uncharacterized ABC2-2 (ABC2-2) Family 3.A.1.146: The actinorhodin (ACT) and undecylprodigiosin (RED) exporter (ARE) family 3.A.1.147: Functionally Uncharacterized ABC2-2 (ABC2-2) Family 3.A.1.148: Functionally Uncharacterized ABC2-3 (ABC2-3) Family 3.A.1.149: Functionally Uncharacterized ABC2-4 (ABC2-4) Family 3.A.1.150: Functionally Uncharacterized ABC2-5 (ABC2-5) Family 3.A.1.151: Functionally Uncharacterized ABC2-6 (ABC2-6) Family 3.A.1.152: The lipopolysaccharide export (LptBFG) Family () 3.A.1.204 The Eye Pigment Precursor Transporter (EPP) Family (ABCG) 3.A.1.205 The Pleiotropic Drug Resistance (PDR) Family (ABCG) 3.A.1.211 The Cholesterol/Phospholipid/Retinal (CPR) Flippase Family (ABCA) 9.B.74 The Phage Infection Protein (PIP) Family all uptake systems (3.A.1.1 - 3.A.1.34 except 3.A.1.21) 3.A.1.1 Carbohydrate Uptake Transporter-1 (CUT1) 3.A.1.2 Carbohydrate Uptake Transporter-2 (CUT2) 3.A.1.3 Polar Amino Acid Uptake Transporter (PAAT) 3.A.1.4 Hydrophobic Amino Acid Uptake Transporter (HAAT) 3.A.1.5 
Peptide/Opine/Nickel Uptake Transporter (PepT) 3.A.1.6 Sulfate/Tungstate Uptake Transporter (SulT) 3.A.1.7 Phosphate Uptake Transporter (PhoT) 3.A.1.8 Molybdate Uptake Transporter (MolT) 3.A.1.9 Phosphonate Uptake Transporter (PhnT) 3.A.1.10 Ferric Iron Uptake Transporter (FeT) 3.A.1.11 Polyamine/Opine/Phosphonate Uptake Transporter (POPT) 3.A.1.12 Quaternary Amine Uptake Transporter (QAT) 3.A.1.13 Vitamin B12 Uptake Transporter (B12T) 3.A.1.14 Iron Chelate Uptake Transporter (FeCT) 3.A.1.15 Manganese/Zinc/Iron Chelate Uptake Transporter (MZT) 3.A.1.16 Nitrate/Nitrite/Cyanate Uptake Transporter (NitT) 3.A.1.17 Taurine Uptake Transporter (TauT) 3.A.1.19 Thiamin Uptake Transporter (ThiT) 3.A.1.20 Brachyspira Iron Transporter (BIT) 3.A.1.21 Siderophore-Fe3+ Uptake Transporter (SIUT) 3.A.1.24 The Methionine Uptake Transporter (MUT) Family (Similar to 3.A.1.3 and 3.A.1.12) 3.A.1.27 The γ-Hexachlorocyclohexane (HCH) Family (Similar to 3.A.1.24 and 3.A.1.12) 3.A.1.34 The Tryptophan (TrpXYZ) Family ECF uptake systems 3.A.1.18 The Cobalt Uptake Transporter (CoT) Family 3.A.1.22 The Nickel Uptake Transporter (NiT) Family 3.A.1.23 The Nickel/Cobalt Uptake Transporter (NiCoT) Family 3.A.1.25 The Biotin Uptake Transporter (BioMNY) Family 3.A.1.26 The Putative Thiamine Uptake Transporter (ThiW) Family 3.A.1.28 The Queuosine (Queuosine) Family 3.A.1.29 The Methionine Precursor (Met-P) Family 3.A.1.30 The Thiamin Precursor (Thi-P) Family 3.A.1.31 The Unknown-ABC1 (U-ABC1) Family 3.A.1.32 The Cobalamin Precursor (B12-P) Family 3.A.1.33 The Methylthioadenosine (MTA) Family ABC3 (): 3.A.1.114 The Probable Glycolipid Exporter (DevE) Family 3.A.1.122 The Macrolide Exporter (MacB) Family 3.A.1.125 The Lipoprotein Translocase (LPT) Family 3.A.1.134 The Peptide-7 Exporter (Pep7E) Family 3.A.1.136 The Uncharacterized ABC-3-type (U-ABC3-1) Family 3.A.1.137 The Uncharacterized ABC-3-type (U-ABC3-2) Family 3.A.1.140 The FtsX/FtsE Septation (FtsX/FtsE) Family 3.A.1.207 The Eukaryotic ABC3 (E-ABC3) Family Images Many structures of water-soluble domains of ABC proteins have been produced in recent years. See also ATP-binding domain of ABC transporters Transmembrane domain of ABC transporters Elizabeth P. Carpenter, British structural biologist, first to describe structure of human ABC-transporter ABC10 References Further reading External links Classification of ABC transporters in TCDB ABCdb Archaeal and Bacterial ABC Systems database, ABCdb ATP-binding cassette transporters Protein families
ABC transporter
[ "Biology" ]
15,490
[ "Protein families", "Protein classification" ]
1,551,912
https://en.wikipedia.org/wiki/Flat%20roof
A flat roof is a roof which is almost level in contrast to the many types of sloped roofs. The slope of a roof is properly known as its pitch and flat roofs have up to approximately 10°. Flat roofs are an ancient form mostly used in arid climates and allow the roof space to be used as a living space or a living roof. Flat roofs, or "low-slope" roofs, are also commonly found on commercial buildings throughout the world. The U.S.-based National Roofing Contractors Association defines a low-slope roof as having a slope of 3 in 12 (1:4) or less. Flat roofs exist all over the world, and each area has its own tradition or preference for materials used. In warmer climates, where there is less rainfall and freezing is unlikely to occur, many flat roofs are simply built of masonry or concrete and this is good at keeping out the heat of the sun and cheap and easy to build where timber is not readily available. In areas where the roof could become saturated by rain and leak, or where water soaked into the brickwork could freeze to ice and thus lead to 'blowing' (breaking up of the mortar/brickwork/concrete by the expansion of ice as it forms) these roofs are not suitable. Flat roofs are characteristic of the Egyptian, Persian, and Arabian styles of architecture. Around the world, many modern commercial buildings have flat roofs. The roofs are usually clad with a deeper profile roof sheet (usually 40mm deep or greater). This gives the roof sheet very high water carrying capacity and allows the roof sheets to be more than 100 metres long in some cases. The pitch of this type of roof is usually between 1 and 3 degrees depending upon sheet length. Construction methods Any sheet of material used to cover a flat or low-pitched roof is usually known as a membrane and the primary purpose of these membranes is to waterproof the roof area. Materials that cover flat roofs typically allow the water to run off from a slight inclination or camber into a gutter system. Water from some flat roofs such as on garden sheds sometimes flows freely off the edge of a roof, though gutter systems are of advantage in keeping both walls and foundations dry. Gutters on smaller roofs often lead water directly onto the ground, or better, into a specially made soakaway. Gutters on larger roofs usually lead water into the rainwater drainage system of any built up area. Occasionally, however, flat roofs are designed to collect water in a pool, usually for aesthetic purposes, or for rainwater buffering. Traditionally most flat roofs in the western world make use of felt paper applied over roof decking to keep a building watertight. The felt paper is in turn covered with a flood coat of bitumen (asphalt or tar) and then gravel to keep the sun's heat, ultraviolet light and weather off it and helps protect it from cracking or blistering and degradation. Roof decking is usually of plywood, chipboard or oriented strand board (OSB, also known as Sterling board) of around 18mm thickness, steel or concrete. The mopping of bitumen is applied in two or more coats (usually three or four) as a hot liquid, heated in a kettle. A flooded coat of bitumen is applied over the felts and gravel is embedded in the hot bitumen. A main reason for failure of these traditional roofs is ignorance or lack of maintenance. The gravel coating protects the tar underneath from breaking down under UV rays from the sun. The gravel can shift from wind, heavy rainfall, or people walking on the roof. This exposes the tar to weather and sun. 
UV rays lead to material failures such as cracking and blistering, and eventually water gets in. Roofing felts are usually a 'paper' or fiber material impregnated in bitumen. As gravel cannot protect tarpaper surfaces where they rise vertically from the roof such as on parapet walls or upstands, the felts are usually coated with bitumen and protected by sheet metal flashings called gravel stops. The gravel stop terminates the roofing, preventing water from running underneath the roofing and preventing the gravel surfacing from washing off in heavy rains. In some microclimates or shaded areas felt roofs can last well in relation to the cost of materials purchase and cost of laying them. The cost of membranes such as EPDM rubber has come down over recent years. If a leak does occur on a flat roof, damage often goes unnoticed for considerable time as water penetrates and soaks the decking and any insulation and/or structure beneath. This can lead to expensive damage from the rot which often develops and if left can weaken the roof structure. There are health risks to people and animals breathing the mold spores: the severity of this health risk remains a debated point. While the insulation is wet, the "R" value is essentially destroyed. If dealing with an organic insulation, the most common solution is removing and replacing the damaged area. If the problem is detected early enough, the insulation may be saved by repairing the leak, but if it has progressed to creating a sunken area, it may be too late. One problem with maintaining flat roofs is that if water does penetrate the barrier covering, it can travel a long way before causing visible damage or leaking into a building where it can be seen. Thus, it is not easy to find the source of the leak in order to repair it. Once underlying roof decking is soaked, it often sags, creating more room for water to accumulate and further worsening the problem. Another common reason for failure of flat roofs is lack of drain maintenance where gravel, leaves and debris block water outlets (be they spigots, drains, downpipes or gutters). This causes a pressure head of water (the deeper the water, the greater the pressure) which can force more water into the smallest hole or crack. In colder climates, puddling water can freeze, breaking up the roof surface as the ice expands. It is therefore important to maintain your flat roof to avoid excessive repair. An important consideration in tarred flat roof quality is knowing that the common term 'tar' applies to rather different products: tar or pitch (which is derived from wood resins), coal tar, asphalt and bitumen. Some of these products appear to have been interchanged in their use and are sometimes used inappropriately, as each has different characteristics, for example whether or not the product can soak into wood, its anti-fungal properties and its reaction to exposure to sun, weather, and varying temperatures. Modern flat roofs can use single large factory-made sheets such as EPDM synthetic rubber, polyvinyl chloride (PVC), thermoplastic polyolefin (TPO) etc. Although usually of excellent quality, one-piece membranes are called single plies and are used today on many large commercial buildings. Modified bitumen membranes which are widely available in one-meter widths are bonded together in either hot or cold seaming processes during the fitting process, where labor skill and training play a large part in determining the quality of roof protection attained. 
Reasons for not using one-piece membranes include practicality and cost: on all but the smallest of roofs it can be difficult to lift a huge and heavy membrane (a crane or lift is required), and if there is any wind at all it can be difficult to control and bond the membrane smoothly and properly to the roof. Detailing of these systems also plays a part in success or failure: in some systems, ready-made details (such as internal and external corners, through-roof pipe flashings, cable or skylight flashings etc.) are available from the membrane manufacturer and can be well bonded to the main sheet, whereas with materials such as tar papers this is usually not the case: a fitter has to construct these shapes on-site. Success depends largely on the fitter's levels of skill, enthusiasm and training; results can vary hugely. Metals are also used for flat roofs: lead (welded or folded-seamed), tin (folded, soldered or folded-seamed) or copper. These are often expensive options and vulnerable to being stolen and sold as scrap metal. Flat roofs tend to be sensitive to human traffic. Anything which produces a crack or puncture in the waterproofing membrane can quite readily lead to leaks. Flat roofs can fail, for example, when subsequent work is carried out on the roof, when new through-roof service pipes/cables are installed, or when plant such as air conditioning units is installed. A good roofer should be called to make sure the roof is properly watertight before it is left. In trafficked areas, proper advisory/warning signs should be put up and walkways of rubber matting, wooden or plastic duck-boarding etc. should be installed to protect the roof membrane. On some membranes, even stone or concrete paving can be fitted. For one-off works, old carpet or smooth wooden planks for workers to walk or stand on will usually provide reasonable protection. Modernist architecture often viewed the flat roof as a living area. Le Corbusier's theoretical works, particularly Vers une Architecture, and the influential Villa Savoye and Unité d'Habitation prominently feature rooftop terraces. That said, Villa Savoye's roof began leaking almost immediately after the Savoye family moved in. Le Corbusier only narrowly avoided a lawsuit from the family because they had to flee the country as France succumbed to the German Army in the Second World War. Flat roof developments Protected membrane roof A protected membrane roof (PMR) is a roof where thermal insulation or another material is located above the waterproofing membrane. Modern green roofs are a type of protected membrane roof. This development has been made possible by the creation of waterproofing membrane materials that are tolerant of supporting a load and the creation of thermal insulation that is not easily damaged by water. Frequently, rigid panels made of extruded polystyrene are used in PMR construction. The chief benefit of PMR design is that the covering protects the waterproofing membrane from thermal shock, ultraviolet light and mechanical damage. One potential disadvantage of protected membrane roof construction is the need for structural strength to support the weight of ballast that prevents wind from moving rigid foam panels, or the weight of plants and growth media for a green roof. However, when flat roofs are constructed in temperate climates, the need to support snow load makes additional structural strength a common consideration in any event. 
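To give a rough sense of the loads involved (the depth and density figures here are illustrative assumptions, not values taken from this article), the extra dead load from a stone ballast layer scales linearly with its depth and bulk density:

\[
w = \rho\,t \approx 1600\ \mathrm{kg/m^3} \times 0.05\ \mathrm{m} = 80\ \mathrm{kg/m^2} \approx 16\ \mathrm{lb/ft^2}
\]

so even a modest 5 cm layer of gravel adds on the order of 80 kg to every square metre of deck before any snow load or green-roof growing medium is counted, which is why the supporting structure must be checked. 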
Protected membrane roofs are sometimes referred to in the roofing industry as "IRMA" roofs, for "inverted roof membrane assembly". "IRMA" as a roofing term is a genericized trademark. Originally, "IRMA" was a registered trademark of the Dow Chemical Company and stood for "Insulated Roof Membrane Assembly", referring to PMRs assembled using Dow brand extruded polystyrene insulation. Green roofs Grass or turf roofs have been around since Viking times, if not far earlier, and make for a decorative and durable roof covering. Green roofs have been made by depositing topsoil or other growth media on flat roofs and seeding them (or allowing them to self-seed as nature takes its course). Maintenance in the form of simple visible inspection and removal of larger rooting plants allows these roofs to be successful, in that they provide an excellent covering and UV light barrier for the roof waterproofing membrane. With some systems, the manufacturer requires that a root barrier membrane be laid above the waterproofing membrane. If well planned and fitted, the mass of the soil or growth medium can provide a good heat buffer for the building, storing the heat of the sun and releasing it into the building at night and thus keeping inside temperatures more even. Sudden cold spells are also buffered from the building. One predicted problem with large green roofs is that fire may be able to spread rapidly across areas of dry grasses and plants when they are dried, for instance, in summer by hot weather; various countries stipulate fire barrier areas made of, for example, wide strips of (partly decorative) gravel. Sedum is emerging as a favorite as it is easily transported and requires little maintenance: it is a succulent plant which remains close to the ground throughout its growth, has mild roots which do not damage the waterproofing membrane, and changes colour in the seasons in greens, browns and purples to give a pleasing effect to the eye. Green-roof water buffering Water run-off and flash floods have become a problem especially in areas where there is a large amount of paving, such as in inner cities: when rain falls, instead of draining into the ground over a large area as previously, a rainwater system's pipes take water run-off from huge areas of paving, road surfaces and roof areas. As areas become more and more built up, these systems cope less and less well, until even a rain-shower can produce backing up of water from pipes which cannot remove the large water volume, and flooding occurs. By buffering rainfall, such as by fitting green roofs, floods can be reduced or avoided: the rain is absorbed into the soil/roof medium and runs off the roof bit by bit as the roof becomes soaked. Roof decks A modern (since the 1960s) development in the construction of decks, including flat-roof decks, especially when used as living area or the roof of a commercial structure, is to build a composite steel deck. Types of flat roof coverings Asphalt Asphalt is an aliphatic compound and in almost all cases a byproduct of the oil industry. Some asphalt is manufactured from oil as the intended product, and this is limited to high-quality asphalt produced for longer-lasting asphalt built-up roofs (BUR). Asphalt ages through photo-oxidation accelerated by heat. As it ages, the asphalt's melt point rises and there is a loss of plasticizers. As mass is lost, the asphalt shrinks and forms a surface similar to alligator skin. Asphalt breaks down slowly in water, and the more exposure the more rapid the degradation. 
Asphalt also dissolves readily when exposed to oils and some solvents. There are four types of roofing asphalt. Each type is created by heating and blowing with oxygen. The longer the process, the higher the melt-point of the asphalt. Therefore, Type I asphalt has characteristics closest to coal tar and can only be used on dead-level surfaces. Type II is considered flat and can be applied to surfaces with slopes up to 1/4 in 12 (1:48). Type III is considered to be "steep" asphalt but is limited to slopes up to 2 in 12 (1:6), and Type IV is "special steep". The drawback is that the longer it is processed, the shorter the life. Dead-level roofs where Type I asphalt is used as the flood and gravel adhesive perform nearly as well as coal tar. Asphalt roofs are also sustainable in that their life cycle can be restored by making repairs and recoating with compatible products. The process can be repeated as necessary at a significant cost saving with very little impact on the environment. Asphalt BUR is made up of multiple layers of reinforcing plies and asphalt forming a redundancy of waterproofing layers. The reflectivity of built-up roofs depends on the surfacing material used. Gravel is the most common, and these are referred to as asphalt and gravel roofs. Asphalt degradation is a growing concern. UV rays oxidize the surface of the asphalt and produce a chalk-like residue. As plasticizers leach out of the asphalt, asphalt built-up roofs become brittle. Cracking and alligatoring inevitably follow, allowing water to penetrate the system and causing blisters, cracks and leaks. Compared to other systems, installation of asphalt roofs is energy-intensive (hot processes typically use LP gas as the heat source) and contributes to atmospheric air pollution (toxic and greenhouse gases are lost from the asphalt during installation). EPDM Ethylene propylene diene monomer rubber (EPDM) is a synthetic rubber most commonly used in single-ply roofing because it is readily available and simple to apply. Seaming and detailing have evolved over the years and are fast, simple and reliable with many membranes, including factory-applied tape, resulting in a faster installation. The addition of these tapes has reduced labor by as much as 75%. It is a low-cost membrane, but when properly applied in appropriate places, its warranted life-span has reached 30 years and its expected lifespan has reached 50 years. There are three installation methods: ballasted, mechanically attached, and fully adhered. Ballasted roofs are held in place by large round stones or slabs. Mechanically attached roof membranes are held in place with nails and are suitable in some applications where wind velocities are not usually high. A drawback is that the nails penetrate the waterproof membrane; if correctly fastened, the membrane is "self-gasketing" and will not leak. Fully adhered installation methods give the longest performance of the three. The most advanced EPDM is combined with a polyester fleece backing and fabricated with a patented hot-melt adhesive technology which provides consistent bond strength between the fleece backing and the membrane. This largely eliminates shrinkage of the product, whilst still allowing it to stretch up to 300% and move with the building through the seasons. The fleece improves puncture and tear resistance considerably; EPDM with a fleece backing is 180% stronger than bare EPDM. 
Fleece-backed EPDM has a considerably higher tear strength than membrane without the fleece reinforcement, more than 3 times the strength of non-reinforced membranes. This thermoset polymer is known for long-term weathering ability and can withstand fluctuations in temperature and ultraviolet rays. These membranes can also be great energy savers. Butynol Roofing Butynol roofing is a type of roofing material made from synthetic rubber, specifically butyl rubber. It is widely used in New Zealand and other parts of the world for flat and low-slope roofs due to its durability, flexibility, and waterproofing capabilities. Key features of Butynol roofing include durability (a long lifespan and the ability to withstand harsh weather conditions, including heavy rain, strong winds, and UV exposure), flexibility (the material remains flexible over time, allowing it to accommodate the natural movements of a building and preventing cracks and leaks), waterproofing (Butynol forms a continuous membrane that effectively seals the roof, preventing water penetration and damage), and chemical resistance (it is resistant to many chemicals, enhancing its durability and suitability for various applications, including industrial and commercial buildings). Butynol roofing membranes are available in different sizes and weights: 17.86 m rolls at 1.0 mm (30 kg, black) and 17.86 m rolls at 1.5 mm (45 kg, black and grey). Butynol is widely used in roofing applications, particularly for flat and low-slope roofs, and is favored in New Zealand due to its durability and flexibility. CPE and CSPE Chlorosulfonated polyethylene (CSPE) and chlorinated polyethylene (CPE) are nonvulcanized synthetic rubber roofing materials that were used as roofing materials from 1964 until their almost complete removal/disappearance from the market in 2011. CSPE is more popularly known and referred to as Hypalon. The product is usually reinforced, and depending upon the manufacturer, seams can be heat welded (when both membranes were brand new) or adhered with a solvent-based adhesive. 
The final modified bitumen sheet goods are typically installed by heating the underside of the roll with a torch, presenting a significant fire hazard. For this reason, the technique was outlawed in some municipalities when buildings caught fire, some burning to the ground. This problem was alleviated by strict specifications requiring installation training and certification as well as on-site supervision. Another problem developed when a lack of standards allowed a manufacturer to produce the product with insufficient APP, requisite to enhancing the system aging characteristics. A bitumen is a term applied to both coal tar pitch and asphalt products. Modified bitumens were developed in Europe in the 1970s when Europeans became concerned with the lower performance standards of roofing asphalt. Modifiers were added to replace the plasticizers that had been removed by advanced methods in the distillation process. The two most common modifiers are atactic polypropylene (APP) from Italy and styrene-butadiene-styrene (SBS) from France. The United States started developing modified bitumen compounds in the late 1970s and early 1980s. APP was added to asphalt to enhance aging characteristics and was applied to polyester, fiberglass, or polyester and fiberglass membranes to form a sheet good, cut in manageable lengths for handling. SBS is used as a modifier for enhancing substandard asphalt and provides a degree of flexibility much like rubber. It also is applied to a myriad of carriers and produced as a sheet-good in rolls that can be easily handled. Styrene ethylene butadiene styrene (SEBS) is a formulation increasing flexibility of the sheet and longevity. Styrene-isoprene-styrene (SIS) is another modifier used commercially. SIS-modified bitumen is rarely used, is used primarily in self-adhering sheets, and has very small market share. Cold-applied liquid membranes A choice for new roofs and roof refurbishment. This type of a roof membrane is generally referred to as liquid roofing and involves the application of a cold liquid roof coating. No open flames or other heat sources (as are required with torch on felts) are needed and the glass fiber reinforced systems provide seamless waterproofing around roof protrusions and details. Systems are based on flexible thermoset resin systems such as polyester and polyurethane, and poly(methyl methacrylate) (PMMA). It is important that the membrane is not applied too thin like a paint otherwise failure will result. In the United Kingdom, liquid coatings are the fastest growing sector of the flat roof refurbishment market. Between 2005 and 2009 the UK's leading manufacturers reported a 70% increase in the roof area covered by the coating systems supplied. Cold-applied liquid rubber offers similar benefits to thermoset resin systems with the added benefit of being quick to apply and having high elasticity. Although it is comparatively new to the UK market it has been used successfully in the US market for 20 years. However, EPDM is not an easy substrate to adhere to as is any polyolefin so applying liquid membranes over EPDM is not easy. When applying a liquid membrane it is possible to embed glass fiber matting so that the resultant cured membrane is considerably toughened. PVC (vinyl) membrane roofing Polyvinyl chloride (PVC) membrane roofing is also known as vinyl roofing. Vinyl is derived from two simple ingredients: fossil fuel and salt. 
Petroleum or natural gas is processed to make ethylene, and salt is subjected to electrolysis to separate out the natural element chlorine. Ethylene and chlorine are combined to produce ethylene dichloride (EDC), which is further processed into a gas called vinyl chloride monomer (VCM). In the next step, known as polymerization, the VCM molecule forms chains, converting the gas into a fine, white powder (vinyl resin), which becomes the basis for the final process, compounding. In compounding, vinyl resin may be blended with additives such as stabilizers for durability, plasticizers for flexibility and pigments for color. PVC roofing is a thermoplastic system, meaning that it is heat-welded at the seams, forming a permanent, watertight bond that is typically stronger than the membrane itself. PVC resin is modified with plasticizers and UV stabilizers, and reinforced with fiberglass non-woven mats or polyester woven scrims, for use as a flexible roofing membrane. PVC is, however, subject to plasticizer migration (a process by which the plasticizers migrate out of the sheet, causing it to become brittle). Thus, a thicker membrane has a larger reservoir of plasticizer to maintain flexibility over its lifespan. PVC is often blended with other polymers, such as KEE (ketone ethylene ester), to add to the performance capabilities of the original PVC formulation. Such blends are referred to as either a CPA (copolymer alloy) or a TPA (tripolymer alloy). Vinyl roofs provide an energy-efficient roofing option due to their inherently light coloring. While the surface of a black roof can experience a large temperature increase under the heat of the full sun, a white reflective roof typically increases far less. Studies have even shown that a black PVC roof, which is often as much as 60 °F hotter than its white counterpart, will still be as much as 40 °F cooler than black asphalt or EPDM roofs. Vinyl membranes can also be used in waterproofing applications for roofing. This is a common technique used in association with green, or planted, roofs. TPO Thermoplastic polyolefin (TPO) single-ply roofing is the single most popular type of commercial low-slope roof covering as of 2016. A TPO roof membrane consists of three layers: a TPO polymer base, a polyester reinforcement scrim middle layer, and a TPO polymer top ply, which are heat-fused at the factory. TPO roof membranes typically come in three standard thicknesses: 45-mil, 60-mil, and 80-mil. Standard TPO membrane colors are white, grey, and tan, with custom colors also available from most manufacturers. The most popular color for a TPO roof is white, due to the reflective, "cool roof" properties of white TPO. Using white roofing material helps reduce the "heat island effect" and solar heat gain in the building. Although TPO exhibits the positive characteristics of other thermoplastics, it does not have any plasticizers added to the product as other thermoplastics do. This miscategorization made sense when the product was introduced in the early 1990s and was unproven in the industry: TPO was categorized with thermoplastic membranes that were similar in look and performance but far removed from the real chemical and physical characteristics of the TPO membrane. Having no plasticizers and being chemically closer to rubber, but with better seam, puncture, and tear strength, TPO was touted as a white weldable rubber of the future. From 2007 to 2012, reported sales of TPO roofing products by all six major U.S. 
manufacturers showed materials and accessories sales quadrupling those of all other flat roofing materials. TPO roofing systems feature strong seams that are heat-welded, providing superior seam strength and reducing the risk of leaks compared to other roofing systems with adhesive or tape seams. A TPO roof system can be fully adhered, mechanically fastened, or ballasted, although TPO roof systems are rarely ballasted, since the ballast covers up the surface of the roof and negates the reflective property of white TPO. TPO seam strengths are reported to be three to four times higher than those of EPDM roofing systems. TPO is a popular choice for "green" building as there are no plasticizers added and TPO has very low degradation under UV radiation. FPO vs TPO Flexible thermopolyolefin (FPO) is the exact physical and chemical name given to the product commonly known in the industry as TPO (thermoplastic olefin). Thermosets vs Thermoplastics Thermoset roof systems are bonded together using chemicals or adhesives, as opposed to heat-welded systems like thermoplastics. The majority of thermoset roofs are EPDM (ethylene propylene diene monomer) rubber, although CPE, neoprene, and other thermoset roof systems exist. Thermoset roofing is easily formed around shapes like corners and is extremely resistant to ozone, ultraviolet light, weathering, high heat, and abrasion damage, making it an excellent roofing material. EPDM membranes are seamed using pressure-sensitive tapes to join two sheets together, although other thermoset systems, such as CPE and CSPE membranes, can often be chemically bonded. Alternatively, thermoplastic roof systems are bonded through heat-welding, creating what is usually a stronger and more durable bond. Popular thermoplastic roofing systems include TPO and PVC, which together make up over 90% of thermoplastic roofing membranes. While more difficult to form into unique shapes, they instead offer greater bonding strength and longevity compared to thermoset roofing, although they often require specialized training and tools. Coal-tar pitch built-up roof Coal tar is an aromatic hydrocarbon and a by-product of the coking process of the coal industry. It has historically been abundant where coal is used in steel manufacturing. It ages very slowly through volatilization and is an excellent waterproofing and oil-resistant product. Roofs are covered by heating the coal tar and applying it between layers of tar paper. It is typically limited to applications on dead-level or flat roofs with slopes of 1/4 in 12 (1:48) or less. It is the only roofing material permitted by the International Building Code to be applied to slopes below 1/4 in 12; the code allows its use on roofs with slopes as low as 1/8 in 12 (1:96). It has a tendency to soften in warm temperatures and "heal" itself. It is typically surfaced with gravel to protect the roof from UV rays, hail, and foot traffic, as well as for fire protection. Coal tar provides an extremely long life cycle that is sustainable and renewable. It takes energy to manufacture and to construct a roof with it, but its proven longevity with periodic maintenance provides service for many years, with ages from 50 to 70 years not uncommon and some roofs now performing for over a century. Currently, there are cold-process (no kettle is used) coal tar pitch products that almost eliminate all fumes associated with the typical hot-process version. Coal tar pitch is often confused with asphalt and asphalt with coal tar pitch. 
Although they are both black and both are melted in a kettle when used in roofing, that is where the similarity stops. Glass-reinforced plastic A glass-reinforced plastic (GRP) roof is a single-ply GRP laminate applied in situ over a good-quality conditioned plywood or oriented strand board (OSB) deck. The roof is finished with pre-formed GRP edge trims and a coat of pre-pigmented topcoat. The durability and lightweight properties of GRP make it the ideal construction material for applications as diverse as lorry aerofoils and roofs, boats, ponds and automotive body panels. GRP is also used in hostile industrial settings for applications such as tanks and underground pipes; this is due to its ability to withstand high temperatures and its resistance to chemicals. Strictly speaking, GRP was not developed as a roofing material, and some of its properties render it better suited to small craft construction. It is often used on small domestic installations, but usually fails prematurely when used on larger projects. As well as being an inexpensive material, it is robust, inflexible and will never corrode. Metal flat roofing Metal is one of the few materials that can be used for both pitched roofs and flat roofs. Flat or low-slope roofs can be covered with steel, aluminum, zinc, or copper just like pitched roofs. However, metal shingles are not practical for flat roofing, and so roofers recommend standing-seam and screw-down metal panels. While metal can be an expensive option in the short term, the superior durability and simple maintenance of metal roofs typically save money in the long term. A study by Ducker International in 2005 identified the average cost per year of a metal roof to be US while single-ply roofs stood at and built-up roofing at . Metal roofs are also one of the most environmentally sound roofing options, with most metal roofing material already containing 30-60% recycled content, and the product itself being 100% recyclable. The value of recyclable scrap metal can also provide a benefit to the homeowner; upon roof replacement, scrap metal from the old roof can be sold to recoup a potentially large share of original material costs. Benefits, uses, and drawbacks A flat roof is the most cost-efficient roof shape, as all room space can be used fully (below and above the roof). Having a smaller surface area, flat roofs require less material and are usually stronger than pitched roofs. This style of roof also provides ample space for solar panels or outdoor recreational use such as roof gardens. Applying a tough waterproofing membrane forms the ideal substrate for green roof planting schemes. Where gable roofs are uncommon or space is limited, flat roofs may be used as living spaces, with sheltered kitchens, bathrooms, living and sleeping areas. In third world countries, such roof tops are commonly used as areas to dry laundry, for storage, and even as a place to raise livestock. Other uses include pigeon coops, helipads, sports areas (such as tennis courts), and restaurant outdoor seating. While flat roofs are usually designed to shed water, they may still be prone to water ponding, such as from snowmelt. Flat roofs are also more prone to uplift from high winds than are hip or mansard roofs. Maintenance and assessment A flat roof lasts longer if it is properly maintained. Some assessors use 10 years as an average life cycle, although this is dependent on the type of flat roof system in place. 
Some old tar and gravel roofers acknowledge that unless a roof has been neglected for too long and there are many problems in many areas, a BUR (a built up roof of tar, paper and gravel) will last 20–30 years. Despite these estimates, the actual averages, when studied, come closer to 12–27 years, depending on the roof type, with some roofs lasting as long as 120 years. There are BUR systems in place dating to the early 1900s. Modern cold-applied liquid membranes have been durability-rated by the British Board of Agrément (BBA) for 30 years. BBA approval is a benchmark in determining the suitability of a particular fiberglass roofing system. If standard fiberglass polyester resin is used, such as the same resin used in boat repairs, then there will be problems with the roof being too inflexible and not able to accommodate expansion and contraction of the building. A fit-for-purpose flexible/elastomeric resin system used as a waterproofing membrane will last for many years with just occasional inspection needed. The fact that such membranes do not require stone chippings to deflect heat means there is a lower risk of stones blocking drains. Liquid-applied membranes are also naturally resistant to moss and lichen. General flat roof maintenance includes getting rid of ponding water, typically within 48 hours. This is accomplished by adding roof drains or scuppers for a pond at an edge, or automatic siphons for ponds in the center of roofs. An automatic siphon can be created with an inverted ring-shaped sprinkler, a garden hose, a wet/dry vacuum, a check valve installed in the vacuum, and a digital timer. The timer runs two or three times a day for a minute or two to start water flowing in the hose. The timer then turns off the vacuum, but the weight of water in the hose continues the siphon and soon opens the check valve in the vacuum. The best time to address the issue of ponding water is during the design phase of a new roofing project, when sufficient falls can be designed in to take standing water away. The quicker the water is removed from the roof, the less chance there is for a roof leak to occur. All roofs should be inspected semi-annually and after major storms. Particular attention should be paid to the flashings around all of the rooftop penetrations. The sharp bends at such places can open up and need to be sealed with plastic cement, mesh and a small mason's trowel. Additionally, repairs to lap seams in the base flashings should be made. 90% of all roof leaks and failures occur at the flashings. Another important maintenance item, often neglected, is to simply keep the roof drains free of debris. A clogged roof drain will cause water to pond, leading to increased "dead load" weight on a building that may not be engineered to accommodate that weight. Additionally, ponding water on a roof can freeze. Often, water finds its way into a flashing seam and freezes, weakening the seam. For bitumen-based roof coverings, maintenance also includes keeping the tar paper covered with gravel, an older method currently being replaced with bituminous roofing membranes and the like, which must be 'glued' in place so wind and waves do not move it, causing scouring and more bare spots. The glue can be any exterior-grade glue like driveway coating. Maintenance also includes fixing blisters (delaminations) or creases that may not yet be leaking but will leak over time. 
They may need experienced help, as they require scraping away the gravel on a cool morning when the tar is brittle, cutting the blister open, and covering it with plastic cement or mastic and mesh. Any moisture trapped in a blister has to be dried before being repaired. Roof coatings can be used to fix leaks and extend the life of all types of flat roofs by preventing degradation by the sun (ultra-violet radiation). A thickness of is often used and once it is fully cured, a seamless, watertight membrane is created. Infrared thermography is being used to take pictures of roofs at night to find trouble spots. When the roof is cooling, wet spots not visible to the naked eye continue to emit heat. The infrared cameras read the heat that is trapped in sections of wet insulation. Cool roofs Roofing systems that can deliver high solar reflectance (the ability to reflect the visible, infrared and ultraviolet wavelengths of the sun, reducing heat transfer to the building) and high thermal emittance (the ability to release a large percentage of absorbed, or non-reflected, solar energy) are called cool roofs. Cool roofs fall into one of three categories: inherently cool roofs, green planted roofs, or roofs coated with a cool material. Inherently cool roofs: Roof membranes made of white or light colored material are inherently reflective and achieve some of the highest reflectance and emittance measurements of which roofing materials are capable. A roof made of thermoplastic white vinyl, for example, can reflect 80% or more of the sun's rays and emit at least 70% of the solar radiation that the building absorbs. An asphalt roof only reflects between 6 and 26% of solar radiation, resulting in greater heat transfer to the building interior and greater demand for air conditioning, a strain on both operating costs and the electric power grid. Green planted roofs: A green roof is a roof that is partially or completely covered with vegetation and a growing medium, planted over a waterproofing membrane. A green roof typically consists of many layers, including an insulation layer; a waterproof membrane, often vinyl; a drainage layer, usually made of lightweight gravel, clay, or plastic; a geotextile or filter mat that allows water to soak through but prevents erosion of fine soil particles; a growing medium; plants; and, sometimes, a wind blanket. Green roofs are classified as either intensive or extensive, depending on the depth of planting medium and amount of maintenance required. Traditional roof gardens, which are labor-intensive and require a reasonable depth of soil to grow large plants, are considered intensive, while extensive green roofs are nearly self-sustaining and require less maintenance. Coated roofs: One way to make an existing or new roof reflective is by applying a specifically designed white roof coating (not simply white paint) on the roof's surface. The coating can be Energy Star rated. Reflectivity and emissivity ratings for reflective roof products available in the United States can be found on the Cool Roof Rating Council website. Cool roofs offer both immediate and long-term savings in building energy costs. Inherently cool roofs, coated roofs and planted or green roofs can: Reduce building heat gain, as a white or reflective roof typically increases only above ambient temperature during the day Enhance the life expectancy of both the roof membrane and the building's cooling equipment. 
Improve thermal efficiency of the roof insulation; this is because as temperature increases, the thermal conductivity of the roof's insulation also increases. Reduce the demand for electric power by as much as 10 percent on hot days. Reduce resulting air pollution and greenhouse gas emissions. Provide energy savings, even in northern climates on sunny (not necessarily "hot") days. See also Roof pitch List of roof shapes Bituminous waterproofing References Construction Roofs
Flat roof
[ "Technology", "Engineering" ]
8,707
[ "Structural system", "Structural engineering", "Roofs", "Construction" ]
1,551,981
https://en.wikipedia.org/wiki/Medical%20algorithm
A medical algorithm is any computation, formula, statistical survey, nomogram, or look-up table, useful in healthcare. Medical algorithms include decision tree approaches to healthcare treatment (e.g., if symptoms A, B, and C are evident, then use treatment X) and also less clear-cut tools aimed at reducing or defining uncertainty. A medical prescription is also a type of medical algorithm. Scope Medical algorithms are part of a broader field which usually fits under the aims of medical informatics and medical decision-making. Medical decisions occur in several areas of medical activity including medical test selection, diagnosis, therapy and prognosis, and automatic control of medical equipment. In relation to logic-based and artificial neural network-based clinical decision support systems, which are also computer applications used in the medical decision-making field, algorithms are less complex in architecture, data structure and user interface. Medical algorithms are not necessarily implemented using digital computers. In fact, many of them can be represented on paper, in the form of diagrams, nomographs, etc. Examples A wealth of medical information exists in the form of published medical algorithms. These algorithms range from simple calculations to complex outcome predictions. Most clinicians use only a small subset routinely. Examples of medical algorithms are: Calculators, e.g. an on-line or stand-alone calculator for body mass index (BMI) when stature and body weight are given; Flowcharts and drakon-charts, e.g. a binary decision tree for deciding the etiology of chest pain Look-up tables, e.g. for looking up food energy and nutritional contents of foodstuffs Nomograms, e.g. a moving circular slide to calculate body surface area or drug dosages. A common class of algorithms is embedded in guidelines on the choice of treatments produced by many national, state, financial and local healthcare organisations and provided as knowledge resources for day-to-day use and for induction of new physicians. A field which has gained particular attention is the choice of medications for psychiatric conditions. In the United Kingdom, guidelines or algorithms for this have been produced by most of the circa 500 primary care trusts, substantially all of the circa 100 secondary care psychiatric units and many of the circa 10 000 general practices. In the US, there is a national (federal) initiative to provide them for all states, and by 2005 six states were adapting the approach of the Texas Medication Algorithm Project or otherwise working on their production. A grammar—the Arden syntax—exists for describing algorithms in terms of medical logic modules. An approach such as this should allow exchange of MLMs between doctors and establishments, and enrichment of the common stock of tools. Purpose The intended purpose of medical algorithms is to improve and standardize decisions made in the delivery of medical care. Medical algorithms assist in standardizing selection and application of treatment regimens, with algorithm automation intended to reduce potential introduction of errors. Some attempt to predict the outcome, for example critical care scoring systems. Computerized health diagnostics algorithms can provide timely clinical decision support, improve adherence to evidence-based guidelines, and be a resource for education and research. 
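To make the calculator and decision-tree examples above concrete, the following short C program is a minimal illustrative sketch rather than a clinically validated tool: the function names are invented for this example, and the category cut-offs are the commonly cited WHO adult BMI ranges, which are an assumption and not taken from this article. It computes body mass index from body weight and stature and then applies a simple threshold-based decision tree to label the result.

#include <stdio.h>

/* Body mass index from weight in kilograms and height in metres. */
double bmi(double weight_kg, double height_m) {
    return weight_kg / (height_m * height_m);
}

/* A simple threshold-based decision tree mapping a BMI value to a label.
   Cut-offs follow commonly cited WHO adult categories (an assumption,
   not specified in the article itself). */
const char *bmi_category(double b) {
    if (b < 18.5) return "underweight";
    if (b < 25.0) return "normal weight";
    if (b < 30.0) return "overweight";
    return "obese";
}

int main(void) {
    double weight_kg = 70.0; /* example input */
    double height_m  = 1.75; /* example input */
    double b = bmi(weight_kg, height_m);
    printf("BMI = %.1f (%s)\n", b, bmi_category(b));
    return 0;
}

Real clinical algorithms involve many more branches and must follow published guidelines; as the Cautions section below notes, any computed result should be tempered by clinical knowledge and physician judgment.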
Medical algorithms based on best practice can assist everyone involved in delivery of standardized treatment via a wide range of clinical care providers. Many are presented as protocols and it is a key task in training to ensure people step outside the protocol when necessary. In our present state of knowledge, generating hints and producing guidelines may be less satisfying to the authors, but more appropriate. Cautions In common with most science and medicine, algorithms whose contents are not wholly available for scrutiny and open to improvement should be regarded with suspicion. Computations obtained from medical algorithms should be compared with, and tempered by, clinical knowledge and physician judgment. See also Artificial intelligence in healthcare Medical guideline Odds algorithm Further reading Health informatics Algorithms Knowledge representation
Medical algorithm
[ "Mathematics", "Biology" ]
783
[ "Applied mathematics", "Algorithms", "Mathematical logic", "Health informatics", "Medical technology" ]
1,552,050
https://en.wikipedia.org/wiki/Training%20%28meteorology%29
In meteorology, training denotes repeated areas of rain, typically associated with thunderstorms, that move over the same region in a relatively short period. Training thunderstorms are capable of producing excessive rainfall totals, often causing flash flooding. The name training is derived from how a train and its cars travel along a track (moving along a single path), without the track moving. Formation Showers and thunderstorms along thunderstorm trains usually develop in one area of stationary instability, and are advanced along a single path by prevailing winds. Additional showers and storms can also develop when the gust front from a storm collides with warmer air outside of the storm. The exact process repeats in the new storms until overall conditions in the surrounding atmosphere become too stable to support thunderstorm activity. Showers and storms can also develop along stationary fronts, and winds move them down the front. The showers that often accompany thunderstorms are usually thunderstorms that are not entirely developed. Hazards A series of storms continually moving over the same area, dumping heavy rains, can cause flash flooding. Each storm usually produces heavy rain, and after a significant amount of rain falls from the storms which have moved over the same area, flooding occurs. Thunderstorm training Thunderstorm training is used to refer specifically to training occurring with thunderstorms. It forms when storms tend to back build. This type of training can quickly cause flash flooding, especially if the thunderstorms are strong. References External links MCS Movement and Behavior by Stephen Corfidi May 28th Bear Creek Flash Flood Meteorological Analysis by Mike Evan Precipitation Weather hazards Mesoscale meteorology Severe weather and convection fr:Orage#Orages en V ou en série
Training (meteorology)
[ "Physics" ]
340
[ "Weather", "Physical phenomena", "Weather hazards" ]
1,552,120
https://en.wikipedia.org/wiki/Kartoo
KartOO was a meta search engine which displayed a visual interface. It operated from 2001 to early 2010. Interface KartOO had an Adobe Flash GUI, as opposed to a text-based list of results. Its color scheme was to a degree reminiscent of Apple Computer's Aqua interface. Search results were presented as a "map", with blob-like masses of varying color connecting each item. On rollover of an individual result a bunch of red lines connected related links. KartOO sometimes helped to narrow down searches with a general topic. Every "blob" clicked added another word to the search query. The map would often succeed in presenting keywords or subtopics that defined the topic one was searching on, very much like an interactive spider diagram. History It was co-founded in France by two cousins, Laurent Baleydier and Nicholas Baleydier. This project was launched in 2001. Most of their advertisement was through word of mouth. In 2004, KartOO launched a new version called UJIKO (five nearby keys on a keyboard, similar to QWERTY). The interface looked more like a "jukebox" with the linked sites as playlists. In January 2010 KartOO closed down, removing all content from the KartOO and UJIKO websites, but leaving a small message in French thanking its users for their support. By 2011 that message had been removed. Ahead of its time? In a review of the service, Juan C. Dürsteler wrote in 2002, "Perhaps visual representations will begin to predominate in information retrieval when they show things that we can not see in the ordered list. The semantic links of KartOO are an incipient step forward in this sense." Also see Robin Good's Review in MasterNewMedia.org June 30, 2002, Edited here by Luigi Canali De Rossi, http://www.masternewmedia.org/2002/06/30/new_visual_metasearch_clustering_engine.htm The Solar System-like Search Results, where hovering over the planets or satellites gave you the pertinent lines from that web page, and the Genie on the Flying Carpet that interacted with you were innovations unparalleled at the time. References External links Review of KartOO and Clusty (now Yippy), from Brigham Young's Center For Teaching & Learning. Defunct internet search engines
Kartoo
[ "Technology" ]
505
[ "Computing stubs", "World Wide Web stubs" ]
1,552,348
https://en.wikipedia.org/wiki/Excess%20post-exercise%20oxygen%20consumption
Excess post-exercise oxygen consumption (EPOC, informally called afterburn) is a measurably increased rate of oxygen intake following strenuous activity. In historical contexts the term "oxygen debt" was popularized to explain or perhaps attempt to quantify anaerobic energy expenditure, particularly as regards lactic acid/lactate metabolism; in fact, the term "oxygen debt" is still widely used to this day. However, direct and indirect calorimeter experiments have definitively disproven any association of lactate metabolism as causal to an elevated oxygen uptake. In recovery, oxygen (EPOC) is used in the processes that restore the body to a resting state and adapt it to the exercise just performed. These include: hormone balancing, replenishment of fuel stores, cellular repair, innervation, and anabolism. Post-exercise oxygen consumption replenishes the phosphagen system. New ATP is synthesized and some of this ATP donates phosphate groups to creatine until ATP and creatine levels are back to resting state levels again. Another use of EPOC is to fuel the body’s increased metabolism from the increase in body temperature which occurs during exercise. EPOC is accompanied by an elevated consumption of fuel. In response to exercise, fat stores are broken down and free fatty acids (FFA) are released into the blood stream. In recovery, the direct oxidation of free fatty acids as fuel and the energy consuming re-conversion of FFAs back into fat stores both take place. Duration of the effect The EPOC effect is greatest soon after the exercise is completed and decays to a lower level over time. One experiment, involving exertion above baseline, found EPOC increasing metabolic rate to an excess level that decays to 13% three hours after exercise, and 4% after 16 hours, for the studied exercise dose. Another study, specifically designed to test whether the effect existed for more than 16 hours, conducted tests for 48 hours after the conclusion of the exercise and found measurable effects existed up to the 38-hour post-exercise measurement, for the studied exercise dose. Size of the EPOC effect Studies show that the EPOC effect exists after both aerobic exercise and anaerobic exercise. In a 1992 Purdue study, results showed that high intensity, anaerobic type exercise resulted in a significantly greater magnitude of EPOC than aerobic exercise of equal work output. For exercise regimens of comparable duration and intensity, aerobic exercise burns more calories during the exercise itself, but the difference is partly offset by the higher increase in caloric expenditure that occurs during the EPOC phase after anaerobic exercise. Anaerobic exercise in the form of high-intensity interval training was also found in one study to result in greater loss of subcutaneous fat, even though the subjects expended fewer than half as many calories during exercise. Whether this result was caused by the EPOC effect has not been established, and the caloric content of the participants' diet was not controlled during this particular study period. Most researchers use a measure of EPOC as a natural part of the quantification or measurement of exercise and recovery energy expenditure; to others this is not deemed necessary. After a single bout or set of weight lifting, Scott et al. found considerable contributions of EPOC to total energy expenditure. 
In their 2004 survey of the relevant literature, Meirelles and Gomes found: "In summary, EPOC resulting from a single resistance exercise session (i.e., many lifts) does not represent a great impact on energy balance; however, its cumulative effect may be relevant". This is echoed by Reynolds and Kravitz in their survey of the literature where they remarked: "the overall weight-control benefits of EPOC, for men and women, from participation in resistance exercise occur over a significant time period, since kilocalories are expended at a low rate in the individual post-exercise sessions." The EPOC effect clearly increases with the intensity of the exercise, and (at least in the case of aerobic exercise, perhaps also for anaerobic) the duration of the exercise. Studies comparing intermittent and continuous exercise consistently show a greater EPOC response for higher intensity, intermittent exercise. See also High-intensity interval training Exercise physiology Yo-yo effect References Further reading Hayes, Sean (2022), "Burning Fat & Calories Post-Workout via the Afterburn Effect/EPOC." The Pliagility Blog. Exercise biochemistry Exercise physiology de:EPOC (Sportwissenschaft)
Excess post-exercise oxygen consumption
[ "Chemistry", "Biology" ]
932
[ "Biochemistry", "Exercise biochemistry" ]
1,552,466
https://en.wikipedia.org/wiki/Chamfer
A chamfer ( or ) is a transitional edge between two faces of an object. Sometimes defined as a form of bevel, it is often created at a 45° angle between two adjoining right-angled faces. Chamfers are frequently used in machining, carpentry, furniture, concrete formwork, mirrors, and to facilitate assembly of many mechanical engineering designs. Terminology In machining the word bevel is not used to refer to a chamfer. Machinists use chamfers to "ease" otherwise sharp edges, both for safety and to prevent damage to the edges. A chamfer may sometimes be regarded as a type of bevel, and the terms are often used interchangeably. In furniture-making, a lark's tongue is a chamfer which ends short of a piece in a gradual outward curve, leaving the remainder of the edge as a right angle. Chamfers may be formed in either inside or outside adjoining faces of an object or room. By comparison, a fillet (pronounced , like "fill it") is the rounding-off of an interior corner, and a round (or radius) the rounding of an outside one. Carpentry and furniture Chamfers are used in furniture such as counters and table tops to ease their edges to keep people from bruising themselves in the otherwise sharp corner. When the edges are rounded instead, they are called bullnosed. Special tools such as chamfer mills and chamfer planes are sometimes used. Architecture Chamfers are commonly used in architecture, both for functional and aesthetic reasons. For example, the base of the Taj Mahal is a cube with chamfered corners, thereby creating an octagonal architectural footprint. Its great gate is formed of chamfered base stones and chamfered corbels for a balcony or equivalent cornice towards the roof. Urban planning Many city blocks in Barcelona, Valencia and various other cities in Spain, as well as Taichung, and street corners (curbs) in Ponce, Puerto Rico, are chamfered. The chamfering was designed as an embellishment and a modernization of urban space in Barcelona's mid-19th century Eixample or Expansion District, where the buildings follow the chamfering of the sidewalks and streets. This pioneering design opens up broader perspectives, provides pleasant pedestrian areas and allows for greater visibility while turning. It might also be considered to allow for turning to be somewhat more comfortable as, supposedly, drivers would not need to slow down as much when making a turn as they would have to if the corner were a square 90 degrees, though in Barcelona, most chamfered corners are used as parking spaces or loading-unloading zones, leaving the traffic to run as in normal 90-degree street corners. Mechanical engineering Chamfers are frequently used to facilitate assembly of parts which are designed for interference fit or to aid assembly for parts inserted by hand. Resilient materials such as fluid power seals generally require a shallower angle than 45 degrees, often 20. In assemblies, chamfers are also used to clear an interior radius - perhaps from a cutting tool, or to clear other features, such as a weld bead, on an adjoining part. This is because it is generally easier to manufacture and much easier to precisely check the dimensions of a chamfer than a radius, and errors in the profile of either radius could otherwise cause interference between the radii before the flat surfaces make contact with one another. Chamfers are also essential for components which humans will handle, to prevent injuries, and also to prevent damage to other components. 
This is particularly important for hard materials, like most metals, and for heavy assemblies, like press tools. Additionally, a chamfered edge is much more resistant than a square edge to being bruised by other edges or corners knocking against it during assembly or disassembly, or maintenance. Machining In machining a chamfer is a slope cut at any right-angled edge of a workpiece, e.g. holes; the ends of rods, bolts, and pins; the corners of the long-edges of plates; any other place where two surfaces meet at a sharp angle. Chamfering eases assembly, e.g. the insertion of bolts into holes, or nuts. Chamfering also removes sharp edges which reduces significantly the possibility of cuts, and injuries, to people handling the metal piece. Glass mirror design Outside of aesthetics, chamfering is part of the process of hand-crafting a parabolic glass telescope mirror. Before the surface of the disc can be ground, the edges must first be chamfered to prevent edge chipping. This can be accomplished by placing the disc in a metal bowl containing silicon carbide and rotating the disc with a rocking motion. The grit will thus wear off the sharp edge of the glass. References External links Electronic design Electronic engineering Metalworking terminology Woodworking
Chamfer
[ "Technology", "Engineering" ]
1,019
[ "Computer engineering", "Electronic design", "Electronic engineering", "Electrical engineering", "Design" ]
1,552,505
https://en.wikipedia.org/wiki/Vickers%20hardness%20test
The Vickers hardness test was developed in 1921 by Robert L. Smith and George E. Sandland at Vickers Ltd as an alternative to the Brinell method to measure the hardness of materials. The Vickers test is often easier to use than other hardness tests since the required calculations are independent of the size of the indenter, and the indenter can be used for all materials irrespective of hardness. The basic principle, as with all common measures of hardness, is to observe a material's ability to resist plastic deformation from a standard source. The Vickers test can be used for all metals and has one of the widest scales among hardness tests. The unit of hardness given by the test is known as the Vickers Pyramid Number (HV) or Diamond Pyramid Hardness (DPH). The hardness number can be converted into units of pascals, but should not be confused with pressure, which uses the same units. The hardness number is determined by the load over the surface area of the indentation and not the area normal to the force, and is therefore not pressure. Implementation It was decided that the indenter shape should be capable of producing geometrically similar impressions, irrespective of size; the impression should have well-defined points of measurement; and the indenter should have high resistance to self-deformation. A diamond in the form of a square-based pyramid satisfied these conditions. It had been established that the ideal size of a Brinell impression was 3/8 of the ball diameter. As two tangents to the circle at the ends of a chord 3d/8 long intersect at 136°, it was decided to use this as the included angle between plane faces of the indenter tip. This gives an angle of 22° on each side between each face normal and the normal to the horizontal plane (the indenter axis). The angle was varied experimentally and it was found that the hardness value obtained on a homogeneous piece of material remained constant, irrespective of load. Accordingly, loads of various magnitudes are applied to a flat surface, depending on the hardness of the material to be measured. The HV number is then determined by the ratio F/A, where F is the force applied to the diamond in kilograms-force and A is the surface area of the resulting indentation in square millimeters: A = d²/(2 sin(136°/2)), which can be approximated by evaluating the sine term to give A ≈ d²/1.8544, where d is the average length of the diagonal left by the indenter in millimeters. Hence, HV = F/A ≈ 1.8544 F/d², where F is in kgf and d is in millimeters. The corresponding unit of HV is then the kilogram-force per square millimeter (kgf/mm2) or HV number. In the above equation, F could be in N and d in mm, giving HV in the SI unit of MPa. To calculate the Vickers hardness number (VHN) using SI units one needs to convert the force applied from newtons to kilogram-force by dividing by 9.806 65 (standard gravity). This leads to the following equation: HV ≈ 0.1891 F/d², where F is in N and d is in millimeters. A common error is that the above formula to calculate the HV number does not result in a number with the unit newton per square millimeter (N/mm2), but results directly in the Vickers hardness number (usually given without units), which is in fact one kilogram-force per square millimeter (1 kgf/mm2). Vickers hardness numbers are reported as xxxHVyy, e.g. 440HV30, or, if the duration of force differs from 10 s to 15 s, e.g. 440HV30/20, where: 440 is the hardness number, HV names the hardness scale (Vickers), 30 indicates the load used in kgf. 
20 indicates the loading time if it differs from 10 s to 15 s. Precautions When doing the hardness tests, the minimum distance between indentations and the distance from the indentation to the edge of the specimen must be taken into account to avoid interaction between the work-hardened regions and effects of the edge. These minimum distances are different for the ISO 6507-1 and ASTM E384 standards. Vickers values are generally independent of the test force: they will come out the same for 500 gf and 50 kgf, as long as the force is at least 200 gf. However, lower load indents often display a dependence of hardness on indent depth known as the indentation size effect (ISE). Small indent sizes will also have microstructure-dependent hardness values. For thin samples, indentation depth can be an issue due to substrate effects. As a rule of thumb, the sample thickness should be kept greater than 2.5 times the indent diameter. Alternatively, the indent depth, h, can be calculated according to: h = d/(2√2 tan(68°)) ≈ d/7.0. Conversion to SI units To convert the Vickers hardness number to SI units, the hardness number in kilograms-force per square millimeter (kgf/mm2) has to be multiplied by the standard gravity, 9.806 65, to get the hardness in MPa (N/mm2), and furthermore divided by 1000 to get the hardness in GPa. Vickers hardness can also be converted to an SI hardness based on the projected area of the indent rather than the surface area. The projected area, Ap, is defined as Ap = d²/2 for a Vickers indenter geometry. This hardness is sometimes referred to as the mean contact pressure or Meyer hardness, and ideally can be directly compared with other hardness tests also defined using projected area. Care must be used when comparing to other hardness tests due to various size scale factors which can impact the measured hardness. Estimating tensile strength If HV is first expressed in N/mm2 (MPa), or otherwise by converting from kgf/mm2, then the tensile strength (in MPa) of the material can be approximated as σ ≈ HV/c, where c is a constant determined by yield strength, Poisson's ratio, work-hardening exponent and geometrical factors, usually ranging between 2 and 4. In other words, if HV is expressed in N/mm2 (i.e. in MPa) then the tensile strength (in MPa) ≈ HV/3. This empirical law depends variably on the work-hardening behavior of the material. Application The fin attachment pins and sleeves in the Convair 580 airliner were specified by the aircraft manufacturer to be hardened to a Vickers Hardness specification of 390HV5, the '5' meaning five kiloponds. However, on the aircraft flying Partnair Flight 394 the pins were later found to have been replaced with sub-standard parts, leading to rapid wear and finally loss of the aircraft. On examination, accident investigators found that the sub-standard pins had a hardness value of only some 200–230HV5. 
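As a worked illustration of the relations above, the following C sketch computes the Vickers number from a load in kilograms-force and a measured mean diagonal in millimeters, together with the approximate conversion to MPa and the rough tensile-strength estimate. It is a minimal sketch of the stated formulas, not a metrology-grade implementation: the function name and example input values are invented here, and the divisor 3 in the tensile-strength line is simply the approximate constant quoted above.

#include <stdio.h>

/* Vickers hardness from load F in kgf and mean indentation diagonal d in mm:
   HV = F / A, with A = d^2 / (2 sin(68 degrees)), which is about d^2 / 1.8544. */
double vickers_hv(double load_kgf, double diag_mm) {
    double area_mm2 = diag_mm * diag_mm / 1.8544; /* indentation surface area */
    return load_kgf / area_mm2;
}

int main(void) {
    double load_kgf = 30.0;   /* the "30" in a report such as 440HV30 */
    double diag_mm  = 0.355;  /* example measured mean diagonal */
    double hv = vickers_hv(load_kgf, diag_mm);
    double hv_mpa = hv * 9.80665;      /* kgf/mm^2 to MPa */
    double tensile_mpa = hv_mpa / 3.0; /* rough estimate, c ~ 3 */
    printf("Hardness: %.0fHV%.0f (about %.0f MPa)\n", hv, load_kgf, hv_mpa);
    printf("Approximate tensile strength: %.0f MPa\n", tensile_mpa);
    return 0;
}

With these example inputs the program reports roughly 441HV30, illustrating how a reported value such as 440HV30 corresponds to a particular load and diagonal length.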
See also Indentation hardness Leeb Rebound Hardness Test Hardness comparison Knoop hardness test Meyer hardness test Mohs scale Rockwell scale Vickers toughness test of ceramics Superhard material References Further reading ASTM E92: Standard method for Vickers hardness of metallic materials (withdrawn and replaced by E384-10e2) ASTM E384: Standard Test Method for Knoop and Vickers Hardness of Materials ISO 6507-1: Metallic materials – Vickers hardness test – Part 1: Test method ISO 6507-2: Metallic materials – Vickers hardness test – Part 2: Verification and calibration of testing machines ISO 6507-3: Metallic materials – Vickers hardness test – Part 3: Calibration of reference blocks ISO 6507-4: Metallic materials – Vickers hardness test – Part 4: Tables of hardness values ISO 18265: Metallic materials – Conversion of Hardness Values External links Video on the Vickers hardness test Vickers hardness test Conversion table – Vickers, Brinell, and Rockwell scales Hardness tests de:Härte#Härteprüfung nach Vickers (HV)
Vickers hardness test
[ "Materials_science" ]
1,612
[ "Hardness tests", "Materials testing" ]
1,552,507
https://en.wikipedia.org/wiki/Remember%20Me%20%28Star%20Trek%3A%20The%20Next%20Generation%29
"Remember Me" is the 79th episode of the syndicated American science-fiction television series Star Trek: The Next Generation, the fifth episode of the fourth season. Set in the 24th century, the series follows the adventures of the Starfleet crew of the Federation starship Enterprise-D. This episode focuses on the ship's chief medical officer, Dr. Beverly Crusher (Gates McFadden), who notices that her friends—and every trace of them—are vanishing around her. Plot The USS Enterprise docks at Starbase 133, where Dr. Beverly Crusher (McFadden) greets her elderly friend and mentor, Dr. Dalen Quaice (Bill Erwin). After taking him to his quarters, discussing the loss of old friends, Dr. Crusher visits her son Ensign Wesley Crusher (Wil Wheaton) in Engineering. Wesley attempts to create a static warp bubble, but the experiment appears to fail. As the Enterprise leaves Starbase, Dr. Crusher finds that Dr. Quaice is missing, with no record of him coming aboard the ship. As she performs a medical test on transporter chief O'Brien (Colm Meaney), she realizes that her medical staff is missing; further investigation and discussion with the crew show that she has always worked alone in sick bay. Dr. Crusher continues to try to track down the disappearing people and finds more and more crew members that she remembers being completely unknown to the crew or the computer. At one point, a vortex appears near Dr. Crusher and attempts to pull her in, but she is able to hold on to a fixture until it dissipates; the ship shows no record of the vortex's appearance when she investigates. Eventually, no one but Captain Picard (Patrick Stewart) and herself remain on the ship, but Picard believes that the situation is normal. Dr. Crusher orders the computer to give Picard's vital signs over the ship's speakers so she knows he is still there, but shortly thereafter, even he disappears. Then, the vortex reappears, and once again tries to claim Beverly. She is blown across the bridge, but she manages to hang onto Lieutenant Commander Data's (Brent Spiner) chair until the vortex disappears. At this point, the viewer is shown the actual Enterprise, where Wesley had successfully created the warp bubble, accidentally trapping his mother within it. With the warp bubble collapsing rapidly, Wesley's fears lead the Traveler (Eric Menyuk) to appear and help Wesley attempt to stabilize the bubble. The Traveler recommends the Enterprise return to the Starbase, where the warp bubble was formed and may be more stable. Within the warp bubble, Dr. Crusher attempts to direct the Enterprise to the home planet of the Traveler, but soon finds the ship is unable to set that destination, as it no longer exists. More of the universe she knows disappears, soon leaving only the Enterprise. She recognizes the shape as being that of Wesley's warp bubbles, and determines that she is trapped, the earlier vortex being the Enterprise crew's first attempt to save her. As the warp bubble shrinks, erasing parts of the Enterprise, she races for Engineering, the center of the warp bubble, and finds a vortex waiting there. She jumps in at the last moment, finding herself back in Engineering along with Picard, Wesley, Geordi La Forge (LeVar Burton), and the Traveler. She embraces her son and obtains confirmation from Picard that the Enterprise'''s population (1,014 at the time, including her guest Dr. Quaice) is the correct number. Reception Io9 rated "Remember Me" as the 78th-best episode of Star Trek in 2014. 
In 2021, Screen Rant said this was an instance of a Star Trek episode exploring fear of being alone. Home video "Remember Me" was released in the United States on September 3, 2002, as part of the Star Trek: The Next Generation season four DVD box set. See also "Where No One Has Gone Before", the first-season episode where the Traveler is first introduced "The Mark of Gideon", the Star Trek: The Original Series episode where Captain James Kirk, unbeknownst to him, is beamed onto a replica of the Enterprise, and he thinks he is alone. "Revisions", a Stargate SG-1 episode with a similar plot "And When the Sky Was Opened", an episode of The Twilight Zone with a similar plot The Demolished Man, by Alfred Bester; the ending chapters have a similar plot References Star Trek: The Next Generation DVD set, volume 4, disc 2, selection 1 External links "Remember Me" rewatch by Keith R. A. DeCandido Star Trek: The Next Generation season 4 episodes 1990 American television episodes Television episodes directed by Cliff Bole Multiverse
Remember Me (Star Trek: The Next Generation)
[ "Astronomy" ]
998
[ "Astronomical hypotheses", "Multiverse" ]
1,552,523
https://en.wikipedia.org/wiki/Converb
In theoretical linguistics, a converb (abbreviated ) is a nonfinite verb form that serves to express adverbial subordination: notions like 'when', 'because', 'after' and 'while'. Other terms that have been used to refer to converbs include adverbial participle, conjunctive participle, gerund, gerundive and verbal adverb (Ylikoski 2003). Converbs are differentiated from coverbs, verbs in complex predicates in languages that have the serial verb construction. Converbs can be observed in most Turkic languages, Mongolic languages, as well as in all language families of Siberia such as Tungusic. Etymology The term was coined for Khalkha Mongolian by Ramstedt (1902) and until recently, it was used mostly by specialists of Mongolic and Turkic languages to describe non-finite verbs that could be used for both coordination and subordination. Nedjalkov & Nedjalkov (1987) first adopted the term for general typological use, followed by Haspelmath & König (1995). Description A converb depends syntactically on another verb form, but is not its argument. It can be an adjunct, an adverbial, but it cannot be the only predicate of a simple sentence or clausal argument. It cannot depend on predicates such as 'order' (Nedjalkov 1995: 97). Examples On being elected president, he moved with his family to the capital. He walks the streets eating cakes. Khalkha Mongolian The converb -megc denotes that as soon as the first action has been begun/completed, the second action begins. Thus, the subordinate sentence can be understood as a temporal adverbial. There is no context in which the argument structure of another verb or construction would require -megc to appear, and there is no way (possibly except for afterthought) in which a -megc-clause could come sentence-final. Thus, -megc qualifies as a converb in the general linguistic sense. However, from the viewpoint of Mongolian philology (and quite in agreement with Nedjalkov 1995 and Johanson 1995), there is a second converb in this sentence: -ž. At its first occurrence, it is modified by the coverb ehel- ‘to begin’ and this coverb determines that the modified verb has to take the suffix. Yet, the same verbal suffix is used after the verb ‘to beat’ which ends an independent non-finite clause that temporally precedes the following clause but without modifying it in any way that would be fit for an adverbial. It would be possible for -ž to mark an adverbial: Such "polyfunctionality" is common. Japanese and Korean could provide similar examples, and the definition of subordination poses further problems. There are linguists who suggest that a reduction of the domain of the term converb to adverbials does not fit language reality (e.g. Slater 2003: 229). Standard Uzbek Mostly, Uzbek converbs can be translated into English as gerunds, but the context is important as the translation has to be changed as per the former. For example, below are the two sentences including the converb from the verb stem : Alternatively, may denote the meaning of “then” i.e. consecutiveness, so the sentence in this case can be translated as “If you stood up (and) then wrote it”. But in the second example below the same converb can in no way be translated either with gerunditive or consecutive meaning: References Parts of speech
Converb
[ "Technology" ]
771
[ "Parts of speech", "Components" ]
1,552,544
https://en.wikipedia.org/wiki/Onion%20dome
An onion dome is a dome whose shape resembles an onion. Such domes are often larger in diameter than the tholobate (drum) upon which they sit, and their height usually exceeds their width. They taper smoothly upwards to a point. It is a typical feature of churches belonging to the Russian Orthodox church. There are similar buildings in other Eastern European countries, and occasionally in Western Europe: Bavaria (Germany), Austria, and northeastern Italy. Buildings with onion domes are also found in the Oriental regions of Central and South Asia, and the Middle East. However, old buildings outside Russia usually lack the construction typical of the Russian onion design. Other types of Eastern Orthodox cupolas include helmet domes (for example, those of the Dormition Cathedral in Vladimir), Ukrainian pear domes (St Sophia Cathedral in Kyiv), and Baroque bud domes (St Andrew's Church in Kyiv) or an onion-helmet mixture like the St Sophia Cathedral in Novgorod. History According to Wolfgang Born, the onion dome has its origin in Syria, where some Umayyad Caliphate-era mosaics show buildings with bulbous domes. An early prototype of onion dome also appeared in Chehel Dokhter, a mid-11th century Seljuk architecture in Damghan region of Iran. In Russian architecture It is not completely clear when and why onion domes became a typical feature of Russian architecture. The curved onion style appeared in Russian architecture as early as the 13th century. But still several theories exist that the Russian onion shape was influenced by countries from the Orient, like India and Persia, with whom Russia has had lengthy cultural exchange. Byzantine churches and architecture of Kievan Rus were characterized by broader, flatter domes without a special framework erected above the drum. In contrast to this ancient form, each drum of a Russian church is surmounted by a special structure of metal or timber, which is lined with sheet iron or tiles, while the onion architecture is mostly very curved. Russian architecture used the dome shape not only for churches but also for other buildings. By the end of the nineteenth century, most Russian churches from before the Petrine period had bulbous domes. The largest onion domes were erected in the seventeenth century in the area around Yaroslavl. A number of these had more complicated bud-shaped domes, whose form derived from Baroque models of the late seventeenth century. Pear-shaped domes are usually associated with Ukrainian Baroque, while cone-shaped domes are typical for Orthodox churches of Transcaucasia. Oriental origin hypothesis Supposedly, Russian icons painted before the Mongol invasion of Rus' of 1237-1242 do not feature churches with onion domes. Two highly venerated pre-Mongol churches that have been rebuilt—the Assumption Cathedral and the Cathedral of Saint Demetrius, both in Vladimir—display golden helmet domes. Restoration work on several other ancient churches has revealed some fragments of former helmet-like domes below newer onion cupolas. It has been posited that onion domes first appeared in Russia during the reign of Ivan the Terrible (). The domes of Saint Basil's Cathedral have not been altered since the reign of Ivan's son Fyodor I (), indicating the presence of onion domes in sixteenth-century Russia. Some scholars postulate that the Russians adopted onion domes from Muslim countries, possibly from the Khanate of Kazan, whose conquest in 1552 Ivan the Terrible commemorated by erecting St. Basil's Cathedral. 
Some scholars believe that onion domes first appeared in Russian wooden architecture above tent-like churches. According to this theory, they were strictly utilitarian, as they prevented snow from piling on the roof. Indigenous Russian origin hypothesis In 1946, historian Boris Rybakov, while analysing miniatures of ancient Russian chronicles, pointed out that most of them, from the thirteenth century onward, display churches with onion domes rather than helmet domes. Nikolay Voronin, who studied pre-Mongol Russian architecture, seconded his opinion that onion domes existed in Russia as early as the thirteenth century. These findings demonstrated that Russian onion domes could not be imported from the Orient, where onion domes did not replace spherical domes until the fifteenth century. Modern art historian Sergey Zagraevsky surveyed hundreds of Russian icons and miniatures, from the eleventh century onward. He concluded that most icons painted after the Mongol invasion of Rus display only onion domes. The first onion domes appeared on some pictures from the twelfth century. He found only one icon from the late fifteenth century displaying a dome resembling the helmet instead of an onion. His findings led him to dismiss fragments of helmet domes discovered by restorators beneath modern onion domes as post-Petrine stylisations intended to reproduce the familiar forms of Byzantine cupolas. Zagraevsky also indicated that the oldest depictions of the two Vladimir cathedrals represent them as having onion domes, prior to their replacement by classicizing helmet domes. He explains the ubiquitous appearance of onion domes in the late thirteenth century by the general emphasis on verticality characteristic of Russian church architecture from the late twelfth to early fifteenth centuries. At that time, porches, pilasters, vaults and drums were arranged to create a vertical thrust, to make the church seem taller than it was. Another consideration proposed by Zagraevsky links the onion-shaped form of Russian domes with the weight of traditional Russian crosses, which are much larger and more elaborate than those used in Byzantium and Kievan Rus. Such ponderous crosses would have been easily toppled, if they had not been fixed to sizeable stones traditionally placed inside the elongated domes of Russian churches. It is impossible to place such a stone inside the flat dome of the Byzantine type. Symbolism Prior to the eighteenth century, the Russian Orthodox Church did not assign any particular symbolism to the exterior shape of a church. Nevertheless, onion domes are popularly believed to symbolise burning candles. In 1917, religious philosopher Prince Evgenii Troubetzkoy argued that the onion shape of Russian church domes may not be explained rationally. According to Trubetskoy, drums crowned by tapering domes were deliberately scored to resemble candles, thus manifesting a certain aesthetic and religious attitude. Another explanation has it that the onion dome was originally regarded as a form reminiscent of the aedicula (cubiculum) in the Church of the Holy Sepulchre in Jerusalem. Onion domes often appear in groups of three, representing the Holy Trinity, or five, representing Jesus Christ and the Four Evangelists. Domes standing alone represent Jesus. Vasily Tatischev, the first to record this interpretation, disapproved of it emphatically. 
He believed that the five-domed design of churches was propagated by Patriarch Nikon, who liked to compare the central and highest dome with himself and four lateral domes with four other patriarchs of the Orthodox world. There is no other evidence that Nikon ever held such a view. The domes are often brightly painted: their colors may informally symbolise different aspects of religion. Green, blue, and gold domes are sometimes held to represent the Holy Trinity, the Holy Spirit, and Jesus, respectively. Black ball-shaped domes were once popular in the snowy north of Russia. Internationally Asia South Asia The onion dome was also used extensively in Mughal architecture, which later went on to influence Indo-Saracenic architecture. It is also a common feature in Sikh architecture, particularly in Gurudwaras, and sometimes seen in Rajput architecture as well. Elsewhere in Asia Outside the Indian subcontinent, it is also used in Iran and other places in the Middle East and Central Asia. At the end of the 19th century, the Dutch-built Baiturrahman Grand Mosque in Aceh, Indonesia, which incorporated onion shaped dome. The shape of the dome has been used in numerous mosques in Indonesia since then. Europe Western and Central countries Baroque domes in the shape of an onion (or other vegetables or flower-buds) were common in the Holy Roman Empire as well. The first one was built in 1576 by the architect Johannes Holl (1512–1594) on the church of the Convent of the Franciscan Sisters of Maria Stern in Augsburg. Usually made of copper sheet, onion domes appear on Catholic churches all over southern Germany, Switzerland, Czech lands, Austria, and Sardinia and Northeast Italy. Onion domes were also a favourite of 20th-century Austrian architectural designer Friedensreich Hundertwasser. Southern countries The Americas The World's Only Corn Palace, a tourist attraction and basketball arena in Mitchell, South Dakota, also features onion domes on the roof of the structure. See also List of roof shapes Giboshi Ogee Notes and references External links Architectural elements Domes Architecture in Russia Russian inventions Plants in art
Onion dome
[ "Technology", "Engineering" ]
1,751
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
1,552,607
https://en.wikipedia.org/wiki/Linkage%20%28mechanical%29
A mechanical linkage is an assembly of systems connected so as to manage forces and movement. The movement of a body, or link, is studied using geometry so the link is considered to be rigid. The connections between links are modeled as providing ideal movement, pure rotation or sliding for example, and are called joints. A linkage modeled as a network of rigid links and ideal joints is called a kinematic chain. Linkages may be constructed from open chains, closed chains, or a combination of open and closed chains. Each link in a chain is connected by a joint to one or more other links. Thus, a kinematic chain can be modeled as a graph in which the links are paths and the joints are vertices, which is called a linkage graph. The movement of an ideal joint is generally associated with a subgroup of the group of Euclidean displacements. The number of parameters in the subgroup is called the degrees of freedom (DOF) of the joint. Mechanical linkages are usually designed to transform a given input force and movement into a desired output force and movement. The ratio of the output force to the input force is known as the mechanical advantage of the linkage, while the ratio of the input speed to the output speed is known as the speed ratio. The speed ratio and mechanical advantage are defined so they yield the same number in an ideal linkage. A kinematic chain, in which one link is fixed or stationary, is called a mechanism, and a linkage designed to be stationary is called a structure. History Archimedes applied geometry to the study of the lever. Into the 1500s the work of Archimedes and Hero of Alexandria were the primary sources of machine theory. It was Leonardo da Vinci who brought an inventive energy to machines and mechanism. In the mid-1700s the steam engine was of growing importance, and James Watt realized that efficiency could be increased by using different cylinders for expansion and condensation of the steam. This drove his search for a linkage that could transform rotation of a crank into a linear slide, and resulted in his discovery of what is called Watt's linkage. This led to the study of linkages that could generate straight lines, even if only approximately; and inspired the mathematician J. J. Sylvester, who lectured on the Peaucellier linkage, which generates an exact straight line from a rotating crank. The work of Sylvester inspired A. B. Kempe, who showed that linkages for addition and multiplication could be assembled into a system that traced a given algebraic curve. Kempe's design procedure has inspired research at the intersection of geometry and computer science. In the late 1800s F. Reuleaux, A. B. W. Kennedy, and L. Burmester formalized the analysis and synthesis of linkage systems using descriptive geometry, and P. L. Chebyshev introduced analytical techniques for the study and invention of linkages. In the mid-1900s F. Freudenstein and G. N. Sandor used the newly developed digital computer to solve the loop equations of a linkage and determine its dimensions for a desired function, initiating the computer-aided design of linkages. Within two decades these computer techniques were integral to the analysis of complex machine systems and the control of robot manipulators. R. E. 
Kaufman combined the computer's ability to rapidly compute the roots of polynomial equations with a graphical user interface to unite Freudenstein's techniques with the geometrical methods of Reuleaux and Burmester and form KINSYN, an interactive computer graphics system for linkage design. The modern study of linkages includes the analysis and design of articulated systems that appear in robots, machine tools, and cable driven and tensegrity systems. These techniques are also being applied to biological systems and even the study of proteins. Mobility The configuration of a system of rigid links connected by ideal joints is defined by a set of configuration parameters, such as the angles around a revolute joint and the slides along prismatic joints measured between adjacent links. The geometric constraints of the linkage allow calculation of all of the configuration parameters in terms of a minimum set, which are the input parameters. The number of input parameters is called the mobility, or degree of freedom, of the linkage system. A system of n rigid bodies moving in space has 6n degrees of freedom measured relative to a fixed frame. Include this frame in the count of bodies, so that mobility is independent of the choice of the fixed frame, then we have M = 6(N − 1), where N = n + 1 is the number of moving bodies plus the fixed body. Joints that connect bodies in this system remove degrees of freedom and reduce mobility. Specifically, hinges and sliders each impose five constraints and therefore remove five degrees of freedom. It is convenient to define the number of constraints c that a joint imposes in terms of the joint's freedom f, where c = 6 − f. In the case of a hinge or slider, which are one degree of freedom joints, we have f = 1 and therefore c = 6 − 1 = 5. Thus, the mobility of a linkage system formed from n moving links and j joints each with fi, i = 1, ..., j, degrees of freedom can be computed as M = 6(N − 1 − j) + (f1 + f2 + ... + fj), where N includes the fixed link. This is known as Kutzbach–Grübler's equation. There are two important special cases: (i) a simple open chain, and (ii) a simple closed chain. A simple open chain consists of n moving links connected end to end by j joints, with one end connected to a ground link. Thus, in this case N = j + 1 and the mobility of the chain is M = f1 + f2 + ... + fj. For a simple closed chain, n moving links are connected end-to-end by n+1 joints such that the two ends are connected to the ground link forming a loop. In this case, we have N = j and the mobility of the chain is M = (f1 + f2 + ... + fj) − 6. An example of a simple open chain is a serial robot manipulator. These robotic systems are constructed from a series of links connected by six one degree-of-freedom revolute or prismatic joints, so the system has six degrees of freedom. An example of a simple closed chain is the RSSR (revolute-spherical-spherical-revolute) spatial four-bar linkage. The sum of the freedom of these joints is eight, so the mobility of the linkage is two, where one of the degrees of freedom is the rotation of the coupler around the line joining the two S joints. Planar and spherical movement It is common practice to design the linkage system so that the movement of all of the bodies is constrained to lie on parallel planes, to form what is known as a planar linkage. It is also possible to construct the linkage system so that all of the bodies move on concentric spheres, forming a spherical linkage. In both cases, the degrees of freedom of each link are now three rather than six, and the constraints imposed by joints are now c = 3 − f.
In this case, the mobility formula is given by M = 3(N − 1 − j) + (f1 + f2 + ... + fj), and we have the special cases: for a planar or spherical simple open chain, M = f1 + f2 + ... + fj; for a planar or spherical simple closed chain, M = (f1 + f2 + ... + fj) − 3. An example of a planar simple closed chain is the planar four-bar linkage, which is a four-bar loop with four one degree-of-freedom joints and therefore has mobility M = 1. Joints The most familiar joints for linkage systems are the revolute, or hinged, joint denoted by an R, and the prismatic, or sliding, joint denoted by a P. Most other joints used for spatial linkages are modeled as combinations of revolute and prismatic joints. For example, the cylindric joint consists of an RP or PR serial chain constructed so that the axes of the revolute and prismatic joints are parallel, the universal joint consists of an RR serial chain constructed such that the axes of the revolute joints intersect at a 90° angle; the spherical joint consists of an RRR serial chain for which each of the hinged joint axes intersect in the same point; the planar joint can be constructed as a planar RRR, RPR, or PPR serial chain that has three degrees-of-freedom. Analysis and synthesis of linkages The primary mathematical tool for the analysis of a linkage is known as the kinematic equations of the system. This is a sequence of rigid body transformations along a serial chain within the linkage that locates a floating link relative to the ground frame. Each serial chain within the linkage that connects this floating link to ground provides a set of equations that must be satisfied by the configuration parameters of the system. The result is a set of non-linear equations that define the configuration parameters of the system for a set of values for the input parameters. Freudenstein introduced a method to use these equations for the design of a planar four-bar linkage to achieve a specified relation between the input parameters and the configuration of the linkage. Another approach to planar four-bar linkage design was introduced by L. Burmester, and is called Burmester theory. Planar one degree-of-freedom linkages The mobility formula provides a way to determine the number of links and joints in a planar linkage that yields a one degree-of-freedom linkage. If we require the mobility of a planar linkage to be M = 1 and fi = 1, the result is 3(N − 1 − j) + j = 1, or j = (3/2)N − 2. This formula shows that the linkage must have an even number of links, so we have N = 2, j = 1: this is a two-bar linkage known as the lever; N = 4, j = 4: this is the four-bar linkage; N = 6, j = 7: this is a six-bar linkage; it has two links that have three joints, called ternary links, and there are two topologies of this linkage depending on how these links are connected. In the Watt topology, the two ternary links are connected by a joint. In the Stephenson topology the two ternary links are connected by binary links; N = 8, j = 10: the eight-bar linkage has 16 different topologies; N = 10, j = 13: the 10-bar linkage has 230 different topologies; N = 12, j = 16: the 12-bar has 6856 topologies. See Sunkari and Schmidt for the number of 14- and 16-bar topologies, as well as the number of linkages that have two, three and four degrees-of-freedom. The planar four-bar linkage is probably the simplest and most common linkage. It is a one degree-of-freedom system that transforms an input crank rotation or slider displacement into an output rotation or slide.
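To make the mobility bookkeeping above concrete, the following short C program (an illustrative sketch, not part of the article; the function and variable names are invented) evaluates the reconstructed Kutzbach–Grübler formula M = λ(N − 1 − j) + Σ fi, with λ = 6 for spatial linkages and λ = 3 for planar or spherical ones, and checks it against the planar four-bar (M = 1) and spatial RSSR (M = 2) examples discussed above.

#include <stdio.h>

/* Mobility by the Kutzbach-Gruebler criterion.
 * N      : number of links, including the fixed (ground) link
 * j      : number of joints
 * f[]    : freedom of each joint (1 for R or P, 3 for S, ...)
 * lambda : 6 for spatial linkages, 3 for planar or spherical ones
 */
static int mobility(int N, int j, const int f[], int lambda)
{
    int sum = 0;
    for (int i = 0; i < j; i++)
        sum += f[i];
    return lambda * (N - 1 - j) + sum;
}

int main(void)
{
    /* Planar four-bar linkage: 4 links, 4 revolute joints -> M = 1 */
    int fourbar[] = {1, 1, 1, 1};
    printf("planar four-bar: M = %d\n", mobility(4, 4, fourbar, 3));

    /* Spatial RSSR linkage: 4 links, joints R(1) S(3) S(3) R(1) -> M = 2 */
    int rssr[] = {1, 3, 3, 1};
    printf("spatial RSSR:    M = %d\n", mobility(4, 4, rssr, 6));

    return 0;
}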
Examples of four-bar linkages are: the crank-rocker, in which the input crank fully rotates and the output link rocks back and forth; the slider-crank, in which the input crank rotates and the output slide moves back and forth; drag-link mechanisms, in which the input crank fully rotates and drags the output crank in a fully rotational movement. Biological linkages Linkage systems are widely distributed in animals. The most thorough overview of the different types of linkages in animals has been provided by Mees Muller, who also designed a new classification system which is especially well suited for biological systems. A well-known example is the cruciate ligaments of the knee. An important difference between biological and engineering linkages is that revolving bars are rare in biology and that usually only a small range of the theoretically possible is possible due to additional functional constraints (especially the necessity to deliver blood). Biological linkages frequently are compliant. Often one or more bars are formed by ligaments, and often the linkages are three-dimensional. Coupled linkage systems are known, as well as five-, six-, and even seven-bar linkages. Four-bar linkages are by far the most common though. Linkages can be found in joints, such as the knee of tetrapods, the hock of sheep, and the cranial mechanism of birds and reptiles. The latter is responsible for the upward motion of the upper bill in many birds. Linkage mechanisms are especially frequent and manifold in the head of bony fishes, such as wrasses, which have evolved many specialized feeding mechanisms. Especially advanced are the linkage mechanisms of jaw protrusion. For suction feeding a system of linked four-bar linkages is responsible for the coordinated opening of the mouth and 3-D expansion of the buccal cavity. Other linkages are responsible for protrusion of the premaxilla. Linkages are also present as locking mechanisms, such as in the knee of the horse, which enables the animal to sleep standing, without active muscle contraction. In pivot feeding, used by certain bony fishes, a four-bar linkage at first locks the head in a ventrally bent position by the alignment of two bars. The release of the locking mechanism jets the head up and moves the mouth toward the prey within 5–10 ms. Examples Pantograph (four-bar, two DOF) Five bar linkages often have meshing gears for two of the links, creating a one DOF linkage. They can provide greater power transmission with more design flexibility than four-bar linkages. Jansen's linkage is an eight-bar leg mechanism that was invented by kinetic sculptor Theo Jansen. Klann linkage is a six-bar linkage that forms a leg mechanism; Toggle mechanisms are four-bar linkages that are dimensioned so that they can fold and lock. The toggle positions are determined by the colinearity of two of the moving links. The linkage is dimensioned so that the linkage reaches a toggle position just before it folds. The high mechanical advantage allows the input crank to deform the linkage just enough to push it beyond the toggle position. This locks the input in place. Toggle mechanisms are used as clamps. Straight line mechanisms James Watt's parallel motion and Watt's linkage Peaucellier–Lipkin linkage, the first planar linkage to create a perfect straight line output from rotary input; eight-bar, one DOF. A Scott Russell linkage, which converts linear motion, to (almost) linear motion in a line perpendicular to the input. 
Chebyshev linkage, which provides nearly straight motion of a point with a four-bar linkage. Hoekens linkage, which provides nearly straight motion of a point with a four-bar linkage. Sarrus linkage, which provides motion of one surface in a direction normal to another. Hart's inversor, which provides a perfect straight line motion without sliding guides. Gallery See also Assur Groups Dwell mechanism Deployable structure Engineering mechanics Four-bar linkage Mechanical function generator Kinematics Kinematic coupling Kinematic pair Kinematic synthesis Kinematic models in Mathcad Leg mechanism Lever Machine Outline of machines Overconstrained mechanism Parallel motion Reciprocating motion Slider-crank linkage Three-point hitch References Further reading  — Connections between mathematical and real-world mechanical models, historical development of precision machining, some practical advice on fabricating physical models, with ample illustrations and photographs Hartenberg, R.S. & J. Denavit (1964) Kinematic synthesis of linkages, New York: McGraw-Hill — Online link from Cornell University.  — "Linkages: a peculiar fascination" (Chapter 14) is a discussion of mechanical linkage usage in American mathematical education, includes extensive references How to Draw a Straight Line — Historical discussion of linkage design from Cornell University Parmley, Robert. (2000). "Section 23: Linkage." Illustrated Sourcebook of Mechanical Components. New York: McGraw Hill. Drawings and discussion of various linkages. Sclater, Neil. (2011). "Linkages: Drives and Mechanisms." Mechanisms and Mechanical Devices Sourcebook. 5th ed. New York: McGraw Hill. pp. 89–129. . Drawings and designs of various linkages. External links Kinematic Models for Design Digital Library (KMODDL) — Major web resource for kinematics. Movies and photos of hundreds of working mechanical-systems models in the Reuleaux Collection of Mechanisms and Machines at Cornell University, plus 5 other major collections. Includes an e-book library of dozens of classic texts on mechanical design and engineering. Includes CAD models and stereolithographic files for selected mechanisms. Digital Mechanism and Gear Library (DMG-Lib) (in German: Digitale Mechanismen- und Getriebebibliothek) — Online library about linkages and cams (mostly in German) Linkage calculations Introductory linkage lecture Virtual Mechanisms Animated by Java Linkage-based Drawing Apparatus by Robert Howsare (ASOM) Analysis, synthesis and optimization of multibar linkages Linkage animations on mechanicaldesign101.com include planar and spherical four-bar and six-bar linkages. Animations of planar and spherical four-bar linkages. Animation of Bennett's linkage. Example of a six-bar function generator that computes the elevation angle for a given range. Animations of six-bar linkage for a bicycle suspension. A variety of six-bar linkage designs. Introduction to Linkages An open source planar linkage mechanism simulation and mechanical synthesis system. Mechanisms (engineering)
Linkage (mechanical)
[ "Engineering" ]
3,623
[ "Mechanical engineering", "Mechanisms (engineering)" ]
1,552,884
https://en.wikipedia.org/wiki/Cylinder-head-sector
Cylinder-head-sector (CHS) is an early method for giving addresses to each physical block of data on a hard disk drive. It is a 3D-coordinate system made out of a vertical coordinate head, a horizontal (or radial) coordinate cylinder, and an angular coordinate sector. Head selects a circular surface: a platter in the disk (and one of its two sides). Cylinder is a cylindrical intersection through the stack of platters in a disk, centered around the disk's spindle. Combined, cylinder and head intersect to a circular line, or more precisely: a circular strip of physical data blocks called track. Sector finally selects which data block in this track is to be addressed, as the track is subdivided into several equally-sized portions, each of which is an arc of (360/n) degrees, where n is the number of sectors in the track. CHS addresses were exposed, instead of simple linear addresses (going from 0 to the total block count on disk - 1), because early hard drives didn't come with an embedded disk controller, that would hide the physical layout. A separate generic controller card was used, so that the operating system had to know the exact physical "geometry" of the specific drive attached to the controller, to correctly address data blocks. The traditional limits were 512 bytes/sector × 63 sectors/track × 255 heads (tracks/cylinder) × 1024 cylinders, resulting in a limit of 8032.5 MiB for the total capacity of a disk. As the geometry became more complicated (for example, with the introduction of zone bit recording) and drive sizes grew over time, the CHS addressing method became restrictive. Since the late 1980s, hard drives began shipping with an embedded disk controller that had good knowledge of the physical geometry; they would however report a false geometry to the computer, e.g., a larger number of heads than actually present, to gain more addressable space. These logical CHS values would be translated by the controller, thus CHS addressing no longer corresponded to any physical attributes of the drive. By the mid 1990s, hard drive interfaces replaced the CHS scheme with logical block addressing (LBA), but many tools for manipulating the master boot record (MBR) partition table still aligned partitions to cylinder boundaries; thus, artifacts of CHS addressing were still seen in partitioning software by the late 2000s. In the early 2010s, the disk size limitations imposed by MBR became problematic and the GUID Partition Table (GPT) was designed as a replacement; modern computers using UEFI firmware without MBR support no longer use any notions from CHS addressing. Definitions CHS addressing is the process of identifying individual sectors (aka. physical block of data) on a disk by their position in a track, where the track is determined by the head and cylinder numbers. The terms are explained bottom up, for disk addressing the sector is the smallest unit. Disk controllers can introduce address translations to map logical to physical positions, e.g., zone bit recording stores fewer sectors in shorter (inner) tracks, physical disk formats are not necessarily cylindrical, and sector numbers in a track can be skewed. Sectors Floppy disks and controllers had used physical sector sizes of 128, 256, 512 and 1024 bytes (e.g., PC/AX), but formats with 512 bytes per physical sector became dominant in the 1980s. The most common physical sector size for hard disks today is 512 bytes, but there have been hard disks with 520 bytes per sector as well for non-IBM compatible machines. 
In 2005 some Seagate custom hard disks used sector sizes of 1024 bytes. Advanced Format hard disks use 4096 bytes per physical sector (4Kn) since 2010, but will also be able to emulate 512 byte sectors (512e) for a transitional period. Magneto-optical drives use sector sizes of 512 and 1024 bytes on 5.25-inch drives and 512 and 2048 bytes on 3.5-inch drives. In CHS addressing the sector numbers always start at 1; there is no sector 0, which can lead to confusion since logical sector addressing schemes typically start counting with 0, e.g., logical block addressing (LBA), or "relative sector addressing" used in DOS. For physical disk geometries the maximal sector number is determined by the low level format of the disk. However, for disk access with the BIOS of IBM-PC compatible machines, the sector number was encoded in six bits, resulting in a maximal number of 111111 (63) sectors per track. This maximum is still in use for virtual CHS geometries. Tracks The tracks are the thin concentric circular strips of sectors. At least one head is required to read a single track. With respect to disk geometries the terms track and cylinder are closely related. For a single or double sided floppy disk track is the common term; and for more than two heads cylinder is the common term. Strictly speaking a track is a given CH combination consisting of SPT sectors, while a cylinder consists of SPT×H sectors. Cylinders A cylinder is a division of data in a disk drive, as used in the CHS addressing mode of a fixed-block architecture (FBA) disk or the cylinder–head–record (CCHHR) addressing mode of a CKD disk. The concept is concentric, hollow, cylindrical slices through the physical disks (platters), collecting the respective circular tracks aligned through the stack of platters. The number of cylinders of a disk drive exactly equals the number of tracks on a single surface in the drive. It comprises the same track number on each platter, spanning all such tracks across each platter surface that is able to store data (without regard to whether or not the track is "bad"). Cylinders are vertically formed by tracks. In other words, track 12 on platter 0 plus track 12 on platter 1 etc. is cylinder 12. Other forms of Direct Access Storage Device (DASD), such as drum memory devices or the IBM 2321 Data Cell, might give block addresses that include a cylinder address, although the cylinder address doesn't select a (geometric) cylindrical slice of the device. Heads A device called a head reads and writes data in a hard drive by manipulating the magnetic medium that composes the surface of an associated disk platter. Naturally, a platter has 2 sides and thus 2 surfaces on which data can be manipulated; usually there are 2 heads per platter, one per side. (Sometimes the term side is substituted for head, since platters might be separated from their head assemblies, as with the removable media of a floppy drive.) The CHS addressing supported in IBM-PC compatible BIOS code used eight bits for a maximum of 256 heads counted as head 0 up to 255 (FFh). However, a bug in all versions of Microsoft DOS/IBM PC DOS up to and including 7.10 will cause these operating systems to crash on boot when encountering volumes with 256 heads. Therefore, all compatible BIOSes will use mappings with up to 255 heads (00h..FEh) only, including in virtual 255×63 geometries.
This historical oddity can affect the maximum disk size in old BIOS INT 13h code as well as old PC DOS or similar operating systems: (512 bytes/sector)×(63 sectors/track)×(255 heads (tracks/cylinder))×(1024 cylinders)=8032.5 MB, but actually 512×63×256×1024=8064 MB yields what is known as 8 GB limit. In this context relevant definition of 8 GB = 8192 MB is another incorrect limit, because it would require CHS 512×64×256 with 64 sectors per track. Tracks and cylinders are counted from 0, i.e., track 0 is the first (outer-most) track on floppy or other cylindrical disks. Old BIOS code supported ten bits in CHS addressing with up to 1024 cylinders (1024=210). Adding six bits for sectors and eight bits for heads results in the 24 bits supported by BIOS interrupt 13h. Subtracting the disallowed sector number 0 in 1024×256 tracks corresponds to 128 MB for a sector size of 512 bytes (128 MB=1024×256×(512 byte/sector)); and 8192-128=8064 confirms the (roughly) 8 GB limit. CHS addressing starts at 0/0/1 with a maximal value 1023/255/63 for 24=10+8+6 bits, or 1023/254/63 for 24 bits limited to 255 heads. CHS values used to specify the geometry of a disk have to count cylinder 0 and head 0 resulting in a maximum (1024/256/63 or) 1024/255/63 for 24 bits with (256 or) 255 heads. In CHS tuples specifying a geometry S actually means sectors per track, and where the (virtual) geometry still matches the capacity the disk contains C×H×S sectors. As larger hard disks have come into use, a cylinder has become also a logical disk structure, standardised at 16 065 sectors (16065=255×63). CHS addressing with 28 bits (EIDE and ATA-2) permits eight bits for sectors still starting at 1, i.e., sectors 1...255, four bits for heads 0...15, and sixteen bits for cylinders 0...65535. This results in a roughly 128 GB limit; actually 65536×16×255=267386880 sectors corresponding to 130560 MB for a sector size of 512 bytes. The 28=16+4+8 bits in the ATA-2 specification are also covered by Ralf Brown's Interrupt List, and an old working draft of this now expired standard was published. With an old BIOS limit of 1024 cylinders and the ATA limit of 16 heads the combined effect was 1024×16×63=1032192 sectors, i.e., a 504 MB limit for sector size 512. BIOS translation schemes known as ECHS and revised ECHS mitigated this limitation by using 128 or 240 instead of 16 heads, simultaneously reducing the numbers of cylinders and sectors to fit into 1024/128/63 (ECHS limit: 4032 MB) or 1024/240/63 (revised ECHS limit: 7560 MB) for the given total number of sectors on a disk. Blocks and clusters The Unix communities employ the term block to refer to a sector or group of sectors. For example, the Linux fdisk utility, before version 2.25, displayed partition sizes using 1024-byte blocks. Clusters are allocation units for data on various file systems (FAT, NTFS, etc.), where data mainly consists of files. Clusters are not directly affected by the physical or virtual geometry of the disk, i.e., a cluster can begin at a sector near the end of a given CH track, and end in a sector on the physically or logically next CH track. CHS to LBA mapping In 2002 the ATA-6 specification introduced an optional 48 bits Logical Block Addressing and declared CHS addressing as obsolete, but still allowed to implement the ATA-5 translations. Unsurprisingly the CHS to LBA translation formula given below also matches the last ATA-5 CHS translation. In the ATA-5 specification CHS support was mandatory for up to 16 514 064 sectors and optional for larger disks. 
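As a quick check of the capacity limits quoted above, the following C sketch (illustrative only, not part of the article) multiplies out a few of the geometry ceilings, assuming 512-byte sectors and the binary megabyte (MB = 1024×1024 bytes) used in this section.

#include <stdio.h>
#include <stdint.h>

/* Capacity implied by a CHS geometry limit, in bytes (512-byte sectors). */
static uint64_t chs_capacity(uint64_t cylinders, uint64_t heads, uint64_t spt)
{
    return cylinders * heads * spt * 512u;
}

int main(void)
{
    const double MB = 1024.0 * 1024.0;

    /* Old BIOS limit combined with the 16-head ATA limit: about 504 MB */
    printf("1024/16/63   : %9.1f MB\n", chs_capacity(1024, 16, 63) / MB);

    /* BIOS limit with 255 heads: about 8032.5 MB (the "8 GB limit")    */
    printf("1024/255/63  : %9.1f MB\n", chs_capacity(1024, 255, 63) / MB);

    /* 28-bit ATA-2 CHS, 65536/16/255: about 130560 MB                  */
    printf("65536/16/255 : %9.1f MB\n", chs_capacity(65536, 16, 255) / MB);

    return 0;
}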
The ATA-5 limit corresponds to CHS 16383 16 63 or equivalent disk capacities (16383×16×63 = 16514064 sectors, about 8063.5 MB for 512-byte sectors), and requires 24 = 14+4+6 bits. CHS tuples can be mapped onto LBA addresses using the following formula: LBA = (C × HPC + H) × SPT + (S − 1), where LBA is the LBA address, HPC is the number of heads on the disk (heads per cylinder), SPT is the maximum number of sectors per track, and (C, H, S) is the CHS address. A Logical Sector Number formula in the ECMA-107 and ISO/IEC 9293:1994 (superseding ISO 9293:1987) standards for FAT file systems matches exactly the LBA formula given above: Logical Block Address and Logical Sector Number (LSN) are synonyms. The formula does not use the number of cylinders, but requires the number of heads and the number of sectors per track in the disk geometry, because the same CHS tuple addresses different logical sector numbers depending on the geometry. Examples: For geometry 1020 16 63 of a disk with 1028160 sectors, CHS 3 2 1 is LBA (3×16 + 2)×63 + (1 − 1) = 3150; For geometry 1008 4 255 of a disk with 1028160 sectors, CHS 3 2 1 is LBA (3×4 + 2)×255 + (1 − 1) = 3570; For geometry 64 255 63 of a disk with 1028160 sectors, CHS 3 2 1 is LBA (3×255 + 2)×63 + (1 − 1) = 48321; For geometry 2142 15 32 of a disk with 1028160 sectors, CHS 3 2 1 is LBA (3×15 + 2)×32 + (1 − 1) = 1504. To help visualize the sequencing of sectors into a linear LBA model, note that: The first LBA sector is sector # zero; the same sector in a CHS model is called sector # one. All the sectors of each head/track get counted before incrementing to the next head/track. All the heads/tracks of the same cylinder get counted before incrementing to the next cylinder. The outside half of a whole hard drive would be the first half of the drive. History Cylinder Head Record format has been used by Count Key Data (CKD) hard disks on IBM mainframes since at least the 1960s. This is largely comparable to the Cylinder Head Sector format used by PCs, with the exception that the sector size was not fixed but could vary from track to track based on the needs of each application. In contemporary use, the disk geometry presented to the mainframe is emulated by the storage firmware, and no longer has any relation to physical disk geometry. Earlier hard drives used in the PC, such as MFM and RLL drives, divided each cylinder into an equal number of sectors, so the CHS values matched the physical properties of the drive. A drive with a CHS tuple of 500 4 32 would have 500 tracks per side on each platter, two platters (4 heads), and 32 sectors per track, with a total of 32 768 000 bytes (31.25 MiB). ATA/IDE drives were much more efficient at storing data and have replaced the now-obsolete MFM and RLL drives. They use zone bit recording (ZBR), where the number of sectors dividing each track varies with the location of groups of tracks on the surface of the platter. Tracks nearer to the edge of the platter contain more blocks of data than tracks close to the spindle, because there is more physical space within a given track near the edge of the platter. Thus, the CHS addressing scheme cannot correspond directly with the physical geometry of such drives, due to the varying number of sectors per track for different regions on a platter. Because of this, many drives still have a surplus of sectors (less than 1 cylinder in size) at the end of the drive, since the total number of sectors rarely, if ever, ends on a cylinder boundary. An ATA/IDE drive can be set in the system BIOS with any configuration of cylinders, heads and sectors that do not exceed the capacity of the drive (or the BIOS), since the drive will convert any given CHS value into an actual address for its specific hardware configuration. This however can cause compatibility problems.
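The following C sketch (not from the article; the struct and function names are invented for illustration) applies the CHS-to-LBA formula given above to the four example geometries and reproduces the LBA values for CHS 3 2 1.

#include <stdio.h>
#include <stdint.h>

struct chs { uint32_t c, h, s; };  /* cylinder, head, sector (sectors start at 1) */

/* LBA = (C * HPC + H) * SPT + (S - 1), where HPC is heads per cylinder
 * and SPT is sectors per track.                                        */
static uint32_t chs_to_lba(struct chs a, uint32_t hpc, uint32_t spt)
{
    return (a.c * hpc + a.h) * spt + (a.s - 1);
}

int main(void)
{
    struct chs a = {3, 2, 1};

    printf("geometry 1020/16/63 : LBA %u\n", chs_to_lba(a, 16, 63));   /* 3150  */
    printf("geometry 1008/4/255 : LBA %u\n", chs_to_lba(a, 4, 255));   /* 3570  */
    printf("geometry 64/255/63  : LBA %u\n", chs_to_lba(a, 255, 63));  /* 48321 */
    printf("geometry 2142/15/32 : LBA %u\n", chs_to_lba(a, 15, 32));   /* 1504  */

    return 0;
}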
For operating systems such as Microsoft DOS or older versions of Windows, each partition must start and end at a cylinder boundary. Only some of the relatively modern operating systems (Windows XP included) may disregard this rule, but doing so can still cause some compatibility issues, especially if the user wants to perform dual booting on the same drive. Microsoft's internal disk partitioning tools have not followed this rule since Windows Vista. See also CD-ROM format Block (data storage) Disk storage Disk formatting File Allocation Table Disk partitioning References Notes 1. This rule is true at least for all formats where the physical sectors are named 1 upwards. However, there are a few odd floppy formats (e.g., the 640 KB format used by BBC Master 512 with DOS Plus 2.1), where the first sector in a track is named "0" not "1". 2. While computers begin counting at 0, DOS would begin counting at 1. In order to do this, DOS would add a 1 to the head count before displaying it on the screen. However, instead of converting the 8-bit unsigned integer to a larger size (such as a 16-bit integer) first, DOS just added the 1. This would overflow a head count of 255 (0xFF) into 0 (0x100 & 0xFF = 0x00) instead of the 256 that would be expected. This was fixed with DOS 8, but by then, it had become a de facto standard to not use a head value of 255. AT Attachment BIOS Computer file systems Hard disk computer storage Rotating disc computer storage media Computer storage devices IBM storage devices
Cylinder-head-sector
[ "Technology" ]
3,550
[ "Computer storage devices", "Recording devices" ]
1,552,947
https://en.wikipedia.org/wiki/Strain%20%28music%29
A strain is a series of musical phrases that create a distinct melody of a piece. A strain is often referred to as a "section" of a musical piece. Often, a strain is repeated for the sake of instilling the melody clearly. This is so in ragtime and marches. The Oxford English Dictionary lists this use of "strain" (n.2, III, 12) as part of the same noun more often used to denote an extreme of effort or pressure. OED derives it from the verb, which could once be used to mean "sing," and speculates that this usage derives from one in which the word denotes increasing the tension of a string on a musical instrument. References Formal sections in music analysis
Strain (music)
[ "Technology" ]
149
[ "Components", "Formal sections in music analysis" ]
1,553,015
https://en.wikipedia.org/wiki/Exposed%20node%20problem
In wireless networks, the exposed node problem occurs when a node is prevented from sending packets to other nodes because of co-channel interference with a neighboring transmitter. Consider an example of four nodes labeled R1, S1, S2, and R2, where the two receivers (R1, R2) are out of range of each other, yet the two transmitters (S1, S2) in the middle are in range of each other. Here, if a transmission between S1 and R1 is taking place, node S2 is prevented from transmitting to R2 as it concludes after carrier sense that it will interfere with the transmission by its neighbor S1. However note that R2 could still receive the transmission of S2 without interference because it is out of range of S1. IEEE 802.11 RTS/CTS mechanism helps to solve this problem only if the nodes are synchronized and packet sizes and data rates are the same for both the transmitting nodes. When a node hears an RTS from a neighboring node, but not the corresponding CTS, that node can deduce that it is an exposed node and is permitted to transmit to other neighboring nodes. If the nodes are not synchronised (or if the packet sizes are different or the data rates are different) the problem may occur that the sender will not hear the CTS or the ACK during the transmission of data of the second sender. The exposed node problem is not an issue in cellular networks as the power and distance between cells is controlled to avoid it. See also Hidden node problem IEEE 802.11 RTS/CTS Multiple Access with Collision Avoidance for Wireless (MACAW) References Further reading Wireless networking E
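As a rough illustration of the situation described above (not part of the article; the coordinates and radio range are invented), the C sketch below places R1, S1, S2 and R2 on a line so that only neighbouring nodes can hear each other, and applies the usual rule of thumb: S2 is an exposed node if it can hear S1 but its own transmission could not reach R1, so deferring on carrier sense alone is unnecessary.

#include <stdio.h>

#define RANGE 10.0  /* common radio range for all nodes (assumed) */

/* 1 if two nodes at 1-D positions a and b can hear each other. */
static int in_range(double a, double b)
{
    double d = a > b ? a - b : b - a;
    return d <= RANGE;
}

int main(void)
{
    /* R1 -- S1 -- S2 -- R2 on a line; only adjacent nodes are in range. */
    double r1 = 0.0, s1 = 10.0, s2 = 20.0, r2 = 30.0;

    int s2_hears_s1   = in_range(s2, s1);  /* carrier sense makes S2 defer      */
    int s2_reaches_r1 = in_range(s2, r1);  /* would S2 corrupt reception at R1? */
    int s1_reaches_r2 = in_range(s1, r2);  /* would S1 corrupt reception at R2? */

    if (s2_hears_s1 && !s2_reaches_r1 && !s1_reaches_r2)
        printf("S2 is an exposed node: it defers even though S2->R2 would succeed.\n");
    else
        printf("S2 is not exposed in this layout.\n");

    return 0;
}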
Exposed node problem
[ "Technology", "Engineering" ]
347
[ "Wireless networking", "Computer networks engineering" ]
1,553,120
https://en.wikipedia.org/wiki/Delphi%20%28online%20service%29
Delphi Forums is a U.S. online service provider and since the mid-1990s has been a community internet forum site. It started as a nationwide dialup service in 1983. Delphi Forums remains active as of 2025. History The company that became Delphi was founded by Wes Kussmaul as Kussmaul Encyclopedia in 1981 and featured an encyclopedia, e-mail, and a primitive chat. Newswires, bulletin boards and better chat were added in early 1982. Kussmaul recalled: Delphi was actually launched in October 1981, at Jerry Milden's Northeast Computer Show, as the Kussmaul Encyclopedia--the world's first commercially available computerized encyclopedia. (Frank Greenagle's Arête Encyclopedia was announced at about the same time, but you couldn't buy it until much later.) The Kussmaul Encyclopedia was actually a complete home computer system (your choice of Tandy Color Computer or Apple II) with a 300-bps modem that dialed up to a VAX computer hosting our online encyclopedia database. We sold the system for about the same price and terms as Britannica. People wandered around in it and were impressed with the ease with which they could find information. We had a wonderful cross-referencing system that turned every occurrence of a word that was the name of an entry in the encyclopedia into a hypertext link—in 1981... In November 1982, Wes hired Glenn McIntyre as a software engineer primarily doing internal systems. Glenn brought in colleagues Kip Bryan and Dan Bruns. Kip wrote the software that became Delphi Conference and Delphi Forums. Dan upon finishing his MBA at Harvard, become President and subsequently CEO when Wes moved on to form Global Villages. On March 15, 1983, the Delphi name was first used by General Videotex Corporation. Forums were text-based, and accessed via Telenet, Sprintnet, Tymnet, Uninet, and Datapac. In 1984, it had 4 million members. Delphi was extended to Argentina in 1985, through a partnership with the Argentine IT company Siscotel S.A. Delphi partnered with ASCII Corp. of Japan to open online services in 1991. Delphi provided national consumer access to the Internet in 1992. Features included E-mail (July 1992), FTP, Telnet, Usenet, text-based Web access (November 1992), MUDs, Finger, and Gopher. "To a lot of people at the time, we seemed to be in an enviable position" says Dan Bruns, Delphi's CEO. "But we didn't have a lot of financing to fuel our growth..." In 1993, Delphi was sold to Rupert Murdoch's News Corporation. News Corporation recognized that there would be growth in consumer use of the internet and attempted to use Delphi as its vehicle. It had 125,000 text-based customers in 1995 and had 150 employees. Murdoch hired away IBM's director of high-performance computing and communications, Alan Baratz, in 1994 to run Delphi. Under Baratz, Delphi acquired space in Cross Point, an office complex in Lowell, Massachusetts constructed for Wang Laboratories, and built a large state-of-the-art server farm. Bruns and General Manager Rusty Williams stayed on. Delphi peaked with 500,000 paid subscribers and about 600 employees. By 1995, Delphi had lost many of its subscribers, and Bruns left Delphi. In 1996, NewsCorp decided to exit the online business, was laying off almost half of Delphi's employees and wanted to sell or close Delphi. Dan Bruns and some of Delphi's original investors bought Delphi from NewsCorp for an undisclosed amount. With only 50,000 paying subscribers left, Delphi was back to its pre-NewsCorp size. 
"We were on the same growth slope, but this time we were going down instead of up," he says. "It felt a little poetic." In 1996, Delphi launched a free, ad-supported managed-content website with associated message boards and chat rooms, under the management of a team led by Dan Bruns and which included Bill Louden, who had headed GEnie during its heyday. For a period of time, both text-based and web-based community services were available. After a year as a managed content site, Delphi reinvented itself as a community-driven service that allowed anyone to create an online community. Prospero Technologies was formed in January 2000 as the merger of Delphi Forums and Wellengaged. Webpages for forums were discontinued. In 2001, Rob Brazell purchased Delphi Forums, merged it with eHow and Idea Exchange, and formed Blue Frogg Enterprises. The Delphi.com domain was sold to Delphi Corporation, the auto parts manufacturer. Prospero was sold to Inforonics. In 2002, Prospero reacquired Delphi Forums, joining it with Talk City to form Delphi Forums LLC. In 2008, online community developer Mzinga acquired Littleton-based Prospero Technologies LLC, which was then owned by Bruce Buckland, chairman and CEO of Mallory Ventures. In March 2009, a Forrester Research analyst reported on Twitter that Mzinga was having financial difficulties after it had completed a second round of layoffs. On September 1, 2011, Mzinga sold Delphiforums back to early owner Dan Bruns. In January 2012, Delphi Forums resigned from the Better Business Bureau in protest of their support for the Stop Online Piracy Act (SOPA). In February 2013, Delphi Forums celebrated its 30th anniversary. Delphi owner Dan Bruns said, "It's true that the Delphi that launched in 1983 was very different from today's internet," Bruns said, "but one thing remains the same: places like Delphi Forums provide a friendly, comfortable setting for people to share common interests and passions and to build lasting friendships. If we keep that simple truth in mind, we have a terrific legacy to build on going forward." During 2014, Delphi Forums began a beta test of a new forum interface, called Zeta. The current long-time format, now called Classic, also remains, and hosts may use either interface. Additional sources Delphi Forums History Project Boston Globe: Zitner, Aaron May 04, 1995 Delphi will move to N.Y., Lowell See also Stellar Conquest References Internet service providers of the United States Internet forums Pre–World Wide Web online services Communications in Massachusetts History of computing History of the Internet
Delphi (online service)
[ "Technology" ]
1,368
[ "Computers", "History of computing" ]
1,553,317
https://en.wikipedia.org/wiki/Optical%20medium
In optics, an optical medium is material through which light and other electromagnetic waves propagate. It is a form of transmission medium. The permittivity and permeability of the medium define how electromagnetic waves propagate in it. Properties The optical medium has an intrinsic impedance, given by η = E/H, where E and H are the electric field and magnetic field, respectively. In a region with no electrical conductivity, the expression simplifies to: η = √(μ/ε). For example, in free space the intrinsic impedance is called the characteristic impedance of vacuum, denoted Z0, and Z0 = √(μ0/ε0) ≈ 377 Ω. Waves propagate through a medium with velocity v = fλ, where f is the frequency and λ is the wavelength of the electromagnetic waves. This equation also may be put in the form v = ω/k, where ω is the angular frequency of the wave and k is the wavenumber of the wave. In electrical engineering, the symbol β, called the phase constant, is often used instead of k. The propagation velocity of electromagnetic waves in free space, an idealized standard reference state (like absolute zero for temperature), is conventionally denoted by c0: c0 = 1/√(ε0μ0), where ε0 is the electric constant and μ0 is the magnetic constant. For a general introduction, see Serway. For a discussion of synthetic media, see Joannopoulus. Types Homogeneous medium vs. heterogeneous medium Transparent medium vs. opaque body Translucent medium See also Čerenkov radiation Electromagnetic spectrum Electromagnetic radiation Optics SI units Free space Metamaterial Photonic crystal Photonic crystal fiber Notes and references Optics Electric and magnetic fields in matter
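A short C sketch (illustrative only, not part of the article) evaluating the expressions above: the characteristic impedance of vacuum Z0 = √(μ0/ε0), the free-space velocity c0 = 1/√(ε0μ0), and the corresponding values in a lossless non-magnetic dielectric; the relative permittivity 2.25 (refractive index 1.5, roughly that of common glass) is an arbitrary example value.

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double eps0 = 8.8541878128e-12;  /* electric constant, F/m */
    const double mu0  = 1.25663706212e-6;  /* magnetic constant, H/m */

    /* Free space: characteristic impedance and propagation velocity. */
    double Z0 = sqrt(mu0 / eps0);
    double c0 = 1.0 / sqrt(eps0 * mu0);
    printf("vacuum:     Z0 = %.1f ohm, c0 = %.4e m/s\n", Z0, c0);

    /* Lossless, non-magnetic dielectric with relative permittivity 2.25. */
    double eps_r = 2.25, mu_r = 1.0;
    double Z = sqrt((mu0 * mu_r) / (eps0 * eps_r));
    double v = 1.0 / sqrt(eps0 * eps_r * mu0 * mu_r);
    printf("dielectric: Z  = %.1f ohm, v  = %.4e m/s\n", Z, v);

    return 0;
}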
Optical medium
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
299
[ "Applied and interdisciplinary physics", "Optics", "Electric and magnetic fields in matter", "Materials science", "Condensed matter physics", "Atomic, molecular, and optical physics" ]
1,553,569
https://en.wikipedia.org/wiki/Abelin%20reaction
The Abelin reaction is a qualitative reaction for demonstrating the presence of arsphenamine and neoarsphenamine in blood and urine. It is named for Isaak Abelin, Swiss physiologist. References Blood tests
Abelin reaction
[ "Chemistry" ]
50
[ "Blood tests", "Chemical pathology" ]
1,553,609
https://en.wikipedia.org/wiki/Collet
A collet is a segmented sleeve, band or collar. One of the two radial surfaces of a collet is usually tapered (i.e. a truncated cone) and the other is cylindrical. The term collet commonly refers to a type of chuck that uses collets to hold either a workpiece or a tool (such as a drill), but collets have other mechanical applications. An external collet is a sleeve with a cylindrical inner surface and a conical outer surface. The collet can be squeezed against a matching taper such that its inner surface contracts to a slightly smaller diameter, squeezing the tool or workpiece to hold it securely. Most often the collet is made of spring steel, with one or more kerf cuts along its length to allow it to expand and contract. This type of collet holds the external surface of the tool or workpiece being clamped. This is the most usual type of collet chuck. An internal collet clamps against the internal surface or bore of a hollow cylinder. The collet's taper is internal and the collet expands when a corresponding taper is drawn or forced into the collet's internal taper. As a clamping device, collets are capable of producing a high clamping force and accurate alignment. While the clamping surface of a collet is normally cylindrical, it can be made to accept any defined shape. Collet chucks for machine tools Generally, a collet chuck, considered as a unit, consists of a tapered receiving sleeve (sometimes integral with the machine spindle), the collet proper (usually made of spring steel) which is inserted into the receiving sleeve, and (often) a cap that screws over the collet, clamping it via another taper. For machining operations, such as turning, chucks are commonly used to hold the workpiece. The table below gives a functional comparison of the three most common types of chuck used for holding workpieces. Collets have a narrow clamping range and a large number of collets are required to hold a given range of tools (such as drills) or stock material. This gives the disadvantage of higher capital cost and makes them unsuitable for general usage in electric drills, etc. However, the collet's advantage over other types of chuck is that it combines all of the following traits into one chuck, making it highly useful for repetitive work. Metalworking There are many types of collet used in the metalworking industry. Common industry-standard designs are R8 (internally threaded for mills) and 5C (usually externally threaded for lathes). There are also proprietary designs which only fit one manufacturer's equipment. Collets can range in holding capacity from zero to several inches in diameter. The most common type of collet grips a round bar or tool, but there are collets for square, hexagonal, and other shapes. In addition to the outside-holding collets, there are collets used for holding a part on its inside surface so that it can be machined on the outside surface (similar to an expanding mandrel). Furthermore, it is not uncommon for machinists to make a custom collet to hold any unusual size or shape of part. These are often called emergency collets (e-collets) or soft collets (from the fact that they are bought in a soft (unhardened) state and machined as needed). Yet another type of collet is a step collet which steps up to a larger diameter from the spindle and allows holding of larger workpieces. In use, the part to be held is inserted into the collet and then the collet is pressed (using a threaded nose cap) or drawn (using a threaded drawbar) into the body which has a conjugate taper form.
The taper geometry serves to translate a portion of the axial drawing force into a radial clamping force. When properly tightened, enough force is applied to securely clamp the workpiece or tool. The cap or drawbar threads act as a screw lever, and this leverage is compounded by the taper, such that a modest torque on the screw produces an enormous clamping force. The precise, symmetric form and rigid material of the collet provide precise, repeatable radial centering and axial concentricity. The basic mechanism fixes four of the six degrees of kinematic freedom, two locations and two angles. Collets may also be fitted to precisely align parts in the axial direction (a fifth degree of freedom) with an adjustable internal stop or by a shoulder stop machined into the internal form. The remaining sixth degree of freedom, namely the rotation of the part in the collet, may be fixed by using square, hexagonal, or other non-circular part geometry. ER collets The "ER" collet system, developed and patented by Swiss manufacturer Rego-Fix in 1972, and standardized as DIN 6499, is the most widely used tool clamping system in the world and today available from many producers worldwide. The standard series are: ER-8, ER-11, ER-16, ER-20, ER-25, ER-32, ER-40, and ER-50. The "ER" name came from an existing "E" collet (which were a letter series of names) which Rego-Fix modified and appended "R" for "Rego-Fix". The series number is the opening diameter of the tapered receptacle, in millimetres. ER collets collapse to hold parts up to 1 mm smaller than the nominal collet internal size in most of the series (up to 2 mm smaller in ER-50, and 0.5 mm in smaller sizes) and are available in 1 mm or 0.5 mm steps. Thus a given collet holds any diameter ranging from its nominal size to its 1-mm-smaller collapsed size, and a full set of ER collets in nominal 1 mm steps fits any possible cylindrical diameter within the capacity of the series. With an ER fixture chuck, ER collets may also serve as workholding fixtures for small parts, in addition to their usual application as toolholders with spindle chucks. Although a metric standard, ER collets with internal inch sizes are widely available for convenient use of imperial sized tooling. The spring geometry of the ER collet is well-suited only to cylindrical parts, and not typically applied to square or hexagonal forms like 5C collets. Autolock collets "Autolock" collet chucks (Osbourn "Pozi-Lock" is a similar system) were designed to provide secure clamping of milling cutters with only hand tightening. They were developed in the 1940s by a now defunct UK company, Clarkson (Engineers) Limited, and are commonly known as Clarkson chucks. Autolock collets require cutters with threaded shank ends to screw into the collet itself. Any rotation of the cutter forces the collet against the collet cap taper which tightly clamps the cutter, the screw fitting also prevents any tendency of the cutter to pull out. Collets are only available in fixed sizes, imperial or metric, and the cutter shank must be an exact match. The tightening sequence of Autolock collets is widely misunderstood. The chuck cap itself does not tighten the collet at all, with the cap tight and no tool inserted the collet is loose in the chuck. Only when a cutter is inserted will the collet be pressed against the cap taper. The back of the cutter engages with a centering pin and further turning drives the collet against the chuck cap, tightening around the cutter shank, hence "Autolock". 
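As a hedged illustration of the taper mechanics described at the start of this section (not part of the article), the C sketch below estimates how an axial drawing force is multiplied into radial clamping force by a shallow taper, using the idealised frictionless wedge relation radial force ≈ axial force / tan(taper half-angle); real collets lose a substantial share of this to friction and to the force needed to spring the collet closed, and the 8° half-angle and 2 kN draw force are invented example values.

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = 3.14159265358979323846;

    /* Example values only. */
    double half_angle_deg = 8.0;     /* taper half-angle           */
    double axial_force_n  = 2000.0;  /* drawbar / cap axial force  */

    double a = half_angle_deg * PI / 180.0;

    /* Frictionless wedge model: radial clamping force = axial / tan(a). */
    double radial_force_n = axial_force_n / tan(a);

    printf("axial draw force            : %.0f N\n", axial_force_n);
    printf("ideal radial clamping force : %.0f N (amplification x%.1f)\n",
           radial_force_n, radial_force_n / axial_force_n);

    return 0;
}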
The correct installation sequence as per the original specification is: Insert the collet and hand tighten the chuck cap (collet free to float) Insert the tool and hand tighten (tool engaged with rear pin and collet engaging cap taper) As the tool is used further rotation tightens the collet and the centering pin ensures that tool extension and alignment remain unchanged. A spanner is only required to release the locked collet. While threaded shank "Autolock" tools may be gripped by plain collets, such as ER, plain shank tools should never be used in an "Autolock" collet as they will not be properly clamped or aligned. R8 collets R8 collets were developed by Bridgeport Machines, Inc. for use in milling machines. Unusually, R8 collets fit into the machine taper itself (i.e. there is no separate chuck) and tools with integral R8 taper can also be directly fitted. R8 was developed to allow rapid tool changes and requires an exact match between collet and tool shank diameter. R8 collets have a keyway to prevent rotation when fitting or removing, but it is the compressed taper and not the keyway that provides the driving force. Collets are compressed by a drawbar from behind, they are self releasing and tool changes can be automated. 5C collets Unlike most other machine collet systems, 5C collets were developed primarily for work holding. Superficially similar to R8 collets, 5C collets have an external thread at the rear for drawing the collet closed and so work pieces may pass right through the collet and chuck (5C collets often also have an internal thread for workpiece locating). Collets are also available to hold square and hex stock. 5C collets have a limited closing range, and so shank and collet diameters must be a close match. A number of other C-series collets (1C, 3C, 4C, 5C, 16C, 20C & 25C) with different holding ranges also exist. A collet system with capabilities similar to the 5C (originally a proprietary system of Hardinge) is the 2J (originally a proprietary system of Sjogren, a competitor of Hardinge, and which Hardinge later assimilated). 355E Collets The SO Deckel tool grinders use these. Sometimes called U2 collets. Watchmaker collets Watchmaking at Waltham, Massachusetts led to the invention of collets. Watchmakers' lathes all take collets which are sized by their external thread. The most popular size is 8 mm which came in several variations but all 8 mm collets are interchangeable. Lorch, a German Lathe maker, started with 6 mm collets and the first Boleys used a 6.5 mm collet. 6 mm collets will fit into a 6.5 mm lathe but it is a poor practice. Another popular size is the 10 mm collet used by Clement and Levin. For work holding, collets are sized in 0.1 mm increments with the number on the face being the diameter in tenths of a millimetre. Thus a 5 is a 0.5 mm collet. Watchmaker collets come in additional configurations. There are step collets which step inward to hold gear wheels by the outer perimeter. These typically were made in sets of five to accommodate a range of different size gear wheels. These, like straight rod-holding collets, close on the outer taper. Ring collets also come in sets of five and hold work from inside a hole. They open as they are tightened by an outside taper against the outer taper of the lathe headstock. Watch collets also include taper adapters and wax or cement chucks. These collets take an insert, usually brass, to which small parts are cemented, usually with shellac. 
The book The Modern Watchmaker's Lathe and How to Use it contains tables of makers and sizes; note that it refers to basic collets as split wire chucks. DIN 6343 dead length collets These collets are common especially on production machines, particularly European lathes with lever or automated closers. Unlike draw-in collets, they do not pull back to close, but are generally pushed forward, with the face remaining in place. Multi-size collets Collets allowing a wider range of workholding by means of springs or elastic spacers between jaws; such collets were developed by Jacobs (Rubberflex), Crawford (Multibore), and Pratt Burnerd, and are in some cases compatible with certain spring collet chucks. Morse taper collets The Morse taper is a common machine taper frequently used in drills, lathes and small milling machines. Chucks for drilling usually use a Morse taper and can be removed to accommodate Morse taper drill bits. Morse taper collet sets usually employ ER collets in an adaptor to suit the Morse taper. The adaptor is threaded to be held in place with a drawbar. They can be used to hold strait-shanked tooling (drills and milling cutters) more securely and with better accuracy (less run-out) than a chuck. Other applications Woodwork On a wood router (a hand-held or table-mounted power tool used in woodworking), the collet is what holds the bit in place. In the U.S. it is generally for bits, while in Europe bits are most commonly . The collet nut is hexagonal on the outside so it can be tightened or loosened with a standard wrench, and has threads on the inside so it can be screwed onto the motor arbor. Craft hobbies Many users (hobbyists, graphic artists, architects, students, and others) may be familiar with collets as the part of an X-Acto or equivalent knife that holds the blade. Another common example is the collet that holds the bits of a Dremel or equivalent rotary tool. Semiconductor work In semiconductor industry, a die collet is used for picking a die up from a wafer after die cutting process has finished, and bonding it into a package. Some of them are made with rubber, and use vacuum for picking. Internal combustion engines Most internal combustion engines use a split collet to hold both the inlet and exhaust valves under constant valve spring pressure which returns the valves to their closed position when the camshaft lobes are not in contact with the top of the valves. The two collet halves have an internal raised rib which locate into a circular groove near the top of each valve stem, the outer side of the collet halves are a taper fit into the spring retainer (also known as a collar), this taper locks the retainer in place and the raised rib that sits in the circular groove on the valve stem also locks the collet halves in place to the valve stem. To remove the valves from a cylinder head a 'valve spring compressor' is used to compress the valve springs by exerting force on the spring retainer which allows the collets to be removed, when the compressor is removed, the retainer, spring and valve can then be removed from the cylinder head. It may be realized that the retainer does not budge when the valve spring compressor is used, this is due to a buildup of carbon which over time has locked the retainer and collets slightly. A slight sharp tap on the backside of the valve spring compressor above the valve stem should free the retainer allowing the springs to be compressed whilst retrieving the split collet. 
On reassembly it is difficult to keep the split collets in place whilst the compressor is released; applying a small amount of grease to the internal side of the split collets will keep them in place on the valve stem whilst releasing the compressor, and as the spring retainer rises it locks the tapered split collets in place. Firearms The Blaser R93 (and related models) use a unique bolt locking system that employs an expanding collet. The collet has claw-like L-shaped segments that face outward from the axis of the barrel. The multiple claws give a large contact area to distribute load. As the breech is closed, the collet expands, extending the claws to engage with an annular groove in the barrel just behind the chamber, locking the bolt closed. See also Chuck (engineering) Leadscrew Machine taper Screw thread Stiction References Bibliography External links When To Use A Collet Chuck Be Kind To Your Collets F37 Collet 16c Collet 3J Collet Lathes Machine tools Woodworking clamps
Collet
[ "Engineering" ]
3,379
[ "Machine tools", "Industrial machinery" ]
1,553,856
https://en.wikipedia.org/wiki/Aircraft%20flight%20mechanics
Aircraft flight mechanics are relevant to fixed wing (gliders, aeroplanes) and rotary wing (helicopters) aircraft. An aeroplane (airplane in US usage), is defined in ICAO Document 9110 as, "a power-driven heavier than air aircraft, deriving its lift chiefly from aerodynamic reactions on surface which remain fixed under given conditions of flight". Note that this definition excludes both dirigibles (because they derive lift from buoyancy rather than from airflow over surfaces), and ballistic rockets (because their lifting force is typically derived directly and entirely from near-vertical thrust). Technically, both of these could be said to experience "flight mechanics" in the more general sense of physical forces acting on a body moving through air; but they operate very differently, and are normally outside the scope of this term. Take-off A heavier-than-air craft (aircraft) can only fly if a series of aerodynamic forces come to bear. In regard to fixed wing aircraft, the fuselage of the craft holds up the wings before takeoff. At the instant of takeoff, the reverse happens and the wings support the plane in flight. Straight and level flight of aircraft In flight a powered aircraft can be considered as being acted on by four forces: lift, weight, thrust, and drag. Thrust is the force generated by the engine (whether that engine be a jet engine, a propeller, or -- in exotic cases such as the X-15 -- a rocket) and acts in a forward direction for the purpose of overcoming drag. Lift acts perpendicular to the vector representing the aircraft's velocity relative to the atmosphere. Drag acts parallel to the aircraft's velocity vector, but in the opposite direction because drag resists motion through the air. Weight acts through the aircraft's centre of gravity, towards the centre of the Earth. In straight and level flight, lift is approximately equal to the weight, and acts in the opposite direction. In addition, if the aircraft is not accelerating, thrust is equal and opposite to drag. In straight climbing flight, lift is less than weight. At first, this seems incorrect because if an aircraft is climbing it seems lift must exceed weight. When an aircraft is climbing at constant speed it is its thrust that enables it to climb and gain extra potential energy. Lift acts perpendicular to the vector representing the velocity of the aircraft relative to the atmosphere, so lift is unable to alter the aircraft's potential energy or kinetic energy. This can be seen by considering an aerobatic aircraft in straight vertical flight (one that is climbing straight upwards or descending straight downwards). Vertical flight requires no lift. When flying straight upwards the aircraft can reach zero airspeed before falling earthwards; the wing is generating no lift and so does not stall. In straight, climbing flight at constant airspeed, thrust exceeds drag. In straight descending flight, lift is less than weight. In addition, if the aircraft is not accelerating, thrust is less than drag. In turning flight, lift exceeds weight and produces a load factor greater than one, determined by the aircraft's angle of bank. Aircraft control and movement There are three primary ways for an aircraft to change its orientation relative to the passing air. Pitch (movement of the nose up or down, rotation around the transversal axis), roll (rotation around the longitudinal axis, that is, the axis which runs along the length of the aircraft) and yaw (movement of the nose to left or right, rotation about the vertical axis). 
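To put numbers on the turning-flight statement above (lift exceeding weight by a bank-angle-dependent load factor), the following C sketch uses the standard coordinated level-turn relations n = 1/cos(bank angle) and turn radius = V²/(g·tan(bank angle)); it is illustrative only, and the airspeed and bank angle are arbitrary example values.

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = 3.14159265358979323846;
    const double g  = 9.81;          /* m/s^2 */

    /* Example values only: a 60-degree bank at 100 m/s true airspeed. */
    double bank_deg = 60.0;
    double v        = 100.0;         /* m/s */

    double bank = bank_deg * PI / 180.0;
    double n    = 1.0 / cos(bank);            /* load factor in a level turn */
    double r    = (v * v) / (g * tan(bank));  /* turn radius in metres       */

    printf("bank %.0f deg at %.0f m/s: load factor n = %.2f, turn radius = %.0f m\n",
           bank_deg, v, n, r);

    return 0;
}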
Turning the aircraft (change of heading) requires the aircraft firstly to roll to achieve an angle of bank (in order to produce a centripetal force); when the desired change of heading has been accomplished the aircraft must again be rolled in the opposite direction to reduce the angle of bank to zero. Lift acts vertically upwards through the centre of pressure, which depends on the position of the wings. The position of the centre of pressure will change with changes in the angle of attack and in the setting of the wing flaps. Aircraft control surfaces Yaw is induced by a moveable rudder mounted on the vertical fin. The movement of the rudder changes the size and orientation of the force the vertical surface produces. Since the force is created at a distance behind the centre of gravity, this sideways force causes a yawing moment and then a yawing motion. On a large aircraft there may be several independent rudders on the single fin, both for safety and to control the inter-linked yaw and roll actions. Using yaw alone is not a very efficient way of executing a level turn in an aircraft and will result in some sideslip. A precise combination of bank and lift must be generated to produce the required centripetal force without producing a sideslip. Pitch is controlled by the rear part of the tailplane's horizontal stabilizer being hinged to create an elevator. By moving the elevator control backwards the pilot moves the elevator up (a position of negative camber) and the downwards force on the horizontal tail is increased. The angle of attack on the wings is increased, so the nose pitches up and lift is generally increased. In micro-lights and hang gliders the pitch action is reversed: the pitch control system is much simpler, so when the pilot moves the elevator control backwards it produces a nose-down pitch and the angle of attack on the wing is reduced. The system of a fixed tail surface and moveable elevators is standard in subsonic aircraft. Craft capable of supersonic flight often have a stabilator, an all-moving tail surface. Pitch is changed in this case by moving the entire horizontal surface of the tail. This seemingly simple innovation was one of the key technologies that made supersonic flight possible. In early attempts, as pilots exceeded the critical Mach number, a strange phenomenon made their control surfaces useless, and their aircraft uncontrollable. It was determined that as an aircraft approaches the speed of sound, the air approaching the aircraft is compressed and shock waves begin to form at all the leading edges and around the hinge lines of the elevator. These shock waves meant that movements of the elevator caused no pressure change on the stabilizer upstream of the elevator. The problem was solved by changing the stabilizer and hinged elevator to an all-moving stabilizer: the entire horizontal surface of the tail became a one-piece control surface. Also, in supersonic flight the change in camber has less effect on lift, and a stabilator produces less drag. Aircraft that need control at extreme angles of attack are sometimes fitted with a canard configuration, in which pitching movement is created using a forward foreplane (roughly level with the cockpit). Such a system produces an immediate increase in pitch authority, and therefore a better response to pitch controls. This system is common in delta-wing aircraft (deltaplane), which use a stabilator-type canard foreplane.
A disadvantage of a canard configuration compared with an aft tail is that the wing cannot use as much flap extension to increase wing lift at slow speeds, because of its effect on stall behaviour. A combination tri-surface aircraft uses both a canard and an aft tail (in addition to the main wing) to achieve the advantages of both configurations. A further design of tailplane is the V-tail, so named because, instead of the standard inverted T or T-tail, there are two fins angled away from each other in a V. The control surfaces then act both as rudders and elevators, moving in the appropriate direction as needed. Roll is controlled by movable sections on the trailing edge of the wings called ailerons. The ailerons move in opposition to one another: one goes up as the other goes down. The resulting difference in camber between the two wings causes a difference in lift and thus a rolling movement. As well as ailerons, there are sometimes also spoilers: small hinged plates on the upper surface of the wing, originally used to produce drag to slow the aircraft down and to reduce lift when descending. On modern aircraft, which have the benefit of automation, they can be used in combination with the ailerons to provide roll control. The earliest powered aircraft built by the Wright brothers did not have ailerons. The whole wing was warped using wires. Wing warping is efficient since there is no discontinuity in the wing geometry, but as speeds increased, unintentional warping became a problem, and so ailerons were developed. See also Aerodynamics Flight dynamics (fixed wing aircraft) Steady flight Aircraft Aircraft flight control system Banked turn Departure resistance Flight dynamics Fixed-wing aircraft Longitudinal static stability Mass properties Skid-to-turn References L. J. Clancy (1975). Aerodynamics. Chapter 14 Elementary Mechanics of Flight. Pitman Publishing Limited, London. Aerodynamics Aircraft manufacturing
Aircraft flight mechanics
[ "Chemistry", "Engineering" ]
1,775
[ "Aircraft manufacturing", "Aerodynamics", "Mechanical engineering by discipline", "Aerospace engineering", "Fluid dynamics" ]
1,553,864
https://en.wikipedia.org/wiki/List%20of%20aviation%2C%20avionics%2C%20aerospace%20and%20aeronautical%20abbreviations
Below are abbreviations used in aviation, avionics, aerospace, and aeronautics. A B C D E F G H I J K L M N N numbers (turbines) O P Q R S T U V V speeds W X Y Z See also List of aviation mnemonics Avionics Glossary of Russian and USSR aviation acronyms Glossary of gliding and soaring Appendix:Glossary of aviation, aerospace, and aeronautics – Wiktionary References Sources Aerospace acronyms Terms and Glossary Aviada Terminaro, verkita de Gilbert R. LEDON, 286 pagxoj. External links Acronyms used by EASA Acronyms and Abbreviations - FAA Aviation Dictionary Aviation Acronyms and Abbreviations Acronyms search engine by Eurocontrol Abbreviations Glossaries of aviation Aviation, avionics, aerospace and aeronautical Aviation, avionics, aerospace and aeronautical Wikipedia glossaries using tables
List of aviation, avionics, aerospace and aeronautical abbreviations
[ "Technology" ]
187
[ "Avionics", "Aircraft instruments" ]
1,553,895
https://en.wikipedia.org/wiki/Anglesey%20Aluminium
Anglesey Aluminium Metal Ltd. was a joint venture between Rio Tinto and Kaiser Aluminum. Its aluminium smelter, located on the outskirts of Holyhead, was one of the largest employers in North Wales, with 540 staff members, and began to produce aluminium in 1971. It was built on the Penrhos Estate, part of which was sold by the Stanley family for the project. Up until its closure it produced up to 142,000 tonnes of aluminium every year and was the biggest single user of electricity (255 MW) in the United Kingdom. Ships carrying alumina and coke from Jamaica and Australia would berth at the company's private jetty in Holyhead harbour. This jetty is linked to the plant by a series of conveyor belts passing through tunnels. A spur rail link from the main North Wales Coast Line runs into the plant and was used for both receipt of raw materials and despatch of aluminium. The plant was powered from the National Grid and received most of its electricity from Wylfa nuclear power station. Anglesey Aluminium was used as a base load for Wylfa and saved the grid the cost of keeping a power station on standby. The power contract terminated in 2009, and the aluminium smelting operation was shut down as no new contract was negotiated. The aluminium re-melt facility initially remained open after the shutdown of the smelter, but its closure was announced in February 2013. The company announced tentative plans for a biomass plant on the site, but the smelting operations and plant were mothballed, and the site was finally cleared in 2023 to prepare for redevelopment. On 20 March 2024 the site's tall chimney, the last of the visible structures of the aluminium smelting plant, was demolished. It was announced in September 2022 that the former Anglesey Aluminium site had been purchased by Stena Line, with the intention of using the site to facilitate an extension of Stena's existing operations at the Port of Holyhead. The sale included the spur rail line, the jetty in Holyhead harbour and the former conveyor tunnel linking the jetty to the main site. Near the smelter the Aluminium Powder Company (ALPOCO) produces aluminium powder, which is used in pastes, pigments, chemicals, metallurgy, refractory, propulsion, pyrotechnics, spray deposition and powder metallurgy. Adjacent to the site is the public access Penrhos Country Park. See also Alcan Lynemouth Aluminium Smelter List of aluminium smelters References External links A.L.P.O.C.O. Aerial Image of Anglesey Aluminium Plant - www.pixaerial.co.uk Manufacturing companies of Wales Aluminium companies of the United Kingdom Aluminium smelters Holyhead Non-ferrous metallurgical works in the United Kingdom Former Rio Tinto (corporation) subsidiaries Former joint ventures Kaiser Aluminum
Anglesey Aluminium
[ "Chemistry" ]
597
[ "Non-ferrous metallurgical works in the United Kingdom", "Metallurgical facilities" ]
1,553,972
https://en.wikipedia.org/wiki/Video%20production
Video production is the process of producing video content. It is the equivalent of filmmaking, but with video recorded either as analog signals on videotape, digitally on videotape, or as computer files stored on optical discs, hard drives, SSDs, magnetic tape or memory cards instead of film stock. Television broadcast Two styles of producing video are ENG (Electronic news-gathering) and EFP (Electronic field production). Video production for distance education Video production for distance education is the process of capturing, editing, and presenting educational material specifically for use in on-line education. Teachers integrate best-practice teaching techniques to create scripts, organize content, capture video footage, and edit footage using computer-based video editing software in order to deliver the final educational material over the Internet. It differs from other types of video production in at least three ways: It augments traditional teaching tools used in on-line educational programs. It may incorporate motion video with sound, computer animations, stills, and other digital media. Capture of content may range from the use of cameras integrated into cell phones to commercial high-definition broadcast-quality cameras. Webcasting is also being used in education for distance learning projects; one innovative use was the DiveLive programs. Internet video production Increasing internet speeds, the transition from physical formats such as tape to file-based digital media, and the availability of cloud-based video services have increased the use of the internet to provide services previously delivered on-premises in commercial content creation, for example video editing. In some cases the lower cost of equivalent services in the cloud has driven adoption, and in others the greater scope for collaboration and the time savings have done so. Individual Internet marketing videos are primarily produced in-house and by small media agencies, while a large volume of videos are produced by big media companies, crowdsourced production marketplaces, or in scalable video production platforms. See also B-roll List of video topics Television studies References External links Broadcast engineering Film and video technology Television terminology Articles containing video clips
Video production
[ "Engineering" ]
397
[ "Broadcast engineering", "Electronic engineering" ]
1,554,012
https://en.wikipedia.org/wiki/Weigh%20in%20motion
Weigh-in-motion or weighing-in-motion (WIM) devices are designed to capture and record the axle weights and gross vehicle weights as vehicles drive over a measurement site. Unlike static scales, WIM systems are capable of measuring vehicles traveling at a reduced or normal traffic speed and do not require the vehicle to come to a stop. This makes the weighing process more efficient, and, in the case of commercial vehicles, allows for trucks under the weight limit to bypass static scales or inspection. Introduction Weigh-in-motion is a technology that can be used for various private and public purposes (i.e. applications) related to the weights and axle loads of road and rail vehicles. WIM systems are installed on the road or rail track or on a vehicle and measure, store and provide data from the traffic flow and/or the specific vehicle. For WIM systems certain specific conditions apply. These conditions have an impact on the quality and reliability of the data measured by the WIM system and of the durability of the sensors and WIM system itself. WIM systems measure the dynamic axle loads of the vehicles and try to calculate the best possible estimate of the related static values. The WIM systems have to perform unattended, under harsh traffic and environmental conditions, often without any control over the way the vehicle is moving, or the driver is behaving. As a result of these specific measurement conditions, a successful implementation of a WIM system requires specific knowledge and experience. The weight information consists of the gross vehicle weight and axle (group) loads combined with other parameters like: date and time, location, speed and vehicle class. For on-board WIM systems this pertains to the specific vehicle only. For in-road WIM systems this applies to the entire vehicle traffic flow. This weight information provides the user with detailed knowledge of the loading of heavy goods vehicles. This information is better than with older technologies, so, for example, it is easier to match heavy goods vehicles and the road/rail infrastructure. (Moffatt, 2017). Road applications Especially for trucks, gross vehicle and axle weight monitoring is useful in an array of applications including: Pavement design, monitoring, and research Bridge design, monitoring, and research To inform weight overload enforcement policies and to directly facilitate enforcement Planning and freight movement studies Toll by weight Data to facilitate legislation and regulation The most common road application of WIM data is probably pavement design and assessment. In the United States, a histogram of WIM data is used for this purpose. In the absence of WIM data, default histograms are available. Pavements are damaged through a mechanistic-empirical fatigue process that is commonly simplified as the fourth power law. In its original form, the fourth power law states that the rate of pavement damage is proportional to axle weight raised to the fourth power. WIM data provides information on the numbers of axles in each significant weight category which allows these kinds of calculations to be carried out. Weigh in motion scales are often used to facilitate weight overload enforcement, such as the Federal Motor Carrier Safety Administration's Commercial Vehicle Information Systems and Networks program. Weigh-in-motion systems can be used as part of traditional roadside inspection stations, or as part of virtual inspection stations. 
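The fourth power law mentioned earlier in this section lends itself to a short numerical illustration. The sketch below is not part of the article; the 80 kN reference axle load and the function names are assumptions made for the example.

def relative_pavement_damage(axle_load_kn: float, reference_load_kn: float = 80.0) -> float:
    """Relative damage of one axle pass under the simplified fourth power law.

    Damage is taken as proportional to (axle load / reference load) ** 4, so one
    pass of a heavy axle can equal many passes of lighter axles.
    """
    return (axle_load_kn / reference_load_kn) ** 4

def total_relative_damage(axle_loads_kn) -> float:
    """Sum the relative damage over a set of WIM-measured axle loads."""
    return sum(relative_pavement_damage(load) for load in axle_loads_kn)

# Example: a 120 kN axle causes about 5 times the damage of an 80 kN axle,
# while a 40 kN axle causes only 1/16 as much.
print(relative_pavement_damage(120.0))   # ~5.06
print(relative_pavement_damage(40.0))    # 0.0625
print(total_relative_damage([40.0, 80.0, 120.0]))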
In most countries, WIM systems are not considered sufficiently accurate for direct enforcement of overloaded vehicles but this may change in the future. The most common bridge application of WIM is the assessment of traffic loading. The intensity of traffic on a bridge varies greatly as some roads are much busier than others. For bridges that have deteriorated, this is important as a less heavily trafficked bridge is safer and more heavily trafficked bridges should be prioritized for maintenance and repair. A great deal of research has been carried out on the subject of traffic loading on bridges, both short-span, including an allowance for dynamics, and long-span. Recent years have seen the rise of several "specialty" Weigh-in-Motion systems. One popular example is the front fork garbage truck scale. In this application, a container is weighed—while it is full—as the driver lifts, and again—while it is empty—as the container is returned to the ground. The difference between the full and empty weights is equal to the weight of the contents. Use Countries using Weigh in motion on highways include: Australia Belgium Brazil Czech Republic France Germany China Italy Japan Poland The Netherlands Ukraine United Arab Emirates United Kingdom United States (Usage varies from state to state) Accuracy The accuracy of weigh-in-motion data is generally much less than for static weigh scales where the environment is better controlled. The European COST 323 group developed an accuracy classification framework in the 1990s. They also coordinated three independently controlled road tests of commercially available and prototype WIM systems, one in Switzerland, one in France (Continental Motorway Test) and one in Northern Sweden (Cold Environment Test). Better accuracy can be achieved with multiple-sensor WIM systems and careful compensation for the effects of temperature. The Federal Highway Administration in the United States has published quality assurance criteria for WIM systems whose data is included in the Long Term Pavement Performance project. System basics of most systems Sensors WIM systems can employ various types of sensors for measurement. The earliest WIM systems, still used in a minority of installations, use an instrumented existing bridge as the weighing platform. Bending plates span a void cut into the pavement and use the flexure as the wheel passes over as a measure of weight. Load cells use strain sensors in the corner supports of a large platform embedded in the road. The majority of systems today are strip sensors - pressure sensitive materials installed in a 2 to 3 cm groove cut into the road pavement. In strip sensors, various sensing materials are used, including piezo-polymer, piezo-ceramic, capacitive and piezo-quartz. Many of these sensing systems are temperature-dependent and algorithms are used to correct for this. Strain transducers are used in bridge WIM systems. Strain gauges are used to measure the flexure in bending plates and the deformation in load cells. The strip sensor systems use piezo-electric materials in the groove. Capacitive systems measure the capacitance between two closely placed charged plates. More recently, weighing sensors using optical fiber grating sensors have been proposed. Charge amplifiers High impedance charge signals are amplified with MOSFET based charge amplifiers and converted to a voltage output, which is connected to analysis system. Inductive loops Inductive loops define the vehicle entry and exit from the WIM station. 
These signals are used as triggering inputs to start and stop the measurement and to initiate the totaling of the gross vehicle weight of each vehicle. They also measure total vehicle length and help with vehicle classification. For toll gate or low speed applications, inductive loops may be replaced by other types of vehicle sensors such as light curtains, axle sensors or piezocables. Measurement system The high speed measurement system is programmed to calculate the following parameters: axle distances, individual axle weights, gross vehicle weight, vehicle speed, distance between vehicles, and the GPS-synchronized time stamp for each vehicle measurement. The measurement system should be environmentally protected, should have a wide operating temperature range and should withstand condensation. Registration plate reading Cameras for automatic number-plate recognition may be part of the system to check the measured weight against the maximum allowable weight for the vehicle and, in case of exceeded limits, to inform law enforcement in order to pursue the vehicle or to directly fine the owner. Communications A variety of communication methods can be installed on the measurement system; a modem or cellular modem can be provided. In older installations, or where no communication infrastructure exists, WIM systems can operate autonomously and save the data locally for later physical retrieval. Data archiving A WIM system with any available communication means can be connected to a central monitoring server. Automatic data archiving software is required to retrieve the data from many remote WIM stations and make it available for further processing. A central database can be built to link many WIMs to a server for a variety of monitoring and enforcement purposes. Rail applications Weighing in motion is also a common application in rail transport. Known applications are asset protection (imbalances, overloading), asset management, maintenance planning, legislation and regulation, and administration and planning. System basics There are two main parts to the measurement system: the track-side component, which contains hardware for communication, power, computation, and data acquisition, and the rail-mounted component, which consists of sensors and cabling. Known sensor principles include strain gauges, measuring the strain, usually in the hub of the rail; fiber optical sensors, measuring a change of light intensity caused by the bending of the rail; load cells, measuring the strain change in the load cell rather than directly on the rail itself; and laser based systems, measuring the displacement of the rail. Yards and main line Trains are weighed either on the main line or at yards. Weighing in Motion systems installed on main lines measure the complete weight (distribution) of trains as they pass by at the designated line speed. Weighing in motion on the main line is therefore also referred to as "coupled-in-motion weighing": all of the railcars are coupled. Weighing in motion at yards often measures individual wagons. It requires that the railcars are uncoupled at both ends in order to be weighed. Weighing in motion at yards is therefore also referred to as "uncoupled-in-motion weighing". Systems installed at yards usually work at lower speeds and are capable of higher accuracies. Airport applications Some airports use airplane weighing, whereby the plane taxis across the scale bed, and its weight is measured.
The weight may then be used to correlate with the pilot's log entry, to ensure there is just enough fuel, with a little margin for safety. This has been used for some time to conserve jet fuel. Also, the main difference in these platforms, which are basically a "transmission of weight" application, there are checkweighers, also known as dynamic scales or in-motion scales. International cooperation and standards The International Society for Weigh-In-Motion (ISWIM, www.is-wim) is an international non-profit organization, legally established in Switzerland in 2007. ISWIM is an international network of, and for, people and organisations active in the field of weigh-in-motion. The society brings together users, researchers, and vendors of WIM systems. This includes systems installed in or under the road pavements, bridges, rail tracks and on board vehicles. ISWIM organises periodically the international conferences on WIM (ICWIM), regional seminars and workshops as part of other international conferences and exhibitions. In the 1990s, the first WIM standard ASTM-E1318-09 was published in North America, and the COST 323 action provided draft European specifications of WIM as well as reports on Pan-European tests of WIM system. The European research project WAVE and other initiatives delivered improved technologies and new methodologies of WIM. These first tests were done with the combination of WIM systems with video as a tool to assist overloading enforcement controls. In the early 2000s, the accuracy and reliability of WIM systems were significantly improved, and they were used more frequently for overload screening and pre-selection for road side weight enforcement controls (virtual weigh stations). The OIML R134 was published as an international standard of low speed WIM systems for legal applications like tolling by weight and direct weight enforcement. Most recently, the NMi-WIM standard offers a basis for the introduction of high speed WIM systems for direct automatic enforcement and free flow tolling by weight. References External links International Society for Weigh-In-Motion Road infrastructure Rail infrastructure Weighing instruments Trucking industry in the United States
Weigh in motion
[ "Physics", "Technology", "Engineering" ]
2,405
[ "Weighing instruments", "Mass", "Matter", "Measuring instruments" ]
1,554,065
https://en.wikipedia.org/wiki/Brianchon%27s%20theorem
In geometry, Brianchon's theorem is a theorem stating that when a hexagon is circumscribed around a conic section, its principal diagonals (those connecting opposite vertices) meet in a single point. It is named after Charles Julien Brianchon (1783–1864). Formal statement Let P1P2P3P4P5P6 be a hexagon formed by six tangent lines of a conic section. Then the lines P1P4, P2P5, P3P6 (the extended diagonals, each connecting opposite vertices) intersect at a single point B, the Brianchon point. Connection to Pascal's theorem The polar reciprocal and projective dual of this theorem give Pascal's theorem. Degenerations As for Pascal's theorem, there exist degenerations for Brianchon's theorem, too: let two neighboring tangents coincide. Their point of intersection becomes a point of the conic. In the diagram, three pairs of neighboring tangents coincide. This procedure results in a statement on inellipses of triangles. From a projective point of view, the two triangles lie perspectively from a common center of perspectivity. That means there exists a central collineation which maps the one triangle onto the other. But only in special cases is this collineation an affine scaling; for example, for a Steiner inellipse, where the Brianchon point is the centroid. In the affine plane Brianchon's theorem is true in both the affine plane and the real projective plane. However, its statement in the affine plane is in a sense less informative and more complicated than that in the projective plane. Consider, for example, five tangent lines to a parabola. These may be considered sides of a hexagon whose sixth side is the line at infinity, but there is no line at infinity in the affine plane. In two instances, a line from a (non-existent) vertex to the opposite vertex would be a line parallel to one of the five tangent lines. Brianchon's theorem stated only for the affine plane would therefore have to be stated differently in such a situation. The projective dual of Brianchon's theorem has exceptions in the affine plane but not in the projective plane. Proof Brianchon's theorem can be proved using the idea of the radical axis or by reciprocation. To prove it, take an arbitrary length (MN) and mark it off along the tangents starting from the contact points: PL = RJ = QH = MN etc. Draw circles a, b, c tangent to opposite sides of the hexagon at the created points (H,W), (J,V) and (L,Y) respectively. One sees easily that the concurring lines coincide with the radical axes ab, bc, ca, respectively, of the three circles taken in pairs. Thus O coincides with the radical center of these three circles. The theorem takes particular forms in the case of circumscriptible pentagons, e.g. when R and Q tend to coincide with F, a case where AFE is transformed to the tangent at F. Then, taking a further similar identification of the points T, C and U, we obtain a corresponding theorem for quadrangles. See also Seven circles theorem Pascal's theorem References Conic sections Theorems in projective geometry Euclidean plane geometry Theorems about polygons Affine geometry
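As a numerical sanity check on the statement above, one can verify Brianchon's theorem for a hexagon circumscribed about the unit circle. The sketch below is illustrative only; the choice of tangent angles and the helper names are assumptions made for this example, not part of the article.

import numpy as np

def tangent(theta):
    """Tangent line to the unit circle at angle theta, as (a, b, c) with a*x + b*y = c."""
    return np.array([np.cos(theta), np.sin(theta), 1.0])

def intersect(l1, l2):
    """Intersection point of two lines given as (a, b, c)."""
    A = np.array([l1[:2], l2[:2]])
    rhs = np.array([l1[2], l2[2]])
    return np.linalg.solve(A, rhs)

def through(p, q):
    """Line through two points, as (a, b, c) with a*x + b*y = c."""
    a, b = q[1] - p[1], p[0] - q[0]
    return np.array([a, b, a * p[0] + b * p[1]])

# Six tangent lines at arbitrary distinct angles define a circumscribed hexagon.
angles = [0.1, 0.9, 2.0, 3.1, 4.2, 5.3]
lines = [tangent(t) for t in angles]

# Hexagon vertices: intersections of consecutive tangent lines.
verts = [intersect(lines[i], lines[(i + 1) % 6]) for i in range(6)]

# The three principal diagonals connect opposite vertices.
diagonals = [through(verts[i], verts[i + 3]) for i in range(3)]

# Brianchon's theorem: the diagonals are concurrent, so the third diagonal
# should pass through the intersection of the first two (up to rounding error).
P = intersect(diagonals[0], diagonals[1])
residual = diagonals[2][0] * P[0] + diagonals[2][1] * P[1] - diagonals[2][2]
print(P, abs(residual))  # residual is ~0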
Brianchon's theorem
[ "Mathematics" ]
675
[ "Theorems in projective geometry", "Euclidean plane geometry", "Planes (geometry)", "Theorems in geometry" ]
1,554,130
https://en.wikipedia.org/wiki/Puisne
Puisne (; from Old French puisné, modern puîné, "later born, younger" (and thence, "inferior") from late Latin post-, "after", and natus, "born") is a legal term of art used mainly in British English meaning "inferior in rank". Judicial usage The judges and barons of the national common law courts at Westminster, other than those having a distinct title, were called puisne. This was reinforced by the Supreme Court of Judicature Act 1877 following which a "puisne judge" is officially any of those of the High Court other than the Lord Chancellor, the Lord Chief Justice of England and the Master of the Rolls (plus the abolished positions of Lord Chief Justice of the Common Pleas, and the Lord Chief Baron of the Exchequer). Puisne courts existed as lower courts in the early stages in the judiciary in British North America, in particular Upper Canada and Lower Canada. The justices of the Supreme Court of Canada other than the Chief Justice are still referred to as puisne justices. Puisne mortgages In England and Wales, a puisne mortgage is a mortgage over an unregistered estate in land where the mortgagee (lender) does not take possession of the title deeds from the mortgagor (borrower) as security. A puisne mortgage may be registered with HM Land Registry as a Class C(i) Land Charge under the Land Charges Act 1972, although even if such a mortgage is registered it will not necessarily be enforceable. Puisne mortgages are generally a second or subsequent mortgage, and in the event of default of the mortgagor generally rank in the order of registration, not in the order in which they were created. See also Glossary of land law References French words and phrases Judges French legal terminology Kinship and descent Legal concepts
Puisne
[ "Biology" ]
384
[ "Behavior", "Human behavior", "Kinship and descent" ]
1,554,264
https://en.wikipedia.org/wiki/Locant
In the nomenclature of organic chemistry, a locant is a term to indicate the position of a functional group or substituent within a molecule. Numeric locants The International Union of Pure and Applied Chemistry (IUPAC) recommends the use of numeric prefixes to indicate the position of substituents, generally by identifying the parent hydrocarbon chain and assigning the carbon atoms based on their substituents in order of precedence. For example, there are at least two isomers of the linear form of pentanone, a ketone that contains a chain of exactly five carbon atoms. There is an oxygen atom bonded to one of the middle three carbons (if it were bonded to an end carbon, the molecule would be an aldehyde, not a ketone), but it is not clear where it is located. In this example, the carbon atoms are numbered from one to five, which starts at one end and proceeds sequentially along the chain. Now the position of the oxygen atom can be defined as on carbon atom number two, three or four. However, atoms two and four are exactly equivalent - which can be shown by turning the molecule around by 180 degrees. The locant is the number of the carbon atom to which the oxygen atom is bonded. If the oxygen is bonded to the middle carbon, the locant is 3. If the oxygen is bonded to an atom on either side (adjacent to an end carbon), the locant is 2 or 4; given the choice here, where the carbons are exactly equivalent, the lower number is always chosen. So the locant is either 2 or 3 in this molecule. The locant is incorporated into the name of the molecule to remove ambiguity. Thus the molecule is named either pentan-2-one or pentan-3-one, depending on the position of the oxygen atom. Any side chains can be present in the place of oxygen and it can be defined as simply the number on the carbon to which any thing other than a hydrogen is attached. Greek letter locants Another common system uses Greek letter prefixes as locants, which is useful in identifying the relative location of carbon atoms as well as hydrogen atoms to other functional groups. The α-carbon (alpha-carbon) refers to the first carbon atom that attaches to a functional group, such as a carbonyl. The second carbon atom is called the β-carbon (beta-carbon), the third is the γ-carbon (gamma-carbon), and the naming system continues in alphabetical order. The nomenclature can also be applied to the hydrogen atoms attached to the carbon atoms. A hydrogen atom attached to an α-carbon is called an α-hydrogen, a hydrogen atom on the β-carbon is a β-hydrogen, and so on. Organic molecules with more than one functional group can be a source of confusion. Generally the functional group responsible for the name or type of the molecule is the 'reference' group for purposes of carbon-atom naming. For example, the molecules nitrostyrene and phenethylamine are quite similar; the former can even be reduced into the latter. However, nitrostyrene's α-carbon atom is adjacent to the phenyl group; in phenethylamine this same carbon atom is the β-carbon atom, as phenethylamine (being an amine rather than a styrene) counts its atoms from the opposite "end" of the molecule. Proteins and amino acids In proteins and amino acids, the α-carbon is the backbone carbon before the carbonyl carbon atom in the molecule. Therefore, reading along the backbone of a typical protein would give a sequence of –[N—Cα—carbonyl C]n– etc. (when reading in the N to C direction). The α-carbon is where the different substituents attach to each different amino acid. 
That is, the groups hanging off the chain at the α-carbon are what give amino acids their diversity. These groups give the α-carbon its stereogenic properties for every amino acid except for glycine. Therefore, the α-carbon is a stereocenter for every amino acid except glycine. Glycine also does not have a β-carbon, while every other amino acid does. The α-carbon of an amino acid is significant in protein folding. When describing a protein, which is a chain of amino acids, one often approximates the location of each amino acid as the location of its α-carbon. In general, α-carbons of adjacent amino acids in a protein are about 3.8 ångströms (380 picometers) apart. Enols and enolates The α-carbon is important for enol- and enolate-based carbonyl chemistry as well. Chemical transformations affected by the conversion to either an enolate or an enol, in general, lead to the α-carbon acting as a nucleophile, becoming, for example, alkylated in the presence of primary haloalkane. An exception is in reaction with silyl chlorides, bromides, and iodides, where the oxygen acts as the nucleophile to produce silyl enol ether. See also IUPAC nomenclature Regioisomer (also known as positional isomer) Descriptor (chemistry) References Chemistry prefixes Organic chemistry
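A small sketch of the lowest-locant rule used in the pentanone example above. This is not taken from the article; the function and the straight-chain, single-substituent assumption are illustrative only.

def locant_for_substituent(chain_length: int, position_from_one_end: int) -> int:
    """Return the locant for a single substituent on a straight carbon chain.

    The chain may be numbered from either end; the rule illustrated here is to
    pick the numbering that gives the substituent the lower number.
    """
    from_other_end = chain_length + 1 - position_from_one_end
    return min(position_from_one_end, from_other_end)

# Pentanone example: a ketone oxygen on the carbon next to one end of a
# five-carbon chain gets locant 2 (pentan-2-one), not 4; on the middle
# carbon it gets locant 3 (pentan-3-one).
print(locant_for_substituent(5, 4))  # 2
print(locant_for_substituent(5, 3))  # 3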
Locant
[ "Chemistry" ]
1,120
[ "Chemistry prefixes", "nan" ]
1,554,371
https://en.wikipedia.org/wiki/Albert-L%C3%A1szl%C3%B3%20Barab%C3%A1si
Albert-László Barabási (born March 30, 1967) is a Romanian-born Hungarian-American physicist, renowned for his pioneering discoveries in network science and network medicine. He is a distinguished university professor and Robert Gray Professor of Network Science at Northeastern University, holding additional appointments at the Department of Medicine, Harvard Medical School and the Department of Network and Data Science at Central European University. Barabási previously served as the Emil T. Hofmann Professor of Physics at the University of Notre Dame and was an associate member of the Center of Cancer Systems Biology (CCSB) at the Dana–Farber Cancer Institute, Harvard University. In 1999 Barabási discovered the concept of scale-free networks and proposed the Barabási–Albert model, which explains the widespread emergence of such networks in natural, technological and social systems, including the World Wide Web and online communities. Barabási is the founding president of the Network Science Society, which sponsors the flagship NetSci Conference established in 2006. Birth and education Barabási was born on March 30, 1967 to an ethnic Hungarian family in Cârța, Harghita County, Romania. His father, László Barabási, was a historian, museum director and writer, while his mother, Katalin Keresztes, taught literature and later became director of a children's theater. He attended a high school specializing in science and mathematics, where he won a local physics olympiad in the 9th and 12th grades. Between 1986 and 1989, he studied physics and engineering at the University of Bucharest, during which time he began researching chaos theory and published three papers. In 1989, Barabási emigrated to Hungary, together with his father. He received a master's degree in 1991 at Eötvös Loránd University in Budapest, under the supervision of Tamás Vicsek. Barabási then enrolled in the Physics program at Boston University, where he earned his PhD in 1994. His doctoral thesis, conducted under the direction of H. Eugene Stanley, was published by Cambridge University Press under the title Fractal Concepts in Surface Growth. Academic career After a one-year postdoc at the IBM Thomas J. Watson Research Center, Barabási joined the faculty at the University of Notre Dame in 1995. In 2000, at the age of 32, he was named the Emil T. Hofman Professor of Physics, becoming the youngest endowed professor. In 2004 he founded the Center for Complex Network Research. In 2005–6 he was a visiting professor at Harvard University. In fall 2007, Barabási left Notre Dame to become a Distinguished University Professor and director of the Center for Network Science at Northeastern University. Concurrently, he took up an appointment in the department of medicine at Harvard Medical School. As of 2008, Barabási holds Hungarian, Romanian and U.S. citizenship. Research and achievements Barabási's contributions to network science and network medicine have fundamentally changed the study of complex systems. Scale-Free Networks Barabási's work challenged the prevailing notion that complex networks could be adequately modeled as random networks. He is particularly renowned for his 1999 discovery of scale-free networks. In 1999 he created a map of the World Wide Web and found that its degree distribution does not follow the Poisson distribution expected for random networks, but is instead best approximated by a power law.
Collaborating with his student, Réka Albert, he introduced the Barabási–Albert model, which proposed that growth and preferential attachment are jointly responsible for the emergence of the scale-free property in real-world networks. The following year, Barabási demonstrated that the power law degree distribution is not limited to the World Wide Web, but also appears in metabolic networks and protein–protein interaction networks, demonstrating the universality of the scale-free property. In 2009 Science celebrated the ten-year anniversary of Barabási's groundbreaking discovery by dedicating a special issue to Complex Systems and Networks, recognizing his paper as one of the most cited in the journal's history. Network Robustness and Resilience In a 2001 paper with Réka Albert and Hawoong Jeong, Barabási demonstrated that networks exhibit robustness to random failures but are highly vulnerable to targeted attacks, a characteristic known as the Achilles' heel property. Specifically, networks can easily withstand the random failure of a large number of nodes, highlighting their significant robustness. However, they are prone to rapid collapse when the most connected hubs are deliberately removed. The breakdown threshold of a network was analytically linked to the second moment of the degree distribution, whose divergence for large scale-free networks explains why heterogeneous networks can survive the failure of a large fraction of their nodes. In 2016, Barabási extended these concepts to network resilience, demonstrating that the network structure determines a system's capacity for resilience. While robustness refers to the system's ability to carry out its basic functions despite the loss of some nodes and links, resilience involves the system's ability to adapt to internal and external disturbances by modifying its mode of operation without losing functionality. Therefore, resilience is a dynamical property that requires a fundamental shift in the system's core activities. Network Medicine Barabási is recognized as one of the founders of network medicine, a term he introduced in his 2007 article entitled "Network Medicine – From Obesity to the "Diseasome"", published in The New England Journal of Medicine. His work established the concept of the diseasome, or disease network, which illustrates how diseases are interconnected through shared genetic factors, highlighting their common genetic roots. He subsequently pioneered the use of large-scale patient data, linking the roots of disease comorbidity to molecular networks. A key concept of network medicine is Barabási's discovery that genes associated with the same disease are located in the same network neighborhood, which led to the concept of the disease module, which is currently employed to facilitate drug discovery, drug design, and the development of biomarkers. He elaborated on these concepts in a 2012 TEDMED talk, emphasizing their significance in medical research and treatment strategies. His contributions have been instrumental in establishing the Channing Division of Network Medicine at Harvard Medical School and the Network Medicine Institute, representing 33 universities and institutions around the world committed to advancing the field.
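A minimal sketch of the growth-plus-preferential-attachment mechanism of the Barabási–Albert model described above. This is an illustrative toy implementation written for this summary, not the authors' original code; the parameter values are arbitrary.

import random

def barabasi_albert(n: int, m: int, seed: int = 0):
    """Grow a network of n nodes, each new node attaching to m existing nodes.

    Preferential attachment is implemented by sampling targets from a list in
    which every node appears once per incident edge, so the probability of
    being chosen is proportional to its degree.
    """
    rng = random.Random(seed)
    edges = []
    endpoints = list(range(m))  # seed the network with m initial nodes
    for new_node in range(m, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(endpoints))
        for t in targets:
            edges.append((new_node, t))
            endpoints.extend([new_node, t])  # degrees of both ends increase
    return edges

edges = barabasi_albert(n=1000, m=2)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
print(max(degree.values()))  # hubs emerge: max degree far above the mean of ~4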
Barabási's work in network medicine has led to multiple experimentally falsifiable predictions, helping identify experimentally validated novel pathways in asthma, the prediction of a new mechanism of action for compounds such as rosmarinic acid, and the repurposing of existing drugs for new therapeutic functions (drug repurposing). The practical applications of network medicine have made significant impacts in clinical settings. For example, his research aids physicians in determining whether rheumatoid arthritis patients will respond to anti-TNF therapy. During the COVID-19 pandemic, Barabási led a major collaboration involving researchers from Harvard University, Boston University and The Broad Institute to predict and experimentally test the efficacy of 6,000 approved drugs for COVID-19 patients. Human Dynamics In 2005 Barabási discovered the fat-tailed nature of the inter-event times in human activity patterns. The pattern indicated that human activity is bursty: short periods of intensive activity are followed by long periods that lack detectable activity. Bursty patterns have subsequently been discovered in a wide range of processes, from web browsing to email communications and gene expression patterns. He proposed the Barabási model of human dynamics to explain the phenomenon, demonstrating that a queuing model can account for the bursty nature of human activity, a topic covered by his book Bursts: The Hidden Pattern Behind Everything We Do. Human Mobility Barabási laid the foundations for understanding individual human mobility patterns through a series of influential papers. In his 2008 Nature publication, Barabási utilized anonymized mobile phone data to analyze human mobility, discovering that human movement exhibits a high degree of regularity in time and space, with individuals showing consistent travel distances and a tendency to return to frequently visited locations. In a subsequent 2010 Science paper, he explored the predictability of human dynamics by analyzing mobile phone user trajectories. Contrary to expectations, he found a 93% predictability in human movements across all users. He introduced two principles governing human trajectories, leading to the development of the widely used model for individual mobility. Using this modeling framework, a decade before the COVID-19 pandemic, Barabási predicted the spreading patterns of a virus transmitted through direct contact. Network Control Barabási has made significant contributions to the understanding of network controllability and observability, addressing the fundamental question of how large networks regulate and manage their own behavior. He was the first to apply the tools of control theory to network science, bridging disciplines that had traditionally been studied separately. He proposed a method to identify the nodes through which one can control a complex network, by mapping the control problem, widely studied in physics and engineering since Maxwell, into graph matching, merging statistical mechanics and control theory. Barabási utilized network control principles to predict the functions of individual neurons within the Caenorhabditis elegans connectome. This application provided direct experimental confirmation of network control theory by successfully identifying new neurons involved in the organism's locomotion and experimentally validating the predictions.
His work demonstrated the practical utility of network control methods in biological systems, highlighting their potential for uncovering previously unknown functional components within complex networks. Awards Barabási was the recipient of the 2024 Gothenburg Lise Meitner Award; he has also been the recipient of the 2023 Julius Edgar Lilienfeld Prize, the top prize of the American Physical Society, "for pioneering work on the statistical physics of networks that transformed the study of complex systems, and for lasting contributions in communicating the significance of this rapidly developing field to a broad range of audiences." In 2021 he received the EPS Statistical and Nonlinear Physics Prize, awarded by the European Physical Society for "his pioneering contributions to the development of complex network science, in particular for his seminal work on scale-free networks, the preferential attachment model, error and attack tolerance in complex networks, controllability of complex networks, the physics of social ties, communities, and human mobility patterns, genetic, metabolic, and biochemical networks, as well as applications in network biology and network medicine." Barabási has been elected to the US National Academy of Sciences (2024), Austrian Academy of Sciences (2024), Hungarian Academy of Sciences (2004), Academia Europaea (2007), European Academy of Sciences and Art (2018), Romanian Academy of Sciences (2018) and the Massachusetts Academy of Sciences (2013). He was elected Fellow of the American Physical Society (2003), of the American Association for the Advancement of Science (2011), of the Network Science Society (2021). He was awarded a Doctor Honoris Causa by Obuda University (2023) in Hungary, the Technical University of Madrid (2011), Utrecht University (2018) and West University of Timișoara (2020). He received the Bolyai Prize from the Hungarian Academy of Sciences (2019), the Senior Scientific Award of the Complex Systems Society (2017) for "setting the basis of what is now modern Network Science", the Lagrange Prize (2011) C&C Prize (2008) Japan "for stimulating innovative research on networks and discovering that the scale-free property is a common feature of various real-world complex networks" and the Cozzarelli Prize, National Academies of Sciences (USA), John von Neumann Medal (2006) awarded by the John von Neumann Computer Society from Hungary, for outstanding achievements in computer-related science and technology and the FEBS Anniversary Prize for Systems Biology (2005). In 2021, Barabási was ranked 2nd in the world in the field of Engineering and Technology. Selected publications Barabási, Albert-László, The Formula: The Universal Laws of Success, November 6, 2018; (hardcover) Barabási, Albert-László, Bursts: The Hidden Pattern Behind Everything We Do, April 29, 2010; (hardcover) Barabási, Albert-László, Linked: The New Science of Networks, 2002. (pbk) Barabási, Albert-László and Réka Albert, "Emergence of scaling in random networks", Science, 286:509–512, October 15, 1999 Barabási, Albert-László and Zoltán Oltvai, "Network Biology", Nature Reviews Genetics 5, 101–113 (2004) Barabási, Albert-László, Mark Newman and Duncan J. Watts, The Structure and Dynamics of Networks, 2006; Barabási, Albert-László, Natali Gulbahce, and Joseph Loscalzo, "Network Medicine", Nature Reviews Genetics 12, 56–68 (2011) Y.-Y. Liu, J.-J. Slotine, A.-L. Barabási, "Controllability of complex networks", Nature 473, 167–173 (2011) Y.-Y. Liu, J.-J. Slotine, A.-L. 
Barabási, "Observability of complex systems", Proceedings of the National Academy of Sciences 110, 1–6 (2013) Baruch Barzel and A.-L. Barabási, "Universality in Network Dynamics", Nature Physics 9, 673–681 (2013) Baruch Barzel and A.-L. Barabási, "Network link prediction by global silencing of indirect correlations", Nature Biotechnology 31, 720–725 (2013) B. Barzel Y.-Y. Liu and A.-L. Barabási, "Constructing minimal models for complex system dynamics", Nature Communications 6, 7186 (2015). J. Gao, B. Barzel, A.-L, Barabási, "Universal resilience patterns in complex networks". Nature 530(7590):307-12 (2016). References External links Albert-László Barabási professional website Research Publications Profile, Center for Complex Network Research Profile, Northeastern University website Profile , Center for Cancer Systems Biology (CCSB) website Profile, University of Notre Dame website 1967 births Living people American people of Hungarian-Romanian descent 21st-century American physicists 21st-century Hungarian physicists Romanian physicists Members of the Hungarian Academy of Sciences Complex systems scientists Northeastern University faculty University of Notre Dame faculty Boston University Graduate School of Arts & Sciences alumni University of Bucharest alumni Romanian people of Hungarian descent People from Harghita County Members of Academia Europaea Probability theorists Harvard Medical School faculty Fellows of the American Physical Society Network scientists Statistical physicists Hungarian physicists
Albert-László Barabási
[ "Physics" ]
3,009
[ "Statistical physicists", "Statistical mechanics" ]
1,554,398
https://en.wikipedia.org/wiki/Minimal%20realization
In control theory, given any transfer function, any state-space model that is both controllable and observable and has the same input-output behaviour as the transfer function is said to be a minimal realization of the transfer function. The realization is called "minimal" because it describes the system with the minimum number of states. The minimum number of state variables required to describe a system equals the order of the differential equation; more state variables than the minimum can be defined. For example, a second order system can be defined by two or more state variables, with two being the minimal realization. Gilbert's realization Given a matrix transfer function, it is possible to directly construct a minimal state-space realization by using Gilbert's method (also known as Gilbert's realization). References Control theory
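A small sketch of how one might check whether a state-space realization is minimal, using the standard rank tests on the controllability and observability matrices implied by the definition above. This is illustrative only and is not taken from the article; in practice a dedicated control toolbox's minimal-realization routine would normally be used instead. The example matrices are assumptions chosen for the demonstration.

import numpy as np

def ctrb(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Controllability matrix [B, AB, A^2 B, ..., A^(n-1) B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def obsv(A: np.ndarray, C: np.ndarray) -> np.ndarray:
    """Observability matrix [C; CA; CA^2; ...; CA^(n-1)]."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

def is_minimal(A, B, C) -> bool:
    """A realization is minimal iff it is both controllable and observable."""
    n = A.shape[0]
    return (np.linalg.matrix_rank(ctrb(A, B)) == n and
            np.linalg.matrix_rank(obsv(A, C)) == n)

# Example: a 2-state realization whose second state never reaches the output,
# so the realization is observable-deficient and therefore not minimal.
A = np.array([[-1.0, 0.0],
              [0.0, -2.0]])
B = np.array([[1.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
print(is_minimal(A, B, C))  # False: state 2 is unobservable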
Minimal realization
[ "Mathematics" ]
163
[ "Applied mathematics", "Control theory", "Dynamical systems" ]
1,554,404
https://en.wikipedia.org/wiki/De%20Finetti%20diagram
A de Finetti diagram is a ternary plot used in population genetics. It is named after the Italian statistician Bruno de Finetti (1906–1985) and is used to graph the genotype frequencies of populations, where there are two alleles and the population is diploid. It is based on an equilateral triangle and on Viviani's theorem: the sum of the perpendicular distances from any interior point to the sides of said triangle is a constant equal to the length of the triangle's altitude. Applications in genetics The de Finetti diagram is used extensively in A.W.F. Edwards' book "Foundations of Mathematical Genetics". The sum of the three perpendicular lengths, representing the genotype frequencies, is set to be 1. In its simplest form the diagram can be used to show the range of genotype frequencies for which Hardy–Weinberg equilibrium is satisfied (the curve within the diagram). A. W. F. Edwards and Chris Cannings extended its use to demonstrate the changes that occur in allele frequencies under natural selection. See also Ternary diagram Wahlund effect References Cannings C., Edwards A. W. F. (1968) "Natural selection and the de Finetti diagram" Ann Hum Gen 31:421–428 Edwards, A.W.F. (2000) Foundations of Mathematical Genetics 2nd Edition, Cambridge University Press. External links Online plotting of de Finetti diagrams for population genetics (also calculates Hardy Weinberg equilibrium statistics) Population genetics Diagrams
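A short sketch of how genotype frequencies can be mapped to Cartesian coordinates for a de Finetti-style ternary plot, together with the Hardy–Weinberg curve mentioned above. The vertex placement and function names are choices made for this example, not prescriptions from the article.

import math

def de_finetti_xy(freq_aa: float, freq_het: float, freq_AA: float):
    """Map genotype frequencies (aa, Aa, AA), summing to 1, to a point in an
    equilateral triangle with vertices aa = (0, 0), AA = (1, 0) and
    Aa = (0.5, sqrt(3)/2).

    By Viviani's theorem the perpendicular distances from this point to the
    three sides recover the three frequencies (scaled by the triangle's altitude).
    """
    assert abs(freq_aa + freq_het + freq_AA - 1.0) < 1e-9
    x = freq_AA + 0.5 * freq_het
    y = (math.sqrt(3) / 2.0) * freq_het
    return x, y

def hardy_weinberg_point(p: float):
    """Point on the Hardy-Weinberg curve for allele frequency p of allele A."""
    q = 1.0 - p
    return de_finetti_xy(q * q, 2 * p * q, p * p)

# The Hardy-Weinberg curve is the parabola traced out as p runs from 0 to 1.
curve = [hardy_weinberg_point(i / 10) for i in range(11)]
print(curve[5])  # p = 0.5: genotype frequencies (0.25, 0.5, 0.25)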
De Finetti diagram
[ "Mathematics" ]
305
[ "Applied mathematics", "Applied mathematics stubs" ]
1,554,425
https://en.wikipedia.org/wiki/Master%20cylinder
In automotive engineering, the master cylinder is a control device that converts force (commonly from a driver's foot) into hydraulic pressure. This device controls slave cylinders located at the other end of the hydraulic brake system and/or the hydraulic clutch system. As piston(s) move along the bore of the master cylinder, this movement is transferred through the hydraulic fluid and results in a movement of the slave cylinder(s). The hydraulic pressure created by moving a piston (inside the bore of the master cylinder) toward the slave cylinder(s) compresses the fluid evenly, but by varying the comparative surface area of the master cylinder and each slave cylinder, one can vary the amount of force and displacement applied to each slave cylinder, relative to the amount of force and displacement applied to the master cylinder. Vehicle applications The most common vehicle uses of master cylinders are in brake and clutch systems. In brake systems, the operated devices are cylinders inside brake calipers and/or drum brakes; these cylinders may be called wheel cylinders or slave cylinders, and they push the brake pads towards a surface that rotates with the wheel (this surface is typically either a drum or a disc, a.k.a. a rotor) until the stationary brake pad(s) create friction against that rotating surface (typically the rotating surface is metal or ceramic/carbon, for its ability to withstand heat and friction without wearing down rapidly). In the clutch system, the device which the master cylinder operates is called the slave cylinder; it moves the throw-out bearing until the high-friction material on the transmission's clutch disengages from the engine's metal (or ceramic/carbon) flywheel. For hydraulic brakes and clutches alike, flexible high-pressure hoses or inflexible hard-walled metal tubing may be used; but the flexible variety of tubing is needed for at least a short length adjacent to each wheel, whenever the wheel can move relative to the car's chassis (this is the case on any car with steering and other suspension movements; some drag racers and go-karts have no rear suspension, as the rear axle is welded to the chassis, and some antique cars also have no rear suspension movement). A reservoir above each master cylinder supplies the master cylinder with enough brake fluid to prevent air from entering the master cylinder (even the typical clutch uses brake fluid, but it may also be referred to as "clutch fluid" in a clutch application). Each piston in a master cylinder operates a brake circuit, and for modern light trucks and passenger cars there are usually two circuits for safety reasons. This is done with a diagonally split hydraulic system, i.e. one circuit operates the front-left and rear-right brakes, while the second circuit works the other two wheels. If there is a failure in one of the brake lines or the caliper seal, one of the circuits will still be intact and still be able to stop the vehicle. Each circuit works on opposite corners in order to avoid the destabilization of the vehicle that would happen if only one axle had brakes while the other axle had none. With only one circuit working, stopping distances are significantly longer, and repairs should be made before driving again. When inspecting brake pads and rotors for wear, drivers and mechanics need to look out for uneven component wear since it could be a sign of low pressure or failure in one of the brake circuits.
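The force and displacement trade-off between master and slave cylinders described above follows directly from Pascal's law and can be illustrated with a short calculation. The sketch below is a simplified, loss-free model; the bore sizes and function names are assumptions for the example, not specifications of any real system.

import math

def piston_area(bore_diameter_mm: float) -> float:
    """Piston face area in mm^2 for a given bore diameter."""
    return math.pi * (bore_diameter_mm / 2.0) ** 2

def slave_output(force_on_master_n: float,
                 master_bore_mm: float,
                 slave_bore_mm: float,
                 master_travel_mm: float):
    """Ideal (loss-free) output force and travel at one slave cylinder.

    Pascal's law: pressure = force / area is the same throughout the fluid,
    so force scales with piston area while displacement scales inversely
    with area (the same fluid volume is moved).
    """
    a_master = piston_area(master_bore_mm)
    a_slave = piston_area(slave_bore_mm)
    pressure = force_on_master_n / a_master              # N per mm^2 (MPa)
    force_out = pressure * a_slave
    travel_out = master_travel_mm * a_master / a_slave
    return force_out, travel_out

# Example: 500 N on a 20 mm master bore driving a 40 mm slave bore gives
# four times the force at one quarter of the travel.
print(slave_output(500.0, 20.0, 40.0, master_travel_mm=20.0))  # (~2000 N, 5 mm)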
See also Master-slave (technology) Brake fluid pressure sensor List of auto parts Pascal's law References How Master Cylinders and Combination Valves Work, How Stuff Works. Vehicle parts
Master cylinder
[ "Technology" ]
727
[ "Vehicle parts", "Components" ]
1,554,948
https://en.wikipedia.org/wiki/Huemul%20Project
The Huemul Project () was an early 1950s Argentine effort to develop a fusion power device known as the Thermotron. The concept was invented by Austrian scientist Ronald Richter, who claimed to have a design that would produce effectively unlimited power. Richter was able to pitch the idea to President of Argentina Juan Perón in 1948, and soon received massive funding to build an experimental site on Huemul Island, on a lake just outside the town of San Carlos de Bariloche in Patagonia, near the Andes mountains. Construction began late in 1949, and by 1951 the site was completed and carrying out tests. On 16 February 1951, Richter measured high temperatures that suggested fusion had been achieved. On 24 March, the day before an important international meeting of the leaders of the Americas, Perón publicly announced that Richter had been successful, adding that in the future energy would be sold in packages the size of a milk bottle. A worldwide interest followed, along with significant skepticism on the part of other physicists. Little information was forthcoming: no papers were published on the topic, and over the next year a number of reporters visited the site but were denied access to the buildings. After increasing pressure, Perón arranged for a team to investigate Richter's claims and return individual reports, all of which were negative. A review of these reports was equally negative, and the project was ended in 1952. By this time, the optimism of the earlier news had inspired groups around the world to begin their own research in nuclear fusion. Perón was overthrown in 1955, and in the aftermath, Richter was arrested for fraud. He appears to have spent periods of time abroad, including some time in Libya. Eventually he returned to Argentina, where he died in 1991. Prior to Huemul According to Rainer Karlsch's Hitler's Bomb, during World War II German scientists under Walter Gerlach and Kurt Diebner carried out experiments to explore the possibility of inducing thermonuclear reactions in deuterium using high explosive-driven convergent shock waves, following Karl Gottfried Guderley's convergent shock wave solution. At the same time, Richter proposed in a memorandum to German government officials the induction of nuclear fusion through shock waves by high-velocity particles shot into a highly compressed deuterium plasma contained in an ordinary uranium vessel. The proposal was not carried through. Early Argentine nuclear efforts Shortly after his election in 1946, Perón began a purge of Argentina's universities that eventually resulted in over 1,000 professors being fired or quitting, causing a serious setback in Argentine science and lasting enmity between Perón and Argentine intelligentsia. In response, the Physical Association of Argentina (AFA) began to organize as a community to retain links between Argentine scientists, who now spread to industry. In 1946, the director of the AFA, physicist Enrique Gaviola, wrote a proposal to set up the Comisión Nacional de Investigaciones Científicas (National Scientific Research Commission), arguing that post-World War II friction (leading to the Cold War) would present the opportunity for various Northern Hemisphere scientists to move south to escape limits on their research. In the same paper, Gaviola argued for the formation of a body to explore the peaceful use of atomic power. 
In spite of the poor relations between the scientific community and the Argentine government, the proposal was seriously studied and Congress debated the matter on several occasions before Perón decided to place it under military control. Gaviola objected, starting a long and acrimonious debate over the nature and aims of the program. By 1947, plans to form an atomic study group were progressing slowly when the entire issue was shut down by an article in the U.S. political newsmagazine, New Republic. The 24 February 1947 issue contained an article by William Mizelle on "Peron's Atomic Plans", which claimed: With world famous German atom-splitter Werner Heisenberg invited to come to Argentina by Peron's Government and with a major uranium source discovered in Argentina, that Nation is launching a military nuclear research program to crack Pandora's box of atomic energy wide open. Argentina's determined atomic adventure and its frankly military purposes cannot be dismissed as the impractical dream of a small nation. International pressure on Argentina following the publication was intense, and the plans were soon dropped. This event appears to have made Perón more determined than ever to both develop atomic energy as well as prove its peaceful intentions. Germans in Argentina In 1947, a dossier was provided to Argentina by the Spanish embassy in Buenos Aires listing a number of German aeronautical engineers who were looking to sneak out of Germany. Among them was Kurt Tank, designer of the famed Focke-Wulf Fw 190 and many other successful designs. The dossier was passed to the recently formed Argentine Air Force's Commander in Chief, who passed it to Brigadier César Raúl Ojeda, who was in charge of aerodynamics research. Ojeda and Tank communicated and formulated plans to begin building a jet fighter in Argentina, which would eventually emerge as the FMA IAe 33 Pulqui II. Just before leaving for Argentina, Tank briefly met Richter in London, where Richter told Tank of his ideas for nuclear-powered aircraft. Richter was at that time doing some work in the German chemical industry. Tank had also contacted a number of other engineers and even famed fighter pilot and Luftwaffe general Adolf Galland. Various members of the group made their way to Argentina under false passports during late 1947 and 1948. The Germans were warmly received by Perón, who effectively gave them a blank cheque in an effort to rapidly develop the Argentine economy. Tank set up an aircraft development plant in Córdoba, and continued to contact other German engineers and scientists who might be interested in joining them. A total of 184 German scientists and engineers are known to have moved to Argentina during this period. Richter was invited to join the group and arrived in Argentina on 16 August 1948, travelling under the name "Dr. Pedro Matthies". Tank personally introduced him to Perón on 24 August, and Richter pitched Perón on the idea of a nuclear fusion device which would provide unlimited power, make Argentina a world scientific leader, and be of purely civilian intent. Perón was intrigued, and clearly impressed, later telling reporters that "in half an hour he explained to me all the secrets of nuclear physics and he did it so well that now I have a pretty good idea of the subject". Gaviola, still maintaining pressure to form a nuclear research group, saw all interest evaporate. From that point on he offered his services only as a "member of Richter's firing squad." 
Other German scientists, including Guido Beck, Walter Seelmann-Eggbert, and the now-elderly Richard Gans quickly realized something was amiss in the entire affair, and began to align themselves with the AFA, steering clear of Richter and the government in general. At an AFA meeting in September 1951, Beck publicly resigned from the University of Buenos Aires over the issue. The project Richter was soon given a laboratory at Tank's Córdoba site, but in early 1949 a fire destroyed some of the equipment. Richter claimed it was sabotage, and demanded a more protected location free from spies. When support was not immediately forthcoming, Richter went on a tour, visiting Canada and perhaps the U.S. and Europe as well. A year later, Lise Meitner recalled meeting "a strange Austrian with an Argentine visa" in Vienna, where he demonstrated a device he claimed was a thermonuclear system but which Meitner later dismissed as a chemical effect. Richter's tour was a thinly veiled threat to leave Argentina, which prompted action. Perón handed the problem of selecting a suitable experimental site to Colonel González, a friend from the 1943 Argentine coup d'état. González selected a location deep within the country's interior on Huemul Island, in Nahuel Huapi Lake, where it would be easy to protect from prying eyes. Construction work began in July, causing a nationwide shortage of brick and cement. Richter moved to the site in March 1950 while construction on Laboratory 1, the reactor, was still ongoing. In May 1950, Perón formed the National Atomic Energy Commission (CNEA), bypassing Gaviola's earlier efforts and placing himself in the position of president, with Richter and the minister of technical affairs as the other chairs. A year later, he formed the National Atomic Energy Directorate (DNEA), under González, to provide project assistance and logistics support. When the reactor was finally completed in May, Richter noticed there was no way to access the interior of the wide concrete cylinder, requiring a series of holes to be drilled through the thick walls. But before this could be completed, Richter declared that a crack on the outside rendered the entire reactor useless, and had it torn down. While this was taking place, Richter began experiments in the much smaller reactor in Laboratory 2. The experiments injected lithium and hydrogen into the cylinder and discharged a spark through it. The cylinder was supposed to reflect the energy created by these reactions back into the chamber to keep the reaction going. Diagnostic measurements were provided by taking photographs of the spectrum and using Doppler widening to measure the temperature of the resulting reactions. Announcement On 16 February 1951, Richter claimed he had successfully demonstrated fusion. He re-ran the experiment for members of the CNEA, later claiming that they had witnessed the world's first thermonuclear reaction. On 23 February, a technician working for the project expressed his concerns about the claims, suggesting that the measurement was likely due to the accidental tilting of the spectrograph's photographic plate while the experimental run was being set up. Richter refused to re-run the experiment. Instead, a week later he ordered the reactor to be disassembled so a new one could be built that included a magnetic confinement system. Meanwhile, plans for a new Laboratory 1 were started with this new design, this time to be buried underground. 
A deep hole in hard rock was constructed, but Richter changed the design and had the hole filled in with concrete. On 2 March, Edward Miller, the U.S. Assistant Secretary of State for Inter-American Affairs, visited Argentina. The visit was ostensibly to attend the Pan American Games, but in reality came in advance of a meeting of American leaders called later that month to discuss China's entry into the Korean War. Perón gave Miller an introduction to Richter's work, and Miller filed a memo on it on 6 March. During this period, Perón seized the Argentine newspaper La Prensa, whose editor fled to the U.S. This led to harsh criticism in the U.S. Miller suggested a policy of "masterful inaction", not actively denying support for the project, but simply never providing any. The leadership meeting was to take place between 26 March and 7 April, by which time the Chinese "emergency" had passed and the war was entering a new phase. Perón then took the opportunity to announce Richter's results to the world. On 24 March, Perón held a press conference at Casa Rosada and stated that: On February 16, 1951, in the atomic energy pilot plant on Huemul Island... thermonuclear experiments were carried out under conditions of control on a technical scale. Perón justified the project by noting that Argentina's enormous energy shortage would be addressed by building nuclear plants across the country, and that the energy would be bought and sold in containers the size of a milk bottle. He went on to note that the country was simply unable to afford the cost of developing a uranium-based energy program, or that of a system using tritium, normally generated in special fission plants. Richter's fuel meant the reaction could only take place in a reactor, not a bomb, and he then recommitted the country to exploring only peaceful uses of atomic energy. Richter added that he understood the secret of the hydrogen bomb, but that Perón had forbidden any work on it. The next day Richter held another press conference on the topic, a meeting that became known as the "10,000 word interview". He explained that a hydrogen bomb required a fission trigger, and that the country was unable and unwilling to build such a device. Very little explanation of the Thermotron was offered, beyond the announcement that he used the Doppler effect to measure speeds of 3,300 km/s and that the fuel was either lithium hydride or deuterium which was introduced into pre-heated hydrogen. He was careful to explain that these were small-scale experimental results, and refused to state whether it would work well at the industrial scale. On 7 April, Perón awarded Richter the gold Peronista Party Medal in a highly publicized event. With the U.S. refusing any support for the program, Richter turned to other countries for equipment. In April, Prince Bernhard of the Netherlands visited Perón, and offered technical help to the project from Philips. A visit by Cornelis Bakker, later the director of CERN, was arranged and a synchrotron and Cockcroft–Walton generator were suggested as possible products of interest. Perón wrote to Richter to arrange the visit, during which Richter refused to show Bakker any of the reactors. In spite of this, Perón offered to fund the purchase of a Cockcroft–Walton generator and a synchrotron from the company. Public reaction Shortly after Richter's conference, the matter was discussed in the Bulletin of the Atomic Scientists, where it was noted that Richter's announcement had revealed no details of the system of operation. 
They also noted that Richter claimed three key advances during experimentation, but failed to mention any of them during the conference. Finally, although the method for measuring temperature was announced, the temperature itself was not. The United States Atomic Energy Commission's (AEC) comment on the announcement was simply that "the Argentine Government announced more than a year ago that it was planning to engage in nuclear research." American physicists were universally dismissive of the announcement. Among the more famous responses was that of George Gamow, who said "It seemed to be 95% pure propaganda, 4¾% thermonuclear reactions on a very small scale, and the remaining ¼% probably something better." Ernest Lawrence was not so dismissive, noting that, "There is a tendency to laugh it off as being a lot of hot air or something. Well it may be, but we don't know all, and we should make every effort to find out." Edward Teller put it succinctly, "Reading one line one has to think he's a genius. Reading the next line, one realizes he's crazy." British scientists, at that time working secretly on the z-pinch fusion concept, did not rule out the possibility of small-scale reactions. George Thomson, at that time leading the United Kingdom Atomic Energy Authority (AEA), suggested it was simply exaggerated. This opinion was mirrored by Mark Oliphant in Australia, and Werner Heisenberg and Otto Hahn in Germany. Perhaps the most biting criticism came from Manfred von Ardenne, a German physicist then working in the Soviet Union. He advised that Richter's claims should be ignored, noting that he had worked with Richter during the war and that Richter confused fantasy with reality. In May, the United Nations World magazine carried a short article by Hans Thirring, the director of the Institute for Theoretical Physics in Vienna and a well-known author on nuclear matters. He stated that "the chances are 99 to 1 that the explosion in Argentina occurred only in the imagination of a crank or a fraud." When Thirring heard the announcement, he had gone searching for anyone who knew Richter from before he arrived in Argentina. He found that Richter had studied under Heinrich Rausch von Traubenberg in the 1930s, who described him as a peculiar eccentric, but von Traubenberg had died in 1944 so there was no way to follow up on the story. Richter's dissertation was never published, and the university in Prague burned during the war. Richter was invited to prepare a rebuttal, which appeared in the July issue. He simply dismissed Thirring as "a typical textbook professor with a strong scientific inferiority complex, probably supported by political hatred." Private reaction Although essentially dismissed by the scientific community, the Richter announcement nevertheless had a major effect on the history of controlled fusion experiments. The most direct outcome of the announcement was its effect on Lyman Spitzer, an astrophysicist at Princeton University. Just prior to leaving for a ski trip to Aspen, Spitzer's father called and mentioned the announcement in The New York Times. Spitzer read the articles and dismissed them, noting the system could not deliver enough energy to heat the gases to fusion temperatures. This led him to begin considering ways to confine a hot plasma for longer periods of time, giving the system enough time to be heated to 10 to 100 million degrees Celsius. 
Considering the problem of confining a plasma in a toroid pointed out by Enrico Fermi, he hit upon the solution now known as the stellarator. Spitzer was able to use the notoriety surrounding Richter's announcement to gain the attention of the U.S. Atomic Energy Commission with the suggestion that the basic idea of controlled fusion was feasible. He eventually managed to arrange a meeting with the director of the AEC to pitch the stellarator concept. Researchers in the UK had been experimenting with fusion since 1947 using a system known today as z-pinch. Small experimental devices had been built at the Atomic Energy Research Establishment (AERE, "Harwell") and Imperial College London, but requests for funding of a larger system were repeatedly refused. Jim Tuck had seen the work while in the UK, and introduced z-pinch to his coworkers at Los Alamos in 1950. When Tuck heard of Spitzer's efforts to gain funding, he immediately applied as well, presenting his concept as the Perhapsatron. He felt that Spitzer's claims to have a fast track to fusion were "incredibly ambitious". Both Spitzer and Tuck met with AEC officials in May 1951; Spitzer was granted $50,000 to build an experimental device, while Tuck was turned away empty-handed. Not to be outdone, Tuck soon arranged to receive $50,000 from the director of Los Alamos instead. When news of the U.S. efforts reached the UK, the researchers there started pushing for funding of a much larger machine. This time they found a much more favorable reaction from the AERE, and both teams soon began construction of larger devices. This work, through fits and starts, led to the ZETA system, the first truly large-scale fusion reactor. Compared to the small tabletop devices built in the U.S., ZETA filled a hangar and operated at energy levels far beyond any other machine. When news of ZETA was made public, the U.S. and Soviet Union were soon demanding funding to build devices of similar scale in order to catch up with the UK. The announcement had a direct effect on research in the USSR as well. Previously, several researchers, notably Igor Kurchatov and I. N. Golovin had put together a development plan similar to the ones being developed in the UK. They too were facing disinterest on the part of the funding groups, which was immediately swept away when Huemul hit the newspapers. Cancellation Argentine physicists were also critical of the announcement, but found little interest on the part of Perón, who was still at odds with the academic mainstream. González was growing increasingly frustrated with Richter, and in February 1952 told Perón that either Richter left the project, or he did. Perón accepted González's resignation and replaced him with his aide, Navy Captain Pedro Iraolagoitía. Iraolagoitía soon began to protest as well, finally convincing Perón to have the project investigated. Instead of calling upon the local physics community, Perón put together a team consisting of Iraolagoitía, a priest, two engineers including Mario Báncora, and young physicist José Antonio Balseiro, who was at that time studying in England and was asked to return with all haste. The team visited the site for a series of demonstrations between 5 and 8 September 1952. The committee analyzed Richter's work and published separate reports on the topic on 15 September. Balseiro, in particular, was convinced nothing nuclear was taking place. 
His report critiqued Richter's claims about how the system was supposed to work, especially the claims that the system was reaching the temperatures needed to demonstrate fusion; he stated that fusion reactions would require something on the order of 40 million kelvin, while the center of the electric arc would be perhaps 4,000 to 100,000 kelvin at most. He then pointed out that Richter's radiation detectors showed large activity whenever the arc was discharged, even if there was no fuel present. Meanwhile, the team's own detectors showed low activity throughout. They reported their findings to Perón on 15 February. Richter was allowed to officially respond to the report. The government appointed physicists Richard Gans and Antonio Rodríguez to review the first report as well as Richter's response to it. This second group endorsed the findings of the first review panel and found Richter's response inadequate. On 22 November, while Richter was in Buenos Aires, a military team occupied the site. They found that many of the instruments were not even connected, and the project was pronounced a fraud. Argentines jokingly referred to the affair as Huele a mula, or "it smells like a con". After the project In the period immediately after the military takeover, Balseiro wrote a proposal to create a nuclear physics institute on the mainland in nearby Bariloche using the equipment on the island. Originally known as the Instituto de Física de Bariloche, it was renamed the Instituto Balseiro in his honour in 1962. Between 1952 and 1955, Richter was effectively under house arrest in Buenos Aires, with an offer from Perón to "facilitate any travel he might have to make". After Perón was deposed in September 1955, the new government arrested Richter on the night of 4 October 1955. He was accused of fraud, and spent a short time in jail. At the time, it was estimated that 62.5 million pesos had been spent on the project, about $15 million USD. A more recent estimate places the value closer to $300 million in 2003 dollars. Richter remained in Argentina for a time, but began to travel, eventually landing in Libya. He returned to Argentina and was extensively interviewed by Mario Mariscotti for his book on Huemul, which remains the most detailed account of the project. Mariscotti blames the affair primarily on Richter, who he states was capable of great self-delusion, and adds that an autocratic and paranoid management style and a lack of oversight compounded the problem. Perón remains a controversial figure to this day, and opinions of Richter tend to be colored by how closely the author associates him with Perón. Argentine accounts often refer to Richter as an outright con man, while accounts written outside Argentina generally describe him as a deluded amateur. Huemul today The island remained closed and under military control until the 1970s, when the Army began using it for artillery target practice. In 1995 a tourist company took control of the island, and began to offer tours by boat from docks in Bariloche. The ruins of the historic facilities can be visited by tourists by boat from the port of Bariloche. Notes References Citations Bibliography Further reading Mariscotti, Mario, 1985, El Secreto Atómico de Huemul: Crónica del Origen de la Energía Atómica en la Argentina, Sudamericana/Planeta, Buenos Aires, Argentina. López Dávalos, A., and Badino, N., 2000, J. A. Balseiro: Crónica de una ilusión, Fondo de Cultura Económica de Argentina. 
External links El litio: materia prima para la tecnología de la fusión termonuclear (1997) Spanish Guillermo Giménez de Castro: La quimera atómica de Richter (2004) Spanish Fusion power Hoaxes in science Nuclear technology in Argentina Science and technology in Argentina Scientific misconduct incidents
Huemul Project
[ "Physics", "Chemistry" ]
5,017
[ "Nuclear fusion", "Fusion power", "Plasma physics" ]
1,554,995
https://en.wikipedia.org/wiki/Polyol
In organic chemistry, a polyol is an organic compound containing multiple hydroxyl groups (). The term "polyol" can have slightly different meanings depending on whether it is used in food science or polymer chemistry. Polyols containing two, three and four hydroxyl groups are diols, triols, and tetrols, respectively. Classification Polyols may be classified according to their chemistry. Some of these chemistries are polyether, polyester, polycarbonate and also acrylic polyols. Polyether polyols may be further subdivided and classified as polyethylene oxide or polyethylene glycol (PEG), polypropylene glycol (PPG) and Polytetrahydrofuran or PTMEG. These have 2, 3 and 4 carbons respectively per oxygen atom in the repeat unit. Polycaprolactone polyols are also commercially available. There is also an increasing trend to use biobased (and hence renewable) polyols. Uses Polyether polyols have numerous uses. As an example, polyurethane foam is a big user of polyether polyols. Polyester polyols can be used to produce rigid foam. They are available in both aromatic and aliphatic versions. They are also available in mixed aliphatic-aromatic versions often made from recycled raw materials, typically polyethylene terephthalate (PET). Acrylic polyols are generally used in higher performance applications where stability to ultraviolet light is required and also lower VOC coatings. Other uses include direct to metal coatings. As they are used where good UV resistance is required, such as automotive coatings, the isocyanate component also tends to be UV resistant and hence isocyanate oligomers or prepolymers based on Isophorone diisocyanate are generally used. Caprolactone-based polyols produce polyurethanes with enhanced hydrolysis resistance. Polycarbonate polyols are more expensive than other polyols and are thus used in more demanding applications. They have been used to make an isophorone diisocyanate based prepolymer which is then used in glass coatings. They may be used in reactive hotmelt adhesives. All polyols may be used to produce polyurethane prepolymers. These then find use in coatings, adhesives, sealants and elastomers. Low molecular weight polyols Low molecular weight polyols are widely used in polymer chemistry where they function as crosslinking agents and chain extenders. Alkyd resins for example, use polyols in their synthesis and are used in paints and in molds for casting. They are the dominant resin or "binder" in most commercial "oil-based" coatings. Approximately 200,000 tons of alkyd resins are produced each year. They are based on linking reactive monomers through ester formation. Polyols used in the production of commercial alkyd resins are glycerol, trimethylolpropane, and pentaerythritol. In polyurethane prepolymer production, a low molecular weight polyol-diol such as 1,4-butanediol may be used as a chain extender to further increase molecular weight though it does increase viscosity because more hydrogen bonding is introduced. Sugar alcohols Sugar alcohols, a class of low molecular weight polyols, are commonly obtained by hydrogenation of sugars. They have the formula (CHOH)nH2, where n = 4–6. Sugar alcohols are added to foods because of their lower caloric content than sugars; however, they are also, in general, less sweet, and are often combined with high-intensity sweeteners. They are also added to chewing gum because they are not broken down by bacteria in the mouth or metabolized to acids, and thus do not contribute to tooth decay. 
Maltitol, sorbitol, xylitol, erythritol, and isomalt are common sugar alcohols. Polymeric polyols The term polyol is used for various chemistries of the molecular backbone. Polyols may be reacted with diisocyanates or polyisocyanates to produce polyurethanes. MDI finds considerable use in PU foam production. Polyurethanes are used to make flexible foam for mattresses and seating, rigid foam insulation for refrigerators and freezers, elastomeric shoe soles, fibers (e.g. Spandex), coatings, sealants and adhesives. The term polyol is also attributed to other molecules containing hydroxyl groups. For instance, polyvinyl alcohol is (CH2CHOH)n with n hydroxyl groups where n can be in the thousands. Cellulose is a polymer with many hydroxyl groups, but it is not referred to as a polyol. Polyols from recycled or renewable sources There are polyols based on renewable sources such as plant-based materials including castor oil and cottonseed oil. Vegetable oils and biomass are also potential renewable polyol raw materials. Seed oil can even be used to produce polyester polyols. Properties Since the generic term polyol is only derived from chemical nomenclature and just indicates the presence of several hydroxyl groups, no common properties can be assigned to all polyols. However, polyols are usually viscous at room temperature due to hydrogen bonding. See also Cyclitol Oligomer Polyurethane References External links Sugar substitutes Organic polymers Commodity chemicals Polymer chemistry Synthetic resins Polyurethanes
Polyol
[ "Chemistry", "Materials_science", "Engineering" ]
1,175
[ "Organic polymers", "Synthetic resins", "Products of chemical industry", "Synthetic materials", "Materials science", "Organic compounds", "Polymer chemistry", "Commodity chemicals" ]
1,555,022
https://en.wikipedia.org/wiki/Web%202.0
Web 2.0 (also known as participative (or participatory) web and social web) refers to websites that emphasize user-generated content, ease of use, participatory culture, and interoperability (i.e., compatibility with other products, systems, and devices) for end users. The term was coined by Darcy DiNucci in 1999 and later popularized by Tim O'Reilly and Dale Dougherty at the first Web 2.0 Conference in 2004. Although the term mimics the numbering of software versions, it does not denote a formal change in the nature of the World Wide Web, but merely describes a general change that occurred during this period as interactive websites proliferated and came to overshadow the older, more static websites of the original Web. A Web 2.0 website allows users to interact and collaborate through social media dialogue as creators of user-generated content in a virtual community. This contrasts with the first generation of Web 1.0-era websites, where people were limited to passively viewing content. Examples of Web 2.0 features include social networking sites or social media sites (e.g., Facebook), blogs, wikis, folksonomies ("tagging" keywords on websites and links), video sharing sites (e.g., YouTube), image sharing sites (e.g., Flickr), hosted services, Web applications ("apps"), collaborative consumption platforms, and mashup applications. Whether Web 2.0 is substantially different from prior Web technologies has been challenged by World Wide Web inventor Tim Berners-Lee, who describes the term as jargon. His original vision of the Web was "a collaborative medium, a place where we [could] all meet and read and write". On the other hand, the term Semantic Web (sometimes referred to as Web 3.0) was coined by Berners-Lee to refer to a web of content where the meaning can be processed by machines. History Web 1.0 Web 1.0 is a retronym referring to the first stage of the World Wide Web's evolution, from roughly 1989 to 2004. According to Graham Cormode and Balachander Krishnamurthy, "content creators were few in Web 1.0 with the vast majority of users simply acting as consumers of content". Personal web pages were common, consisting mainly of static pages hosted on ISP-run web servers, or on free web hosting services such as Tripod and the now-defunct GeoCities. With Web 2.0, it became common for average web users to have social-networking profiles (on sites such as Myspace and Facebook) and personal blogs (sites like Blogger, Tumblr and LiveJournal) through either a low-cost web hosting service or through a dedicated host. In general, content was generated dynamically, allowing readers to comment directly on pages in a way that was not common previously. Some Web 2.0 capabilities were present in the days of Web 1.0, but were implemented differently. For example, a Web 1.0 site may have had a guestbook page for visitor comments, instead of a comment section at the end of each page (typical of Web 2.0). During Web 1.0, server performance and bandwidth had to be considered—lengthy comment threads on multiple pages could potentially slow down an entire site. Terry Flew, in his third edition of New Media, described a series of differences between Web 1.0 and Web 2.0; Flew believed these factors formed the trends that resulted in the onset of the Web 2.0 "craze". Characteristics Some common design elements of a Web 1.0 site include: Static pages rather than dynamic HTML. Content provided from the server's filesystem rather than a relational database management system (RDBMS). 
Pages built using Server Side Includes or Common Gateway Interface (CGI) instead of a web application written in a dynamic programming language such as Perl, PHP, Python or Ruby. The use of HTML 3.2-era elements such as frames and tables to position and align elements on a page. These were often used in combination with spacer GIFs. Proprietary HTML extensions, such as the <blink> and <marquee> tags, introduced during the first browser war. Online guestbooks. GIF buttons, graphics (typically 88×31 pixels in size) promoting web browsers, operating systems, text editors and various other products. HTML forms sent via email. Support for server side scripting was rare on shared servers during this period. To provide a feedback mechanism for web site visitors, mailto forms were used. A user would fill in a form, and upon clicking the form's submit button, their email client would launch and attempt to send an email containing the form's details. The popularity and complications of the mailto protocol led browser developers to incorporate email clients into their browsers. Web 2.0 The term "Web 2.0" was coined by Darcy DiNucci, an information architecture consultant, in her January 1999 article "Fragmented Future". Writing when Palm Inc. introduced its first web-capable personal digital assistant (supporting Web access with WAP), DiNucci saw the Web "fragmenting" into a future that extended beyond the browser/PC combination it was identified with. She focused on how the basic information structure and hyper-linking mechanism introduced by HTTP would be used by a variety of devices and platforms. As such, her "2.0" designation referred to a next version of the Web and does not directly relate to the term's current use. The term Web 2.0 did not resurface until 2002. Companies such as Amazon, Facebook, Twitter, and Google made it easy to connect and engage in online transactions. Web 2.0 introduced new features, such as multimedia content and interactive web applications, which mainly consisted of two-dimensional screens. Kinsley and Eric focus on the concepts currently associated with the term where, as Scott Dietzen puts it, "the Web becomes a universal, standards-based integration platform". In 2004, the term began to gain popularity when O'Reilly Media and MediaLive hosted the first Web 2.0 conference. In their opening remarks, John Battelle and Tim O'Reilly outlined their definition of the "Web as Platform", where software applications are built upon the Web as opposed to upon the desktop. The unique aspect of this migration, they argued, is that "customers are building your business for you". They argued that the activities of users generating content (in the form of ideas, text, videos, or pictures) could be "harnessed" to create value. O'Reilly and Battelle contrasted Web 2.0 with what they called "Web 1.0". They associated this term with the business models of Netscape and the Encyclopædia Britannica Online. In short, Netscape focused on creating software, releasing updates and bug fixes, and distributing it to the end users. O'Reilly contrasted this with Google, a company that did not, at the time, focus on producing end-user software, but instead on providing a service based on data, such as the links that Web page authors make between sites. Google exploits this user-generated content to offer Web searches based on reputation through its "PageRank" algorithm. 
Unlike software, which undergoes scheduled releases, such services are constantly updated, a process called "the perpetual beta". A similar difference can be seen between the Encyclopædia Britannica Online and Wikipedia – while the Britannica relies upon experts to write articles and release them periodically in publications, Wikipedia relies on trust in (sometimes anonymous) community members to constantly write and edit content. Wikipedia editors are not required to have educational credentials, such as degrees, in the subjects in which they are editing. Wikipedia is not based on subject-matter expertise, but rather on an adaptation of the open source software adage "given enough eyeballs, all bugs are shallow". This maxim is stating that if enough users are able to look at a software product's code (or a website), then these users will be able to fix any "bugs" or other problems. The Wikipedia volunteer editor community produces, edits, and updates articles constantly. Web 2.0 conferences have been held every year since 2004, attracting entrepreneurs, representatives from large companies, tech experts and technology reporters. The popularity of Web 2.0 was acknowledged by 2006 TIME magazine Person of The Year (You). That is, TIME selected the masses of users who were participating in content creation on social networks, blogs, wikis, and media sharing sites. In the cover story, Lev Grossman explains: Characteristics Instead of merely reading a Web 2.0 site, a user is invited to contribute to the site's content by commenting on published articles, or creating a user account or profile on the site, which may enable increased participation. By increasing emphasis on these already-extant capabilities, they encourage users to rely more on their browser for user interface, application software ("apps") and file storage facilities. This has been called "network as platform" computing. Major features of Web 2.0 include social networking websites, self-publishing platforms (e.g., WordPress' easy-to-use blog and website creation tools), "tagging" (which enables users to label websites, videos or photos in some fashion), "like" buttons (which enable a user to indicate that they are pleased by online content), and social bookmarking. Users can provide the data and exercise some control over what they share on a Web 2.0 site. These sites may have an "architecture of participation" that encourages users to add value to the application as they use it. Users can add value in many ways, such as uploading their own content on blogs, consumer-evaluation platforms (e.g. Amazon and eBay), news websites (e.g. responding in the comment section), social networking services, media-sharing websites (e.g. YouTube and Instagram) and collaborative-writing projects. Some scholars argue that cloud computing is an example of Web 2.0 because it is simply an implication of computing on the Internet. Web 2.0 offers almost all users the same freedom to contribute, which can lead to effects that are varyingly perceived as productive by members of a given community or not, which can lead to emotional distress and disagreement. The impossibility of excluding group members who do not contribute to the provision of goods (i.e., to the creation of a user-generated website) from sharing the benefits (of using the website) gives rise to the possibility that serious members will prefer to withhold their contribution of effort and "free ride" on the contributions of others. 
This requires what is sometimes called radical trust by the management of the Web site. Encyclopaedia Britannica calls Wikipedia "the epitome of the so-called Web 2.0" and describes what many view as the ideal of a Web 2.0 platform as "an egalitarian environment where the web of social software enmeshes users in both their real and virtual-reality workplaces." According to Best, the characteristics of Web 2.0 are rich user experience, user participation, dynamic content, metadata, Web standards, and scalability. Further characteristics, such as openness, freedom, and collective intelligence by way of user participation, can also be viewed as essential attributes of Web 2.0. Some websites require users to contribute user-generated content to have access to the website, to discourage "free riding".The key features of Web 2.0 include: Folksonomy – free classification of information; allows users to collectively classify and find information (e.g. "tagging" of websites, images, videos or links) Rich user experience – dynamic content that is responsive to user input (e.g., a user can "click" on an image to enlarge it or find out more information) User participation – information flows two ways between the site owner and site users by means of evaluation, review, and online commenting. Site users also typically create user-generated content for others to see (e.g., Wikipedia, an online encyclopedia that anyone can write articles for or edit) Software as a service (SaaS) – Web 2.0 sites developed APIs to allow automated usage, such as by a Web "app" (software application) or a mashup Mass participation – near-universal web access leads to differentiation of concerns, from the traditional Internet user base (who tended to be hackers and computer hobbyists) to a wider variety of users, drastically changing the audience of internet users. Technologies The client-side (Web browser) technologies used in Web 2.0 development include Ajax and JavaScript frameworks. Ajax programming uses JavaScript and the Document Object Model (DOM) to update selected regions of the page area without undergoing a full page reload. To allow users to continue interacting with the page, communications such as data requests going to the server are separated from data coming back to the page (asynchronously). Otherwise, the user would have to routinely wait for the data to come back before they can do anything else on that page, just as a user has to wait for a page to complete the reload. This also increases the overall performance of the site, as the sending of requests can complete quicker independent of blocking and queueing required to send data back to the client. The data fetched by an Ajax request is typically formatted in XML or JSON (JavaScript Object Notation) format, two widely used structured data formats. Since both of these formats are natively understood by JavaScript, a programmer can easily use them to transmit structured data in their Web application. When this data is received via Ajax, the JavaScript program then uses the Document Object Model to dynamically update the Web page based on the new data, allowing for rapid and interactive user experience. In short, using these techniques, web designers can make their pages function like desktop applications. For example, Google Docs uses this technique to create a Web-based word processor. 
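As a purely illustrative sketch of the Ajax pattern just described (the endpoint URL and element ID are invented, and the modern fetch API is used here instead of the original XMLHttpRequest object), the following TypeScript fragment requests structured JSON data asynchronously and then updates only the affected region of the page through the DOM:

// Minimal Ajax-style sketch: fetch JSON in the background, then update one
// page region via the DOM without a full page reload. The URL and element ID
// are placeholders, not taken from any real site.
async function refreshCommentCount(postId: number): Promise<void> {
  const response = await fetch(`/api/posts/${postId}/comments`);
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  const data: { count: number } = await response.json();

  // Update only the element that displays the count.
  const counter = document.getElementById("comment-count");
  if (counter !== null) {
    counter.textContent = `${data.count} comments`;
  }
}

// The rest of the page remains interactive while the request is in flight.
refreshCommentCount(42).catch(console.error);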
As a widely available plug-in independent of W3C standards (the World Wide Web Consortium is the governing body of Web standards and protocols), Adobe Flash was capable of doing many things that were not possible pre-HTML5. Of Flash's many capabilities, the most commonly used was its ability to integrate streaming multimedia into HTML pages. With the introduction of HTML5 in 2010 and the growing concerns with Flash's security, the role of Flash became obsolete, with browser support ending on December 31, 2020. In addition to Flash and Ajax, JavaScript/Ajax frameworks have recently become a very popular means of creating Web 2.0 sites. At their core, these frameworks use the same technology as JavaScript, Ajax, and the DOM. However, frameworks smooth over inconsistencies between Web browsers and extend the functionality available to developers. Many of them also come with customizable, prefabricated 'widgets' that accomplish such common tasks as picking a date from a calendar, displaying a data chart, or making a tabbed panel. On the server-side, Web 2.0 uses many of the same technologies as Web 1.0. Languages such as Perl, PHP, Python, Ruby, as well as Enterprise Java (J2EE) and Microsoft.NET Framework, are used by developers to output data dynamically using information from files and databases. This allows websites and web services to share machine readable formats such as XML (Atom, RSS, etc.) and JSON. When data is available in one of these formats, another website can use it to integrate a portion of that site's functionality. Concepts Web 2.0 can be described in three parts: Rich web application - defines the experience brought from desktop to browser, whether it is "rich" from a graphical point of view or a usability/interactivity or features point of view. Web-oriented architecture (WOA) - defines how Web 2.0 applications expose their functionality so that other applications can leverage and integrate the functionality providing a set of much richer applications. Examples are feeds, RSS feeds, web services, mashups. Social Web - defines how Web 2.0 websites tend to interact much more with the end user and make the end user an integral part of the website, either by adding his or her profile, adding comments on content, uploading new content, or adding user-generated content (e.g., personal digital photos). As such, Web 2.0 draws together the capabilities of client- and server-side software, content syndication and the use of network protocols. Standards-oriented Web browsers may use plug-ins and software extensions to handle the content and user interactions. Web 2.0 sites provide users with information storage, creation, and dissemination capabilities that were not possible in the environment known as "Web 1.0". Web 2.0 sites include the following features and techniques, referred to as the acronym SLATES by Andrew McAfee: Search Finding information through keyword search. Links to other websites Connects information sources together using the model of the Web. Authoring The ability to create and update content leads to the collaborative work of many authors. Wiki users may extend, undo, redo and edit each other's work. Comment systems allow readers to contribute their viewpoints. Tags Categorization of content by users adding "tags" — short, usually one-word or two-word descriptions — to facilitate searching. For example, a user can tag a metal song as "death metal". 
Collections of tags created by many users within a single system may be referred to as "folksonomies" (i.e., folk taxonomies). Extensions Software that makes the Web an application platform as well as a document server. Examples include Adobe Reader, Adobe Flash, Microsoft Silverlight, ActiveX, Oracle Java, QuickTime, WPS Office and Windows Media. Signals The use of syndication technology, such as RSS feeds to notify users of content changes. While SLATES forms the basic framework of Enterprise 2.0, it does not contradict all of the higher level Web 2.0 design patterns and business models. It includes discussions of self-service IT, the long tail of enterprise IT demand, and many other consequences of the Web 2.0 era in enterprise uses. Social Web A third important part of Web 2.0 is the social web. The social Web consists of a number of online tools and platforms where people share their perspectives, opinions, thoughts and experiences. Web 2.0 applications tend to interact much more with the end user. As such, the end user is not only a user of the application but also a participant by: Podcasting Blogging Tagging Curating with RSS Social bookmarking Social networking Social media Wikis Web content voting: Review site or Rating site The popularity of the term Web 2.0, along with the increasing use of blogs, wikis, and social networking technologies, has led many in academia and business to append a flurry of 2.0's to existing concepts and fields of study, including Library 2.0, Social Work 2.0, Enterprise 2.0, PR 2.0, Classroom 2.0, Publishing 2.0, Medicine 2.0, Telco 2.0, Travel 2.0, Government 2.0, and even Porn 2.0. Many of these 2.0s refer to Web 2.0 technologies as the source of the new version in their respective disciplines and areas. For example, in the Talis white paper "Library 2.0: The Challenge of Disruptive Innovation", Paul Miller argues "Blogs, wikis and RSS are often held up as exemplary manifestations of Web 2.0. A reader of a blog or a wiki is provided with tools to add a comment or even, in the case of the wiki, to edit the content. This is what we call the Read/Write web. Talis believes that Library 2.0 means harnessing this type of participation so that libraries can benefit from increasingly rich collaborative cataloging efforts, such as including contributions from partner libraries as well as adding rich enhancements, such as book jackets or movie files, to records from publishers and others." Here, Miller links Web 2.0 technologies and the culture of participation that they engender to the field of library science, supporting his claim that there is now a "Library 2.0". Many of the other proponents of new 2.0s mentioned here use similar methods. The meaning of Web 2.0 is role dependent. For example, some use Web 2.0 to establish and maintain relationships through social networks, while some marketing managers might use this promising technology to "end-run traditionally unresponsive I.T. department[s]." There is a debate over the use of Web 2.0 technologies in mainstream education. Issues under consideration include the understanding of students' different learning modes; the conflicts between ideas entrenched in informal online communities and educational establishments' views on the production and authentication of 'formal' knowledge; and questions about privacy, plagiarism, shared authorship and the ownership of knowledge and information produced and/or published on line. 
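Returning to the tagging and folksonomy features described earlier in this section, a folksonomy can be viewed as a very simple shared data structure. The hypothetical TypeScript sketch below (the users, items and tags are invented) builds an inverted index from tag to items, which is essentially all that tag-based search or a tag cloud requires:

// Folksonomy sketch: many users attach free-form tags to items; an inverted
// index from tag to items then supports tag search and tag counts.
type Tagging = { user: string; item: string; tag: string };

const taggings: Tagging[] = [
  { user: "alice", item: "song-17", tag: "death metal" },
  { user: "bob", item: "song-17", tag: "metal" },
  { user: "carol", item: "photo-3", tag: "sunset" },
];

const index = new Map<string, Set<string>>();
for (const t of taggings) {
  const items = index.get(t.tag) ?? new Set<string>();
  items.add(t.item);
  index.set(t.tag, items);
}

// Searching by tag is a single lookup; the set sizes give a simple "tag cloud".
console.log(Array.from(index.get("death metal") ?? [])); // ["song-17"]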
Marketing Web 2.0 is used by companies, non-profit organisations and governments for interactive marketing. A growing number of marketers are using Web 2.0 tools to collaborate with consumers on product development, customer service enhancement, product or service improvement and promotion. Companies can use Web 2.0 tools to improve collaboration with both its business partners and consumers. Among other things, company employees have created wikis—Websites that allow users to add, delete, and edit content — to list answers to frequently asked questions about each product, and consumers have added significant contributions. Another marketing Web 2.0 lure is to make sure consumers can use the online community to network among themselves on topics of their own choosing. Mainstream media usage of Web 2.0 is increasing. Saturating media hubs—like The New York Times, PC Magazine and Business Week — with links to popular new Web sites and services, is critical to achieving the threshold for mass adoption of those services. User web content can be used to gauge consumer satisfaction. In a recent article for Bank Technology News, Shane Kite describes how Citigroup's Global Transaction Services unit monitors social media outlets to address customer issues and improve products. Destination marketing In tourism industries, social media is an effective channel to attract travellers and promote tourism products and services by engaging with customers. The brand of tourist destinations can be built through marketing campaigns on social media and by engaging with customers. For example, the "Snow at First Sight" campaign launched by the State of Colorado aimed to bring brand awareness to Colorado as a winter destination. The campaign used social media platforms, for example, Facebook and Twitter, to promote this competition, and requested the participants to share experiences, pictures and videos on social media platforms. As a result, Colorado enhanced their image as a winter destination and created a campaign worth about $2.9 million. The tourism organisation can earn brand royalty from interactive marketing campaigns on social media with engaging passive communication tactics. For example, "Moms" advisors of the Walt Disney World are responsible for offering suggestions and replying to questions about the family trips at Walt Disney World. Due to its characteristic of expertise in Disney, "Moms" was chosen to represent the campaign. Social networking sites, such as Facebook, can be used as a platform for providing detailed information about the marketing campaign, as well as real-time online communication with customers. Korean Airline Tour created and maintained a relationship with customers by using Facebook for individual communication purposes. Travel 2.0 refers a model of Web 2.0 on tourism industries which provides virtual travel communities. The travel 2.0 model allows users to create their own content and exchange their words through globally interactive features on websites. The users also can contribute their experiences, images and suggestions regarding their trips through online travel communities. For example, TripAdvisor is an online travel community which enables user to rate and share autonomously their reviews and feedback on hotels and tourist destinations. Non pre-associate users can interact socially and communicate through discussion forums on TripAdvisor. Social media, especially Travel 2.0 websites, plays a crucial role in decision-making behaviors of travelers. 
User-generated content on social media tools has a significant impact on travelers' choices and organisation preferences. Travel 2.0 sparked a radical change in how travelers receive information, from business-to-customer marketing to peer-to-peer reviews. User-generated content became a vital tool for helping a number of travelers manage their international travels, especially first-time visitors. Travellers tend to trust and rely on peer-to-peer reviews and virtual communications on social media rather than the information provided by travel suppliers. In addition, autonomous review features on social media help travelers reduce risk and uncertainty before the purchasing stage. Social media is also a channel for customer complaints and negative feedback, which can damage the images and reputations of organisations and destinations. For example, a majority of UK travellers read customer reviews before booking hotels, and about half of those customers will avoid hotels that receive negative feedback. Organisations should therefore develop strategic plans to handle and manage negative feedback on social media. Although the user-generated content and rating systems on social media are outside a business's control, the business can monitor those conversations and participate in communities to enhance customer loyalty and maintain customer relationships. Education Web 2.0 could allow for more collaborative education. For example, blogs give students a public space to interact with one another and the content of the class. Some studies suggest that Web 2.0 can increase the public's understanding of science, which could improve government policy decisions. A 2012 study by researchers at the University of Wisconsin–Madison notes that "...the internet could be a crucial tool in increasing the general public's level of science literacy. This increase could then lead to better communication between researchers and the public, more substantive discussion, and more informed policy decision." Web-based applications and desktops Ajax has prompted the development of Web sites that mimic desktop applications, such as word processing, the spreadsheet, and slide-show presentation. WYSIWYG wiki and blogging sites replicate many features of PC authoring applications. Several browser-based services emerged, including EyeOS and YouOS (no longer active). Although named operating systems, many of these services are application platforms. They mimic the user experience of desktop operating systems, offering features and applications similar to a PC environment, and are able to run within any modern browser. However, these so-called "operating systems" do not directly control the hardware on the client's computer. Numerous web-based application services appeared during the dot-com bubble of 1997–2001 and then vanished, having failed to gain a critical mass of customers. Distribution of media XML and RSS Many regard syndication of site content as a Web 2.0 feature. Syndication uses standardized protocols to permit end-users to make use of a site's data in another context (such as another Web site, a browser plugin, or a separate desktop application). Protocols permitting syndication include RSS (really simple syndication, also known as Web syndication), RDF (as in RSS 1.1), and Atom, all of which are XML-based formats. Observers have started to refer to these technologies as Web feeds. 
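As a rough sketch of how such a feed might be consumed in another context, the TypeScript fragment below fetches an RSS 2.0 document in the browser and extracts its items; the feed URL is a placeholder, and real code would also have to handle Atom, RDF-based RSS 1.1, caching and cross-origin restrictions:

// Feed-consumption sketch: fetch an RSS (XML) document from one site and
// reuse its items elsewhere, e.g. in a headline widget. The URL is invented.
interface FeedItem {
  title: string;
  link: string;
}

async function loadHeadlines(feedUrl: string): Promise<FeedItem[]> {
  const response = await fetch(feedUrl);
  const xml = new DOMParser().parseFromString(await response.text(), "application/xml");

  // RSS 2.0 places entries in <item> elements with <title> and <link> children.
  return Array.from(xml.querySelectorAll("item")).map((item) => ({
    title: item.querySelector("title")?.textContent ?? "",
    link: item.querySelector("link")?.textContent ?? "",
  }));
}

loadHeadlines("https://example.com/feed.rss")
  .then((items) => items.forEach((i) => console.log(`${i.title}: ${i.link}`)));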
Specialized protocols such as FOAF and XFN (both for social networking) extend the functionality of sites and permit end-users to interact without centralized Web sites. Web APIs Web 2.0 often uses machine-based interactions such as REST and SOAP. Servers often expose proprietary Application programming interfaces (APIs), but standard APIs (for example, for posting to a blog or notifying a blog update) have also come into use. Most communications through APIs involve XML or JSON payloads. REST APIs, through their use of self-descriptive messages and hypermedia as the engine of application state, should be self-describing once an entry URI is known. Web Services Description Language (WSDL) is the standard way of publishing a SOAP Application programming interface and there are a range of Web service specifications. Trademark In November 2004, CMP Media applied to the USPTO for a service mark on the use of the term "WEB 2.0" for live events. On the basis of this application, CMP Media sent a cease-and-desist demand to the Irish non-profit organisation IT@Cork on May 24, 2006, but retracted it two days later. The "WEB 2.0" service mark registration passed final PTO Examining Attorney review on May 10, 2006, and was registered on June 27, 2006. The European Union application (which would confer unambiguous status in Ireland) was declined on May 23, 2007. Criticism Critics of the term claim that "Web 2.0" does not represent a new version of the World Wide Web at all, but merely continues to use so-called "Web 1.0" technologies and concepts: First, techniques such as Ajax do not replace underlying protocols like HTTP, but add a layer of abstraction on top of them. Second, many of the ideas of Web 2.0 were already featured in implementations on networked systems well before the term "Web 2.0" emerged. Amazon.com, for instance, has allowed users to write reviews and consumer guides since its launch in 1995, in a form of self-publishing. Amazon also opened its API to outside developers in 2002.Previous developments also came from research in computer-supported collaborative learning and computer-supported cooperative work (CSCW) and from established products like Lotus Notes and Lotus Domino, all phenomena that preceded Web 2.0. Tim Berners-Lee, who developed the initial technologies of the Web, has been an outspoken critic of the term, while supporting many of the elements associated with it. In the environment where the Web originated, each workstation had a dedicated IP address and always-on connection to the Internet. Sharing a file or publishing a web page was as simple as moving the file into a shared folder. Perhaps the most common criticism is that the term is unclear or simply a buzzword. For many people who work in software, version numbers like 2.0 and 3.0 are for software versioning or hardware versioning only, and to assign 2.0 arbitrarily to many technologies with a variety of real version numbers has no meaning. The web does not have a version number. For example, in a 2006 interview with IBM developerWorks podcast editor Scott Laningham, Tim Berners-Lee described the term "Web 2.0" as jargon:"Nobody really knows what it means... If Web 2.0 for you is blogs and wikis, then that is people to people. But that was what the Web was supposed to be all along... Web 2.0, for some people, it means moving some of the thinking [to the] client side, so making it more immediate, but the idea of the Web as interaction between people is really what the Web is. 
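Returning to the Web API discussion above, the fragment below is a minimal, hypothetical TypeScript sketch of a self-describing REST payload: from a known entry URI a client can discover related resources by following embedded links. The resource shape, link relations and URIs are invented for illustration and do not describe any particular service:

// Hypothetical self-describing REST resource: related resources are reached
// by following the embedded "links" rather than hard-coding their URIs.
interface Link {
  rel: string;   // relationship, e.g. "self", "comments", "author"
  href: string;  // URI of the related resource
}

interface PostResource {
  id: number;
  title: string;
  links: Link[];
}

const entryResponse: PostResource = {
  id: 42,
  title: "What is Web 2.0?",
  links: [
    { rel: "self", href: "/api/posts/42" },
    { rel: "comments", href: "/api/posts/42/comments" },
    { rel: "author", href: "/api/users/7" },
  ],
};

// The client only needs the entry URI and the link relations it understands.
const commentsUri = entryResponse.links.find((l) => l.rel === "comments")?.href;
console.log(commentsUri); // "/api/posts/42/comments"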
That was what it was designed to be... a collaborative space where people can interact." Other critics labeled Web 2.0 "a second bubble" (referring to the Dot-com bubble of 1997–2000), suggesting that too many Web 2.0 companies attempt to develop the same product with a lack of business models. For example, The Economist has dubbed the mid- to late-2000s focus on Web companies as "Bubble 2.0". In terms of Web 2.0's social impact, critics such as Andrew Keen argue that Web 2.0 has created a cult of digital narcissism and amateurism, which undermines the notion of expertise by allowing anybody, anywhere to share and place undue value upon their own opinions about any subject and post any kind of content, regardless of their actual talent, knowledge, credentials, biases or possible hidden agendas. Keen's 2007 book, Cult of the Amateur, argues that the core assumption of Web 2.0, that all opinions and user-generated content are equally valuable and relevant, is misguided. Additionally, Sunday Times reviewer John Flintoff has characterized Web 2.0 as "creating an endless digital forest of mediocrity: uninformed political commentary, unseemly home videos, embarrassingly amateurish music, unreadable poems, essays and novels... [and that Wikipedia is full of] mistakes, half-truths and misunderstandings". In a 1994 Wired interview, Steve Jobs, forecasting the future development of the web for personal publishing, said: "The Web is great because that person can't foist anything on you—you have to go get it. They can make themselves available, but if nobody wants to look at their site, that's fine. To be honest, most people who have something to say get published now." Michael Gorman, former president of the American Library Association, has been vocal about his opposition to Web 2.0 due to the lack of expertise that it outwardly claims, though he believes that there is hope for the future: "The task before us is to extend into the digital world the virtues of authenticity, expertise, and scholarly apparatus that have evolved over the 500 years of print, virtues often absent in the manuscript age that preceded print". There is also a growing body of critique of Web 2.0 from the perspective of political economy. Since, as Tim O'Reilly and John Battelle put it, Web 2.0 is based on the "customers... building your business for you," critics have argued that sites such as Google, Facebook, YouTube, and Twitter are exploiting the "free labor" of user-created content. Web 2.0 sites use Terms of Service agreements to claim perpetual licenses to user-generated content, and they use that content to create profiles of users to sell to marketers. This is part of increased surveillance of user activity happening within Web 2.0 sites. Jonathan Zittrain of Harvard's Berkman Center for the Internet and Society argues that such data can be used by governments who want to monitor dissident citizens. The rise of AJAX-driven web sites where much of the content must be rendered on the client has meant that users of older hardware are given worse performance versus a site purely composed of HTML, where the processing takes place on the server. Accessibility for disabled or impaired users may also suffer in a Web 2.0 site. Others have noted that Web 2.0 technologies are tied to particular political ideologies. "Web 2.0 discourse is a conduit for the materialization of neoliberal ideology." The technologies of Web 2.0 may also "function as a disciplining technology within the framework of a neoliberal political economy."
When looking at Web 2.0 from a cultural convergence view, according to Henry Jenkins, it can be problematic because the consumers are doing more and more work in order to entertain themselves. For instance, Twitter offers online tools for users to create their own tweets; in this way, the users are doing all the work when it comes to producing media content. See also Cloud computing Collective intelligence Connectivity of social media Crowd computing Cute cat theory of digital activism Enterprise social software Libraries in virtual worlds List of free and open-source web applications Mass collaboration New media Office suite Open-source governance Privacy concerns with social networking services Responsive web design Semantic Web, sometimes called Web 3.0 Social commerce Social shopping Web 2.0 for development (web2fordev) Web3 You (Time Person of the Year) Application domains Sci-Mate Business 2.0 E-learning 2.0 e-government (Government 2.0) Health 2.0 Science 2.0 References External links 2000s in computing Brand management Cloud applications Internet ages Internet culture New media Social information processing Technology neologisms Web services 1999 neologisms 1990s in computing
Web 2.0
[ "Technology" ]
7,470
[ "Multimedia", "New media" ]
1,555,268
https://en.wikipedia.org/wiki/Nikolay%20Basov
Nikolay Gennadiyevich Basov (; 14 December 1922 – 1 July 2001) was a Russian Soviet physicist and educator. For his fundamental work in the field of quantum electronics that led to the development of the laser and maser, Basov shared the 1964 Nobel Prize in Physics with Alexander Prokhorov and Charles Hard Townes. Early life Basov was born in the town of Usman, now in Lipetsk Oblast, in 1922. He finished school in 1941 in Voronezh, and was later called up for military service at the Kuibyshev Military Medical Academy. In 1943 he left the academy and served in the Red Army, participating in the Second World War with the 1st Ukrainian Front. Professional career Basov graduated from the Moscow Engineering Physics Institute (MEPhI) in 1950. He then held a professorship at MEPhI and also worked in the Lebedev Physical Institute (LPI), where he defended a dissertation for the Candidate of Sciences degree (equivalent to PhD) in 1953 and a dissertation for the Doctor of Sciences degree in 1956. Basov was the Director of the LPI from 1973 to 1988. He was elected a corresponding member of the USSR Academy of Sciences (Russian Academy of Sciences since 1991) in 1962 and a Full Member of the Academy in 1966. In 1967, he was elected a Member of the Presidium of the Academy (1967—1990), and from 1990 he was a councillor of the Presidium of the USSR Academy of Sciences. In 1971 he was elected a Member of the German Academy of Sciences Leopoldina. He was Honorary President and Member of the International Academy of Science, Munich. He was the head of the laboratory of quantum radiophysics at the LPI until his death in 2001. In the early 1950s Basov and Prokhorov developed the theoretical grounds for the creation of a molecular oscillator and constructed such an oscillator based on ammonia. Later this oscillator became known as the maser. They also proposed a method for the production of population inversion using inhomogeneous electric and magnetic fields. Their results were presented at a national conference in 1952 and published in 1954. Basov then proceeded to the development of the laser, an analogous generator of coherent light. In 1955 he designed a three-level laser, and in 1959 suggested constructing a semiconductor laser, which he built with collaborators in 1963. Basov and co-workers proposed the disk laser in 1966 and experimentally realized thin-disk active-mirror semiconductor lasers. With colleagues he developed the first nonlinear theory of coherent addition of laser sets. N. G. Basov encouraged the nonlinear-optics researchers at the Lebedev Institute who discovered optical phase conjugation. Together with Lebedev Institute researchers he realized a robust method of phase-locking laser arrays via optical phase conjugation in stimulated Brillouin scattering. Basov's contributions to the development of the laser and maser, which won him the Nobel Prize in 1964, also led to new missile defense initiatives. He died on 1 July 2001 in Moscow and was buried at Novodevichy Cemetery. Politics He entered politics in 1951 and became a member of parliament (the Soviet of the Union of the Supreme Soviet) in 1974. Following U.S. President Ronald Reagan's speech on SDI in 1983, Basov signed a letter along with other Soviet scientists condemning the initiative, which was published in the New York Times. In 1985 he declared the Soviet Union was capable of matching SDI proposals made by the U.S. Books N. G. Basov, K. A. Brueckner (Editor-in-Chief), S. W. Haan, C. Yamanaka.
Inertial Confinement Fusion, 1992, Research Trends in Physics Series published by the American Institute of Physics Press (presently Springer, New York). V. Stefan and N. G. Basov (Editors). Semiconductor Science and Technology, Volume 1. Semiconductor Lasers. (Stefan University Press Series on Frontiers in Science and Technology) (Paperback), 1999. V. Stefan and N. G. Basov (Editors). Semiconductor Science and Technology, Volume 2: Quantum Dots and Quantum Wells. (Stefan University Press Series on Frontiers in Science and Technology) (Paperback), 1999. Awards and honours Lenin Prize (1959) Nobel Prize in Physics (1964, for pioneering work done in the field of quantum electronics) Hero of Socialist Labour — twice (1969, 1982) Gold Medal of the Czechoslovak Academy of Sciences (1975) A. Volta Gold Medal (1977) Kalinga Prize (1986) USSR State Prize (1989) Lomonosov Grand Gold Medal, Moscow State University (1990) Order of Lenin – five times Order of Merit for the Fatherland, 2nd class Order of the Patriotic War, 2nd class See also Excimer laser Maser Alexander Prokhorov Lebedev Institute of Physics Disk laser Nonlinear optics Coherent addition Michelson interferometer References External links Basov's grave Detailed biography including the Nobel Lecture, 11 December 1964 Semiconductor Lasers Oral History interview transcript with Nikolay Basov on 14 September 1984, American Institute of Physics, Niels Bohr Library and Archives 1922 births 2001 deaths People from Usman, Russia Full Members of the USSR Academy of Sciences Full Members of the Russian Academy of Sciences Foreign members of the Bulgarian Academy of Sciences Foreign fellows of the Indian National Science Academy Heroes of Socialist Labour Recipients of the Lenin Prize Nobel laureates in Physics Recipients of the Order "For Merit to the Fatherland", 2nd class Soviet Nobel laureates Recipients of the USSR State Prize Recipients of the Order of Lenin Soviet physicists Optical physicists Laser researchers Soviet inventors Soviet military personnel of World War II Spectroscopists Commanders of the Order of Merit of the Republic of Poland Recipients of the Lomonosov Gold Medal Burials at Novodevichy Cemetery Members of the German National Academy of Sciences Leopoldina Moscow Engineering Physics Institute alumni Members of the German Academy of Sciences at Berlin Kalinga Prize recipients Russian scientists Fellows of the American Physical Society
Nikolay Basov
[ "Physics", "Chemistry", "Technology" ]
1,228
[ "Physical chemists", "Spectrum (physical sciences)", "Analytical chemists", "Recipients of the Lomonosov Gold Medal", "Spectroscopists", "Science and technology awards", "Spectroscopy" ]
1,555,317
https://en.wikipedia.org/wiki/Gunter%27s%20chain
Gunter's chain (also known as Gunter's measurement) is a distance-measuring device used for surveying. It was designed and introduced in 1620 by English clergyman and mathematician Edmund Gunter (1581–1626). It enabled plots of land to be accurately surveyed and plotted, for legal and commercial purposes. Gunter developed an actual measuring chain of 100 links. These, the chain and the link, became statutory measures in England and subsequently the British Empire. Description The chain is divided into 100 links, usually marked off into groups of 10 by brass rings or tags which simplify intermediate measurement. Each link is thus long. A quarter chain, or 25 links, measures and thus measures a rod (or pole). Ten chains measure a furlong and 80 chains measure a statute mile. Gunter's chain reconciled two seemingly incompatible systems: the traditional English land measurements, based on the number four, and decimals based on the number 10. Since an acre measured 10 square chains in Gunter's system, the entire process of land area measurement could be computed using measurements in chains, and then converted to acres by dividing the results by 10. Hence 10 chains by 10 chains (100 square chains) equals 10 acres, 5 chains by 5 chains (25 square chains) equals 2.5 acres. By the 1670s the chain and the link had become statutory units of measurement in England. Method The method of surveying a field or other parcel of land with Gunter's chain is to first determine corners and other significant locations, and then to measure the distance between them, taking two points at a time. The surveyor is assisted by a chainman. A ranging rod (usually a prominently coloured wooden pole) is placed in the ground at the destination point. Starting at the originating point the chain is laid out towards the ranging rod, and the surveyor then directs the chainman to make the chain perfectly straight and pointing directly at the ranging rod. A pin is put in the ground at the forward end of the chain, and the chain is moved forward so that its hind end is at that point, and the chain is extended again towards the destination point. This process is called ranging, or in the US, chaining; it is repeated until the destination rod is reached, when the surveyor notes how many full lengths (chains) have been laid, and he can then directly read how many links (one-hundredth parts of the chain) are in the distance being measured. The chain usually ends in a handle which may or may not be part of the measurement. An inner loop (visible in the NMAH photograph) is the correct place to put the pin for some chains. Many chains were made with the handles as part of the end link and thus were included in the measurement. The whole process is repeated for all the other pairs of points required, and it is a simple matter to make a scale diagram of the plot of land. The process is surprisingly accurate and requires only very low technology. Surveying with a chain is simple if the land is level and continuous—it is not physically practicable to range across large depressions or significant waterways, for example. On sloping land, the chain was to be "leveled" by raising one end as needed, so that undulations did not increase the apparent length of the side or the area of the tract. Unit of length Although link chains were later superseded by the steel ribbon tape (a form of tape measure), its legacy was a new statutory unit of length called the chain, equal to 22 yards (66 feet) of 100 links. 
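To make the arithmetic behind Gunter's system concrete, here is a small illustrative sketch in Python (not part of the original text; the function names are invented for the example). The figures follow directly from the definitions above: 100 links to the chain, 66 feet to the chain, 80 chains to the mile, and 10 square chains to the acre.

# Conversions implied by Gunter's system of chains and links.
LINKS_PER_CHAIN = 100
FEET_PER_CHAIN = 66.0
CHAINS_PER_MILE = 80
SQUARE_CHAINS_PER_ACRE = 10

def links_to_feet(links):
    # One link is 66/100 = 0.66 feet (7.92 inches).
    return links * FEET_PER_CHAIN / LINKS_PER_CHAIN

def chains_to_miles(chains):
    return chains / CHAINS_PER_MILE

def rectangle_acres(length_chains, width_chains):
    # Area in square chains, divided by 10 to give acres.
    return (length_chains * width_chains) / SQUARE_CHAINS_PER_ACRE

# Examples matching the text: 10 x 10 chains = 10 acres, 5 x 5 chains = 2.5 acres,
# a quarter chain (25 links) = 16.5 feet (one rod), and 80 chains = 1 mile.
print(rectangle_acres(10, 10))   # 10.0
print(rectangle_acres(5, 5))     # 2.5
print(links_to_feet(25))         # 16.5
print(chains_to_miles(80))       # 1.0

Division by 10 to pass from square chains to acres is exactly the decimal shortcut described above, which is what made the chain so convenient for land measurement.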
This unit still exists as a location identifier on British railways, as well as all across America in what is called the public land survey system. In the United States (US), for example, Public Lands Survey plats are published in the chain unit to maintain the consistency of a two-hundred-year-old database. In the Midwest of the US it is not uncommon to encounter deeds with references to chains, poles, or rod units, especially in farming country. Minor roads surveyed in Australia and New Zealand in the 19th and early 20th centuries are customarily one chain wide. The length of a cricket pitch is one chain (22 yards). Similar measuring chains A similar American system, of lesser popularity, is Ramsden's or the engineer's system, where the chain consists also of 100 links, each one foot (0.3048 m) long. The original of such chains was that constructed, to very high precision, for the measurement of the baselines of the Anglo-French Survey (1784–1790) and the Principal Triangulation of Great Britain. The even less common Rathborn system, also from the 17th century, is based on a 200-link chain of two rods (33 feet, 10.0584 m) length. Each rod (or perch or pole) consists of 100 links, (1.98 inches, 50.292 mm each), which are called seconds (), ten of which make a prime (, 19.8 inches, 0.503 m). Vincent Wing made chains with 9.90-inch links, most commonly as 33-foot half-chains of 40 links. These chains were sometimes used in the American colonies, particularly Pennsylvania. In India, surveying chains (occasionally 30 metres) in length are used. Links are long. In France after the French Revolution, and later in countries that had adopted the Metric System, 10-metre (32 ft 9.7 in) chains, of 50 links each long were used until the 1950s. See also Distance measurement References External links How to make a Gunter's Chain Image from 1675 Nineteenth century image Surveying instruments Units of length Imperial units Customary units of measurement in the United States Length, distance, or range measuring devices
Gunter's chain
[ "Mathematics" ]
1,177
[ "Quantity", "Units of measurement", "Units of length" ]
1,555,443
https://en.wikipedia.org/wiki/General%20insurance
General insurance or non-life insurance policies, including automobile and homeowners policies, provide payments depending on the loss from a particular financial event. General insurance is typically defined as any insurance that is not determined to be life insurance. It is called property and casualty insurance in the United States and Canada and non-life insurance in Continental Europe. In the United Kingdom, insurance is broadly divided into three areas: personal lines, commercial lines and London market. The London market insures large commercial risks such as supermarkets, football players, corporation risks, and other very specific risks. It consists of a number of insurers, reinsurers, P&I Clubs, brokers and other companies that are typically physically located in the City of London. Lloyd's of London is a big participant in this market. The London market also participates in personal lines and commercial lines, domestic and foreign, through reinsurance. Commercial lines products are usually designed for relatively small legal entities. These would include workers' compensation (employers liability), public liability, product liability, commercial fleet and other general insurance products sold in a relatively standard fashion to many organisations. There are many companies that supply comprehensive commercial insurance packages for a wide range of different industries, including shops, restaurants and hotels. Personal lines products are designed to be sold in large quantities. This would include autos (private car), homeowners (household), pet insurance, creditor insurance and others. ACORD, which is the insurance industry global standards organization, has standards for personal and commercial lines and has been working with the Australian General Insurers to develop those XML standards, standard applications for insurance, and certificates of currency. Types of general insurance General insurance can be categorised into the following: Motor insurance: Motor insurance can be divided into two groups: two-wheeled and four-wheeled vehicle insurance. Health insurance: Common types of health insurance include individual health insurance, family floater health insurance, comprehensive health insurance and critical illness insurance. Travel insurance: Travel insurance can be broadly grouped into individual travel policy, family travel policy, student travel insurance, and senior citizen health insurance. Home insurance: Home insurance protects a house and its contents. Marine insurance: Marine insurance covers goods, freight, cargo, and other interests against loss or damage during transit by rail, road, sea and/or air. Commercial insurance: Commercial insurance encompasses solutions for all sectors of the industry arising out of business operations. Accident insurance: Accidents of different types can occur at any time and place and can involve any person or object; people and vehicles are particularly prone to accidents causing injury and damage. Fire insurance: In order to get assets, stock or machinery insured against fire, a proposal form is filled in and submitted to the insurance company. The insurance company examines the proposal with due regard to various factors, and the periodic premium is fixed. Theft insurance Property insurance Aviation insurance Livestock insurance Crop insurance Market trends The United States was the largest market for non-life insurance premiums written in 2005, followed by the European Union and Japan.
See also Insurance Outstanding claims reserves References Types of insurance Actuarial science
General insurance
[ "Mathematics" ]
637
[ "Applied mathematics", "Actuarial science" ]
1,555,604
https://en.wikipedia.org/wiki/Gynandromorphism
A gynandromorph is an organism that contains both male and female characteristics. The term comes from the Greek γυνή (gynē) 'female', ἀνήρ (anēr) 'male', and μορφή (morphē) 'form', and is used mainly in the field of entomology. Gynandromorphism is most frequently recognized in organisms that have strong sexual dimorphism, such as certain butterflies, spiders, and birds, but has been recognized in numerous other types of organisms. Occurrence Gynandromorphism has been noted in Lepidoptera (butterflies and moths) since the 1700s. It has also been observed in crustaceans, such as lobsters and crabs, in spiders, ticks, flies, locusts, crickets, dragonflies, ants, termites, bees, lizards, snakes, rodents, and birds. It is generally rare, but reporting depends on the ease of detecting it (whether a species is strongly sexually dimorphic) and how well-studied a region or organism is. For example, up until 2023 gynandromorphism had been reported in more than 40 bird species, but the vast majority of these are from the Palearctic and Nearctic, indicating that it likely is underreported in parts of the world that are not as biologically well-studied. Pattern of distribution of male and female tissues in a single organism A gynandromorph can have bilateral symmetry—one side female and one side male. Alternatively, the distribution of male and female tissue can be more haphazard. Bilateral gynandromorphy arises very early in development, typically when the organism has between 8 and 64 cells. Later stages produce a more random pattern. A notable example in birds is the zebra finch. These birds have lateralised brain structures in the face of a common steroid signal, providing strong evidence for a non-hormonal primary sex mechanism regulating brain differentiation. Causes The cause of this phenomenon is typically (but not always) an event in mitosis during early development. While the organism contains only a few cells, one of the dividing cells does not split its sex chromosomes in the typical way. This leads to one of the two cells having sex chromosomes that cause male development and the other cell having chromosomes that cause female development. For example, an XY cell undergoing mitosis duplicates its chromosomes, becoming XXYY. Usually this cell would divide into two XY cells, but on rare occasions the cell may divide into an X cell and an XYY cell. If this happens early in development, then a large portion of the cells are X and a large portion are XYY. Since X and XYY dictate different sexes, the organism has tissue that is female and tissue that is male. A developmental network theory of how gynandromorphs develop from a single cell, based on proposed links between parental allelic chromosomes, was put forward in a 2012 working paper. The major types of gynandromorph (bilateral, polar and oblique) are computationally modeled. Many other possible gynandromorph combinations are computationally modeled, including predicted morphologies yet to be discovered. The article relates gynandromorph developmental control networks to how species may form. The models are based on a computational model of bilateral symmetry. As a research tool Gynandromorphs occasionally afford a powerful tool in genetic, developmental, and behavioral analyses.
In Drosophila melanogaster, for instance, they provided evidence that male courtship behavior originates in the brain, that males can distinguish conspecific females from males by the scent or some other characteristic of the posterior, dorsal integument of females, that the germ cells originate in the posterior-most region of the blastoderm, and that somatic components of the gonads originate in the mesodermal region of the fourth and fifth abdominal segments. See also Mosaicism Androgyny Chimerism Gynomorph Half-sider budgerigar Hermaphrodite References External links "Stunning Dual-Sex Animals" at Live Science Aayushi Pratap: This rare bird is male on one side and female on the other; on: Sciencenews; October 6, 2020; about a gynandromorph rose-breasted grosbeak. Insect physiology Sexual dimorphism
Gynandromorphism
[ "Physics", "Biology" ]
916
[ "Sex", "Sexual dimorphism", "Symmetry", "Asymmetry" ]
1,555,644
https://en.wikipedia.org/wiki/Battleship%20%28puzzle%29
The Battleship puzzle (sometimes called Bimaru, Yubotu, Solitaire Battleships or Battleship Solitaire) is a logic puzzle based on the Battleship guessing game. It and its variants have appeared in several puzzle contests, including the World Puzzle Championship, and puzzle magazines, such as Games magazine. Solitaire Battleship was invented in Argentina by Jaime Poniachik and was first featured in 1982 in the Argentine magazine . Battleship gained more widespread popularity after its international debut at the first World Puzzle Championship in New York City in 1992. Battleship appeared in Games magazine the following year and remains a regular feature of the magazine. Variants of Battleship have emerged since the puzzle's inclusion in the first World Puzzle Championship. Battleship is played in a grid of squares that hides ships of different sizes. Numbers alongside the grid indicate how many squares in a row or column are occupied by part of a ship. History The solitaire version of Battleship was invented in Argentina in 1982 under the name Batalla Naval, with the first published puzzles appearing in 1982 in the Spanish magazine . Battleship was created by the magazine's founder, Jaime Poniachik, along with its editors Eduardo Abel Gimenez, Jorge Varlotta, and Daniel Samoilovich. After 1982, no more Battleship puzzles were published until 1987, when they appeared in , a renamed version of . The publishing company of regularly publishes Battleship puzzles in its monthly magazine . Battleship made its international debut at the first World Puzzle Championship in New York in 1992 and met with success. The next World Puzzle Championship in 1993 featured a variant of Battleship that omitted some of the row and column numbers. Battleship was first published in Games magazine in 1993, the year after the first World Puzzle Championship. Other variants later emerged, including Hexagonal Battleship, 3D Battleship, and Diagonal Battleship. Rules In Battleship, an armada of battleships is hidden in a square grid of 10×10 small squares. The armada includes one battleship four squares long, two cruisers three squares long, three destroyers two squares long, and four submarines one square in size. Each ship occupies a number of contiguous squares on the grid, arranged horizontally or vertically. The ships are placed so that no ship touches any other ship, not even diagonally. The goal of the puzzle is to discover where the ships are located. A grid may start with clues in the form of squares that have already been solved, showing a submarine, an end piece of a ship, a middle piece of a ship, or water. Each row and column also has a number beside it, indicating the number of squares occupied by ship parts in that row or column, respectively. Variants of the standard form of solitaire battleship have included using larger or smaller grids (with comparable changes in the size of the hidden armada), as well as using a hexagonal grid. A version lets the solver shoot at 3 positions in one turn. The answer is returned sorted by size. For instance: (1,2) - (3,6) - (6,4) => 420 means that one of the three coordinates hit a ship of size 4, another hit a ship of size 2, and one coordinate returned a miss. Strategy The basic solving strategy for a Battleship puzzle is to add segments to incomplete ships where appropriate, draw water in squares that are known not to contain a ship segment, and to complete ships in a row or column whose number is the same as the number of unsolved squares in that row or column, respectively.
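As an illustration of the row-and-column counting that underlies this strategy, the following sketch in Python (not part of the original text, and deliberately simplified: it ignores the no-touching rule and the fleet composition) checks whether a candidate grid is consistent with the given row and column tallies; cells are marked 'S' for a ship segment and '.' for water.

# Check a candidate Battleship grid against its row and column counts.
def counts_match(grid, row_counts, col_counts):
    rows = len(grid)
    cols = len(grid[0]) if rows else 0
    # Every row must contain exactly the announced number of ship segments.
    for r in range(rows):
        if sum(1 for c in range(cols) if grid[r][c] == "S") != row_counts[r]:
            return False
    # The same must hold for every column.
    for c in range(cols):
        if sum(1 for r in range(rows) if grid[r][c] == "S") != col_counts[c]:
            return False
    return True

# A tiny 4x4 example with one vertical destroyer and two submarines.
grid = [
    list("S..."),
    list("S.S."),
    list("...."),
    list("..S."),
]
print(counts_match(grid, [1, 2, 0, 1], [2, 0, 2, 0]))  # True

A full solver would combine this check with the placement rules (ships are straight lines that may not touch, and the fleet sizes are fixed), but the tally comparison is the constraint that the row and column numbers give the solver for free.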
More advanced strategies include looking for places where the largest ship that has not yet been located can fit into the grid, and looking for rows and columns that are almost complete and determining if there is only one way to complete them. In computers Battleship is an NP-complete problem. In 1997, former contributing editor to the Battleship column in Games magazine Moshe Rubin released Fathom It!, a popular Windows implementation of Battleship. See also Discrete tomography Nonogram Battleship the game References Further reading External links The Battleship Omnibus - Extensive information on variants, competitions, and strategies. Logic puzzles Puzzle video games NP-complete problems
Battleship (puzzle)
[ "Mathematics" ]
824
[ "NP-complete problems", "Mathematical problems", "Computational problems" ]
1,555,658
https://en.wikipedia.org/wiki/Digit%20%28unit%29
The digit or finger is an ancient and obsolete non-SI unit of measurement of length. It was originally based on the breadth of a human finger. It was a fundamental unit of length in the Ancient Egyptian, Mesopotamian, Hebrew, Ancient Greek and Roman systems of measurement. In astronomy a digit is one twelfth of the diameter of the sun or the moon. History Ancient Egypt The digit, also called a finger or fingerbreadth, is a unit of measurement originally based on the breadth of a human finger. In Ancient Egypt it was the basic unit of subdivision of the cubit. On surviving Ancient Egyptian cubit-rods, the royal cubit is divided into seven palms of four digits or fingers each. The royal cubit measured approximately 525 mm, so the length of the ancient Egyptian digit was about 19 mm. Mesopotamia In the classical Akkadian Empire system instituted in about 2250 BC during the reign of Naram-Sin, the finger was one-thirtieth of a cubit length. The cubit was equivalent to approximately 497 mm, so the finger was equal to about 17 mm. Basic length was used in architecture and field division. Ancient Hebrew system Ancient Greece Ancient Rome Britain A digit (lat. digitus, "finger"), when used as a unit of length, is usually a sixteenth of a foot or 3/4" (1.905 cm for the international inch). The width of an adult human male finger tip is indeed about 2 centimetres. In English this unit has mostly fallen out of use, as do others based on the human arm: finger (7/6 digit), palm (4 digits), hand (16/3 digits), shaftment (8 digits), span (12 digits), cubit (24 digits) and ell (60 digits). Astronomy In astronomy a digit is, or was until recently, one twelfth of the diameter of the sun or the moon. This is found in the Moralia of Plutarch, XII:23, but the definition as exactly one twelfth of the diameter may be due to Ptolemy. Sosigenes of Alexandria had observed in the 1st century AD that on a dioptra, a disc with a diameter of 11 or 12 digits (of length) was needed to cover the moon. The unit was used in Arab or Islamic astronomical works such as those of Ṣadr al‐Sharīʿa al‐Thānī (d.1346/7), where it is called iṣba' , digit or finger. The astronomical digit was in use in Britain for centuries. Heath, writing in 1760, explains that 12 digits are equal to the diameter in eclipse of the sun, but that 23 may be needed for the Earth's shadow as it eclipses the moon, those over 12 representing the extent to which the Earth's shadow is larger than the Moon. The unit is apparently not in current use, but is found in recent dictionaries. See also Finger (unit) Finger tip unit Cubit Anthropic units References Units of length Human-based units of measurement
Digit (unit)
[ "Mathematics" ]
632
[ "Quantity", "Units of measurement", "Units of length" ]
1,555,681
https://en.wikipedia.org/wiki/Finger%20%28unit%29
A finger (sometimes fingerbreadth or finger's breadth) is any of several units of measurement that are approximately the width of an adult human finger. [Exactly which part of the finger should be used is not defined; the width at the base of fingernail (#6 in the sketch) is typically less than that at the knuckle (#5).] The digit, also known as digitus or digitus transversus (Latin), dactyl (Greek) or dactylus, or finger's breadth, is 3/4 of an inch or 1/16 of a foot (about 2 cm). In medicine and related disciplines (anatomy, radiology, etc.) the fingerbreadth (literally the width of a finger) is an informal but widely used unit of measure. In the measurement of distilled spirits, a finger of whiskey refers to the amount of whiskey that would fill a glass to the level of one finger wrapped around the glass at the bottom. Another definition (from Noah Webster): "nearly an inch." Finger is also the name of a longer unit of length, used historically in cloth measurement, to mean one eighth of a yard or 4½ inches (114.3 mm). Again, which finger and whose finger is not defined. These units have no legal status but remain in use for 'rough and ready' comparisons. See also ('6' in the diagram above) (before 1826) (from 1826) References Units of length Human-based units of measurement
Finger (unit)
[ "Mathematics" ]
307
[ "Quantity", "Units of measurement", "Units of length" ]
1,555,695
https://en.wikipedia.org/wiki/Palm%20%28unit%29
The palm is an obsolete anthropic unit of length, originally based on the width of the human palm and then variously standardized. The same name is also used for a second, rather larger unit based on the length of the human hand. The width of the palm was a traditional unit in Ancient Egypt, Israel, Greece, and Rome and in medieval England, where it was also known as the hand, handbreadth, or handsbreadth. The length of the hand—originally the Roman "greater palm"—formed the palm of medieval Italy and France. In Spanish customary units or was the palm, while was the span, the distance between an outstretched thumb and little finger. In Portuguese or was the span. History Ancient Egypt The Ancient Egyptian palm () has been reconstructed as about . The unit is attested as early as the reign of Djer, third pharaoh of the First Dynasty, and appears on many surviving cubit-rods. The palm was subdivided into four digits () of about . Three palms made up the span () or lesser span () of about . Four palms made up the foot () of about . Five made up the of about . Six made up the "Greek cubit" () of about . Seven made up the "royal cubit" () of about . Eight made up the pole () of about . Ancient Israel The palm was not a major unit in ancient Mesopotamia but appeared in ancient Israel as the , , or (, ."a spread"). Scholars were long uncertain as to whether this was reckoned using the Egyptian or Babylonian cubit, but now believe it to have approximated the Egyptian "Greek cubit", giving a value for the palm of about . As in Egypt, the palm was divided into four digits ( or ) of about and three palms made up a span () of about . Six made up the Hebrew cubit ( or ) of about , although the cubits mentioned in Ezekiel follow the royal cubit in consisting of seven palms comprising about . Ancient Greece The Ancient Greek palm (, palaistḗ, , dō̂ron, or , daktylodókhmē) made up ¼ of the Greek foot (poûs), which varied by region between . This gives values for the palm between , with the Attic palm around . These various palms were divided into four digits (dáktylos) or two "middle phalanges" (kóndylos). Two palms made a half-foot (hēmipódion or dikhás); three, a span (spithamḗ); four, a foot (poûs); five, a short cubit (pygōn); and six, a cubit (pē̂khys). The Greeks also had a less common "greater palm" of five digits. Ancient Rome The Roman palm () or lesser palm () made up ¼ of the Roman foot (), which varied in practice between but is thought to have been officially . This would have given the palm a notional value of within a range of a few millimeters. The palm was divided into four digits () of about or three inches () of about . Three made a span ( or "greater palm") of about ; four, a Roman foot; five, a hand-and-a-foot () of about ; six, a cubit () of about . Continental Europe The palms of medieval () and early modern Europe—the Italian, Spanish, and Portuguese and French —were based upon the Roman "greater palm", reckoned as a hand's span or length. In Italy, the palm () varied regionally. The Genovese palm was about ; in the Papal States, the Roman palm about according to Hutton but divided into the Roman "architect's palm" () of about and "merchant's palm" () of about according to Greaves; and the Neapolitan palm reported as by Riccioli but by Hutton's other sources. On Sicily and Malta, it was . In France, the palm ( or ) was about in Pernes-les-Fontaines, Vaucluse, and about in Languedoc. 
Palaiseau gave metric equivalents for the palme or palmo in 1816, and Rose provided English equivalents in 1900. From 19th-century Italian sources it emerges that the ancient Venetian palm, five of which made a passo (pace), was equivalent to 0.3774 metres; the Neapolitan palm was 0.26333670 metres (from 1480 to 1840); and the Neapolitan palm was 0.26455026455 metres (according to the law of 6 April 1840), which differs from the palm equivalents in metres cited above. England The English palm, handbreadth, or handsbreadth is three inches (7.62 cm) or, equivalently, four digits. The measurement was, however, not always well distinguished from the hand or handful, which became equal to four inches by a 1541 statute of Henry VIII. The palm was excluded from the British Weights and Measures Act 1824 that established the imperial system and is not a standard US customary unit. Elsewhere The Moroccan palm is given by Hutton as about . Notes References Units of length Human-based units of measurement Obsolete units of measurement
Palm (unit)
[ "Mathematics" ]
1,096
[ "Obsolete units of measurement", "Quantity", "Units of measurement", "Units of length" ]
1,555,706
https://en.wikipedia.org/wiki/Tipu%27s%20Tiger
Tipu's Tiger, Tippu's Tiger or Tipoo’s Tiger is an 18th-century automaton created for Tipu Sultan, the ruler of the Kingdom of Mysore (present day Karnataka) in India. The carved and painted wood casing represents a tiger mauling a near life-size European man. Mechanisms inside the tiger and the man's body make one hand of the man move, emit a wailing sound from his mouth and grunts from the tiger. In addition a flap on the side of the tiger folds down to reveal the keyboard of a small pipe organ with 18 notes. The automaton incorporates Tipu's emblem, the tiger, and expresses his hatred of his enemy, the British of the East India Company. It was taken from his summer palace when East India Company troops stormed Tipu's capital in 1799. The Governor General, Lord Mornington, sent the tiger to Britain initially intending it to be an exhibit in the Tower of London. First exhibited to the London public in 1808 in East India House, then the offices of the East India Company in London, it was transferred to the Victoria and Albert Museum in 1880. It now forms part of the permanent exhibit on the "Imperial courts of South India". From the moment it arrived in London to the present day, Tipu's Tiger has been a popular attraction to the public. Background Tipu's Tiger was originally made for Tipu Sultan (also referred to as Tippoo Sahib, Tippoo Sultan and other epithets in nineteenth-century literature) in the Kingdom of Mysore (today in the Indian state of Karnataka) around 1795. Tipu Sultan used the tiger systematically as his emblem, employing tiger motifs on his weapons, on the uniforms of his soldiers, and in the decoration of his palaces. His throne rested upon a probably similar life-size wooden tiger, covered in gold; like other valuable treasures it was broken up for the highly organised prize fund shared out among the British army. Tipu had inherited power from his father Hyder Ali, a Muslim soldier who had risen to become dalwai or commander-in-chief under the ruling Hindu Wodeyar dynasty, but from 1760 was in effect the ruler of the kingdom. Hyder, after initially trying to ally with the British against the Marathas, had later become their firm enemy, as they represented the most effective obstacle to his expansion of his kingdom, and Tipu grew up with violently anti-British feelings. The tiger formed part of a specific group of large caricature images commissioned by Tipu showing European, often specifically British, figures being attacked by tigers or elephants, or being executed, tortured and humiliated and attacked in other ways. Many of these were painted by Tipu's orders on the external walls of houses in the main streets of Tipu's capital, Seringapatam. Tipu was in "close co-operation" with the French, who were at war with Britain and still had a presence in South India, and some of the French craftsmen who visited Tipu's court probably contributed to the internal works of the tiger. It has been proposed that the design was inspired by the death in 1792 of a son of General Sir Hector Munro, who had commanded a division during Sir Eyre Coote's victory at the Battle of Porto Novo (Parangipettai) in 1781 when Hyder Ali, Tipu Sultan's father, was defeated with a loss of 10,000 men during the Second Anglo-Mysore War. Hector Sutherland Munro, a 17-year-old East India Company Cadet on his way to Madras, was attacked and killed by a tiger on 22 December 1792 while hunting with several companions on Saugor Island in the Bay of Bengal (still one of the last refuges of the Bengal tiger). 
However a similar scene was depicted on the silver mount on a gun made for Tipu and dated 1787–88, five years before the incident. The Metropolitan Museum of Art, which owns the Staffordshire figure group illustrated, suggests that the continuing popularity of the subject into the 1820s was due to Tipu's automaton being on display in London. Description Tipu's Tiger is notable as an example of early musical automata from India, and also for the fact that it was especially constructed for Tipu Sultan. With overall dimensions for the object of high and long, the man at least is close to life-size. The painted wooden shell forming both figures likely draws upon South Indian traditions of Hindu religious sculpture. It is typically about half an inch thick, and now much reinforced on the inside following bomb damage in World War II. There are many openings at the head end, formed to match the pattern of the inner part of the painted tiger stripes, which allow the sounds from the pipes within to be heard better, and the tiger is "obviously male". The top part of the tiger's body can be lifted off to inspect the mechanics by removing four screws. The construction of the human figure is similar but the wood is much thicker. Examination and analysis by the V&A conservation department has determined that much of the current paint has been restored or overpainted. The human figure is clearly in European costume, but authorities differ as to whether it represents a soldier or civilian; the current text on the V&A website avoids specifying, other than describing the figure as "European". The operation of a crank handle powers several different mechanisms inside Tipu's Tiger. A set of bellows expels air through a pipe inside the man's throat, with its opening at his mouth. This produces a wailing sound, simulating the cries of distress of the victim. A mechanical link causes the man's left arm to rise and fall. This action alters the pitch of the 'wail pipe'. Another mechanism inside the tiger's head expels air through a single pipe with two tones. This produces a "regular grunting sound" simulating the roar of the tiger. Concealed behind a flap in the tiger's flank is the small ivory keyboard of a two-stop pipe organ in the tiger's body, allowing tunes to be played. The style of both shell and workings, and analysis of the metal content of the original brass pipes of the organ (many have been replaced), indicates that the tiger was of local manufacture. The presence of French artisans and French army engineers within Tipu's court has led many historians to suggest there was French input into the mechanism of this automaton. History Tipu's Tiger was part of the extensive plunder from Tipu's palace captured in the fall of Seringapatam, in which Tipu died, on 4 May 1799, at the culmination of the Fourth Anglo-Mysore War. An aide-de-camp to the Governor-General of the East India Company, Richard Wellesley, 1st Marquess Wellesley, wrote a memorandum describing the discovery of the object: The earliest published drawing of Tippoo's Tyger was the frontispiece for the book "A Review of the Origin, Progress and Result, of the Late Decisive War in Mysore with Notes" by James Salmond, published in London in 1800. 
It preceded the move of the exhibit from India to England and had a separate preface titled "Description of the Frontispiece" which said: Unlike Tipu's throne, which also featured a large tiger, and many other treasures in the palace, the materials of Tipu's Tiger had no intrinsic value, which together with its striking iconography is what preserved it and brought it back to England essentially intact. The Governors of the East India Company had at first intended to present the tiger to the Crown, with a view to it being displayed in the Tower of London, but then decided to keep it for the company. After some time in store, during which period the first of many "misguided and wholly unjustified endeavours at "improving" the piece" from a musical point of view may have taken place, it was displayed in the reading-room of the East India Company Museum and Library at East India House in Leadenhall Street, London from July 1808. It rapidly became a very popular exhibit, and the crank-handle controlling the wailing and grunting could apparently be freely turned by the public. The French author Gustave Flaubert visited London in 1851 to see the Great Exhibition, writes Julian Barnes, but finding nothing of interest in The Crystal Palace, visited the East India Company Museum where he was greatly enamoured by Tipu's Tiger. By 1843 it was reported that "The machine or organ ... is getting much out of repair, and does not altogether realize the expectation of the visitor". Eventually the crank-handle disappeared, to the great relief of students using the reading-room in which the tiger was displayed, and The Athenaeum later reported that When the East India Company was taken over by the Crown in 1858, the tiger was stored in Fife House, Whitehall until 1868, when it moved down the road to the new India Office, which occupied part of the building still used by today's Foreign and Commonwealth Office. In 1874 it was moved to the India Museum in South Kensington, which was in 1879 dissolved, with the collection distributed between other museums; the V&A records the tiger as acquired in 1880. During World War II the tiger was badly damaged by a German bomb which brought down the roof above it, breaking the wooden casing into several hundred pieces, which were carefully pieced together after the war, so that by 1947 it was back on display. In 1955 it was exhibited in New York at the Museum of Modern Art through the summer and spring. In recent times, Tipu's Tiger has formed an essential part of museum exhibitions exploring the historical interface between Eastern and Western civilisation, colonialism, ethnic histories and other subjects, one such being held at the Victoria and Albert Museum itself in autumn 2004 titled "Encounters: the meeting of Asia and Europe, 1500–1800". In 1995, 'The Tiger and the Thistle' bi-centennial exhibition was held in Scotland on the topic of "Tipu Sultan and the Scots". The organ was considered too fragile to travel to Scotland for the exhibition. Instead, a full-sized replica, made of fibreglass and painted by Derek Freeborn, was exhibited in its place. The replica itself also had an earlier Scottish association, having been made in 1986 for 'The Enterprising Scot' exhibition, which was held to commemorate the October 1985 merger of the Royal Scottish Museum and the National Museum of Antiquities of Scotland to form a new entity - the National Museum of Scotland.
Today Tipu's Tiger is arguably the best-known single work in the Victoria and Albert Museum as far as the general public is concerned. It is a "must-see" highlight for school children's visits to the Victoria and Albert Museum, and functions as an iconic representation of the museum, replicated in various forms of memorabilia in the museum shops including postcards, model kits and stuffed toys. Visitors can no longer operate the mechanism since the device is now kept in a glass case. A small model of this toy is exhibited in Tipu Sultan's wooden palace in Bangalore. Although other items associated with Tipu, including his sword, have recently been purchased and brought back to India by billionaire Vijay Mallya, Tipu's Tiger has not itself been the subject of an official repatriation request, presumably due to the ambiguity underlying Tipu's image in the eyes of Indians; his being an object of loathing in the eyes of some Indians while considered a hero by others. Symbolism Tipu Sultan identified himself with tigers; his personal epithet was 'The Tiger of Mysore,' his soldiers were dressed in 'tyger' jackets, his personal symbol invoked a tiger's face through clever use of calligraphy and the tiger motif is visible on his throne, and other objects in his personal possession, including Tipu's Tiger. Accordingly, as per Joseph Sramek, for Tipu the tiger striking down the European in the organ represented his symbolic triumph over the British. The British hunted tigers, not just to emulate the Mughals and other local elites in this "royal" sport, but also as a symbolic defeat of Tipu Sultan and any other ruler who stood in the path of British domination. The tiger motif was used in the "Seringapatam medal" which was awarded to those who participated in the 1799 campaign, where the British lion was depicted as overcoming a prostrate tiger, the tiger being the dynastic symbol of Tipu's line. The Seringapatam medal was issued in gold for the highest dignitaries who were associated with the campaign as well as select officers on general duty, silver for other dignitaries, field officers and other staff officers, in copper-bronze for the non-commissioned officers and in tin for the privates. On the reverse it had a frieze of the storming of the fort while the obverse showed, in the words of a nineteenth-century tome on medals, "the BRITISH LION subduing the TIGER, the emblem of the late Tippoo Sultan's government, with the period when it was effected and the following words 'ASAD ULLAH GHALIB', signifying the Lion of God is the conqueror, or the conquering Lion of God." In this manner, the iconography of this automaton was adopted and overturned by the British. When Tipu's Tiger was displayed in London in the nineteenth century, British viewers of the time "characterised the tiger as a trophy and symbolic justification of British colonial rule". Tipu's Tiger along with other trophies such as Tipu's sword, the throne of Ranjit Singh, Tantya Tope's kurta and Nana Saheb's betel-box which was made of brass, were all displayed as "memorabilia of the Mutiny". In one interpretation, the display of Tipu's Tiger in South Kensington, served to remind the visitor of the noblesse oblige of the British Empire to bring civilisation to the barbaric lands of which Tipu was king. 
Tipu's Tiger is also notable as a literal image of a tiger killing a European, an important symbol in England at the time, and from about 1820 the "Death of Munro" became one of the scenes in the repertoire of Staffordshire pottery figurines. Tiger-hunting in the British Raj is also considered to represent not just the political subjugation of India, but in addition, the triumph over India's environment. The iconography persisted and during the rebellion of 1857, Punch ran a political cartoon showing the Indian rebels as a tiger, attacking a victim in an identical pose to Tipu's Tiger, being defeated by the British forces shown by the larger figure of a lion. It has been suggested that Tipu's Tiger also contributed indirectly to the development of a popular early-20th-century stereotype of China as the "Sleeping Lion". A recent study describes how this popular stereotype actually drew on Chinese reports about the tiger. Motives for collection of articles, such as Tipu's Tiger, are seen by literary historian Barrett Kalter as having a social and cultural context. The collection of Western and Indian art by Tipu Sultan is seen by Kalter as motivated by the need to display his wealth and legitimise his authority over his subjects who were predominantly Hindu and did not share his religion, viz. Islam. In the case of the East India Company, collection of documents, artefacts and objets d'art from India helped develop the idea of a subjugated Indian populace in the minds of the British people, the thought being that the possession of such objects of a culture represented understanding of, dominance over, and mastery of that culture. As a musical instrument In a detailed study published in 1987 of the tiger's musical and noise-making functions, Arthur W.J.G. Ord-Hume concluded that since coming to Britain, "the instrument has been ruthlessly reworked, and in doing so much of its original operating principles have been destroyed". There are two ranks of pipes in the organ (as opposed to the wailing and grunting functions), each "comprising eighteen notes, [which] are nominally of 4ft pitch and are unisons - i.e. corresponding pipes in each register make sounds of the same musical pitch. This is an unusual layout for a pipe organ although while selecting the two stops together results in more sound ... there is also detectable a slight beat between the pipes so creating a celeste effect. ... it is considered likely that as so much work has been done ... this characteristic may be more an accident of tuning than an intentional feature". The tiger's grunt is made by a single pipe in the tiger's head and the man's wail by a single pipe emerging at his mouth and connected to separate bellows located in the man's chest, where they can be accessed by unbolting and lifting off the tiger. The grunt operates by cogs gradually raising the weighted "grunt-pipe" until it reaches a point where it slips down "to fall against its fixed lower-board or reservoir, discharging the air to form the grunting sound". Today all the sound-making functions rely on the crank-handle to power them, though Ord-Hume believes this was not originally the case. Works on the noise-making functions included those made over several decades by the famous organ-building firm Henry Willis & Sons, and Henry Willis III, who worked on the tiger in the 1950s, contributed an account to a monograph by Mildred Archer of the V&A.
Ord-Hume is generally ready to exempt Willis work from his scathing comments on other drastic restorations, which "vandalism" is assumed to be by unknown earlier organ-builders. There was a detailed account of the sound-making functions in The Penny Magazine in 1835, whose anonymous author evidently understood "things mechanical and organs in particular". From this and Ord-Hume's own investigations, he concluded that the original operation of the man's "wail" had been intermittent, with a wail only being produced after every dozen or so grunts from the tiger above, but that at some date after 1835 the mechanism had been altered to make the wail continuous, and that the bellows for the wail had been replaced with smaller and weaker ones, and the operation of the moving arm altered. Puzzling features of the present instrument include the placing of the handle, which when turned is likely to obstruct a player of the keyboard. Ord-Hume, using the 1835 account, concludes that originally the handle (which is a nineteenth-century British replacement, probably of a French original) only operated the grunt and wail, while the organ was operated by pulling a string or cord to work the original bellows, now replaced. The keyboard, which is largely original, is "unique in construction", with "square ivory buttons" with round lathe-turned tops instead of conventional keys. Though the mechanical functioning of each button is "practical and convenient" they are spaced such that "it is almost impossible to stretch the hand to play an octave". The buttons are marked with small black spots, differently placed but forming no apparent pattern in relation to the notes produced and corresponding to no known system of marking keys. The two stop control knobs for the organ are located, "rather confusingly", a little below the tiger's testicles. The instrument is now rarely played, but there is a V&A video of a recent performance. Derivative works Tipu's Tiger has provided inspiration to poets, sculptors, artists and others from the nineteenth century to the present day. The poet John Keats saw Tipu's Tiger at the museum in Leadenhall Street and worked it into his satirical verse of 1819, The Cap and Bells. In the poem, a soothsayer visits the court of the Emperor Elfinan. He hears a strange noise and thinks the Emperor is snoring. "Replied the page: "that little buzzing noise…. Comes from a play-thing of the Emperor’s choice, From a Man-Tiger-Organ, prettiest of his toys" The French poet, Auguste Barbier, described the tiger and its workings and meditated on its meaning in his poem, Le Joujou du Sultan (The Plaything of the Sultan) published in 1837. More recently, the American Modernist poet, Marianne Moore wrote in her 1967 poem Tippoo's Tiger about the workings of the automaton, though in fact the tail was never movable: "The infidel claimed Tipu's helmet and cuirasse and a vast toy - a curious automaton a man killed by a tiger; with organ pipes inside from which blood-curdling cries merged with inhuman groans. The tiger moved its tail as the man moved his arm." Die Seele (The Souls), a work by painter Jan Balet (1913–2009), shows an angel trumpeting over a flower garden while a tiger devours a uniformed French soldier. The Indian painter M. F. Husain painted Tipu's Tiger in his characteristic style in 1986 titling the work as "Tipu Sultan's Tiger". 
The sculptor Dhruva Mistry, when a student at the Royal College of Art, adjacent to the Victoria and Albert Museum, frequently passed Tipu's Tiger in its glass case and was inspired to make a fibre-glass and plastic sculpture Tipu in 1986. The sculpture Rabbit eating Astronaut (2004) by the artist Bill Reid is a humorous homage to the tiger, the rabbit "chomping" when its tail is cranked round. The 2023 novel Loot by Tania James imagines the life of a young wood carver from Mysore who co-creates Tipu's Tiger, and years later goes on a quest for it at the English country estate of a British East India Company colonel who took it from Tipu's palace after his death. See also Cat organ Piganino Notes References Videos of the tiger in performance Video of David Dimbleby playing the grunt and wail only. V&A video of a recent performance of the organ from Vimeo.com: Conservation in Action - Playing Tipu's Tiger Part 2, and Part 1, with the top of the tiger removed. External links Accession page for Tipu's Tiger on Victoria & Albert Museum website Sound and Movement animation at the Victoria & Albert Museum website Figurine depicting Mr Hugh Munro being mauled by a Bengal tiger, December 1792 Art Detective Podcast, 17 Mar 2017 (Discussion by Janina Ramirez and Sona Datta of Peabody Essex Museum) Individual pipe organs Tipu Sultan Asian objects in the Victoria and Albert Museum Tigers in art Historical robots Zoomusicology Indian art 18th century in India Robots of India 18th-century robots Automata (mechanical) Tigers in popular culture Anti-British sentiment India–United Kingdom relations
Tipu's Tiger
[ "Engineering" ]
4,738
[ "Automata (mechanical)", "Automation" ]
1,555,718
https://en.wikipedia.org/wiki/Shaftment
The shaftment is an obsolete unit of length defined since the 12th century as 6 inches, which nowadays is exactly 15.24 cm. A shaftment was traditionally the width of the fist and outstretched thumb. The lengths of poles, staves, etc. can be easily measured by grasping the bottom of the staff with thumb extended and repeating such hand over hand grips along the length of the staff. History It occurs in Anglo-Saxon written records as early as 910 and in English as late as 1474. After the modern foot came into use in the twelfth century, the shaftment was reinterpreted as exactly half a foot, or 6 inches (15.24 cm). Spelling and etymology Other spellings include schaftmond, scaeftemunde, and shathmont. It is derived from Old English , in turn from ('shaft') and Old English , from the Proto-Germanic , in turn from ('hand'). Two shaftments make a foot. This unit has mostly fallen out of use, as have others based on the human arm: digit ( shaftment), finger ( shaftment), palm ( shaftment), hand ( shaftment), span (1.5 shaftments), cubit (3 shaftments) and ell (7.5 shaftments). References Units: S University of North Carolina at Chapel Hill - How Many? - A Dictionary of Units of Measurement Units of length Human-based units of measurement
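As a rough illustration of the hand-over-hand measuring method described above, the short Python sketch below converts a count of fist-and-thumb grips into modern units, assuming the post-12th-century value of exactly 6 inches per shaftment; the function name and the example grip count are invented for illustration.

SHAFTMENT_IN = 6.0          # post-12th-century definition: 6 inches per shaftment
INCH_CM = 2.54              # exact metric equivalent of one inch

def staff_length(grips: int) -> tuple[float, float]:
    """Estimate a staff's length from the number of hand-over-hand grips."""
    inches = grips * SHAFTMENT_IN
    return inches, inches * INCH_CM

# A staff gripped 13 times hand over hand:
inches, cm = staff_length(13)
print(f"{inches:.0f} in  (~{cm:.1f} cm)")   # 78 in (~198.1 cm)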
Shaftment
[ "Mathematics" ]
287
[ "Quantity", "Units of measurement", "Units of length" ]
1,555,740
https://en.wikipedia.org/wiki/Ell
An ell (from Proto-Germanic *alinō, cognate with Latin ulna) is a northwestern European unit of measurement, originally understood as a cubit (the combined length of the forearm and extended hand). The word literally means "arm", and survives in the modern English word "elbow" (arm-bend). Later usage through the 19th century refers to several longer units, some of which are thought to derive from a "double ell". An ell-wand or ellwand was a rod of length one ell used for official measurement. Edward I of England required that every town have one. In Scotland, the Belt of Orion was called "the King's Ellwand". An iron ellwand is preserved in the entrance to Stånga Church on the Swedish island of Gotland, indicating the role that rural churches had in disseminating uniform measures. Several national forms existed, with different lengths, including the Scottish ell , the Flemish ell [el] , the French ell [aune] , the Polish ell , the Danish alen , the Swedish aln and the German ell [] of different lengths in Frankfurt (54.7 cm), Cologne, Leipzig (Saxony) or Hamburg. Select customs were observed by English importers of Dutch textiles; although all cloths were bought by the Flemish ell, linen was sold by the English ell, but tapestry was sold by the Flemish ell. The Viking ell was the measure from the elbow to the tip of the middle finger, about . The Viking or primitive ell was used in Iceland up to the 13th century. By the 13th century, a law set the "stika" as equal to two ells, which was the English ell of the time. Historic use England In England, the ell was usually exactly , or a yard and a quarter. It was mainly used in the tailoring business but is now obsolete. Although the exact length was never defined in English law, standards were kept; the brass ell examined at the Exchequer by Graham in the 1740s had been in use "since the time of Queen Elizabeth". Other English measures called an ell include the "yard and handful", or 40 in. ell, abolished in 1439; the yard and inch, or 37 in. ell (a cloth measure), abolished after 1553 and known later as the Scotch ell=37.06; and the cloth ell of 45 in., used until 1600. See yard for details. Scots The Scottish ell () is approximately . The Scottish ell was standardised in 1661, with the exemplar to be kept in the custody of Edinburgh. It comes from Middle English . It was used in the popular expression "Gie 'im an inch, an he'll tak an ell" (equivalent to "Give him an inch and he'll take a mile" or "... he'll take a yard." The Ell Shop (1757) in Dunkeld, Perth and Kinross (National Trust for Scotland), is so called from the 18th-century iron ell-stick attached to one corner, once used to measure cloth and other commodities in the adjacent market-place. The shaft of the 17th-century Kincardine mercat cross stands in the square of Fettercairn, and is notched to show the measurements of an ell. Scottish measures were made obsolete, and English measurements made standard in Scotland, by an Act of Parliament, the Weights and Measures Act 1824. Other Similar measures include: Netherlands: el, 1 metre (Old ell=27.08 inches) Jersey: ell, 4 feet N. Borneo: ella, 1 yard Switzerland: elle, 0.6561 yard Ottoman Turkey: Arşın, ~69 cm In literature In the epic poem Sir Gawain and the Green Knight, the Green Knight's axe-head was an ell (45 inches) wide. Ells were also used in the medieval French play The Farce of Master Pathelin to measure the size of the clothing Pierre Pathelin bought. Ells are used for measuring the length of rope in J. R. R. 
Tolkien's The Lord of the Rings. Since Sam declares that 30 ells are "about" 18 fathoms (108 feet), he seems to be using the 45-inch English ell, which would work out to 112.5 feet. Halldór Laxness described Örvar-Oddr as twelve Danish ells tall in Independent People, Part II, "Of the World". References Attribution See p. 861. Further reading Collins Encyclopedia of Scotland Scottish National Dictionary and Dictionary of the Older Scottish Tongue Weights and Measures, by D. Richard Torrance, SAFHS, Edinburgh, 1996, (N.B.: The book focusses exclusively on Scottish weights and measures.) External links Human-based units of measurement Obsolete units of measurement Obsolete Scottish units of measurement Units of length
Ell
[ "Mathematics" ]
1,034
[ "Obsolete units of measurement", "Quantity", "Units of measurement", "Units of length" ]
1,555,745
https://en.wikipedia.org/wiki/Rope%20%28unit%29
A rope may refer to any of several units of measurement initially determined or formed by ropes or knotted cords. Length The Greco-Roman schoenus, supposedly based on an Egyptian unit derived from a wound reed measuring rope, may also be given in translation as a "rope". According to Strabo, it varied in length between 30 and 120 stadia (roughly 5 to 20 km) depending on local custom. The Byzantine equivalent, the schoinion or "little rope", varied between 60 and 72 Greek feet depending upon the location. The Thai sen of 20 Thai fathoms or 40 m also means and is translated "rope". The Somerset rope was a former English unit used in drainage and hedging. It was 20 feet (now precisely 6.096 m). Area The Romans used the schoenus as an alternative name for the half-jugerum formed by a square with sides of 120 Roman feet. In Somerset, the rope could also double as a measure of area equivalent to 20 feet by 1 foot. Walls in Somerset were formerly sold "per rope" of 20 sq ft. Garlic In medieval English units, the rope of garlic was a set unit of 15 heads of garlic. 15 such ropes made up the "hundred" of garlic. See also Egyptian, Greek, Roman, Thai, and English units Knotted cord Knot, a related unit References Units of length Units of area History of Somerset
Rope (unit)
[ "Mathematics" ]
292
[ "Quantity", "Units of area", "Units of measurement", "Units of length" ]
1,555,880
https://en.wikipedia.org/wiki/Bolt%20cutter
A bolt cutter, sometimes called bolt cropper, is a tool used for cutting bolts, chains, padlocks, rebar and wire mesh. It typically has long handles and short blades, with compound hinges to maximize leverage and cutting force. A typical bolt cutter yields of cutting force for a force on the handles. There are different types of cutting blades for bolt cutters, including angle cut, center cut, shear cut, and clipper cut blades. Bolt cutters are usually available in 12, 14, 18, 24, 30, 36 and 42 inches (30.5, 35.6, 46, 61, 76, 91.4 and 107 cm) in length. The length is measured from the tip of the jaw to the end of the handle. Angle cut has the cutter head angled for easier insertion. Typical angling is 25 to 35 degrees. Center cut has the blades equidistant from the two faces of the blade. Shear cut has the blades inverted to each other (such as normal paper scissor blades). Clipper cut has the blades flush against one face (for cutting against flat surfaces). Bolt cutters with fiberglass handles can be used for cutting live electrical wires and are useful during rescue operations. The fiberglass handles have another advantage of being lighter in weight than the conventional drop forged or solid pipe handles, making it easier to carry to the place of operation. Cultural significance The tools became iconic at the Greenham Common Women's Peace Camp, where protestors used bolt cutters to remove fencing around the RAF airbase. A Greenham banner displaying bolt cutters, together with a hanging of Greenham fence wire, was displayed at the Pine Gap Women's Peace Camp in Australia. References Cutting tools Mechanical hand tools Metalworking cutting tools
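The compound hinge mentioned above multiplies force in stages: the handle acts as one lever and the jaw linkage as a second, so the overall mechanical advantage is, to a first approximation that ignores friction, the product of the two lever ratios. The Python sketch below uses made-up dimensions for a hypothetical 24-inch cutter, not measurements of any particular tool.

def mechanical_advantage(handle_len, handle_pivot, link_len, jaw_len):
    """Approximate force multiplication of a two-stage (compound) lever."""
    first_stage = handle_len / handle_pivot   # handle arm relative to its pivot arm
    second_stage = link_len / jaw_len         # linkage arm relative to the short jaw arm
    return first_stage * second_stage

# Hypothetical dimensions: 22 in handles on a 1.1 in pivot arm,
# driving a 4 in link acting on 0.8 in jaws.
ma = mechanical_advantage(22, 1.1, 4, 0.8)
print(f"~{ma:.0f}:1")                                            # ~100:1
print(f"{50 * ma:.0f} lbf of cut force from 50 lbf on the handles")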
Bolt cutter
[ "Physics" ]
373
[ "Mechanics", "Mechanical hand tools" ]
1,555,955
https://en.wikipedia.org/wiki/Stick%20%28unit%29
The stick may refer to several separate units, depending on the item being measured. Length In typography, the stick, stickful, or was an inexact length based on the size of the various composing sticks used by newspaper editors to assemble pieces of moveable type. In English-language papers, it was roughly equal to 2 column inches or 100–150 words. In France, Spain, and Italy, sticks generally contained only between 1 and 4 lines of text each. A column was notionally equal to 10 sticks. Mass In American cooking, a is taken to be 4 ounces (about 113 g). Volume In American cooking, a stick of butter may also be understood as ½ cup or 8 tablespoons (about 118 mL). See also English, imperial, and US customary units Traditional point-size names References Citations Bibliography . . . . . . . Units of length Units of mass Typography Typesetting
Stick (unit)
[ "Physics", "Mathematics" ]
192
[ "Matter", "Units of length", "Quantity", "Units of mass", "Mass", "Units of measurement" ]
1,555,991
https://en.wikipedia.org/wiki/Eastern%20meadow%20vole
The eastern meadow vole (Microtus pennsylvanicus), sometimes called the field mouse or meadow mouse, is a North American vole found in eastern Canada and the United States. Its range extends farther south along the Atlantic coast. The western meadow vole, Florida salt marsh vole, and beach vole were formerly considered regional variants or subspecies of M. pennsylvanicus, but have all since been designated as distinct species. The eastern meadow vole is active year-round, usually at night. It also digs burrows, where it stores food for the winter and females give birth to their young. Although these animals tend to live close together, they are aggressive towards one another. This is particularly evident in males during the breeding season. They can cause damage to fruit trees, garden plants, and commercial grain crops. Taxonomy The species was formerly grouped with the western meadow vole (M. drummondii) and the Florida salt marsh vole (M. dukecampbelli) as a single species with a very large range, but genetic evidence indicates that these are all distinct species. Distribution The eastern meadow vole is found throughout eastern North America. It ranges from Labrador and New Brunswick south to South Carolina and extreme northeastern Georgia; west through Tennessee to Ohio. West of Ohio, it is replaced by the western meadow vole. Several subspecies are found on eastern islands, including the beach vole (M. p. breweri) and the extinct Gull Island vole. Plant communities Eastern meadow voles are most commonly found in grasslands, preferring moister areas, but are also found in wooded areas. In east-central Ohio, eastern meadow voles were captured in reconstructed common cattail (Typha latifolia) wetlands. In Virginia, eastern meadow voles were least abundant in eastern red cedar (Juniperus virginiana) glades and most abundant in fields with dense grass cover. Habits Eastern meadow voles are active year-round and day or night, with no clear 24-hour rhythm in many areas. Most changes in activity are imposed by season, habitat, cover, temperature, and other factors. Eastern meadow voles have to eat frequently, and their active periods (every two to three hours) are associated with food digestion. In Canada, eastern meadow voles are active the first few hours after dawn and during the two- to four-hour period before sunset. Most of the inactive period is spent in the nest. Reproduction Gestation lasts 20 to 23 days. Neonates are pink and hairless, with closed eyes and ears. Fur begins to appear by three days, and young are completely furred except for the belly by seven days. Eyes and ears open by eight days. Weaning occurs from 12 to 14 days. Young born in spring and early summer attain adult weight in 12 weeks, but undergo a fall weight loss. Young born in late summer continue growing through the fall and maintain their weight through the winter. Maximum size is reached between two and 10 months. Typical eastern meadow vole litters consist of four to six young, with extremes of one and 11 young. On average, 2.6 young are successfully weaned per litter. Litter size is not significantly correlated with latitude, elevation, or population density. Fall, winter, and spring litters tend to be smaller than summer litters. Litter size was positively correlated with body size, and is not significantly different in primaparous and multiparous females. Primaparous females had fewer young per litter than multiparous females. 
Litter size was constant in summer breeding periods at different population densities. Female eastern meadow voles reach reproductive maturity earlier than males; some ovulate and become pregnant as early as three weeks old. Males are usually six to eight weeks old before mature sperm are produced. One captive female produced 17 litters in one year for a total of 83 young. One of her young produced 13 litters (totalling 78 young) before she was a year old. Patterns of mortality apparently vary among eastern meadow vole populations. The average eastern meadow vole lifespan is less than one month because of high nestling and juvenile mortality. The average time adults are recapturable in a given habitat is about two months, suggesting the average extended lifespan (i.e. how much time adult eastern meadow voles have left) is about two months, not figuring in emigration. Mortality was 88% for the first 30 days after birth. and postnestling juveniles had the highest mortality rate (61%), followed by young adults (58%) and older age groups (53%). Nestlings were estimated to have the lowest mortality rate (50%). Estimated mean longevity ranges from two to 16 months. The maximum lifespan in the wild is 16 months, and few voles live more than two years. Eastern meadow vole populations fluctuate annually and also tend to reach peak densities at two- to five-year intervals, with population declines in intervening years. Breeding often ceases in January and starts again in March. Over the course of a year, eastern meadow vole populations tend to be lowest in early spring; the population increases rapidly through summer and fall. In years of average population sizes, typical eastern meadow vole population density is about 15 to 45 eastern meadow voles per acre in old-field habitat. In peak years, their population densities may reach 150 per acre in marsh habitat (more favorable for eastern meadow voles than old fields). Peak eastern meadow vole abundance can exceed 1,482 eastern meadow voles per hectare (600/acre) in northern prairie wetlands. Eastern meadow voles in optimal habitats in Virginia (old fields with dense vegetation) reached densities of 983/ha (398/ac); populations declined to 67/ha (27/ac) at the lowest point in the cycle. Different factors influencing population density have been assigned primary importance by different authors. Reich listed the following factors as having been suggested by different authors: food quality, predation, climatic events, density-related physiological stress, and the presence of genetically determined behavioral variants among dispersing individuals. Normal population cycles do not occur when dispersal is prevented; under normal conditions, dispersers have been shown to be behaviorally, genetically, and demographically different from residents. A threshold density of cover is thought to be needed for eastern meadow vole populations to increase. Above the threshold amount, the quantity of cover influences the amplitude and possibly the duration of the population peak. Local patches of dense cover could serve as source populations or reservoirs to colonize less favorable habitats with sparse cover. Eastern meadow voles form extensive colonies and develop communal latrine areas. They are socially aggressive and agonistic; females dominate males and males fight amongst themselves. Habitat Optimal eastern meadow vole habitat consists of moist, dense grassland with substantial amounts of plant litter. 
Habitat selection is largely influenced by relative ground cover of grasses and forbs; soil temperature, moisture, sodium, potassium, and pH levels; humidity; and interspecific competition. Eastern meadow voles are most commonly associated with sites having high soil moisture. They are often restricted to the wetter microsites when they occur in sympatry with prairie voles (Microtus ochrogaster) or montane voles. In eastern Massachusetts, eastern meadow vole density on a mosaic of grassy fields and mixed woods was positively correlated with decreasing vertical woody stem density and decreasing shrub cover. Density was highest on plots with more forbs and grasses and less with woody cover; eastern meadow voles preferred woody cover over sparse vegetation where grassy cover was not available. In West Virginia, the only forested habitats in which eastern meadow voles were captured were seedling stands. In Pennsylvania, three subadult eastern meadow voles were captured at least 1.6 miles (2.6 km) from the nearest appreciable suitable eastern meadow vole habitat, suggesting they are adapted to long-distance dispersal. In Ohio, the effects of patch shape and proportion of edge were investigated by mowing strips between study plots. The square plots were 132 feet per side (40 m x 40 m), and the rectangular patches were 52.8 feet by 330 feet (16 m x 100 m). Square habitat patches were not significantly different from rectangular patches in eastern meadow vole density. Edge effects in patches of this size were not found, suggesting eastern meadow voles are edge-tolerant. Habitat patch shape did affect dispersal and space use behaviors. In rectangular patches, home ranges were similar in size to those in square patches, but were elongated. Eastern meadow voles tend to remain in home ranges and defend at least a portion of their home ranges from conspecifics. Home ranges overlap and have irregular shapes. Home range size depends on season, habitat, and population density: ranges are larger in summer than winter, those in marshes are larger than in meadows, and are smaller at higher population densities. Home ranges vary in size from 0.08 to 2.3 hectares (0.32-0.9 ac). Females have smaller home ranges than males, but are more highly territorial than males; often, juveniles from one litter are still present in the adult female's home range when the next litter is born. Female territoriality tends to determine density in suboptimal habitats; the amount of available forage may be the determining factor in female territory size, so determines reproductive success. Cover requirements Nests are used as nurseries, resting areas, and as protection against weather. They are constructed of woven grass; they are usually subterranean or are constructed under boards, rocks, logs, brush piles, hay bales, fenceposts, or in grassy tussocks. Eastern meadow voles dig shallow burrows, and in burrows, nests are constructed in enlarged chambers. In winter, nests are often constructed on the ground surface under a covering of snow, usually against some natural formation such as a rock or log. Eastern meadow voles form runways or paths in dense grasses. Diets Eastern meadow voles eat most available species of grasses, sedges, and forbs, including many agricultural plant species. In summer and fall, grasses are cut into match-length sections to reach the succulent portions of the leaves and seedheads. Leaves, flowers, and fruits of forbs are also typical components of the summer diet. 
Fungi, primarily endogones (Endogone spp.), have been reported in eastern meadow vole diets. They occasionally consume insects and snails, and occasionally scavenge on animal remains; cannibalism is frequent in periods of high population density. Eastern meadow voles may damage woody vegetation by girdling when population density is high. In winter, eastern meadow voles consume green basal portions of grass plants, often hidden under snow. Other winter diet components include seeds, roots, and bulbs. They occasionally strip the bark from woody plants. Seeds and tubers are stored in nests and burrows. Evidence of coprophagy is sparse, but thought to occur. In an old-field community in Quebec, plants preferred by eastern meadow voles included quackgrass (Elytrigia repens), sedges, fescues (Festuca spp.), wild strawberry (Fragaria virginiana), timothy (Phleum pratense), bluegrasses (Poa spp.), and bird vetch (Vicia cracca). Predators Eastern meadow voles are important prey for many hawks, owls, and mammalian carnivores, and are also taken by some snakes. Almost all species of raptors take microtine (Microtus spp.) rodents as prey. Birds not usually considered predators of mice do take voles; examples include gulls (Larus spp.), northern shrike (Larius borealis), common raven (Corvus corax), American crow (C. brachyrhynchos), great blue heron (Ardea herodias), and American bittern (Botaurus lentiginosus). In Ohio, eastern meadow voles comprised 90% of the individual prey remains in long-eared owl (Asio otus) pellets on a relict wet prairie, and in Wisconsin, eastern meadow voles comprised 95% of short-eared owl (A. flammeus) prey. Most mammalian predators take microtine prey. The American short-tailed shrew (Blarina brevicauda) is a major predator; eastern meadow voles avoid areas frequented by short-tailed shrews. Other major mammalian predators include the badger (Taxidea taxus), striped skunk (Mephitis mephitis), weasels (Mustela spp.), marten (Martes americana), domestic dog (Canis familiaris), domestic cat (Felis catus) and mountain lion. Other animals reported to have ingested voles include trout (Salmo spp.) and garter snake (Thamnophis spp.). Management Eastern meadow voles are abundant in agricultural habitats. The list of crops damaged by eastern meadow voles includes root and stem crops (asparagus, kohlrabi), tubers, leaf and leafstalks, immature inflorescent vegetables (artichoke, broccoli), low-growing fruits (beans, squash), the bark of fruit trees, pasture, grassland, hay, and grains. Eastern meadow voles are listed as pests on forest plantations. In central New York, colonization of old fields by trees and shrubs was reduced due to seedling predation by eastern meadow voles, particularly under the herb canopy. Management of eastern meadow vole populations in agricultural areas includes reduction of habitat in waste places such as roadsides and fencerows by mowing, plowing, and herbicide application. Predators, particularly raptors, should be protected to keep eastern meadow vole populations in check. Direct control methods include trapping, fencing, and poisoning; trapping and fencing are of limited effectiveness. Poisons are efficient. Repellents are largely ineffective at present. Plastic mesh cylinders were effective in preventing seedling damage by eastern meadow voles and other rodents. Properly timed cultivation and controlled fires are at least partially effective in reducing populations. 
Ecto- and endoparasites have been reported to include trematodes, cestodes, nematodes, acanthocephalans, lice (Anoplura), fleas (Siphonaptera), Diptera, and ticks and mites (Acari). Human diseases transmitted by microtine rodents include cystic hydatid disease, tularemia, bubonic plague, babesiosis, giardiasis and the Lyme disease spirochete Borrelia burgdorferi. Ecological importance As with many other small mammal species, M. pennsylvanicus plays important ecological roles. The eastern meadow vole is an important food source for many predators, and disperses mycorrhizal fungi. It is a major consumer of grass and disperses grass nutrients in its feces. After disruptive site disturbances such as forest or meadow fires, the meadow vole's activities contribute to habitat restoration. It prefers open, nonforest habitats and colonizes such open areas created by fire or other clearing disturbances. Very few eastern meadow voles are found in forest or woodland areas. In newly opened areas, it is quite abundant. In these new open areas, the vole quickly becomes a food source for predators. Threats While it is a common and wide-ranging species throughout eastern North America, insular populations on the eastern periphery of the species' range are at risk from invasive species, with the extinction of the Gull Island vole being a notable example of this. In addition, due to its dependence on mesic habitats, populations of the species on the mainland periphery of its range in the Southeastern United States may be at potential risk from climate change-induced aridification. References External links Eastern meadow Rodents of Canada Rodents of the United States Bioindicators Mammals described in 1815 Least concern biota of North America Least concern biota of the United States Taxa named by George Ord
Eastern meadow vole
[ "Chemistry", "Environmental_science" ]
3,324
[ "Bioindicators", "Environmental chemistry" ]
1,556,147
https://en.wikipedia.org/wiki/Homer%20%28unit%29
A homer ( ḥōmer, plural חמרם ḥomārim; also kōr) is a biblical unit of volume used for liquids and dry goods. One homer is equal to 10 baths, or what was also equivalent to 30 seahs; each seah being the equivalent in volume to six kabs, and each kab equivalent in volume to 24 medium-sized eggs. One homer equals 220 litres (220 dm³). Lawrence Boadt notes the word homer comes from the Hebrew word for an "ass": "It is one ass-load." The homer should not be confused with the omer, which is a much smaller unit of dry measure. References Obsolete units of measurement Units of volume
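The chain of equivalences above (1 homer = 10 baths = 30 seahs, 1 seah = 6 kabs, 1 kab = 24 eggs) can be multiplied out directly. The Python sketch below simply composes those stated ratios; the constant and function names are invented for the example.

BATHS_PER_HOMER = 10
SEAHS_PER_HOMER = 30
KABS_PER_SEAH = 6
EGGS_PER_KAB = 24
LITRES_PER_HOMER = 220

def homer_breakdown(homers: float = 1.0) -> dict:
    """Expand a quantity of homers into the smaller biblical measures."""
    seahs = homers * SEAHS_PER_HOMER
    kabs = seahs * KABS_PER_SEAH
    return {
        "baths": homers * BATHS_PER_HOMER,
        "seahs": seahs,
        "kabs": kabs,
        "egg-volumes": kabs * EGGS_PER_KAB,
        "litres": homers * LITRES_PER_HOMER,
    }

print(homer_breakdown())   # 10 baths, 30 seahs, 180 kabs, 4320 egg-volumes, 220 L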
Homer (unit)
[ "Mathematics" ]
145
[ "Units of volume", "Obsolete units of measurement", "Quantity", "Units of measurement" ]
1,556,266
https://en.wikipedia.org/wiki/Nail%20%28unit%29
A nail, as a unit of cloth measurement, is generally a sixteenth of a yard, or 2¼ inches (5.715 cm). The nail was apparently named after the practice of hammering brass nails into the counter at shops where cloth was sold. On the other hand, R D Connor, in The weights and measures of England (p 84) states that the nail was the 16th part of a Roman foot, i.e., digitus or finger, although he provides no reference to support this. Zupko's A dictionary of weights and measures for the British Isles (p 256) states that the nail was originally the distance from the thumbnail to the joint at the base of the thumb, or alternately, from the end of the middle finger to the second joint. An archaic usage of the term nail is as a sixteenth of a (long) hundredweight for mass, or 1 clove of 7 pounds avoirdupois (3.175 kg). The nail in literature Explanation: Katherine and Petruchio are purchasing new clothes for Bianca's wedding. Petruchio is concerned that Katharine's dress has too many frills, wonders what it will cost, and suspects that he has been cheated. Katherine says she likes it, and complains that Petruchio is making a fool of her. The tailor repeats Katherine's words: Sir, she says you're making a fool of her. Petruchio then launches into the above-quoted tirade. Monstrous may be a double-entendre for cuckold. The half-yard, quarter and nail were divisions of the yard used in cloth measurement. The nail in law Notes Units of length Units of area Units of mass
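Both senses of the nail given above reduce to simple fractions of larger units: one sixteenth of a 36-inch yard for cloth, and one sixteenth of a 112-pound long hundredweight (one clove of 7 pounds) for mass. The Python sketch below just evaluates those fractions; the names are illustrative only.

YARD_IN = 36            # inches in a yard
LONG_CWT_LB = 112       # pounds in a long hundredweight
LB_KG = 0.45359237      # exact avoirdupois pound in kilograms

cloth_nail_in = YARD_IN / 16        # 2.25 in
cloth_nail_cm = cloth_nail_in * 2.54
mass_nail_lb = LONG_CWT_LB / 16     # 7 lb, i.e. one clove
mass_nail_kg = mass_nail_lb * LB_KG

print(f"cloth nail: {cloth_nail_in} in = {cloth_nail_cm:.3f} cm")  # 2.25 in = 5.715 cm
print(f"mass nail:  {mass_nail_lb} lb = {mass_nail_kg:.3f} kg")    # 7.0 lb = 3.175 kg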
Nail (unit)
[ "Physics", "Mathematics" ]
352
[ "Matter", "Units of area", "Units of length", "Quantity", "Units of mass", "Mass", "Units of measurement" ]
9,389,047
https://en.wikipedia.org/wiki/Royal%20Fine%20Art%20Commission%20for%20Scotland
The Royal Fine Art Commission for Scotland was a Scottish public body. It was appointed in 1927 "to enquire into such questions of public amenity or of artistic importance relating to Scotland as may be referred to them by any of our Departments of State and to report thereon to such Departments; and furthermore, to give advice on similar questions when so requested by public or quasi-public bodies when it appears to the said Commission that their assistance would be advantageous". The first Commissioners were: Sir John Maxwell Stirling-Maxwell; Gavin George, Baron Hamilton of Dalzell; Sir John Ritchie Findlay; Sir George Macdonald; Sir George Washington Browne; Sir Robert Stodart Lorimer; James Whitelaw Hamilton; and James Pittendrigh Macgillivray. In 2005, it was replaced by Architecture and Design Scotland. See also Royal Fine Art Commission, formerly operated in England and Wales References 1927 establishments in Scotland 2005 disestablishments in Scotland Organisations based in Scotland with royal patronage Architecture in Scotland Scottish design Public bodies of the Scottish Government Arts organisations based in Scotland Scottish commissions and inquiries Arts organizations established in 1927 Organizations disestablished in 2005
Royal Fine Art Commission for Scotland
[ "Engineering" ]
224
[ "Architecture stubs", "Architecture" ]
9,389,372
https://en.wikipedia.org/wiki/London%20Stone%20%28riparian%29
London Stone is the name given to a number of boundary stones that stand beside the rivers Thames and Medway, which formerly marked the limits of jurisdiction (riparian water rights) of the City of London. History Until 1350, the English Crown held the right to fish the rivers of England and charged duties on those people it licensed to fish. In 1197 King Richard I, in need of money to finance his involvement in the Third Crusade, sold the rights over the lower reaches of the River Thames to the City of London. Marker stones were erected to indicate the limit of the City's rights. In Victorian times, the Lord Mayor would come in procession by water and touch the Staines stone with a sword to re-affirm the City's rights. Control of the river passed from the City to the Thames Conservancy, and then below Teddington to the Port of London Authority and above it to Thames Water Authority and finally the Environment Agency. Staines In medieval times before the canalisation of the Thames, Staines, was the highest point at which the high tide was perceivable for a few minutes every semi-diurnal tide (twice a day), adding some millimetres to the water depth compared to more upstream parishes. This London Stone marked the upstream limit of the City's rights. The official role of a Corporation of London stone of 1285 beside Staines Bridge was set out with a grant of associated privileges in a charter of Edward I. Its use by the river is indicated by the indentations (on the right-hand face in the photo), caused by tow ropes of horse-drawn boats rubbing against the stone. Relocation within Staines and replica Staines is on the point where the north bank moves from east to north and has always been its site but the exact position has changed. In c. 1750 the approx. 0.6 metre-tall half cube on a tall stone pillar was moved about 500 metres upstream to a site at by the river in the Lammas Pleasure Ground. In 1986 the stone was moved to the Old Town Hall Arts Centre, Market Square and a replica was placed in the Pleasure Ground. In 2004 the original was moved to Spelthorne Museum, Spelthorne Library, Friends Walk/Thames Street. Features The stone has been recarved in its lower section making its long base narrower than its top. Its sole inscription is a very eroded etching 'STANE' in its top section of uncertain date, the Old English word for Stone (as in Stane Street). If the inscription is old enough this reinforces the traditional spelling, if not the pronunciation (as with Stane Street) of the name of the town, for which see Great Vowel Shift. It is possible that there was more than one such stone, explaining the Anglo Saxon name of the town, which was established many centuries after the Romans noted they called their staging post at the bridge, Ad Pontes. The lower carved area has a shield in relief as is its motif section below with eroded inscriptions. It stands on a much wider plinth inscribed with the names of various City worthies who may have been involved in its 1750 move. The replica, due to its location, is in the lowest category of architecture, a Grade II listed structure partly achieved since it happens to stand on the point of one of the former coal-tax posts. Martin Nail included the Stone as No. 83 in his list of London boundary marks. Yantlet line The historic downstream limit of the City's rights is about 33.5 miles (54 km) as the crow flies from London Bridge and is marked on both banks of the Thames: by the Crow Stone to the north and by the London Stone to the south. 
The line between the Crow Stone and the London Stone at Yantlet Creek is known as the Yantlet Line. London Stone (Yantlet) On the south bank, the marker is the London Stone, which stands beside the mouth of Yantlet Creek on the Isle of Grain. The overall height of the monument is about 8 metres. The main column has an inscription, now illegible. The plinth on which it stands has an inscription listing various worthy gentlemen who were probably involved in the re-erection of the stone in Victorian times. They include Horatio Thomas Austin and Warren Stormes Hale, sometime Lord Mayor and founder of the City of London School. Crow Stone The marker on the north bank is almost due north of Yantlet Creek and is called the Crow Stone (also known as Crowstone or City Stone). It stands on the mud opposite the end of Chalkwell Avenue, Southend-on-Sea (two nearby roads are called Crowstone Avenue and Crowstone Road). It was erected in 1837 and replaced a smaller stone, dating from 1755. The older stone was removed to Priory Park in Southend, where it remains today. It is likely that there has been a marker on this site and at Yantlet since 1285. The Old Crowstone (as it is named in the official listing entry) was designated as a listed building at Grade II in 1974. The new Crow Stone was listed at Grade II in 2021. Upnor Two London Stones stand between the Arethusa Venture Centre and the River Medway in Lower Upnor, Kent. The older, smaller stone was erected in the eighteenth century, and bears the date 1204 as part of its main inscription. It carries on its rear the words "God preserve the City of London". Apart from that, the inscriptions of both stones are merely the names of various lord mayors and years. They mark the limit of the charter rights of London fishermen. See also List of individual rocks References External links Pages on Geograph for: London Stone, Staines, the Crow Stone, London Stone, Yantlet Creek Google search for "Yantlet Line" The PLA page of thames.me.uk History of the City of London History of Middlesex Staines-upon-Thames Geography of Kent History of Kent Grain, Isle of Medway Grade II listed buildings in Surrey Buildings and structures in Southend-on-Sea Stones Boundary markers
London Stone (riparian)
[ "Physics" ]
1,254
[ "Stones", "Physical objects", "Matter" ]
9,389,835
https://en.wikipedia.org/wiki/Ectodomain
An ectodomain is the domain of a membrane protein that extends into the extracellular space (the space outside a cell). Ectodomains are usually the parts of proteins that initiate contact with surfaces, which leads to signal transduction. A notable example of an ectodomain is the S protein, commonly known as the spike protein, of the viral particle responsible for the COVID-19 pandemic. The ectodomain region of the spike protein (S) is essential for attachment and eventual entry of the viral protein into the host cell. Ectodomains play a crucial part in the signaling pathways of viruses. Recent findings have indicated that certain antibody titers, including anti-receptor binding domain (anti-RBD) and anti-spike ectodomain (anti-ECD) IgG titers, can act as virus neutralization titers (VN titers), which can be identified in individuals with diseases, dyspnea and hospitalizations. In the context of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), these specific ectodomains may indicate antibody efficacy against the virus, and such VN titers can be used to classify eligible plasma donors. Protective measures against diseases and respiratory conditions can be further advanced through ongoing research on ectodomains. Ectodomains also interact with membrane systems, inducing vesicle aggregation, lipid mixing and liposome leakage, which provides information as to how certain viruses spread infection throughout the cellular domain. Specifically, the hepatitis C virus (HCV) utilizes a fusion process in which the ectodomain of the HCV E2 envelope protein confers fusogenic properties to membrane systems, implying that HCV infection proceeds through the cell via receptor-mediated endocytosis. These findings on the role of ectodomains interacting with target membranes give insight into virus destabilization and the mechanism of fusion of the viral and cellular membranes, which is yet to be further characterized. See also Ectodomain shedding References Protein domains Protein structure
Ectodomain
[ "Chemistry", "Biology" ]
504
[ "Protein structure", "Protein domains", "Structural biology", "Protein classification" ]
9,390,167
https://en.wikipedia.org/wiki/Lume
Lume is a short term for the luminous phosphorescent glowing solution applied to watch dials. There are some people who "relume" watches, or replace faded lume. Formerly, lume consisted mostly of radium; however, radium is radioactive and has been mostly replaced on new watches by less bright, but less toxic compounds. After radium was effectively outlawed in 1968, tritium became the luminescent material of choice because, while still radioactive, it is much less potent than radium, the beta particles it gives off being far weaker and less penetrating than radium's emissions. Common pigments used in lume include the phosphorescent pigments zinc sulfide and strontium aluminate. Use of zinc sulfide for safety-related products dates back to the 1930s. However, the development of strontium oxide aluminate, with a luminance approximately 10 times greater than that of zinc sulfide, has relegated most zinc sulfide based products to the novelty category. Strontium oxide aluminate based pigments are now used in exit signs, pathway marking, and other safety-related signage. Strontium aluminate based afterglow pigments are marketed under brand names like Super-LumiNova, Watchlume Co, NoctiLumina, and Glow in the Dark (Phosphorescent) Technologies. References External links Watchlume Site Everest Watchworks, a relumer Forum discussion on Superluminova Vs. Tritium LUMINOSITY IN WATCHES Luminor 2020 – Debunking Panerai's fictional history of tritium-based lume (Perezcope.com) Luminescence Watches
Lume
[ "Chemistry" ]
377
[ "Luminescence", "Molecular physics" ]
9,390,286
https://en.wikipedia.org/wiki/Perkins%204.236
The Perkins 4.236 is a diesel engine manufactured by Perkins Engines. First produced in 1964, over 70,000 units were built in the first three years, and production increased to 60,000 units per annum. The engine was both innovative (using direct injection) and reliable, becoming a worldwide sales success over several decades. The Perkins 4.236 is rated at ASE (DIN), and is widely used in Massey Ferguson tractors, as well as other well-known industrial and agricultural machines, e.g. Clark, Manitou, JCB, Landini and Vermeer. The designation "4.236" The designation 4.236 arose as follows: "4" represents four cylinders, and "236" represents 236 cubic inches, the total displacement of the engine. This logic applies to most Perkins engine designations. Bore and stroke The engine's bore and stroke give an overall displacement of 236 cu in (about 3.87 litres). Applications The Massey Ferguson tractors that were originally fitted with this engine are: 168S, 175, 175S (174 - Romanian model). Later came the 261, 265, 275, 365, 375, 384S. Volvo Trucks used this engine in their Snabbe and Trygge trucks beginning in 1967; they called it the D39. A now defunct American car manufacturer, Checker Motors Corporation of Kalamazoo, Mich., offered the 4.236 in their Checker Marathon as an option in 1969 only. The Dodge 50 Series also received this engine, from July 1979 until July 1987 as the 4.236, and between July 1986 and July 1987 in turbocharged T38 specification. It was also fitted as an option for Renault 50 Series vehicles. In Brazil, the locally developed Puma trucks received the Perkins 4.236 engine, with a maximum of DIN. Brazilian versions of the Chevrolet C/K series also relied on the Perkins 4.236 throughout the 1980s as their only diesel option. The Vermeer BC1250 brush chipper used this engine until the BC1250A replaced it. The BC1250A used the turbocharged version of the same engine. In the Republic of Korea, Hyundai Motor Company produced this engine under license from Perkins from 1977 to 1981, and the Hyundai Bison trucks (HD3000, HD5000) were equipped with it under the designation 'HD4236'. Long-term liveaboard sailors Bill & Laurel Cooper installed three Perkins 4.236 engines with three screws and stern gear into their 88' schooner-rigged Dutch barge, Hosanna. Having three engines (using just one on a calm canal, but engaging the other two in fast rivers or for manoeuvring) was still cheaper than having an equivalent single engine such as a Cummins or Volvo. Perkins Tightening Torques for 4.236 Specification Idle speed: 750 rpm, Rated speed: 2,000 rpm, Max. torque at 1,300 rpm Early models were fitted with a Lucas M50 electric starter and a Lucas dynamo charger. See also List of Perkins engines References Dodge 50 website Perkins engines Diesel engines by model Automobile engines Straight-four engines
Perkins 4.236
[ "Technology" ]
621
[ "Engines", "Automobile engines" ]
9,390,462
https://en.wikipedia.org/wiki/Sea%20Sonic
Sea Sonic Electronics Co., Ltd. (), stylized as Seasonic, is a Taiwanese power supply and computer PSU manufacturer and retailer, formerly limited to trading hardware OEM for other companies. They first started making power supplies for the PC industry in the 1981. All of their PSUs are 80 Plus-certified. In 2002, Sea Sonic established a wholly owned subsidiary in California to sell products in the US retail market and to provide technical support. History 1975 Sea Sonic incorporated to manufacture Electronic Test Equipment. 1981 Sea Sonic enters the PC power supply market 1984 Headquarters relocates to Shilin, Republic of China. 1986 The factory phases in Automated Test Equipment in production methodology, this is the first in switching power supply manufacturing in Taiwan. 1990 Second factory in Taoyuan County (now Taoyuan City), Taiwan begins operation. 1993 European office opens in The Netherlands. 1994 Dong Guan China I factory begins full operation. 1995 Sea Sonic develops an ATX power supply for the Pentium market. 1997 Dong Guan factory receives ISO9002 certification. 1998 The Dong Guan II factory begins full operation. Taiwan headquarters and Taoyuan factory receive ISO9001 certification. 1999 Headquarters relocates to present address at Neihu, Taipei. 2000 Dong Guan factory receives ISO 9001 certification. The first PSU maker to provide PC and IPC market cost-effective Active PFC (Power Factor Correction) solutions. Designs and applies S2FC (Smart & Silent Fan Control) towards PC and IPC products. 2002 USA office opens in California, USA. Sea Sonic Electronics Co., Ltd. lists on Taiwan's Gre Tai Securities Market (OTC Stock Exchange). 2003 Launched retail products with own brand name and won awards and recommendations worldwide. 2004 Dedicated to develop green and silent power supplies with higher efficiency and higher power output. 2005 The USA office was renamed as Sea Sonic Electronics Inc., a 100% Sea Sonic owned subsidiary, to serve North and South America customers. The first PSU manufacturer to win the 80 Plus efficiency certification. 2006 Dong Guan factory receives ISO14001 certification. Began to mass-produce RoHS & WEEE compliant products. 2008 European subsidiary opens in the Netherlands to serve the European market. Dong Guan Factory II begins full operation. 2009 Sea Sonic is first in the market to achieve 80 PLUS® Gold rating by introducing the X-Series power supplies. 2010 Sea Sonic introduces the world's first 80 PLUS® Gold rated fanless models to the worldwide retail market. 2011 The 80 PLUS® Platinum rated 860 W and 1000 W models get introduced. 2012 Japan subsidiary opens in Tokyo. The 80 PLUS® Platinum rated 400 W, 460 W, and 520 W ultra-silent fanless models enter the world market. 2013 The S12G 80 PLUS® Gold-, and the M12II EVO 80 PLUS® Bronze-rated power supplies get introduced. 2014 Sea Sonic launches the 80 PLUS® Platinum 1050 W and 1200 W, and the 80 PLUS® Gold X-Series 1050 W and 1250 W models. 2017 Under the 'One Seasonic' initiative, Sea Sonic revamps its entire product line to introduce the PRIME, FOCUS and CORE series. 2018 The Seasonic SCMD (system cable management device) marks the beginning of a new era for simplifying cable management. 2019 The Seasonic CONNECT system modernizes system installation and cable management. 2020 Sea Sonic partners with G2 Esports to enter the world of competitive gaming. 2021 The new Seasonic SYNCRO Q704 case wins both the 2021 Red Dot Award and the 2021 iF Design Award for excellent design. 
2022 The Seasonic MagFlow Fan has won the 2022 Red Dot Design Award and the 2022 iF Design Award for its innovative design. 2023 Sea Sonic receives ISO 14064-1:2018 certification for the quantification and reporting of greenhouse gas emissions and removals. References External links 1975 establishments in Taiwan Computer power supply unit manufacturers Computer companies of Taiwan Computer hardware companies Electronics companies of Taiwan Companies based in Taipei Computer companies established in 1975 Electronics companies established in 1975 Taiwanese brands
Sea Sonic
[ "Technology" ]
830
[ "Computer hardware companies", "Computers" ]
9,391,228
https://en.wikipedia.org/wiki/Stochastically%20stable%20equilibrium
In game theory, a stochastically stable equilibrium is a refinement of the evolutionarily stable state in evolutionary game theory, proposed by Dean Foster and Peyton Young. An evolutionarily stable state S is also stochastically stable if, under vanishing noise, the probability that the population is in the vicinity of state S does not go to zero. The concept is extensively used in models of learning in populations, where "noise" is used to model experimentation or replacement of unsuccessful players with new players (random mutation). Over time, as the need for experimentation dies down or the population becomes stable, the population will converge towards a subset of evolutionarily stable states. Foster and Young have shown that this subset is the set of states with the highest potential. References Dean P. Foster and H. Peyton Young: "Stochastic Evolutionary Game Dynamics", Theoretical Population Biology 38(2), pp. 219–232 (1990) Abstract Game theory equilibrium concepts Evolutionary game theory
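The condition stated above can be written compactly. The LaTeX sketch below restates it for a noise-perturbed evolutionary process with stationary distribution mu_epsilon; the symbols used here are notational choices of this summary rather than quotations from the original paper.

% Let the population evolve under a perturbed process with mutation /
% experimentation rate eps, and let mu_eps be its stationary distribution.
% A state S is stochastically stable when it keeps positive probability
% in the small-noise limit:
\[
  S \text{ is stochastically stable} \iff \liminf_{\varepsilon \to 0} \mu_\varepsilon(S) > 0 .
\]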
Stochastically stable equilibrium
[ "Mathematics" ]
199
[ "Game theory", "Game theory equilibrium concepts", "Evolutionary game theory" ]
9,391,536
https://en.wikipedia.org/wiki/Cold%20start%20%28recommender%20systems%29
Cold start is a potential problem in computer-based information systems which involves a degree of automated data modelling. Specifically, it concerns the issue that the system cannot draw any inferences for users or items about which it has not yet gathered sufficient information. Systems affected The cold start problem is a well known and well researched problem for recommender systems. Recommender systems form a specific type of information filtering (IF) technique that attempts to present information items (e-commerce, films, music, books, news, images, web pages) that are likely of interest to the user. Typically, a recommender system compares the user's profile to some reference characteristics. These characteristics may be related to item characteristics (content-based filtering) or the user's social environment and past behavior (collaborative filtering). Depending on the system, the user can be associated to various kinds of interactions: ratings, bookmarks, purchases, likes, number of page visits etc. There are three cases of cold start: New community: refers to the start-up of the recommender, when, although a catalogue of items might exist, almost no users are present and the lack of user interaction makes it very hard to provide reliable recommendations New item: a new item is added to the system, it might have some content information but no interactions are present New user: a new user registers and has not provided any interaction yet, therefore it is not possible to provide personalized recommendations New community The new community problem, or systemic bootstrapping, refers to the startup of the system, when virtually no information the recommender can rely upon is present. This case presents the disadvantages of both the New user and the New item case, as all items and users are new. Due to this some of the techniques developed to deal with those two cases are not applicable to the system bootstrapping. New item The item cold-start problem refers to when items added to the catalogue have either none or very little interactions. This constitutes a problem mainly for collaborative filtering algorithms due to the fact that they rely on the item's interactions to make recommendations. If no interactions are available then a pure collaborative algorithm cannot recommend the item. In case only a few interactions are available, although a collaborative algorithm will be able to recommend it, the quality of those recommendations will be poor. This raises another issue, which is not anymore related to new items, but rather to unpopular items. In some cases (e.g. movie recommendations) it might happen that a handful of items receive an extremely high number of interactions, while most of the items only receive a fraction of them. This is referred to as popularity bias. In the context of cold-start items the popularity bias is important because it might happen that many items, even if they have been in the catalogue for months, received only a few interactions. This creates a negative loop in which unpopular items will be poorly recommended, therefore will receive much less visibility than popular ones, and will struggle to receive interactions. While it is expected that some items will be less popular than others, this issue specifically refers to the fact that the recommender has not enough collaborative information to recommend them in a meaningful and reliable way. Content-based filtering algorithms, on the other hand, are in theory much less prone to the new item problem. 
Since content-based recommenders choose which items to recommend based on the features the items possess, even if no interactions for a new item exist, its features still allow a recommendation to be made. This of course assumes that a new item is already described by its attributes, which is not always the case. Consider the case of so-called editorial features (e.g. director, cast, title, year): these are always known when the item, in this case a movie, is added to the catalogue. However, other kinds of attributes might not be, e.g. features extracted from user reviews and tags. Content-based algorithms relying on user-provided features suffer from the cold-start item problem as well, since for new items, if no (or very few) interactions exist, no (or very few) user reviews and tags will be available either. New user The new user case refers to when a new user enrolls in the system and, for a certain period of time, the recommender has to provide recommendations without relying on the user's past interactions, since none have occurred yet. This problem is of particular importance when the recommender is part of the service offered to users, since a user who is faced with recommendations of poor quality may soon decide to stop using the system before providing enough interactions to allow the recommender to understand his or her interests. The main strategy in dealing with new users is to ask them to provide some preferences in order to build an initial user profile. A balance has to be found between the length of the user registration process, which if too long might induce too many users to abandon it, and the amount of initial data required for the recommender to work properly. Similarly to the new item case, not all recommender algorithms are affected in the same way. Item-item recommenders will be affected, as they rely on the user profile to weight how relevant other users' preferences are. Collaborative filtering algorithms are the most affected, as without interactions no inference can be made about the user's preferences. User-user recommender algorithms behave slightly differently. A user-user content-based algorithm will rely on the user's features (e.g. age, gender, country) to find similar users and recommend the items they interacted with in a positive way, and is therefore robust to the new user case. Note that all this information is acquired during the registration process, either by asking the user to input the data or by leveraging data already available, e.g. in the user's social media accounts.
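As a concrete illustration of the user-user content-based fallback just described, the Python sketch below matches a new user to existing users purely on registration features and copies the nearest neighbour's positively rated items. The feature encoding, the Euclidean distance, and all of the data are invented for the example; real systems would use richer profiles and similarity measures.

import numpy as np

# Existing users: registration features [age, is_female, country_code] and liked items.
users = {
    "u1": {"features": np.array([34, 1, 2], dtype=float), "liked": {"item_a", "item_b"}},
    "u2": {"features": np.array([19, 0, 7], dtype=float), "liked": {"item_c"}},
}

def recommend_for_new_user(new_features, users, k=1):
    """User-user content-based fallback: copy the likes of the most similar users."""
    def dist(u):
        return float(np.linalg.norm(users[u]["features"] - new_features))
    nearest = sorted(users, key=dist)[:k]
    recs = set()
    for u in nearest:
        recs |= users[u]["liked"]
    return recs

# A brand-new user who has only filled in the registration form.
print(recommend_for_new_user(np.array([31, 1, 2], dtype=float), users))
# -> {'item_a', 'item_b'}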
In the case of new users, if no demographic features are present or their quality is too poor, a common strategy is to offer them non-personalized recommendations. This means that they could simply be recommended the most popular items, either globally or for their specific geographical region or language. Profile completion One of the available options when dealing with cold users or items is to rapidly acquire some preference data. There are various ways to do that depending on the amount of information required. These techniques are called preference elicitation strategies. This may be done either explicitly (by querying the user) or implicitly (by observing the user's behaviour). In both cases, the cold start problem would imply that the user has to dedicate an amount of effort using the system in its 'dumb' state – contributing to the construction of their user profile – before the system can start providing any intelligent recommendations. For example, MovieLens, a web-based recommender system for movies, asks the user to rate some movies as part of the registration. While preference elicitation strategies are a simple and effective way to deal with new users, the additional requirements during registration make the process more time-consuming for the user. Moreover, the quality of the obtained preferences might not be ideal, as the user could be rating items seen months or years ago, or the ratings could be almost random if the user entered them without paying attention, just to complete the registration quickly. The construction of the user's profile may also be automated by integrating information from other user activities, such as browsing histories or social media platforms. If, for example, a user has been reading information about a particular music artist from a media portal, then the associated recommender system would automatically propose that artist's releases when the user visits the music store. A variation of the previous approach is to automatically assign ratings to new items, based on the ratings assigned by the community to other similar items. Item similarity would be determined according to the items' content-based characteristics. It is also possible to create an initial profile of a user based on the user's personality characteristics and use such a profile to generate personalized recommendations. Personality characteristics of the user can be identified using a personality model such as the five factor model (FFM). Another possible technique is to apply active learning (machine learning). The main goal of active learning is to guide the user through the preference elicitation process, asking the user to rate only the items that, from the recommender's point of view, will be the most informative. This is done by analysing the available data and estimating the usefulness of the data points (e.g., ratings, interactions). As an example, say that we want to build two clusters from a certain cloud of points. As soon as we have identified two points, each belonging to a different cluster, which is the next most informative point? If we take a point close to one we already know, we can expect that it will likely belong to the same cluster. If we choose a point in between the two clusters, knowing which cluster it belongs to will help us find where the boundary is, allowing us to classify many other points with just a few observations. The cold start problem is also exhibited by interface agents. 
Since such an agent typically learns the user's preferences implicitly by observing patterns in the user's behaviour – "watching over the shoulder" – it takes time before the agent can perform any adaptations personalised to the user. Even then, its assistance is limited to activities it has previously observed the user engaging in. The cold start problem may be overcome by introducing an element of collaboration amongst agents assisting various users. This way, novel situations may be handled by requesting other agents to share what they have already learnt from their respective users. Feature mapping In recent years, more advanced strategies have been proposed; they all rely on machine learning and attempt to merge the content and collaborative information in a single model. One example of these approaches is called attribute-to-feature mapping, which is tailored to matrix factorization algorithms. The basic idea is the following. A matrix factorization model represents the user-item interactions as the product of two rectangular matrices whose content is learned using the known interactions via machine learning. Each user is associated with a row of the first matrix and each item with a column of the second matrix. The row or column associated with a specific user or item is called its latent factors. When a new item is added, it has no associated latent factors, and the lack of interactions does not allow them to be learned, as was done for the other items. If each item is associated with some features (e.g. author, year, publisher, actors), it is possible to define an embedding function which, given the item features, estimates the corresponding item latent factors. The embedding function can be designed in many ways, and it is trained with the data already available from warm items. Alternatively, one could apply a group-specific method. A group-specific method further decomposes each latent factor into two additive parts: one part corresponds to each item (and/or each user), while the other part is shared among items within each item group (e.g., a group of movies could be movies of the same genre). Then, once a new item arrives, we can assign a group label to it and approximate its latent factors by the group-specific part (of the corresponding item group). Therefore, although the individual part of the new item is not available, the group-specific part provides an immediate and effective solution. The same applies to a new user: if some information is available about them (e.g. age, nationality, gender), then their latent factors can be estimated via an embedding function or a group-specific latent factor. Hybrid feature weighting Another recent approach which bears similarities to feature mapping is to build a hybrid content-based filtering recommender in which features, either of the items or of the users, are weighted according to the user's perception of their importance. In order to identify a movie that the user could like, different attributes (e.g. actors, director, country, title) will have different importance. As an example, consider the James Bond movie series: the main actor changed many times over the years, while some cast members, like Lois Maxwell, did not. Therefore, her presence will probably be a better identifier of that kind of movie than the presence of one of the various main actors. 
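To make the feature-importance idea concrete, here is a small, purely illustrative Python sketch of a weighted content-based similarity: each feature column is rescaled by an importance weight before computing cosine similarity between items. The two features, their weights, and the toy item vectors are invented for the example and are not taken from any real catalogue.

import numpy as np

def weighted_similarity(item_features, feature_weights):
    """Cosine similarity between items after rescaling each feature column
    by the square root of its importance weight."""
    weighted = item_features * np.sqrt(feature_weights)
    norms = np.linalg.norm(weighted, axis=1) + 1e-9
    return (weighted @ weighted.T) / np.outer(norms, norms)

# Two made-up binary features: "Lois Maxwell appears" (weight 3) and
# "a particular lead actor appears" (weight 1).
items = np.array([[1.0, 1.0],   # an older Bond film
                  [1.0, 0.0],   # a later Bond film with a different lead actor
                  [0.0, 1.0]])  # an unrelated film sharing only the lead actor
weights = np.array([3.0, 1.0])
print(weighted_similarity(items, weights))

With these weights the two Bond films come out more similar to each other (about 0.87) than either is to the unrelated film (0.5 and 0.0), mirroring the observation that the recurring cast member is the better identifier.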
Various techniques exist to apply feature weighting to user or item features in recommender systems, but most of them come from the information retrieval domain, like tf–idf and Okapi BM25; only a few have been developed specifically for recommenders. Hybrid feature weighting techniques in particular are tailored to the recommender system domain. Some of them learn feature weights by directly exploiting the user's interactions with items, like FBSM. Others rely on an intermediate collaborative model trained on warm items and attempt to learn the content feature weights that best approximate the collaborative model. Many of the hybrid methods can be considered special cases of factorization machines. Differentiating regularization weights The above methods rely on affiliated information from users or items. Recently, another approach mitigates the cold start problem by assigning weaker regularization to the latent factors associated with the items or users that reveal more information (i.e., popular items and active users), and stronger regularization to the others (i.e., less popular items and inactive users). Various recommendation models have been shown to benefit from this strategy. Differentiating regularization weights can be integrated with the other cold-start mitigation strategies. See also Collaborative filtering Preference elicitation Recommender system Active learning (machine learning) Five Factor Model References External links http://activeintelligence.org/wp-content/papercite-data/pdf/Rubens-Active-Learning-RecSysHB2010.pdf http://activeintelligence.org/research/al-rs/ Collective intelligence Information systems
Cold start (recommender systems)
[ "Technology" ]
3,033
[ "Information systems", "Information technology" ]
9,391,957
https://en.wikipedia.org/wiki/Merchant%20Shipping%20%28Pollution%29%20Act%202006
The Merchant Shipping (Pollution) Act 2006 (c 8) is an Act of the Parliament of the United Kingdom. It has three main purposes: to give effect to the Supplementary Fund Protocol 2003, to give effect to Annex IV of the MARPOL Convention, and to amend section 178(1) of the Merchant Shipping Act 1995. Supplementary Fund Protocol Section 1 of the Act allows the government to enact provisions giving effect to the 2003 Protocol by affirmative Order in Council. The protocol, drawn up under the auspices of the International Maritime Organization, establishes an international fund which will pay out up to $1 billion in International Monetary Fund special drawing rights in cases of oil slicks and other environmental pollution. MARPOL Convention Section 2 of the Act amends section 128(1) of the Merchant Shipping Act 1995 by inserting an extra paragraph extending the government's power to make provisions by Order in Council to include giving effect to the convention. Merchant Shipping Act 1995 Section 3 of the Act amends section 178(1) of the Merchant Shipping Act 1995 to restrict claims to being enforced within three years of the damage occurring, whereas previously it had been restricted to within three years after "the claim against the Fund arose", and within six years of the damage occurring. See also Merchant Shipping Act Environmental issues with shipping References Halsbury's Statutes External links The Merchant Shipping (Pollution) Act 2006, as amended from the National Archives. The Merchant Shipping (Pollution) Act 2006, as originally enacted from the National Archives. Explanatory notes to the Merchant Shipping (Pollution) Act 2006. IMO article on the 2003 Protocol ePolitix article on the Act United Kingdom Acts of Parliament 2006 Ocean pollution 2006 in transport Admiralty law in the United Kingdom Environmental law in the United Kingdom 2006 in the environment Merchant Shipping Acts
Merchant Shipping (Pollution) Act 2006
[ "Chemistry", "Environmental_science" ]
369
[ "Ocean pollution", "Water pollution" ]
9,392,122
https://en.wikipedia.org/wiki/Common%20bunt
Common bunt, also known as hill bunt, Indian bunt, European bunt, stinking smut or covered smut, is a disease of both spring and winter wheats. It is caused by two very closely related fungi, Tilletia tritici (syn. Tilletia caries) and T. laevis (syn. T. foetida). Symptoms Plants with common bunt may be moderately stunted, but infected plants cannot be easily recognized until near maturity, and even then the disease is seldom conspicuous. After initial infection, the entire kernel is converted into a sorus consisting of a dark brown to black mass of teliospores covered by a modified periderm, which is thin and papery. The sorus is light to dark brown and is called a bunt ball. The bunt balls resemble wheat kernels but tend to be more spherical. The bunted heads are slender, bluish-green and may stay greener longer than healthy heads. The bunt balls change to a dull gray-brown at maturity, at which point they become conspicuous. The fragile covering of the bunt balls is ruptured at harvest, producing clouds of spores. The spores have a fishy odor. Intact sori can also be found among harvested grain. Disease cycle Millions of spores are released at harvest and contaminate healthy kernels or land on other plant parts or the soil. The spores persist on the contaminated kernels or in the soil. The disease is initiated when soil-borne, or in particular seed-borne, teliospores germinate in response to moisture and produce hyphae that infect germinating seeds by penetrating the coleoptile before plants emerge. Cool soil temperatures (5 to 10 °C) favor infection. The intercellular hyphae become established in the apical meristem and are maintained systemically within the plant. After initial infection, hyphae are sparse in plants. The fungus proliferates in the spikes when ovaries begin to form. Sporulation occurs in endosperm tissue until the entire kernel is converted into a sorus consisting of a dark brown to black mass of teliospores covered by a modified periderm, which is thin and papery. Pathotypes Well-defined pathogenic races have been found among the bunt population, and the classic gene-for-gene relationship is present between the fungus and host. Management Control of common bunt includes using clean seed, seed treatment chemicals, and resistant cultivars. Historically, seed treatment with organomercury fungicides reduced common bunt to manageable levels. Systemic seed treatment fungicides include carboxin, difenoconazole, triadimenol, and others, and are highly effective. However, in Australia and Greece, strains of T. laevis have developed resistance to polychlorobenzene fungicides. See also Smut (fungus) References External links Smut diseases Ustilaginomycotina Wheat diseases Fungus common names Fungal plant pathogens and diseases Basidiomycota
Common bunt
[ "Biology" ]
630
[ "Fungus common names", "Fungi", "Common names of organisms" ]
9,393,445
https://en.wikipedia.org/wiki/Inclusion%20%28disability%20rights%29
Inclusion, in relation to persons with disabilities, is defined as including individuals with disabilities in everyday activities and ensuring they have access to resources and opportunities in ways that are similar to those of their non-disabled peers. Disability rights advocates define true inclusion as results-oriented, rather than focused merely on encouragement. To this end, communities, businesses, and other groups and organizations are considered inclusive if people with disabilities do not face barriers to participation and have equal access to opportunities and resources. Common barriers to full social and economic inclusion of persons with disabilities include inaccessible physical environments and methods of public transportation, lack of assistive devices and technologies, non-adapted means of communication, gaps in service delivery, discriminatory prejudice and stigma in society, and systems and policies that are either non-existent or that hinder the involvement of all people with a health condition in all areas of life. Inclusion advocates argue that one of the key barriers to inclusion is ultimately the medical model of disability, which supposes that a disability inherently reduces the individual's quality of life and aims to use medical intervention to diminish or correct the disability. Interventions focus on physical and/or mental therapies, medications, surgeries, and assistive devices. Inclusion advocates, who generally adhere to the social model of disability, allege that this approach is wrong and that those who have physical, sensory, intellectual, and/or developmental impairments have better outcomes if, instead, it is not assumed that they have a lower quality of life and they are not looked at as though they need to be "fixed." Approaches Inclusion is ultimately a multifaceted practice that involves a variety of approaches across cultures and settings. It is an approach that seeks to ensure that people of differing abilities visibly and palpably belong to, are engaged in, and are actively connected to the goals and objectives of the wider society. Universal design is one of the key concepts in and approaches to disability inclusion. It involves designing buildings, products, or environments in a way that secures accessibility and usability to the greatest extent possible. Disability mainstreaming is simultaneously a method, a policy, and a tool for achieving social inclusion. In short, it is a process that is centered on integrating formerly marginalized individuals into "mainstream" society. This is accomplished by making "the needs and experiences of persons with disabilities an integral part of the design, implementation, monitoring, and evaluation of policies and programs in all political, economic, and societal spheres so that persons with disabilities benefit equally and so that inequality is not perpetuated." In educational settings, it is the practice of placing students who receive special education services in a general education classroom during specific time periods, based on their skills, creating inclusive settings that enable a person with a disability to take part in a "mainstream" environment without added difficulty. For example, education initiatives such as IDEA or No Child Left Behind promote inclusive schooling or mainstreaming for children with disabilities, such as autism, so that they can participate in the community at large. 
Inclusion in the United States In the United States, federal laws that pertain to individuals with disabilities aim to create an inclusive environment by promoting mainstreaming, nondiscrimination, reasonable accommodations, and universal design. There are three key federal laws that protect the rights of people with disabilities and attempt to ensure their inclusion in many aspects of society. Section 504 of the Rehabilitation Act of 1973 protects individuals from discrimination based on disability. The nondiscrimination requirements of the law apply to employers and organizations that receive financial assistance from federal departments or agencies. It created and extended civil rights to people with disabilities and allows for reasonable accommodations, such as special study areas and assistance as necessary for each student. The Americans with Disabilities Act (ADA), whose regulations are issued by the United States Department of Justice, was enacted in 1990. It is a civil rights law that protects the civil liberties of individuals with disabilities. As it pertains to universal design, the ADA requires covered employers and organizations to provide reasonable accommodations to employees with disabilities and imposes accessibility requirements on public accommodations. The ADA guarantees equal opportunity for individuals with disabilities in several areas: Employment; Public accommodations (such as restaurants, hotels, libraries, private schools, etc.); Transportation; State and local government services; Telecommunications (such as telephones, televisions, and computers). The Patient Protection and Affordable Care Act, which was enacted in 2010, touches on disability inclusion in that it designates disability status as a demographic category and mandates data collection to assess health disparities. While laws have been created to ensure physical access, such as mandatory wheelchair ramps, the disabled community still does not have a high rate of participation in cultural activities. Additionally, the attitudes and prejudices held by people without disabilities towards the disabled community remain a persistent issue. To this end, when it comes to societal perceptions of individuals with disabilities, barriers to inclusion generally include other people's behaviors, misunderstandings, lack of awareness about disabilities, and even a lack of understanding about the functions performed by service animals. This is in addition to physical barriers already present, including transportation, level of lighting, or a lack of handicap-accessible buildings and equipment. See also Augmentative and alternative communication Disability Flag Reasonable accommodation Social model of disability Universal design References Further reading External links Disabled Peoples' International (global inclusion network) "The Social Movement Left Out" - Z Magazine article by Marta Russell Center for Inclusive Design and Environmental Access Sociological terminology Disability rights Medical sociology Accessibility Disability accommodations Majority–minority relations Articles containing video clips
Inclusion (disability rights)
[ "Engineering" ]
1,105
[ "Accessibility", "Design" ]
9,394,324
https://en.wikipedia.org/wiki/Potential%20game
In game theory, a game is said to be a potential game if the incentive of all players to change their strategy can be expressed using a single global function called the potential function. The concept originated in a 1996 paper by Dov Monderer and Lloyd Shapley. The properties of several types of potential games have since been studied. Games can be either ordinal or cardinal potential games. In cardinal games, the difference in individual payoffs for each player from individually changing one's strategy, other things equal, has to have the same value as the difference in values for the potential function. In ordinal games, only the signs of the differences have to be the same. The potential function is a useful tool to analyze equilibrium properties of games, since the incentives of all players are mapped into one function, and the set of pure Nash equilibria can be found by locating the local optima of the potential function. Convergence and finite-time convergence of an iterated game towards a Nash equilibrium can also be understood by studying the potential function. Potential games can be studied as repeated games with state so that every round played has a direct consequence on the game's state in the next round. This approach has applications in distributed control such as distributed resource allocation, where players without a central correlation mechanism can cooperate to achieve a globally optimal resource distribution. Definition Let N be the number of players, A the set of action profiles over the action sets A_i of each player, and u_i the payoff function for player i. Given a game G = (N, A, u), we say that G is a potential game with an exact (weighted, ordinal, generalized ordinal, best response) potential function if Φ: A → R is an exact (weighted, ordinal, generalized ordinal, best response, respectively) potential function for G. Here, Φ is called an exact potential function if Φ(b_i, a_−i) − Φ(a_i, a_−i) = u_i(b_i, a_−i) − u_i(a_i, a_−i) for every player i, every a_−i, and every a_i, b_i in A_i. That is: when player i switches from action a_i to action b_i, the change in the potential equals the change in the utility of that player. a weighted potential function if there is a vector w of positive player-specific weights such that Φ(b_i, a_−i) − Φ(a_i, a_−i) = w_i (u_i(b_i, a_−i) − u_i(a_i, a_−i)). That is: when a player switches action, the change in Φ equals the change in the player's utility, times a positive player-specific weight. Every exact PF is a weighted PF with w_i = 1 for all i. an ordinal potential function if u_i(b_i, a_−i) − u_i(a_i, a_−i) > 0 if and only if Φ(b_i, a_−i) − Φ(a_i, a_−i) > 0. That is: when a player switches action, the sign of the change in Φ equals the sign of the change in the player's utility, whereas the magnitude of change may differ. Every weighted PF is an ordinal PF. a generalized ordinal potential function if u_i(b_i, a_−i) − u_i(a_i, a_−i) > 0 implies Φ(b_i, a_−i) − Φ(a_i, a_−i) > 0. That is: when a player switches action, if the player's utility increases, then the potential increases (but the opposite is not necessarily true). Every ordinal PF is a generalized-ordinal PF. a best-response potential function if b_i(a_−i) = argmax over a_i of Φ(a_i, a_−i) for every player i and every a_−i, where b_i(a_−i) = argmax over a_i of u_i(a_i, a_−i) is the best action for player i given a_−i. Note that while there are N utility functions, one for each player, there is only one potential function. Thus, through the lens of potential functions, the players become interchangeable (in the sense of one of the definitions above). Because of this symmetry of the game, decentralized algorithms based on the shared potential function often lead to convergence (in some sense) to a Nash equilibrium. A simple example In a 2-player, 2-action game with externalities, individual players' payoffs are given by the function u_i(a_i, a_j) = b_i a_i + w a_i a_j, where a_i is player i's action, a_j is the opponent's action, and w is a positive externality from choosing the same action. The action choices are +1 and −1, as seen in the payoff matrix in Figure 1. 
This game has a potential function Φ(a_1, a_2) = b_1 a_1 + b_2 a_2 + w a_1 a_2. If player 1 moves from −1 to +1, the payoff difference is u_1(+1, a_2) − u_1(−1, a_2) = 2 b_1 + 2 w a_2. The change in potential is Φ(+1, a_2) − Φ(−1, a_2) = 2 b_1 + 2 w a_2, which is the same. The solution for player 2 is equivalent. Using numerical values for b_1, b_2 and w, this example transforms into a simple battle of the sexes, as shown in Figure 2. The game has two pure Nash equilibria, (+1, +1) and (−1, −1). These are also the local maxima of the potential function (Figure 3). The only stochastically stable equilibrium is the one at the global maximum of the potential function. A 2-player, 2-action game cannot be an exact potential game unless its payoffs satisfy the four-cycle condition [u_1(+1, −1) − u_1(−1, −1)] − [u_1(+1, +1) − u_1(−1, +1)] = [u_2(−1, +1) − u_2(−1, −1)] − [u_2(+1, +1) − u_2(+1, −1)]. Potential games and congestion games Exact potential games are equivalent to congestion games: Rosenthal proved that every congestion game has an exact potential; Monderer and Shapley proved the opposite direction: every game with an exact potential function is a congestion game. Potential games and improvement paths An improvement path (also called Nash dynamics) is a sequence of strategy-vectors, in which each vector is attained from the previous vector by a single player switching his strategy to a strategy that strictly increases his utility. If a game has a generalized-ordinal-potential function Φ, then Φ is strictly increasing along every improvement path, so every improvement path is acyclic. If, in addition, the game has finitely many strategies, then every improvement path must be finite. This property is called the finite improvement property (FIP). We have just proved that every finite generalized-ordinal-potential game has the FIP. The opposite is also true: every finite game that has the FIP has a generalized-ordinal-potential function. The terminal state in every finite improvement path is a Nash equilibrium, so FIP implies the existence of a pure-strategy Nash equilibrium. Moreover, it implies that a Nash equilibrium can be computed by a distributed process, in which each agent only has to improve his own strategy. A best-response path is a special case of an improvement path, in which each vector is attained from the previous vector by a single player switching his strategy to a best-response strategy. The property that every best-response path is finite is called the finite best-response property (FBRP). FBRP is weaker than FIP, and it still implies the existence of a pure-strategy Nash equilibrium. It also implies that a Nash equilibrium can be computed by a distributed process, but the computational burden on the agents is higher than with FIP, since they have to compute a best response. An even weaker property is weak-acyclicity (WA). It means that, for any initial strategy-vector, there exists a finite best-response path starting at that vector. Weak-acyclicity is not sufficient for existence of a potential function (since some improvement-paths may be cyclic), but it is sufficient for the existence of a pure-strategy Nash equilibrium. It implies that a Nash equilibrium can be computed almost-surely by a stochastic distributed process, in which at each point, a player is chosen at random, and this player chooses a best-response strategy at random. See also Congestion game Econophysics A characterization of ordinal potential games. References External links Lecture notes of Yishay Mansour about Potential and congestion games Section 19 in: Non technical exposition by Huw Dixon of the inevitability of collusion Chapter 8, Donut world and the duopoly archipelago, Surfing Economics. Game theory game classes
Potential game
[ "Mathematics" ]
1,458
[ "Game theory game classes", "Game theory" ]
9,394,749
https://en.wikipedia.org/wiki/Temperate%20deciduous%20forest
Temperate deciduous or temperate broad-leaf forests are a variety of temperate forest 'dominated' by deciduous trees that lose their leaves each winter. They represent one of Earth's major biomes, making up 9.69% of global land area. These forests are found in areas with distinct seasonal variation that cycle through warm, moist summers, cold winters, and moderate fall and spring seasons. They are most commonly found in the Northern Hemisphere, with particularly large regions in eastern North America, East Asia, and a large portion of Europe, though smaller regions of temperate deciduous forests are also located in South America. Examples of trees typically growing in the Northern Hemisphere's deciduous forests include oak, maple, basswood, beech and elm, while in the Southern Hemisphere, trees of the genus Nothofagus dominate this type of forest. Temperate deciduous forests provide several unique ecosystem services, including habitats for diverse wildlife, and they face a set of natural and human-induced disturbances that regularly alter their structure. Geography Located below the northern boreal forests, temperate deciduous forests make up a significant portion of the land between the Tropic of Cancer (23°N) and latitudes of 50° North, in addition to areas south of the Tropic of Capricorn (23°S). Canada, the United States, China, and several European countries have the largest land area covered by temperate deciduous forests, with smaller portions present throughout South America, specifically Chile and Argentina. Climate Temperate conditions refer to the cycle through four distinct seasons that occurs in areas between the polar regions and tropics. In these regions where temperate deciduous forest are found, warm and cold air circulation accounts for the biome's characteristic seasonal variation. Temperature The average annual temperature tends to be around 10 °Celsius, though this is dependent on the region. Due to shading from the canopy, the microclimate of temperate deciduous forests tends to be about 2.1 °Celsius cooler than the surroundings, whereas winter temperatures are from 0.4 to 0.9 °Celsius warmer within forests as a result of insulation from vegetation strata. Precipitation Annually, temperate deciduous forests experience approximately 750 to 1,500 millimeters of precipitation. As there is no distinct rainy season, precipitation is spread relatively evenly throughout the year. Snow makes up a portion of the precipitation present in temperate deciduous forests in the winter. Tree branches can intercept up to 80% of snowfall, affecting the amount of snow that ultimately reaches and melts on the forest floor. Seasonal variation A factor of temperate deciduous forests is their leaf loss during the transition from fall to winter, an adaptation that arose as a solution for the low sunlight conditions and bitter cold temperatures. In these forests, winter is a time of dormancy for plants, when broadleaf deciduous trees conserve energy and prevent water loss, and many animal species hibernate or migrate. Preceding winter is fruit-bearing autumn, a time when leaves change color to various shades of red, yellow, and orange as chlorophyll breakdown gives rise to anthocyanin, carotene, and xanthophyl pigments. Besides the characteristic colorful autumns and leafless winters, temperate deciduous forests have a lengthy growing season during the spring and summer months that tends to last anywhere from 120 to 250 days. 
Spring in temperate deciduous forests is a period of ground vegetation and seasonal herb growth, a process that starts early in the season before trees have regrown their leaves and when ample sunlight is available. Once a suitable temperature is reached in mid- to late spring, budding and flowering of tall deciduous trees also begins. In the summer, when fully developed leaves occupy all trees, a moderately dense canopy creates shade, increasing the humidity of forested areas. Characteristics Soil Though there is latitudinal variation in soil quality of temperate deciduous forests, with those at central latitudes having a higher soil productivity than those further north or south, soil in this biome is overall highly fertile. The fallen leaves from deciduous trees introduce detritus to the forest floor, increasing levels of nutrients and organic matter in the soil. The high soil productivity of temperate deciduous forests puts them at a high risk of conversion to agricultural land for human use. Flora Temperate deciduous forests are characterized by a variety of temperate deciduous tree species that vary based on region. Most tree species present in temperate deciduous forests are broadleaf trees that lose their leaves in the fall, though some coniferous trees such as pines (Pinus) are present in northern temperate deciduous forests. Europe's temperate deciduous forests are rich with oaks of the genus Quercus, European beech trees (Fagus sylvatica), and hornbeams (Carpinus betulus), while those in Asia tend to have maples of the genus Acer, a variety of ash trees (Fraxinus), and basswoods (Tilia). Similarly to Asia, North American forests have maples, especially Acer saccharum, and basswoods, in addition to hickories (Carya) and American chestnuts (Castanea dentata). Southern beech (Nothofagus) trees are prevalent in the temperate deciduous forests of South America. Elm trees (Ulmus) and willows (Salix) can also be found dispersed throughout the temperate deciduous forests of the world. While a wide variety of tree species can be found throughout the temperate deciduous forest biome, tree species richness is typically moderate in each individual ecosystem, with only 3 to 4 tree species per square kilometer. Besides the old-growth trees that, with their domed tree crowns, form a canopy that lets little light filter through, a sub-canopy of shrubs such as mountain laurel and azaleas is present. These other plant species found in the canopy layers below the 35- to 40-meter mature trees are either adapted to low-light conditions or follow a seasonal schedule of growth that allows them to thrive before the formation of the canopy from mid-spring through mid-fall. Mosses and lichens make up significant ground cover, though they are also found growing on trees. Fauna In addition to characteristic flora, temperate deciduous forests are home to several animal species that rely on the trees and other plant life for shelter and resources, such as squirrels, rabbits, skunks, birds, mountain lions, bobcats, timber wolves, foxes, and black bears. Deer are also present in large populations, though they are clearing animals rather than true forest animals. Large deer populations have deleterious effects on tree regeneration overall, and grazing also has significant negative effects on the number and kind of herbaceous flowering plants. The continuous increase of deer populations and killing of top carnivores suggests that overgrazing by deer will continue. 
Ecosystem services Temperate deciduous forests provide several provisioning, regulating, supporting, and cultural ecosystem services. With a higher biodiversity than boreal forests, temperate deciduous forests maintain their genetic diversity by providing the supporting service of habitat availability for a variety of plants and animal species dependent on shade. These forests play a role in the regulation of air and soil quality by preventing soil erosion and flooding, while also storing carbon in their soil. Provisioning services provided by temperate deciduous forests include access to sources of drinking water, oxygen, food, timber, and biomass. Humans depend on temperate deciduous forests for cultural services, using them as spaces for recreation and spiritual practices. Disturbances Natural disturbances cause regular renewal of temperate deciduous forests and create a healthy, heterogeneous environment with constantly changing structures and populations. Weather events like snow, storms, and wind can cause varying degrees of change to the structure of forest canopies, creating log habitats for small animals and spaces for less shade-tolerant species to grow where fallen trees once stood. Other abiotic sources of disturbances to temperate deciduous forests include droughts, waterlogging, and fires. Natural surface fire patterns are especially important in pine reproduction. Biotic factors affecting forests take the form of fungal outbreaks in addition to mountain pine beetle and bark beetle infestations. These beetles are particularly prevalent in North America and kill trees by clogging their vascular tissue. Temperate deciduous forests tend to be resilient after minor weather-related disturbances, though major insect infestations, widespread anthropogenic disturbances, and catastrophic weather events can cause century-long succession or even the permanent conversion of the forest into a grassland. Climate change Rising temperatures and increased dryness in temperate deciduous forests have been noted in recent years as the climate changes. As a result, temperate deciduous forests have been experiencing an earlier onset to spring, as well as a global increase in the frequency and intensity of disturbances. They have been experiencing lower ecological resilience in the face of increasing mega-fires, longer droughts, and severe storms. Damaged wood from increased storm disturbance events provides nesting habitats for beetles, concurrently increasing bark beetle damage. Forest cover decreases with continuous severe disturbances, causing habitat loss and lower biodiversity. Human use and impact Humans rely on wood from temperate deciduous forests for use in the timber industry as well as paper and charcoal production. Logging practices emit high levels of carbon while also causing erosion because fewer tree roots are present to provide soil support. During the European colonization of North America, potash made from tree ashes was exported back to Europe as fertilizer. At this time in history, clearcutting of the original temperate deciduous forests was also performed to make space for agricultural land use, so many forests now present are second-growth. Over 50% of temperate deciduous forests are affected by fragmentation, resulting in small fragments dissected by fields and roads; these islands of green often differ substantially from the original forests and cause challenges for species migration. 
Seminatural temperate deciduous forests with developed trail systems serve as sites for tourism and recreational activities, such as hiking and hunting. In addition to fragmentation, human use of land adjacent to temperate deciduous forests is associated with pollution that can stunt the growth rate of trees. Invasive species that outcompete native species and alter forest nutrient cycles, such as common buckthorn (Rhamnus cathartica), are also introduced by humans. The introduction of exotic diseases, especially, continues to be a threat to forest trees and, hence, the forest. Humans have also introduced earthworms into deciduous forests in North America, which has had a profound impact on the ecosystem and reduced biodiversity. Conservation A method for preserving temperate deciduous forests that has been used in the past is fire suppression. The process of preventing fires is associated with the build-up of biomass that, ultimately, increases the intensity of incidental fires. As an alternative, prescribed burning has been put into practice, in which regular, managed fires are administered to forest ecosystems to imitate the natural disturbances that play a significant role in preserving biodiversity. To combat the effects of deforestation, reforestation has been employed. See also Temperate coniferous forest Temperate broadleaf and mixed forest International Year of Forests Old-growth forest Tropical evergreen forest Tropical deciduous forest Wood-pasture hypothesis References External links A map of biome distribution (Temperate Deciduous Forest is in dark green) Deciduous Terrestrial biomes Forests Habitat
Temperate deciduous forest
[ "Biology" ]
2,247
[ "Forests", "Ecosystems" ]
9,394,772
https://en.wikipedia.org/wiki/Barnes%E2%80%93Hut%20simulation
The Barnes–Hut simulation (named after Josh Barnes and Piet Hut) is an approximation algorithm for performing an N-body simulation. It is notable for having order O(n log n) compared to a direct-sum algorithm which would be O(n²). The simulation volume is usually divided up into cubic cells via an octree (in a three-dimensional space), so that only particles from nearby cells need to be treated individually, and particles in distant cells can be treated as a single large particle centered at the cell's center of mass (or as a low-order multipole expansion). This can dramatically reduce the number of particle pair interactions that must be computed. Some of the most demanding high-performance computing projects perform computational astrophysics using the Barnes–Hut treecode algorithm, such as DEGIMA. Algorithm The Barnes–Hut tree In a three-dimensional N-body simulation, the Barnes–Hut algorithm recursively divides the n bodies into groups by storing them in an octree (or a quad-tree in a 2D simulation). Each node in this tree represents a region of the three-dimensional space. The topmost node represents the whole space, and its eight children represent the eight octants of the space. The space is recursively subdivided into octants until each subdivision contains 0 or 1 bodies (some regions do not have bodies in all of their octants). There are two types of nodes in the octree: internal and external nodes. An external node has no children and is either empty or represents a single body. Each internal node represents the group of bodies beneath it, and stores the center of mass and the total mass of all its children bodies. Calculating the force acting on a body To calculate the net force on a particular body, the nodes of the tree are traversed, starting from the root. If the center of mass of an internal node is sufficiently far from the body, the bodies contained in that part of the tree are treated as a single particle whose position and mass are, respectively, the center of mass and total mass of the internal node. If the internal node is sufficiently close to the body, the process is repeated for each of its children. Whether a node is sufficiently far away from a body depends on the quotient s/d, where s is the width of the region represented by the internal node and d is the distance between the body and the node's center of mass. The node is sufficiently far away when this ratio is smaller than a threshold value θ. The parameter θ determines the accuracy of the simulation; larger values of θ increase the speed of the simulation but decrease its accuracy. If θ = 0, no internal node is treated as a single body and the algorithm degenerates to a direct-sum algorithm. See also NEMO (Stellar Dynamics Toolbox) Nearest neighbor search Fast multipole method References and sources References Sources External links Treecodes, J. Barnes Parallel TreeCode HTML5/JavaScript Example Graphical Barnes–Hut Simulation PEPC – The Pretty Efficient Parallel Coulomb solver, an open-source parallel Barnes–Hut tree code with exchangeable interaction kernel for a multitude of applications Parallel GPU N-body simulation program with fast stackless particles tree traversal at beltoforion.de Simulation Gravity Physical cosmology Numerical integration (quadrature) Articles containing video clips
Barnes–Hut simulation
[ "Physics", "Astronomy" ]
689
[ "Astrophysics", "Theoretical physics", "Physical cosmology", "Astronomical sub-disciplines" ]
9,394,818
https://en.wikipedia.org/wiki/Alexandre-%C3%89mile%20B%C3%A9guyer%20de%20Chancourtois
Alexandre-Émile Béguyer de Chancourtois (20 January 1820 – 14 November 1886) was a French geologist and mineralogist who was the first to arrange the chemical elements in order of atomic weights, doing so in 1862. De Chancourtois only published his paper, but did not publish his actual graph with the irregular arrangement. Although his publication was significant, it was ignored by chemists as it was written in terms of geology. It was Dmitri Mendeleev's table published in 1869 that became most recognized. De Chancourtois was also a professor of mine surveying, and later geology at the École Nationale Supérieure des Mines de Paris. He also was the Inspector of Mines in Paris, and was widely responsible for implementing many mine safety regulations and laws during the time. Life De Chancourtois was born in 1820 in Paris. At age eighteen, he entered the renowned École polytechnique, one of the best known French grandes écoles of engineering and management. While he was there, de Chancourtois was a pupil of three famous French scientists, Jean-Baptiste Élie de Beaumont, Pierre Guillaume Frédéric le Play, and Ours-Pierre-Armand Petit-Dufrénoy. After completing his studies at École Polytechnique, de Chancourtois went on a biological expedition into Philippines, Luzon and Visayas. In 1848, de Chancourtois went back to Paris and joined the teaching faculty as professor of mine surveying at the École Nationale Supérieure des Mines de Paris. He worked with le Play to organize a collection of minerals for the French government. In 1852, De Chancourtois was named the professor of geology at École Nationale Supérieure des Mines de Paris. In 1867, de Chancourtois was awarded the Legion of Honour by Napoleon III of France. De Chancourtois led several overseas expeditions during the course of his life and served as the Inspector of Mines in Paris from 1875 until his death. As a mine inspector, he introduced safety laws to prevent methane gas explosions, which were frequent occurrences at the time. He died in 1886 in Paris. Organizing the elements In 1862, two years before John Alexander Reina Newlands published his classification of the elements, de Chancourtois created a fully functioning and unique system of organising the chemical elements. His proposed classification of elements was based on the newest values of atomic weights obtained by Stanislao Cannizzaro in 1858. De Chancourtois devised a spiral graph that was arranged on a cylinder, which he called vis tellurique, or telluric helix because tellurium was the element in the middle of the graph. De Chancourtois ordered the elements by increasing atomic weight, with similar elements lined up vertically. A.E.B. de Chancourtois plotted the atomic weights on the surface of a cylinder with a circumference of 16 units, the approximate atomic weight of oxygen. The resulting helical curve, which de Chancourtois called a telluric helix, brought similar elements to corresponding points above or below one another on the cylinder. Thus, he suggested that "the properties of the elements are the properties of numbers." He was the first scientist to see the periodicity of elements when they were arranged in order of their atomic weights. He saw that similar elements occurred at regular atomic weight intervals. Despite de Chancourtois' work, his publication attracted little attention from chemists around the world. He presented the paper to the French Academy of Sciences which published it in Comptes Rendus, the academy's journal. 
De Chancourtois's original diagram was left out of the publication, making the paper hard to comprehend. However, the diagram did appear in a less widely read geological pamphlet. The paper also dealt mainly with geological concepts, and did not suit the interests of many chemistry experts. It was not until 1869 that Dmitri Mendeleev's periodic table attracted attention and gained widespread scientific acceptance. Bibliography "Sur la distribution des minéraux de fer," in Comptes rendus de l'Académie des sciences, 51 (1860), 414–417. "Études stratigraphiques sur le départ de la Haute-Marne." Paris, 1862. "Vis tellurique," in Comptes rendus de l'Académie des sciences, 54 (1862), 757–761, 840–843, 967–971. References External links 2007, Eric Scerri, The periodic table: Its story and its significance, Oxford University Press, New York, 2021, Carmen Giunta, "Vis tellurique of Alexandre-Émile Béguyer de Chancourtois," in 150 Years of the Periodic Table, Springer, 19th-century French chemists 19th-century French geologists Academic staff of Mines Paris - PSL Scientists from Paris Commanders of the Legion of Honour 1820 births 1886 deaths People involved with the periodic table École Polytechnique alumni
Alexandre-Émile Béguyer de Chancourtois
[ "Chemistry" ]
1,050
[ "Periodic table", "People involved with the periodic table" ]
9,395,279
https://en.wikipedia.org/wiki/Grothendieck%20inequality
In mathematics, the Grothendieck inequality states that there is a universal constant K_G with the following property. If M_ij is an n × n (real or complex) matrix with |Σ_{i,j} M_ij s_i t_j| ≤ 1 for all (real or complex) numbers s_i, t_j of absolute value at most 1, then |Σ_{i,j} M_ij ⟨S_i, T_j⟩| ≤ K_G for all vectors S_i, T_j in the unit ball B(H) of a (real or complex) Hilbert space H, the constant K_G being independent of n. For a fixed Hilbert space of dimension d, the smallest constant that satisfies this property for all n × n matrices is called a Grothendieck constant and denoted K_G(d). In fact, there are two Grothendieck constants, K_G^R(d) and K_G^C(d), depending on whether one works with real or complex numbers, respectively. The Grothendieck inequality and Grothendieck constants are named after Alexander Grothendieck, who proved the existence of the constants in a paper published in 1953. Motivation and the operator formulation Let A = (a_ij) be an m × n real matrix. Then A defines a linear operator between the normed spaces (R^n, ‖·‖_p) and (R^m, ‖·‖_q) for 1 ≤ p, q ≤ ∞. The (p → q)-norm of A is the quantity ‖A‖_{p→q} = max { ‖Ax‖_q : ‖x‖_p = 1 }. If p = q, we denote the norm by ‖A‖_p. One can consider the following question: for what values of p and q is ‖A‖_{p→q} maximized? Since A is linear, it suffices to consider p such that the unit ball { x : ‖x‖_p ≤ 1 } contains as many points as possible, and also q such that ‖Ax‖_q is as large as possible. By comparing ‖x‖_p for p = 1, 2, …, ∞, one sees that ‖A‖_{∞→1} ≥ ‖A‖_{p→q} for all p and q. One way to compute ‖A‖_{∞→1} is by solving the following quadratic integer program: maximize Σ_{i,j} A_ij x_i y_j subject to x_i, y_j ∈ {−1, 1}. To see this, note that Σ_{i,j} A_ij x_i y_j = Σ_i x_i (Ay)_i, and taking the maximum over x_i ∈ {−1, 1} gives ‖Ay‖_1. Then taking the maximum over y with ‖y‖_∞ ≤ 1 gives ‖A‖_{∞→1}, by the convexity of the ℓ∞ unit ball and by the triangle inequality. This quadratic integer program can be relaxed to the following semidefinite program: maximize Σ_{i,j} A_ij ⟨x^(i), y^(j)⟩ subject to x^(1), …, x^(m), y^(1), …, y^(n) being unit vectors in a (real) Hilbert space. It is known that exactly computing ‖A‖_{∞→1} is NP-hard, as is exactly computing ‖A‖_{p→q} for various other pairs of p and q. One can then ask the following natural question: How well does an optimal solution to the semidefinite program approximate ‖A‖_{∞→1}? The Grothendieck inequality provides an answer to this question: there exists a fixed constant C such that, for any n, for any matrix A, and for any Hilbert space H, the optimum of the semidefinite program, max over unit vectors S_i, T_j in H of Σ_{i,j} A_ij ⟨S_i, T_j⟩, is at most C times max over s_i, t_j ∈ {−1, 1} of Σ_{i,j} A_ij s_i t_j. Bounds on the constants The sequences K_G^R(d) and K_G^C(d) are easily seen to be increasing, and Grothendieck's result states that they are bounded, so they have limits. Grothendieck proved that π/2 ≈ 1.57 ≤ K_G^R ≤ sinh(π/2) ≈ 2.3, where K_G^R is defined to be sup_d K_G^R(d). Krivine improved the result by proving that K_G^R ≤ π / (2 ln(1 + √2)) ≈ 1.78, conjecturing that the upper bound is tight. However, this conjecture was disproved by Braverman, Makarychev, Makarychev and Naor. Grothendieck constant of order d Boris Tsirelson showed that the Grothendieck constants play an essential role in the problem of quantum nonlocality: the Tsirelson bound of any full correlation bipartite Bell inequality for a quantum system of dimension d is upper-bounded by the Grothendieck constant of the corresponding order. Lower bounds Some historical data on the best known lower bounds of the real Grothendieck constant is summarized in the following table. Upper bounds Some historical data on the best known upper bounds of the real Grothendieck constant: Applications Cut norm estimation Given an m × n real matrix A = (a_ij), the cut norm of A is defined by ‖A‖_□ = max over S ⊆ [m], T ⊆ [n] of |Σ_{i ∈ S, j ∈ T} a_ij|. The notion of cut norm is essential in designing efficient approximation algorithms for dense graphs and matrices. More generally, the definition of cut norm can be generalized for symmetric measurable functions W : [0, 1]² → R, so that the cut norm of W is defined by ‖W‖_□ = sup over measurable S, T ⊆ [0, 1] of |∫_{S × T} W(x, y) dx dy|. This generalized definition of cut norm is crucial in the study of the space of graphons, and the two definitions of cut norm can be linked via the adjacency matrix of a graph. An application of the Grothendieck inequality is to give an efficient algorithm for approximating the cut norm of a given real matrix A; specifically, given an m × n real matrix, one can find a number α such that ‖A‖_□ ≤ α ≤ C ‖A‖_□, where C is an absolute constant. This approximation algorithm uses semidefinite programming. 
We give a sketch of this approximation algorithm. Let be matrix defined by One can verify that by observing, if form a maximizer for the cut norm of , then form a maximizer for the cut norm of . Next, one can verify that , where Although not important in this proof, can be interpreted to be the norm of when viewed as a linear operator from to . Now it suffices to design an efficient algorithm for approximating . We consider the following semidefinite program: Then . The Grothedieck inequality implies that . Many algorithms (such as interior-point methods, first-order methods, the bundle method, the augmented Lagrangian method) are known to output the value of a semidefinite program up to an additive error  in time that is polynomial in the program description size and . Therefore, one can output which satisfies Szemerédi's regularity lemma Szemerédi's regularity lemma is a useful tool in graph theory, asserting (informally) that any graph can be partitioned into a controlled number of pieces that interact with each other in a pseudorandom way. Another application of the Grothendieck inequality is to produce a partition of the vertex set that satisfies the conclusion of Szemerédi's regularity lemma, via the cut norm estimation algorithm, in time that is polynomial in the upper bound of Szemerédi's regular partition size but independent of the number of vertices in the graph. It turns out that the main "bottleneck" of constructing a Szemeredi's regular partition in polynomial time is to determine in polynomial time whether or not a given pair is close to being -regular, meaning that for all with , we have where for all and are the vertex and edge sets of the graph, respectively. To that end, we construct an matrix , where , defined by Then for all , Hence, if is not -regular, then . It follows that using the cut norm approximation algorithm together with the rounding technique, one can find in polynomial time such that Then the algorithm for producing a Szemerédi's regular partition follows from the constructive argument of Alon et al. Variants of the Grothendieck inequality Grothendieck inequality of a graph The Grothendieck inequality of a graph states that for each and for each graph without self loops, there exists a universal constant such that every matrix satisfies that The Grothendieck constant of a graph , denoted , is defined to be the smallest constant that satisfies the above property. The Grothendieck inequality of a graph is an extension of the Grothendieck inequality because the former inequality is the special case of the latter inequality when is a bipartite graph with two copies of as its bipartition classes. Thus, For , the -vertex complete graph, the Grothendieck inequality of becomes It turns out that . On one hand, we have . Indeed, the following inequality is true for any matrix , which implies that by the Cauchy-Schwarz inequality: On the other hand, the matching lower bound is due to Alon, Makarychev, Makarychev and Naor in 2006. The Grothendieck inequality of a graph depends upon the structure of . It is known that and where is the clique number of , i.e., the largest such that there exists with such that for all distinct , and The parameter is known as the Lovász theta function of the complement of . 
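The cut norm defined in the application section above is easy to evaluate by exhaustive search when the matrix is tiny, which can help build intuition, even though this brute force is exponential and therefore nothing like the polynomial-time, SDP-based approximation discussed earlier. A minimal Python sketch with a made-up matrix:

from itertools import chain, combinations
import numpy as np

def subsets(indices):
    """All subsets of a range of indices (exponential; only for tiny matrices)."""
    return chain.from_iterable(combinations(indices, r) for r in range(len(indices) + 1))

def cut_norm_bruteforce(A):
    """Exact cut norm of a small real matrix: the maximum over row subsets S and
    column subsets T of |sum of A[i, j] over i in S, j in T|."""
    m, n = A.shape
    best = 0.0
    for S in subsets(range(m)):
        for T in subsets(range(n)):
            if S and T:
                best = max(best, abs(A[np.ix_(S, T)].sum()))
    return best

A = np.array([[1.0, -2.0],
              [3.0,  0.5]])
print(cut_norm_bruteforce(A))  # 4.0, attained by S = {0, 1}, T = {0}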
L^p Grothendieck inequality In the application of the Grothendieck inequality for approximating the cut norm, we have seen that the Grothendieck inequality answers the following question: How well does an optimal solution to the semidefinite program approximate , which can be viewed as an optimization problem over the unit cube? More generally, we can ask similar questions over convex bodies other than the unit cube. For instance, the following inequality is due to Naor and Schechtman and independently due to Guruswami et al: For every matrix and every , where The constant is sharp in the inequality. Stirling's formula implies that as . See also Pisier–Ringrose inequality References External links (NB: the historical part is not exact there.) Theorems in functional analysis Inequalities
Grothendieck inequality
[ "Mathematics" ]
1,660
[ "Theorems in mathematical analysis", "Binary relations", "Theorems in functional analysis", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems", "Mathematical theorems" ]
9,395,776
https://en.wikipedia.org/wiki/Coded%20aperture
Coded apertures or coded-aperture masks are grids, gratings, or other patterns of materials opaque to various wavelengths of electromagnetic radiation. The wavelengths are usually high-energy radiation such as X-rays and gamma rays. A coded "shadow" is cast upon a plane by blocking radiation in a known pattern. The properties of the original radiation sources can then be mathematically reconstructed from this shadow. Coded apertures are used in X- and gamma ray imaging systems, because these high-energy rays cannot be focused with lenses or mirrors that work for visible light. Rationale Imaging is usually done at optical wavelengths using lenses and mirrors. However, the energy of hard X-rays and γ-rays is too high to be reflected or refracted, and simply passes through the lenses and mirrors of optical telescopes. Image modulation by apertures is, therefore, often used instead. The pinhole camera is the most basic form of such a modulation imager, but its disadvantage is low throughput, as its small aperture allows through little radiation. Only a tiny fraction of the light passes through the pinhole, which causes a low signal-to-noise ratio. To solve this problem, the mask can contain many holes, in one of several particular patterns, for example. Multiple masks, at varying distances from a detector, add flexibility to this tool. Specifically the modulation collimator, invented by Minoru Oda, was used to identify the first cosmic X-ray source and thereby to launch the new field of X-ray astronomy in 1965. Many other applications in other fields, such as tomography, have since appeared. In a coded aperture more complicated than a pinhole camera, images from multiple apertures will overlap at the detector array. It is thus necessary to use a computational algorithm (which depends on the precise configuration of the aperture arrays) to reconstruct the original image. In this way a sharp image can be achieved without a lens. The image is formed from the whole array of sensors and is therefore tolerant to faults in individual sensors; on the other hand it accepts more background radiation than a focusing-optics imager (e.g., a refracting or reflecting telescope), and therefore is normally not favored at wavelengths where these techniques can be applied. The coded aperture imaging technique is one of the earliest forms of computational photography and has a strong affinity to astronomical interferometry. Aperture-coding was first introduced by Ables and Dicke and later popularized by other publications. Well known types of masks Different mask patterns exhibit different image resolutions, sensitivities and background-noise rejection, and computational simplicities and ambiguities, aside from their relative ease of construction. 
FZP = Fresnel Zone Plate ORA = Optimized RAndom pattern URA = Uniformly Redundant Array HURA = Hexagonal Uniformly Redundant Array MURA = Modified Uniformly Redundant Array Levin Coded-aperture space telescopes Spacelab-2 X-ray Telescope XRT (1985) Rossi X-ray Timing Explorer (RXTE) – ASM (1995–2012) BeppoSAX – Wide Field Camera (1996–2002) INTEGRAL – IBIS and SPI (2002–present) Swift – BAT (2004–present) Ultra-Fast Flash Observatory Pathfinder mission (launched 2016) and UFFO-100 (its next generation) Astrosat – CZTI (Launched in 2015) SVOM – ECLAIRs (Launched in June 2024) In addition, the SAS-3 and RHESSI missions detect radiation based on a combination of masks and rotational modulation See also Computational photography Deconvolution Pinhole camera Rotational modulation collimator Tomographic reconstruction X-ray computed tomography References External links Coded Aperture Imaging in High-Energy Astronomy List of CA instruments – 6 flying. March 2006 In the news: Sky-high system to aid soldiers. August 2008 Radiation Observational astronomy Physical computing
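A toy sketch of the correlation-style decoding described in the rationale above may help make the idea concrete (this is an illustrative model only, not the algorithm of any particular instrument: the random open/closed mask, the two point-source positions, and the periodic shadowing geometry are all assumptions made for the example):

import numpy as np

rng = np.random.default_rng(0)
n = 64
mask = (rng.random((n, n)) < 0.5).astype(float)          # hypothetical random open/closed aperture
scene = np.zeros((n, n))
scene[20, 30] = 1.0                                       # assumed bright point source
scene[40, 12] = 0.5                                       # assumed fainter point source

# Each source casts a shifted copy of the mask onto the detector (modelled as circular convolution).
detector = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(mask)))

# Decode by cross-correlating with a zero-mean version of the mask; for a random pattern this
# correlation is sharply peaked at zero lag, so each source appears as a peak in the estimate.
decode = 2.0 * mask - 1.0
recon = np.real(np.fft.ifft2(np.fft.fft2(detector) * np.conj(np.fft.fft2(decode))))

print(np.unravel_index(np.argmax(recon), recon.shape))    # expected: (20, 30), the brighter source

Mask families such as the URAs and MURAs listed above are designed so that this correlation step has exactly flat sidelobes, rather than the residual noise left by a random pattern.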
Coded aperture
[ "Physics", "Chemistry", "Astronomy", "Engineering" ]
797
[ "Transport phenomena", "Physical phenomena", "Robotics engineering", "Observational astronomy", "Waves", "Physical computing", "Radiation", "Astronomical sub-disciplines" ]
9,396,423
https://en.wikipedia.org/wiki/Regina%20Tyshkevich
Regina Iosifovna Tyshkevich (; 20 October 1929 – 17 November 2019) was a Belarusian mathematician, an expert in graph theory, Doctor of Physical and Mathematical Sciences, professor of the Belarusian State University. Her main scientific interests included Intersection graphs, degree sequences, and the reconstruction conjecture. She was also known for an independent introduction and investigation of the class of split graphs and for her contributions to line graphs of hypergraphs. In 1998, she was awarded the Belarus State Prize for her book Lectures in Graph Theory. Of note is her textbook An Introduction into Mathematics, written together with her two colleagues. In October 2009 an international conference "Discrete Mathematics, Algebra, and their Applications", sponsored by the Central European Initiative, was held in Minsk, Belarus in honor of her 80th anniversary. Regina Tyshkevich was a direct descendant of the Tyszkiewicz magnate family, therefore her colleagues sometimes called her "the countess of graph theory", which is a pun in the Russian language: the Russian word "граф" (graf) is a homonym for two words meaning "count" and "graph". Books and selected publications (With ) "Commutative Matrices", 1968, Academic Press Russian original: "Perestanovochnye matritsy" 1966, 2nd edition: 2003, (With Emilichev, V. A., Melnikov, O. I., Sarvanov, V. I.) "Lectures on Graph Theory", B. I. Wissenschaftsverlag, 1994 Russian original: "Lektsii po teorii grafov", 1990 (With O. Melnikov and V. Sarvanov, etc.) "Exercises in Graph Theory", Kluwer Academic Publishers, 1998, "Linear Algebra and Analytical Geometry (Линейная алгебра и аналитическая геометрия) Кононов С.Г., Тышкевич Р.И., Янчевский В.И. "Введение в математику" ("An Introduction into Mathematics") 3 volumes, Minsk, Belarusian State University, 2003 R.I. Tyshkevich. Decomposition of graphical sequences and unigraphs // Discrete Math., 2000, Vol. 220, p. 201 - 238. Yury Metelsky, Regina Tyshkevich: Line Graphs of Helly Hypergraphs. SIAM Journal on Discrete Mathematics 16(3): 438-448 (2003) State awards 1979 (Почетная грамота Министерства высшего и среднего образования БССР «За многолетнюю плодотворную научно-методическую деятельность»); 1985: Veteran of Labor Medal (Медаль «Ветеран труда»); 1992: (почетное звание «Заслуженный работник народного образования Республики Беларусь») 1998: Belarus State Prize (государственная премия Республики Беларусь); 2009: References Belarusian women mathematicians 20th-century Belarusian mathematicians 21st-century Belarusian mathematicians Graph theorists 2019 deaths 1929 births Scientists from Minsk Belarusian State University alumni 20th-century women mathematicians 21st-century women mathematicians
Regina Tyshkevich
[ "Mathematics" ]
856
[ "Mathematical relations", "Graph theory", "Graph theorists" ]
9,396,720
https://en.wikipedia.org/wiki/Objective-collapse%20theory
Objective-collapse theories, also known as spontaneous collapse models or dynamical reduction models, are proposed solutions to the measurement problem in quantum mechanics. As with other interpretations of quantum mechanics, they are possible explanations of why and how quantum measurements always give definite outcomes, not a superposition of them as predicted by the Schrödinger equation, and more generally how the classical world emerges from quantum theory. The fundamental idea is that the unitary evolution of the wave function describing the state of a quantum system is approximate. It works well for microscopic systems, but progressively loses its validity when the mass/complexity of the system increases. In collapse theories, the Schrödinger equation is supplemented with additional nonlinear and stochastic terms (spontaneous collapses) which localize the wave function in space. The resulting dynamics is such that for microscopic isolated systems, the new terms have a negligible effect; therefore, the usual quantum properties are recovered, apart from very tiny deviations. Such deviations can potentially be detected in dedicated experiments, and efforts are increasing worldwide towards testing them. An inbuilt amplification mechanism makes sure that for macroscopic systems consisting of many particles, the collapse becomes stronger than the quantum dynamics. Then their wave function is always well-localized in space, so well-localized that it behaves, for all practical purposes, like a point moving in space according to Newton's laws. In this sense, collapse models provide a unified description of microscopic and macroscopic systems, avoiding the conceptual problems associated with measurements in quantum theory. The most well-known examples of such theories are: Ghirardi–Rimini–Weber (GRW) model Continuous spontaneous localization (CSL) model Diósi–Penrose (DP) model Collapse theories stand in opposition to many-worlds interpretation theories, in that they hold that a process of wave function collapse curtails the branching of the wave function and removes unobserved behaviour. History of collapse theories Philip Pearle's 1976 paper pioneered the use of quantum nonlinear stochastic equations to model the collapse of the wave function in a dynamical way; this formalism was later used for the CSL model. However, these models lacked the character of “universality” of the dynamics, i.e. its applicability to an arbitrary physical system (at least at the non-relativistic level), a necessary condition for any model to become a viable option. The next major advance came in 1986, when Ghirardi, Rimini and Weber published the paper with the meaningful title “Unified dynamics for microscopic and macroscopic systems”, where they presented what is now known as the GRW model, after the initials of the authors. The model has two guiding principles: The position basis states are used in the dynamic state reduction (the "preferred basis" is position); The modification must reduce superpositions for macroscopic objects without altering the microscopic predictions. In 1990 the efforts of the GRW group on one side, and of P. Pearle on the other side, were brought together in formulating the Continuous Spontaneous Localization (CSL) model, where the Schrödinger dynamics and a randomly fluctuating classical field produce collapse into spatially localized eigenstates. In the late 1980s and 1990s, Diósi and Penrose and others independently formulated the idea that the wave function collapse is related to gravity.
The dynamical equation is structurally similar to the CSL equation. Most popular models Three models are most widely discussed in the literature: Ghirardi–Rimini–Weber (GRW) model: It is assumed that each constituent of a physical system independently undergoes spontaneous collapses. The collapses are random in time, distributed according to a Poisson distribution; they are random in space and are more likely to occur where the wave function is larger. In between collapses, the wave function evolves according to the Schrödinger equation. For composite systems, the collapse on each constituent causes the collapse of the center of mass wave functions. Continuous spontaneous localization (CSL) model: The Schrödinger equation is supplemented with a nonlinear and stochastic diffusion process driven by a suitably chosen universal noise coupled to the mass-density of the system, which counteracts the quantum spread of the wave function. As for the GRW model, the larger the system, the stronger the collapse, thus explaining the quantum-to-classical transition as a progressive breakdown of quantum linearity, when the system's mass increases. The CSL model is formulated in terms of identical particles. Diósi–Penrose (DP) model: Diósi and Penrose formulated the idea that gravity is responsible for the collapse of the wave function. Penrose argued that, in a quantum gravity scenario where a spatial superposition creates the superposition of two different spacetime curvatures, gravity does not tolerate such superpositions and spontaneously collapses them. He also provided a phenomenological formula for the collapse time. Independently and prior to Penrose, Diósi presented a dynamical model that collapses the wave function with the same time scale suggested by Penrose. The Quantum Mechanics with Universal Position Localization (QMUPL) model should also be mentioned; an extension of the GRW model for identical particles formulated by Tumulka, which proves several important mathematical results regarding the collapse equations. In all models listed so far, the noise responsible for the collapse is Markovian (memoryless): either a Poisson process in the discrete GRW model, or a white noise in the continuous models. The models can be generalized to include arbitrary (colored) noises, possibly with a frequency cutoff: the CSL model has been extended to its colored version (cCSL), as well as the QMUPL model (cQMUPL). In these new models the collapse properties remain basically unaltered, but specific physical predictions can change significantly. In all collapse models, the noise effect must prevent quantum mechanical linearity and unitarity and thus cannot be described within quantum-mechanics. Because the noise responsible for the collapse induces Brownian motion on each constituent of a physical system, energy is not conserved. The kinetic energy increases at a constant rate. Such a feature can be modified, without altering the collapse properties, by including appropriate dissipative effects in the dynamics. This is achieved for the GRW, CSL, QMUPL and DP models, obtaining their dissipative counterparts (dGRW, dCSL, dQMUPL, DP). The QMUPL model has been further generalized to include both colored noise as well as dissipative effects (dcQMUPL model). Tests of collapse models Collapse models modify the Schrödinger equation; therefore, they make predictions that differ from standard quantum mechanical predictions. 
Although the deviations are difficult to detect, there is a growing number of experiments searching for spontaneous collapse effects. They can be classified in two groups: Interferometric experiments. They are refined versions of the double-slit experiment, showing the wave nature of matter (and light). The modern versions are meant to increase the mass of the system, the time of flight, and/or the delocalization distance in order to create ever larger superpositions. The most prominent experiments of this kind are with atoms, molecules and phonons. Non-interferometric experiments. They are based on the fact that the collapse noise, besides collapsing the wave function, also induces a diffusion on top of particles’ motion, which acts always, also when the wave function is already localized. Experiments of this kind involve cold atoms, opto-mechanical systems, gravitational wave detectors, underground experiments. Problems and criticisms to collapse theories Violation of the principle of the conservation of energy According to collapse theories, energy is not conserved, also for isolated particles. More precisely, in the GRW, CSL and DP models the kinetic energy increases at a constant rate, which is small but non-zero. This is often presented as an unavoidable consequence of Heisenberg's uncertainty principle: the collapse in position causes a larger uncertainty in momentum. This explanation is wrong; in collapse theories the collapse in position also determines a localization in momentum, driving the wave function to an almost minimum uncertainty state both in position and in momentum, compatibly with Heisenberg's principle. The reason the energy increases is that the collapse noise diffuses the particle, thus accelerating it. This is the same situation as in classical Brownian motion, and similarly this increase can be stopped by adding dissipative effects. Dissipative versions of the QMUPL, GRW, CSL and DP models exist, where the collapse properties are left unaltered with respect to the original models, while the energy thermalizes to a finite value (therefore it can even decrease, depending on its initial value). Still, in the dissipative model the energy is not strictly conserved. A resolution to this situation might come by considering also the noise a dynamical variable with its own energy, which is exchanged with the quantum system in such a way that the energy of the total system and noise together is conserved. Relativistic collapse models One of the biggest challenges in collapse theories is to make them compatible with relativistic requirements. The GRW, CSL and DP models are not. The biggest difficulty is how to combine the nonlocal character of the collapse, which is necessary in order to make it compatible with the experimentally verified violation of Bell inequalities, with the relativistic principle of locality. Models exist that attempt to generalize in a relativistic sense the GRW and CSL models, but their status as relativistic theories is still unclear. The formulation of a proper Lorentz-covariant theory of continuous objective collapse is still a matter of research. Tails problem In all collapse theories, the wave function is never fully contained within one (small) region of space, because the Schrödinger term of the dynamics will always spread it outside. Therefore, wave functions always contain tails stretching out to infinity, although their “weight” is smaller in larger systems. 
Critics of collapse theories argue that it is not clear how to interpret these tails. Two distinct problems have been discussed in the literature. The first is the “bare” tails problem: it is not clear how to interpret these tails because they amount to the system never being really fully localized in space. A special case of this problem is known as the “counting anomaly”. Supporters of collapse theories mostly dismiss this criticism as a misunderstanding of the theory, as in the context of dynamical collapse theories, the absolute square of the wave function is interpreted as an actual matter density. In this case, the tails merely represent an immeasurably small amount of smeared-out matter. This, however, leads into the second problem, the so-called “structured tails problem”: it is not clear how to interpret these tails because even though their “amount of matter” is small, that matter is structured like a perfectly legitimate world. Thus, after the box is opened and Schrödinger's cat has collapsed to the “alive” state, there still exists a tail of the wavefunction containing a “low matter” entity structured like a dead cat. Collapse theorists have offered a range of possible solutions to the structured tails problem, but it remains an open problem. See also Interpretation of quantum mechanics Many-worlds interpretation Philosophy of information Philosophy of physics Quantum information Quantum entanglement Coherence (physics) Quantum decoherence EPR paradox Quantum Zeno effect Measurement problem Measurement in quantum mechanics Wave function collapse Quantum gravity References External links Giancarlo Ghirardi, Collapse Theories, Stanford Encyclopedia of Philosophy (First published Thu Mar 7, 2002; substantive revision Fri May 15, 2020) Interpretations of quantum mechanics Quantum measurement
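To make the GRW-type localization discussed in this article concrete, the following is a toy one-dimensional sketch of a single spontaneous hit (the grid, packet widths and localization width are illustrative choices rather than the physical GRW parameters, and drawing the collapse centre directly from the probability density is a simplification of the exact rule, which weights the centre by the norm of the localized state):

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

# superposition of two well-separated wave packets
psi = np.exp(-(x - 4.0) ** 2) + np.exp(-(x + 4.0) ** 2)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

# draw a collapse centre, weighted by the position probability density
prob = np.abs(psi) ** 2 * dx
a = rng.choice(x, p=prob / prob.sum())

rc = 1.0                                                  # localization width, illustrative only
psi = np.exp(-(x - a) ** 2 / (2.0 * rc ** 2)) * psi       # apply the Gaussian localization operator
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

weight_left = np.sum(np.abs(psi[x < 0.0]) ** 2) * dx
print(round(float(a), 2), round(float(weight_left), 3))   # essentially all weight ends up in one packet

Repeating such hits at a rate that grows with the number of constituents is, schematically, the amplification mechanism that keeps macroscopic superpositions from persisting.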
Objective-collapse theory
[ "Physics" ]
2,434
[ "Interpretations of quantum mechanics", "Quantum measurement", "Quantum mechanics" ]
9,396,764
https://en.wikipedia.org/wiki/KC%20Space%20Pirates
On May 20, 2021, KC Space Pirates won one of four $50,000 second-place prizes in phase 1 of NASA's Watts on the Moon Challenge. KC Space Pirates competed in the 2006, 2007, and 2009 Space Elevator Games. Prize money was from the NASA Centennial Challenges Power Beaming Challenge. The competition was put on by the Spaceward Foundation. The goal of the competition was to encourage universities and groups to research and create designs for beaming power to distant objects. For the competition Spaceward used the Space Elevator concept to make it more challenging and to help show how beamed power could work. NASA put up a top prize of up to $2,000,000 ($900,000 for the 2 meters/second category and $1,100,000 for the 5 meters/second category) for the 2009 competition. The 2 meters/second prize was won during the 2009 competition. The 5 m/s challenge remained open for the 2010 competition, which was canceled. The competition was in the form of a race, 1 km (3,281 ft) straight up. The climbers were unmanned, had a maximum allowed weight of 25 kg (55 lbs), and could use no fuel or batteries to climb—they had to be powered only by beamed energy. So far, the top designs have been reflected sunlight and laser. The KC Space Pirates used sunlight reflected off a large array of mirrors concentrated onto a highly efficient array of solar cells in 2006 and 2007. They switched to using an infrared laser for the 2009 competition. The KC Space Pirates were the only 2009 team to have a fully automated laser tracking system. They did well in each competition but fell short of the money. References External links Space Elevator Competition web site KC Space Pirates web site 12 minute broadcast segment on the competition by PBS Nova NASA page on the Space Elevator CNN article Space organizations Organizations based in the United States Space elevator
KC Space Pirates
[ "Astronomy", "Technology" ]
382
[ "Exploratory engineering", "Astronomical hypotheses", "Astronomy organizations", "Space organizations", "Space elevator" ]
9,397,319
https://en.wikipedia.org/wiki/Gauss%E2%80%93Lucas%20theorem
In complex analysis, a branch of mathematics, the Gauss–Lucas theorem gives a geometric relation between the roots of a polynomial P and the roots of its derivative P′. The set of roots of a real or complex polynomial is a set of points in the complex plane. The theorem states that the roots of P′ all lie within the convex hull of the roots of P, that is, the smallest convex polygon containing the roots of P. When P has a single root then this convex hull is a single point and when the roots lie on a line then the convex hull is a segment of this line. The Gauss–Lucas theorem, named after Carl Friedrich Gauss and Félix Lucas, is similar in spirit to Rolle's theorem. Formal statement If P is a (nonconstant) polynomial with complex coefficients, all zeros of P′ belong to the convex hull of the set of zeros of P. Special cases It is easy to see that if P is a second degree polynomial, the zero of P′ is the average of the roots of P. In that case, the convex hull is the line segment with the two roots as endpoints and it is clear that the average of the roots is the middle point of the segment. For a third degree complex polynomial P (cubic function) with three distinct zeros, Marden's theorem states that the zeros of P′ are the foci of the Steiner inellipse which is the unique ellipse tangent to the midpoints of the triangle formed by the zeros of P. For a fourth degree complex polynomial P (quartic function) with four distinct zeros forming a concave quadrilateral, one of the zeros of P lies within the convex hull of the other three; all three zeros of P′ lie in two of the three triangles formed by the interior zero of P and two other zeros of P. In addition, if a polynomial of degree n with real coefficients has n distinct real zeros we see, using Rolle's theorem, that the zeros of the derivative polynomial P′ are in the interval between the smallest and largest zero, which is the convex hull of the set of roots. The convex hull of the roots of the polynomial P(z) = a_n z^n + a_{n−1} z^{n−1} + … + a_0 particularly includes the point −a_{n−1}/(n a_n), the centroid of its roots. Proof See also Marden's theorem Bôcher's theorem Sendov's conjecture Routh–Hurwitz theorem Hurwitz's theorem (complex analysis) Descartes' rule of signs Rouché's theorem Properties of polynomial roots Cauchy interlacing theorem Notes References Craig Smorynski: MVT: A Most Valuable Theorem. Springer, 2017, ISBN 978-3-319-52956-1, pp. 411–414 External links Lucas–Gauss Theorem by Bruce Torrence, the Wolfram Demonstrations Project. Gauss-Lucas theorem - interactive illustration Convex analysis Articles containing proofs Theorems in complex analysis Theorems about polynomials
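A quick numerical check of the statement above is easy to run (a sketch: the random degree-5 polynomial is arbitrary, and the convex-hull membership test via a Delaunay triangulation may misclassify a critical point landing exactly on the hull boundary):

import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
coeffs = rng.normal(size=6) + 1j * rng.normal(size=6)     # random degree-5 polynomial, highest power first
dcoeffs = coeffs[:-1] * np.arange(5, 0, -1)               # coefficients of the derivative
roots = np.roots(coeffs)
droots = np.roots(dcoeffs)

hull = Delaunay(np.column_stack([roots.real, roots.imag]))
inside = hull.find_simplex(np.column_stack([droots.real, droots.imag])) >= 0
print(inside.all())                                       # expected: True, per the Gauss-Lucas theorem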
Gauss–Lucas theorem
[ "Mathematics" ]
567
[ "Theorems in mathematical analysis", "Theorems in algebra", "Theorems in complex analysis", "Theorems about polynomials", "Articles containing proofs" ]
9,399,072
https://en.wikipedia.org/wiki/Vector%20measure
In mathematics, a vector measure is a function defined on a family of sets and taking vector values satisfying certain properties. It is a generalization of the concept of finite measure, which takes nonnegative real values only. Definitions and first consequences Given a field of sets and a Banach space a finitely additive vector measure (or measure, for short) is a function such that for any two disjoint sets and in one has A vector measure is called countably additive if for any sequence of disjoint sets in such that their union is in it holds that with the series on the right-hand side convergent in the norm of the Banach space It can be proved that an additive vector measure is countably additive if and only if for any sequence as above one has where is the norm on Countably additive vector measures defined on sigma-algebras are more general than finite measures, finite signed measures, and complex measures, which are countably additive functions taking values respectively on the real interval the set of real numbers, and the set of complex numbers. Examples Consider the field of sets made up of the interval together with the family of all Lebesgue measurable sets contained in this interval. For any such set define where is the indicator function of Depending on where is declared to take values, two different outcomes are observed. viewed as a function from to the -space is a vector measure which is not countably-additive. viewed as a function from to the -space is a countably-additive vector measure. Both of these statements follow quite easily from the criterion () stated above. The variation of a vector measure Given a vector measure the variation of is defined as where the supremum is taken over all the partitions of into a finite number of disjoint sets, for all in Here, is the norm on The variation of is a finitely additive function taking values in It holds that for any in If is finite, the measure is said to be of bounded variation. One can prove that if is a vector measure of bounded variation, then is countably additive if and only if is countably additive. Lyapunov's theorem In the theory of vector measures, Lyapunov's theorem states that the range of a (non-atomic) finite-dimensional vector measure is closed and convex. In fact, the range of a non-atomic vector measure is a zonoid (the closed and convex set that is the limit of a convergent sequence of zonotopes). It is used in economics, in ("bang–bang") control theory, and in statistical theory. Lyapunov's theorem has been proved by using the Shapley–Folkman lemma, which has been viewed as a discrete analogue of Lyapunov's theorem. See also References Bibliography Kluvánek, I., Knowles, G, Vector Measures and Control Systems, North-Holland Mathematics Studies 20, Amsterdam, 1976. Control theory Functional analysis Measures (measure theory)
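A short computation behind the two bullet points above (written on the assumption, suggested by the wording, that the example measure sends a measurable set A to its indicator function χ_A, and that the two target spaces are an L^∞-type space and an L^p-type space with p finite): for disjoint sets A_1, A_2, … with union A,

\[
\Bigl\| \mu(A) - \sum_{k=1}^{n}\mu(A_k) \Bigr\|
  = \bigl\| \chi_{\bigcup_{k>n} A_k} \bigr\|
  = \begin{cases}
      1 & \text{in the sup norm, whenever the tail } \bigcup_{k>n}A_k \text{ has positive measure,}\\[2pt]
      \lambda\Bigl(\bigcup_{k>n} A_k\Bigr)^{1/p} \to 0 & \text{in the } L^p \text{ norm for finite } p,
    \end{cases}
\]

so the countable-additivity criterion quoted above fails in the first case and holds in the second.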
Vector measure
[ "Physics", "Mathematics" ]
618
[ "Functions and mappings", "Functional analysis", "Physical quantities", "Measures (measure theory)", "Applied mathematics", "Control theory", "Quantity", "Mathematical objects", "Size", "Mathematical relations", "Dynamical systems" ]
9,399,605
https://en.wikipedia.org/wiki/Grammatik
Grammatik was the first grammar checking program developed for home computer systems. Aspen Software of Albuquerque, NM, released the earliest version of this diction and style checker for personal computers. It was first released no later than 1981, and was inspired by the Writer's Workbench. Grammatik was first available for the Radio Shack TRS-80, and soon had versions for CP/M and the IBM PC. Reference Software International of San Francisco, California, acquired Grammatik in 1985. Development of Grammatik continued, and it became an actual grammar checker that could detect writing errors beyond simple style checking. Subsequent versions were released for the MS-DOS, Windows, Macintosh and Unix platforms. Grammatik was ultimately acquired by WordPerfect Corporation and is integrated into the WordPerfect word processor. References Natural language processing 1981 software
Grammatik
[ "Technology" ]
176
[ "Natural language processing", "Computing stubs", "Natural language and computing", "Software stubs" ]
9,400,139
https://en.wikipedia.org/wiki/Tesseractic%20honeycomb
In four-dimensional euclidean geometry, the tesseractic honeycomb is one of the three regular space-filling tessellations (or honeycombs), represented by Schläfli symbol {4,3,3,4}, and consisting of a packing of tesseracts (4-hypercubes). Its vertex figure is a 16-cell. Two tesseracts meet at each cubic cell, four meet at each square face, eight meet on each edge, and sixteen meet at each vertex. It is an analog of the square tiling, {4,4}, of the plane and the cubic honeycomb, {4,3,4}, of 3-space. These are all part of the hypercubic honeycomb family of tessellations of the form {4,3,...,3,4}. Tessellations in this family are self-dual. Coordinates Vertices of this honeycomb can be positioned in 4-space in all integer coordinates (i,j,k,l). Sphere packing Like all regular hypercubic honeycombs, the tesseractic honeycomb corresponds to a sphere packing of edge-length-diameter spheres centered on each vertex, or (dually) inscribed in each cell instead. In the hypercubic honeycomb of 4 dimensions, vertex-centered 3-spheres and cell-inscribed 3-spheres will both fit at once, forming the unique regular body-centered cubic lattice of equal-sized spheres (in any number of dimensions). Since the tesseract is radially equilateral, there is exactly enough space in the hole between the 16 vertex-centered 3-spheres for another edge-length-diameter 3-sphere. (This 4-dimensional body centered cubic lattice is actually the union of two tesseractic honeycombs, in dual positions.) This is the same densest known regular 3-sphere packing, with kissing number 24, that is also seen in the other two regular tessellations of 4-space, the 16-cell honeycomb and the 24-cell-honeycomb. Each tesseract-inscribed 3-sphere kisses a surrounding shell of 24 3-spheres, 16 at the vertices of the tesseract and 8 inscribed in the adjacent tesseracts. These 24 kissing points are the vertices of a 24-cell of radius (and edge length) 1/2. Constructions There are many different Wythoff constructions of this honeycomb. The most symmetric form is regular, with Schläfli symbol {4,3,3,4}. Another form has two alternating tesseract facets (like a checkerboard) with Schläfli symbol {4,3,31,1}. The lowest symmetry Wythoff construction has 16 types of facets around each vertex and a prismatic product Schläfli symbol {∞}4. One can be made by stericating another. Related polytopes and tessellations The 24-cell honeycomb is similar, but in addition to the vertices at integers (i,j,k,l), it has vertices at half integers (i+1/2,j+1/2,k+1/2,l+1/2) of odd integers only. It is a half-filled body centered cubic (a checkerboard in which the red 4-cubes have a central vertex but the black 4-cubes do not). The tesseract can make a regular tessellation of the 4-sphere, with three tesseracts per face, with Schläfli symbol {4,3,3,3}, called an order-3 tesseractic honeycomb. It is topologically equivalent to the regular polytope penteract in 5-space. The tesseract can make a regular tessellation of 4-dimensional hyperbolic space, with 5 tesseracts around each face, with Schläfli symbol {4,3,3,5}, called an order-5 tesseractic honeycomb. The Ammann–Beenker tiling is an aperiodic tiling in 2 dimensions obtained by cut-and-project on the tesseractic honeycomb along an eightfold rotational axis of symmetry. Birectified tesseractic honeycomb A birectified tesseractic honeycomb, , contains all rectified 16-cell (24-cell) facets and is the Voronoi tessellation of the D4* lattice. 
Facets can be identically colored from a doubled ×2, [[4,3,3,4]] symmetry, alternately colored from , [4,3,3,4] symmetry, three colors from , [4,3,31,1] symmetry, and 4 colors from , [31,1,1,1] symmetry. See also Regular and uniform honeycombs in 4-space: 16-cell honeycomb 24-cell honeycomb 5-cell honeycomb Truncated 5-cell honeycomb Omnitruncated 5-cell honeycomb References Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, p. 296, Table II: Regular honeycombs Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs) - Model 1 x∞o x∞o x∞o x∞o, x∞x x∞o x∞o x∞o, x∞x x∞x x∞o x∞o, x∞x x∞x x∞x x∞o,x∞x x∞x x∞x x∞x, x∞o x∞o x4o4o, x∞o x∞o o4x4o, x∞x x∞o x4o4o, x∞x x∞o o4x4o, x∞o x∞o x4o4x, x∞x x∞x x4o4o, x∞x x∞x o4x4o, x∞x x∞o x4o4x, x∞x x∞x x4o4x, x4o4x x4o4x, x4o4x o4x4o, x4o4x x4o4o, o4x4o o4x4o, x4o4o o4x4o, x4o4o x4o4o, x∞x o3o3o *d4x, x∞o o3o3o *d4x, x∞x x4o3o4x, x∞o x4o3o4x, x∞x x4o3o4o, x∞o x4o3o4o, o3o3o *b3o4x, x4o3o3o4x, x4o3o3o4o - test - O1 Honeycombs (geometry) 5-polytopes Regular tessellations
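A small brute-force check of the kissing number 24 quoted in the sphere-packing section above (a sketch: the body-centred lattice is taken, with unit edge length, as the union of the integer points and the points shifted by (1/2, 1/2, 1/2, 1/2)):

from itertools import product

def dist2(p):
    return sum(c * c for c in p)

integer_pts = list(product(range(-2, 3), repeat=4))
shifted_pts = [tuple(c + 0.5 for c in p) for p in product(range(-3, 3), repeat=4)]
nonzero = [p for p in integer_pts + shifted_pts if dist2(p) > 1e-9]

dmin = min(dist2(p) for p in nonzero)
closest = [p for p in nonzero if abs(dist2(p) - dmin) < 1e-9]
print(dmin, len(closest))   # expected: minimal squared distance 1 and 24 vectors

The 24 minimal vectors split exactly as described above: the 16 half-integer vectors pointing to the vertices of a tesseract and the 8 unit vectors pointing to the spheres inscribed in the adjacent tesseracts.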
Tesseractic honeycomb
[ "Physics", "Chemistry", "Materials_science" ]
1,577
[ "Regular tessellations", "Honeycombs (geometry)", "Tessellation", "Crystallography", "Symmetry" ]
9,401,077
https://en.wikipedia.org/wiki/Color%20LaserWriter
The Color LaserWriter was a line of PostScript four-color laser printers manufactured by Apple Computer, Inc. in the mid-1990s. These printers were compatible with PCs and Apple's own Macintosh line of computers; they could also connect to large networks through a 10BaseT Ethernet port. Two models were released. Color LaserWriter 12/600 PS A PostScript printer, the Color LaserWriter 12/600 PS color laser printer was intended for small businesses and consumers with high printing requirements. The Windows-compatible driver was of interest due to its ability to generate PostScript files (.ps) for later printing. This printer was released in 1995, one year before its replacement with the Color LaserWriter 12/660 PS, which had the same specifications as the 12/600 PS, but was sold at a lower price. Color LaserWriter 12/660 PS The Color LaserWriter 12/660 PS was a color laser printer introduced by Apple in October 1996. The printer became a workhorse used in Kinko's copy stores across the United States. The printer's weight, size, speed of printing, and high cost of purchase, operation, and maintenance were its chief drawbacks. References External links Driver for Windows 95 12/600 Technical Specifications on Apple.com 12/660 Technical Specifications on Apple.com Laser printers Apple Inc. printers Computer-related introductions in 1995 Discontinued Apple Inc. products Products and services discontinued in 1996
Color LaserWriter
[ "Technology" ]
297
[ "Computing stubs", "Computer hardware stubs" ]
9,401,560
https://en.wikipedia.org/wiki/Weighted%20matroid
In combinatorics, a branch of mathematics, a weighted matroid is a matroid endowed with a function that assigns a weight to each element. Formally, let M = (E, I) be a matroid, where E is the set of elements and I is the family of independent sets. A weighted matroid has a weight function w that assigns a strictly positive weight to each element of E. We extend the function to subsets of E by summation; w(A) is the sum of w(x) over x in A. Finding a maximum-weight independent set A basic problem regarding weighted matroids is to find an independent set with a maximum total weight. This problem can be solved using the following simple greedy algorithm: Initialize the set A to an empty set. Note that, by definition of a matroid, A is an independent set. For each element x in E\A, check whether A ∪ {x} is still an independent set. If there are no such elements, then stop, as A cannot be extended anymore. If there is at least one such element, then choose the one with maximum weight, and add it to A. This algorithm does not need to know anything about the matroid structure; it just needs an independence oracle for the matroid - a subroutine for testing whether a set is independent. Jack Edmonds proved that this simple algorithm indeed finds an independent set with maximum weight. Denote the set found by the algorithm by e1,...,ek. By the matroid properties, it is clear that k=rank(M), otherwise the set could be extended. Assume by contradiction that there is another set with a higher weight. Without loss of generality, it is possible to assume that this set has rank(M) elements too; denote it by f1,...,fk. Order these items such that w(f1) ≥ ... ≥ w(fk). Let j be the first index for which w(fj) > w(ej). Apply the augmentation property to the sets {f1,...,fj} and {e1,...,ej-1}; we conclude that there must be some i ≤ j such that fi could be added to {e1,...,ej-1} while keeping it independent. But w(fi) ≥ w(fj) > w(ej), so fi should have been chosen in step j instead of ej - a contradiction. Example: spanning forest algorithms As a simple example, say we wish to find the maximum spanning forest of a graph. That is, given a graph and a weight for each edge, find a forest containing every vertex and maximizing the total weight of the edges in the tree. This problem arises in some clustering applications. It can be solved by Kruskal's algorithm, which can be seen as the special case of the above greedy algorithm applied to a graphical matroid. If we look at the definition of the forest matroid, we see that the maximum spanning forest is simply the independent set with largest total weight — such a set must span the graph, for otherwise we can add edges without creating cycles. But how do we find it? Finding a basis There is a simple algorithm for finding a basis: Initially let B be the empty set. For each x in E, if B ∪ {x} is independent, then set B to B ∪ {x}. The result is clearly an independent set. It is a maximal independent set because if B1 ∪ {x} is not independent for some subset B1 of B, then B ∪ {x} is not independent either (the contrapositive follows from the hereditary property). Thus if we pass up an element, we'll never have an opportunity to use it later. We will generalize this algorithm to solve a harder problem. Extension to optimal An independent set of largest total weight is called an optimal set. Optimal sets are always bases, because if an element can be added, it should be; this only increases the total weight. As it turns out, there is a trivial greedy algorithm for computing an optimal set of a weighted matroid.
It works as follows: Initially let be the empty set. For each in , taken in (monotonically) decreasing order by weight if is independent, then set to . This algorithm finds a basis, since it is a special case of the above algorithm. It always chooses the element of largest weight that it can while preserving independence (thus the term "greedy"). This always produces an optimal set: suppose that it produces and that . Now for any with , consider open sets and . Since is smaller than , there is some element of which can be put into with the result still being independent. However is an element of maximal weight that can be added to to maintain independence. Thus is of no smaller weight than some element of , and hence is of at least a large a weight as . As this is true for all , is weightier than . Complexity analysis The easiest way to traverse the members of in the desired order is to sort them. This requires time using a comparison sorting algorithm. We also need to test for each whether is independent; assuming independence tests require time, the total time for the algorithm is . If we want to find a minimum spanning tree instead, we simply "invert" the weight function by subtracting it from a large constant. More specifically, let , where exceeds the total weight over all graph edges. Many more optimization problems about all sorts of matroids and weight functions can be solved in this trivial way, although in many cases more efficient algorithms can be found that exploit more specialized properties. Matroid requirement Note also that if we take a set of "independent" sets which is a down-set but not a matroid, then the greedy algorithm will not always work. For then there are independent sets and with , but such that for no is independent. Pick an and such that . Weight the elements of in the range to , the elements of in the range to , the elements of in the range to , and the rest in the range to . The greedy algorithm will select the elements of , and then cannot pick any elements of . Therefore, the independent set it constructs will be of weight at most , which is smaller than the weight of . Characterization This optimization algorithm may be used to characterize matroids: if a family F of sets, closed under taking subsets, has the property that, no matter how the sets are weighted, the greedy algorithm finds a maximum-weight set in the family, then F must be the family of independent sets of a matroid. Generalizations The notion of matroid has been generalized to allow for other types of sets on which a greedy algorithm gives optimal solutions; see greedoid and matroid embedding for more information. Korte and Lovász would generalize these ideas to objects called greedoids, which allow even larger classes of problems to be solved by greedy algorithms. References Matroid theory
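A compact sketch of the greedy procedure described above, instantiated for the graphical matroid of the spanning-forest example (the independence oracle is "adding this edge creates no cycle", implemented with a union-find structure; the vertex count and the weighted edge list are made-up sample data):

def max_weight_forest(n, edges):
    """edges is a list of (weight, u, v) triples; returns a maximum-weight forest."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    chosen = []
    for w, u, v in sorted(edges, reverse=True):   # monotonically decreasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                              # independence oracle for the graphical matroid
            parent[ru] = rv
            chosen.append((w, u, v))
    return chosen

print(max_weight_forest(4, [(5, 0, 1), (4, 1, 2), (3, 0, 2), (2, 2, 3)]))
# expected: [(5, 0, 1), (4, 1, 2), (2, 2, 3)]; the weight-3 edge closes a cycle and is skipped

The sort dominates the running time, matching the complexity analysis above, while each union-find independence test is close to constant time in practice.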
Weighted matroid
[ "Mathematics" ]
1,400
[ "Matroid theory", "Combinatorics" ]
9,401,706
https://en.wikipedia.org/wiki/Quaquaversal%20tiling
The quaquaversal tiling is a nonperiodic tiling of Euclidean 3-space introduced by John Conway and Charles Radin. It is analogous to the pinwheel tiling in 2 dimensions having tile orientations that are dense in SO(3). The basic solid tiles are 30-60-90 triangular prisms arranged in a pattern such that some copies are rotated by π/3, and some are rotated by π/2 in a perpendicular direction. They construct the group G(p,q) given by a rotation of 2π/p and a perpendicular rotation by 2π/q; the orientations in the quaquaversal tiling are given by G(6,4). G(p,1) are cyclic groups, G(p,2) are dihedral groups, G(4,4) is the octahedral group, and all other G(p,q) are infinite and dense in SO(3); if p and q are odd and ≥3, then G(p,q) is a free group. Radin and Lorenzo Sadun constructed similar honeycombs based on a tiling related to the Penrose tilings and the pinwheel tiling; the former has orientations in G(10,4), and the latter has orientations in G(p,4) with the irrational rotation . They show that G(p,4) is dense in SO(3) for the aforementioned value of p, and whenever cos(2π/p) is transcendental. References External links A picture of a quaquaversal tiling Charles Radin page at the University of Texas 3-honeycombs Aperiodic tilings
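A small numerical sketch consistent with the density statement above: composing the two generating rotations of G(6,4) keeps producing new orientations as the word length grows (the axes are taken here as z and x, matrices are rounded for de-duplication, and this only illustrates, rather than proves, that the group is infinite):

import numpy as np

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

a = rot_z(2.0 * np.pi / 6.0)          # rotation by 2*pi/6
b = rot_x(2.0 * np.pi / 4.0)          # perpendicular rotation by 2*pi/4
seen = {tuple(np.round(np.eye(3), 6).ravel())}
frontier = [np.eye(3)]
for length in range(1, 9):            # all words of length up to 8 in the two generators
    new_frontier = []
    for m in frontier:
        for g in (a, b):
            p = g @ m
            key = tuple(np.round(p, 6).ravel())
            if key not in seen:
                seen.add(key)
                new_frontier.append(p)
    frontier = new_frontier
    print(length, len(seen))          # the count keeps growing; for G(4,4) it would stop at 24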
Quaquaversal tiling
[ "Physics", "Mathematics" ]
357
[ "Tessellation", "Geometry", "Geometry stubs", "Aperiodic tilings", "Symmetry" ]
9,402,045
https://en.wikipedia.org/wiki/Pinwheel%20tiling
In geometry, pinwheel tilings are non-periodic tilings defined by Charles Radin and based on a construction due to John Conway. They are the first known non-periodic tilings to each have the property that their tiles appear in infinitely many orientations. Definition Let T be the right triangle with side lengths 1, 2, and √5. Conway noticed that T can be divided into five isometric copies of its image by the dilation of factor 1/√5. The pinwheel tiling is obtained by repeatedly inflating T by a factor of √5 and then subdividing each tile in this manner. Conversely, the tiles of the pinwheel tiling can be grouped into groups of five that form a larger pinwheel tiling. In this tiling, isometric copies of T appear in infinitely many orientations because the small angle of T, arctan(1/2), is not a rational multiple of π. Radin found a collection of five prototiles, each of which is a marking of T, so that the matching rules on these tiles and their reflections enforce the pinwheel tiling. All of the vertices have rational coordinates, and tile orientations are uniformly distributed around the circle. Generalizations Radin and Conway proposed a three-dimensional analogue which was dubbed the quaquaversal tiling. There are other variants and generalizations of the original idea. One gets a fractal by iteratively dividing T into five isometric copies, following the Conway construction, and discarding the middle triangle (ad infinitum). This "pinwheel fractal" has Hausdorff dimension ln 4/ln √5 ≈ 1.7227. Use in architecture Federation Square, a building complex in Melbourne, Australia, features the pinwheel tiling. In the project, the tiling pattern is used to create the structural sub-framing for the facades, allowing the facades to be fabricated off-site in a factory and later erected. The pinwheel tiling system was based on a single triangular element, composed of zinc, perforated zinc, sandstone or glass (known as a tile), which was joined to 4 other similar tiles on an aluminum frame, to form a "panel". Five panels were affixed to a galvanized steel frame, forming a "mega-panel", which was then hoisted onto support frames for the facade. The rotational positioning of the tiles gives the facades a more random, uncertain compositional quality, even though the process of its construction is based on pre-fabrication and repetition. The same pinwheel tiling system is used in the development of the structural frame and glazing for the "Atrium" at Federation Square, although in this instance, the pin-wheel grid has been made "3-dimensional" to form a portal frame structure. References External links Pinwheel at the Tilings Encyclopedia Dynamic Pinwheel made in GeoGebra Discrete geometry Aperiodic tilings
Pinwheel tiling
[ "Physics", "Mathematics" ]
575
[ "Discrete mathematics", "Tessellation", "Discrete geometry", "Aperiodic tilings", "Symmetry" ]
9,402,074
https://en.wikipedia.org/wiki/Konarka%20Technologies
Konarka Technologies, Inc. was a solar energy company based in Lowell, Massachusetts, founded in 2001 as a spin-off from University of Massachusetts Lowell. In late May 2012, the company filed for Chapter 7 bankruptcy protection and laid off its approximately 80-member staff. The company’s operations have ceased and a trustee is tasked with liquidating the company’s assets for the benefit of creditors. The company was developing two types of organic solar cells: polymer-fullerene solar cells and dye-sensitized solar cells (DSSCs). Konarka cells were lightweight, flexible photovoltaics that could be printed as film or coated onto surfaces. The company had hoped its manufacturing process, which utilized organic chemistry, would result in higher efficiency at lower cost than traditional crystalline silicon fabricated solar cells. Konarka was also researching infrared light activated photovoltaics which would enable night-time power generation. The company's co-founders included the Nobel laureate Alan J. Heeger. The company was named after Konark Sun Temple in India. Funding As of 2006, Konarka had received $60 million in funding from venture capital firms including 3i, Draper Fisher Jurvetson, New Enterprise Associates, Good Energies and Chevron Technology Ventures. Konarka also received nearly $10 million in combined grants from the Pentagon and European governments, and in 2007 was approved for further funding through the Solar America Initiative, a component of the White House's Advanced Energy Initiative. The company raised a further $45 million in private capital financing in October 2007 in a financing round led by Mackenzie Financial Corporation. The company also received $1.5 million from a state of Massachusetts alternative energy trust fund in 2003 during Governor Mitt Romney's term and another $5 million during Governor Deval Patrick's term. At the time of its bankruptcy filing in 2012, its funding history was summarized: "Konarka raised more than $170 million in private capital investments and $20 million in government grants, according to its website. Under the Bush administration, Konarka received a $1.6 million Army contract in 2005 and a $3.6 million award from the Department of Energy in 2007. Under the Obama administration, Konarka was one of 183 clean-energy companies that got a total of $2.3 billion in tax credits as part of the 2009 stimulus." Bankruptcy and political fallout The bankruptcy filing occurred days after a visit by Republican presidential candidate Romney to Solyndra, another bankrupted solar energy firm which also received over $500 million of funding from the United States government. The fact that Konarka also received a loan in 2003 during Romney's gubernatorial term was noted by Democrats and inserted into the campaign-politics debate. Technology Dye-sensitized solar cells Konarka in 2002 was granted licensee rights to dye-sensitized solar cell technology from the Swiss Federal Institute of Technology (EPFL). This solar-cell design included two main components: a special light-sensitive dye that released electrons when exposed to sunlight and titanium dioxide nanoparticles which escorted electrons away from the dyes and to an external electronic circuit, generating electricity Polymer-fullerene solar cells Konarka built photovoltaic products using next generation nanomaterials that were coated on rolls of plastic (Power Plastic). Konarka's nanomaterials absorbed sunlight and indoor light and converted them into electrical energy. 
These products could be easily integrated as the power generation component for a variety of applications and could be produced and used virtually anywhere. Konarka was one of several companies developing nanotechnology-based solar cells, others include Nanosolar and Nanosys. These materials, as well as positive and negative electrodes made from metallic inks, could be inexpensively spread over a sheet of plastic using printing and coating machines to make solar cells, using roll-to-roll manufacturing, similar to how newspaper is printed on large rolls of paper. Konarka’s manufacturing process enabled production to scale easily and results in significantly reduced costs over previous generations of solar cells. . Richard Hess, Konarka's president and CEO, said that the company's ability to use existing equipment allowed it to scale up production at one-tenth the cost compared with conventional technologies. Unlike conventional solar cells, which were packaged in modules made of glass and aluminum and were rigid and heavy, Konarka's solar cells were lightweight and flexible. This made them attractive for portable applications. What was more, they could be designed in a range of colors, which made them easier to incorporate attractively into certain applications. One of the first products to use Konarka's cells was to be briefcases that could recharge laptops. Another company was testing Konarka's solar cells for use in umbrellas for outdoor tables at restaurants. They could also be used in tents and awnings. Because the solar cells could be made transparent, Konarka was also developing a version of its solar cells that could be laminated to windows to generate electricity and serve as a window tinting. However, the technology had several drawbacks. The solar cells only lasted a couple of years, unlike the decades that conventional solar cells last and the solar cells were relatively inefficient. Conventional solar cells can easily convert 15 percent of the energy in sunlight into electricity; Konarka's cells only converted up to 8.3%, the highest certified efficiency that the National Renewable Energy Laboratory had recorded for organic photovoltaic cells by that time. Flexible batteries Konarka owned the rights to an organic-based solar-recharging flexible battery technology. However, as of April 2007, Konarka had no plans to produce these commercially itself. Flexible batteries have thin-solar cells which are held inside a flexible gas barrier to prevent them from degrading when exposed to air. At just two grams in weight and just one millimetre thick, the flexible battery is small enough to be used in low-wattage gadgets - including flat smart cards and mobile phones. The potential for this type of product was seen as large, given that there was a growing demand for portable self-rechargeable power supplies. Production Dye-sensitized solar cells Konarka Technologies and Renewable Capital announced the licensing and joint development of Konarka's dye-sensitized solar cell technology for large-scale production, scaling to several hundred megawatts. Polymer-fullerene solar cells Konarka opened a commercial-scale factory, with the capacity to produce enough polymer-fullerene solar cells every year to generate one gigawatt of electricity, the equivalent of a large nuclear reactor. The company planned to gradually ramp up production at its new factory, reaching full capacity in two to three years. 
Patents Konarka was issued a number of United States patents relating to its photovoltaics research: 6706963, Jan 25, 2002, "Photovoltaic cell interconnection" 6858158, Jan 24, 2003, "Low temperature interconnection of nanoparticles" 6900382, Jan 24, 2003, "Gel electrolytes for dye sensitized solar cells" 6913713, Jan 24, 2003, "Photovoltaic fibers" 6924427, Jan 24, 2003, "Wire interconnects for fabricating interconnected photovoltaic cells 6933436, Apr 27, 2001, "Photovoltaic cell" 6949400, Jan 24, 2003, "Ultrasonic slitting of photovoltaic cells and modules" 7022910, Mar 24, 2003, "Photovoltaic cells utilizing mesh electrodes" 7071139, Dec 20, 2002, "Oxynitride compounds, methods of preparation, and uses thereof" 7186911, Jan 24, 2003, "Methods of scoring for fabricating interconnected photovoltaic cells" See also Fullerene Low cost solar cell Oerlikon Solar Organic electronics References External links Official website of Konarka Technologies, Inc. Konarka Claims 1GW in Organic PV Production Solar energy companies of the United States Dye-sensitized solar cells Organic solar cells Thin-film cell manufacturers Defunct technology companies based in Massachusetts Companies based in Lowell, Massachusetts Energy companies established in 2001 Renewable resource companies established in 2001 2001 establishments in Massachusetts 3i Group companies American companies established in 2001
Konarka Technologies
[ "Chemistry", "Materials_science" ]
1,731
[ "Organic solar cells", "Polymer chemistry" ]
9,402,112
https://en.wikipedia.org/wiki/Ali%20Moustafa%20Mosharafa
Ali Moustafa Attia Mosharrafa (; 11 July 1898 – 16 January 1950) was an Egyptian theoretical physicist. He was a Professor of Applied Mathematics at Cairo University and also served as the University's first dean. He contributed to the development of Quantum theory as well as the Theory of relativity. Biography Early life Mosharafa obtained his primary certificate in 1910, ranking second nationwide. He obtained his Baccalaureate at the age of 16, becoming the youngest student at that time to be awarded such a certificate and, again, ranking second. He preferred to enroll in the Teachers' College rather than the faculties of Medicine or Engineering due to his deep interest in mathematics. He graduated in 1917. Due to his excellence in mathematics, the Egyptian Ministry of Education sent him to England where, in 1920, he obtained a BSc (Honors) from the University of Nottingham. The Egyptian University consented to grant Mosharafa another scholarship to complete his doctoral thesis. During his stay in London, he was published many times in prominent science magazines. He obtained a PhD in 1923 from King's College London in the shortest possible time permissible according to the regulations there. In 1924, Mosharafa was awarded the degree of Doctor of Science, the first Egyptian and 11th scientist in the world to obtain such a degree. Academic career He became a teacher in the Higher Teachers' college in Cairo University, he became an associate professor of mathematics in the Faculty of Science because he was under the age of 30, the minimum age required for fulfilling the post of a professor. In 1926 his promotion to professor was raised in the Parliament, then chaired by Saad Zaghloul. The Parliament lauded his qualifications and merits which surpassed those of the English dean of the faculty, and he was promoted to professor. He was the first Egyptian professor of applied mathematics in the Faculty of Science. He became dean of the faculty in 1936, at the age of 38. He remained in office as a dean of the Faculty of Science until he died in 1950. Scientific achievements During the 1920s and 1930s, he studied Maxwell's equations, the theory of special relativity, and had correspondence with Albert Einstein. Mosharafa published 25 original papers in distinguished scientific journals about quantum theory, the theory of relativity, and the relation between radiation and matter. He published 12 scientific books about relativity and mathematics. His books about the theory of relativity were translated into English, French, German and Polish. He had also translated 10 books of astronomy and mathematics into Arabic. Mosharafa was interested in the history of science, with a focus on the contributions of Arab scientists in the Middle Ages. With his student M. Morsi Ahmad, he published al-Khwārizmī's book The Compendious Book on Calculation by Completion and Balancing (Kitab al-Jabr wa-l-Muqabala). Mosharafa was also interested in the relation between music and mathematics. Social and political views Mosharafa was the first to call for social reform and development based on scientific research. Mosharafa wanted to promote public scientific awareness and wrote several articles and books on scientific topics intended to be accessible to a wider audience. He also encouraged the translation of scientific literature into Arabic, and contributed writing the Arab scientific encyclopedia and books on the scientific heritage of the Arabs. 
He was against the use of atomic energy in war and warned against the exploitation of science as a means of destruction. Honors He was given the title "Pasha" by King Farouq, but he declined the title claiming that no title is worthier than a sciences PhD. A laboratory and an auditorium are named after him in the Faculty of Science, Cairo University, Egypt. An annual award carrying his name has been initiated by his family to be given to the cleverest student in mathematics. Egypt & Europe Magazine published a cartoon of him standing between Russia and the USA holding in his hands rolled paper, and both superpowers awaiting him to unfold the secrets of science. ???  In 1947 the Institute for Advanced Study invited Mosharafa to join as a visiting professor at Princeton University, but the king disapproved. The Newton-Mosharafa Fund was named after him and Sir Isaac Newton Books and papers He wrote 26 significant papers including theoretical explanations of natural phenomena. He wrote 15 books on relativity and mathematics. Among which is a book on the theory of relativity translated into English, French, German and Polish, and reprinted in the United States. He produced around 15 scientific books about relativity, mathematics, and the atom. Selected books We and Science Science and Life Atom and Atomic Bomb Scientific Claims Engineering in Pharaohs Times Selected papers On the Stark Effect for Strong Electric Fields (Phil. Mag. Vol. 44, p. 371) - (1922) On the Quantum Theory of Complex Zeeman Effect (Phil. Mag. Vol. 46, p. 177) - (1923) The Stark Effect for Strong Fields (Phil. Mag. Vol. 46, p. 751) - (1923) On the Quantum Theory of the Simple Zeeman Effect (Roy. Soc. Proc. A. Vol. 102, p. 529) - (1923) Half Integral Quantum numbers in the Theory of Stark Effect and a general Hypothesis of Fractional Quantum numbers (Roy. Soc. Proc. Vol. 126, p. 641) - (1930) On The Quantum Dynamics of Degenerate Systems (Roy. Soc. Proc. A. Vol. 107, p. 237) - (1925) The Quantum Explanation of the Zeeman Triplet (Nature Vol. 119, p. 96, No. 2907, July 18) - (1925) The Motion of a Lorentz Electron as a wave Phenomenon (Nature Vol. 124, p. 726, No. 3132, Nov. 9) - (1929) Wave Mechanics and the Dual Aspect of Matter and Radiation (Roy. Soc. Proc. A. Vol. 126, p. 35) - (1930) Material and Radiational Waves (Roy. Soc. Proc. A. Vol. 131, p. 335) - (1931) Can Matter and Radiation be regarded as two aspects of the same world-Condition (Verhandlungen der Internationalen Kongress, Zurich, Switzerland) - (1932) Some Views on the Relation between Matter and Radiation (Bulletin de l'institute d'Egypte, T. XVI, p. 161) - (1939) The Maxwellian Equations and a Variable Speed of Light (Proceedings of the Mathematical and Physical Society of Egypt, No. 1, Vol. 1) - (1937) The Principle of Indeterminacy and the Structure of the World Lines (Proceedings of the Mathematical and Physical Society of Egypt, Vol. 2, No. 1) - (1944) Wave Surfaces associated with World Lines (Proceedings of the Mathematical and Physical Society of Egypt, Vol. 2, No. 2) - (1943) Conical Transformations (Proceedings of the Mathematical and Physical Society of Egypt, No. 2, Vol. 3) - (1944) On a Positive Definite Metric in the Special Theory of Relativity (Proceedings of the Mathematical and Physical Society of Egypt, Vol. 2, No. 4) - (1944) On the Metric of Space and the Equations of Motion of a Charged Particle (Proceedings of the Mathematical and Physical Society of Egypt, Vol. 3, No. 
1) - (1945) The Metric of Space and Mass Deficiency (Philosophical Magazine) - (1948) References References of his Papers 1898 births 1950 deaths Alumni of King's College London Academic staff of Cairo University Egyptian scientists Egyptian physicists Relativity theorists Alumni of the University of Nottingham Egyptian pashas People from Damietta 20th-century Egyptian mathematicians
Ali Moustafa Mosharafa
[ "Physics" ]
1,579
[ "Relativity theorists", "Theory of relativity" ]
4,146,026
https://en.wikipedia.org/wiki/Inayatullah%20Khan%20Mashriqi
Inayatullah Khan Mashriqi (25 August 1888 – 27 August 1963), also known by the honorary title Allama Mashriqi, was a British Indian, and later, Pakistani mathematician, logician, political theorist, Islamic scholar and the founder of the Khaksar movement. Around 1930, he founded the Khaksar Movement, aiming both to revive Islam among Muslims and to advance the condition of the masses irrespective of any faith, sect, or religion. Early years Background Inayatullah Khan Mashriqi was born on 25 August 1888 to a Punjabi Muslim Sulheria Rajput family from Amritsar. Mashriqi's father Khan Ata Muhammad Khan was an educated man of wealth who owned a bi-weekly publication, Vakil, in Amritsar. His forefathers had held high government positions during the Mughal and Sikh Empires. Because of his father's position he came into contact with a range of well-known luminaries, including Jamāl al-Dīn al-Afghānī, Sir Syed Ahmad Khan, and Shibli Nomani, as a young man. Education Mashriqi was educated initially at home before attending schools in Amritsar. From an early age, he showed a passion for mathematics. After completing his Bachelor of Arts degree with First Class honours at Forman Christian College in Lahore, he completed his master's degree in mathematics from the University of the Punjab, taking a First Class for the first time in the history of the university. In 1907 he moved to England, where he matriculated at Christ's College, Cambridge, to read for the mathematics tripos. He was awarded a college foundation scholarship in May 1908. In June 1909 he was awarded first class honours in Mathematics Part I, being placed joint 27th out of 31 on the list of wranglers. For the next two years, he read for the oriental languages tripos in parallel to the natural sciences tripos, gaining first class honours in the former, and third class in the latter. After three years' residence at Cambridge he had qualified for a Bachelor of Arts degree, which he took in 1910. In 1912 he completed a fourth tripos in mechanical sciences, and was placed in the second class. At the time he was believed to be the first man of any nationality to achieve honours in four different Triposes, and was lauded in national newspapers across the UK. The next year, Mashriqi was conferred with a DPhil in mathematics, receiving a gold medal at his doctoral graduation ceremony. He left Cambridge and returned to India in December 1912. During his stay in Cambridge his religious and scientific convictions were inspired by the works and concepts of Professor Sir James Jeans. Early career On his return to India, Mashriqi was offered the premiership of Alwar, a princely state, by the Maharaja. He declined owing to his interest in education. At the age of 25, and only a few months after arriving in India, he was appointed vice principal of Islamia College, Peshawar, by Chief Commissioner Sir George Roos-Keppel, and was made principal of the same college two years later. In October 1917 he was appointed under secretary to the Government of India in the Education Department in succession to Sir George Anderson. He became headmaster of the High School, Peshawar on 21 October 1919. In 1920, the British government offered Mashriqi the ambassadorship of Afghanistan, and a year later he was offered a knighthood. However, he refused both awards. In 1930, he was passed over for a promotion in the government service, following which he went on medical leave. In 1932 he resigned, taking his pension, and settled down in Ichhra, Lahore. 
Nobel nomination In 1924, at the age of 36, Mashriqi completed the first volume of his book, Tazkirah. It is a commentary on the Qur'an in the light of science. It was nominated for the Nobel Prize in 1925, subject to the condition that it be translated into one of the European languages. However, Mashriqi declined the suggestion of translation. Political life Mashriqi's philosophy A theistic evolutionist who accepted some of Darwin's ideas while criticizing others, he declared that the science of religions was essentially the science of collective evolution of mankind; all prophets came to unite mankind, not to disrupt it; the basic law of all faiths is the law of unification and consolidation of the entire humanity. According to Markus Daeschel, the philosophical ruminations of Mashriqi offer an opportunity to re-evaluate the meaning of colonial modernity and the notion of post-colonial nation-building in modern times. Mashriqi is often portrayed as a controversial figure, a religious activist, a revolutionary, and an anarchist; while at the same time he is described as a visionary, a reformer, a leader, and a scientist-philosopher who was born ahead of his time. After Mashriqi resigned from government service, he laid the foundation of the Khaksar Tehrik (also known as the Khaksar Movement) around 1930. Mashriqi and his Khaksar Tehrik opposed the partition of India. He stated that the "last remedy under the present circumstances is that one and all rise against this conspiracy as one man. Let there be a common Hindu-Muslim Revolution. ... it is time that we should sacrifice…in order to uphold Truth, Honour and Justice." Mashriqi opposed the partition of India because he felt that if Muslims and Hindus had largely lived peacefully together in India for centuries, they could also do so in a free and united India. Mashriqi saw the two-nation theory as a plot of the British to maintain control of the region more easily if India was divided into two countries that were pitted against one another. He reasoned that a division of India along religious lines would breed fundamentalism and extremism on both sides of the border. Mashriqi thought that "Muslim majority areas were already under Muslim rule, so if any Muslims wanted to move to these areas, they were free to do so without having to divide the country." To him, separatist leaders "were power hungry and misleading Muslims in order to bolster their own power by serving the British agenda." Imprisonments and allegations On 20 July 1943, an assassination attempt was made on Muhammad Ali Jinnah by Rafiq Sabir, who was assumed to be a Khaksar worker. The attack was deplored by Mashriqi, who denied any involvement. Later, Justice Blagden of the Bombay High Court, in his ruling on 4 November 1943, dismissed any association between the attack and the Khaksars. In Pakistan, Mashriqi was imprisoned at least four times: in 1958 for alleged complicity in the murder of republican leader Khan Abdul Jabbar Khan (popularly known as Dr. Khan Sahib), and in 1962 on suspicion of attempting to overthrow President Ayub's government. However, none of the charges were proven, and he was acquitted in each case. In 1957, Mashriqi allegedly led 300,000 of his followers to the borders of Kashmir, intending, it is said, to launch a fight for its liberation. However, the Pakistan government persuaded the group to withdraw and the organisation was later disbanded. Death Mashriqi died at the Mayo Hospital in Lahore on 27 August 1963 following a short battle with cancer. 
His funeral prayers were held at the Badshahi Mosque and he was buried in Ichhra. He was survived by his wife and seven children. Mashriqi's works Mashriqi's prominent works include: Armughan-i-Hakeem, a poetical work Dahulbab, a poetical work Isha’arat, the Manifesto of the Khaksar movement Khitab-e-Misr (The Egypt Address), based on his 1925 speech in Cairo as a delegate to the Motmar-e-Khilafat Maulvi Ka Ghalat Mazhab Tazkirah Volume I, 1924, discussions on conflicts between religions, between religion and science, and the need to resolve these conflicts Tazkirah Volume II. Posthumously published in 1964 Tazkirah Volume III. Fellowships Mashriqi's fellowships included: Fellow of the Royal Society of Arts, 1923 Fellow of the Geographical Society (F.G.S), Paris Fellow of Society of Arts (F.S.A), Paris Member of the Board at Delhi University President of the Mathematical Society, Islamia College, Peshawar Member of the International Congress of Orientalists (Leiden), 1930 President of the All World's Faiths Conference, 1937 Edited works God, Man, and Universe: As Conceived by a Mathematician (works of Inayatullah Khan el-Mashriqi), Akhuwat Publications, Rawalpindi, 1980 (edited by Syed Shabbir Hussain). See also All India Azad Muslim Conference Teilhard de Chardin Karl Marx References 1888 births 1963 deaths 20th-century Indian philosophers Alumni of Christ's College, Cambridge Indian anti-poverty advocates Forman Christian College alumni Indian expatriates in the United Kingdom Indian independence activists from Punjab Province (British India) Indian logicians Indian people of World War II Indian prisoners and detainees Indian revolutionaries Academic staff of Islamia College University 20th-century Muslim scholars of Islam Muslim reformers Pakistani logicians Pakistani mathematicians Pakistani philosophers Pakistani politicians Pakistani Sunni Muslims Scholars from Amritsar People from Lahore University of the Punjab alumni World War II political leaders Theistic evolutionists
Inayatullah Khan Mashriqi
[ "Biology" ]
1,941
[ "Non-Darwinian evolution", "Theistic evolutionists", "Biology theories" ]
4,146,425
https://en.wikipedia.org/wiki/Rayleigh%20%28unit%29
The rayleigh is a unit of photon flux, used to measure faint light emitted in the sky, such as airglow and auroras. It was first proposed in 1956 by Donald M. Hunten, Franklin E. Roach, and Joseph W. Chamberlain. It is named for Robert Strutt, 4th Baron Rayleigh (1875–1947). Its symbol is R (also used for the röntgen, an unrelated unit). SI prefixes are used with the rayleigh. One rayleigh (1 R) is defined as a column emission rate of 10¹⁰ photons per square metre per column per second, the column being taken along the line of sight. The rayleigh is a unit of an apparent emission rate, without allowances being made for scattering or absorption. The night sky has an intensity of about 250 R, while auroras can reach values of 1000 kR. The relationship between the photon radiance L (with unit photons per square metre per second per steradian) and the intensity I (in rayleighs) is L = (10¹⁰/4π) I. 1 rayleigh can thus be expressed in SI units as either: 10¹⁰ photons s⁻¹ (m² column)⁻¹, or 1/4π × 10¹⁰ photons s⁻¹ m⁻² sr⁻¹. References Units of luminous flux
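As a small illustration of the conversion just given (this snippet is not part of the original article; the function name is illustrative only), the following Python code turns a brightness in rayleighs into a photon radiance in SI units, using the 250 R night-sky figure quoted in the text.

```python
import math

def rayleigh_to_photon_radiance(brightness_rayleigh: float) -> float:
    """Convert a brightness in rayleighs to a photon radiance in
    photons per second per square metre per steradian,
    using L = (10**10 / (4 * pi)) * I."""
    return brightness_rayleigh * 1e10 / (4 * math.pi)

# Night-sky airglow of roughly 250 R, as quoted in the article text:
print(f"{rayleigh_to_photon_radiance(250):.2e} photons s^-1 m^-2 sr^-1")
# prints roughly 1.99e+11
```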
Rayleigh (unit)
[ "Mathematics" ]
256
[ "Quantity", "Units of luminous flux", "Units of measurement" ]
4,146,576
https://en.wikipedia.org/wiki/Nonpoint%20source%20pollution
Nonpoint source (NPS) pollution refers to diffuse contamination (or pollution) of water or air that does not originate from a single discrete source. This type of pollution is often the cumulative effect of small amounts of contaminants gathered from a large area. It stands in contrast to point source pollution, which results from a single source. Nonpoint source pollution generally results from land runoff, precipitation, atmospheric deposition, drainage, seepage, or hydrological modification (rainfall and snowmelt) where tracing pollution back to a single source is difficult. Nonpoint source water pollution affects a water body from sources such as polluted runoff from agricultural areas draining into a river, or wind-borne debris blowing out to sea. Nonpoint source air pollution affects air quality, from sources such as smokestacks or car tailpipes. Although these pollutants have originated from a point source, the long-range transport ability and multiple sources of the pollutant make it a nonpoint source of pollution; if the discharges were to occur to a body of water or into the atmosphere at a single location, the pollution would be single-point. Nonpoint source water pollution may derive from many different sources with no specific solutions or changes to rectify the problem, making it difficult to regulate. Nonpoint source water pollution is difficult to control because it comes from the everyday activities of many different people, such as lawn fertilization, applying pesticides, road construction or building construction. Controlling nonpoint source pollution requires improving the management of urban and suburban areas, agricultural operations, forestry operations and marinas. Types of nonpoint source water pollution include sediment, nutrients, toxic contaminants and chemicals, and pathogens. Principal sources of nonpoint source water pollution include: urban and suburban areas, agricultural operations, atmospheric inputs, highway runoff, forestry and mining operations, marinas and boating activities. In urban areas, contaminated storm water washed off parking lots, roads and highways, called urban runoff, is usually included under the category of non-point sources (it can become a point source if it is channeled into storm drain systems and discharged through pipes to local surface waters). In agriculture, the leaching out of nitrogen compounds from fertilized agricultural lands is a form of nonpoint source water pollution. Nutrient runoff in storm water from "sheet flow" over an agricultural field or a forest is also an example of non-point source pollution. Principal types (for water pollution) Sediment Sediment (loose soil) includes silt (fine particles) and suspended solids (larger particles). Sediment may enter surface waters from eroding stream banks, and from surface runoff due to improper plant cover on urban and rural land. Sediment creates turbidity (cloudiness) in water bodies, reducing the amount of light reaching lower depths, which can inhibit growth of submerged aquatic plants and consequently affect species which are dependent on them, such as fish and shellfish. With an increased sediment load into a body of water, the oxygen can also be depleted or reduced to a level that is harmful to the species living in that area. High turbidity levels also inhibit drinking water purification systems. Sediments are also transported into the water column due to waves and wind. 
When sediments are eroded at a continuous rate, they will stay in the water column and the turbidity level will increase. Sedimentation is a process by which sediment is transported to a body of water. The sediment will then be deposited into the water system or stay in the water column. When there are high rates of sedimentation, flooding can occur due to a build-up of too much sediment. When flooding occurs, waterfront properties can be damaged further by high amounts of sediment being present. Sediment can also be discharged from multiple different sources. Sources include construction sites (although these are point sources, which can be managed with erosion controls and sediment controls), agricultural fields, stream banks, and highly disturbed areas. Nutrients Nutrients here mainly refer to inorganic matter from runoff, landfills, livestock operations and crop lands. The two primary nutrients of concern are phosphorus and nitrogen. Phosphorus is a nutrient that occurs in many forms that are bioavailable. It is notoriously over-abundant in human sewage sludge. It is a main ingredient in many fertilizers used for agriculture as well as on residential and commercial properties, and may become a limiting nutrient in freshwater systems and some estuaries. Phosphorus is most often transported to water bodies via soil erosion because many forms of phosphorus tend to be adsorbed onto soil particles. Excess amounts of phosphorus in aquatic systems (particularly freshwater lakes, reservoirs, and ponds) lead to the proliferation of microscopic algae called phytoplankton. The increase of organic matter supply due to the excessive growth of the phytoplankton is called eutrophication. A common symptom of eutrophication is algae blooms that can produce unsightly surface scums, shade out beneficial types of plants, produce taste-and-odor-causing compounds, and poison the water due to toxins produced by the algae. These toxins are a particular problem in systems used for drinking water because some toxins can cause human illness and removal of the toxins is difficult and expensive. Bacterial decomposition of algal blooms consumes dissolved oxygen in the water, generating hypoxia with detrimental consequences for fish and aquatic invertebrates. Nitrogen is the other key ingredient in fertilizers, and it generally becomes a pollutant in saltwater or brackish estuarine systems where nitrogen is a limiting nutrient. Similar to phosphorus in fresh-waters, excess amounts of bioavailable nitrogen in marine systems lead to eutrophication and algae blooms. Hypoxia is an increasingly common result of eutrophication in marine systems and can impact large areas of estuaries, bays, and near shore coastal waters. Each summer, hypoxic conditions form in bottom waters where the Mississippi River enters the Gulf of Mexico. During recent summers, the areal extent of this "dead zone" has been comparable to the area of New Jersey, with major detrimental consequences for fisheries in the region. Nitrogen is most often transported by water as nitrate (NO₃⁻). The nitrogen is usually added to a watershed as organic-N or ammonia (NH₃), so nitrogen stays attached to the soil until oxidation converts it into nitrate. Since the nitrate is generally already incorporated into the soil, the water traveling through the soil (i.e., interflow and tile drainage) is the most likely to transport it, rather than surface runoff. Toxic contaminants and chemicals Toxic chemicals mainly include organic compounds and inorganic compounds. 
Inorganic compounds, including heavy metals like lead, mercury, zinc, and cadmium, are resistant to breakdown. These contaminants can come from a variety of sources including human sewage sludge, mining operations, vehicle emissions, fossil fuel combustion, urban runoff, industrial operations and landfills. Other toxic contaminants include organic compounds such as polychlorinated biphenyls (PCBs) and polycyclic aromatic hydrocarbons (PAHs), fire retardants, and many agrochemicals like DDT, other pesticides, and fertilizers. These compounds can have severe effects on ecosystems and water bodies and can threaten the health of both humans and aquatic species while being resistant to environmental breakdown, thus allowing them to persist in the environment. These compounds can also be present in the air and water environments, causing damage to the environment and risking harmful exposure to living species. These toxic chemicals could come from croplands, nurseries, orchards, building sites, gardens, lawns and landfills. Acids and salts are mainly inorganic pollutants from irrigated lands, mining operations, urban runoff, industrial sites and landfills. Other inorganic toxic contaminants can come from foundries and other factory plants, sewage, mining, and coal-burning power stations. Pathogens Pathogens are bacteria and viruses that can be found in water and cause diseases in humans. Typically, pathogens cause disease when they are present in public drinking water supplies. Pathogens found in contaminated runoff may include: Cryptosporidium parvum Giardia lamblia Salmonella Norovirus and other viruses Parasitic worms (helminths). Coliform bacteria and fecal matter may also be detected in runoff. These bacteria are a commonly used indicator of water pollution, but not an actual cause of disease. Pathogens may contaminate runoff due to poorly managed livestock operations, faulty septic systems, improper handling of pet waste, the over-application of human sewage sludge, contaminated storm sewers, and sanitary sewer overflows. Principal sources (for water pollution) Urban and suburban areas Urban and suburban areas are a major source of nonpoint source pollution because of the amount of runoff produced by the large share of paved surfaces. Paved surfaces, such as asphalt and concrete, are impervious to water. Any water that is in contact with these surfaces will run off and be absorbed by the surrounding environment. These surfaces make it easier for stormwater to carry pollutants into the surrounding soil. Construction sites tend to have disturbed soil that is easily eroded by precipitation like rain, snow, and hail. Additionally, discarded debris on the site can be carried away by runoff waters and enter the aquatic environment. Contaminated stormwater washed off parking lots, roads and highways, and lawns (often containing fertilizers and pesticides) is called urban runoff. This runoff is often classified as a type of NPS pollution. Some people may also consider it a point source because many times it is channeled into municipal storm drain systems and discharged through pipes to nearby surface waters. However, not all urban runoff flows through storm drain systems before entering water bodies. Some may flow directly into water bodies, especially in developing and suburban areas. 
Also, unlike other types of point sources, such as industrial discharges, sewage treatment plants and other operations, pollution in urban runoff cannot be attributed to one activity or even group of activities. Therefore, because it is not caused by an easily identified and regulated activity, urban runoff pollution sources are also often treated as true nonpoint sources as municipalities work to abate them. An example of this is in Michigan, through an NPS (nonpoint source) program. This program helps stakeholders create watershed management plans to combat nonpoint source pollution. Typically, in suburban areas, chemicals are used for lawn care. These chemicals can end up in runoff and enter the surrounding environment via storm drains in the city. Since the water in storm drains is not treated before flowing into surrounding water bodies, the chemicals enter the water directly. Other significant sources of runoff include habitat modification and silviculture (forestry). Agricultural operations Nutrients (nitrogen and phosphorus) are typically applied to farmland as commercial fertilizer, animal manure, or spraying of municipal or industrial wastewater (effluent) or sludge. Nutrients may also enter runoff from crop residues, irrigation water, wildlife, and atmospheric deposition. Nutrient pollution such as nitrates can harm aquatic environments by inducing algal blooms and eutrophication, which in turn degrade water quality by lowering oxygen levels. Other agrochemicals such as pesticides and fungicides can enter environments from agricultural lands through runoff and deposition as well. Pesticides such as DDT or atrazine can travel through waterways or stay suspended in air and be carried by wind in a process known as "spray drift". Sediment (loose soil) washed off fields is a form of agricultural pollution. Farms with large livestock and poultry operations, such as factory farms, are often point source dischargers. These facilities are called "concentrated animal feeding operations" or "feedlots" in the US and are subject to increasing government regulation. Agricultural operations account for a large percentage of all nonpoint source pollution in the United States. When large tracts of land are plowed to grow crops, it exposes and loosens soil that was once buried. This makes the exposed soil more vulnerable to erosion during rainstorms. It also can increase the amount of fertilizer and pesticides carried into nearby bodies of water. 
Existing networks that use protocols sufficient to quantify these concentrations and loads do not measure many of the constituents of interest, and these networks are too sparse to provide good deposition estimates at a local scale. Highway runoff Highway runoff accounts for a small but widespread percentage of all nonpoint source pollution. Harned (1988) estimated that runoff loads were composed of atmospheric fallout (9%), vehicle deposition (25%) and highway maintenance materials (67%); he also estimated that about 9 percent of these loads were reentrained in the atmosphere. Forestry and mining operations Forestry and mining operations can have significant inputs to nonpoint source pollution. Forestry Forestry operations reduce the number of trees in a given area, thus reducing the oxygen levels in that area as well. This action, coupled with the heavy machinery (harvesters, etc.) rolling over the soil, increases the risk of erosion. Mining Active mining operations are considered point sources; however, runoff from abandoned mining operations contributes to nonpoint source pollution. In strip mining operations, the top of the mountain is removed to expose the desired ore. If this area is not properly reclaimed once the mining has finished, soil erosion can occur. Additionally, there can be chemical reactions with the air and newly exposed rock to create acidic runoff. Water that seeps out of abandoned subsurface mines can also be highly acidic. This can seep into the nearest body of water and change the pH in the aquatic environment. Marinas and boating activities Chemicals used for boat maintenance, like paint, solvents, and oils, find their way into water through runoff. Additionally, spilling or leaking fuels directly into the water from boats contributes to nonpoint source pollution. Nutrient and bacteria levels are increased by poorly maintained sanitary waste receptacles on boats and pump-out stations. Control (for water pollution) Urban and suburban areas To control nonpoint source pollution, many different approaches can be undertaken in both urban and suburban areas. Buffer strips provide a barrier of grass in between impervious paving material like parking lots and roads, and the closest body of water. This allows the soil to absorb any pollution before it enters the local aquatic system. Retention ponds can be built in drainage areas to create an aquatic buffer between runoff pollution and the aquatic environment. Runoff and storm water drain into the retention pond, allowing for the contaminants to settle out and become trapped in the pond. The use of porous pavement allows for rain and storm water to drain into the ground beneath the pavement, reducing the amount of runoff that drains directly into the water body. Restoration methods such as constructing wetlands are also used to slow runoff as well as absorb contamination. Construction sites typically implement simple measures to reduce pollution and runoff. Firstly, sediment or silt fences are erected around construction sites to reduce the amount of sediment and large material draining into the nearby water body. Secondly, laying grass or straw along the border of construction sites also works to reduce nonpoint source pollution. In areas served by single-home septic systems, local government regulations can force septic system maintenance to ensure compliance with water quality standards. 
In Washington (state), a novel approach was developed through the creation of a "shellfish protection district" when either a commercial or recreational shellfish bed is downgraded because of ongoing nonpoint source pollution. The shellfish protection district is a geographic area designated by a county to protect water quality and tideland resources, and provides a mechanism to generate local funds for water quality services to control nonpoint sources of pollution. At least two shellfish protection districts in south Puget Sound have instituted septic system operation and maintenance requirements with program fees tied directly to property taxes. Agricultural operations To control sediment and runoff, farmers may utilize erosion controls to reduce runoff flows and retain soil on their fields. Common techniques include contour plowing, crop mulching, crop rotation, planting perennial crops or installing riparian buffers. Conservation tillage is a concept used to reduce runoff while planting a new crop. The farmer leaves some crop residue from the previous planting in the ground to help prevent runoff during the planting process. Nutrients are typically applied to farmland as commercial fertilizer; animal manure; or spraying of municipal or industrial wastewater (effluent) or sludge. Nutrients may also enter runoff from crop residues, irrigation water, wildlife, and atmospheric deposition. Farmers can develop and implement nutrient management plans to reduce excess application of nutrients. To minimize pesticide impacts, farmers may use Integrated Pest Management (IPM) techniques (which can include biological pest control) to maintain control over pests, reduce reliance on chemical pesticides, and protect water quality. Forestry operations Well-planned placement of logging trails, also called skid trails, can reduce the amount of sediment generated. Planning the trails' location as far away from the logging activity as possible, as well as contouring the trails with the land, can reduce the amount of loose sediment in the runoff. Additionally, replanting trees on the land after logging provides a structure for the soil to regain stability as well as replacing the logged environment. Marinas Installing shut-off valves on fuel pumps at a marina dock can help reduce the amount of spillover into the water. Additionally, pump-out stations that are easily accessible to boaters in a marina can provide a clean place in which to dispose of sanitary waste without dumping it directly into the water. Finally, something as simple as having trash containers around a marina can prevent larger objects entering the water. Country examples United States Nonpoint source pollution is the leading cause of water pollution in the United States today, with polluted runoff from agriculture and hydromodification being the primary sources. Regulation of Nonpoint Source Pollution in the United States The definition of a nonpoint source is addressed under the U.S. Clean Water Act as interpreted by the U.S. Environmental Protection Agency (EPA). The law does not provide for direct federal regulation of nonpoint sources, but state and local governments may do so pursuant to state laws. For example, many states have taken steps to implement their own management programs for places such as their coastlines, all of which have to be approved by the National Oceanic and Atmospheric Administration and the EPA. 
The goals of these programs and those alike are to create foundations that encourage statewide pollution reduction by growing and improving systems that already exist. Programs within these state and local governments look to best management practices (BMPs) in order to accomplish their goals of finding the least costly method to reduce the greatest amount of pollution. BMPs can be implemented for both agricultural and urban runoff, and can also be either structural or nonstructural methods. Federal agencies, including EPA and the Natural Resources Conservation Service, have approved and provided a list of commonly used BMPs for the many different categories of nonpoint source pollution. U.S. Clean Water Act provisions for states Congress authorized the CWA section 319 grant program in 1987. Grants are provided to states, territories, and tribes in order to encourage implementation and further development in policy. The law requires all states to operate NPS management programs. EPA requires regular program updates in order to effectively manage the ever-changing nature of their waters, and to ensure effective use of the 319 grant funds and resources. The Coastal Zone Act Reauthorization Amendments (CZARA) of 1990 created a program under the Coastal Zone Management Act that mandates development of nonpoint source pollution management measures in states with coastal waters. CZARA requires states with coastlines to implement management measures to remediate water pollution, and to make sure that the product of these measures is implementation as opposed to adoption. See also Agricultural nutrient runoff stochastic empirical loading and dilution model Trophic state index (water quality indicator) Surface-water hydrology Water quality Water quality modelling References External links US EPA – Nonpoint Source Management Program Agricultural soil science Environmental soil science Environmental science Water pollution
Nonpoint source pollution
[ "Chemistry", "Environmental_science" ]
4,229
[ "Environmental soil science", "nan", "Water pollution" ]
4,147,558
https://en.wikipedia.org/wiki/Doppler%20cooling
Doppler cooling is a mechanism that can be used to trap and slow the motion of atoms to cool a substance. The term is sometimes used synonymously with laser cooling, though laser cooling includes other techniques. History Doppler cooling was simultaneously proposed by two groups in 1975, the first being David J. Wineland and Hans Georg Dehmelt and the second being Theodor W. Hänsch and Arthur Leonard Schawlow. It was first demonstrated by Wineland, Drullinger, and Walls in 1978 and shortly afterwards by Neuhauser, Hohenstatt, Toschek and Dehmelt. One conceptually simple form of Doppler cooling is referred to as optical molasses, since the dissipative optical force resembles the viscous drag on a body moving through molasses. Steven Chu, Claude Cohen-Tannoudji and William D. Phillips were awarded the 1997 Nobel Prize in Physics for their work in laser cooling and atom trapping. Brief explanation Doppler cooling involves light with frequency tuned slightly below an electronic transition in an atom. Because the light is detuned to the "red" (i.e. at lower frequency) of the transition, the atoms will absorb more photons if they move towards the light source, due to the Doppler effect. Consider the simplest case of 1D motion on the x axis. Let the photon be traveling in the +x direction and the atom in the −x direction. In each absorption event, the atom loses a momentum equal to the momentum of the photon. The atom, which is now in the excited state, emits a photon spontaneously but randomly along +x or −x. Momentum is returned to the atom. If the photon was emitted along +x then there is no net change; however, if the photon was emitted along −x, then the atom is moving more slowly in either −x or +x. The net result of the absorption and emission process is a reduced speed of the atom, on the condition that its initial speed is larger than the recoil velocity from scattering a single photon. If the absorption and emission are repeated many times, the mean velocity, and therefore the kinetic energy of the atom, will be reduced. Since the temperature of an ensemble of atoms is a measure of the random internal kinetic energy, this is equivalent to cooling the atoms. The Doppler cooling limit is the minimum temperature achievable with Doppler cooling. Detailed explanation The vast majority of photons that come anywhere near a particular atom are almost completely unaffected by that atom. The atom is almost completely transparent to most frequencies (colors) of photons. A few photons happen to "resonate" with the atom in a few very narrow bands of frequencies (a single color rather than a mixture like white light). When one of those photons comes close to the atom, the atom typically absorbs that photon (absorption spectrum) for a brief period of time, then emits an identical photon (emission spectrum) in some random, unpredictable direction. (Other sorts of interactions between atoms and photons exist, but are not relevant to this article.) The popular idea that lasers increase the thermal energy of matter is not the case when examining individual atoms. If a given atom is practically motionless (a "cold" atom), and the frequency of a laser focused upon it can be controlled, most frequencies do not affect the atom—it is invisible at those frequencies. There are only a few points of electromagnetic frequency that have any effect on that atom. 
At those frequencies, the atom can absorb a photon from the laser, while transitioning to an excited electronic state, and pick up the momentum of that photon. Since the atom now has the photon's momentum, the atom must begin to drift in the direction the photon was traveling. A short time later, the atom will spontaneously emit a photon in a random direction as it relaxes to a lower electronic state. If that photon is emitted in the direction of the original photon, the atom will give up its momentum to the photon and will become motionless again. If the photon is emitted in the opposite direction, the atom will have to provide momentum in that opposite direction, which means the atom will pick up even more momentum in the direction of the original photon (to conserve momentum), with double its original velocity. But usually the photon speeds away in some other direction, giving the atom at least some sideways thrust. Another way of changing frequencies is to change the positioning of the laser, for example, by using a monochromatic (single-color) laser that has a frequency that is a little below one of the "resonant" frequencies of this atom (at which frequency the laser will not directly affect the atom's state). If the laser were to be positioned so that it was moving towards the observed atoms, then the Doppler effect would raise its frequency. At one specific velocity, the frequency would be precisely correct for said atoms to begin absorbing photons. Something very similar happens in a laser cooling apparatus, except such devices start with a warm cloud of atoms moving in numerous directions at variable velocity. Starting with a laser frequency well below the resonant frequency, photons from any one laser pass right through the majority of atoms. However, atoms moving rapidly towards a particular laser catch the photons for that laser, slowing those atoms down until they become transparent again. (Atoms rapidly moving away from that laser are transparent to that laser's photons—but they are rapidly moving towards the laser directly opposite it). This utilization of a specific velocity to induce absorption is also seen in Mössbauer spectroscopy. On a graph of atom velocities (atoms moving rapidly to the right correspond with stationary dots far to the right, atoms moving rapidly to the left correspond with stationary dots far to the left), there is a narrow band on the left edge corresponding to the speed at which those atoms start absorbing photons from the left laser. Atoms in that band are the only ones that interact with the left laser. When a photon from the left laser slams into one of those atoms, it suddenly slows down an amount corresponding to the momentum of that photon (the dot would be redrawn some fixed "quantum" distance further to the right). If the atom releases the photon directly to the right, then the dot is redrawn that same distance to the left, putting it back in the narrow band of interaction. But usually the atom releases the photon in some other random direction, and the dot is redrawn that quantum distance in the opposite direction. Such an apparatus would be constructed with many lasers, corresponding to many boundary lines that completely surround that cloud of dots. As the laser frequency is increased, the boundary contracts, pushing all the dots on that graph towards zero velocity, the given definition of "cold". Limits Minimum temperature The Doppler temperature is the minimum temperature achievable with Doppler cooling. 
When a photon is absorbed by an atom counter-propagating to the light source, its velocity is decreased by momentum conservation. When the absorbed photon is spontaneously emitted by the excited atom, the atom receives a momentum kick in a random direction. The spontaneous emissions are isotropic and therefore these momentum kicks average to zero for the mean velocity. On the other hand, the mean squared velocity, ⟨v²⟩, is not zero in the random process, and thus heat is supplied to the atom. At equilibrium, the heating and cooling rates are equal, which sets a limit on the amount by which the atom can be cooled. As the transitions used for Doppler cooling have a broad natural linewidth Γ (measured in radians per second), this sets the lower limit to the temperature of the atoms after cooling to be T_Doppler = ħΓ/(2k_B), where k_B is the Boltzmann constant and ħ is the reduced Planck constant. This is usually much higher than the recoil temperature, which is the temperature associated with the momentum gained from the spontaneous emission of a photon. The Doppler limit has been verified with a gas of metastable helium. Sub-Doppler cooling Temperatures well below the Doppler limit have been achieved with various laser cooling methods, including Sisyphus cooling, evaporative cooling, and resolved sideband cooling. The theory of Doppler cooling assumes an atom with a simple two level structure, whereas most atomic species which are laser cooled have complicated hyperfine structure. Mechanisms such as Sisyphus cooling due to multiple ground states lead to temperatures lower than the Doppler limit. Maximum concentration The concentration must be minimal to prevent the absorption of the photons into the gas in the form of heat. This absorption happens when two atoms collide with each other while one of them has an excited electron. There is then a possibility of the excited electron dropping back to the ground state with its extra energy liberated in additional kinetic energy to the colliding atoms—which heats the atoms. This works against the cooling process and therefore limits the maximum concentration of gas that can be cooled using this method. Atomic structure Only certain atoms and ions have optical transitions amenable to laser cooling, since it is extremely difficult to generate the amounts of laser power needed at wavelengths much shorter than 300 nm. Furthermore, the more hyperfine structure an atom has, the more ways there are for it to emit a photon from the upper state and not return to its original state, putting it in a dark state and removing it from the cooling process. It is possible to use other lasers to optically pump those atoms back into the excited state and try again, but the more complex the hyperfine structure is, the more (narrow-band, frequency locked) lasers are required. Since frequency-locked lasers are both complex and expensive, atoms which need more than one extra repump laser are rarely cooled; the common rubidium magneto-optical trap, for example, requires one repump laser. This is also the reason why molecules are in general difficult to laser cool: in addition to hyperfine structure, molecules also have rovibronic couplings and so can also decay into excited rotational or vibrational states. However, laser cooling of molecules has been demonstrated, first with SrF molecules, and subsequently with other diatomics such as CaF and YO. Configurations Counter-propagating sets of laser beams in all three Cartesian dimensions may be used to cool the three motional degrees of freedom of the atom. 
Common laser-cooling configurations include optical molasses, the magneto-optical trap, and the Zeeman slower. Atomic ions, trapped in an ion trap, can be cooled with a single laser beam as long as that beam has a component along all three motional degrees of freedom. This is in contrast to the six beams required to trap neutral atoms. The original laser cooling experiments were performed on ions in ion traps. (In theory, neutral atoms could be cooled with a single beam if they could be trapped in a deep trap, but in practice neutral traps are much shallower than ion traps and a single recoil event can be enough to kick a neutral atom out of the trap.) Applications One use for Doppler cooling is the optical molasses technique. This process itself forms a part of the magneto-optical trap but it can be used independently. Doppler cooling is also used in spectroscopy and metrology, where cooling allows narrower spectroscopic features. For example, all of the best atomic clock technologies involve Doppler cooling at some point. See also Magneto-optical trap Resolved sideband cooling References Further reading Atomic physics Cooling technology Doppler effects
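As a rough numerical illustration of the Doppler limit formula T_Doppler = ħΓ/(2k_B) given in the minimum temperature section above (this sketch is not part of the original article), the following Python code also evaluates the single-photon recoil velocity ħk/m mentioned in the brief explanation. The linewidth, wavelength, and mass used are typical textbook values for the rubidium-87 D2 line and are assumptions made for illustration only.

```python
import math

# Physical constants (SI units)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
k_B = 1.380649e-23       # Boltzmann constant, J/K
amu = 1.66053906660e-27  # atomic mass unit, kg

# Assumed, illustrative values roughly matching the rubidium-87 D2 transition
gamma = 2 * math.pi * 6.07e6   # natural linewidth, rad/s
wavelength = 780e-9            # transition wavelength, m
mass = 87 * amu                # atomic mass, kg

# Doppler cooling limit: T_Doppler = hbar * Gamma / (2 * k_B)
T_doppler = hbar * gamma / (2 * k_B)

# Recoil velocity from absorbing or emitting a single photon: v_r = hbar * k / m
k_photon = 2 * math.pi / wavelength
v_recoil = hbar * k_photon / mass

print(f"Doppler limit: {T_doppler * 1e6:.0f} microkelvin")          # roughly 146 uK
print(f"Single-photon recoil velocity: {v_recoil * 1e3:.1f} mm/s")  # roughly 5.9 mm/s
```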
Doppler cooling
[ "Physics", "Chemistry" ]
2,357
[ "Physical phenomena", "Quantum mechanics", "Astrophysics", "Atomic physics", " molecular", "Atomic", "Doppler effects", " and optical physics" ]
4,147,648
https://en.wikipedia.org/wiki/Petrick%27s%20method
In Boolean algebra, Petrick's method (also known as Petrick function or branch-and-bound method) is a technique described by Stanley R. Petrick (1931–2006) in 1956 for determining all minimum sum-of-products solutions from a prime implicant chart. Petrick's method is very tedious for large charts, but it is easy to implement on a computer. The method was improved by Insley B. Pyne and Edward Joseph McCluskey in 1962. Algorithm Reduce the prime implicant chart by eliminating the essential prime implicant rows and the corresponding columns. Label the rows of the reduced prime implicant chart (for example K, L, M, and so on, as in the example below). Form a logical function P which is true when all the columns are covered: P is a product of sums in which each sum term is the sum (OR) of the labels of the rows that cover one column. Multiply P out (using the distributive law) into a sum of products and minimize it by applying the absorption law X + XY = X. Each term in the result represents a solution, that is, a set of rows which covers all of the minterms in the table. To determine the minimum solutions, first find those terms which contain a minimum number of prime implicants. Next, for each of the terms found in the previous step, count the number of literals in each prime implicant and find the total number of literals. Choose the term or terms composed of the minimum total number of literals, and write out the corresponding sums of prime implicants. Example of Petrick's method Following is the function we want to reduce: f(A, B, C) = Σ m(0, 1, 2, 5, 6, 7). The prime implicant chart from the Quine-McCluskey algorithm is as follows: 
{| class="wikitable" style="text-align:center;"
|-
!  || 0 || 1 || 2 || 5 || 6 || 7 || ⇒ || A || B || C
|-
| K = m(0,1) || ✓ || ✓ ||  ||  ||  ||  || ⇒ || 0 || 0 ||
|-
| L = m(0,2) || ✓ ||  || ✓ ||  ||  ||  || ⇒ || 0 ||  || 0
|-
| M = m(1,5) ||  || ✓ ||  || ✓ ||  ||  || ⇒ ||  || 0 || 1
|-
| N = m(2,6) ||  ||  || ✓ ||  || ✓ ||  || ⇒ ||  || 1 || 0
|-
| P = m(5,7) ||  ||  ||  || ✓ ||  || ✓ || ⇒ || 1 ||  || 1
|-
| Q = m(6,7) ||  ||  ||  ||  || ✓ || ✓ || ⇒ || 1 || 1 ||
|}
Based on the ✓ marks in the table above, build a product of sums of the rows. Each column of the table makes a sum term which adds together the rows having a ✓ mark in that column: (K+L)(K+M)(L+N)(M+P)(N+Q)(P+Q) Use the distributive law to turn that expression into a sum of products. Also use the following equivalences to simplify the final expression: X + XY = X, XX = X and X + X = X.
= (K+L)(K+M)(L+N)(M+P)(N+Q)(P+Q)
= (K+LM)(N+LQ)(P+MQ)
= (KN+KLQ+LMN+LMQ)(P+MQ)
= KNP + KLPQ + LMNP + LMPQ + KMNQ + KLMQ + LMNQ + LMQ
Now use again the equivalence X + XY = X to further reduce the expression:
= KNP + KLPQ + LMNP + LMQ + KMNQ
Choose the products with the fewest terms; in this example, there are two products with three terms: KNP and LMQ. Referring to the prime implicant table, transform each product by replacing each prime implicant with its expression in Boolean variables, and substitute a sum for the product. Then choose the result which contains the fewest total literals (Boolean variables and their complements). Referring to our example: KNP expands to A'B' + BC' + AC (where K converts to A'B', N converts to BC', etc.), and LMQ expands to A'C' + B'C + AB. Both products expand to six literals each, so either one can be used. In general, application of Petrick's method is tedious for large charts, but it is easy to implement on a computer. 
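As the closing sentence notes, the method is easy to implement on a computer. The following short Python sketch (not part of the original article; names and structure are illustrative only) carries out the multiply-out-and-absorb steps on the example chart above, representing each product term as a set of row labels and applying the absorption law X + XY = X after every multiplication.

```python
from itertools import product

def petrick(cover):
    """Return all minimum covers (fewest prime implicants) for a
    prime implicant chart, using Petrick's method.

    `cover` maps each column (minterm) to the set of row labels
    (prime implicants) that cover it.
    """
    sop = [frozenset()]                  # sum of products; start with the empty product
    for rows in cover.values():          # one sum term (row1 + row2 + ...) per column
        expanded = {term | {row} for term, row in product(sop, rows)}
        # Absorption law X + XY = X: drop any term that contains another term.
        sop = [t for t in expanded if not any(o < t for o in expanded)]
    fewest = min(len(t) for t in sop)
    return [sorted(t) for t in sop if len(t) == fewest]

# Prime implicant chart from the worked example above.
chart = {
    0: {"K", "L"}, 1: {"K", "M"}, 2: {"L", "N"},
    5: {"M", "P"}, 6: {"N", "Q"}, 7: {"P", "Q"},
}
print(petrick(chart))   # the two minimum covers: K·N·P and L·M·Q (order may vary)
```

Picking between the returned covers by total literal count (the last two steps of the algorithm) would still be done against their A'B'-style expansions, as in the prose above.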
Notes References Further reading (xiv+379+1 pages) External links Tutorial on Quine-McCluskey and Petrick's method Petrick C++ implementation based on the tutorial above Boolean algebra
Petrick's method
[ "Mathematics" ]
1,123
[ "Boolean algebra", "Fields of abstract algebra", "Mathematical logic" ]
4,147,779
https://en.wikipedia.org/wiki/Neft%20Da%C5%9Flar%C4%B1
Neft Daşları is an industrial settlement in Baku, Azerbaijan. The settlement forms part of the municipality of Çilov-Neft Daşları in the Pirallahy raion. It lies in the Caspian Sea, at some distance from the Azerbaijani capital Baku and from the nearest mainland shore. A full town on the sea, it was the first oil platform in Azerbaijan, and the first operating offshore oil platform in the world, incorporating numerous drilling platforms. It is featured in Guinness World Records as the world's first offshore oil platform. The settlement began with a single path out over the water and grew into a system of paths and platforms built on the back of ships sunk to serve as the Neft Daşları's foundation. The most distinctive feature of Neft Daşları is that it is actually a functional city, with a population of about 2,000, that once comprised an extensive network of streets built on piles of landfill. Etymology The settlement was originally named Chornye Kamni (Russian for "Black Rocks"), but was later renamed Neftyanye Kamni (Russian for "Petroleum Rocks"), in Azerbaijani nowadays Neft Daşları (with the same meaning), replacing the allusion to the black colour of oil with a reference to the substance itself. History Construction of the settlement The first large-scale geological study of the area was conducted in 1945–1948. The settlement of Neft Daşları was built after oil was discovered there on 7 November 1949 beneath the Caspian Sea. It became the world's first offshore oil platform. By 1951, the Neft Daşları was ready for production, equipped with all of the infrastructure needed at the time. Drilling platforms were erected, oil tanks installed, and docks with enclosures for ships were built. The first oil from the Neft Daşları was loaded into a tanker in the same year. In 1952, the systematic construction of trestle bridges connecting the artificial islands was begun. A number of Soviet factories constructed crane assemblies especially for use on the Neft Daşları, along with a crane barge that could carry up to 100 tons of oil. The assemblies were equipped with diesel hammers used to drive piles into the sea floor. Large-scale construction started on the settlement in 1958, which included nine-story hostels, hotels, cultural palaces, bakery factories and lemonade workshops. The mass development of Neft Daşları continued during 1976–1978 with the building of a five-story dormitory and two oil-gas compressor stations, the installation of a drinking water facility, and the construction of two underwater pipelines to the Dubendi terminal. In addition, a flyover for vehicular traffic was created. As a result, the area of the settlement grew considerably in the 1960s, with an ever longer network of steel trestle bridges joining the man-made islands, although much has since fallen into the Caspian Sea. Post-independence In November 2009, the settlement celebrated its 60th anniversary. Over the last 60 years, the oilfields of Neft Daşları have produced more than 170 million tons of oil and 15 billion cubic metres of associated natural gas. According to present-day estimates by geologists, the volume of recoverable reserves is as high as 30 million tons. The oil platforms have gradually fallen into disrepair, and no refurbishment plans are currently underway. Demography The population varies from time to time in the settlement. As of 2008, the platforms have a combined population of about 2,000 men and women, who work in week-long offshore shifts. At one point 5,000 people worked there. 
Oil extraction The oil extraction is carried out from the shallow water portion of the Absheron geological trend. Accidents On 4 December 2015, three workers of SOCAR were reported missing after part of the living quarters fell into the sea due to a heavy storm. In popular culture In 2008, a Swiss documentary crew led by film director Marc Wolfensberger filmed "La Cité du Pétrole / Oil Rocks – City above the Sea" in the settlement; the film was released in 2009. Neft Daşları is featured in a scene in the James Bond film The World Is Not Enough (1999). Neft Daşları is listed in Guinness World Records as the oldest offshore oil platform. References External links Map of the area Photos of Oil Rocks taken in 2013 Travel guide for Oil Rocks English Russia: Oil Stones, A Soviet City in the Middle of the Sea Link to the film trailer Oil Rocks – City above the Sea Further reading Mir-Babayev M.F. The role of Azerbaijan in the World's oil industry – “Oil-Industry History” (USA), 2011, v. 12, no. 1, pp. 109–123. Mir-Babayev M.F. Oil Rocks: the first city on the Caspian Sea – “Reservoir”, Canada, 2012, Volume 39, Issue 4, April, pp. 33–36. Oil platforms Coastal construction Energy in the Soviet Union Populated places in Baku Populated places on the Caspian Sea Seasteading Energy infrastructure in Azerbaijan Petroleum industry in Azerbaijan Azerbaijani inventions
Neft Daşları
[ "Chemistry", "Engineering" ]
1,046
[ "Oil platforms", "Structural engineering", "Petroleum technology", "Construction", "Coastal construction", "Natural gas technology" ]
4,147,979
https://en.wikipedia.org/wiki/Merry-go-round%20train
A merry-go-round train, often abbreviated to MGR, is a block train of hopper wagons which both loads and unloads its cargo while moving. In the United Kingdom, such trains are most commonly coal trains delivering to power stations. These trains were introduced in the 1960s, and were one of the few innovations of the Beeching cuts, along with investment from the Central Electricity Generating Board (CEGB) and the NCB (National Coal Board) into new power stations and loading facilities. History and description West Burton Power Station was used as a testing ground for the MGR system, but the first power station to receive its coal by MGR was Cockenzie in Scotland in 1966. It was estimated at the time that the 80 MGR hoppers needed to feed Cockenzie would replace up to 1,500 conventional wagons. A 1.2 GW power station, such as Cockenzie, receives up to 3 million tons of coal a year, whereas a larger 2 GW plant, like West Burton, receives up to 5 million tons per year. By the end of 1966 there were about 900 wagons carrying 53,000 tons a week to four power stations. Power stations that were built to handle the new MGR traffic were Aberthaw, Drax, Didcot, Eggborough, Ferrybridge C, Fiddlers Ferry and Ratcliffe, of which only the last is still open for traffic. Many of the older power stations were gradually converted to MGR operation. Merry-go-round operation was also adopted for the Immingham Bulk Terminal built in the early 1970s to supply iron ore to the Scunthorpe Steelworks from the Port of Immingham. The MGR hopper wagons There were 11,162 MGR hoppers built. The numbering ranges were 350000-359571, 365000-366129 and 368000-368459. The two prototype wagons, 350000 and 350001, were built at Darlington works in 1964 and 1965 respectively, following which several large batches were constructed at the nearby Shildon works. With the exception of the two prototypes built at Darlington and the 160 wagons built at Ashford, all 10,702 HAA wagons and 460 HDA wagons were built there. Most of the early wagons (up to 355396) were originally lettered with B prefix numbers but these were later removed. While the majority of the wagons were built as HAAs, the final batch (built in 1982 as 368000-368459) were coded as HDA to indicate their ability to operate at up to 60 mph when empty instead of the standard 45 mph. This was achieved through modifications to the design of the brakes. Another variation, which did not initially result in a change of TOPS code, was the fitting of top canopies to increase the load volume. Many of the early wagons had these but then lost them, and for some years canopied hoppers were only common in Scotland. When MGR services were first introduced, British Rail designed an all-new wagon with air brakes and a capacity for 33 tonnes of pulverised coal. The prototype was a 32-ton unit and was built at Darlington and tested in 1964. Before the introduction of TOPS these wagons were referred to by the telegraphic code name "HOP AB 33", an abbreviation of Hopper Air Brake 33 tonne. With the coming of privatisation to Britain's railways, new wagon types have been introduced by EWS (HTA), GB Railfreight (HYA), Freightliner Heavy Haul (HHA and HXA) and Jarvis Fastline (IIA). These new wagons have increased tonnage and air-operated doors that do away with the need for the "Dalek" release mechanism at the power station end of the trip. 
MGR wagon variants With the introduction of TOPS in 1973, the wagons were given the code "HAA", and with modifications to the wagons other codes have been allocated over the years, including HDA and HMA. From the early 1990s, further TOPS codes were introduced to show detail differences, such as canopies and modified brakes. Many HAAs became HFAs, while all of the HDAs became HBAs, this code now being available since all the original HBA hoppers had been rebuilt as HEAs. Later codes used were HCA, HMA and HNA. MGR wagon liveries The livery of these wagons consisted of unpainted metal hoppers and black underframes. The hopper support framework was originally brown, then red with the introduction of the new Railfreight image in the late 1970s. When Railfreight re-invented itself in 1987, a new livery with yellow framework and a large coal sector logo on the hopper side was introduced. Under EWS the framework is now painted maroon. Merry-go-round hoppers were worked hard, however, and the typical livery included a coating of coal dust. Some of the terminals served used stationary shunters to move the wagons forward at low speed. These often featured tyred wheels that gripped the wagon sides, resulting in horizontal streaks on the hopper sides. The balloon loop and the Daleks Merry-go-round trains are associated with the construction of balloon loops at the origin and destination so that the train does not waste time shunting the engine from one end of the train to the other. However, whilst power stations such as Ratcliffe, West Burton and Cottam had balloon loops, few if any colliery/loading points had them, and thus true merry-go-round operation never really existed. "Dalek" was the nickname given to the automatic door opening/closing equipment located on the path to and from the bunker in the power station. The nickname was derived from its appearance. Two have been preserved by the National Wagon Preservation Group from Hope Cement Works; they arrived at Barrow Hill on Friday 28 August 2015. Locomotive control Locomotives used on MGR trains needed to be fitted with an electronic speed control system known as Slow Speed Control, so that the driver could engage the system and the train could proceed at a fixed, very slow speed under the loading and unloading facilities. The system was originally fitted to some members of Class 20, Class 26 and Class 47. Later, some members of Class 37 were also fitted, while the system was fitted to all members of Classes 56, 58, 59, 60 and 66. Additionally, all Class 50s were originally fitted, although the system was later removed due to non-use. The Class 47 locomotives were replaced by the Class 56s in 1977, with an increase in the number of wagons in a train, in most cases to around 30 to 34. This was followed by the Class 58s and the Class 60s. Two of the Class 60s were named in honour of the men behind the MGR system, 60092 Reginald Munns and 60093 Jack Stirk. A small number of other locomotives were modified for working MGRs: in Scotland the Class 26s and some Class 20s, and in South Wales some Class 37s. In 1985, Driver Only Operation (DOO) was introduced after a short training session on the wagons, which mostly showed how to isolate a defective brake; DOO MGR trains began running in the Worksop and Shirebrook areas to West Burton and Cottam. These trains initially carried a yellow-painted tail lamp to identify the train as DOO, but as the system rapidly developed, the use of these yellow tail lamps was discontinued on all trains. 
MGR hopper decline and re-use The decline in the UK mining industry from the 1980s onwards made many of these wagons redundant. More of the type were replaced when EWS introduced a new batch of 1144 high-capacity bogie coal hoppers (HTA) from 2001. The last location to have coal delivered by MGR wagons was the Hope Cement Works in August 2010. Although many HAAs were scrapped for being worn out, over 1,000 have donated their underframes to be rebuilt as MHA low-sided box spoil wagons for infrastructure and general use. Conversions have been undertaken since 1997 and the new vehicles have been numbered in the 394001-394999 and 396000-396101 ranges. A batch of fifteen HAAs was rebuilt as china clay hoppers with a canvas roof (CDA); all but one were renumbered in the 375124-375137 range, the other being the still-extant 353224, which is listed below. Fourteen HAAs were modified as MSA scrap hoppers in 2004 and renumbered in the 397000-397013 range. They proved to be a short-lived idea, though, as the light alloy bodies took too much damage from rough use, and the wagons were withdrawn and scrapped after only a few weeks of use. Extant examples There are now only four MGR hoppers still remaining on the network, excluding the examples that were successfully converted into china clay covered hoppers. Scrapped examples Whilst there were over 10,000 of the MGR wagons to begin with, there was only one notable scrapping after their official withdrawal in 2009, when a wagon was cut up in error at Newport Docks. The wagon had been in use as a static buffer on the main dock, but the scrap merchant cut it up by mistake; this example would otherwise have been in line for preservation. Converted examples The CDA was introduced in 1987-88 for English China Clay trains in Cornwall, with 124 wagons being built at Doncaster Works. These were given the design code CD002A and were largely based on the design of the HAA coal hopper wagon. A prototype was converted from an HAA, number 353224, in 1987 by G Nevilles Ltd and given the design code CD001A; the National Wagon Preservation Group are now custodians of this wagon. A further 15 were rebuilt from HAA hoppers in 1989, of which three still survive to this day, as detailed in the table below. Preservation Several examples have been preserved. The first MGR to be preserved was the Darlington-built prototype, HAA 350000, taken into the care of the National Railway Museum (NRM) in October 1995. In 2011, the NRM secured the last-built MGR hopper (HDA 368459), and it was appropriately moved to its Shildon outpost in May of the same year. In 2014 an appeal, The MGR Appeal, was set up to try to preserve another example, HMA 355798. After a successful appeal, it was saved for preservation from DB Schenker, having previously been stored at their Immingham depot in Lincolnshire. In July 2015, the MGR Appeal was officially formed as the National Wagon Preservation Group. In May 2015, the Chasewater Railway secured three MGR hoppers from Mossend Yard (DBS) and moved them to Brownhills West station. In its statement, the railway advised that these three MGRs were the "arrival of the first half of our HAA wagon fleet". The other three likely candidates are the three withdrawn in the Newport Docks area. Since then, however, No. 353934, which was stored off the tracks, has been cut up by accident; the group still intends to preserve the remaining two examples. 
The Chasewater Railway and the NWPG came together to create Project:MGR, a collaborative effort to host the MGR wagons and run regular demonstration trains for the public. As of 2023, the NWPG and Chasewater Railway have collectively acquired nine MGR wagons, of which eight are at the Chasewater Railway in Brownhills, Staffordshire, with 351500 at the Midland Railway Centre undergoing extensive repairs and a repaint. See also British carriage and wagon numbering and classification Kincardine power station References External links National Wagon Preservation Group Rail freight transport Rail freight transport in the United Kingdom Trains
Merry-go-round train
[ "Technology" ]
2,404
[ "Trains", "Transport systems" ]
4,148,025
https://en.wikipedia.org/wiki/Radiation-absorbent%20material
In materials science, radiation-absorbent material (RAM) is a material which has been specially designed and shaped to absorb incident RF radiation (also known as non-ionising radiation) as effectively as possible, from as many incident directions as possible. The more effective the RAM, the lower the resulting level of reflected RF radiation. Many measurements in electromagnetic compatibility (EMC) and antenna radiation patterns require that spurious signals arising from the test setup, including reflections, be negligible, to avoid the risk of measurement errors and ambiguities. Introduction One of the most effective types of RAM comprises arrays of pyramid-shaped pieces, each of which is constructed from a suitably lossy material. To work effectively, all internal surfaces of the anechoic chamber must be entirely covered with RAM. Sections of RAM may be temporarily removed to install equipment, but they must be replaced before performing any tests. To be sufficiently lossy, RAM can be neither a good electrical conductor nor a good electrical insulator, as neither type actually absorbs any power. Typically, pyramidal RAM comprises a rubberized foam material impregnated with controlled mixtures of carbon and iron. The length from base to tip of the pyramid structure is chosen based on the lowest expected frequency and the amount of absorption required. For low-frequency damping this distance needs to be correspondingly large, while high-frequency panels can be much shorter (the sketch at the end of this section illustrates how the required depth scales with wavelength). Panels of RAM are typically installed on the walls of an EMC test chamber with the tips pointing inward into the chamber. Pyramidal RAM attenuates signal by two effects: scattering and absorption. Scattering can occur coherently, when reflected waves are in phase but directed away from the receiver, or incoherently, where waves are picked up by the receiver but are out of phase and thus have lower signal strength. This incoherent scattering also occurs within the foam structure, with the suspended carbon particles promoting destructive interference. Internal scattering can result in as much as 10 dB of attenuation. Meanwhile, the pyramid shapes are cut at angles that maximize the number of bounces a wave makes within the structure. With each bounce, the wave loses energy to the foam material and thus exits with lower signal strength. An alternative type of RAM comprises flat plates of ferrite material, in the form of flat tiles fixed to all interior surfaces of the chamber. This type has a smaller effective frequency range than pyramidal RAM and is designed to be fixed to good conductive surfaces. It is generally easier to fit and more durable than the pyramidal type, but is less effective at higher frequencies. Its performance may, however, be quite adequate if tests are limited to lower frequencies (ferrite plates have a damping curve that makes them most effective between 30–1000 MHz). There is also a hybrid type, a ferrite in pyramidal shape, which combines the advantages of both technologies: the frequency range can be maximized while the pyramid remains small. For physically realizable radiation-absorbent materials there is a trade-off between thickness and bandwidth: the optimal thickness-to-bandwidth ratio of a radiation-absorbent material is given by the Rozanov limit. 
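The scaling between the lowest frequency to be absorbed and the depth of pyramidal absorber it demands can be illustrated with a simple free-space wavelength calculation. The sketch below is only indicative: the example frequencies are arbitrary, and the quarter-wavelength depth used as a figure of merit is an assumption made for illustration, since the depth actually required depends on the foam material and on the reflectivity specification.

    #include <stdio.h>

    int main(void)
    {
        const double c = 299792458.0;             /* speed of light, m/s */
        /* Example "lowest frequency of interest" values, in Hz (assumed) */
        const double freqs[] = { 30e6, 100e6, 1e9, 10e9 };
        const int n = sizeof(freqs) / sizeof(freqs[0]);

        printf("%12s %16s %22s\n", "f_low (MHz)", "wavelength (m)", "~lambda/4 depth (m)");
        for (int i = 0; i < n; i++) {
            double lambda = c / freqs[i];         /* free-space wavelength         */
            double depth  = lambda / 4.0;         /* assumed rule-of-thumb minimum */
            printf("%12.0f %16.3f %22.3f\n", freqs[i] / 1e6, lambda, depth);
        }
        return 0;
    }

The trend rather than the exact numbers is the point: halving the lowest frequency of interest roughly doubles the depth of foam needed, which is why ferrite tiles or hybrid ferrite-pyramid absorbers are attractive at the low end of the 30–1000 MHz range, where all-foam pyramids would become impractically deep.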
Use in stealth technology Radar-absorbent materials are used in stealth technology to disguise a vehicle or structure from radar detection. A material's absorbency at a given frequency of radar wave depends upon its composition. RAM cannot perfectly absorb radar at any frequency, but any given composition does have greater absorbency at some frequencies than others; no one RAM is suited to absorption of all radar frequencies. A common misunderstanding is that RAM makes an object invisible to radar. A radar-absorbent material can significantly reduce an object's radar cross-section at specific radar frequencies, but it does not result in "invisibility" at any frequency. History The earliest forms of stealth coating were radar-absorbing paints developed by Major K. Mano of the Tama Technical Institute and Dr. Shiba of the Tokyo Engineering College for the IJAAF. Multiple paint mixtures were tested, including ferric oxide with liquid rubber, as well as ferric oxide, asphalt and airplane dope, which gave the best results. Despite success in laboratory tests, the paints saw little practical application, as they were heavy and would significantly impact the performance of any aircraft they were applied to. Conversely, the IJN saw great potential in anti-radar materials, and the Second Naval Technical Institute began research on layered materials, rather than paints, to absorb radar waves. Rubber and plastic containing varying ratios of carbon powder were layered to absorb and disperse radar waves. The results were promising against 3 GHz (S band) frequencies, but poor against 3 cm wavelength (10 GHz, X band) radar. Work on the program was halted by Allied bombing raids, but research was continued post-war by the Americans with mild success. In September 1944, the German navy introduced materials called Sumpf and Schornsteinfeger, coatings used during World War II for the snorkels (or periscopes) of submarines to lower their reflectivity in the 20 cm radar band (1.5 GHz, L band) the Allies used. The material had a layered structure and was based on graphite particles and other semiconductive materials embedded in a rubber matrix. The material's efficiency was partially reduced by the action of sea water. A related use was planned for the Horten Ho 229 aircraft: the adhesive which bonded the plywood sheets in its skin was impregnated with graphite particles, which were intended to reduce its visibility to Britain's radar. Types of radar-absorbent material (RAM) Iron ball paint absorber One of the most commonly known types of RAM is iron ball paint. It contains tiny spheres coated with carbonyl iron or ferrite. Radar waves induce molecular oscillations from the alternating magnetic field in this paint, which leads to conversion of the radar energy into heat. The heat is then transferred to the aircraft and dissipated. The iron particles in the paint are obtained by decomposition of iron pentacarbonyl and may contain traces of carbon, oxygen, and nitrogen. One technique used in the F-117A Nighthawk and other such stealth aircraft is to use electrically isolated carbonyl iron balls of specific dimensions suspended in a two-part epoxy paint. Each of these microscopic spheres is coated in silicon dioxide as an insulator through a proprietary process. Then, during the panel fabrication process, while the paint is still liquid, a magnetic field of a specific strength is applied at a specific distance to create magnetic field patterns in the carbonyl iron balls within the still-liquid paint, which behaves as a ferrofluid. The paint then hardens with the magnetic field holding the particles in their magnetic pattern. 
Some experimentation has been done applying opposing north–south magnetic fields to opposite sides of the painted panels, causing the carbonyl iron particles to align (standing up on end so they are three-dimensionally parallel to the magnetic field). The carbonyl iron ball paint is most effective when the balls are evenly dispersed, electrically isolated, and present a gradient of progressively greater density to the incoming radar waves. A related type of RAM consists of neoprene polymer sheets with ferrite grains or conductive carbon black particles (containing about 0.30% of crystalline graphite by cured weight) embedded in the polymer matrix. The tiles were used on early versions of the F-117A Nighthawk, although more recent models use painted RAM. The painting of the F-117 is done by industrial robots so the paint can be applied consistently in specific layer thicknesses and densities. The plane is covered in tiles "glued" to the fuselage, and the remaining gaps are filled with iron ball "glue". The United States Air Force introduced a radar-absorbent paint made from both ferrofluidic and nonmagnetic substances. By reducing the reflection of electromagnetic waves, this material helps to reduce the visibility of RAM-painted aircraft on radar. The Israeli firm Nanoflight has also made a radar-absorbing paint that uses nanoparticles. The Republic of China (Taiwan)'s military has also successfully developed a radar-absorbing paint, which is currently used on Taiwanese stealth warships and on the Taiwanese-built stealth jet fighter now in development. This work is a response to the development of stealth technology by their rival, the mainland People's Republic of China, which is known to have displayed both stealth warships and stealth aircraft to the public. Foam absorber Foam absorber is used as the lining of anechoic chambers for electromagnetic radiation measurements. This material typically consists of a fireproofed urethane foam loaded with conductive carbon black (carbonyl iron spherical particles and/or crystalline graphite particles) in mixtures between 0.05% and 0.1% by weight in the finished product, and cut into square pyramids with dimensions set specific to the wavelengths of interest. Further improvements can be made when the conductive particulates are layered in a density gradient, so that the tip of the pyramid has the lowest percentage of particles and the base contains the highest density of particles. This presents a "soft" impedance change to incoming radar waves and further reduces reflection (echo). The length from base to tip, and the width of the base, of the pyramid structure are chosen based on the lowest expected frequency when a wide-band absorber is sought. For low-frequency damping in military applications this distance must be substantial, while high-frequency panels can be much shorter. An example of a high-frequency application would be police radar (speed-measuring radar in the K and Ka bands), for which the pyramids are correspondingly short; each pyramid sits on a 5 cm x 5 cm cubical base. The four edges of the pyramid are softly sweeping arcs, giving the pyramid a slightly "bloated" look. This arc provides some additional scatter and prevents any sharp edge from creating a coherent reflection. Panels of RAM are installed with the tips of the pyramids pointing toward the radar source. These pyramids may also be hidden behind an outer, nearly radar-transparent shell where aerodynamics are required. 
Pyramidal RAM attenuates signal by scattering and absorption. Scattering can occur coherently, when reflected waves are in phase but directed away from the receiver, or incoherently, where waves may be reflected back to the receiver but are out of phase and thus have lower signal strength. A good example of coherent reflection is the faceted shape of the F-117A stealth aircraft, which presents angles to the radar source such that coherent waves are reflected away from the point of origin (usually the detection source). Incoherent scattering also occurs within the foam structure, with the suspended conductive particles promoting destructive interference. Internal scattering can result in as much as 10 dB of attenuation. Meanwhile, the pyramid shapes are cut at angles that maximize the number of bounces a wave makes within the structure. With each bounce, the wave loses energy to the foam material and thus exits with lower signal strength. Other foam absorbers are available in flat sheets, using an increasing gradient of carbon loadings in different layers. Absorption within the foam material occurs when radar energy is converted to heat in the conductive particles. Therefore, in applications where high radar energies are involved, cooling fans are used to exhaust the heat generated. Jaumann absorber A Jaumann absorber or Jaumann layer is a radar-absorbent structure. When first introduced in 1943, the Jaumann layer consisted of two equally spaced reflective surfaces and a conductive ground plane. One can think of it as a generalized, multilayered Salisbury screen, as the principles are similar. Being a resonant absorber (i.e. it uses wave interference to cancel the reflected wave), the Jaumann layer depends upon the λ/4 spacing between the first reflective surface and the ground plane and between the two reflective surfaces (a total of λ/4 + λ/4). Because the wave can resonate at two frequencies, the Jaumann layer produces two absorption maxima across a band of wavelengths (when the two-layer configuration is used). These absorbers must have all of the layers parallel to each other and to the ground plane that they conceal. More elaborate Jaumann absorbers use a series of dielectric surfaces that separate conductive sheets. The conductivity of those sheets increases with proximity to the ground plane. 
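Because the Jaumann layer is a resonant (interference) absorber, its layer spacing scales directly with the design wavelength. The sketch below computes the quarter-wave spacing for a few illustrative frequencies; the band centres chosen are assumed example values, and the dielectric-spacer case uses the common approximation that the electrical spacing shrinks by the square root of the relative permittivity, an idealisation rather than a statement about any particular product.

    #include <stdio.h>
    #include <math.h>

    /* Quarter-wave spacing of a resonant absorber layer for a given
       design frequency, optionally filled with a dielectric spacer. */
    static double quarter_wave_m(double freq_hz, double eps_r)
    {
        const double c = 299792458.0;             /* speed of light, m/s */
        return c / (4.0 * freq_hz * sqrt(eps_r)); /* spacing shrinks in a dielectric */
    }

    int main(void)
    {
        /* Illustrative design frequencies (Hz) and spacer permittivities (assumed) */
        const double cases[][2] = {
            { 1.5e9, 1.0 },   /* L band, air spaced                */
            { 10e9,  1.0 },   /* X band, air spaced                */
            { 10e9,  4.0 },   /* X band, eps_r = 4 dielectric fill */
        };
        const int n = sizeof(cases) / sizeof(cases[0]);

        for (int i = 0; i < n; i++) {
            double f = cases[i][0], er = cases[i][1];
            printf("f = %5.1f GHz, eps_r = %.1f -> lambda/4 spacing = %5.2f mm\n",
                   f / 1e9, er, quarter_wave_m(f, er) * 1e3);
        }
        return 0;
    }

For the two-layer arrangement described above, the total stack is on the order of half a wavelength deep (λ/4 + λ/4), which makes clear why resonant absorbers designed against long-wavelength radars become bulky, and why dielectric filling is attractive when thickness is at a premium.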
Split-ring resonator absorber Split-ring resonators (SRRs) in various test configurations have been shown to be extremely effective as radar absorbers. SRR technology can be used in conjunction with the technologies above to provide a cumulative absorption effect. SRR technology is particularly effective when used on faceted shapes that have perfectly flat surfaces which present no direct reflections back to the radar source (such as the F-117A). This technology uses a photographic process to create a resist layer on a thin copper foil on a dielectric backing (thin circuit board material), which is etched into tuned resonator arrays, each individual resonator being in a "C" shape (or another shape, such as a square). Each SRR is electrically isolated, and all dimensions are carefully specified to optimize absorption at a specific radar wavelength. Not being a closed loop "O", the opening in the "C" presents a gap of specific dimension which acts as a capacitor; at 35 GHz the required diameter of the "C" is on the order of millimetres. The resonator can be tuned to specific wavelengths, and multiple SRRs can be stacked with insulating layers of specific thicknesses between them to provide wide-band absorption of radar energy. When stacked, the smaller (high-frequency) SRRs face the radar source first, like a stack of donuts that get progressively larger as one moves away from the radar source; stacks of three have been shown to be effective in providing wide-band attenuation. SRR technology acts in much the same way that antireflective coatings operate at optical wavelengths. SRR technology provides the most effective radar attenuation of any previously known technology and is one step closer to reaching complete invisibility (total stealth, "cloaking"). Work is also progressing at visual wavelengths, as well as infrared wavelengths (LIDAR-absorbing materials). Carbon nanotube Radars work in the microwave frequency range, which can be absorbed by multi-wall carbon nanotubes (MWNTs). Applying MWNTs to an aircraft would cause the incident radar energy to be absorbed, so the aircraft would appear to have a smaller radar cross-section. One such application could be to paint the nanotubes onto the plane. Recently there has been some work done at the University of Michigan regarding carbon nanotubes' usefulness as stealth technology on aircraft. It has been found that, in addition to the radar-absorbing properties, the nanotubes neither reflect nor scatter visible light, making the aircraft essentially invisible at night, much like painting current stealth aircraft black except much more effective. Limitations in manufacturing, however, mean that production of nanotube-coated aircraft is not currently possible. One proposed way to overcome these limitations is to cover small particles with the nanotubes and suspend the nanotube-covered particles in a medium such as paint, which can then be applied to a surface, like a stealth aircraft. See also Lidar Radar cross-section (RCS) Stealth technology Radar jamming and deception References Notes Bibliography The Schornsteinfeger Project, CIOS Report XXVI-24. External links Suppliers of Radar absorbent materials Electromagnetic compatibility Radar Military technology Materials
Radiation-absorbent material
[ "Physics", "Engineering" ]
3,196
[ "Electromagnetic compatibility", "Radio electronics", "Materials", "Electrical engineering", "Matter" ]
4,148,087
https://en.wikipedia.org/wiki/Displacement%20ventilation
Displacement ventilation (DV) is a room air distribution strategy where conditioned outdoor air is supplied at a low velocity from air supply diffusers located near floor level and extracted above the occupied zone, usually at ceiling height. System design A typical displacement ventilation system, such as one in an office space, supplies conditioned cold air from an air handling unit (AHU) through a low-induction air diffuser. Diffuser types vary by application. Diffusers can be located against a wall ("wall-mounted"), at the corner of a room ("corner-mounted"), or above the floor but not against a wall ("free-standing"). The cool air, being denser than the room air, accelerates under buoyancy and spreads in a thin layer over the floor, reaching a relatively high velocity before rising as it exchanges heat with heat sources (e.g., occupants, computers, lights). Absorbing heat from these sources, the cold air becomes warmer and less dense. The density difference between cold air and warm air creates upward convective flows known as thermal plumes. Instead of working as a stand-alone system, a displacement ventilation system can also be coupled with other cooling and heating sources, such as radiant chilled ceilings or baseboard heating. History Displacement ventilation was first applied in an industrial building in Scandinavia in 1978, and has frequently been used in similar applications, as well as office spaces, throughout Scandinavia since that time. By 1989, it was estimated that displacement ventilation accounted for 50% of industrial applications and 25% of office applications within the Nordic countries. Applications in the United States have not been as widespread as in Scandinavia. Some research has been done to assess the practicality of this approach in U.S. markets, given different typical space designs and use in hot and humid climates, as well as research to assess the potential indoor environmental quality and energy-saving benefits of this strategy in the U.S. and elsewhere. Applications Displacement ventilation has been applied in many notable buildings, such as the Suvarnabhumi International Airport in Bangkok, Thailand, the NASA Jet Propulsion Laboratory Flight Projects Center building, and the San Francisco International Airport Terminal 2, among other applications. General characteristics Airflow distribution The thermal plumes and the supply air from the diffusers, which determine the air velocity at floor level, play an important role in DV systems. It is necessary to carefully set the airflow rate from the diffuser to avoid drafts (a rough sizing sketch is given at the end of this section). Conditioning type Due to the unique properties of thermal stratification, displacement ventilation is typically used for cooling rather than for heating. In many cases, a separate heating source, such as a radiator or baseboard, is used during heating periods. Space requirement Displacement ventilation is best suited for taller spaces (higher than 3 meters [10 feet]). Standard mixing ventilation may be better suited for smaller spaces where air quality is not as great a concern, such as single-occupant offices, and where the ceiling is low (e.g., lower than 2.3 meters [7.5 feet]). 
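A rough sense of the supply airflow rates involved can be obtained from a simple sensible-heat balance: the flow must carry the room's cooling load across the temperature difference between extract and supply air. The sketch below uses assumed example values for room size, load density and temperature difference; they are not design figures from any guideline, and a real design would also have to check floor-level draft and stratification.

    #include <stdio.h>

    int main(void)
    {
        /* Assumed example values, for illustration only */
        const double floor_area_m2 = 50.0;    /* a classroom-sized room               */
        const double load_w_per_m2 = 30.0;    /* sensible cooling load density, W/m^2 */
        const double delta_t_k     = 6.0;     /* extract minus supply air temperature */

        const double rho_air = 1.2;           /* air density, kg/m^3     */
        const double cp_air  = 1005.0;        /* specific heat, J/(kg K) */

        double load_w   = floor_area_m2 * load_w_per_m2;           /* total load, W      */
        double flow_m3s = load_w / (rho_air * cp_air * delta_t_k); /* volume flow, m^3/s */
        double flow_ls  = flow_m3s * 1000.0;                       /* litres per second  */

        printf("Cooling load: %.0f W over %.0f m^2\n", load_w, floor_area_m2);
        printf("Required supply airflow: %.3f m^3/s (%.0f L/s, %.1f L/s per m^2)\n",
               flow_m3s, flow_ls, flow_ls / floor_area_m2);
        return 0;
    }

With these assumptions the room needs roughly 0.2 m^3/s of supply air, around 4 L/s per square metre of floor; higher load densities push the required flow, and with it the risk of drafts at floor level, up proportionally, which is one reason displacement ventilation is often paired with another cooling source when loads are high.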
Benefits and limitations Local discomfort: vertical temperature difference and draft Displacement ventilation systems are quieter than conventional overhead systems and have better ventilation efficiency; hence, they can enhance indoor air quality and provide a desirable acoustic environment. Displacement ventilation systems are appropriate in spaces where a high ventilation rate is required, such as classrooms, conference rooms, and offices. Displacement ventilation can be a cause of discomfort due to the large vertical temperature gradient and drafts. According to Melikov and Pitchurov's research, sensations of cold caused by vertical temperature difference and draft usually occur in the lower leg, ankle and feet region, while warm sensations are reported at the head. The research also indicates that the draft rating model can predict the draft risk with good accuracy in rooms with displacement ventilation systems. There is a tradeoff inherent in these two issues: by increasing the flow rate (and the ability to remove greater thermal loads), the vertical temperature gradient can be reduced, but this could increase the risk of drafts. Pairing displacement ventilation with radiant chilled ceilings is an effort to mitigate this problem. According to some studies, displacement ventilation systems can only provide acceptable comfort if the corresponding cooling load is less than about 13 Btu/h per square foot (40 W/m2). Indoor air quality One possible benefit of displacement ventilation is the superior indoor air quality achieved by exhausting contaminated air out of the room. Better air quality is achieved when the pollution source is also a heat source. The effectiveness of displacement ventilation at removing particulate contaminants has been investigated recently. Small aqueous droplets containing infectious nuclei are frequently released in hospital rooms and other indoor spaces, and typically settle through the ambient air at a speed of order 1–10 mm/s. In cold climates or seasons, sufficiently small droplets are extracted from the top of a displacement-ventilated space if the mean upward air speed exceeds the particle settling speed. However, laboratory experiments have shown that larger droplets may settle faster than the air moves. In this case, the large droplets are not extracted effectively from a space with upward displacement ventilation, and their concentration increases if the ventilation rate is increased. In warmer climates or seasons, large-scale instabilities in the concentration of contaminants may occur within a space with downward displacement ventilation. Energy consumption Some studies have demonstrated that displacement ventilation may save energy as compared to standard mixing ventilation, depending on the use type of the building, its design, massing and orientation, and other factors. However, numerical simulation is the main method for evaluating the energy consumption of displacement ventilation, since year-long measurements are too expensive and time consuming; hence, whether displacement ventilation can help save energy is still debated. In general, displacement ventilation is attractive for the core region of a building, since no heating is needed there; the perimeter zones, however, require high cooling energy. Design guidelines Different guidelines have been published to provide guidance on designing displacement ventilation systems, including: Skistad H., Mundt E., Nielsen P.V., Hagstrom K., Railo J. (2002). Displacement Ventilation in Non-Industrial Premises. Federation of European Heating and Air-conditioning Associations. Skistad, H. (1994). Displacement Ventilation. Research Studies Press, John Wiley & Sons, Ltd., West Sussex, UK. Chen, Q. and Glicksman, L. (2003). 
Performance Evaluation and Development of Design Guidelines for Displacement Ventilation. Atlanta: ASHRAE. Among the guidelines listed above, the one developed by Chen and Glicksman is aimed specifically at meeting U.S. standards. Below is a brief description of each step of their guideline. Step 1) Judge the applicability of displacement ventilation. Step 2) Calculate the summer design cooling load. Step 3) Determine the required flow rate of the supply air for summer cooling. Step 4) Find the required flow rate of fresh air for acceptable indoor air quality. Step 5) Determine the supply air flow rate. Step 6) Calculate the supply airflow rate. Step 7) Determine the ratio of the fresh air to the supply air. Step 8) Select the supply air diffuser size and number. Step 9) Check the winter heating situation. Step 10) Estimate the first costs and annual energy consumption. List of buildings using displacement ventilation See also Underfloor air distribution (UFAD) References Ventilation Sustainable architecture Environmental design Low-energy building Sustainable building
Displacement ventilation
[ "Engineering", "Environmental_science" ]
1,508
[ "Environmental design", "Sustainable building", "Sustainable architecture", "Building engineering", "Construction", "Design", "Environmental social science", "Architecture" ]
4,148,145
https://en.wikipedia.org/wiki/Laminar%20flow%20cabinet
A laminar flow cabinet or tissue culture hood is a partially enclosed bench work surface designed to prevent contamination of biological samples, semiconductor wafers, or any particle-sensitive materials. Air is drawn through a HEPA filter and blown in a very smooth laminar flow in a narrow vertical curtain, separating the interior of the cabinet from the environment around it. The cabinet is usually made of stainless steel with no gaps or joints where spores might collect. Despite their similar appearance, a laminar flow cabinet should not be confused with a fume hood. A laminar flow cabinet blows unfiltered exhaust air towards the worker and is not safe for work with pathogenic agents, while a fume hood maintains negative pressure with constant exhaust to protect the user, but does not protect the work materials from contamination by the surrounding environment. A biosafety cabinet is also easily confused with a laminar flow cabinet, but like the fume hood it is primarily designed to protect the worker rather than the biological samples. This is achieved by drawing surrounding air in and exhausting it through a HEPA filter to remove potentially hazardous microorganisms. Laminar flow cabinets exist in both horizontal and vertical configurations, and there are many different types of cabinets with a variety of airflow patterns and acceptable uses. Cabinets may have a UV-C germicidal lamp to sterilize the interior and contents before use to prevent contamination of the experiment. Germicidal lamps are usually kept on for fifteen minutes to sterilize the interior before the cabinet is used. The light must be switched off when the cabinet is being used, to limit exposure of skin and eyes, as stray ultraviolet light emissions can cause cancer and cataracts. See also Asepsis Biosafety cabinet Fume hood References External links NSF/ANSI Standard 49 Laboratory equipment Microbiology equipment Ventilation
Laminar flow cabinet
[ "Biology" ]
382
[ "Microbiology equipment" ]
4,148,166
https://en.wikipedia.org/wiki/Social%20positioning%20method
The social positioning method (SPM) studies space-time behaviour by analysing the location coordinates of mobile phones and the social characteristics of the people carrying them. The SPM methods and experiments were developed in Estonia by Positium and the Institute of Geography at the University of Tartu during 2003-2006. The biggest advantage of mobile positioning-based methods is that mobile phones are widespread, positioning works inside buildings, and the collection of movement data is done by a third party at regular intervals. The disadvantage of mobile positioning today is its relatively low precision; the growing generation of GPS-equipped phones will raise positioning accuracy. The most important problems of SPM are related to data security, as well as concerns about unauthorized personal surveillance. These problems can be addressed with further development of location-based services (LBS) and relevant legal and organisational regulation. Today, mobile positioning can be applied only after obtaining participants' personal consent. References Mobile technology
Social positioning method
[ "Technology" ]
218
[ "nan" ]
4,148,511
https://en.wikipedia.org/wiki/Technology%20trajectory
Technology trajectory refers to a single branch in the evolution of a technological design of a product or service, with nodes representing separate designs. Because a technology trajectory is a single branch, new technologies are expected to build on earlier designs and to lay the ground for future ones. It can also be defined as the path by which innovations in a given field occur. Movement along the technology trajectory is associated with research and development. Due to the institutionalization of ideas, markets, and professions, technology development can get 'stuck' (locked in) within one trajectory, and firms and engineers become unable to adapt to ideas and innovation from the outside. The study of technological trajectories therefore raises three questions: 1) when a technology will lock in to a trajectory, 2) when a technology may break out of lock-in, and 3) when competing technologies may co-exist in balance. A lock-in occurs when a technology developing along a particular trajectory becomes stuck there because of the surrounding circumstances; not all technologies are permanently locked into a trajectory. Consider, for example, the trajectory of increasing resource use. In 1929, an employee of the USGS, wanting to make sure there would be enough materials and technological capacity for metal production after the war, identified four important factors governing metal production: geology, technology, economics, and politics. Technical factors enter into mining, treatment, and refining. "The history of sulfur extraction and production technology also reflects continuous improvement upon processes developed from other industries to meet changing materials use requirements and societal needs". Sulfur is found deep underground or under water. The Clean Air Act of 1970 set rules for recovering sulfur from oil refining, the processing of sulfide ores, and even fuel combustion for electricity generation, which required new technologies to be developed to comply with the Act. The continuous improvement of sulfur extraction shows how this technological trajectory has developed over the years. Technology trajectories do not concern only firms and engineers; they also shape healthcare, schools, and everyday life, raising questions about environmental impacts and about how trajectories affect everyone. Because technology now pervades daily life, a deliberate sense of trajectory is needed to guide where development should go. Technology shapes how we learn, gather information, move forward, and change. Technology is like a policy because it tells us how we are supposed to do things, and makes some ways of doing things more rational and practical than others. See also Innovation Thomas Samuel Kuhn Social shaping of technology Technological paradigm References Further reading Technological change Science and technology studies
Technology trajectory
[ "Technology" ]
611
[ "Science and technology studies" ]
4,148,657
https://en.wikipedia.org/wiki/Liquid%20metal%20embrittlement
Liquid metal embrittlement (also known as LME and liquid metal induced embrittlement) is a phenomenon of practical importance in which certain ductile metals experience a drastic loss in tensile ductility or undergo brittle fracture when exposed to specific liquid metals. Generally, tensile stress, either externally applied or internally present, is needed to induce embrittlement. Exceptions to this rule have been observed, as in the case of aluminium in the presence of liquid gallium. This phenomenon has been studied since the beginning of the 20th century. Many of its phenomenological characteristics are known and several mechanisms have been proposed to explain it. The practical significance of liquid metal embrittlement is revealed by the observation that several steels experience ductility losses and cracking during hot-dip galvanizing or during subsequent fabrication. Cracking can occur catastrophically, and very high crack growth rates have been measured. Similar metal embrittlement effects can be observed even in the solid state, when one of the metals is brought close to its melting point, e.g. cadmium-coated parts operating at high temperature; this phenomenon is known as solid metal embrittlement. Characteristics Mechanical behavior Liquid metal embrittlement is characterized by a reduction in the threshold stress intensity, the true fracture stress or the strain to fracture when tested in the presence of liquid metals, as compared with the values obtained in tests without the liquid metal. The reduction in fracture strain is generally temperature dependent, and a "ductility trough" is observed as the test temperature is decreased. A ductile-to-brittle transition behaviour is also exhibited by many metal couples. The shape of the elastic region of the stress-strain curve is not altered, but the plastic region may be changed during LME. Very high crack propagation rates, varying from a few centimeters per second to several meters per second, are induced in solid metals by the embrittling liquid metals. An incubation period and a slow pre-critical crack propagation stage generally precede the final fracture. Metal chemistry It is believed that there is specificity in the solid-liquid metal combinations experiencing LME. There should be limited mutual solubility for the metal couple to cause embrittlement. Excess solubility makes sharp crack propagation difficult, while a complete absence of solubility prevents wetting of the solid surface by the liquid metal and so prevents LME. The presence of an oxide layer on the solid metal surface also prevents good contact between the two metals and stops LME. The chemical compositions of the solid and liquid metals affect the severity of embrittlement. The addition of third elements to the liquid metal may increase or decrease the embrittlement and alter the temperature region over which embrittlement is seen. Metal combinations which form intermetallic compounds do not cause LME. There are a wide variety of LME couples; most technologically important are the LME of aluminum and steel alloys. Metallurgy Alloying of the solid metal alters its LME. Some alloying elements may increase the severity while others may prevent LME. The action of the alloying element is known to be segregation to the grain boundaries of the solid metal and alteration of the grain boundary properties. Accordingly, maximum LME is seen in cases where alloying elements have saturated the grain boundaries of the solid metal. The hardness and deformation behaviour of the solid metal affect its susceptibility to LME. Generally, harder metals are more severely embrittled. Grain size greatly influences LME: solids with larger grains are more severely embrittled, and the fracture stress varies inversely with the square root of the grain diameter. The brittle-to-ductile transition temperature is also increased by increasing grain size (a simple numerical illustration of the grain-size dependence follows below). 
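The inverse-square-root dependence of fracture stress on grain diameter just described can be illustrated with a small numerical sketch. The Hall-Petch-style form and all of the constants used below are arbitrary illustrative values, chosen only to show the trend; they are not measured data for any particular metal couple.

    #include <stdio.h>
    #include <math.h>

    /* Illustrative Hall-Petch-style relation: sigma_f = sigma_0 + k / sqrt(d).
       The constants are arbitrary example values, not measured data. */
    static double fracture_stress_mpa(double grain_diameter_um)
    {
        const double sigma_0_mpa   = 50.0;   /* grain-size-independent term              */
        const double k_mpa_sqrt_um = 400.0;  /* grain-boundary strengthening coefficient */
        return sigma_0_mpa + k_mpa_sqrt_um / sqrt(grain_diameter_um);
    }

    int main(void)
    {
        const double diameters_um[] = { 10.0, 40.0, 160.0, 640.0 };
        const int n = sizeof(diameters_um) / sizeof(diameters_um[0]);

        for (int i = 0; i < n; i++) {
            printf("grain diameter %6.0f um -> fracture stress %6.1f MPa\n",
                   diameters_um[i], fracture_stress_mpa(diameters_um[i]));
        }
        return 0;
    }

In this toy model each fourfold increase in grain diameter halves the grain-boundary contribution to the fracture stress, which is the sense in which coarse-grained solids fracture at lower stresses and are more easily embrittled.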
Physico-chemical properties The interfacial energy between the solid and liquid metals and the grain boundary energy of the solid metal greatly influence LME. These energies depend upon the chemical compositions of the metal couple. Test parameters External parameters such as temperature, strain rate, stress and the time of exposure to the liquid metal prior to testing affect LME. Temperature produces a ductility trough and a ductile-to-brittle transition behaviour in the solid metal. The temperature range of the trough, as well as the transition temperature, are altered by the composition of the liquid and solid metals, the structure of the solid metal and other experimental parameters. The lower limit of the ductility trough generally coincides with the melting point of the liquid metal. The upper limit is strain-rate sensitive. Temperature also affects the kinetics of LME. An increase in strain rate increases the upper limit temperature as well as the crack propagation rate. In most metal couples LME does not occur below a threshold stress level. Testing typically involves tensile specimens, but more sophisticated testing using fracture mechanics specimens is also performed. Mechanisms Many theories have been proposed for LME. The major ones are listed below: The dissolution-diffusion model of Robertson and Glikman says that adsorption of the liquid metal on the solid metal induces dissolution and inward diffusion. Under stress, these processes lead to crack nucleation and propagation. The brittle fracture theory of Stoloff and Johnson, Westwood and Kamdar proposed that the adsorption of liquid metal atoms at the crack tip weakens inter-atomic bonds and propagates the crack. Gordon postulated a model based on diffusion-penetration of liquid metal atoms to nucleate cracks which, under stress, grow to cause failure. The ductile failure model of Lynch and Popovich predicted that adsorption of the liquid metal leads to the weakening of atomic bonds and the nucleation of dislocations, which move under stress, pile up and work harden the solid. Also, dissolution helps in the nucleation of voids, which grow under stress and cause ductile failure. All of these models, with the exception of the dissolution-diffusion model, utilize the concept of an adsorption-induced lowering of the surface energy of the solid metal as the central cause of LME. They have succeeded in predicting many of the phenomenological observations. However, quantitative prediction of LME is still elusive. Mercury embrittlement The most common liquid metal to cause embrittlement is mercury, as it is a common contaminant encountered in the processing of hydrocarbons from petroleum reservoirs. The embrittling effects of mercury were first recognized by Pliny the Elder circa 78 AD. Mercury spills present an especially significant danger for airplanes. The aluminium-zinc-magnesium-copper alloy DTD 5050B is especially susceptible. The Al-Cu alloy DTD 5020A is less susceptible. Spilled elemental mercury can be immobilized and made relatively harmless by silver nitrate. On 1 January 2004, the Moomba, South Australia, natural gas processing plant operated by Santos suffered a major fire. 
The gas release that led to the fire was caused by the failure of a heat exchanger (cold box) inlet nozzle in the liquids recovery plant. The failure of the inlet nozzle was due to liquid metal embrittlement of the train B aluminium cold box by elemental mercury. Popular culture Liquid metal embrittlement plays a central role in the novel Killer Instinct by Joseph Finder. In the film Big Hero 6, Honey Lemon, voiced by Genesis Rodriguez, uses liquid metal embrittlement in her lab. See also Embrittlement Hydrogen embrittlement References Building defects Materials degradation Fracture mechanics
Liquid metal embrittlement
[ "Materials_science", "Engineering" ]
1,492
[ "Structural engineering", "Fracture mechanics", "Materials science", "Building defects", "Materials degradation", "Mechanical failure" ]