id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
14,709,342 | https://en.wikipedia.org/wiki/Cation%20channel%20superfamily | The transmembrane cation channel superfamily was defined in InterPro and Pfam as the family of tetrameric ion channels. These include the sodium, potassium, calcium, ryanodine receptor, HCN, CNG, CatSper, and TRP channels. This large group of ion channels apparently includes several families of the TCDB transporter classification.
They are described as minimally having two transmembrane helices flanking a loop which determines the ion selectivity of the channel pore. Many eukaryotic channels have four additional transmembrane helices (TM), related to, or vestigial remnants of, voltage gating. The proteins with only two transmembrane helices are most commonly found in bacteria; this group also includes the 2-TM inward-rectifier potassium channels found primarily in eukaryotes. Additional regulatory domains are common and serve to modulate ion conduction and channel gating. The pores may be homotetramers or heterotetramers; in heterotetramers the subunits may be encoded by distinct genes or as multiple pore domains within a single polypeptide. The HVCN1 and putative tyrosine-protein phosphatase proteins do not contain an expected ion conduction pore domain, but rather have homology only to the voltage sensor domain of voltage-gated ion channels.
Human channels with 6 TM helices
Cation
Transient receptor potential
Canonical
TRPC1; TRPC3; TRPC4; TRPC5; TRPC6; TRPC7
Melastatin
TRPM1; TRPM2; TRPM3; TRPM4; TRPM5; TRPM6; TRPM7; TRPM8
Vanilloid
TRPV1; TRPV2; TRPV3; TRPV4; TRPV5; TRPV6
Mucolipin
MCOLN1; MCOLN2; MCOLN3;
Ankyrin
TRPA1
TRPP
PKD1L3;
Calcium
Voltage-dependent
CACNA1A; CACNA1B; CACNA1C; CACNA1D; CACNA1E; CACNA1F; CACNA1G; CACNA1H; CACNA1I; CACNA1S
Sperm
CATSPER1; CATSPER2; CATSPER3; CATSPER4
Ryanodine receptor
RYR1; RYR2; RYR3
Potassium
Voltage-gated potassium
Delayed rectifier
Kvα1.x - Shaker-related: Kv1.1 (KCNA1), Kv1.2 (KCNA2), Kv1.3 (KCNA3), Kv1.5 (KCNA5), Kv1.6 (KCNA6), Kv1.7 (KCNA7), Kv1.8 (KCNA10)
Kvα2.x - Shab-related: Kv2.1 (KCNB1), Kv2.2 (KCNB2)
Kvα3.x - Shaw-related: Kv3.1 (KCNC1), Kv3.2 (KCNC2)
Kvα7.x: Kv7.1 (KCNQ1) - KvLQT1, Kv7.2 (KCNQ2), Kv7.3 (KCNQ3), Kv7.4 (KCNQ4), Kv7.5 (KCNQ5)
Kvα10.x: Kv10.1 (KCNH1)
A-type potassium
Kvα1.x - Shaker-related: Kv1.4 (KCNA4)
Kvα3.x - Shaw-related: Kv3.3 (KCNC3), Kv3.4 (KCNC4)
Kvα4.x - Shal-related: Kv4.1 (KCND1), Kv4.2 (KCND2), Kv4.3 (KCND3)
Outward-rectifying
Kvα10.x: Kv10.2 (KCNH5)
Inwardly-rectifying
Kvα11.x - ether-a-go-go potassium channels: Kv11.1 (KCNH2) - hERG, Kv11.2 (KCNH6), Kv11.3 (KCNH7)
Slowly activating
Kvα12.x: Kv12.1 (KCNH8), Kv12.2 (KCNH3), Kv12.3 (KCNH4)
Modifier/silencer
Kvα5.x: Kv5.1 (KCNF1)
Kvα6.x: Kv6.1 (KCNG1), Kv6.2 (KCNG2), Kv6.3 (KCNG3), Kv6.4 (KCNG4)
Kvα8.x: Kv8.1 (KCNV1), Kv8.2 (KCNV2)
Kvα9.x: Kv9.1 (KCNS1), Kv9.2 (KCNS2), Kv9.3 (KCNS3)
Calcium-activated
BK
KCa1.1 (BK, Slo1, Maxi-K, KCNMA1)
SK
KCa2.x: KCa2.1 (KCNN1) - SK1, KCa2.2 (KCNN2) - SK2, KCa2.3 (KCNN3) - SK3
KCa3.x: KCa3.1 (KCNN4) - SK4
KCa4.x: KCa4.1 (KCNT1) - SLACK, KCa4.2 (KCNT2) - SLICK
IK
KCa3.1 (IKCa1, SK4, KCNN4)
Other subfamilies
KCa5.1 (Slo3, KCNU1)
Inward-rectifier potassium
Sodium
NALCN
SCN1A; SCN2A; SCN2A2; SCN3A; SCN4A; SCN5A; SCN7A; SCN8A; SCN9A; SCN10A; SCN11A
SLC9A10; SLC9A11
Cyclic nucleotide-gated
CNGA1; CNGA2; CNGA3; CNGA4
CNGB1; CNGB3
HCN1; HCN2; HCN3; HCN4
ITPR1; ITPR2; ITPR3
Proton
HVCN1
Related proteins
TPTE, part of the larger Voltage sensitive phosphatase family
Human channels with 2 TM helices in each subunit
Potassium
Tandem pore domain potassium channel
KCNK1; KCNK2; KCNK3; KCNK4; KCNK5; KCNK6; KCNK7; KCNK9; KCNK10; KCNK12; KCNK13; KCNK15; KCNK16; KCNK17; KCNK18
Non-human channels
Two-pore
TPCN1
TPCN2
Pore-only potassium
KcsA
Ligand-gated potassium
GluR0
Voltage-gated potassium
KvAP
Prokaryotic KCa
Kch
MthK
TrkA/TrkH
KtrAB
GsuK
TM1088
Voltage and cyclic nucleotide gated potassium
MlotiK1
Sodium
NaChBac
NaVAb
NaVAe1
NaVAp
NaVMm
Non-selective
NaK
Prokaryotic inward-rectifier potassium
KirBac
Engineered
NaK2CNG
NaK2K
References
External links
Protein domains
Protein families
Transmembrane proteins
Ion channels | Cation channel superfamily | [
"Chemistry",
"Biology"
] | 1,606 | [
"Protein classification",
"Protein domains",
"Protein families",
"Neurochemistry",
"Ion channels"
] |
14,709,851 | https://en.wikipedia.org/wiki/Krzysztof%20Matyjaszewski | Krzysztof "Kris" Matyjaszewski (born April 8, 1950) is a Polish-American chemist. He is the J.C. Warner Professor of the Natural Sciences at Carnegie Mellon University. Matyjaszewski is best known for the discovery of atom transfer radical polymerization (ATRP), a novel method of polymer synthesis that has revolutionized the way macromolecules are made.
Matyjaszewski was elected a member of the National Academy of Engineering in 2006 and the National Academy of Sciences in 2019 for expanding the capabilities of controlled/living polymerizations and developing ATRP, a robust catalytic process for the radical polymerization of monomers. He received the prestigious Wolf Prize in Chemistry in 2011, the Dreyfus Prize in the Chemical Sciences in 2015, the Grand Prix de la Fondation de la Maison de la Chimie (France) in 2020, and the National Academy of Sciences Award in Chemical Sciences in 2023.
Education and career
Matyjaszewski began studying chemistry at Lodz University of Technology in the late 1960s and later graduated from the Petrochemical University in Moscow. He received his doctorate from the Center of Molecular and Macromolecular Studies of the Polish Academy of Sciences in 1976 and completed a postdoctoral fellowship at the University of Florida in 1977. From 1978 to 1984, he was a research associate of the Polish Academy of Sciences. From 1984 to 1985, Matyjaszewski held appointments at the University of Paris, first as a research associate and then as a visiting professor. In 1985, he joined the chemistry department at Carnegie Mellon University. He founded and currently directs the university's Center for Macromolecular Engineering. This center is funded both by an active consortium and by government agencies, including the National Science Foundation. In 1998, Matyjaszewski was appointed the J.C. Warner Professor of Natural Sciences. In 2004 he was named a university professor, the highest distinction faculty can achieve at Carnegie Mellon. Matyjaszewski is also an adjunct professor in Carnegie Mellon's department of materials science and chemical engineering.
From 1994 to 1998, Matyjaszewski served as head of the department of chemistry at Carnegie Mellon and assisted in recruiting additional faculty with strengths in polymer chemistry. At the same time, he formed a research consortium with various industrial corporations to expand the understanding of controlled radical polymerization, including ATRP, and accelerate the transfer of this technology to different commercial applications. A second consortium, the CRP Consortium, formed under his leadership in 2001, continues and expands these efforts, training university and industrial scientists in procedures for developing responsive polymeric materials; it has comprised 60 industrial members. The same year, Matyjaszewski became an adjunct professor at the Polish Academy of Sciences and at the Department of Chemical and Petroleum Engineering of the University of Pittsburgh.
Matyjaszewski is a co-inventor on 72 issued U.S. patented technologies and holds over 150 international patents.
One of the leading educators in the field of polymer chemistry, Matyjaszewski has mentored more than 300 undergraduate, graduate and postdoctoral students since joining Carnegie Mellon. He has co-authored 25 books, 100 book chapters and more than 1300 peer-reviewed scientific papers. According to Google Scholar, his work has been cited in the scientific literature more than 203,000 times, with an h-index of 214, making him one of the most cited chemists in the world.
Matyjaszewski has received numerous awards for his work, including the 2023 National Academy of Sciences Award in Chemical Sciences, 2020 Grand Prix de la Fondation de la Maison de la Chimie, France, 2017 Benjamin Franklin Medal in Chemistry, 2017 Medema Lecture Award, 2015 Dreyfus Prize in the Chemical Sciences, 2014 National Institute of Materials Science (Japan) Award, 2012 Dannie Heineman Prize from the Göttingen Academy of Sciences, 2011 Wolf Prize in Chemistry and the 2009 Presidential Green Chemistry Challenge Award. He has been honored by the American Chemical Society (ACS) with the 2002 Polymer Chemistry Award, 2011 Applied Polymer Science Award, 2011 Herman Mark Award, 2015 Charles G. Overberger Prize, 2019 Chemistry of Materials Award, 2020 Paul Flory Polymer Education Award and 2020 Nichols Medal. He is a member of the U.S. National Academy of Engineering, National Academy of Sciences and National Academy of Inventors, as well as a member of the Polish, Australian and European Academies of Sciences. He also is an honorary member of the Israeli and Chinese Chemical Societies.
Matyjaszewski's work has been recognized in his native country of Poland. In 2004, he received the annual Prize of the Foundation for Polish Science, the most prestigious scientific award in Poland, referred to as the Polish Nobel Prize. In 2005 he became a foreign member of the Polish Academy of Sciences. He received honorary degrees from the Polish universities Lodz University of Technology in 2007, Poznań University in 2016 and Rzeszow University of Technology in 2024. He has also received honorary degrees from the Technion, Israel, the University of Ghent, Belgium, the Russian Academy of Sciences, the University of Athens, Greece, the Polytechnic Institute in Toulouse, France, Pusan National University in South Korea, Universite P. & M. Curie, Sorbonne in Paris, the University of Padua, Italy, the University of Coimbra, Portugal, and the University of Crete, Greece.
Awards and honors
1974 Award of the Scientific Secretary of the Polish Academy of Sciences
1980 Award of the Polish Chemical Society
1981 Award of the Polish Academy of Sciences
1989 Presidential Young Investigator Award, National Science Foundation
1995 Carl S. Marvel Creative Polymer Chemistry Award, American Chemical Society
1998 Elf Chair of the French Academy of Sciences
1999 Humboldt Prize for Senior Scientists
2001 Fellow, Polymeric Materials Science and Engineering Fellow, American Chemical Society
2001 Pittsburgh Award, American Chemical Society
2002 Polymer Chemistry Award, American Chemical Society
2004 Cooperative Research Award, American Chemical Society
2004 Prize of the Foundation for Polish Science
2005 Chair, Gordon Research Conference, Polymer East
2005 Foreign Member, Polish Academy of Sciences
2005 Macro Group Medal, Royal Society of Chemistry
2006 Member, National Academy of Engineering
2007 Herman Mark Senior Scholar Award, American Chemical Society
2008 Clarivate Citation Laureate
2009 Presidential Green Chemistry Challenge Award
2010 Fellow, American Chemical Society Polymer Chemistry Division
2010 Gutenberg Award, University of Mainz
2011 Fellow, American Chemical Society
2011 Applied Polymer Science Award, American Chemical Society
2011 Japanese Society Polymer Science Award
2011 Wolf Prize in Chemistry, with Stuart Alan Rice of the University of Chicago and Ching W. Tang of the University of Rochester
2012 Dannie-Heineman Prize, Göttingen Academy of Sciences
2012 Société Chimique de France Prize
2012 Marie Curie Medal, Polish Chemical Society
2013 Madison Marshall Award, American Chemical Society, Alabama Section
2013 Inaugural Akzo Nobel North America Science Award, American Chemical Society
2014 Fellow, National Academy of Inventors
2014 National Institute for Materials Science (Japan) Award
2015 The Charles Overberger Prize (ACS)
2015 The Dreyfus Prize in the Chemical Sciences
2017 Franklin Institute Award in Chemistry
2019 Member, National Academy of Sciences
2019 Corresponding member, Australian Academy of Science
2019 Chemistry of Materials Award, American Chemical Society
2020 Fellow, European Academy of Sciences
2020 Paul Flory Polymer Education Award, American Chemical Society
2020 William H. Nichols Medal, ACS New York Section
2020 Grand Prix de la Fondation de la Maison de la Chimie
2022 CNRS Ambassador of Chemical Sciences in France
2023 National Academy of Sciences Award in Chemical Sciences
Honorary degrees
2002 – University of Ghent, Belgium
2006 – Russian Academy of Sciences
2007 – Lodz University of Technology, Poland
2008 – University of Athens, Greece
2010 – l'Institut Polytechnique, Toulouse, France
2013 – Pusan National University, Busan, South Korea
2013 – Universite P. & M. Curie, Sorbonne, Paris, France
2015 – Technion, Haifa, Israel
2016 – Adam Mickiewicz University in Poznań, Poznań, Poland
2017 – University of Padua, Padua, Italy
2018 – University of Coimbra, Coimbra, Portugal
2023 – University of Crete, Greece
2024 – Rzeszow University of Technology, Poland
Visiting professorships
ESPCI ParisTech, 2011
University of Pusan, 2010
Lodz University of Technology, 2009
University of Tokyo, Fellow of the Japanese Society of the Promotion of Science, 2005
University of Paris, 1985, 1990, 1997, 1998, 2005
University of Bordeaux, 1996, 2004
Michigan Molecular Institute, 2004
University of Pisa, Italy, 2000
University of Ulm, 1999
University of Strasbourg, 1992
University of Bayreuth, 1991
University of Freiburg, 1988
See also
List of Poles
Timeline of Polish science and technology
References
External links
Homepage at CMU
1950 births
Living people
Polish chemists
Carnegie Mellon University faculty
University of Pittsburgh faculty
Polish emigrants to the United States
Wolf Prize in Chemistry laureates
Foreign members of the Russian Academy of Sciences
Polymer scientists and engineers
Members of the United States National Academy of Engineering
Members of the United States National Academy of Sciences
Fellows of the American Chemical Society
People from Pabianice County | Krzysztof Matyjaszewski | [
"Chemistry",
"Materials_science"
] | 1,829 | [
"Polymer scientists and engineers",
"Physical chemists",
"Polymer chemistry"
] |
14,710,128 | https://en.wikipedia.org/wiki/Potassium%20channel%20tetramerisation%20domain | K+ channel tetramerisation domain is the N-terminal, cytoplasmic tetramerisation domain (T1) of voltage-gated K+ channels. It defines molecular determinants for subfamily-specific assembly of alpha-subunits into functional tetrameric channels. It is distantly related to the BTB/POZ domain.
Potassium channels
Potassium channels are the most diverse group of the ion channel family. They are important in shaping the action potential, and in neuronal excitability and plasticity. The potassium channel family is composed of several functionally distinct isoforms, which can be broadly separated into 2 groups: the practically non-inactivating 'delayed' group and the rapidly inactivating 'transient' group.
These are all highly similar proteins, with only small amino acid changes causing the diversity of the voltage-dependent gating mechanism, channel conductance and toxin binding properties. Each type of K+ channel is activated by different signals and conditions depending on their type of regulation: some open in response to depolarisation of the plasma membrane; others in response to hyperpolarisation or an increase in intracellular calcium concentration; some can be regulated by binding of a transmitter, together with intracellular kinases; while others are regulated by GTP-binding proteins or other second messengers. In eukaryotic cells, K+ channels are involved in neural signalling and generation of the cardiac rhythm, act as effectors in signal transduction pathways involving G protein-coupled receptors (GPCRs) and may have a role in target cell lysis by cytotoxic T-lymphocytes. In prokaryotic cells, they play a role in the maintenance of ionic homeostasis.
Alpha subunits of the channels
All K+ channels discovered so far possess a core of alpha subunits, each comprising either one or two copies of a highly conserved pore loop domain (P-domain). The P-domain contains the sequence (T/SxxTxGxG), which has been termed the K+ selectivity sequence. In families that contain one P-domain, four subunits assemble to form a selective pathway for K+ across the membrane. However, it remains unclear how the 2 P-domain subunits assemble to form a selective pore. The functional diversity of these families can arise through homo- or hetero-associations of alpha subunits or association with auxiliary cytoplasmic beta subunits. K+ channel subunits containing one pore domain can be assigned into one of two superfamilies: those that possess six transmembrane (TM) domains and those that possess only two TM domains. The six TM domain superfamily can be further subdivided into conserved gene families: the voltage-gated (Kv) channels; the KCNQ channels (originally known as KvLQT channels); the EAG-like K+ channels; and three types of calcium (Ca)-activated K+ channels (BK, IK and SK). The 2TM domain family comprises inward-rectifying K+ channels. In addition, there are K+ channel alpha-subunits that possess two P-domains. These are usually highly regulated K+ selective leak channels.
The Kv family can be divided into several subfamilies on the basis of sequence similarity and function. Four of these subfamilies, Kv1 (Shaker), Kv2 (Shab), Kv3 (Shaw) and Kv4 (Shal), consist of pore-forming alpha subunits that associate with different types of beta subunit. Each alpha subunit comprises six hydrophobic TM domains with a P-domain between the fifth and sixth, which partially resides in the membrane. The fourth TM domain has positively charged residues at every third residue and acts as a voltage sensor, which triggers the conformational change that opens the channel pore in response to a displacement in membrane potential. More recently, 4 new electrically-silent alpha subunits have been cloned: Kv5 (KCNF), Kv6 (KCNG), Kv8 and Kv9 (KCNS). These subunits do not themselves possess any functional activity, but appear to form heteromeric channels with Kv2 subunits, and thus modulate Shab channel activity. When highly expressed, they inhibit channel activity, but at lower levels show more specific modulatory actions.
Tetramerization domain
The N-terminal, cytoplasmic tetramerization domain (T1) of voltage-gated potassium channels encodes molecular determinants for subfamily-specific assembly of alpha-subunits into functional tetrameric channels. This domain is found in a subset of a larger group of proteins that contain the BTB/POZ domain.
Human proteins containing this domain
BTBD10; KCNA1; KCNA10; KCNA2; KCNA3; KCNA4; KCNA5; KCNA6;
KCNA7; KCNB1; KCNB2; KCNC1; KCNC2; KCNC3; KCNC4; KCND1;
KCND2; KCND3; KCNF1; KCNG1; KCNG2; KCNG3; KCNG4; KCNRG;
KCNS1; KCNS2; KCNS3; KCNV1; KCNV2; KCTD1; KCTD10; KCTD11;
KCTD12; KCTD13; KCTD14; KCTD15; KCTD16; KCTD17; KCTD18; KCTD19;
KCTD2; KCTD20; KCTD21; KCTD3; KCTD4; KCTD5; KCTD6; KCTD7;
KCTD8; KCTD9; SHKBP1; TNFAIP1;
References
Further reading
Protein domains
Transmembrane proteins | Potassium channel tetramerisation domain | [
"Biology"
] | 1,235 | [
"Protein domains",
"Protein classification"
] |
14,710,215 | https://en.wikipedia.org/wiki/GRAM%20domain | The GRAM domain is found in glucosyltransferases, myotubularins and other membrane-associated proteins. The structure of the GRAM domain is similar to that found in PH domains.
Proteins containing GRAM domains are found in all eukaryotes and bacteria, but not archaea. Various GRAM domains can bind proteins or lipids.
Human proteins containing this domain
GRAMD1A; GRAMD1B; GRAMD1C; GRAMD2A; GRAMD2B; GRAMD4; MTM1; MTMR1; MTMR2; NCOA7; NSMAF; OXR1; SBF1; SBF2; TBC1D8; TBC1D8B; TBC1D9; TBC1D9B; WBP2; WBP2NL; dJ439F8.1;
References
Protein domains
Protein families
Peripheral membrane proteins | GRAM domain | [
"Biology"
] | 193 | [
"Protein families",
"Protein domains",
"Protein classification"
] |
14,710,808 | https://en.wikipedia.org/wiki/Animal%20heme-dependent%20peroxidases | Animal heme-dependent peroxidases are a family of peroxidases. Peroxidases are found in bacteria, fungi, plants and animals. On the basis of sequence similarity, a number of animal heme peroxidases can be categorized as members of a superfamily: myeloperoxidase (MPO); eosinophil peroxidase (EPO); lactoperoxidase (LPO); thyroid peroxidase (TPO); prostaglandin H synthase (PGHS); and peroxidasin.
Function
Myeloperoxidase (MPO) plays a major role in the oxygen-dependent microbicidal system of neutrophils. EPO from eosinophilic granulocytes participates in immunological reactions, and potentiates tumor necrosis factor (TNF) production and hydrogen peroxide release by human monocyte-derived macrophages. MPO (and possibly EPO) primarily uses Cl− ions and H2O2 to form hypochlorous acid (HOCl), which can effectively kill bacteria or parasites. In secreted fluids, LPO catalyses the oxidation of thiocyanate ions (SCN−) by H2O2, producing the weak oxidizing agent hypothiocyanite (OSCN−), which has bacteriostatic activity. TPO uses I− ions and H2O2 to generate iodine, and plays a central role in the biosynthesis of the thyroid hormones T3 and T4. Myeloperoxidase, for example, resides in the human nucleus and lysosome and acts as a defense response to oxidative stress, preventing apoptosis of the cell.
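The halide and pseudohalide oxidations described above can be summarized as overall reactions. The following stoichiometries are the ones commonly cited in textbooks and are given here as an illustrative sketch rather than quoted from this article's sources:

\begin{align*}
\text{MPO:}\quad & \mathrm{H_2O_2 + Cl^- + H^+ \longrightarrow HOCl + H_2O}\\
\text{LPO:}\quad & \mathrm{H_2O_2 + SCN^- \longrightarrow OSCN^- + H_2O}\\
\text{TPO:}\quad & \mathrm{H_2O_2 + 2\,I^- + 2\,H^+ \longrightarrow I_2 + 2\,H_2O}
\end{align*}

In each case the heme iron passes through high-valent intermediates during catalysis, so these equations summarize the net chemistry rather than the mechanism.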
Structure
3D structures of MPO and PGHS have been reported. MPO is a homodimer: each monomer consists of a light (A or B) and a heavy (C or D) chain resulting from post-translational excision of 6 residues from the common precursor. Monomers are linked by a single inter-chain disulfide. Each monomer includes a bound calcium ion. PGHS exists as a symmetric homodimer, each monomer of which consists of 3 domains: an N-terminal epidermal growth factor (EGF) like module; a membrane-binding domain; and a large C-terminal catalytic domain containing the cyclooxygenase and the peroxidase active sites. The catalytic domain shows striking structural similarity to MPO. An example structure of myeloperoxidase, PDB entry 1DNU, was determined by X-ray diffraction at 1.85 angstrom resolution.
Active site
The cyclooxygenase active site, which catalyzes the formation of prostaglandin G2 (PGG2) from arachidonic acid, resides at the apex of a long hydrophobic channel, extending from the membrane-binding domain to the center of the molecule. The peroxidase active site, which catalyzes the reduction of PGG2 to PGH2, is located on the other side of the molecule, at the heme binding site. Both MPO and the catalytic domain of PGHS are mainly alpha-helical, 19 helices being identified as topologically and spatially equivalent; PGHS contains 5 additional N-terminal helices that have no equivalent in MPO. In both proteins, three Asn residues in each monomer are glycosylated.
Human proteins containing this domain
The following is a list of human proteins containing this domain:
DUOX1; DUOX2; EPX; LPO; MPO; PTGS1; PTGS2; PXDNL; TPO
References
External links
Protein domains
Protein families
Integral monotopic proteins
EC 1.11.1 | Animal heme-dependent peroxidases | [
"Biology"
] | 813 | [
"Protein families",
"Protein domains",
"Protein classification"
] |
14,711,012 | https://en.wikipedia.org/wiki/Endonuclease/Exonuclease/phosphatase%20family | The endonuclease/exonuclease/phosphatase family is a structural domain found in a large family of proteins that includes magnesium-dependent endonucleases and many phosphatases involved in intracellular signaling.
Examples
AP endonuclease proteins
DNase I proteins
Synaptojanin, an inositol-1,4,5-trisphosphate phosphatase
Sphingomyelinase
Nocturnin, an NADPH 2' phosphatase
Subfamilies
Inositol polyphosphate related phosphatase
Human proteins containing this domain
2'-PDE; 2-PDE; ANGEL1; ANGEL2; APEX1; APEX2; CCRN4L; CNOT6;
CNOT6L; DNASE1; DNASE1L1; DNASE1L2; DNASE1L3; INPP5A; INPP5B; INPP5D;
INPP5E; INPPL1; KIAA1706; OCRL; PIB5PA; SKIP; SMPD2; SMPD3;
SYNJ1; SYNJ2; TTRAP; Nocturnin;
Notes
References
Protein domains
Peripheral membrane proteins
EC 3.1.3 | Endonuclease/Exonuclease/phosphatase family | [
"Biology"
] | 270 | [
"Protein domains",
"Protein classification"
] |
14,711,458 | https://en.wikipedia.org/wiki/Twin%20bridges | Twin bridges are a set of two bridges running parallel to each other. A pair of twin bridges is often referred to collectively as a twin-span or dual-span bridge. Twin bridges are independent structures and each bridge has its own superstructure, substructure, and foundation. Bridges of this type are often created by building a new bridge parallel to an existing one in order to increase the traffic capacity of the crossing. While most twin-span bridges consist of two identical bridges, this is not always the case.
For a bridge owner, twin bridges can improve the maintenance and management of the structures. For motorists, twin bridges can limit the risk that both directions of traffic will be disrupted by an accident.
Examples
Carquinez Bridge – original cantilever span built in 1927 and later twinned in 1958; a newer suspension span was built in 2003 to replace the original 1927 span, which was later demolished in 2007.
Chesapeake Bay Bridge – twin suspension spans with notable visual differences in construction techniques.
Delaware Memorial Bridge – identical twin spans built 18 years apart from one another.
Maria Skłodowska-Curie Bridge – almost identical triplets.
Donald and Morris Goodkind Bridges – original Art Deco arch span built in 1929; newer steel bridge built in 1976.
Crescent City Connection – original cantilever span opened in 1958; parallel span opened in 1988
Lake Pontchartrain Causeway – the longest twin-span bridge in the world. Original span opened in 1956, second span in 1969
Bi-State Vietnam Gold Star Bridges – pair of cantilever bridges over the Ohio River. First span opened in 1932, second span in 1965.
Blue Water Bridge – border crossing between Port Huron, Michigan (United States) and Sarnia, Ontario (Canada).
Québec Bridge
Sir Leo Hielscher Bridges
Iron Cove Bridge
Tacoma Narrows Bridge
Daniel Carter Beard Bridge
Tappan Zee Bridge – originally a single span in 1955, rebuilt into a new double span in 2017, old bridge demolished
Newburgh–Beacon Bridge – originally a single span in 1963, 2nd span added in 1980
Thaddeus Kosciusko Bridge – commonly referred to as the "Twin Bridges", or just "The Twins"
Kosciuszko Bridge – twin cable-stayed spans, with their pylons mirrored on opposite sides of Newtown Creek
South Grand Island Bridge
References
Structural engineering
Bridge design
Duos | Twin bridges | [
"Engineering"
] | 474 | [
"Structural engineering",
"Bridge design",
"Construction",
"Civil engineering",
"Architecture"
] |
14,711,705 | https://en.wikipedia.org/wiki/Predicting%20the%20timing%20of%20peak%20oil | Predicting the timing of peak oil involves estimation of future production from existing oil fields as well as future discoveries. The initial production model was Hubbert peak theory, first proposed in the 1950s. Since then, many experts have tried to forecast peak oil.
Present range of predictions
As of 2024, the International Energy Agency predicts that peak oil will happen by 2030, while the US Energy Information Administration forecasts a peak in 2050 and OPEC does not see a peak in oil demand before 2045.
Past predictions
1880s-1940s
The idea that human use of petroleum faces sustainability limits attracted practical concern at least as early as the 1880s, as did the related idea that the timing of those limits depends on the extraction technology. The concept of exhausting a natural resource to a point of diminishing returns had some antecedent examples. During the same decades when the modern petroleum industry was launching, the New England whale oil industry had just experienced a peak and was grappling with decline.
Economist and oil analyst Daniel Yergin notes that the first predictions of imminent oil peaks go back to the 1880s, when some American experts believed that exhaustion of the Pennsylvania oil fields would kill the US oil industry. Another wave of peak predictions occurred after World War I.
"... the peak of production will soon be passed, possibly within 3 years. ... There are many well-informed geologists and engineers who believe that the peak in the production of natural petroleum in this country will be reached by 1921 and who present impressive evidence that it may come even before 1920."
- David White, chief geologist, United States Geological Survey (1919)
“The average middle-aged man of today will live to see the virtual exhaustion of the world’s supply of oil from wells,”
- Victor C. Anderson, president of the Colorado School of Mines (1921)
A correspondent named W.D. Hornaday, quoting oil industry executive J.S. Cullinan, described the concerns in a 1918 article for Tractor and Gas Engine Review titled "Petroleum consumption enormous." The article said, "There has been considerable discussion of late as to the possible length of time that the petroleum supply of the United States and the world will hold out." The article quoted Cullinan as saying, "It is just possible, so far as the United States is concerned, that the development and the exhaustion of the supplies may occur within the course of one human life. It is certain that unless radical changes from present methods are applied promptly, all sources of supply within the range of known drilling methods will be exhausted during the life of your children and mine." It turned out that radical changes from 1910s drilling methods were, in fact, applied promptly, and thus, the predicted timeframe was premature; but the underlying concerns (that the vastness of consumption would lead to shortages soon enough to worry about, regardless of the exact decade) did not disappear.
Hubbert's model
In 1956, M. King Hubbert created and first used the models behind peak oil to predict that United States oil production would peak between 1965 and 1971.
In 1956, Hubbert calculated that the world held an ultimate cumulative production of 1.25 trillion barrels of oil, of which 124 billion had already been produced. He projected that world oil production would peak at about 12.5 billion barrels per year, sometime around the year 2000. He repeated the prediction in 1962. World oil production surpassed his predicted peak in 1967 and kept rising; world oil production did not peak on or near the year 2000, and for the year 2012 was 26.67 billion barrels, more than twice the peak rate Hubbert had projected back in 1956.
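Hubbert's projections are commonly described with a symmetric, logistic-type production curve. The following is a brief illustrative sketch assuming the standard logistic form; the rate constant is inferred from the figures quoted above rather than taken from Hubbert's publications:

\[
Q(t) = \frac{Q_\infty}{1 + e^{-k\,(t - t_m)}}, \qquad
P(t) = \frac{dQ}{dt} = k\,Q(t)\left(1 - \frac{Q(t)}{Q_\infty}\right), \qquad
P_{\text{peak}} = \frac{k\,Q_\infty}{4},
\]

where $Q(t)$ is cumulative production, $Q_\infty$ the ultimate cumulative total, $t_m$ the peak year and $P(t)$ the production rate. Taking $Q_\infty \approx 1.25 \times 10^{12}$ barrels and $P_{\text{peak}} \approx 12.5 \times 10^{9}$ barrels per year gives $k = 4P_{\text{peak}}/Q_\infty \approx 0.04\ \text{yr}^{-1}$.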
Hubbert's 1956 peak projection for the United States depended on geological estimates of ultimate recoverable oil resources, but starting in his 1962 publication, he concluded that ultimate oil recovery was an output of his mathematical analysis, rather than an assumption. He regarded his peak oil calculation as independent of reserve estimates.
In 1956, Hubbert confined his peak oil prediction to that crude oil "producible by methods now in use." By 1962, however, his analyses included future improvements in exploration and production. All of Hubbert's analyses of peak oil specifically excluded oil manufactured from oil shale or mined from oil sands. A 2013 study predicting an early peak excluded deepwater oil, tight oil, oil with API gravity less than 17.5, and oil close to the poles, such as that on the North Slope of Alaska, all of which it defined as non-conventional.
In 1974, Hubbert predicted that world peak oil would occur near 2000, in 1995 "if current trends continue". However, in the late 1970s and early 1980s, global oil consumption actually dropped (due to the shift to energy-efficient cars, the shift to electricity and natural gas for heating, and other factors), then rebounded with a lower rate of growth in the mid 1980s. Thus oil production did not peak in 1995, and has climbed to more than double the rate initially projected.
2000s
In 2001, Kenneth S. Deffeyes, professor emeritus of geology at Princeton University, used Hubbert’s theory to predict that world oil production would peak about 2005, with a possible range of 2003 to 2006. He used the observed growth of production plus reserves to calculate ultimate world oil production of 2.12 trillion barrels, noting: "No educated guesses go in." He considered the application of new technology, but wrote: "This much is certain: no initiative put in place starting today can have a substantial effect on the peak production year." His final conclusion was: "There is nothing plausible that could postpone the peak until 2009. Get used to it." As of late 2009, Deffeyes was still convinced that 2005 had been the peak, and wrote: “I think it unlikely that oil production will ever climb back to the 2005 levels.”
The term "peak oil" was popularized by Colin Campbell and Kjell Aleklett in 2002 when they helped form the Association for the Study of Peak Oil and Gas (ASPO). In his publications, Hubbert used the term "peak production rate" and "peak in the rate of discoveries".
According to Matthew Simmons, former chairman of Simmons & Company International and author of Twilight in the Desert: The Coming Saudi Oil Shock and the World Economy, "peaking is one of these fuzzy events that you only know clearly when you see it through a rear view mirror, and by then an alternate resolution is generally too late." On October 26, 2006 Simmons said that global oil production may have peaked in December 2005, though he cautioned that further monitoring of production is required to determine if a peak has actually occurred.
Phibro statistics show that major oil companies hit peak production in 2005. Fatih Birol, chief economist at the International Energy Agency, stated in 2011 that "crude oil production for the world has already peaked in 2006."
Several sources in 2006 and 2007 predicted that worldwide production was at or past its maximum. However, in 2013 OPEC's figures showed that world crude oil production and remaining proven reserves were at record highs.
In a 2006 analysis of Hubbert theory, it was noted that uncertainty in real world oil production amounts and confusion in definitions increases the uncertainty in general of production predictions. By comparing the fit of various other models, it was found that Hubbert's methods yielded the closest fit overall but none of the models were very accurate. In 1956 Hubbert himself recommended using "a family of possible production curves" when predicting a production peak and decline curve.
The July 2007 IEA Medium-Term Oil Market Report projected a 2% non-OPEC liquids supply growth in 2007-2009, peaking in 2008 and receding thereafter as the slate of verifiable investment projects diminishes. They refer to this decline as a plateau. The report expects only a small amount of supply growth from OPEC producers, with 70% of the increase coming from Saudi Arabia, the UAE, and Angola as security and investment issues continue to impinge on oil exports from Iraq, Nigeria and Venezuela.
In October 2007, the Energy Watch Group, a German research group founded by MP Hans-Josef Fell, released a report claiming that oil production peaked in 2006 and would decline by several percent annually. The authors predicted negative economic effects and social unrest as a result. They stated that the IEA production plateau prediction uses purely economic models, which rely on an ability to raise production and discovery rates at will.
Sadad Ibrahim Al Husseini, former head of Saudi Aramco's production and exploration, stated in an October 29, 2007 interview that oil production had likely already reached its peak in 2006, and that assumptions by the IEA and EIA of further production increases by OPEC are "quite unrealistic." Data from the United States Energy Information Administration show that world production leveled out in 2004, and an October 2007 retrospective report by the Energy Watch Group concluded that this data showed the peak of conventional oil production in the third quarter of 2006.
ASPO predicted in their January 2008 newsletter that the peak in all oil (including non-conventional sources), would occur in 2010. This is earlier than the July 2007 newsletter prediction of 2011. ASPO Ireland in its May 2008 newsletter, number 89, revised its depletion model and advanced the date of the peak of overall liquids from 2010 to 2007.
Texas alternative energy activist and oilman T. Boone Pickens stated in 2005 that worldwide conventional oil production was very close to peaking. On June 17, 2008, in testimony before the U.S. Senate Energy and Natural Resources Committee, Pickens stated that "I do believe you have peaked out at 85 million barrels a day globally."
The UK Industry Taskforce on Peak Oil and Energy Security (ITPOES) reported in late October 2008 that peak oil is likely to occur by 2013. ITPOES consists of eight companies: Arup, FirstGroup, Foster + Partners, Scottish and Southern Energy, Solarcentury, Stagecoach Group, Virgin Group, and Yahoo. Their report includes a chapter written by Shell corporation.
In October 2009, a report published by the Government-supported UK Energy Research Centre, following 'a review of over 500 studies, analysis of industry databases and comparison of global supply forecasts', concluded that 'a peak in conventional oil production before 2030 appears likely and there is a significant risk of a peak before 2020'. The authors believe this forecast to be valid 'despite the large uncertainties in the available data'. The study was claimed to be the first to undertake an 'independent, thorough and systematic review of the evidence and arguments in the peak oil debate'. The authors noted that 'forecasts that delay a peak in conventional oil production until after 2030 are at best optimistic and at worst implausible' and warned of the risk that 'rising oil prices will encourage the rapid development of carbon-intensive alternatives that will make it difficult or impossible to prevent dangerous climate change', and that 'early investment in low-carbon alternatives to conventional oil is of considerable importance' in avoiding this scenario.
In 2009, a number of industry leaders and analysts believed that world oil production would peak between 2015 and 2030, with a significant chance that the peak would occur before 2020. They consider dates after 2030 implausible. By comparison, a 2014 analysis of production and reserve data predicted a peak in oil production about 2035. Determining a more specific range is difficult due to the lack of certainty over the actual size of world oil reserves. Unconventional oil is not currently predicted to meet the expected shortfall even in a best-case scenario. For unconventional oil to fill the gap without "potentially serious impacts on the global economy", oil production would have to remain stable after its peak, until 2035 at the earliest.
Non-'peakists' can be divided into several different categories based on their specific criticism of peak oil. Some claim that any peak will not come soon or have a dramatic effect on the world economies. Others claim we will not reach a peak for technological reasons, while still others claim our oil reserves are quickly regenerated abiotically.
CERA, which counts unconventional sources in reserves while discounting EROEI, believes that global production will eventually follow an “undulating plateau” for one or more decades before declining slowly. In 2005 the group predicted that "petroleum supplies will be expanding faster than demand over the next five years."
In 2007, The Wall Street Journal reported that "a growing number of oil-industry chieftains" believed that oil production would soon reach a ceiling for a variety of reasons, and plateau at that level for some time. Several chief executives stated that projections of still higher daily production are unrealistic, contradicting the projections of the International Energy Agency and United States Energy Information Administration.
It has been argued that even a "plateau oil" scenario may cause political and economic disruption due to increasing petroleum demand and price volatility.
Energy Information Administration and USGS 2000 reports
The United States Energy Information Administration projects (as of 2006) world consumption of oil to continue increasing through 2015 and 2030. This would require a more than 35 percent increase in world oil production by 2030. A 2004 paper by the Energy Information Administration based on data collected in 2000 disagrees with Hubbert peak theory on several points. It:
explicitly incorporates demand into the model as well as supply
does not assume pre/post-peak symmetry of production levels
models pre- and post-peak production with different functions (exponential growth and constant reserves-to-production ratio, respectively; see the sketch after this list)
assumes reserve growth, including via technological advancement and exploitation of small reservoirs
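To illustrate the constant reserves-to-production assumption referred to above, here is a short derivation of the standard result (a sketch under the usual simplifying assumptions, not a quotation from the EIA paper): if remaining reserves $R(t)$ are drawn down only by production after the peak, $dR/dt = -P(t)$, and the reserves-to-production ratio is held constant at $R/P = \tau$, then

\[
\frac{dP}{dt} = \frac{1}{\tau}\frac{dR}{dt} = -\frac{P(t)}{\tau}
\quad\Longrightarrow\quad
P(t) = P_{\text{peak}}\, e^{-(t - t_{\text{peak}})/\tau},
\]

i.e. holding the reserves-to-production ratio fixed after the peak implies an exponential, rather than symmetric, decline in production.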
Sadad Ibrahim Al Husseini, a retired Vice President of Exploration at Aramco, disagreed with the EIA estimates of future oil supply, calling them a 'dangerous over-estimate'. Husseini also pointed out that population growth and the emergence of China and India mean that oil prices are now going to be structurally higher than they have been.
Colin Campbell argued that the 2000 United States Geological Survey (USGS) estimate was methodologically flawed and had done incalculable damage by misleading international agencies and governments. Campbell dismisses the notion that the world can seamlessly move to more difficult and expensive sources of oil and gas when the need arises. He argued that oil is in profitable abundance or not there at all, due ultimately to the fact that it is a liquid concentrated by nature in a few places that possess the right geological conditions. Campbell believes OPEC countries raised their reserves to get higher oil quotas and to avoid internal critique. He has also pointed out that the USGS failed to extrapolate past discovery trends in the world’s mature basins. He concluded (2002) that peak production was "imminent." Campbell's own record is that he successively predicted that the peak in world production would occur in 1989, 2004, and 2010.
IEA World Energy Outlook and British Petroleum
The 2008 World Energy Outlook of the International Energy Agency suggested that there was sufficient oil supply to meet demand at reasonable prices for the foreseeable future. This was critiqued by K. Aleklett and M. Höök, although their critique was in turn accused of bias towards non-representative depletion rates, which would leave its figures ill-founded. Subsequent research clarified depletion rates and the different ways of defining them, and showed that the critique nevertheless rests on solid scientific ground. Ultimately, much of the criticism raised by the Uppsala group has been addressed and corrected for by the IEA, which has thoroughly reviewed the oil projections in the World Energy Outlook; the remaining uncertainties are chiefly attributable to OPEC and unconventional oil.
According to the World Energy Outlook 2010, conventional crude oil production peaked in 2006, with an all-time maximum of 70 million barrels per day. In September 2020 BP stated their belief (as reported by Bloomberg) that 2019 would be the all-time global liquid fossil fuel production peak. If true, the collapse in demand due to the COVID-19 pandemic from early 2020, the acceleration of previous trends (e.g. electric vehicle adoption), and the deteriorating economics of oil production from large existing fields would all have contributed to making 2019 the peak year.
2010s
Papers published since 2010 have been relatively pessimistic. A 2010 Kuwait University study predicted production would peak in 2014. A 2010 Oxford University study predicted that production would peak before 2015, but its projection of a change soon "... from a demand-led market to a supply constrained market ..." was incorrect. A 2014 validation of a significant 2004 study in the journal Energy proposed that it is likely that conventional oil production peaked, according to various definitions, between 2005 and 2011. A set of models published in a 2014 Ph.D. thesis predicted that a 2012 peak would be followed by a drop in oil prices, which in some scenarios could turn into a rapid rise in prices thereafter. According to energy blogger Ron Patterson, the peak of world oil production was probably around 2010.
On the other hand, the US Energy Information Administration projected in 2014 that world production of “total liquids,” which, in addition to liquid petroleum, includes biofuels, natural gas liquids, and oil sands, would increase at an average rate of about one percent per year through 2040 without peaking. OPEC countries are expected to increase oil production at a faster rate than non-OPEC countries.
Models which show a continued increase in oil production may be including both conventional and non-conventional oil.
There was a consensus among industry leaders and analysts that world oil production would peak between 2010 and 2030, with a significant chance that the peak would occur before 2020; dates after 2030 were considered implausible by some.
Early 2020s
In January 2023, a report by BP predicted that the world will sharply reduce its reliance on oil and gas over the next 25 years. Elon Musk disagreed and said this would most likely happen in the next 5 years.
An October 2021 article by Fortune journalist Sophie Mellor predicted that 2025 will mark peak oil demand, though it mentioned that the International Energy Agency stressed the need for trillions of dollars in renewable energy investment.
No peak oil
The view that oil extraction will never enter a depletion phase is often referred to as "cornucopian" in ecology and sustainability literature.
Abdullah S. Jum'ah, President, Director and CEO of Saudi Aramco, states that the world has adequate reserves of conventional and nonconventional oil sources that will last for more than a century.
As recently as 2008 he pronounced "We have grossly underestimated mankind's ability to find new reserves of petroleum, as well as our capacity to raise recovery rates and tap fields once thought inaccessible or impossible to produce." Jum'ah believes that in-place conventional and non-conventional liquid resources may ultimately total 13 trillion barrels or more, and that only a small fraction (1.1 trillion barrels) has been extracted to date.
Economist Michael Lynch says that the Hubbert peak theory is flawed and that there is no imminent peak in oil production. He argued in 2004 that production is determined by demand as well as geology, and that fluctuations in oil supply are due to political and economic effects as well as the physical processes of exploration, discovery and production. This idea is echoed by Jad Mouawad, who explains that as oil prices rise, new extraction technologies become viable, thus expanding the total recoverable oil reserves. This, according to Mouawad, is one explanation of the changes in peak production estimates.
Leonardo Maugeri, the former group senior vice president, Corporate Strategies of Eni S.p.A., dismissed the peak oil thesis in a 2004 policy position piece in Science as "the current model of oil doomsters" based on several flawed assumptions. He characterized the peak oil theory as part of a series of "recurring oil panics" that have "driven Western political circles toward oil imperialism and attempts to assert direct or indirect control over oil-producing regions". Maugeri claimed the geological structure of the earth has not been explored thoroughly enough to conclude that the declining trend in discoveries, which began in the 1960s, will continue. He also stated that complete data on global oil production, discovery trends, and geology are not available.
Economist and oil analyst Daniel Yergin criticizes Hubbert peak oil theory for ignoring the effects of both economics and improved technology. Yergin believes that world oil production will continue to rise until “perhaps sometime around mid-century” and then “plateau” or enter a gradual decline. He considers it possible that declining oil production, when it comes, will be caused less by resource scarcity than by lower demand brought about by improved efficiency.
Peak oil for individual nations
Peak oil as a concept applies globally, but it is based on the summation of individual nations experiencing peak oil.
In State of the World 2005, the Worldwatch Institute observed that oil production was in decline in 33 of the 48 largest oil-producing countries. Other countries have also passed their individual oil production peaks.
The following list shows some oil-producing nations and their peak oil production years.
Algeria: 2006
Angola: 2008
Argentina: 2001
Azerbaijan: 2010
Australia: 2000
China: 2015
Egypt: 1987
Denmark: 2004
France: 1988
Germany: 1966
India: 2011
Iran: 1974
Indonesia: 1991
Libya: 1970 (disputed)
Malaysia: 2004
Mexico: 2003
New Zealand: 2008
Nigeria: 1979
Norway: 2000
Oman: 2000
Peru: 1982, with an additional peak possible if the Amazon is drilled
Qatar: 2007
Syria: 1996
Trinidad and Tobago: 1981
UK: 1999
Venezuela: 1970
Peak oil production has not been reached in the following nations (and is estimated in a 2010 Kuwait University study to occur in the following years):
Iraq: 2036
Kazakhstan: 2020
Kuwait: 2033
Saudi Arabia: 2027
An ABC television program in 2006 predicted that Russia would hit peak oil in 2010, but Russian production continued to rise through 2016.
See also
Hubbert peak theory
Petroleum
Hubbert linearization
References
External links
Peak oil
Petroleum politics
Prediction | Predicting the timing of peak oil | [
"Chemistry"
] | 4,738 | [
"Petroleum",
"Petroleum politics"
] |
14,711,760 | https://en.wikipedia.org/wiki/Saposin%20protein%20domain | The saposin domains refer to two evolutionarily conserved protein domains found in saposin and related proteins (SAPLIPs). Saposins are small lysosomal proteins that serve as activators of various lysosomal lipid-degrading enzymes. They probably act by isolating the lipid substrate from the membrane surroundings, thus making it more accessible to the soluble degradative enzymes. All mammalian saposins are synthesized as a single precursor molecule (prosaposin) which contains four Saposin-B domains, yielding the active saposins after proteolytic cleavage, and two Saposin-A domains that are removed in the activation reaction.
The Saposin-B domains also occur in other proteins, most of them playing a role in interacting with membranes.
Classification
The saposin (SapB1-SapB2) domains are found in a wide range of proteins. Each half-domain contributes two alpha helices to the SapB domain, for a total of four.
The mammalian prosaposin is a prototypic family member. It also includes the N- and C-terminal SapA domains, both of which are proteolytically cleaved as the proprotein matures. Four connected pairs of SapB1-SapB2 domains are released, sequentially named Saposin-A through D. Some closely related proteins, such as PSAPL1 and SFTPB, share the architecture and the cleaving mechanism in whole or in part. While prosaposin and PSAPL1 act in lysosomal lipid degradation, SFTPB is released into the pulmonary surfactant, playing a role in rearranging lipids.
However, proteins like GNLY and AOAH do not carry a SapA domain. While GNLY is essentially a SapB with N-terminal extensions specialized for lysing pathogen cell membranes, the AOAH protein uses the uncleaved SapB domain for targeting the correct intracellular compartment.
The plant-specific insert is an unusual variation on the SapB domains. It features a circular permutation compared to the usual topology: instead of featuring a SapB1-SapB2 unit, it is made up of a SapB2-linker-SapB1 unit, seemingly derived by taking one half of each of two SapB units.
Human proteins containing this domain
AOAH
GNLY
Prosaposin
PSAPL1
SFTPB
References
Further reading
External links
Saposin B-type domain in PROSITE
Protein domains
Protein families
Peripheral membrane proteins | Saposin protein domain | [
"Biology"
] | 540 | [
"Protein families",
"Protein domains",
"Protein classification"
] |
14,712,046 | https://en.wikipedia.org/wiki/Martin%20Lindauer | Martin Lindauer (December 19, 1918 – November 13, 2008) was a German behavioral scientist. Lindauer studied communication systems in various species of social bees including stingless bees and honey bees. Much of his work was done in collaboration with Warwick Kerr in Brazil.
Biography
Martin Lindauer was born in Upper Bavaria. He was on the Russian Front during World War II.
Academics
Lindauer’s academic supervisor was the Nobel Prize-winning Karl von Frisch, with whom he collaborated extensively. He was a major contributor to bee behavioral and sensory research, particularly in the fields of communication and orientation. Among other topics, he studied the dance language and the bees' use of polarized light as a compass. His work laid the foundation for many future bee researchers. He was also a co-editor of the Journal of Comparative Physiology.
Awards
Elected to the American Academy of Arts and Sciences (1962)
Elected to the American Philosophical Society (1976)
Elected to the United States National Academy of Sciences (1976)
Magellanic Premium (1980)
Honorary Doctorate-University of Zürich
Honorary Doctorate-University of Umea
Honorary Doctorate-University of Saarbrücken
The Order of the Federal Republic of Germany-1st Class
The Bavarian Maximilian Order for Science and Art
Memberships in several national and international scientific academies
References
1918 births
2008 deaths
Ethologists
Members of the German National Academy of Sciences Leopoldina
Foreign associates of the National Academy of Sciences
German Army personnel of World War II
20th-century German zoologists
Members of the American Philosophical Society | Martin Lindauer | [
"Biology"
] | 319 | [
"Ethology",
"Behavior",
"Ethologists"
] |
14,712,087 | https://en.wikipedia.org/wiki/Major%20facilitator%20superfamily | The major facilitator superfamily (MFS) is a superfamily of membrane transport proteins that facilitate movement of small solutes across cell membranes in response to chemiosmotic gradients.
Function
The major facilitator superfamily (MFS) is a group of membrane proteins which are expressed ubiquitously in all kingdoms of life for the import or export of target substrates. The MFS family was originally believed to function primarily in the uptake of sugars, but subsequent studies revealed that drugs, metabolites, oligosaccharides, amino acids and oxyanions were all transported by MFS family members. These proteins energetically drive transport utilizing the electrochemical gradient of the target substrate (uniporters), or act as cotransporters in which transport is coupled to the movement of a second substrate.
Fold
The basic fold of the MFS transporter is built around 12 or, in some cases, 14 transmembrane helices (TMH), with two 6- (or 7-) helix bundles formed by the N- and C-terminal homologous domains of the transporter, which are connected by an extended cytoplasmic loop. The two halves of the protein pack against each other in a clam-shell fashion, sealing via interactions at the ends of the transmembrane helices and extracellular loops. This forms a large aqueous cavity at the center of the membrane, which is alternately open to the cytoplasm or periplasm/extracellular space. Lining this aqueous cavity are the amino acids which bind the substrates and define transporter specificity. Many MFS transporters are thought, on the basis of in vitro and in vivo methods, to be dimers, with some evidence to suggest a functional role for this oligomerization.
Mechanism
The alternating-access mechanism thought to underlie transport by most MFS transporters is classically described as the "rocker-switch" mechanism. In this model, the transporter opens to either the extracellular space or the cytoplasm and simultaneously seals the opposing face of the transporter, preventing a continuous pathway across the membrane. For example, in the best-studied MFS transporter, LacY, lactose and protons typically bind from the periplasm to specific sites within the aqueous cleft. This drives closure of the extracellular face and opening of the cytoplasmic side, allowing substrate into the cell. Upon substrate release, the transporter recycles to the periplasm-facing orientation.
Exporters and antiporters of the MFS family follow a similar reaction cycle, though exporters bind substrate in the cytoplasm and extrude it to the extracellular or periplasmic space, while antiporters bind substrate in both states to drive each conformational change. While most MFS structures suggest large, rigid body structural changes with substrate binding, the movements may be small in the cases of small substrates, such as the nitrate transporter NarK.
Transport
The generalized transport reactions catalyzed by MFS porters are:
Uniport: S (out) ⇌ S (in)
Symport: S (out) + [H+ or Na+] (out) ⇌ S (in) + [H+ or Na+] (in)
Antiport: S1 (out) + S2 (in) ⇌ S1 (in) + S2 (out) (S1 may be H+ or a solute)
Substrate specificity
Though initially identified as sugar transporters, a function conserved from prokaryotes to mammals, the MFS family is notable for the great diversity of substrates transported by the superfamily. These range from small oxyanions to large peptide fragments. Other MFS transporters are notable for a lack of selectivity, extruding broad classes of drugs and xenobiotics. This substrate specificity is largely determined by specific side chains which line the aqueous pocket at the center of the membrane. While one substrate of particular biological importance is often used to name the transporter or family, there may also be co-transported or leaked ions or molecules. These include water molecules or the coupling ions which energetically drive transport.
Structures
The crystal structures of a number of MFS transporters have been characterized. The first structures were of the glycerol 3-phosphate/phosphate exchanger GlpT and the lactose-proton symporter LacY, which served to elucidate the overall structure of the protein family and provided initial models for understanding the MFS transport mechanism. Since these initial structures, other MFS structures have been solved which illustrate substrate specificity or states within the reaction cycle. While the initial MFS structures solved were of bacterial transporters, the first structures of eukaryotic transporters have more recently been published. These include the fungal phosphate transporter PiPT, the plant nitrate transporter NRT1.1, and the human glucose transporter GLUT1.
Evolution
The origin of the basic MFS transporter fold is currently under heavy debate. All currently recognized MFS permeases have the two six-TMH domains within a single polypeptide chain, although in some MFS families an additional two TMHs are present. Evidence suggests that the MFS permeases arose by a tandem intragenic duplication event in the early prokaryotes. This event generated the 12 transmembrane helix topology from a (presumed) primordial 6-helix dimer. Moreover, the well-conserved MFS specific motif between TMS2 and TMS3 and the related but less well conserved motif between TMS8 and TMS9 prove to be a characteristic of virtually all of the more than 300 MFS proteins identified. However, the origin of the primordial 6-helix domain is under heavy debate. While some functional and structural evidence suggests that this domain arose out of a simpler 3-helix domain, bioinformatic or phylogenetic evidence supporting this hypothesis is lacking.
Medical significance
MFS family members are central to human physiology and play an important role in a number of diseases, through aberrant action, drug transport, or drug resistance. The OAT1 transporter transports a number of nucleoside analogs central to antiviral therapy. Resistance to antibiotics is frequently the result of the action of MFS resistance genes. Mutations in MFS transporters have also been found to cause neurodegenerative disease, vascular disorders of the brain, and glucose storage diseases.
Disease mutations
Disease associated mutations have been found in a number of human MFS transporters; those annotated in Uniprot are listed below.
Human MFS proteins
There are several MFS proteins in humans, where they are known as solute carriers (SLCs) and atypical SLCs. There are today 52 SLC families, of which 16 families include MFS proteins: SLC2, 15, 16, 17, 18, 19, SLCO (SLC21), 22, 29, 33, 37, 40, 43, 45, 46 and 49. Atypical SLCs are MFS proteins, sharing sequence similarities and evolutionary origin with SLCs, but they are not named according to the SLC root system, which originates from the HUGO Gene Nomenclature Committee (HGNC) system. All atypical SLCs are listed in detail elsewhere; they are: MFSD1, MFSD2A, MFSD2B, MFSD3, MFSD4A, MFSD4B, MFSD5, MFSD6, MFSD6L, MFSD8, MFSD9, MFSD10, MFSD11, MFSD12, MFSD13A, MFSD14A, MFSD14B, UNC93A, UNC93B1, SV2A, SV2B, SV2C, SVOP, SVOPL, SPNS1, SPNS2, SPNS3 and CLN3. As there is high sequence identity and phylogenetic resemblance between the atypical SLCs of MFS type, they can be divided into 15 AMTFs (Atypical MFS Transporter Families), suggesting there are at least 64 different families including SLC proteins of MFS type.
References
Protein domains
Transmembrane proteins
Articles containing video clips
Protein superfamilies
Transport proteins | Major facilitator superfamily | [
"Biology"
] | 1,735 | [
"Protein superfamilies",
"Protein domains",
"Protein classification"
] |
14,712,165 | https://en.wikipedia.org/wiki/Nationalization%20of%20oil%20supplies | The nationalization of oil supplies refers to the process of confiscation of oil production operations and their property, generally for the purpose of obtaining more revenue from oil for the governments of oil-producing countries. This process, which should not be confused with restrictions on crude oil exports, represents a significant turning point in the development of oil policy. Nationalization eliminates private business operations—in which private international companies control oil resources within oil-producing countries—and transfers them to the ownership of the governments of those countries. Once these countries become the sole owners of these resources, they have to decide how to maximize the net present value of their known stock of oil in the ground.
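As a reference point (not taken from the article itself), the net present value being maximized can be written in the standard resource-economics form, discounting the net revenue from extraction in each period at rate r:

$$\mathrm{NPV} = \sum_{t=0}^{T} \frac{p_t\, q_t - c_t(q_t)}{(1+r)^{t}}$$

where $p_t$, $q_t$ and $c_t$ are the oil price, the quantity extracted and the extraction cost in year $t$, and the owner chooses the extraction path $q_t$ subject to cumulative extraction not exceeding the known stock of oil in the ground.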
Several key implications can be observed as a result of oil nationalization. "On the home front, national oil companies are often torn between national expectations that they should 'carry the flag' and their own ambitions for commercial success, which might mean a degree of emancipation from the confines of a national agenda."
According to consulting firm PFC Energy, only 7% of the world's estimated oil and gas reserves are in countries that allow private international companies free rein. Roughly 65% are in the hands of state-owned companies such as Saudi Aramco, with the rest in countries such as Russia and Venezuela, where access by Western companies is difficult. The PFC study implies that political groups unfavorable to capitalism in some countries tend to limit oil production increases in Mexico, Venezuela, Iran, Iraq, Kuwait and Russia. Saudi Arabia is also limiting capacity expansion, but because of a self-imposed cap, unlike the other countries.
History
This nationalization (expropriation) of previously privately owned oil supplies, where it has occurred, has been a gradual process. Before the discovery of oil, some Middle Eastern countries such as Iraq, Saudi Arabia, and Kuwait were all poor and underdeveloped. They were desert kingdoms that had few natural resources and were without adequate financial resources to maintain the state. Poor peasants made up a majority of the population.
When oil was discovered in these developing nations during the early twentieth century, the countries did not have enough knowledge of the oil industry to make use of the newly discovered natural resources. The countries were therefore not able to mine or market their petroleum.
Major oil companies had the technology and expertise and they negotiated concession agreements with the developing countries; the companies were given exclusive rights to explore and develop the production of oil within the country in exchange for making risky investments, discovering the oil deposits, producing the oil, and paying local taxes. The concession agreements made between the oil-producing country and the oil company specified a limited area the company could utilize, lasted a limited amount of time, and required the company to take all the financial and commercial risks as well as pay the host governments surface taxes, royalties, and production taxes. As long as companies met those requirements, governments promised the companies would be able to claim any of the oil they mined. As a result, the world's oil was largely in the hands of seven corporations based in the United States and Europe, often called the Seven Sisters. Five of the companies were American (Chevron, Exxon, Gulf, Mobil, and Texaco), one was British (BP), and one was Anglo-Dutch (Royal Dutch Shell). These companies have since merged into four: Shell, ExxonMobil, Chevron, and BP. The nations with oil reserves were unhappy with the percentage of the profits they had negotiated. But, due to the inclusion of choice-of-law clauses, the sovereign host countries could not simply change the contracts arbitrarily. In other words, disputes over contract details would be settled by a third party instead of the host country. The only way for host countries to alter their contracts was through nationalization (expropriation).
Although undeveloped nations originally welcomed concession agreements, some nationalists began to argue that the oil companies were exploiting them. Led by Venezuela, oil-producing countries realized that they could control the price of oil by limiting the supply. The countries joined together as OPEC and gradually governments took control of oil supplies.
Before the 1970s there were only two major incidents of successful oil nationalization—the first following the Bolshevik Revolution of 1917 in Russia and the second in 1938 in Mexico.
Pre-nationalization
Due to the presence of oil, the Middle East has been the center of international tension even before the nationalization of oil supplies. Britain was the first country that took interest in Middle Eastern oil. In 1908, oil was discovered in Persia by the Anglo-Persian oil company under the stimulus of the British government. Britain maintained strategic and military domination of areas of the Middle East outside Turkish control until after World War I when the former Turkish Empire was divided between the British and the French. It turned out that many of the areas controlled by the French had little oil potential.
On the other hand, Britain continued to expand oil interests into other parts of the Persian Gulf. Although oil resources were found in Kuwait, there was not enough demand for oil at the time to develop in this area.
Due to political and commercial pressure, it did not take long before the United States secured an entry into Middle Eastern oil supplies. The British government was forced to allow the US into Iraq and the Persian Gulf states. Iraq became dominated by US oil companies while Kuwait consisted of a 50/50 split between British and American companies.
Up until 1939, Middle Eastern oil remained relatively unimportant in world markets. According to “The Significance of Oil,” the Middle East at the time “was contributing only 5 percent of total world oil production and its exports were limited to countries within the immediate region and, via the Suez Canal, in western Europe.” The real significance of pre-1939 developments in the Middle East is that they established the framework for the post-1945 oil expansion.
After World War II, the demand for oil increased significantly. Due to war-time oil development, which proved the great potential for oil discovery in the Middle East, there was little hesitation in investing capital in Iran, Iraq, Kuwait and Saudi Arabia.
Huge investments were made to improve the infrastructure needed to transport Middle Eastern oil. For example, investment was made on the Suez Canal to ensure that larger tankers could utilize it. There was also an increased construction of oil pipelines. The expansion of infrastructure to produce and transport Middle East oil was mainly under the operation of the seven major international oil companies.
Early nationalizations
Prior to 1970, there were ten countries that nationalized oil production: the Soviet Union in 1918, Bolivia in 1937 and 1969, Mexico in 1938, Iran in 1951, Iraq in 1961, Burma and Egypt in 1962, Argentina in 1963, Indonesia in 1963, and Peru in 1968. Although these countries had nationalized oil production by 1971, all of the "important" industries that existed in developing countries were still held by foreign firms. In addition, only Mexico and Iran were significant exporters at the time of nationalization.
The government of Brazil, under Getúlio Vargas, nationalized the oil industry in 1953, thereby creating Petrobras.
Reasons for nationalization
Exploitation
Proponents of nationalization asserted that the original contracts held between an oil-producing country and an oil company were unfair to the producing country. Yet without the knowledge and skill brought into the country by the international oil companies, the countries would not have been able to get the oil. Contracts, which could not be altered or ended in advance of the true end date, covered huge expanses of land and lasted for long durations. Nationalist ideas began once producing countries realized that the oil companies were exploiting them. Many times these countries did not pay the companies for their loss of assets or only paid nominal amounts.
The first country to act was Venezuela, which had the most favorable concession agreement. In 1943, the country increased the total royalties and tax paid by the companies to 50% of their total profits. However, true equal profit sharing was not accomplished until 1948. Because oil companies were able to deduct the tax from their income tax, profits acquired by the oil companies did not change significantly and, as a result, the oil companies did not have any major problems with the change imposed by Venezuela. Even with increased oil prices, the companies still held a dominant position over Venezuela.
Change in oil prices
The posted price of oil was originally the determinant factor of the taxes that oil companies had to pay. This concept was beneficial to the oil companies because they were the ones who controlled the posted prices. Companies could increase the actual price of oil without changing the posted price, thus avoiding an increase in taxes paid to the producing country.
Oil-producing countries did not realize that the companies were adjusting oil prices until the cost of oil dropped in the late 1950s and companies started reducing posted prices very frequently. The main reason for the reduction in oil prices was the change in the world's energy situation after 1957 that led to competition between energy sources. Efforts to find markets led to price cuts. Price cutting was first achieved by shaving profit margins, but soon prices were reduced to levels far lower than posted prices as companies producing oil in the Middle East started to offer crude to independent and state-owned refineries.
Producing countries became aggravated when the companies would reduce the prices without warning. According to “The Significance of Oil,” “small reductions in posted prices in 1958 and 1959 produced some indications of disapproval from certain Middle East governments, but it was not until major cuts—of the order of 10 to 15 percent—were announced in 1960 that a storm broke over the heads of the companies whose decisions would reduce the oil revenues of the countries by 5 to 7 ½ percent.”
High oil prices, on the other hand, raise the bargaining power of oil-producing countries. As a result, some say that countries are more likely to nationalize their oil supplies during times of high oil prices. However, nationalization can come with various costs and it is often questioned why a government would respond to an oil price increase with nationalization rather than by imposing higher taxes. Contract theory provides reasoning against nationalization.
Structural change of oil-producing countries
The Third World went through dramatic structural change in the decades after oil was first discovered. Rising nationalism and the emergence of shared group consciousness among developing countries accompanied the end of the formal colonial relationships in the 1950s and 1960s. Shared group consciousness among the oil-exporting countries was expressed through the formation of OPEC, increased contact and communication between countries, and attempts of common action among countries during the 1960s. The structure of the industry, which led to increased nationalistic mentality, was affected by the following important changes:
Strategic control
Originally, oil-producing countries were poor and needed oil companies to help them produce the oil and manage the oil reserves located within the country. However, as the countries began to develop, their demands for revenue increased. The industry became integrated into a local economy that required strategic control by the host country over pricing and the rate of production. Gradually, foreign investors lost the trust of oil-producing countries to develop resources in the national interest. Oil-producing countries demanded participation in the control of the oil within their country.
Increased capabilities
Furthermore, technological innovation and managerial expertise increased dramatically after World War II, which increased the bargaining power of producing countries. Increased bargaining power allowed the companies to change their mode of operation.
Expansion of the oil industry
Stephen J. Kobrin states that “During the interwar period and through the 1950s, international petroleum was a very tight oligopoly dominated by seven major international oil companies (Exxon, Shell, BP, Gulf, Texaco, Mobil and Chevron—as they are known today). However, between 1953 and 1972 more than three hundred private firms and fifty state-owned firms entered the industry, drawn by the explosion in oil consumption and substantially diminished barriers to entry.”
The new, independent companies disturbed the equilibrium between the major companies and the producing countries. Countries became aware of their options as these companies offered better agreement terms.
Changes in supply and demand
The shortage of oil in the 1970s increased the value of oil from previous decades. The bargaining power of producing countries increased as both the country governments and the oil companies became increasingly concerned about the continued access to crude oil.
Diffusion of ideas between oil-producing countries
Rogers defines diffusion as, “the process by which (1) an innovation (2) is communicated through certain channels (3) over time (4) among members of a social system.” Innovations may consist of technology, philosophy, or managerial techniques. Examples of communication channels include the mass media, organizations such as OPEC or the U.N., or educational institutions. Due to diffusion, attempts at oil nationalization from producing countries, and whether or not these attempts were successful, affected decisions to nationalize oil supplies.
Two attempts of nationalization that had clear inhibiting effects on other producing countries were the nationalizations of Mexico in 1938 and of Iran in 1951, which occurred prior to the important structural change in the oil industry. The Mexican nationalization proved that although it was possible to accomplish nationalization, it came at the cost of isolation from the international industry, which was dominated by the major companies at the time. The Iranian nationalization also failed due to the lack of cooperation with international oil companies. These two incidences proved to other oil-producing countries that, until the structure of the oil industry changed to rely less upon international oil companies, any attempts to nationalize would be a great risk and would likely be unsuccessful.
Once the oil industry structure changed, oil-producing countries were more likely to succeed in nationalizing their oil supplies. The development of OPEC provided the medium in which producing countries could communicate and diffusion could occur rapidly.
The first country to successfully nationalize after the structural change of the industry was Algeria, which nationalized 51% of the French companies only ten days after the 1971 Tehran agreement and later was able to nationalize 100% of their companies. The nationalization of Algerian oil influenced Libya to nationalize British Petroleum in 1971 and the rest of its foreign companies by 1974. A ripple effect quickly occurred, spreading first to the more-militant oil producers like Iraq and then followed by more-conservative oil producers like Saudi Arabia. Stephen J. Kobrin states that “By 1976 virtually every other major producer in the mid-East, Africa, Asia, and Latin America had followed nationalizing at least some of its producers to gain either a share of participation or to take over the entire industry and employ the international companies on a contractual basis.”
Implications of nationalization
Vertical integration of the oil industry was broken
Due to the overall instability of supply, oil became an instrument of foreign policy for oil-exporting countries. Nationalization increased the stability in the oil markets and broke the vertical integration within the system. Vertical integration was replaced with a dual system where OPEC countries controlled upstream activities such as the production and marketing of crude oil while oil companies controlled downstream activities such as transportation, refining, distribution, and sale of oil products.
Under the new dual structure, OPEC was neither vertically nor horizontally integrated and was not able to take over the entire oil sector from the oil companies. The temporary fear of an oil shortage during the 1970s helped to hide this consequence. In addition, relations between producing countries of the Persian Gulf and previous concessionary companies induced an "artificial" vertical integration. These relations included long-term contracts, discounts on official prices, and phase-out clauses. Free markets started to become prevalent in 1981 after the trade in oil switched from being a sellers' to a buyers' market.
Oil companies lost access to oil supplies
According to the Energy Studies Review the western world oil demand decreased 15% between the years 1973 and 1982. In the same time period the major oil companies went from a production in the crude oil market of , a decrease of nearly 50%. In this period, the production from reserves under their own control went from , a decrease of 74%. As a result, important oil companies became important net buyers of crude oil after a long time of being vertically integrated sellers to their own refineries.
Change in the horizontal integration of the oil industry
The increase in oil prices of the 1970s attracted non-OPEC producers—Norway, Mexico, Great Britain, Egypt, and some African and Asian countries—to explore within their own countries. In 1965, the Herfindahl index of horizontal integration for the crude oil production industry was 1600, and for the exploration industry it was 1250. By 1986, these had decreased to around 930 for the crude oil production industry and 600 for the exploration industry. This created a further destabilizing factor for OPEC.
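For context, the Herfindahl index cited above is the standard concentration measure, the sum of squared market shares; with shares expressed in percent it runs from near 0 (atomistic competition) to 10,000 (monopoly):

$$\mathrm{HHI} = \sum_{i=1}^{N} s_i^{2}$$

On that scale, the 1965 value of 1600 for crude oil production corresponds roughly to six or seven equally sized producers ($10{,}000 / 1600 \approx 6.25$), while the 1986 values of around 930 and 600 indicate a market spread across many more participants.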
Restructuring of the refining sector
The world refining capacity of the major oil companies in 1973 was per day. However, by 1982, their world refining capacity had decreased to per day. This decrease was a result of their decreased access to the oil reserves of OPEC countries and, subsequently, the rationalization of their world refining and distribution network to decrease their dependence on OPEC countries. The increase in the refining capacity of OPEC countries that wanted to sell not only crude oil but also refined products further reinforced this trend towards rationalization.
Change in the spot market
The nationalization of oil supplies and the emergence of the OPEC market caused the spot market to change in both orientation and size. The spot market changed in orientation because it started to deal not only with crude oil but also with refined products. The spot market changed in size because as the OPEC market declined the number of spot market transactions increased. The development of the spot market made oil prices volatile. The risks involving oil investment increased. To protect against these potential risks, parallel markets such as the forward market developed. As these new markets developed, price control became more difficult for OPEC. In addition, oil was transformed from a strategic product to a commodity.
Changes in the spot market favored competition and made it more difficult for oligopolistic agreements. The development of many free markets impacted OPEC in two different ways:
A destabilizing effect occurred that made it easier for OPEC members not to respect their own quota if they did not want to.
A stabilizing effect occurred that provided an incentive for cooperation among OPEC members. Decreased prices due to free markets made it more profitable for OPEC countries to work together rather than to seek profit individually.
OPEC countries
Algeria
Currently, Algeria is one of the largest natural gas producers in what is known as the Arab World behind Qatar and Saudi Arabia. Algeria's nationalization of oil and gas came a mere nine years after the nation declared independence from colonial France which had ruled over the region for 130 years. Algeria joined OPEC in 1969 and fully nationalized its industry in 1971, but Algeria was taking steps to play a larger role in the oil industry profiting from their reserves in the Sahara in 1963.
Ecuador
Ecuador has had one of the most volatile oil policies in the region, partly a reflection of the high political volatility in the country. Petroecuador accounts for over half of oil production; however, as a result of financial setbacks combined with a drop in oil prices, private companies increased oil investments in Ecuador. In the early 1990s annual foreign investment in oil was below US$200 million; by the early 2000s it had surpassed US$1 billion (Campodónico, 2004). Changes in political power led to an increase in government control over oil extraction. In particular, the election of President Rafael Correa, on a resource-nationalism platform, prompted increases in government control and the approval of a windfall profits tax.
Iran
Since its beginning, Iran's oil industry has experienced expansion and contraction. Rapid growth at the time of World War I declined soon after the start of World War II. Recovery began in 1943 with the reopening of supply routes to the United Kingdom. The oil was produced by what became the Anglo-Iranian Oil Company, but political difficulties arose with the Iranian government in the postwar period.
Iran sought to rid itself of British political influence and the exploitation by AIOC. Negotiations between the Anglo-Iranian Oil Company and the government failed, and in 1951 the oil industry was nationalized. As a result of Britain's boycott and the Abadan Crisis, Iranian production dropped to virtually zero. On British initiative, the CIA overthrew Iranian Prime Minister Mosaddegh in Operation Ajax. Formally the nationalization remained effective, but in practice a consortium of oil companies was allowed in under a by-then standard 50/50 profit-sharing deal.
The whole process had left the British a major share in what had been their single most valuable foreign asset. It had stopped the democratic transition in Iran however, leaving its mark for decades to come. The coup is widely believed to have significantly contributed to the 1979 Iranian Revolution after which the oil industry would be nationalized again.
Iraq
The properties of the majors were nationalized totally in Iraq, in 1972. Worldwide oil shortages in the 1970s forced major oil suppliers to look elsewhere for ways to acquire the resource. Under these circumstances, NOCs often came forward as alternative suppliers of oil. Nationalization of the Iraq Petroleum Company (IPC) in 1972 after years of rancor, together with restrictions on oil liftings by all but one of the IPC's former partners, put Iraq at the forefront of direct marketing. Iraq's oil production suffered major damage in the aftermath of the Gulf War. In spite of United Nations sanctions, Iraq has been rebuilding war-damaged oil facilities and export terminals. Iraq plans to increase its oil productive capacity to per day in 2000 and per day in 2010.
Libya
Libya, in particular, sought out independent oil firms to develop its oilfields; in 1970, the Libyan government used its leverage to restructure radically the terms of its agreements with these independent companies, precipitating a rash of contract renegotiations throughout the oil-exporting world.
Nigeria
The discovery of oil in Nigeria caused conflict within the state. The emergence of commercial oil production from the region in 1958 and thereafter raised the stakes and generated a struggle by the indigenes for control of the oil resources. The northern hegemony, ruled by the Hausa and Fulani, established a military dictatorship and seized control of oil production. To meet popular demands for cheaper food during the inflationary period just after the civil war, the government created a new state corporation, the National Nigerian Supply Company (NNSC). While oil production proceeded, the region by the 1990s was one of the least developed and poorest. The local communities responded with protests and with successful efforts to stop oil production in the area when they did not receive any benefit. By September 1999, about 50 Shell workers had been kidnapped and released. Not only are the people of Nigeria affected, but the environment in the area is also affected by deforestation and improper waste treatment. Nigerian oil production also faces problems with illegal trade of the refined product on the black market. This is undertaken by authorized marketers in collusion with smuggling syndicates.
Activities such as these severely affect the oil industries of both the state and the multinational corporations (MNCs). Oil production deferments arising from community disturbances and sabotage were 45 million barrels in 2000 and 35 million barrels in 2001. The state has not been a very effective means of controlling incursions such as these. The illegal oil economy in such circumstances may continue to exist for a long time, albeit at a curtailed and small scale.
Saudi Arabia
By 1950, Saudi Arabia had become a very successful producing area, with an even greater undeveloped oil production potential. Because of favorable geological conditions and the close proximity of oil fields to the coast, Saudi Arabian operations were low cost. American companies therefore placed a high value on the oil. The joint concessionary company, ARAMCO, agreed to the government's demand to use the introduced posted price as a way to calculate profits. Profit-sharing between ARAMCO and Saudi Arabia was established as a 50/50 split. Eventually the Saudi government fully purchased Aramco in 1980, later renaming it Saudi Aramco.
Venezuela
In 1938, Venezuelan President Eleazar López Contreras enacted a new Hydrocarbons Law, which established the increase of royalties, as well as the increase of exploration and exploitation taxes. The State was also authorized to create companies or institutes for the development of the oil activity. On 13 March 1943, President Isaías Medina Angarita promulgated another Hydrocarbons Law, which established that from then on at least 10% of the crude oil had to be refined in Venezuela; the royalty or exploitation tax could not be less than 16.7%; and the Venezuelan State received a 50% profit from oil exploitation and 12% of the income tax. New taxes were also established to prevent companies from maintaining idle fields. While the world was in the midst of World War II, Venezuela increased its oil production to supply the Allies with fuel; much of the oil was refined in the Caribbean islands. Medina's government was overthrown by a coup on 18 October 1945; an interim government was installed, which later gave way in 1948 to another democratically elected government presided over by Rómulo Gallegos, during which the oil policy of "no more concessions" was promoted, authored by the then Minister of Development during those two periods, Juan Pablo Pérez Alfonzo, and a 50%-50% or fifty-fifty readjustment was implemented in 1948. Gallegos' government was in turn deposed by a military coup d'état later that year, on 24 November. Another coup in 1958 brought an end to the military dictatorship in the country. The newly appointed Minister of Mines and Hydrocarbons, Juan Pablo Pérez Alfonzo, acted to raise the income tax on oil companies and introduced the key aspect of supply and demand to the oil trade.
On 29 August 1975, during the tenure of President Carlos Andrés Pérez, "Law that Reserves the Hydrocarbon Industry to the State" was enacted and the state-owned company Petróleos de Venezuela (PDVSA) was created to control all oil businesses in the Venezuelan territory. The law came into effect on 1 January 1976, as well as the nationalization of the oil industry with it, after which PDVSA began commercial operations.
Non-OPEC countries
Argentina
Nationalization of oil resources in Argentina began in 1907, when upon the discovery of the nation's first sizable oil field near Comodoro Rivadavia, President José Figueroa Alcorta declared the area around the oil field public property. YPF, the first oil company in the world established as a state enterprise, was established by President Hipólito Yrigoyen and General Enrique Mosconi in 1922. The nation's mineral resources were nationalized in toto with Article 40 of the Argentine Constitution of 1949 promulgated by President Juan Perón. The latter was abrogated in 1956, but oil and natural gas were renationalized in 1958 during President Arturo Frondizi's self-described "oil battle" for self-sufficiency in the staple, and private firms operated afterward via leases. YPF was privatized in 1993, and Madrid-based Repsol acquired a majority stake in 1999. Oil and gas production subsequently weakened while demand increased, and in 2011 Argentina recorded the first energy trade deficit since 1987.
On April 16, 2012, Argentina's president Cristina Fernández de Kirchner introduced a bill for the expropriation of YPF, the nation's largest energy firm. The state would purchase a 51% share, with the national government controlling 51% of this package and ten provincial governments receiving the remaining 49%.
Investment in exploration at YPF as a percentage of profits had been far below those in most other Repsol subsidiaries, and declines in output at the firm represented 54% of the nation's lost oil production and 97% in the case of natural gas. Market analysts and Repsol blamed the decline in exploration and production on government controls on exports and prospecting leases, as well as price controls on domestic oil and gas. YPF increased its estimates of oil reserves in Argentina in 2012, but warned that government policies would have to change to allow investment in new production. The government announced instead that it would acquire a majority stake in YPF. Argentine Economy Minister Hernán Lorenzino claimed that asset stripping at YPF had financed Repsol's expansion in other parts of the world, while Repsol officials denied charges of underinvestment in its YPF operations.
Argentine Deputy Economy Minister Axel Kicillof rejected Repsol's initial demands for payment of US$10.5 billion for a controlling stake in YPF, citing debts of nearly US$9 billion. The book value of YPF was US$4.4 billion at the end of 2011; its total market capitalization on the day of the announcement was US$10.4 billion. The bill was overwhelmingly approved by both houses of Congress, and was signed by the president on May 5.
The government of Argentina eventually agreed to pay $ billion compensation to Repsol, which had previously owned YPF.
Canada
In 2010 Canada was the United States' leading oil supplier, exporting some of oil per year (), 99 percent of its annual oil exports, according to the EIA. Following the OPEC oil embargo in the early 1970s, Canada took the initiative to control its oil supplies. The result of these initiatives was Petro-Canada, a state-owned oil company. Petro-Canada put forth national goals including increased domestic ownership of the industry; development of reserves not located in the western provinces, that is to say, the promotion of the Canada Lands in the north and offshore; better information about the petroleum industry; security of supply; decreased dependence on the large multinational oil corporations, especially the Big Four; and increased revenues flowing to the federal treasury from the oil and gas sector. Petro-Canada was founded in 1975 as a federally owned crown corporation, then privatized beginning in 1991. The provincial government of Ontario purchased a 25% stake in Suncor Energy in 1981, then divested it in 1993.
Petro-Canada has been met with opposition mainly from Alberta, home to one of the main oil patches in Canada. After negotiating a royalty increase on oil and price increases for natural gas, Alberta Premier Peter Lougheed asserted Alberta's position as the centre of Canada's petroleum industry. Alberta has been the main source of oil in Canada since the 1970s. The clashing viewpoints on resource control have resulted in conflict over the direction of Canada's oil industry, and as a result, the vast majority of Canada's oil ownership and profits continue to lie in foreign hands.
Mexico
Mexico nationalized its oil industry in 1938, and has never privatized, restricting foreign investment. Important reserve additions in the 1970s allowed a significant increase in production and exports, financed by the high oil prices. Although Mexico produces more oil than any other country in Latin America, oil does not account for a large proportion of its exports. Since the giant Cantarell Field in Mexico is now in decline, the state oil company Pemex has faced intense political opposition to opening up the country's oil and gas sector to foreign participation. The lack of financial autonomy has limited Pemex's own investment capacity, inducing the company to become highly indebted and to use an out-of-budget mechanism of deferred payment for projects (PIDIREGAS) to finance the expansion of production. Some feel that the state oil company Pemex does not have the capacity to develop deep water assets by itself, but needs to do so if it is to stem the decline in the country's crude production.
Russia
Since Putin assumed the Russian presidency in January 2000, there has been what amounts to a creeping re-nationalization of the Russian oil industry. In December 2006, Vladimir Putin's government pressured Royal Dutch Shell to hand over control of one major project on Sakhalin Island to the publicly traded company Gazprom. The founder of the formerly private Yukos has also been jailed, and the company absorbed by state-owned Rosneft. Such moves strain the confidence of international oil companies in forming partnerships with Russia. Russia has also taken notice that its increasing foreign oil investment improves political relations with other countries, especially former states of the Soviet Union. The oil industry in Russia is one of the top producers in the world; however, proven reserves in Russia are not as prevalent as in other areas. Furthermore, previously accessible oil fields have been lost since the Cold War. With the collapse of the USSR, Russia has lost the rich Caspian Basin off-shore and on-shore oil fields in the Central Asian states and Azerbaijan.
See also
Economic nationalism
Energy security
Energy security and renewable technology
Peak oil
United States energy independence
References
Energy policy
Nationalization
Petroleum economics
Petroleum politics | Nationalization of oil supplies | [
"Chemistry",
"Environmental_science"
] | 6,658 | [
"Petroleum",
"Environmental social science",
"Petroleum politics",
"Energy policy"
] |
14,712,213 | https://en.wikipedia.org/wiki/Stannin | Stannins are small proteins that consist of a single transmembrane helix, an unstructured linker domain, and a cytoplasmic domain. The transmembrane region contains a conserved cysteine residue (Cys32) that, together with Cys34 found in the stannin unstructured linker domain, constitutes the putative trimethyltin-binding site, close to the lipid/solvent interface.
The unstructured protein region connects two adjacent helical domains. It contains a conserved CXC metal-binding motif and a putative 14-3-3-zeta binding domain. Upon coordinating dimethyltin, considerable structural or dynamic changes in the flexible loop region of SNN may take place, recruiting other binding partners such as 14-3-3-zeta, and thereby initiating the apoptotic cascade.
The cytoplasmic domain forms a distorted helix that is partially absorbed into the plane of the lipid bilayer. It interacts with the surface of the lipid bilayer, and contributes to the initiation of the apoptotic cascade on binding of the unstructured linker domain to dimethyltin.
Human proteins containing this domain
SNN;
References
Protein domains
Single-pass transmembrane proteins | Stannin | [
"Biology"
] | 263 | [
"Protein domains",
"Protein classification"
] |
14,712,290 | https://en.wikipedia.org/wiki/Delta%20endotoxins | Delta endotoxins (δ-endotoxins) are a family of pore-forming toxins produced by Bacillus thuringiensis species of bacteria. They are useful for their insecticidal action and are the primary toxin produced by the genetically modified (GM) Bt maize/corn and other GM crops. During spore formation the bacteria produce crystals of such proteins (hence the name Cry toxins) that are also known as parasporal bodies, next to the endospores; as a result some members are known as a parasporin. The Cyt (cytolytic) toxin group is another group of delta-endotoxins formed in the cytoplasm. VIP toxins (vegetative insecticidal proteins) are formed at other stages of the life cycle.
Mechanism of action
When an insect ingests these proteins, they are activated by proteolytic cleavage. The N-terminus is cleaved in all of the proteins and a C-terminal extension is cleaved in some members. Once activated, the endotoxin binds to the gut epithelium and causes cell lysis by the formation of cation-selective channels, which leads to death.
For many years there was no clarity about the relationship between aminopeptidase N (AP-N) and Bt toxins. Although AP-N does bind Cry proteins in vitro (reviewed by Soberón et al. 2009 and Pigott & Ellar 2007), no cases of resistance, or even of reduced in vitro binding due to AP-N structure alteration, were known through 2002, and there was some doubt that the resistance mechanism was so straightforward. Indeed, Luo et al. 1997, Mohammed et al. 1996, and Zhu et al. 2000 found that this did not occur in Lepidoptera examples. Subsequently, however, Herrero et al. 2005 showed a correlation between nonexpression and Bt resistance, and actual resistance was found in Helicoverpa armigera by Zhang et al. 2009, in Ostrinia nubilalis by Khajuria et al. 2011, and in Trichoplusia ni by Baxter et al. 2011 and Tiewsiri & Wang 2011 (also all Lepidoptera). There continues to be confirmation that AP-Ns do not by themselves affect resistance in some cases, possibly because sequential binding by the toxin is required to produce its effect. In this sequence each binding step is theoretically not indispensable, but if it occurs it contributes to the final pore-formation result.
Structure
The activated region of the delta toxin is composed of three distinct structural domains: an N-terminal helical bundle domain () involved in membrane insertion and pore formation; a beta-sheet central domain involved in receptor binding; and a C-terminal beta-sandwich domain () that interacts with the N-terminal domain to form a channel.
Types
B. thuringiensis encodes many proteins of the delta endotoxin family (), with some strains encoding multiple types simultaneously. Mostly found on plasmids, delta-endotoxin genes sometimes show up in the genomes of other species, albeit at a lower proportion than in B. thuringiensis. Gene names look like Cry3Bb, which in this case indicates a Cry toxin of superfamily 3, family B, subfamily b.
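To illustrate how the rank-based nomenclature is read, the following sketch (a hypothetical helper, not an official parser) splits a Cry name such as Cry3Bb1 into its primary, secondary, tertiary and variant ranks:

```python
import re

# Simplified pattern for names of the form Cry3Bb1: number, uppercase letter,
# optional lowercase letter, optional variant number.
CRY_NAME = re.compile(r"^Cry(\d+)([A-Z])([a-z])?(\d+)?$")

def parse_cry(name: str) -> dict:
    m = CRY_NAME.match(name)
    if not m:
        raise ValueError(f"not a recognisable Cry name: {name}")
    primary, secondary, tertiary, variant = m.groups()
    return {
        "primary": int(primary),   # e.g. 3  -> superfamily 3
        "secondary": secondary,    # e.g. 'B' -> family B
        "tertiary": tertiary,      # e.g. 'b' -> subfamily b
        "variant": int(variant) if variant else None,
    }

print(parse_cry("Cry3Bb1"))
print(parse_cry("Cry1Ab"))
```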
Cry proteins that are interesting to cancer research are listed under a parasporin (PS) nomenclature in addition to the Cry nomenclature. They do not kill insects, but instead kill leukemia cells. The Cyt toxins tend to form their own group distinct from Cry toxins. Not all Cry crystal-form toxins directly share a common root. Examples of non-three-domain toxins that nevertheless have a Cry name include Cry34/35Ab1 and related beta-sandwich binary (Bin-like) toxins, Cry6Aa, and many beta-sandwich parasporins.
Specific delta-endotoxins that have been inserted with genetic engineering include Cry3Bb1 found in MON 863 and Cry1Ab found in MON 810, both of which are maize/corn cultivars. Cry3Bb1 is particularly useful because it kills Coleopteran insects such as the corn rootworm, an activity not seen in other Cry proteins. Other common toxins include Cry2Ab and Cry1F in cotton and maize/corn. In addition, Cry1Ac is effective as a vaccine adjuvant in humans.
Some insect populations have started to develop resistance towards delta endotoxins, with five resistant species found as of 2013. Plants with two kinds of delta endotoxins tend to slow the development of resistance, as the insects have to evolve to overcome both toxins at once. Planting non-Bt plants alongside the resistant plants reduces the selection pressure for developing resistance to the toxin. Finally, two-toxin plants should not be planted with one-toxin plants, as one-toxin plants act as a stepping stone for adaptation in this case.
References
Further reading
External links
Cry3Bb1 at the United States Environmental Protection Agency
Protein domains
Peripheral membrane proteins
Bacterial toxins
Crystals
Proteins | Delta endotoxins | [
"Chemistry",
"Materials_science",
"Biology"
] | 1,050 | [
"Biomolecules by chemical classification",
"Protein classification",
"Crystallography",
"Crystals",
"Protein domains",
"Molecular biology",
"Proteins"
] |
14,712,455 | https://en.wikipedia.org/wiki/Toxicology%20testing | Toxicology testing, also known as safety assessment, or toxicity testing, is the process of determining the degree to which a substance of interest negatively impacts the normal biological functions of an organism, given a certain exposure duration, route of exposure, and substance concentration.
Toxicology testing is often conducted by researchers who follow established toxicology test protocols for a certain substance, mode of exposure, exposure environment, duration of exposure, a particular organism of interest, or for a particular developmental stage of interest. Toxicology testing is commonly conducted during preclinical development for a substance intended for human exposure. Stages of in silico, in vitro and in vivo research are conducted to determine safe exposure doses in model organisms. If necessary, the next phase of research involves human toxicology testing during a first-in-man study. Toxicology testing may be conducted by the pharmaceutical industry, biotechnology companies, contract research organizations, or environmental scientists.
History
The study of poisons and toxic substances has a long history dating back to ancient times, when humans recognized the dangers posed by various natural compounds. However, the formalization and development of toxicology as a distinct scientific discipline can be attributed to notable figures like Paracelsus (1493–1541) and Orfila (1787–1853).
Paracelsus (1493–1541): Often regarded as the "father of toxicology," Paracelsus, whose real name was Theophrastus von Hohenheim, challenged prevailing beliefs about poisons during the Renaissance era. He introduced the fundamental concept that "the dose makes the poison," emphasizing that the toxicity of a substance depends on its quantity. This principle remains a cornerstone of toxicology.
Mathieu Orfila (1787–1853): A Spanish-born chemist and toxicologist, Orfila made significant contributions to the field in the 19th century. He is best known for his pioneering work in forensic toxicology, particularly in developing methods for detecting and analyzing poisons in biological samples. Orfila's work played a vital role in establishing toxicology as a recognized scientific discipline and laid the groundwork for modern forensic toxicology practices in criminal investigations and legal cases.
Prevalence
Around one million animals, primate and non-primate, are used every year in Europe in toxicology tests. In the UK, one-fifth of animal experiments are toxicology tests.
Methodology
Toxicity tests examine finished products such as pesticides, medications, cosmetics, food additives such as artificial sweeteners, packaging materials, and air fresheners, or their chemical ingredients. The substances are tested using a variety of methods, including dermal application, inhalation, oral administration, injection, or addition to water sources. They are applied to the skin or eyes; injected intravenously, intramuscularly, or subcutaneously; inhaled either by placing a mask over the animals, or by placing them in an inhalation chamber; or administered orally, by placing them in the animals' food or through a tube into the stomach. Doses may be given once, repeated regularly for many months, or for the lifespan of the animal. Toxicity tests can also be conducted on materials that need to be disposed of, such as sediment destined for disposal in a marine environment.
Initial toxicity tests often involve computer modelling (in silico) to predict toxicokinetic pathways, or to predict potential exposure points by modelling weather and water currents to determine which animals or regions will be most affected.
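As an illustration of the in silico stage, a minimal one-compartment toxicokinetic model with first-order elimination can be sketched as below; the dose, volume of distribution and half-life values are placeholders for demonstration, not data from any real study:

```python
import numpy as np

# One-compartment model: concentration falls exponentially from its peak
# (dose / volume of distribution) with a first-order elimination rate constant.
dose_mg = 10.0        # administered dose (mg), illustrative
v_d_litres = 42.0     # volume of distribution (L), illustrative
half_life_h = 6.0     # elimination half-life (h), illustrative

k_el = np.log(2) / half_life_h          # first-order elimination rate constant (1/h)
t = np.linspace(0, 48, 97)              # time points, 0-48 h in 0.5 h steps
concentration = (dose_mg / v_d_litres) * np.exp(-k_el * t)   # mg/L

print(f"peak concentration: {concentration[0]:.3f} mg/L")
print(f"concentration at 24 h: {concentration[48]:.3f} mg/L")  # t[48] == 24.0
```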
Other less intensive and more common in vitro toxicology tests involve, amongst others, microtox assays to observe bacterial growth and productivity. These can be adapted to plant life to measure photosynthesis levels and the growth of exposed plants.
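A growth-inhibition assay of this kind is commonly summarized by fitting a dose-response curve and reporting an EC50; the sketch below uses made-up data and a simple log-logistic model, not results from any actual assay:

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-parameter log-logistic curve: response is the fraction of control
# activity remaining at a given toxicant concentration.
def log_logistic(conc, ec50, hill):
    return 1.0 / (1.0 + (conc / ec50) ** hill)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])           # mg/L (hypothetical)
response = np.array([0.98, 0.95, 0.80, 0.45, 0.15, 0.04])   # fraction of control activity

(ec50, hill), _ = curve_fit(log_logistic, conc, response, p0=[2.0, 1.0])
print(f"estimated EC50: {ec50:.2f} mg/L (Hill slope {hill:.2f})")
```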
Contract research organizations
A contract research organization (CRO) is an organization that provides support to the pharmaceutical, biotechnology, chemical, and medical device industries in the form of research services outsourced on a contract basis. A CRO may provide toxicity testing services, along with others such as assay development, preclinical research, clinical research, clinical trials management, and pharmacovigilance. CROs also support foundations, research institutions, and universities, in addition to governmental organizations (such as the NIH, EMEA, etc.).
Regulation
United States
In the United States, toxicology tests are subject to Good Laboratory Practice guidelines and other Food and Drug Administration laws.
Europe
Animal testing for cosmetic purposes is currently banned all across the European Union.
See also
Animal testing
Children's Environmental Exposure Research Study
References
Further reading
External links
What is aquatic toxicity testing?
Genetic and Molecular Toxicology Assays, Safety Assessment, Animal Research Laboratories Agency.
emka TECHNOLOGIES Physiological data acquisition & analysis for preclinical research
Animal testing
Tests
Toxicology | Toxicology testing | [
"Chemistry",
"Environmental_science"
] | 953 | [
"Animal testing",
"Toxicology"
] |
14,712,857 | https://en.wikipedia.org/wiki/Nucleoside-specific%20porin | Nucleoside-specific porin (the tsx gene of Escherichia coli) is an outer membrane protein, Tsx, which constitutes the receptor for colicin K and bacteriophage T6, and functions as a substrate-specific channel for nucleosides and deoxynucleosides. The protein contains 294 amino acids, the first 22 of which are characteristic of a bacterial signal sequence peptide. Tsx shows no significant similarities to general bacterial porins.
References
Protein domains
Protein families
Outer membrane proteins | Nucleoside-specific porin | [
"Biology"
] | 129 | [
"Protein families",
"Protein domains",
"Protein classification"
] |
14,713,315 | https://en.wikipedia.org/wiki/Hyperdata | Hyperdata are data objects linked to other data objects in other places, as hypertext indicates text linked to other text in other places. Hyperdata enables the formation of a web of data, evolving from the "data on the Web" that is not inter-related (or at least, not linked).
In the same way that hypertext usually refers to the World Wide Web but is a broader term, hyperdata usually refers to the Semantic Web, but may also be applied more broadly to other data-linking technologies such as microformats – including XHTML Friends Network.
A hypertext link indicates that a link exists between two documents or "information resources". Hyperdata links go beyond simply such a connection, and express semantics about the kind of connection being made. For instance, in a document about Hillary Clinton, a hypertext link might be made from the word senator to a document about the United States Senate. In contrast, a hyperdata link from the same word to the same document might also state that senator was one of Hillary Clinton's roles, titles, or positions (depending on the ontology being used to define this link).
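A minimal sketch of such a typed link, using the rdflib library with hypothetical example.org URIs and an invented heldPositionIn predicate (a real deployment would use terms from its chosen ontology):

```python
from rdflib import Graph, Namespace, URIRef

# Hypothetical ontology namespace; the predicate below is illustrative only.
EX = Namespace("http://example.org/ontology#")

g = Graph()
clinton = URIRef("http://example.org/person/HillaryClinton")
senate = URIRef("http://example.org/org/UnitedStatesSenate")

# A hypertext link would only connect two documents; this hyperdata link also
# states what kind of connection exists between the two resources.
g.add((clinton, EX.heldPositionIn, senate))

print(g.serialize(format="turtle"))
```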
Semantic Web
The Semantic Web introduces the controversial concept of links to non-data resources. In the Semantic Web, links are not limited to "information resources" or documents, such as the typical Web page. Hyperdata links may refer to a physical structure (e.g., "the Eiffel Tower"), a place ("Champ de Mars" where the Eiffel Tower stands), a person (Gustave Eiffel, the man responsible for the tower's construction), or other "non-information resources". The links in this article are hypertext, not hyperdata, and they all lead to documents which describe the entities named.
A hyperdata browser (also called a Semantic Web browser), is a browser used to navigate the Semantic Web. Semantic Web architecture does not necessarily involve the HTML document format, which typical HTML Web browsers rely upon. A hyperdata browser specifically requests RDF data from Web servers, often through content negotiation or conneg, starting from the same URL as the traditional Web browser; the Web server may immediately return the requested RDF, or it may deliver a redirection to a new URI where the RDF may actually be found, or the RDF may be embedded in the same HTML document which would be returned to a Web browser which did not request RDF. The RDF data will generally describe the resource represented by the originally requested URI. The hyperdata browser then renders the information received as an HTML page that contains hyperlinks for users to navigate to indicated resources.
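The request a hyperdata client makes can be approximated with an ordinary HTTP call that asks for RDF instead of HTML; the sketch below uses the requests library and a hypothetical resource URI:

```python
import requests

# Hypothetical URI: a hyperdata browser starts from the same URI an HTML
# browser would use, but asks for RDF via the Accept header (conneg).
uri = "http://example.org/resource/EiffelTower"

response = requests.get(
    uri,
    headers={"Accept": "text/turtle, application/rdf+xml;q=0.9"},
    allow_redirects=True,  # the server may redirect to a document describing the resource
)

print(response.headers.get("Content-Type"))
print(response.text[:500])  # the RDF description, if the server supports content negotiation
```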
See also
Data Web
Linked data
Web resource
Web service
References
Hypertext
Hypermedia
Electronic literature | Hyperdata | [
"Technology"
] | 579 | [
"Multimedia",
"Hypermedia"
] |
14,713,486 | https://en.wikipedia.org/wiki/Frog%20hearing%20and%20communication | Frogs and toads produce a rich variety of sounds, calls, and songs during their courtship and mating rituals. The callers, usually males, make stereotyped sounds in order to advertise their location, their mating readiness and their willingness to defend their territory; listeners respond to the calls by return calling, by approach, and by going silent. These responses have been shown to be important for species recognition, mate assessment, and localization. Beginning with the pioneering experiments of Robert Capranica in the 1960s using playback techniques with normal and synthetic calls, behavioral biologists and neurobiologists have teamed up to use frogs and toads as a model system for understanding auditory function and evolution. It is now considered an important example of the neural basis of animal behavior, because of the simplicity of the sounds, the relative ease with which neurophysiological recordings can be made from the auditory nerve, and the reliability of localization behavior. Acoustic communication is essential for the frog's survival in both territorial defense and in localization and attraction of mates. Sounds from frogs travel through the air, through water, and through the substrate. Frogs and toads largely ignore sounds that are not conspecific calls or those of predators, with only louder noises startling the animals. Even then, unless major vibration is included, they usually do not take any action unless the source has been visually identified. The neural basis of communication and audition gives insights into the science of sound applied to human communication.
Sound communication
Behavioral ecology
Frogs are more often heard than seen, and other frogs (and researchers) rely on their calls to identify them. Depending on the region that the frog lives in, certain times of the year are better for breeding than others, and frogs may live away from the best breeding grounds when it is not the species’ mating season. During the breeding season, they congregate at the best breeding site and compete for call time and recognition. Species whose mating season is narrow because their breeding ponds dry up have the most vigorous calls.
Calling strategy
Male-male competition
In many frog species only males call. Each species has a distinct call, though even among the same species, different dialects are found in different regions. Although humans cannot detect the differences in dialects, frogs distinguish between regional dialects. For example, male bullfrogs can recognize the calls of their direct territorial neighbors. By ignoring the calls of these neighbors, they save energy, and only vocalize aggressively in response to an intruder's call. In this way, calls establish territories, but they also attract females.
Males may have a solitary call, which uses less energy, for times when there is no competition. During other times, when a frog must compete with hundreds or thousands of other frogs to be heard, together they perform a chorus in which each frog calls in turn. The most important feature of the chorus is the shared pattern. Through this pattern, few individuals' calls are drowned out.
One frog's call may be dominant and trigger the calls of the responding frogs in symphony. Calling is linked to physical size and females may be attracted to more vigorous calls.
Frogs in the same region chorus within their species and between different species. Frogs of the same species will retune their frequency so it is distinct from other frogs of the same species. Different species of frogs living in the same region have more dramatically different call frequencies. The frequency and durations of different species' calls vary similarly to the preference of that species' females. The neural circuitry of females of different species varies.
One frog that uses vocalizations in male-male competition is Lithobates clamitans, the green frog. It typically has four distinct types of calls, each signaling a different level of urgency. The first two are advertisement calls that establish dominance among challengers; the other two are directed more toward agonistic encounters.
Male-female interactions
Like the males, females can distinguish the minute differences between individual frogs. However, males and females are attuned to different parts of the advertisement call. For example, males of the onomatopoeically named coqui species are more attuned to the low-frequency co part of the call, whereas females are more attuned to the high-frequency qui. In fact, the order of the parts does not matter. Similarly, for females of the Tungara species, the female basilar papilla is biased towards a lower-than-average “chuck” portion of a male call. Experiments that measure vocal responses and approach behavior demonstrate these attunements.
Mode of sound communication
Calls are often sent through the air, but other mediums have been discovered. Some species call while they are underwater and the sound travels through the water. This is adaptive in a region with many species competing for air time. Narins has found female frog species that use solid surfaces, such as blades of grass and logs, upon which they tap rhythmically to attract mates. Also, Feng, Narins and colleagues have found that some species of frogs use ultrasound.
Sound production
The smallest frogs expend much energy to produce calls. To produce a vocalization, respiratory airflow passes from the lungs through the larynx and into the oral cavity, causing the vocal cords to oscillate. In addition, vocalizing muscles can make up 15% of a male spring peeper's body mass, while the same muscles make up only about 3% of a female's. Frogs produce sound with the air sac below the mouth, which can be seen from the outside inflating and deflating. Air from the lungs is channeled to the air sac, which resonates to make the sound louder. The larynx is larger and more developed in males, though otherwise not significantly different from that of females.
Frogs produce two types of calls that most experiments tend to focus on: release calling and mating calling. Only the male frogs are able to produce mating calls to attract gravid female frogs. When male and non-gravid female frogs are clasped by sexually active male frogs, they produce a release call. In the leopard frog, sound production involves three movements. First, there are body wall contractions, which increase the intra-pulmonary pressure. Second, in order for air flow to pass through the larynx, the glottis must be open. Third and last, in the larynx, the vocal cords must oppose each other at the midline so that the air flow can cause them to vibrate. In addition, their release calls and movements of their throats and sides are correlated with laryngeal calling movements. The concave-eared torrent frog (Amolops tormotus) produces sounds in the ultrasonic range. Three areas that are highly involved in frog calls are the preoptic area, the medulla-midbrain junction, and the medulla-spinal cord junction. The preoptic area is important for the frog to initiate mate calling. The medulla-midbrain junction is responsible for producing the calling motor pattern. The medulla-spinal cord junction contains the hypoglossal and vagus nuclei, which are vital for organizing the calling and breathing motor patterns.
Sound localization
Biologists once believed that frogs' ears are placed too close together to localize sound accurately. Frogs cannot hear short, high-frequency sounds. Sound is localized by the difference in the time at which it reaches each ear. A "vibration spot" near the lungs also vibrates in response to sound and may provide an additional localization cue.
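As a purely illustrative aside on the arrival-time cue described above, the sketch below computes the time difference available to a small pair of ears. The 12 mm ear spacing is an invented, roughly frog-sized number, and the simple d·sin(θ)/c far-field model ignores the internally coupled ears and the lung "vibration spot" that real frogs also exploit.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def interaural_time_difference(ear_spacing_m: float, angle_deg: float) -> float:
    """Arrival-time difference for a distant source at angle_deg from straight
    ahead, using the simple d * sin(theta) / c far-field model."""
    return ear_spacing_m * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND

if __name__ == "__main__":
    ear_spacing = 0.012  # 12 mm; invented, roughly frog-sized spacing
    for angle in (0, 30, 60, 90):
        itd_us = interaural_time_difference(ear_spacing, angle) * 1e6
        print(f"source at {angle:2d} degrees -> time difference {itd_us:5.1f} microseconds")
```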
Applications of frog neuroethology
Dr. Feng's work applies the neuroethology of frog communication to medicine. A recent project on hearing aids is based on how female frogs find their mates. Females must recognize the male they choose by his call. By localizing where his call is coming from she can find him. An additional challenge is that she is localizing his call while listening to the many other frogs in the chorus, and to the noise of the stream and insects. The breeding pond is a very noisy place, and females must distinguish a male's calls from the other noise. How they recognize the sound pattern of the male they are pursuing from the surrounding noise is similar to how intelligent hearing aids help people hear certain sounds and cancel out others. The underlying neural mechanisms are fast neural oscillations, and synaptic inhibition to cancel out noise. The timing and frequency of the sound also play a part in frog communication and may be used in Feng's work. He also studies bat echolocation to create intelligent hearing aids. He is also working on cochlear implants.
See also
Neuroethology
Frogs
Umwelt
Vision in toads
Animal echolocation
References
Notes
Capranica, Robert R. (1965). The Evoked Vocal Response of the Bullfrog. MIT Press, Cambridge, Massachusetts. (110 p.)
Albert S. Feng. Neuroscience Program University of Illinois at Urbana-Champaign. 17 Dec 2007
Long, Kim. Frogs A Wildlife Handbook. Boulder, Colorado: Johnson Printing, 1999.
Mundry, KM, and RR Capranica. "Correlation between auditory evoked responses in the thalamus and species-specific call characteristics. I Rana catesbeiana." Journal of Comp Physiology 160(1987): (4):477-89.
McClelland, BE., W. Wilczynski, and AS. Rand. Department of Psychology, University of Texas, Sexual dimorphism and species differences in the neurophysiology and morphology of the acoustic communication system of two neotropical hylids.
Narins, PM, and RR Capranica. "Neural adaptations for processing the two-note call of the Puerto Rican treefrog, Eleutherodactylus coqui." Brain Behavioral Evolution 17(1)(1980): 48-66.
External links
Neuroethology course link
http://instruct1.cit.cornell.edu/courses/bionb424/
Feng
http://www.life.uiuc.edu/neuroscience/people/showpeople.php?person=faculty/afeng1
https://web.archive.org/web/20070517215838/http://www.beckman.uiuc.edu/directory/index.php?qry=BY_NETID&type=BIO&filter=afeng1
https://web.archive.org/web/20071215062406/http://www.sciencemuseum.org.uk/antenna/ultrasonicfrogs/
Narins
http://www.physci.ucla.edu/Faculty/Narins/research/research.html
https://web.archive.org/web/20071027090303/http://www.acoustics.org/press/swa9501.html
http://www.physci.ucla.edu/Faculty/Narins/publications/pdfs/Sun%20and%20Narins%20BC.pdf
http://www.physci.ucla.edu/Faculty/Narins/publications/pdfs/NarinsJCP2.pdf
http://www.physci.ucla.edu/Faculty/Narins/publications/publications.html
Sound library
http://www.animalbehaviorarchive.org/assetSearch.do?method=searchCQL&query=%22Rana%22+and+%22catesbeiana%22&firstRecord=1&maximumRecords=9&totalResults=37&view=list&sortKeys=audioQual,ascending=false
https://web.archive.org/web/20071205032005/http://www.animalbehaviorarchive.org/assetSearchInterim.do
Animal communication
Frogs
Neurophysiology
Neuroethology | Frog hearing and communication | [
"Biology"
] | 2,522 | [
"Ethology",
"Behavior",
"Neuroethology"
] |
14,713,923 | https://en.wikipedia.org/wiki/5-Hydroxyeicosatetraenoic%20acid | 5-Hydroxyeicosatetraenoic acid (5-HETE, 5(S)-HETE, or 5S-HETE) is an eicosanoid, i.e. a metabolite of arachidonic acid. It is produced by diverse cell types in humans and other animal species. These cells may then metabolize the formed 5(S)-HETE to 5-oxo-eicosatetraenoic acid (5-oxo-ETE), 5(S),15(S)-dihydroxyeicosatetraenoic acid (5(S),15(S)-diHETE), or 5-oxo-15-hydroxyeicosatetraenoic acid (5-oxo-15(S)-HETE).
5(S)-HETE, 5-oxo-ETE, 5(S),15(S)-diHETE, and 5-oxo-15(S)-HETE, while differing in potencies, share a common mechanism for activating cells and a common set of activities. They are therefore a family of structurally related metabolites. Animal studies and a limited set of human studies suggest that this family of metabolites serve as hormone-like autocrine and paracrine signalling agents that contribute to the up-regulation of acute inflammatory and allergic responses. In this capacity, these metabolites may be members of the innate immune system.
In vitro studies suggest that 5(S)-HETE and/or other of its family members may also be active in promoting the growth of certain types of cancers, in simulating bone reabsorption, in signaling for the secretion of aldosterone and progesterone, in triggering parturition, and in contributing to other responses in animals and humans. However, the roles of 5(S)-HETE family members in these responses as well as in inflammation and allergy are unproven and will require much further study.
Among the 5(S)-HETE family members, 5(S)-HETE takes precedence over the other members of this family because it was the first to be discovered and has been studied far more thoroughly. However, 5-oxo-ETE is the most potent member of this family and therefore may be its critical member with respect to physiology and pathology; it has accordingly gained increasing attention in recent studies.
Nomenclature
5-Hydroxyeicosatetraenoic acid is more properly termed 5(S)-hydroxyicosatetraenoic acid or 5(S)-HETE) to signify the (S) configuration of its 5-hydroxy residue as opposed to its 5(R)-hydroxyicosatetraenoic acid (i.e., 5(R)-HETE) stereoisomer. Since 5(R)-HETE was rarely considered in the early literature, 5(S)-HETE was frequently termed 5-HETE. This practice occasionally continues. 5(S)-HETE's IUPAC name, (5S,6E,8Z,11Z,14Z)-5-hydroxyicosa-6,8,11,14-tetraenoic acid, defines 5(S)-HETE's structure unambiguously by notating not only its S-hydroxyl chirality but also the cis–trans isomerism geometry for each of its 4 double bonds; E signifies trans and Z signifies cis double bond geometry. The literature commonly uses an alternate but still unambiguous name for 5(S)-HETE viz., 5(S)-hydroxy-6E,8Z,11Z,14Z-eicosatetraenoic acid.
History of discovery
The Nobel laureate, Bengt I. Samuelsson, and colleagues first described 5(S)-HETE in 1976 as a metabolite of arachidonic acid made by rabbit neutrophils. Biological activity was linked to it several years later when it was found to stimulate human neutrophil rises in cytosolic calcium, chemotaxis, and increases in their cell surface adhesiveness as indicated by their aggregation to each other. Since a previously discovered arachidonic acid metabolite made by neutrophils, leukotriene B4 (LTB4), also stimulates human neutrophil calcium rises, chemotaxis, and auto-aggregation and is structurally similar to 5(S)-HETE in being a 5(S)-hydroxy-eicosateraenoate, it was assumed that 5(S)-HETE stimulated cells through the same cell surface receptors as those used by LTB4 viz., the leukotriene B4 receptors. However, further studies in neutrophils indicated that 5(S)-HETE acts through a receptor distinct from that used by LTB4 as well as various other neutrophil stimuli. This 5(S)-HETE receptor is termed the oxoeicosanoid receptor 1 (abbreviated as OXER1).
5(S)-HETE production
5(S)-HETE is a product of the cellular metabolism of the n-6 polyunsaturated fatty acid, arachidonic acid (i.e. 5Z,8Z,11Z,14Z-eicosatetraenoic acid), by ALOX5 (also termed arachidonate-5-lipoxygenase, 5-lipoxygenase, 5-LO, and 5-LOX). ALOX5 metabolizes arachidonic acid to its hydroperoxide derivative, arachidonic acid 5-hydroperoxide i.e. 5(S)-hydroperoxy-6E,8Z,11Z,14Z-eicosatetraenoic acid (5(S)-HpETE). 5(S)-HpETE may then be released and rapidly converted to 5(S)-HETE by ubiquitous cellular peroxidases: arachidonic acid → 5(S)-HpETE → 5(S)-HETE.
Alternatively, 5(S)-HpETE may be further metabolized to its epoxide, 5(6)-oxido-eicosatetraenoic acid viz., leukotriene A4 (i.e. 5S,6S-epoxy-7E,9E,11Z,14Z-eicosatetraenoic acid or 5S-5,6-oxido-7E,9E,11Z,14Z-eicosatetraenoic acid). Leukotriene A4 may then be further metabolized either to leukotriene B4 by leukotriene A4 hydrolase or to leukotriene C4 by leukotriene C4 synthase. Finally, leukotriene C4 may be metabolized to leukotriene D4 and then to leukotriene E4. The relative amounts of these metabolites made by specific cells and tissues depends in large part on the relative content of the appropriate enzymes.
The selective synthesis of 5(S)-HETE (i.e. synthesis of 5(S)-HETE without concurrent synthesis of 5(R)-HETE) by cells is dependent on, and generally proportionate to, the presence and levels of its forming enzyme, ALOX5. Human ALOX5 is highly expressed in cells that regulate innate immunity responses, particularly those involved in inflammation and allergy. Examples of such cells include neutrophils, eosinophils, B lymphocytes, monocytes, macrophages, mast cells, dendritic cells, and the monocyte-derived foam cells of atherosclerosis tissues. ALOX5 is also expressed but usually at relatively low levels in many other cell types. The production of 5(S)-HETE by these cells typically serves a physiological function. However, ALOX5 can become overexpressed at high levels in certain types of human cancer cells such as those of the prostate, lung, colon, colorectal and pancreatic as a consequence of their malignant transformation. In these cells, the ALOX5-dependent production of 5(S)-HETE appears to serve a pathological function viz., it promotes the growth and spread of the cancer cells.
5(S)-HETE may also be made in combination with 5(R)-HETE along with numerous other (S,R)-hydroxy polyunsaturated fatty acids as a consequence of the non-enzymatic oxidation reactions. Formation of these products can occur in any tissue subjected to oxidative stress.
5(S)-HETE metabolism
In addition to its intrinsic activity, 5(S)-HETE can serve as an intermediate that is converted to other bioactive products. Most importantly, 5-Hydroxyeicosanoid dehydrogenase (i.e. 5-HEDH) converts the 5-hydroxy residue of 5(S)-HETE to a ketone residue to form 5-oxo-eicosatetraenoic acid (i.e. 5-oxo-6E,8Z,11Z,14Z-eicosatetraenoate, abbreviated as 5-oxo-ETE). 5-HEDH is a reversibly acting NADP+/NADPH-dependent enzyme that catalyzes the following reversible reaction: 5(S)-HETE + NADP+ ⇌ 5-oxo-ETE + NADPH + H+.
5-HEDH acts bi-directionally: it preferentially oxidizes 5(S)-HETE to 5-oxo-ETE in the presence of excess NADP+ but preferentially reduces 5-oxo-ETE back to 5(S)-HETE in the presence of excess NADPH. Since cells typically maintain far higher levels of NADPH than NADP+, they usually make little or no 5-oxo-ETE. When undergoing oxidative stress, however, cells contain higher levels of NADP+ than NADPH and make 5-oxo-ETE preferentially. Additionally, in vitro studies indicate that cells can transfer their 5(S)-HETE to cells that contain high levels of 5-HEDH and NADP+ and therefore convert the transferred 5(S)-HETE to 5-oxo-ETE. It is suggested that 5-oxo-ETE forms preferentially in vivo under conditions of oxidative stress or conditions where ALOX5-rich cells can transfer their 5(S)-HETE to epithelial, endothelial, dendritic, and certain (e.g. prostate, breast, and lung) cancer cells which display little or no ALOX5 activity but have high levels of 5-HEDH and NADP+. Since 5-oxo-ETE is 30- to 100-fold more potent than 5(S)-HETE, 5-HEDH's main function may be to increase the biological impact of 5-HETE production.
Cells metabolize 5-(S)-HETE in other ways. They may use:
An acyltransferase to esterify 5(S)-HETE into their membrane phospholipids. This reaction may serve to store 5(S)-HETE for release during subsequent cell stimulation and/or to alter the properties of cell membranes in functionally important ways.
A cytochrome P450, probably CYP4F3, to metabolize 5(S)-HETE to 5(S),20-dihydroxy-eicosatetraenoate (5,20-diHETE). Since 5,20-diHETE is ~50- to 100-fold weaker than 5(S)-HETE in stimulating cells, this metabolism is proposed to represent a pathway for 5(S)-HETE inactivation.
ALOX15 to metabolize 5(S)-HETE to 5(S),15(S)-dihydroxy-eicosatetraenoate (5,15-diHETE). 5,15-diHETE is ~3- to 10-fold weaker than 5(S)-HETE in stimulating cells.
12-Lipoxygenase (i.e. ALOX12) to metabolize 5(S)-HETE to 5(S),12(S)-diHETE. The activity of this product has not yet been fully evaluated.
Cyclooxygenase-2 to metabolize 5(S)-HETE to 5(S),15(R)-diHETE and 5(S),11(R)-diHETE. The activity of these products have not yet been fully evaluated.
Aspirin-treated cyclooxygenase-2 to metabolize 5(S)-HETE to 5(S),15(R)-diHETE. The activity of this product has not yet been fully evaluated.
Alternate pathways that make some of the above products include: a) metabolism of 5(S)-HpETE to 5-oxo-ETE by cytochrome P450 (CYP) enzymes such as CYP1A1, CYP1A2, CYP1B1, and CYP2S1; b) conversion of 5-HETE to 5-oxo-ETE non-enzymatically by heme or other dehydrating agents; c) formation of 5-oxo-15(S)-hydroxy-ETE through 5-HEDH-based oxidation of 5(S),15(S)-dihydroxyicosatetraenoate; d) formation of 5(S),15(R)-dihydroxy-eicosatetraenoate by the attack of ALOX5 on 15-hydroxyicosatetraenoic acid (15(S)-HETE); e) formation of 5-oxo-15(S)-hydroxy-eicosatetraenoate (5-oxo-15(S)-hydroxy-ETE) by the arachidonate 15-lipoxygenase-1-based or arachidonate 15-lipoxygenase-2-based metabolism of 5-oxo-ETE; and f) conversion of 5(S)-HpETE and 5(R)-HpETE to 5-oxo-ETE by the action of a mouse macrophage 50-60 kilodalton cytosolic protein.
Mechanism of action
The OXER1 receptor
5(S)-HETE family members share a common receptor target for stimulating cells that differs from the receptors targeted by the other major products of ALOX5, i.e., leukotriene B4, leukotriene C4, leukotriene D4, leukotriene E4, lipoxin A4, and lipoxin B4. It and other members of the 5(S)-HETE family stimulate cells primarily by binding and thereby activating a dedicated G protein-coupled receptor, the oxoeicosanoid receptor 1 (i.e. OXER1, also termed the OXE, OXE-R, hGPCR48, HGPCR48, or R527 receptor). OXER1 couples to the G protein complex composed of the Gi alpha subunit (Gαi) and G beta-gamma complex (Gβγ); when bound to a 5-(S)-HETE family member, OXER1 triggers this G protein complex to dissociate into its Gαi and Gβγ components with Gβγ appearing to be the component responsible for activating the signal pathways which lead to cellular functional responses. The cell-activation pathways stimulated by OXER1 include those mobilizing calcium ions and activating MAPK/ERK, p38 mitogen-activated protein kinases, cytosolic phospholipase A2, PI3K/Akt, and protein kinase C beta and epsilon. The relative potencies of 5-oxo-ETE, 5-oxo-15(S)-HETE, 5(S)-HETE, 5(S),15(S)-diHETE, 5-oxo-20-hydroxy-ETE, 5(S),20-diHETE, and 5,15-dioxo-ETE in binding to, activating, and thereby stimulating cell responses through the OXER1 receptor are ~100, 30, 5–10, 1–3, 1–3, 1, and <1, respectively.
Other receptors
Progress in proving the role of the 5-HETE family of agonists and their OXER1 receptor in human physiology and disease has been made difficult because mice, rats, and the other rodents so far tested lack OXER1. Rodents are the most common in vivo models for investigating these issues. OXER1 is expressed in non-human primates, a wide range of other mammals, and various fish species, and a model of allergic airways disease in cats, which express OXER1 and make 5-oxo-ETE, has recently been developed for such studies. In any event, cultured mouse MA-10 Leydig cells, while responding to 5-oxo-ETE, lack OXER1. It is suggested that the responses of these cells, as well as those of mouse and other rodent cells, to 5-oxo-ETE are mediated by a receptor closely related to OXER1, viz., the mouse niacin receptor 1, Niacr1. Niacr1, an ortholog of OXER1, is a G protein-coupled receptor for niacin, and responds to 5-oxo-ETE. It has also been suggested that one or more of the mouse hydroxycarboxylic acid (HCA) family of the G protein-coupled receptors, HCA1 (GPR81), HCA2 (GPR109A), and HCA3 (GPR109B), which are G protein-coupled receptors for fatty acids, may be responsible for rodent responses to 5-oxo-ETE. It is possible that human cellular responses to 5-oxo-ETE and perhaps its analogs may involve, at least in isolated instances, one or more of these receptors.
PPARγ
5-Oxo-15(S)-hydroxy-ETE and to a lesser extent 5-oxo-ETE but not 5(S)-HETE also bind to and activate peroxisome proliferator-activated receptor gamma (PPARγ). Activation of OXER1 receptor and PPARγ by the oxo analogs can have opposing effects on cells. For example, 5-oxo-ETE-bound OXER1 stimulates while 5-oxo-ETE-bound PPARγ inhibits the proliferation of various types of human cancer cell lines.
Other mechanisms
5(S)-HETE acylated into the phosphatidylethanolamines fraction of human neutrophil membranes is associated with the inhibition of these cells from forming neutrophil extracellular traps, i.e. extracellular DNA scaffolds which contain neutrophil-derived antimicrobial proteins that circulate in blood and have the ability to trap bacteria. It seems unlikely that this inhibition reflects involvement of OXER1. 5-Oxo-ETE relaxes pre-contracted human bronchi by a mechanism that does not appear to involve OXER1 but is otherwise undefined.
Clinical significance
Inflammation
5(S)-HETE and other family members were first detected as products of arachidonic acid made by stimulated human polymorphonuclear neutrophils (PMN), a leukocyte blood cell type involved in host immune defense against infection but also implicated in aberrant pro-inflammatory immune responses such as arthritis; soon thereafter they were found to be active also in stimulating these cells to migrate (i.e. chemotaxis), degranulate (i.e. release the anti-bacterial and tissue-injuring contents of their granules), produce bactericidal and tissue-injuring reactive oxygen species, and mount other pro-defensive as well as pro-inflammatory responses of the innate immune system. For example, the gram-negative bacterium, Salmonella typhimurium, and lipopolysaccharide, a component of the outer surface of gram-negative bacteria, promote the production of 5(S)-HETE and 5-oxo-ETE by human neutrophils. The family members stimulate another blood cell of the innate immunity system, the human monocyte, acting synergistically with the pro-inflammatory CC chemokines, monocyte chemotactic protein-1 and monocyte chemotactic protein-3, to stimulate monocyte function. 5-Oxo-ETE also stimulates two other cell types that share responsibility with the PMN for regulating inflammation, the human lymphocyte and dendritic cell. And, in in vivo studies, the injection of 5-oxo-ETE into the skin of human volunteers causes the local accumulation of PMN and monocyte-derived macrophages. Furthermore, the production of one or more 5(S)-HETE family members as well as the expression of orthologs of the human OXER1 receptor occur in various mammalian species including dogs, cats, cows, sheep, elephants, pandas, opossums, and ferrets and in several species of fish; for example, cats undergoing experimentally induced asthma accumulate 5-oxo-ETE in their lung lavage fluid, feline leucocytes make as well as respond to 5-oxo-ETE by an oxer1-dependent mechanism; and an OXER1 ortholog and, apparently, 5-oxo-ETE are necessary for the inflammatory response to tissue damage caused by osmolarity insult in zebrafish.
These results given above suggest that members of the 5-oxo-ETE family and the OXER1 receptor or its orthologs may contribute to protection against microbes, the repair of damaged tissues, and pathological inflammatory responses in humans and other animal species. However, an OXER1 ortholog is absent in mice and other rodents; while rodent tissues do exhibit responsiveness to 5-oxo-ETE, the lack of an oxer1 or other clear 5-oxoETE receptor in such valued animal models of diseases as rodents has impeded progress in our understanding of the physiological and pathological roles of 5-oxo-ETE.
Allergy
The following human cell types or tissues implicated in allergic reactivity produce 5-HETE (stereoisomer typically not defined): alveolar macrophages isolated from asthmatic and non-asthmatic patients, basophils isolated from blood and challenged with anti-IgE antibody, mast cells isolated from lung, cultured pulmonary artery endothelial cells, isolated human pulmonary vasculature, and allergen-sensitized human lung specimens challenged with specific allergen. Additionally, cultured human airway epithelial cell lines, normal bronchial epithelium, and bronchial smooth muscle cells convert 5(S)-HETE to 5-oxo-ETE in a reaction that is greatly increased by oxidative stress, which is a common component in allergic inflammatory reactions. Finally, 5-HETE is found in the bronchoalveolar lavage fluid of asthmatic humans and 5-oxo-ETE is found in the bronchoalveolar lavage fluid of cats undergoing allergen-induced bronchospasm.
Among the 5-HETE family of metabolites, 5-oxo-ETE is implicated as the most likely member to contribute to allergic reactions. It has exceptionally high potency in stimulating the chemotaxis, release of granule-bound tissue-injuring enzymes, and production of tissue-injuring reactive oxygen species of a cell type involved in allergic reactions, the human eosinophil granulocyte. It is also exceptionally potent in stimulating eosinophils to activate cytosolic phospholipase A2 (PLA2G4A) and possibly thereby to form platelet-activating factor (PAF) as well as metabolites of the 5-HETE family. PAF is itself a proposed mediator of human allergic reactions which commonly forms concurrently with 5-HETE family metabolites in human leukocytes and acts synergistically with these metabolites, particularly 5-oxo-ETE, to stimulate eosinophils. 5-Oxo-ETE also cooperates positively with at least four other potential contributors to allergic reactions, RANTES, eotaxin, granulocyte macrophage colony-stimulating factor, and granulocyte colony-stimulating factor in stimulating human eosinophils and is a powerful stimulator of chemotaxis in another cell type contributing to allergic reactions, the human basophil granulocyte. Finally, 5-oxo-ETE stimulates the infiltration of eosinophils into the skin of humans following its intradermal injection (its actions are more pronounced in asthmatic compared to healthy subjects) and when instilled into the trachea of Brown Norway rats causes eosinophils to infiltrate lung. These results suggest that the 5-oxo-ETE made at the initial tissue site of allergen insult acting through the OXER1 on target cells attracts circulating eosinophils and basophils to lung, nasal passages, skin, and possibly other sites of allergen deposition to contribute to asthma, rhinitis, and dermatitis, and other sites of allergic reactivity.
The role of 5-HETE family agonists in the bronchoconstriction of airways (a hallmark of allergen-induced asthma) in humans is currently unclear. 5-HETE stimulates the contraction of isolated human bronchial muscle, enhances the ability of histamine to contract this muscle, and contracts guinea pig lung strips. 5-Oxo-ETE also stimulates contractile responses in fresh bronchi, cultured bronchi, and cultured lung smooth muscle taken from guinea pigs but in direct contrast to these studies is reported to relax bronchi isolated from humans. The latter bronchi contractile responses were blocked by cyclooxygenase-2 inhibition or a thromboxane A2 receptor antagonist and therefore appear mediated by 5-oxo-ETE-induced production of this thromboxane. In all events, the relaxing action of 5-oxo-ETE on human bronchi does not appear to involve OXER1.
Cancer
The 5-oxo-ETE family of agonists have also been proposed to contribute to the growth of several types of human cancers. This is based on their ability to stimulate certain cultured human cancer cell lines to proliferate, the presence of OXER1 mRNA and/or protein in these cell lines, the production of 5-oxo-ETE family members by these cell lines, the induction of cell death (i.e. apoptosis) by inhibiting 5-lipoxygenase in these cells, and/or the overexpression of 5-lipoxygenase in tissue taken from the human tumors. Human cancers whose growth has been implicated by these studies as being mediated at least in part by a member(s) of the 5-oxo-ETE family include those of the prostate, breast, lung, ovary, and pancreas.
Steroid production
5(S)-HETE and 5(S)-HpETE stimulate the production of progesterone by cultured rat ovarian glomerulosa cells and enhance the secretion of progesterone and testosterone by cultured rat testicular Leydig cells. Both metabolites are made by cyclic adenosine monophosphate-stimulated MA-10 mouse Leydig cells; they stimulate these cells to transcribe steroidogenic acute regulatory protein and, in consequence, to produce the steroids. The results suggest that trophic hormones (e.g., luteinizing hormone, adrenocorticotropic hormone) stimulate these steroid-producing cells to make 5(S)-HETE and 5(S)-HpETE, which in turn increase the synthesis of steroidogenic acute regulatory protein; the latter protein promotes the rate-limiting step in steroidogenesis, transfer of cholesterol from the outer to the inner membrane of mitochondria, and thereby acts in conjunction with trophic hormone-induced activation of protein kinase A to make progesterone and testosterone. This pathway may also operate in humans: human H295R adrenocortical cells do express OXER1 and respond to 5-oxo-ETE by increasing the transcription of steroidogenic acute regulatory protein messenger RNA as well as the production of aldosterone and progesterone by an apparent OXER1-dependent pathway.
Rat and mouse cells lack OXER1. It has been suggested that the cited mouse MA-10 cell responses to 5-oxo-ETE are mediated by an ortholog of OXER1, mouse niacin receptor 1, Niacr1, which is a G protein-coupled receptor mediating the activity of niacin, or by one or more of the mouse hydroxycarboxylic acid (HCA) family of the G protein-coupled receptors, HCA1 (GPR81), HCA2 (GPR109A), and HCA3 (GPR109B), which are G protein-coupled receptors for fatty acids.
Bone remodeling
In an in vitro mixed culture system, 5(S)-HETE is released by monocytes to stimulate, at sub-nanomolar concentrations, osteoclast-dependent bone resorption. It also inhibits bone morphogenetic protein-2 (BMP-2)-induced bone-like nodule formation in mouse calvarial organ cultures. These results suggest that 5(S)-HETE, and perhaps more potently 5-oxo-ETE, contribute to the regulation of bone remodeling.
Parturition
5(S)-HETE is elevated in the human uterus during labor; at 3–150 nM it increases both the rate of spontaneous contractions and the overall contractility of myometrial strips obtained, at term but prior to labor, from human lower uterine segments; and in an in vitro system it crosses either amnion or intact amnion-chorion-decidua, and thereby may, along with prostaglandin E2, move from the amnion to the uterus during labor in humans. These studies suggest that 5(S)-HETE, perhaps in cooperation with the established role of prostaglandin E2, may play a role in the onset of human labor.
Other actions
5(S)-HETE is reported to modulate tubuloglomerular feedback. 5(S)-HpETE is also reported to inhibit the Na+/K+-ATPase activity of synaptosome membrane preparations made from rat cerebral cortex and may thereby inhibit synapse-dependent communication between neurons.
5(S)-HETE acylated into phosphatidylethanolamine is reported to increase the stimulated production of superoxide anion and interleukin-8 release by isolated human neutrophils and to inhibit the formation of neutrophil extracellular traps (i.e. NETS); NETS trap blood-circulating bacteria to assist in their neutralization. 5(S)-HETE esterified to phosphatidylcholine and glycerol esters by human endothelial cells is reported to be associated with the inhibition of prostaglandin production.
See also
Arachidonic acid
5-Lipoxygenase
5-Oxo-eicosatetraenoic acid
Leukotriene B4
Polyunsaturated fat
12-Hydroxyeicosatetraenoic acid
15-Hydroxyeicosatetraenoic acid
References
External links
5-LOX Gene Atlas entry
5-LOX entry in Atlas of Genetics and Cytogenetics in Oncology and Haematology entry
Human physiology
Animal physiology
Fatty acids
Eicosanoids
Immunology
Inflammations
Cell signaling | 5-Hydroxyeicosatetraenoic acid | [
"Biology"
] | 7,001 | [
"Immunology",
"Animals",
"Animal physiology"
] |
14,713,975 | https://en.wikipedia.org/wiki/Narcissus%20%C3%97%20medioluteus | Narcissus × medioluteus (syn. Narcissus biflorus), common names primrose-peerless, April beauty, cemetery ladies, loving couples, pale narcissus, twin sisters, two-flowered narcissus, is a flowering plant, which is a naturally occurring hybrid between Narcissus poeticus and Narcissus tazetta (informally called "poetaz hybrid"). It was found initially in the West of France.
This first poetaz narcissus has long been grown as a garden ornamental and has also become naturalised in Great Britain, Ireland, Switzerland, Spain, Portugal, the former Yugoslavia, Madeira, New Zealand, and in scattered locales in the eastern United States (Michigan, Illinois, Missouri, Indiana, Ohio, Kentucky, Tennessee, Georgia, Alabama, Arkansas, Louisiana, Maryland, North Carolina, South Carolina and Virginia).
The flowers are generally held in pairs, hence the common names "Twin Sisters" and "Loving Couples". The fragrant cream flowers (medioluteus) are smaller than those of Narcissus poeticus. The cup lacks a red edge.
Other poetaz hybrids have several flowers per stem, and some have double flowers.
Gallery
photo of herbarium specimen at Missouri Botanical Garden, collected in Missouri, Narcissus x medioluteus
References
medioluteus
Hybrid plants
Flora of France
Plants described in 1768 | Narcissus × medioluteus | [
"Biology"
] | 298 | [
"Hybrid plants",
"Plants",
"Hybrid organisms"
] |
1,039,124 | https://en.wikipedia.org/wiki/Stellar%20structure | Stellar structure models describe the internal structure of a star in detail and make predictions about the luminosity, the color and the future evolution of the star. Different classes and ages of stars have different internal structures, reflecting their elemental makeup and energy transport mechanisms.
Heat transport
For energy transport refer to Radiative transfer.
Different layers of the stars transport heat up and outwards in different ways, primarily convection and radiative transfer, but thermal conduction is important in white dwarfs.
Convection is the dominant mode of energy transport when the temperature gradient is steep enough so that a given parcel of gas within the star will continue to rise if it rises slightly via an adiabatic process. In this case, the rising parcel is buoyant and continues to rise if it is warmer than the surrounding gas; if the rising parcel is cooler than the surrounding gas, it will fall back to its original height. In regions with a low temperature gradient and a low enough opacity to allow energy transport via radiation, radiation is the dominant mode of energy transport.
The internal structure of a main sequence star depends upon the mass of the star.
In stars with masses of 0.3–1.5 solar masses (M☉), including the Sun, hydrogen-to-helium fusion occurs primarily via proton–proton chains, which do not establish a steep temperature gradient. Thus, radiation dominates in the inner portion of solar mass stars. The outer portion of solar mass stars is cool enough that hydrogen is neutral and thus opaque to ultraviolet photons, so convection dominates. Therefore, solar mass stars have radiative cores with convective envelopes in the outer portion of the star.
In massive stars (greater than about 1.5 M☉), the core temperature is above about 1.8×10⁷ K, so hydrogen-to-helium fusion occurs primarily via the CNO cycle. In the CNO cycle, the energy generation rate scales as the temperature to the 15th power, whereas the rate scales as the temperature to the 4th power in the proton-proton chains. Due to the strong temperature sensitivity of the CNO cycle, the temperature gradient in the inner portion of the star is steep enough to make the core convective. In the outer portion of the star, the temperature gradient is shallower but the temperature is high enough that the hydrogen is nearly fully ionized, so the star remains transparent to ultraviolet radiation. Thus, massive stars have a radiative envelope.
The lowest mass main sequence stars have no radiation zone; the dominant energy transport mechanism throughout the star is convection.
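As a rough, hedged illustration of why the CNO cycle takes over above the quoted core temperature, the snippet below compares the two temperature scalings given above (rate ∝ T⁴ for the proton–proton chains, rate ∝ T¹⁵ for the CNO cycle). The normalization is arbitrary and chosen only so the two rates cross at 1.8×10⁷ K; real reaction rates also depend on density and composition.

```python
T_CROSS = 1.8e7  # kelvin; crossover temperature quoted in the text

def rate_pp(temperature_k: float) -> float:
    """Proton-proton chain rate, scaling as T^4 (arbitrary units)."""
    return (temperature_k / T_CROSS) ** 4

def rate_cno(temperature_k: float) -> float:
    """CNO cycle rate, scaling as T^15 (arbitrary units)."""
    return (temperature_k / T_CROSS) ** 15

if __name__ == "__main__":
    for t in (1.0e7, 1.5e7, 1.8e7, 2.5e7, 3.0e7):
        ratio = rate_cno(t) / rate_pp(t)
        dominant = "CNO" if ratio > 1.0 else "pp"
        print(f"T = {t:.1e} K   CNO/pp = {ratio:10.3g}   dominant: {dominant}")
```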
Equations of stellar structure
The simplest commonly used model of stellar structure is the spherically symmetric quasi-static model, which assumes that a star is in a steady state and that it is spherically symmetric. It contains four basic first-order differential equations: two represent how matter and pressure vary with radius; two represent how temperature and luminosity vary with radius.
In forming the stellar structure equations (exploiting the assumed spherical symmetry), one considers the matter density $\rho(r)$, temperature $T(r)$, total pressure (matter plus radiation) $P(r)$, luminosity $l(r)$, and energy generation rate per unit mass $\epsilon(r)$ in a spherical shell of a thickness $\mathrm{d}r$ at a distance $r$ from the center of the star. The star is assumed to be in local thermodynamic equilibrium (LTE) so the temperature is identical for matter and photons. Although LTE does not strictly hold because the temperature a given shell "sees" below itself is always hotter than the temperature above, this approximation is normally excellent because the photon mean free path, $\lambda$, is much smaller than the length over which the temperature varies considerably, i.e. $\lambda \ll T/|\nabla T|$.
First is a statement of hydrostatic equilibrium: the outward force due to the pressure gradient within the star is exactly balanced by the inward force due to gravity. This is sometimes referred to as stellar equilibrium.
$\frac{dP}{dr} = -\frac{G m \rho}{r^{2}},$
where $m(r)$ is the cumulative mass inside the shell at radius $r$ and $G$ is the gravitational constant. The cumulative mass increases with radius according to the mass continuity equation: $\frac{dm}{dr} = 4\pi r^{2}\rho.$
Integrating the mass continuity equation from the star center ($r = 0$) to the radius of the star ($r = R$) yields the total mass of the star.
Considering the energy leaving the spherical shell yields the energy equation:
$\frac{dl}{dr} = 4\pi r^{2}\rho\,(\epsilon - \epsilon_{\nu}),$
where $\epsilon_{\nu}$ is the luminosity produced in the form of neutrinos (which usually escape the star without interacting with ordinary matter) per unit mass. Nuclear reactions occur only in the core of the star; outside the core no energy is generated, so the luminosity is constant.
The energy transport equation takes differing forms depending upon the mode of energy transport. For conductive energy transport (appropriate for a white dwarf), the energy equation is $\frac{dT}{dr} = -\frac{l}{4\pi r^{2} k},$
where k is the thermal conductivity.
In the case of radiative energy transport, appropriate for the inner portion of a solar mass main sequence star and the outer envelope of a massive main sequence star, $\frac{dT}{dr} = -\frac{3 \kappa \rho l}{64 \pi \sigma r^{2} T^{3}},$
where $\kappa$ is the opacity of the matter, $\sigma$ is the Stefan–Boltzmann constant, and the Boltzmann constant is set to one.
The case of convective energy transport does not have a known rigorous mathematical formulation, and involves turbulence in the gas. Convective energy transport is usually modeled using mixing length theory. This treats the gas in the star as containing discrete elements which roughly retain the temperature, density, and pressure of their surroundings but move through the star as far as a characteristic length, called the mixing length. For a monatomic ideal gas, when the convection is adiabatic, meaning that the convective gas bubbles don't exchange heat with their surroundings, mixing length theory yields $\frac{dT}{dr} = \left(1 - \frac{1}{\gamma}\right) \frac{T}{P} \frac{dP}{dr},$
where $\gamma$ is the adiabatic index, the ratio of specific heats in the gas. (For a fully ionized ideal gas, $\gamma = 5/3$.) When the convection is not adiabatic, the true temperature gradient is not given by this equation. For example, in the Sun the convection at the base of the convection zone, near the core, is adiabatic but that near the surface is not. The mixing length theory contains two free parameters which must be set to make the model fit observations, so it is a phenomenological theory rather than a rigorous mathematical formulation.
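As a hedged sketch of how a structure code chooses between the radiative and adiabatic (convective) gradients just given, the snippet below evaluates both at a single point and keeps whichever is shallower, which amounts to the Schwarzschild stability criterion (standard practice, though the criterion is not named in the text). The local values in the example are invented round numbers; SIGMA is the Stefan–Boltzmann constant in SI units.

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiative_gradient(kappa, rho, lum, r, temp):
    """dT/dr (negative) if all the luminosity were carried by radiation."""
    return -3.0 * kappa * rho * lum / (64.0 * math.pi * SIGMA * r**2 * temp**3)

def adiabatic_gradient(temp, pressure, dP_dr, gamma=5.0 / 3.0):
    """dT/dr (negative) for adiabatic convection, from the mixing-length limit."""
    return (1.0 - 1.0 / gamma) * (temp / pressure) * dP_dr

def local_dT_dr(kappa, rho, lum, r, temp, pressure, dP_dr):
    """Convection sets in when the radiative gradient would be steeper
    (more negative) than the adiabatic one; otherwise radiation carries it."""
    rad = radiative_gradient(kappa, rho, lum, r, temp)
    ad = adiabatic_gradient(temp, pressure, dP_dr)
    return ad if rad < ad else rad

if __name__ == "__main__":
    # Invented, loosely Sun-like interior values, purely for illustration.
    point = dict(kappa=1.0, rho=1.0e2, lum=3.8e26, r=5.0e8,
                 temp=3.0e6, pressure=1.0e13, dP_dr=-2.0e4)
    print("chosen dT/dr =", local_dT_dr(**point), "K/m")
```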
Also required are the equations of state, relating the pressure, opacity and energy generation rate to other local variables appropriate for the material, such as temperature, density, chemical composition, etc. Relevant equations of state for pressure may have to include the perfect gas law, radiation pressure, pressure due to degenerate electrons, etc. Opacity cannot be expressed exactly by a single formula. It is calculated for various compositions at specific densities and temperatures and presented in tabular form. Stellar structure codes (meaning computer programs calculating the model's variables) either interpolate in a density-temperature grid to obtain the opacity needed, or use a fitting function based on the tabulated values. A similar situation occurs for accurate calculations of the pressure equation of state. Finally, the nuclear energy generation rate is computed from nuclear physics experiments, using reaction networks to compute reaction rates for each individual reaction step and equilibrium abundances for each isotope in the gas.
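The grid-interpolation step mentioned above can be sketched very simply: the snippet below bilinearly interpolates log-opacity on a tiny, made-up (log T, log ρ) table. The numbers are invented; real codes use much larger tabulations (for example the OPAL tables) and more careful interpolation or fitting schemes.

```python
import math

# Invented toy opacity table: rows are log10(T), columns are log10(rho),
# entries are log10(kappa) in cm^2/g. Real tables are far larger.
LOG_T = [6.0, 6.5, 7.0]
LOG_RHO = [-2.0, -1.0, 0.0]
LOG_KAPPA = [
    [0.8, 1.1, 1.4],
    [0.3, 0.6, 0.9],
    [-0.1, 0.2, 0.5],
]

def _bracket(grid, value):
    """Index i with grid[i] <= value <= grid[i+1] (clamped) and the fraction."""
    i = max(0, min(len(grid) - 2, sum(g <= value for g in grid) - 1))
    return i, (value - grid[i]) / (grid[i + 1] - grid[i])

def opacity(temperature, density):
    """Bilinear interpolation of the toy table; returns kappa in cm^2/g."""
    it, ft = _bracket(LOG_T, math.log10(temperature))
    ir, fr = _bracket(LOG_RHO, math.log10(density))
    k00, k01 = LOG_KAPPA[it][ir], LOG_KAPPA[it][ir + 1]
    k10, k11 = LOG_KAPPA[it + 1][ir], LOG_KAPPA[it + 1][ir + 1]
    log_k = (k00 * (1 - ft) * (1 - fr) + k10 * ft * (1 - fr)
             + k01 * (1 - ft) * fr + k11 * ft * fr)
    return 10.0 ** log_k

if __name__ == "__main__":
    print(f"kappa(T=3e6 K, rho=0.05 g/cm^3) ~ {opacity(3.0e6, 0.05):.2f} cm^2/g")
```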
Combined with a set of boundary conditions, a solution of these equations completely describes the behavior of the star. Typical boundary conditions set the values of the observable parameters appropriately at the surface ($r = R$) and center ($r = 0$) of the star: $P(R) = 0$, meaning the pressure at the surface of the star is zero; $m(0) = 0$, there is no mass inside the center of the star, as required if the mass density remains finite; $m(R) = M$, the total mass of the star is the star's mass; and $T(R) = T_{\mathrm{eff}}$, the temperature at the surface is the effective temperature of the star.
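To make the coupled equations and boundary conditions concrete, here is a minimal, hedged sketch that integrates only the mass-continuity and hydrostatic-equilibrium equations outward from the center with a forward-Euler step, closing the system with a polytropic equation of state P = Kρ^(5/3) in place of the full temperature and luminosity treatment. The central density and polytropic constant are invented round numbers, so the resulting radius and mass are illustrative only; a real stellar-structure code integrates all four equations with tabulated opacities and a realistic equation of state.

```python
import math

G = 6.674e-11          # gravitational constant, SI units

# Invented polytropic model parameters (illustrative only).
GAMMA = 5.0 / 3.0      # polytropic exponent
K = 1.0e8              # polytropic constant, SI units (arbitrary choice)
RHO_C = 1.0e5          # central density, kg/m^3 (arbitrary choice)

def rho_from_pressure(P: float) -> float:
    """Invert the polytropic equation of state P = K * rho**GAMMA."""
    return (P / K) ** (1.0 / GAMMA)

def integrate(dr: float = 1.0e4):
    """Forward-Euler integration of dm/dr and dP/dr from center to surface."""
    r = dr                      # start slightly off-center to avoid r = 0
    rho = RHO_C
    P = K * rho ** GAMMA
    m = (4.0 / 3.0) * math.pi * r**3 * rho   # mass of the innermost sphere
    while P > 0.0:
        dm = 4.0 * math.pi * r**2 * rho * dr   # mass continuity
        dP = -G * m * rho / r**2 * dr          # hydrostatic equilibrium
        m, P, r = m + dm, P + dP, r + dr
        if P <= 0.0:
            break               # pressure has dropped to zero: stellar surface
        rho = rho_from_pressure(P)
    return r, m

if __name__ == "__main__":
    radius, mass = integrate()
    print(f"model radius ~ {radius:.3e} m, model mass ~ {mass:.3e} kg")
```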
Although nowadays stellar evolution models describe the main features of color–magnitude diagrams, important improvements have to be made in order to remove uncertainties which are linked to the limited knowledge of transport phenomena. The most difficult challenge remains the numerical treatment of turbulence. Some research teams are developing simplified modelling of turbulence in 3D calculations.
Rapid evolution
The above simplified model is not adequate without modification in situations when the composition changes are sufficiently rapid. The equation of hydrostatic equilibrium may need to be modified by adding a radial acceleration term if the radius of the star is changing very quickly, for example if the star is radially pulsating. Also, if the nuclear burning is not stable, or the star's core is rapidly collapsing, an entropy term must be added to the energy equation.
See also
Scale height
Standard solar model
References
Sources
External links
opacity code retrieved November 2009
The Yellow CESAM code, stellar evolution and structure Fortran source code
EZ to Evolve ZAMS Stars a FORTRAN 90 software derived from Eggleton's Stellar Evolution Code, a web-based interface can be found here .
Geneva Grids of Stellar Evolution Models (some of them including rotational induced mixing)
The BaSTI database of stellar evolution tracks
Stellar atmospheres: A contribution to the observational study of high temperature in the reversing layers of stars, (1925) by Cecilia Payne-Gaposchkin, Cambridge: The Observatory.
Structure
Stellar astronomy classification systems
Concepts in stellar astronomy | Stellar structure | [
"Physics",
"Astronomy"
] | 1,827 | [
"Stellar astronomy classification systems",
"Concepts in astrophysics",
"Astronomical classification systems",
"Concepts in stellar astronomy",
"Astronomical sub-disciplines",
"Stellar astronomy"
] |
1,039,146 | https://en.wikipedia.org/wiki/Directional%20drilling | Directional drilling (or slant drilling) is the practice of drilling non-vertical bores. It can be broken down into four main groups: oilfield directional drilling, utility installation directional drilling, directional boring (horizontal directional drilling - HDD), and surface in seam (SIS), which horizontally intersects a vertical bore target to extract coal bed methane.
History
Many prerequisites enabled this suite of technologies to become productive. Probably the first requirement was the realization that oil wells, or water wells, do not necessarily need to be vertical. This realization was quite slow, and did not really grasp the attention of the oil industry until the late 1920s when there were several lawsuits alleging that wells drilled from a rig on one property had crossed the boundary and were penetrating a reservoir on an adjacent property. Initially, proxy evidence such as production changes in other wells was accepted, but such cases fueled the development of small diameter tools capable of surveying wells during drilling. Horizontal directional drill rigs continue to develop toward larger scale, miniaturized components, mechanical automation, operation in hard strata, and monitored drilling over ever greater lengths and depths.
Measuring the inclination of a wellbore (its deviation from the vertical) is comparatively simple, requiring only a pendulum. Measuring the azimuth (the direction, with respect to the geographic grid, in which the deviated wellbore is heading), however, was more difficult. In certain circumstances, magnetic fields could be used, but would be influenced by metalwork used inside wellbores, as well as the metalwork used in drilling equipment. The next advance was in the modification of small gyroscopic compasses by the Sperry Corporation, which was making similar compasses for aeronautical navigation. Sperry did this under contract to Sun Oil (which was involved in a lawsuit as described above), and a spin-off company "Sperry Sun" was formed, which brand continues to this day, absorbed into Halliburton. Three components are measured at any given point in a wellbore in order to determine its position: the depth of the point along the course of the borehole (measured depth), the inclination at the point, and the magnetic azimuth at the point. These three components combined are referred to as a "survey". A series of consecutive surveys are needed to track the progress and location of a wellbore.
Prior experience with rotary drilling had established several principles for the configuration of drilling equipment down hole ("bottom hole assembly" or "BHA") that would be prone to "drilling crooked hole" (i.e., initial accidental deviations from the vertical would be increased). Counter-experience had also given early directional drillers ("DD's") principles of BHA design and drilling practice that would help bring a crooked hole nearer the vertical.
In 1934, H. John Eastman and Roman W. Hines of Long Beach, California, became pioneers in directional drilling when they and George Failing of Enid, Oklahoma, saved the Conroe, Texas, oil field. Failing had recently patented a portable drilling truck. He had started his company in 1931 when he mated a drilling rig to a truck and a power take-off assembly. The innovation allowed rapid drilling of a series of slanted wells. This capacity to quickly drill multiple relief wells and relieve the enormous gas pressure was critical to extinguishing the Conroe fire. In a May, 1934, Popular Science Monthly article, it was stated that "Only a handful of men in the world have the strange power to make a bit, rotating a mile below ground at the end of a steel drill pipe, snake its way in a curve or around a dog-leg angle, to reach a desired objective." Eastman Whipstock, Inc., would become the world's largest directional company in 1973.
Combined, these survey tools and BHA designs made directional drilling possible, but it was perceived as arcane. The next major advance was in the 1970s, when downhole drilling motors (aka mud motors, driven by the hydraulic power of drilling mud circulated down the drill string) became common. These allowed the drill bit to continue rotating at the cutting face at the bottom of the hole, while most of the drill pipe was held stationary. A piece of bent pipe (a "bent sub") between the stationary drill pipe and the top of the motor allowed the direction of the wellbore to be changed without needing to pull all the drill pipe out and place another whipstock. Coupled with the development of measurement while drilling tools (using mud pulse telemetry, networked or wired pipe or electromagnetism (EM) telemetry, which allows tools down hole to send directional data back to the surface without disturbing drilling operations), directional drilling became easier.
Certain profiles cannot be easily drilled while the drill pipe is rotating. Drilling directionally with a downhole motor requires occasionally stopping rotation of the drill pipe and "sliding" the pipe through the channel as the motor cuts a curved path. "Sliding" can be difficult in some formations, and it is almost always slower and therefore more expensive than drilling while the pipe is rotating, so the ability to steer the bit while the drill pipe is rotating is desirable. Several companies have developed tools which allow directional control while rotating. These tools are referred to as rotary steerable systems (RSS). RSS technology has made access and directional control possible in previously inaccessible or uncontrollable formations.
Benefits
Wells are drilled directionally for several purposes:
Increasing the exposed section length through the reservoir by drilling through the reservoir at an angle.
Drilling into the reservoir where vertical access is difficult or not possible. For instance an oilfield under a town, under a lake, or underneath a difficult-to-drill formation.
Allowing more wellheads to be grouped together on one surface location can allow fewer rig moves, less surface area disturbance, and make it easier and cheaper to complete and produce the wells. For instance, on an oil platform or jacket offshore, 40 or more wells can be grouped together. The wells will fan out from the platform into the reservoir(s) below. This concept is being applied to land wells, allowing multiple subsurface locations to be reached from one pad, reducing costs.
Drilling along the underside of a reservoir-constraining fault allows multiple productive sands to be completed at the highest stratigraphic points.
Drilling a "relief well" to relieve the pressure of a well producing without restraint (a "blowout"). In this scenario, another well could be drilled starting at a safe distance away from the blowout, but intersecting the troubled wellbore. Then, heavy fluid (kill fluid) is pumped into the relief wellbore to suppress the high pressure in the original wellbore causing the blowout.
Most directional drillers are given a planned well path to follow that is predetermined by engineers and geologists before the drilling commences. When the directional driller starts the drilling process, periodic surveys are taken with a downhole instrument to provide survey data (inclination and azimuth) of the well bore. These pictures are typically taken at intervals between 10 and 150 meters (30–500 feet), with 30 meters (90 feet) common during active changes of angle or direction, and distances of 60–100 meters (200–300 feet) being typical while "drilling ahead" (not making active changes to angle and direction). During critical angle and direction changes, especially while using a downhole motor, a measurement while drilling (MWD) tool will be added to the drill string to provide continuously updated measurements that may be used for (near) real-time adjustments.
This data indicates if the well is following the planned path and whether the orientation of the drilling assembly is causing the well to deviate as planned. Corrections are regularly made by techniques as simple as adjusting rotation speed or the drill string weight (weight on bottom) and stiffness, as well as more complicated and time-consuming methods, such as introducing a downhole motor. Such pictures, or surveys, are plotted and maintained as an engineering and legal record describing the path of the well bore. The survey pictures taken while drilling are typically confirmed by a later survey in full of the borehole, typically using a "multi-shot camera" device.
The multi-shot camera advances the film at time intervals so that by dropping the camera instrument in a sealed tubular housing inside the drilling string (down to just above the drilling bit) and then withdrawing the drill string at time intervals, the well may be fully surveyed at regular depth intervals (approximately every 30 meters (90 feet) being common, the typical length of 2 or 3 joints of drill pipe, known as a stand, since most drilling rigs "stand back" the pipe withdrawn from the hole at such increments, known as "stands").
Drilling to targets far laterally from the surface location requires careful planning and design. The current record holders manage wells whose lateral reach from the surface location is many times greater than their true vertical depth (TVD) of only 1,600–2,600 m (5,200–8,500 ft).
This form of drilling can also reduce the environmental cost and scarring of the landscape. Previously, long lengths of landscape had to be removed from the surface. This is no longer required with directional drilling.
Disadvantages
Until the arrival of modern downhole motors and better tools to measure inclination and azimuth of the hole, directional drilling and horizontal drilling was much slower than vertical drilling due to the need to stop regularly and take time-consuming surveys, and due to slower progress in drilling itself (lower rate of penetration). These disadvantages have shrunk over time as downhole motors became more efficient and semi-continuous surveying became possible.
What remains is a difference in operating costs: for wells with an inclination of less than 40 degrees, tools to carry out adjustments or repair work can be lowered by gravity on cable into the hole. For higher inclinations, more expensive equipment has to be mobilized to push tools down the hole.
Another disadvantage of wells with a high inclination was that prevention of sand influx into the well was less reliable and needed higher effort. Again, this disadvantage has diminished such that, provided sand control is adequately planned, it is possible to carry it out reliably.
Stealing oil
In 1990, Iraq accused Kuwait of stealing Iraq's oil through slant drilling.
The United Nations redrew the border after the 1991 Gulf war, which ended the seven-month Iraqi occupation of Kuwait. As part of the reconstruction, 11 new oil wells were placed among the existing 600. Some farms and an old naval base that used to be in the Iraqi side became part of Kuwait.
In the mid-twentieth century, a slant-drilling scandal occurred in the huge East Texas Oil Field.
New technologies
Between 1985 and 1993, the Naval Civil Engineering Laboratory (NCEL) (now the Naval Facilities Engineering Service Center (NFESC)) of Port Hueneme, California developed controllable horizontal drilling technologies. These technologies are capable of reaching 3,000–4,500 m (roughly 10,000–15,000 ft) and may reach 7,500 m (about 25,000 ft) when used under favorable conditions.
Techniques
Wellbore Surveys
Specialized tools determine the wellbore's deviation from vertical (inclination) and its directional orientation (azimuth). This data is vital for trajectory adjustments. These surveys are taken at regular intervals (e.g., every 30-100 meters) to track the wellbore's progress in real time. In critical sections, measurement while drilling (MWD) tools provide continuous downhole measurements for immediate directional corrections as needed. MWD uses gyroscopes, magnetometers, and accelerometers to determine borehole inclination and azimuth while the drilling is being done.
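Survey stations like these (measured depth, inclination, azimuth) are commonly converted into a three-dimensional well path using the minimum curvature method. The Python sketch below is only a minimal illustration of that calculation; the function name and the example station values are hypothetical and not taken from any particular tool or dataset.

```python
import math

def min_curvature_step(md1, inc1, azi1, md2, inc2, azi2):
    """Position change between two survey stations (angles in degrees,
    measured depth in metres) using the minimum curvature method."""
    i1, i2 = math.radians(inc1), math.radians(inc2)
    a1, a2 = math.radians(azi1), math.radians(azi2)
    # Dogleg angle between the direction vectors at the two stations
    cos_dl = (math.cos(i2 - i1)
              - math.sin(i1) * math.sin(i2) * (1 - math.cos(a2 - a1)))
    dl = math.acos(max(-1.0, min(1.0, cos_dl)))
    # Ratio factor smooths the straight-line average onto a circular arc
    rf = 1.0 if dl < 1e-12 else (2.0 / dl) * math.tan(dl / 2.0)
    dmd = md2 - md1
    d_north = dmd / 2.0 * (math.sin(i1) * math.cos(a1) + math.sin(i2) * math.cos(a2)) * rf
    d_east = dmd / 2.0 * (math.sin(i1) * math.sin(a1) + math.sin(i2) * math.sin(a2)) * rf
    d_tvd = dmd / 2.0 * (math.cos(i1) + math.cos(i2)) * rf
    return d_north, d_east, d_tvd

# Hypothetical pair of stations 30 m apart while building angle
print(min_curvature_step(1000.0, 10.0, 45.0, 1030.0, 13.0, 47.0))
```

Summing the returned north, east, and vertical increments over consecutive stations reconstructs the trajectory that is then compared against the planned well path.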
Trajectory Control
Bottom Hole Assembly (BHA): The configuration of drilling equipment near the drill bit (BHA) profoundly influences drilling direction. BHAs can be tailored to promote straight drilling or induce deviations.
Downhole Motors: Specialized mud motors rotate only the drill bit, allowing controlled changes in direction while the majority of the drill string remains stationary.
Rotary Steerable Systems (RSS): Advanced RSS technology enables steering even while the entire drill string is rotating, ensuring greater efficiency and control.
See also
Deviation survey
Geosteering
Hydraulic fracturing
Logging while drilling
Measurement while drilling
Mud motor
Mudlogger
Rotary steerable system
Trenchless technology
References
External links
"Slanted Oil Wells, Work New Marvels" Popular Science, May 1934, early article on the drilling technology
"Technology and the Conroe Crater" American Oil & Gas Historical Society
Short video explaining horizontal drilling for gas extraction from oil shale. (American Petroleum Institute)
A video depicting horizontal shale drilling can be seen here.
"Mechanical Mole Bores Crooked Wells." Popular Science, June 1942, pp. 94–95.
The unsung masters of the oil industry 21 July 2012 The Economist
Drilling technology
Engineering vehicles | Directional drilling | [
"Engineering"
] | 2,639 | [
"Engineering vehicles"
] |
1,039,176 | https://en.wikipedia.org/wiki/Linda%20B.%20Buck | Linda Brown Buck (born January 29, 1947) is an American biologist best known for her work on the olfactory system. She was awarded the 2004 Nobel Prize in Physiology or Medicine, along with Richard Axel, for their work on olfactory receptors. She is currently on the faculty of the Fred Hutchinson Cancer Research Center in Seattle.
Personal life
Linda B. Buck was born in Seattle, Washington on January 29, 1947. Her father was an electrical engineer who spent his time inventing and building different items in his spare time, while her mother was a homemaker who spent a majority of her free time solving word puzzles. Buck was the second of three children, all of them girls. Her father has Irish ancestry as well as ancestors dating back to the American revolution. Her mother is of Swedish ancestry. In 1994 Buck met Roger Brent, also a biologist. The two married in 2006.
Education
Buck received her B.S. in psychology and microbiology in 1975 from the University of Washington, Seattle. She is the first female University of Washington alumnus to win the Nobel Prize. She was awarded her Ph.D. in immunology in 1980 under the direction of Professor Ellen Vitetta at the University of Texas Southwestern Medical Center at Dallas.
Career and research
In 1980, Buck began postdoctoral research at Columbia University under Benvenuto Pernis (1980–1982). In 1982, she joined the laboratory of Richard Axel, also at Columbia in the Institute of Cancer Research. After reading Sol Snyder's group research paper at Johns Hopkins University, Buck set out to map the olfactory process at the molecular level, tracing the travel of odors through the cells of the nose to the brain. Buck and Axel worked with rat genes in their research and identified a family of genes that code for more than 1000 odor receptors and published these findings in 1991. Later that year, Buck became an assistant professor in the Neurobiology Department at Harvard Medical School where she established her own lab. After finding how odors are detected by the nose, Buck published her findings in 1993 on how the inputs from different odor receptors are organized in the nose. Essentially, her primary research interest is on how pheromones and odors are detected in the nose and interpreted in the brain. She is a Full Member of the Basic Sciences Division at Fred Hutchinson Cancer Research Center, and an Affiliate Professor of Physiology and Biophysics at the University of Washington, Seattle.
Nobel Prize in Physiology or Medicine (2004)
In her landmark paper published in 1991 with Richard Axel, Linda Buck discovered hundreds of genes code for the odorant sensors located in the olfactory neurons of our noses. Each receptor is a protein that changes when an odor attaches to the receptor, causing an electrical signal to be sent to the brain. Differences between odorant sensors mean that certain odors cause a signal to be released from a certain receptor. We are then able to interpret varying signals from our receptors as specific scents. To do this, Buck and Axel cloned olfactory receptors, showing that they belong to the family of G protein-coupled receptors. By analyzing rat DNA, they estimated that there were approximately 1,000 different genes for olfactory receptors in the mammalian genome. This research opened the door to the genetic and molecular analysis of the mechanisms of olfaction. In their later work, Buck and Axel have shown that each olfactory receptor neuron remarkably only expresses one kind of olfactory receptor protein and that the input from all neurons expressing the same receptor is collected by a single dedicated glomerulus of the olfactory bulb.
Awards and honors
Buck was awarded the Takasago Award for Research in Olfaction (1992), Unilever Science Award (1996), R.H. Wright Award in Olfactory Research (1996), Lewis S. Rosenstiel Award for Distinguished Work in Basic Medical Research (1996), Perl/UNC Neuroscience Prize (2002), and Gairdner Foundation International Award (2003). In 2005, she received the Golden Plate Award of the American Academy of Achievement. Buck was inducted into the National Academy of Sciences in 2003 and the Institutes of Medicine in 2006. Buck has been a Fellow of the American Association for the Advancement of Science and the American Academy of Arts and Sciences since 2008. She also sits on the Selection Committee for Life Science and Medicine which chooses winners of the Shaw Prize. In 2015, Buck was awarded an honorary doctorate by Harvard University and elected a Foreign Member of the Royal Society (ForMemRS).
Retractions
Buck retracted 3 papers, published in Nature (pub. 2001, retracted 2008), Science (pub 2006, retracted 2010) and Proceedings of the National Academy of Sciences (pub 2005, retracted 2010) due to falsification/fabrication of results by lead author and collaborator Zhihua Zou.
See also
Timeline of women in science
References
External links
1947 births
Living people
American neuroscientists
Columbia University faculty
Nobel laureates in Physiology or Medicine
American Nobel laureates
University of Washington College of Arts and Sciences alumni
University of Washington faculty
Harvard University staff
Howard Hughes Medical Investigators
American women neuroscientists
Fellows of the American Academy of Arts and Sciences
Fellows of the American Association for the Advancement of Science
Members of the United States National Academy of Sciences
Scientists from Seattle
University of Texas Southwestern Medical Center alumni
Women Nobel laureates
20th-century American women scientists
20th-century American biologists
Sloan Research Fellows
Foreign members of the Royal Society
Members of the National Academy of Medicine
21st-century American women scientists
Fred Hutchinson Cancer Research Center people
Biologists from Washington (state) | Linda B. Buck | [
"Technology"
] | 1,124 | [
"Women Nobel laureates",
"Women in science and technology"
] |
1,039,250 | https://en.wikipedia.org/wiki/Apple%20Type%20Services%20for%20Unicode%20Imaging | The Apple Type Services for Unicode Imaging (ATSUI) is the set of services for rendering Unicode-encoded text introduced in Mac OS 8.5 and carried forward into Mac OS X.
It replaced the WorldScript engine for legacy encodings.
Obsolescence
ATSUI was replaced by a faster, more modern Unicode imaging engine called Core Text in Mac OS X 10.5 (Leopard).
It was officially deprecated with Xcode 4.6, which was released in December 2012: "Source code using ATS APIs will generate warnings while being compiled. For 10.8, there will be no loss of functionality but there could be areas where performance will suffer. Programmers are instructed to replace all their ATS code (including ATSUI) with CoreText as ATS functionality will be completely removed in future releases of OS X."
ATSUI was removed in September 2022 for source code targeting only macOS 13 Ventura, and is completely removed in macOS 14 Sonoma.
References
External links
Home page
Unicode
Application programming interfaces
Macintosh operating systems
macOS APIs
Text rendering libraries | Apple Type Services for Unicode Imaging | [
"Technology"
] | 222 | [
"Computing stubs",
"Digital typography stubs"
] |
1,039,260 | https://en.wikipedia.org/wiki/HOMFLY%20polynomial | In the mathematical field of knot theory, the HOMFLY polynomial or HOMFLYPT polynomial, sometimes called the generalized Jones polynomial, is a 2-variable knot polynomial, i.e. a knot invariant in the form of a polynomial of variables m and l.
A central question in the mathematical theory of knots is whether two knot diagrams represent the same knot. One tool used to answer such questions is a knot polynomial, which is computed from a diagram of the knot and can be shown to be an invariant of the knot, i.e. diagrams representing the same knot have the same polynomial. The converse may not be true. The HOMFLY polynomial is one such invariant and it generalizes two polynomials previously discovered, the Alexander polynomial and the Jones polynomial, both of which can be obtained by appropriate substitutions from HOMFLY. The HOMFLY polynomial is also a quantum invariant.
The name HOMFLY combines the initials of its co-discoverers: Jim Hoste, Adrian Ocneanu, Kenneth Millett, Peter J. Freyd, W. B. R. Lickorish, and David N. Yetter. The addition of PT recognizes independent work carried out by Józef H. Przytycki and Paweł Traczyk.
Definition
The polynomial is defined using skein relations:
$$P(\mathrm{unknot}) = 1, \qquad \ell\,P(L_+) + \ell^{-1}\,P(L_-) + m\,P(L_0) = 0,$$
where $L_+$, $L_-$, $L_0$ are links formed by crossing and smoothing changes on a local region of a link diagram, as indicated in the figure.
The HOMFLY polynomial of a link L that is a split union of two links $L_1$ and $L_2$ is given by
$$P(L) = -\frac{\ell + \ell^{-1}}{m}\,P(L_1)\,P(L_2).$$
See the page on skein relation for an example of a computation using such relations.
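As a small worked example under the convention written above (sign and variable conventions differ between authors), apply the skein relation at the single crossing of a kinked diagram of the unknot: switching the crossing again gives the unknot, while smoothing it gives the two-component unlink, which fixes the value of the unlink:

```latex
\ell\,P(\text{unknot}) + \ell^{-1}\,P(\text{unknot}) + m\,P(\text{unlink}) = 0
\quad\Longrightarrow\quad
P(\text{unlink}) = -\frac{\ell + \ell^{-1}}{m}.
```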
Other HOMFLY skein relations
This polynomial can also be obtained using other skein relations, for example the version in the variables $\alpha$ and $z$:
$$\alpha\,P(L_+) - \alpha^{-1}\,P(L_-) = z\,P(L_0).$$
Main properties
$P(K_1 \# K_2) = P(K_1)\,P(K_2)$, where # denotes the knot sum; thus the HOMFLY polynomial of a composite knot is the product of the HOMFLY polynomials of its components.
$P_{K^*}(\ell, m) = P_K(\ell^{-1}, m)$, where $K^*$ is the mirror image of $K$, so the HOMFLY polynomial can often be used to distinguish between two knots of different chirality. However there exist chiral pairs of knots that have the same HOMFLY polynomial, e.g. knots 9_42 and 10_71 together with their respective mirror images.
The Jones polynomial, V(t), and the Alexander polynomial, $\Delta(t)$, can be computed in terms of the HOMFLY polynomial (the version in the $\alpha$ and $z$ variables) as follows:
$$V(t) = P\!\left(\alpha = t^{-1},\; z = t^{1/2} - t^{-1/2}\right), \qquad \Delta(t) = P\!\left(\alpha = 1,\; z = t^{1/2} - t^{-1/2}\right).$$
References
Further reading
Kauffman, L.H., "Formal knot theory", Princeton University Press, 1983.
Lickorish, W.B.R. "An Introduction to Knot Theory". Springer. .
External links
Knot theory
Polynomials | HOMFLY polynomial | [
"Mathematics"
] | 525 | [
"Polynomials",
"Algebra"
] |
1,039,292 | https://en.wikipedia.org/wiki/Trona | Trona (trisodium hydrogendicarbonate dihydrate, also sodium sesquicarbonate dihydrate, Na2CO3·NaHCO3·2H2O) is a non-marine evaporite mineral. It is mined as the primary source of sodium carbonate in the United States, where it has replaced the Solvay process used in most of the rest of the world for sodium carbonate production. Turkey is also a major producer.
Etymology
The word entered English by way of either Swedish or Spanish, with both possible sources having the same meaning as in English: the mineral natron from North Africa. Both the Spanish and Swedish terms derive from the Arabic trōn, which in turn derives from Arabic natron and Hebrew natruna, which comes from ancient Greek nitron, derived ultimately from ancient Egyptian ntry (or nitry).
Natural deposits
Trona is found at Owens Lake and Searles Lake, California; the Green River Formation of Wyoming and Utah; the Makgadikgadi Pans in Botswana and in the Nile Valley in Egypt. The trona near Green River, Wyoming, is the largest known deposit in the world and lies in layered evaporite deposits below ground, where the trona was deposited in a lake during the Paleogene Period. Trona has also been mined at Lake Magadi in the Kenyan Rift Valley for nearly 100 years. The northern part of Lake Natron is covered by a 1.5 m thick trona bed, and occurs in 'salt' pans in the Etosha National Park in Namibia. The Beypazari region in the Ankara Province of Turkey has some 33 trona beds in two fault-bound lensoid bodies in and above oil shales of the Lower Hirka Formation (16 in the lower and 17 in the upper body). The Wucheng basin trona mine, Henan Province China has some 36 trona beds (693–974 m deep), the lower 15 beds are 0.5–1.5 m thick, thickest 2.38 m; the upper 21 beds are 1–3 m thick, with a maximum of 4.56 m hosted and underlain by dolomitic oil shales of the Wulidui Formation.
Trona has also been found in magmatic environments. Research has shown that trona can be formed by autometasomatic reactions of late-magmatic fluids or melts (or supercritical fluid-melt mixtures), with earlier crystallized rocks within the same plutonic complex, or by large-scale vapor unmixing in the very final stages of magmatism.
Crystal structure
The crystal structure of trona was first determined by Brown et al. (1949). The structure consists of units of 3 edge-sharing sodium polyhedra (a central octahedron flanked by septahedra), cross-linked by carbonate groups and hydrogen bonds. Bacon and Curry (1956) refined the structure determination using two-dimensional single-crystal neutron diffraction, and suggested that the hydrogen atom in the symmetric (HC2O6)3− anion is disordered. The environment of the disordered H atom was later investigated by Choi and Mighell (1982) at 300 K with three-dimensional single-crystal neutron diffraction: they concluded that the H atom is dynamically disordered between two equivalent sites, separated from one another by 0.211(9) Å. The dynamically disordered H atom was reinvestigated at low temperature by O'Bannon et al. 2014 and they concluded that it does not order at temperatures as low as 100K.
Uses
Trona is a common source of soda ash, which is a significant economic commodity because of its applications in manufacturing glass, chemicals, paper, detergents, and textiles.
It is used to condition water.
It is used to remove sulfur from both flue gases and lignite coals.
It is a product of carbon sequestration of flue gases.
It is also used as a food additive.
Mining operations
Rio Tinto – Owens Lake
Magadi Soda Company
Searles Valley Minerals Inc.
Solvay
Tata Chemicals
Genesis Alkali formerly Tronox Alkali formerly FMC Corporation General Chemical
Ciner Wyoming formerly OCI Chemical Corp.''
ANSAC
Eti Soda, Turkey
Kazan Soda Elektrik, Turkey
Church & Dwight – Green River Mine
Intrepid Potash
Simplot
See also
Natron
Nahcolite
Shortite
Sodium sesquicarbonate
Thermonatrite
References
Sodium minerals
Carbonate minerals
Monoclinic minerals
Minerals in space group 15
Evaporite
Luminescent minerals
Green River Formation
Dihydrate minerals | Trona | [
"Chemistry"
] | 974 | [
"Luminescence",
"Luminescent minerals"
] |
1,039,393 | https://en.wikipedia.org/wiki/Andrewsarchus | Andrewsarchus (), meaning "Andrews' ruler", is an extinct genus of artiodactyl that lived during the Middle Eocene in what is now China. The genus was first described by Henry Fairfield Osborn in 1924 with the type species A. mongoliensis based on a largely complete cranium. A second species, A. crassum, was described in 1977 based on teeth. A mandible, formerly described as Paratriisodon, does probably belong to Andrewsarchus as well. The genus has been historically placed in the families Mesonychidae or Arctocyonidae, or was considered to be a close relative of whales. It is now regarded as the sole member of its own family, Andrewsarchidae, and may have been related to entelodonts. Fossils of Andrewsarchus have been recovered from the Middle Eocene Irdin Manha, Lushi, and Dongjun Formations of Inner Mongolia, each dated to the Irdinmanhan Asian land mammal age (Lutetian–Bartonian stages, 48–38 million years ago).
Andrewsarchus has historically been reputed as the largest terrestrial, carnivorous mammal given the great length of its skull, though its overall body size was probably overestimated due to inaccurate comparisons with mesonychids. Its incisors are arranged in a semicircle, similar to entelodonts, with the second rivalling the canine in size. The premolars are again similar to entelodonts in having a single cusp. The crowns of the molars are wrinkled, suggesting it was omnivorous or a scavenger. Unlike many modern scavengers, a reduced sagittal crest and flat mandibular fossa suggest that Andrewsarchus likely had a fairly weak bite force.
Taxonomy
Early history
The holotype of Andrewsarchus mongoliensis is a mostly complete cranium (specimen number AMNH-VP 20135). It was recovered from the lower Irdin Manha Formation of Inner Mongolia during a 1923 palaeontological expedition conducted by the American Museum of Natural History of New York. Its discoverer was a local assistant, Kan Chuen-pao, also known as "Buckshot". It was initially identified by Walter W. Granger as the skull of an Entelodon. A drawing of the skull was sent to the museum, where it was identified by William Diller Matthew as belonging to "the primitive Creodonta of the family Mesonychidae". The specimen itself arrived at the museum and was described by Osborn in 1924. Its generic name honours Roy Chapman Andrews, the leader of the expedition, with the Ancient Greek archos (ἀρχός, "ruler") added to his surname.
A second species of Andrewsarchus, A. crassum, was named by Ding Suyin and colleagues in 1977 on the basis of IVPP V5101, a pair of teeth (the second and third lower premolars) recovered from the Dongjun Formation of Guangxi.
In 1957, Zhou Mingzhen and colleagues recovered a mandible, a fragmentary maxilla, and several isolated teeth from the Lushi Formation of Henan, China, which correlates to the Irdin Manha Formation. The maxilla belonged to a skull that was crushed beyond recognition; it is likely from the same individual as the mandible. Zhou described it in 1959 as Paratriisodon henanensis, and assigned it to Arctocyonidae. He further classified it as part of the subfamily Triisodontinae (now the family Triisodontidae) based on close similarities of the molars and premolars to those of Triisodon. A second species, P. gigas, was named by Zhou and colleagues in 1973 for a molar also from the Lushi Formation. Three molars and an incisor from the Irdin Manha Formation were later referred to P. gigas. Comparisons between the two genera were drawn as far back as 1969, when Frederick Szalay suggested that they either evolved from the same arctocyonid ancestors or that they were an example of convergent evolution. Paratriisodon was first properly synonymised with Andrewsarchus by Leigh Van Valen in 1978, who did so without explanation. Regardless, their synonymy was upheld by Maureen O'Leary in 1998, based on similarities between the molars and premolars of the two genera and their comparable body sizes.
Classification
Andrewsarchus was initially regarded as a mesonychid, and Paratriisodon as an arctocyonid. In 1995, the former became the sole member of its own subfamily, Andrewsarchinae, within Mesonychia. The subfamily was elevated to family level by Philip D. Gingerich in 1998, who tentatively assigned Paratriisodon to it. In 1988, Donald Prothero and colleagues recovered Andrewsarchus as the sister taxon to whales. It has since been recovered as a more basal member of Cetancodontamorpha, most closely related to entelodonts, hippos, and whales. In 2023, Yu and colleagues conducted a phylogenetic analysis of ungulates, with a particular focus on entelodontid artiodactyls. Andrewsarchus was recovered as part of a clade consisting of itself, Achaenodon, Erlianhyus, Protentelodon, Wutuhyus, and Entelodontidae. It was found to be most closely related to Achaenodon and Erlianhyus, with which it formed a polytomy. A cladogram based on their phylogeny is reproduced below:
Description
When first describing Andrewsarchus, Osborn believed it to be the largest terrestrial, carnivorous mammal. Based on the length of the A. mongoliensis holotype skull, and using the proportions of Mesonyx, he estimated its total body length and body height. However, considering cranial and dental similarities with entelodonts, Frederick Szalay and Stephen Jay Gould proposed that it had proportions less like mesonychids and more like entelodonts, and thus that Osborn's estimates were likely inaccurate.
Skull
The holotype skull of Andrewsarchus is long overall and broad across the zygomatic arches. The snout is greatly elongated, measuring one-and-a-half times the length of the basicranium, and the portion of the snout in front of the canines resembles that of entelodonts. Unlike entelodonts, however, the postorbital bar is incomplete. The sagittal crest is reduced, and the mandibular fossa is relatively flat. Together, these attributes suggest a weak temporalis muscle and a fairly weak bite force. The hard palate is long and narrow. The mandibular fossa is also offset laterally and ventrally from the basicranium, similar to the condition seen in mesonychids. The mandible itself is long and shallow, characterised by a straight and relatively shallow horizontal ramus. The masseteric fossa, the depression on the mandible to which the masseter attaches, is shallow. Symphyseal contact between the two mandibles is limited.
Dentition
The holotype cranium of Andrewsarchus demonstrates the typical placental tooth formula, of three incisors, one canine, four premolars and three molars per side, though it is not clear whether the same applies to the mandible. The upper incisors are arranged in a semicircle in front of the canines, a trait that is shared with entelodonts. The second incisor is enlarged, and is almost the size of the canines. This is partly because, while the canines were originally described as being "of enormous size", they are relatively small in proportion to the rest of the dentition. The upper premolars are elongate and consist of a single cusp, resembling those of entelodonts. The fourth premolar retains the protocone, though in a vestigial form. Their roots are not confluent and lack a dentine platform, which are both likely to be adaptations to prolong the tooth's functional life after crown abrasion. The first molar is the smallest. The second is the widest, but has been heavily worn since fossilisation. The third has largely avoided that wear. The premolars and molars have wrinkled crowns, similar to the condition seen in suids and other omnivorous artiodactyls. The tooth structure of the mandible (IVPP V5101) is difficult to determine, as nearly all are worn or broken. All of the right mandible's teeth are preserved save for the first premolar, which is instead preserved on the left mandible. The lower canine and the first premolar both point forwards. The third molar is large, with talonids that have two cusps.
Diet
In his paper describing Andrewsarchus, Osborn suggested that it may have been omnivorous based on comparisons with entelodonts. This conclusion was supported by Szalay and Gould, who use the heavily wrinkled crowns of the molars and premolars as supporting evidence, as well as the close phylogenetic relationship between Andrewsarchus and entelodonts. R.M. Joeckel, in 1990, suggested that it was likely an "omnivore-scavenger", and that it was an ecological analogue to entelodonts. Lars Werdelin further suggested that it was a scavenger, or that it might have preyed on brontotheres.
Palaeoecology
For much of the Eocene, a hothouse climate with humid, tropical environments with consistently high precipitations prevailed. Modern mammalian orders including the Perissodactyla, Artiodactyla, and Primates (or the suborder Euprimates) appeared already by the Early Eocene, diversifying rapidly and developing dentitions specialized for folivory. The omnivorous forms mostly either switched to folivorous diets or went extinct by the Middle Eocene (Lutetian–Bartonian, 48–38 million years ago) along with the archaic "condylarths". By the Late Eocene (Priabonian, 38–34 million years ago), most of the ungulate form dentitions shifted from bunodont cusps to cutting ridges (i.e. lophs) for folivorous diets.
The Irdin Manha Formation, from which the holotype of Andrewsarchus was recovered, consists of Irdinmanhan strata dated to the Middle Eocene. Andrewsarchus mongoliensis comes from the IM-1 locality, dated to the lower Irdinmanhan, from which the hyaenodontine Propterodon, the mesonychid Harpagolestes, at least three unnamed mesonychids, the artiodactyl Erlianhyus, the perissodactyls Deperetella and Lophialetes, the omomyid Tarkops, the glirian Gomphos, the rodent Tamquammys, and various indeterminate glirians are also known. The Lushi Formation, from which the Paratriisodon henanensis specimen was recovered, was deposited at around the same time as the Irdin Manha Formation. The mesonychid Mesonyx, the pantodont Eudinoceras, the dichobunid Dichobune, the helohyid Gobiohyus, the brontotheres Rhinotitan and Microtitan, the perissodactyls Amynodon and Lophialetes, the ctenodactylid Tsinlingomys, and the lagomorph Lushilagus have been identified from the Lushi Formation. The Dongjun Formation, from which A. crassum originates, is similarly Middle Eocene. It preserves the nimravid Eusmilus, the anthracotheriid Probrachyodus, the pantodont Eudinoceras, the brontotheres Metatelmatherium and cf. Protitan, the deperetellids Deperetella and Teleolophus, the hyracodontid Forstercooperia, the rhinocerotids Ilianodon and Prohyracodon, and the amynodonts Amynodon, Gigantamynodon, and Paramnyodon.
References
Cetancodontamorpha
Eocene Artiodactyla
Enigmatic mammal taxa
Eocene mammals of Asia
Lutetian genus first appearances
Priabonian genus extinctions
Fossil taxa described in 1924
Taxa named by Henry Fairfield Osborn
Prehistoric Artiodactyla genera | Andrewsarchus | [
"Biology"
] | 2,669 | [
"Phylogenetics",
"Cetancodontamorpha"
] |
1,039,609 | https://en.wikipedia.org/wiki/Antimatter%20weapon | An antimatter weapon is a theoretically possible device using antimatter as a power source, a propellant, or an explosive for a weapon. Antimatter weapons are currently too costly and unreliable to be viable in warfare, as producing antimatter is enormously expensive (estimated at US$6 billion for every 100 nanograms), the quantities of antimatter generated are very small, and current technology has great difficulty containing antimatter, which annihilates upon touching ordinary matter.
The paramount advantage of such a theoretical weapon is that antimatter and matter collisions result in the entire sum of their mass energy equivalent being released as energy, which is at least two orders of magnitude greater than the energy release of the most efficient fusion weapons (100% vs 0.4–1%). Annihilation requires and converts exactly equal masses of antimatter and matter by the collision which releases the entire mass-energy of both, which for 1 gram is ~9×10¹³ joules. Using the convention that 1 kiloton TNT equivalent = 4.184×10¹² joules (or one trillion calories of energy), one half gram of antimatter reacting with one half gram of ordinary matter (one gram total) results in 21.5 kilotons-equivalent of energy (the same as the atomic bomb dropped on Nagasaki in 1945).
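The figures quoted above follow directly from mass–energy equivalence; a worked version of the arithmetic for one gram of annihilating matter–antimatter mixture is:

```latex
E = mc^2 = (1\times 10^{-3}\ \mathrm{kg})\,(2.998\times 10^{8}\ \mathrm{m\,s^{-1}})^{2}
\approx 9\times 10^{13}\ \mathrm{J},
\qquad
\frac{9\times 10^{13}\ \mathrm{J}}{4.184\times 10^{12}\ \mathrm{J/kt}}
\approx 21.5\ \mathrm{kt\ TNT\ equivalent}.
```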
Cost
The cost of producing even one millionth of a gram of antimatter has been estimated at tens of billions of US dollars (consistent with the figure of US$6 billion per 100 nanograms quoted above). By way of comparison, the cost of the Manhattan Project (to produce the first atomic bomb) is estimated at US$23 billion in 2007 prices. As such, Hui Chen of Lawrence Livermore National Laboratory dismissed concerns about antimatter bombs in 2008 as "unrealistic".
Antimatter catalyzed weapons
Antimatter-catalyzed nuclear pulse propulsion proposes the use of antimatter as a "trigger" to initiate small nuclear explosions; the explosions provide thrust to a spacecraft. The same technology could theoretically be used to make very small and possibly "fission-free" (very low nuclear fallout) weapons (see pure fusion weapon).
In popular culture
An antimatter weapon is a part of the plot of the Dan Brown book Angels & Demons and its film adaptation, where it is used in a plot to blow up the Vatican City.
The Ground Zero expansion pack of the video game Quake II requires the protagonist to manufacture an Antimatter Bomb in the Munitions Plant to achieve the final objective.
In the Star Trek franchise, Federation starships are armed with photon torpedoes which contain antimatter warheads.
References
External links
Spotlight on "Angels and Demons" – A discussion at CERN's public website on the viability of the use of antimatter for energy and weaponry
"Air Force pursuing antimatter weapons: Program was touted publicly, then came official gag order"
Page discussing the possibility of using antimatter as a trigger for a thermonuclear explosion
Paper discussing the number of antiprotons required to ignite a thermonuclear weapon
Weapon
Proposed weapons
Science fiction weapons
Weapons of mass destruction | Antimatter weapon | [
"Physics"
] | 636 | [
"Antimatter",
"Matter"
] |
1,039,736 | https://en.wikipedia.org/wiki/Isotone | Two nuclides are isotones if they have the same neutron number N, but different proton number Z. For example, boron-12 and carbon-13 nuclei both contain 7 neutrons, and so are isotones. Similarly, 36S, 37Cl, 38Ar, 39K, and 40Ca nuclei are all isotones of 20 because they all contain 20 neutrons. Despite its similarity to the Greek for "same stretching", the term was formed by the German physicist K. Guggenheimer by changing the "p" in "isotope" from "p" for "proton" to "n" for "neutron".
The largest numbers of observationally stable nuclides exist for isotones 50 (five: 86Kr, 88Sr, 89Y, 90Zr, 92Mo – noting also the primordial radionuclide 87Rb) and 82 (six: 138Ba, 139La, 140Ce, 141Pr, 142Nd, 144Sm – noting also the primordial radionuclide 136Xe). Neutron numbers for which there are no stable isotones are 19, 21, 35, 39, 45, 61, 89, 115, 123, and 127 or more (though 21, 142, 143, 146, and perhaps 150 have primordial radionuclides). In contrast, the proton numbers for which there are no stable isotopes are 43, 61, and 83 or more (83, 90, 92, and perhaps 94 have primordial radionuclides). This is related to nuclear magic numbers, the number of nucleons forming complete shells within the nucleus, e.g. 2, 8, 20, 28, 50, 82, and 126. No more than one observationally stable nuclide has the same odd neutron number, except for 1 (2H and 3He), 5 (9Be and 10B), 7 (13C and 14N), 55 (97Mo and 99Ru), and 107 (179Hf and 180mTa). In contrast, all even neutron numbers from 6 to 124, except 84 and 86, have at least two observationally stable nuclides. Neutron numbers for which there is a stable nuclide and a primordial radionuclide are 27 (50V), 65 (113Cd), 81 (138La), 84 (144Nd), 85 (147Sm), 86 (148Sm), 105 (176Lu), and 126 (209Bi). Neutron numbers for which there are two primordial radionuclides are 88 (151Eu and 152Gd) and 112 (187Re and 190Pt).
The neutron numbers which have only one stable nuclide (compare: monoisotopic element for the proton numbers) are: 0, 2, 3, 4, 9, 11, 13, 15, 17, 23, 25, 27, 29, 31, 33, 37, 41, 43, 47, 49, 51, 53, 57, 59, 63, 65, 67, 69, 71, 73, 75, 77, 79, 81, 83, 84, 85, 86, 87, 91, 93, 95, 97, 99, 101, 103, 105, 109, 111, 113, 117, 119, 121, 125, 126, and the neutron numbers which have only one significant naturally-abundant nuclide (compare: mononuclidic element for the proton numbers) are: 0, 2, 3, 4, 9, 11, 13, 15, 17, 21, 23, 25, 29, 31, 33, 37, 41, 43, 47, 49, 51, 53, 57, 59, 63, 67, 69, 71, 73, 75, 77, 79, 83, 87, 91, 93, 95, 97, 99, 101, 103, 109, 111, 113, 117, 119, 121, 125, 142, 143, 146.
See also
Isotopes are nuclides having the same number of protons: e.g. carbon-12 and carbon-13.
Isobars are nuclides having the same mass number (i.e. sum of protons plus neutrons): e.g. carbon-12 and boron-12.
Nuclear isomers are different excited states of the same type of nucleus. A transition from one isomer to another is accompanied by emission or absorption of a gamma ray, or the process of internal conversion. (Not to be confused with chemical isomers.)
Notes
Nuclear physics | Isotone | [
"Physics"
] | 944 | [
"Nuclear physics"
] |
1,039,766 | https://en.wikipedia.org/wiki/FieldTurf | FieldTurf is a brand of artificial turf playing surface. It is manufactured and installed by FieldTurf Tarkett, a division of French company Tarkett. FieldTurf is headquartered in Montreal, Quebec, Canada, and its primary manufacturing facility is located in Calhoun, Georgia, United States. With a design intended to more accurately replicate real grass, the new product rapidly gained popularity in the late 1990s.
History
Jean Prévost bought the patent of the FieldTurf product in 1988, and originally named his Montreal-based company SynTenni Co., a name which would eventually be dropped in favor of FieldTurf Inc. In 1995, John Gilman, a former Canadian Football League player and coach, joined FieldTurf as CEO.
In 1997, FieldTurf made its first major installation for a professional team, at the training facility for the English Premiership's Middlesbrough F.C. Since then, FieldTurf has installed over 7,000 athletic fields.
In 2005, French flooring manufacturer and minority shareholder Tarkett increased its share in FieldTurf, which led to the integration of the two companies. FieldTurf is a part of the Tarkett Sports division of the holding company Tarkett SA.
In May 2010, FieldTurf acquired the American company EasyTurf of San Diego, California, as a way to gain entry into the rapidly growing residential and commercial synthetic grass markets in the United States.
Product details
The surface is composed of monofilament polyethylene-blend fibers tufted into a polypropylene backing. The infill is composed of a bottom layer of silica sand, a middle layer which is a mixture of sand and cryogenic rubber, and a top layer of only rubber. The fibers are meant to replicate blades of grass, while the infill acts as a cushion. This cushion is intended to improve safety when compared to earlier artificial surfaces and allows players to plant and pivot as if they were playing on a grass field.
Each square foot of turf contains about 3 kg (7 lb) of sand and 1.5 kg (3 lb) of cryogenic rubber. FieldTurf does not use shock-absorbency pads below its infill. The backing of the turf is a combination of woven and nonwoven polypropylene. These materials are permeable and allow water to drain through the backing itself.
Safety
Some evidence shows higher player injury on artificial turf. In a study performed by the National Football League Injury and Safety Panel, published in the October 2012 issue of the American Journal of Sports Medicine, Elliott B. Hershman et al. reviewed injury data from NFL games played between 2000 and 2009. They wrote, "...the injury rate of knee sprains as a whole was 22% higher on FieldTurf than on natural grass. While MCL sprains did not occur at a rate significantly higher than on grass, rates of ACL sprains were 67% higher on FieldTurf."
Studies of the safety of FieldTurf are conflicting. A five-year study funded by FieldTurf and published in the American Journal of Sports Medicine found that injury rates for high-school sports were similar on natural grass and synthetic turf. However, notable differences in the types of injuries were found. Athletes playing on synthetic turf sustained more skin injuries and muscle strains, while those who played on natural grass were more susceptible to concussions and ligament tears. In 2010, another FieldTurf-funded but peer-reviewed study was published in the American Journal of Sports Medicine, this time on NCAA Division I-A football, concluding that in many cases, games played on FieldTurf-branded products led to fewer injuries than those played on natural grass. However, the NFL's Injury and Safety Panel presented a study finding that anterior cruciate ligament (ACL) injuries happened 88% more often in games played on FieldTurf than in games played on grass. In 2012, the Injury and Safety Panel published an independently funded analysis of actual game data over the 2000–2009 seasons. Their statistically significant findings showed a 67% higher rate of ACL sprains and 31% higher rate of eversion ankle sprains.
Uses
Gridiron football
The first installation of FieldTurf in the United States took place at Dick Bivins Stadium in Amarillo, Texas (which was the home field for the Amarillo Independent School District's football teams) in 1998. The first major college football installation was at University of Nebraska's Memorial Stadium in 1999. The following year, it was installed at the two Pac-10 stadiums: Martin Stadium in Pullman, Washington and Husky Stadium in Seattle. The first installation in an NFL (and by extension, professional) stadium was in 2002, at the Seattle Seahawks' new stadium, known as Lumen Field. Originally planned to have a natural grass field, the Seahawks instead decided to install FieldTurf after they had played the two previous seasons in Husky Stadium on that surface, and to ease conversion and footing concerns for a future Major League Soccer team in the venue, which has been shared with Seattle Sounders FC since 2009 (natural grass is brought in and installed over the FieldTurf for FIFA-sanctioned events).
Association football
FieldTurf's first high-profile installation came in January 1997 as English club Middlesbrough chose FieldTurf for its new training field. Only artificial fields with FIFA-recommended 2-star status can be used in FIFA and UEFA Finals competitions. Other FIFA and UEFA competitions require at least 1-star status.
In 2001, Boston University's FieldTurf soccer field became FieldTurf's first to obtain FIFA 1-star status. In 2005, Saprissa Stadium in San José, Costa Rica became the first stadium to host a FIFA World Cup qualifying match on FieldTurf. The Dundalk F.C. Stadium, Oriel Park, received FieldTurf's first FIFA 2-star rating. FieldTurf has 29 FIFA-recommended 1-star installations and 31 FIFA-recommended 2-star installations. In 2007, the FIFA U-20 World Cup Canada had almost 50% of its games played on FieldTurf.
Major League Soccer
The use of FieldTurf in Major League Soccer (MLS) has received criticism, especially from the league's international roster players used to playing on natural grass overseas in their home domestic leagues and FIFA competitions.
The installation of the surface at CenturyLink Field in Seattle was approved as mentioned above when the state stadium authority which operates the venue agreed to bring in a natural grass surface for FIFA-sanctioned events.
In September 2006, several top Canadian soccer players appealed to the Canadian Soccer Association to install a natural grass surface at BMO Field in Toronto. The club removed the FieldTurf playing surface and switched to a traditional grass surface starting in 2010.
Following David Beckham's move to Major League Soccer in 2007, he voiced his opinion that the league should convert to grass for all pitches. In an apology, he stated that the surface is fine at lower levels, but that his feelings had not changed about the MLS use because of the toll the harder surface takes on the body. Thierry Henry opted out of road matches in Seattle when he played for the New York Red Bulls specifically because of the Sounders' use of FieldTurf in that venue.
Public works
A specialized version of FieldTurf called Air FieldTurf has been installed to cover the edges of runways at several airports.
See also
AstroTurf
Poly-Turf
References
External links
Artificial turf | FieldTurf | [
"Chemistry"
] | 1,536 | [
"Synthetic materials",
"Artificial turf"
] |
1,039,777 | https://en.wikipedia.org/wiki/Vis-viva%20equation | In astrodynamics, the vis-viva equation is one of the equations that model the motion of orbiting bodies. It is the direct result of the principle of conservation of mechanical energy which applies when the only force acting on an object is its own weight which is the gravitational force determined by the product of the mass of the object and the strength of the surrounding gravitational field.
Vis viva (Latin for "living force") is a term from the history of mechanics and this name is given to the orbital equation originally derived by Isaac Newton. It represents the principle that the difference between the total work of the accelerating forces of a system and that of the retarding forces is equal to one half the vis viva accumulated or lost in the system while the work is being done.
Equation
For any Keplerian orbit (elliptic, parabolic, hyperbolic, or radial), the vis-viva equation is as follows:
$$v^2 = GM\left(\frac{2}{r} - \frac{1}{a}\right)$$
where:
$v$ is the relative speed of the two bodies
$r$ is the distance between the two bodies' centers of mass
$a$ is the length of the semi-major axis ($a > 0$ for ellipses, $a = \infty$ or $1/a = 0$ for parabolas, and $a < 0$ for hyperbolas)
$G$ is the gravitational constant
$M$ is the mass of the central body
The product of $GM$ can also be expressed as the standard gravitational parameter using the Greek letter $\mu$, i.e. $\mu = GM$.
Practical applications
Given the total mass and the scalars $r$ and $v$ at a single point of the orbit, one can compute:
$r$ and $v$ at any other point in the orbit; and
the specific orbital energy $\varepsilon$, allowing an object orbiting a larger object to be classified as having not enough energy to remain in orbit, hence being "suborbital" (a ballistic missile, for example), having enough energy to be "orbital", but without the possibility to complete a full orbit anyway because it eventually collides with the other body, or having enough energy to come from and/or go to infinity (as a meteor, for example).
The formula for escape velocity can be obtained from the vis-viva equation by taking the limit as $a$ approaches infinity:
$$v_{\text{esc}}^2 = GM\left(\frac{2}{r} - 0\right) \quad\Longrightarrow\quad v_{\text{esc}} = \sqrt{\frac{2GM}{r}}$$
For a given orbital radius, the escape velocity will be $\sqrt{2}$ times the (circular) orbital velocity.
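A small numerical sketch of these applications in Python is given below; the value of $\mu$ for Earth and the example orbit radii are illustrative inputs, not part of the original text.

```python
import math

MU_EARTH = 3.986004418e14  # standard gravitational parameter GM of Earth, m^3/s^2

def vis_viva_speed(r, a, mu=MU_EARTH):
    """Orbital speed at distance r for an orbit with semi-major axis a."""
    return math.sqrt(mu * (2.0 / r - 1.0 / a))

# Hypothetical elliptical orbit: perigee 6,778 km, apogee 42,164 km (radii from Earth's centre)
r_p, r_a = 6.778e6, 42.164e6
a = (r_p + r_a) / 2.0
print(vis_viva_speed(r_p, a))  # speed at perigee (fastest point)
print(vis_viva_speed(r_a, a))  # speed at apogee (slowest point)

# Escape velocity is the a -> infinity limit, sqrt(2) times the circular speed at the same r
r = 7.0e6
v_circ = vis_viva_speed(r, r)            # circular orbit: a = r
v_esc = math.sqrt(2.0 * MU_EARTH / r)
print(v_esc / v_circ)                    # ~1.414
```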
Derivation for elliptic orbits (0 ≤ eccentricity < 1)
Specific total energy is constant throughout the orbit. Thus, using the subscripts a and p to denote apoapsis (apogee) and periapsis (perigee), respectively,
$$\frac{v_a^2}{2} - \frac{GM}{r_a} = \frac{v_p^2}{2} - \frac{GM}{r_p}$$
Rearranging,
$$\frac{v_a^2}{2} - \frac{v_p^2}{2} = \frac{GM}{r_a} - \frac{GM}{r_p}$$
Recalling that for an elliptical orbit (and hence also a circular orbit) the velocity and radius vectors are perpendicular at apoapsis and periapsis, conservation of angular momentum requires specific angular momentum $h = r_p v_p = r_a v_a$, thus $v_p = \frac{r_a}{r_p} v_a$:
Isolating the kinetic energy at apoapsis and simplifying,
$$\frac{1}{2}v_a^2 = \frac{GM\,r_p}{r_a\,(r_a + r_p)}$$
From the geometry of an ellipse, $2a = r_a + r_p$, where a is the length of the semimajor axis. Thus,
$$\frac{1}{2}v_a^2 = \frac{GM\,r_p}{2a\,r_a}$$
Substituting this into our original expression for specific orbital energy,
$$\varepsilon = \frac{v_a^2}{2} - \frac{GM}{r_a} = \frac{GM\,r_p}{2a\,r_a} - \frac{GM}{r_a} = -\frac{GM}{2a}$$
Thus, $\varepsilon = -\frac{GM}{2a}$, and the vis-viva equation may be written
$$\frac{v^2}{2} - \frac{GM}{r} = -\frac{GM}{2a}$$
or
$$v^2 = GM\left(\frac{2}{r} - \frac{1}{a}\right)$$
Therefore, the conserved angular momentum can be derived using $r_p = a(1-e)$ and $r_a = a(1+e)$, where $a$ is the semi-major axis and $b$ is the semi-minor axis of the elliptical orbit, as follows:
$$h = r_p v_p = \sqrt{GM\,a\,(1-e^2)}$$
and alternately,
$$h = r_a v_a = \sqrt{GM\,a\,(1-e^2)}$$
Therefore, since $b = a\sqrt{1-e^2}$, specific angular momentum $h = b\sqrt{\dfrac{GM}{a}}$, and
Total angular momentum $L = m\,h = m\,b\sqrt{\dfrac{GM}{a}}$
References
Orbits
Conservation laws
Equations of astronomy | Vis-viva equation | [
"Physics",
"Astronomy"
] | 635 | [
"Equations of physics",
"Concepts in astronomy",
"Conservation laws",
"Equations of astronomy",
"Symmetry",
"Physics theorems"
] |
1,039,873 | https://en.wikipedia.org/wiki/Creative%20writing | Creative writing is any writing that goes outside the bounds of normal professional, journalistic, academic, or technical forms of literature, typically identified by an emphasis on narrative craft, character development, and the use of literary tropes or with various traditions of poetry and poetics. Due to the looseness of the definition, it is possible for writing such as feature stories to be considered creative writing, even though it falls under journalism, because the content of features is specifically focused on narrative and character development. Both fictional and non-fictional works fall into this category, including such forms as novels, biographies, short stories, and poems. In the academic setting, creative writing is typically separated into fiction and poetry classes, with a focus on writing in an original style, as opposed to imitating pre-existing genres such as crime or horror. Writing for the screen and stage—screenwriting and playwriting—are often taught separately, but fit under the creative writing category as well.
Creative writing can technically be considered any writing of original composition. In this sense, creative writing is a more contemporary and process-oriented name for what has been traditionally called literature, including the variety of its genres. In her work, Foundations of Creativity, Mary Lee Marksberry references Paul Witty and Lou LaBrant's Teaching the People's Language to define creative writing. Marksberry notes:
In academia
Unlike its academic counterpart of writing classes that teach students to compose work based on the rules of the language, creative writing is believed to focus on students' self-expression. While creative writing as an educational subject is often available at some stages of, if not throughout, primary and secondary school (K–12), perhaps the most refined form of creative writing as an educational focus is in universities.
Following a reworking of university education in the post-war era, creative writing has progressively gained prominence in the university setting. In the UK, the first formal creative writing program was established as a Master of Arts degree at the University of East Anglia in 1970 by the novelists Malcolm Bradbury and Angus Wilson. With the beginning of formal creative writing programs:
Programs of study
Creative Writing programs are typically available to writers from the high school level all the way through graduate school/university and adult education. Traditionally these programs are associated with the English departments in the respective schools, but this notion has been challenged in recent times as more creative writing programs have spun off into their own department. Creative Writing undergraduate degrees tend to be Bachelor of Arts (BA) or Bachelor of Fine Arts (BFA) degrees, but Bachelor of Science (BSc) degrees also exist. Some continue to pursue a Master of Arts, Master of Fine Arts, or Master of Studies in Creative Writing. Once rare, Ph.D. programs are becoming more prevalent in the field, as more writers attempt to bridge the gap between academic study and artistic pursuit.
Creative writers often place an emphasis in either fiction or poetry, and it is normal to start with short stories or simple poems. They then make a schedule based on this emphasis including literature classes, education classes and workshop classes to strengthen their skills and techniques. Though they have their own programs of study in the fields of film and theatre, screenwriting and playwriting have become more popular in creative writing programs since creative writing programs attempt to work more closely with film and theatre programs as well as English programs. Creative writing students are encouraged to get involved in extracurricular writing-based activities, such as publishing clubs, school-based literary magazines or newspapers, writing contests, writing colonies or conventions, and extended education classes.
In the classroom
Creative writing is usually taught in a workshop format rather than seminar style. In workshops, students usually submit original work for peer critique. Students also format a writing method through the process of writing and re-writing. Some courses teach the means to exploit or access latent creativity or more technical issues such as editing, structural techniques, genres, random idea generating, or unblocking writer's block. Some noted authors, such as Michael Chabon, Sir Kazuo Ishiguro, Kevin Brockmeier, Ian McEwan, Karl Kirchwey, Dame Rose Tremain and reputed screenwriters, such as David Benioff, Darren Star and Peter Farrelly, have graduated from university creative writing programs.
Many educators find that using creative writing can increase students' academic performance and resilience. Consistently completing small goals, rather than leaving large goals unfinished, creates a sense of pride and accomplishment, which triggers the release of dopamine in the brain and increases motivation. It has been shown to build resilience in students by documenting and analyzing their experiences, which gives the students a new perspective on an old situation and allows sorting of emotions. It also has been shown to increase a student's level of compassion and create a sense of community among students in what could otherwise be deemed an isolating classroom.
Creative writing influence on international students
Creative writing may have an influence not only on native speaking students but also on international students. Educators who advocate for creative writing say incorporating creative writing classes or exercises has the potential to develop students into better readers, analysts, and writers. These same people say creative writing can have similar effects on international students by acting as a platform for them to share their own heritage, experiences, and values. Scholar Youngjoo Yi conducted a case-study that tested this idea over two years. Yi focused on an international student from Korea and examined how her creative writing class influenced her in-school and out-of-school writing. He concluded that taking the creative writing class ultimately made her a more confident writer not only in English but also in other languages. In addition to that, the projects done in her creative writing class encouraged her to express and connect her Korean heritage with her English writing.
Creative writing influence on composition studies
Argument and research writing is a major focus in the field of composition studies. The focus on academic writing tends to leave little room for creative writing in writing studies. Gregory Stephens suggests that focusing heavily on academic writing prevents students from developing their own unique writing style and voice. When he applied creative writing pedagogy techniques to STEM students at University of Puerto Rico-Mayaguez, he found exercises such as "self-characterization" and storytelling assignments helped his STEM students develop empathy, self-awareness, and a narrative voice. He suggests these are skills that are transferable to real-world situations such as professional settings. By engaging in creative writing exercises/activities, students are able to break free from the "constraints of formal thinking and writing" of academic writing, potentially boosting students’ confidence, creativity, and overall writing skills.
Controversy in academia
Creative writing is considered by some academics (mostly in the US) to be an extension of the English discipline, even though it is taught around the world in many languages. The English discipline is traditionally seen as the critical study of literary forms, not the creation of literary forms. Some academics see creative writing as a challenge to this tradition. In the UK and Australia, as well as increasingly in the US and the rest of the world, creative writing is considered a discipline in its own right, not an offshoot of any other discipline.
Those who support creative writing programs either as part or separate from the English discipline, argue for the academic worth of the creative writing experience. They argue that creative writing hones the students' abilities to clearly express their thoughts and that creative writing entails an in-depth study of literary terms and mechanisms so they can be applied to the writer's work to foster improvement. These critical analysis skills are further used in other literary studies outside the creative writing sphere. Indeed, the process of creative writing, the crafting of a thought-out and original piece, is considered by some to constitute experience in creative problem-solving.
Despite a large number of academic creative writing programs throughout the world, many people argue that creative writing cannot be taught. Essayist Louis Menand explores the issue in an article for the New Yorker in which he quotes Kay Boyle, the director of the creative writing program at San Francisco State University for sixteen years, who said, "all creative-writing programs ought to be abolished by law." Contemporary discussions of creative writing at the university level vary widely; some people value MFA programs and regard them with great respect, whereas many MFA candidates and hopefuls lament their chosen programs' lack of both diversity and genre awareness.
The pedagogy of creative writing is also a source of controversy. Critics of MFA and English graduate programs argue that the methods of instruction discriminate against people with disabilities, emphasizing writing practices such as daily writing requirements or location-based writing that students with chronic illness, physical or mental health barriers, and neurodivergent students are unable to access. The selection of texts used in traditional creative writing programs is also being challenged, with critics pointing out that Western literary canon and writing pedagogy is "historically rooted and linked to exclusion and structural racism in creative writing programs."
In prisons
In the late 1960s, American prisons began implementing creative writing programs due to the prisoner rights movement that stemmed from events such as the Attica Prison riot. The creative writing programs are among many art programs that aim to benefit prisoners during and after their time in prison. Programs such as these provide education, structure, and a creative outlet to encourage rehabilitation. These programs' continuation relies heavily on volunteers and outside financial support from sources such as authors and activist groups.
The Poets Playwrights Essayists Editors and Novelists, known as PEN, were among the most significant contributors to creative writing programs in America. In 1971, PEN established the Prison Writing Committee to implement and advocate for creative writing programs in prisons throughout the U.S. The PEN Writing Committee improved prison libraries, inspired volunteer writers to teach prisoners, persuaded authors to host workshops, and founded an annual literary competition for prisoners. Workshops and classes help prisoners build self-esteem, make healthy social connections, and learn new skills, which can ease prisoner reentry.
Creative writing programs offered in juvenile correction facilities have also proved beneficial. In Alabama, Writing Our Stories began in 1997 as an anti-violence initiative to encourage positive self-expression among incarcerated youths. The program found that the participants gained confidence, the ability to empathize and see their peers in a more positive light, and motivation to want to return to society and live a more productive life.
One California study of prison fine arts programs found art education increased emotional control and decreased disciplinary reports. Participation in creative writing and other art programs result in significant positive outcomes for the inmates' mental health, relationship with their families, and the facility's environment. The study evidenced improved writing skills enhanced one's ability in other academic areas of study, portraying writing as a fundamental tool for building one's intellect. Teaching prisoners creative writing can encourage literacy, teach necessary life skills, and provide prisoners with an outlet to express regret, accountability, responsibility, and a kind of restorative justice.
Elements
Action
Character
Conflict
Dialogue
Genre
Narration
Pace
Plot
Point of view
Scene
Setting
Style
Suspense
Theme and motif
Tone
Voice
Forms and genres of literature
Autobiography/Memoir
Creative non-fiction (Personal & Journalistic Essays)
Children's books
Drama
Epic
Flash fiction
Graphic novels/Comics
Novel
Novella
Play
Poetry
Screenplay
Short story
Dialogues
Blogs
See also
Asemic writing
Author
Book report
Clarion Workshop
Collaborative writing
Creativity
Electronic literature
Expository writing
Fan fiction
Fiction writing
High School for Writing and Communication Arts (in New York City)
Iowa Writers' Workshop
Literature
Naked Writing
Show, don't tell
Songwriting
Stream of consciousness (narrative mode)
Writer's block
Writing
Writing circle
Writing process
Writing style
Writing Workshop
References
Further reading
Republished as
External links
Creative Writing Guide - The University of Vermont
Writing in the disciplines: Creative Writing - Kelsey Shields, Writing Center, University of Richmond
Communication design
Creativity
Writing | Creative writing | [
"Engineering",
"Biology"
] | 2,387 | [
"Creativity",
"Behavior",
"Communication design",
"Design",
"Human behavior"
] |
1,039,889 | https://en.wikipedia.org/wiki/Tangent%20half-angle%20formula | In trigonometry, tangent half-angle formulas relate the tangent of half of an angle to trigonometric functions of the entire angle.
Formulae
The tangent of half an angle is the stereographic projection of the circle through the point at angle radians onto the line through the angles . Among these formulas are the following:
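The formulas referred to here are not reproduced above; as a reconstruction (standard identities, not copied from the source), the tangent half-angle identity can be written as:

```latex
\tan\frac{\theta}{2}
  = \frac{\sin\theta}{1+\cos\theta}
  = \frac{1-\cos\theta}{\sin\theta}
  = \csc\theta - \cot\theta .
```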
Identities
From these one can derive identities expressing the sine, cosine, and tangent as functions of tangents of half-angles:
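Writing t = tan(θ/2), the derived identities described here take the following standard form; this is a reconstruction of well-known results rather than the source's own rendering:

```latex
\sin\theta = \frac{2t}{1+t^2}, \qquad
\cos\theta = \frac{1-t^2}{1+t^2}, \qquad
\tan\theta = \frac{2t}{1-t^2},
\qquad\text{where } t = \tan\frac{\theta}{2}.
```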
Proofs
Algebraic proofs
Using double-angle formulae and the Pythagorean identity gives
Taking the quotient of the formulae for sine and cosine yields
Combining the Pythagorean identity with the double-angle formula for the cosine,
rearranging, and taking the square roots yields
and
which, upon division gives
Alternatively,
It turns out that the absolute value signs in these last two formulas may be dropped, regardless of which quadrant is in. With or without the absolute value bars these formulas do not apply when both the numerator and denominator on the right-hand side are zero.
Also, using the angle addition and subtraction formulae for both the sine and cosine one obtains:
Pairwise addition of the above four formulae yields:
Setting and and substituting yields:
Dividing the sum of sines by the sum of cosines one arrives at:
Geometric proofs
Applying the formulae derived above to the rhombus figure on the right, it is readily shown that
In the unit circle, application of the above shows that . By similarity of triangles,
It follows that
The tangent half-angle substitution in integral calculus
In various applications of trigonometry, it is useful to rewrite the trigonometric functions (such as sine and cosine) in terms of rational functions of a new variable . These identities are known collectively as the tangent half-angle formulae because of the definition of . These identities can be useful in calculus for converting rational functions in sine and cosine to functions of in order to find their antiderivatives.
Geometrically, the construction goes like this: for any point on the unit circle, draw the line passing through it and the point . This point crosses the -axis at some point . One can show using simple geometry that . The equation for the drawn line is . The equation for the intersection of the line and circle is then a quadratic equation involving . The two solutions to this equation are and . This allows us to write the latter as rational functions of (solutions are given below).
The parameter represents the stereographic projection of the point onto the -axis with the center of projection at . Thus, the tangent half-angle formulae give conversions between the stereographic coordinate on the unit circle and the standard angular coordinate .
Then we have
and
Both this expression of and the expression can be solved for . Equating these gives the arctangent in terms of the natural logarithm
In calculus, the tangent half-angle substitution is used to find antiderivatives of rational functions of and . Differentiating gives
and thus
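The differentiation step alluded to above yields the standard substitution rule; the short worked integral that follows is an illustrative example and is not taken from the source:

```latex
t = \tan\frac{x}{2}
  \;\Longrightarrow\;
  \frac{dt}{dx} = \frac{1}{2}\sec^2\frac{x}{2} = \frac{1+t^2}{2},
  \qquad
  dx = \frac{2\,dt}{1+t^2},
\qquad
\int \frac{dx}{1+\cos x}
  = \int \frac{1+t^2}{2}\cdot\frac{2\,dt}{1+t^2}
  = \int dt
  = \tan\frac{x}{2} + C .
```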
Hyperbolic identities
One can play an entirely analogous game with the hyperbolic functions. A point on (the right branch of) a hyperbola is given by . Projecting this onto -axis from the center gives the following:
with the identities
and
Finding in terms of leads to following relationship between the inverse hyperbolic tangent and the natural logarithm:
The hyperbolic tangent half-angle substitution in calculus uses
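For the hyperbolic case, the analogous substitution is stated here in its standard form (a reconstruction, since the source's formulas are not shown):

```latex
t = \tanh\frac{x}{2}
  \;\Longrightarrow\;
  \sinh x = \frac{2t}{1-t^2}, \qquad
  \cosh x = \frac{1+t^2}{1-t^2}, \qquad
  dx = \frac{2\,dt}{1-t^2}.
```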
The Gudermannian function
Comparing the hyperbolic identities to the circular ones, one notices that they involve the same functions of , just permuted. If we identify the parameter in both cases we arrive at a relationship between the circular functions and the hyperbolic ones. That is, if
then
where is the Gudermannian function. The Gudermannian function gives a direct relationship between the circular functions and the hyperbolic ones that does not involve complex numbers. The above descriptions of the tangent half-angle formulae (projection the unit circle and standard hyperbola onto the -axis) give a geometric interpretation of this function.
Rational values and Pythagorean triples
Starting with a Pythagorean triangle with side lengths , , and that are positive integers and satisfy , it follows immediately that each interior angle of the triangle has rational values for sine and cosine, because these are just ratios of side lengths. Thus each of these angles has a rational value for its half-angle tangent, using .
The reverse is also true. If there are two positive angles that sum to 90°, each with a rational half-angle tangent, and the third angle is a right angle then a triangle with these interior angles can be scaled to a Pythagorean triangle. If the third angle is not required to be a right angle, but is the angle that makes the three positive angles sum to 180° then the third angle will necessarily have a rational number for its half-angle tangent when the first two do (using angle addition and subtraction formulas for tangents) and the triangle can be scaled to a Heronian triangle.
Generally, if is a subfield of the complex numbers then implies that .
See also
List of trigonometric identities
Half-side formula
External links
Tangent Of Halved Angle at Planetmath
References
Trigonometry
Conic sections
Mathematical identities | Tangent half-angle formula | [
"Mathematics"
] | 1,102 | [
"Mathematical theorems",
"Mathematical identities",
"Mathematical problems",
"Algebra"
] |
1,039,945 | https://en.wikipedia.org/wiki/Royal%20Meteorological%20Society | The Royal Meteorological Society is a long-established institution that promotes academic and public engagement in weather and climate science. Fellows of the Society must possess relevant qualifications, but Members can be lay enthusiasts. Its Quarterly Journal is one of the world's leading sources of original research in the atmospheric sciences. The chief executive officer is Liz Bentley.
Constitution
The Royal Meteorological Society traces its origins back to 3 April 1850 when the British Meteorological Society was formed as "a society the objects of which should be the advancement and extension of meteorological science by determining the laws of climate and of meteorological phenomena in general". Dr John Lee, an astronomer of Hartwell House, near Aylesbury, Buckinghamshire, founded the British Meteorological Society in the library of his house along with nine others, including James Glaisher, John Drew, Edward Joseph Lowe, The Revd Joseph Bancroft Reade, and Samuel Charles Whitbread; this society became the Royal Meteorological Society. It became The Meteorological Society in 1866, when it was incorporated by Royal Charter, and the Royal Meteorological Society in 1883, when Her Majesty Queen Victoria granted the privilege of adding 'Royal' to the title. Along with 74 others, the famous meteorologist Luke Howard joined the original 15 members of the Society at its first ordinary meeting on 7 May 1850. As of 2008, it had more than 3,000 members worldwide. The chief executive of the Society is Professor Liz Bentley. Paul Hardaker previously served as chief executive from 2006 to 2012.
Membership
There are four membership categories:
Honorary Fellow
Fellow (FRMetS)
Member
Corporate member
Awards
The society regularly awards a number of medals and prizes, of which the Symons Gold Medal (established in 1901) and the Mason Gold Medal (established in 2006) are pre-eminent. The two medals are awarded alternately.
Other awards include the Buchan Prize, the Hugh Robert Mill Award, the L F Richardson Prize, the Michael Hunt Award, the Fitzroy Prize, the Gordon Manley Weather Prize, the International Journal of Climatology Prize, the Society Outstanding Service Award and the Vaisala Award.
Journals
The society has a number of regular publications:
Atmospheric Science Letters: a monthly journal that provides a peer-reviewed publication route for new shorter contributions in the field of atmospheric and closely related sciences.
Weather: a monthly journal with many full colour illustrations and photos for specialists and general readers with an interest in meteorology. It uses a minimum of mathematics and technical language.
Quarterly Journal of the Royal Meteorological Society: one of the world's leading journals for meteorology, publishing original research in the atmospheric sciences. There are eight issues per year.
Meteorological Applications: this is a journal for applied meteorologists, forecasters and users of meteorological services and has been published since 1994. It is aimed at a general readership and authors are asked to take this into account when preparing papers.
International Journal of Climatology: has 15 issues a year and covers a broad spectrum of research in climatology.
WIREs Climate Change: a journal about climate change
Geoscience Data Journal: an online, open-access journal.
Climate Resilience and Sustainability: an interdisciplinary, open-access journal.
All publications are available online but a subscription is required for some. However certain "classic" papers are freely available on the Society's website.
Local centres and special interest groups
The society has several local centres across the UK.
There are also a number of special interest groups which organise meetings and other activities to facilitate exchange of information and views within specific areas of meteorology. These are informal groups of professionals interested in specific technical areas of the profession of meteorology. The groups are primarily a way of communicating at a specialist level.
Presidents
1850–1853: Samuel Charles Whitbread, first time
1853–1855: George Leach
1855–1857: John Lee
1857–1858: Robert Stephenson
1859–1860: Thomas Sopwith
1861–1862: Nathaniel Beardmore
1863–1864: Robert Dundas Thomson, died in office
1864: Samuel Charles Whitbread, second time
1865–1866: Charles Brooke
1867–1868: James Glaisher
1869–1870: Charles Vincent Walker
1871–1872: John William Tripe
1873–1875: Robert James Mann
1876–1877: Henry Storks Eaton
1878–1879: Charles Greaves
1880–1881: George James Symons, first time
1882–1883: Sir John Knox Laughton
1884–1885: Robert Henry Scott
1886–1887: William Ellis
1888–1889: William Marcet
1890–1891: Baldwin Latham
1892–1893: Charles Theodore Williams, first time
1894–1895: Richard Inwards
1896–1897: Edward Mawley
1898–1899: Francis Campbell Bayard
1900: George James Symons, second time; died in office
1900: Charles Theodore Williams, second time
1901–1902: William Henry Dines
1903–1904: Captain David W. Barker
1905–1906: Richard Bentley
1907–1908: Hugh Robert Mill
1910–1911: Henry Mellish
1911–1912: Henry Newton Dickson
1913–1914: Charles John Philip Cave, first time
1915–1917: Sir Henry George Lyons
1918–1919: Sir Napier Shaw
1920–1921: Reginald Hawthorn Hooker
1922–1923: Charles Chree
1924–1925: Charles John Philip Cave, second time
1926–1927: Sir Gilbert Walker
1928–1929: Richard Gregory
1930–1931: Rudolf Gustav Karl Lempfert
1932–1933: Sydney Chapman
1934–1935: Ernest Gold
1936–1937: Francis John Welsh Whipple
1938–1939: Sir Bernard A. Keen
1940–1941: Sir George Clarke Simpson
1942–1944: David Brunt
1945–1946: Gordon Manley
1947–1949: G. M. B. Dobson
1949–1951: Sir Robert Alexander Watson-Watt
1951–1953: Sir Charles Normand
1953–1955: Sir Graham Sutton
1955–1957: Reginald Sutcliffe
1957–1959: Percival Albert Sheppard
1959–1961: James Martin Stagg
1961–1963: Howard Latimer Penman
1963–1965: John Stanley Sawyer
1965–1967: G. D. Robinson
1967–1968: F. Kenneth Hare
1968–1970: John Mason
1970–1972: Frank Pasquill
1972–1974: Robert B. Pearce
1974–1976: Raymond Hide
1976–1978: John T. Houghton
1978–1980: John Monteith
1980–1982: Philip Goldsmith
1982–1984: Henry Charnock
1984–1986: Andrew Gilchrist
1986–1988: Richard S. Scorer
1988–1990: Keith Anthony Browning
1990–1992: Stephen Austen Thorpe
1992–1994: Paul James Mason
1994–1996: John E. Harries
1996–1998: David J. Carson
1998–2000: Sir Brian Hoskins
2000–2002: David Burridge
2002–2004: Howard Cattle
2004–2006: Chris Collier
2006–2008: Geraint Vaughan
2008–2010: Julia Slingo
2010–2012: Tim Palmer
2012–2014: Joanna Haigh
2014–2016: Jennie Campbell
2016–2018: Ellie Highwood
2018–2020: David Warrilow
2020–2022: David Griggs
Notable fellows
John Farrah (1849–1907).
See also
List of atmospheric dispersion models
UK Dispersion Modelling Bureau
Met Office
References
External links
The RMetS website
UK Atmospheric Dispersion Modelling Liaison Committee (ADMLC) web site
Meteorological societies
Meteorological
Scientific organisations based in the United Kingdom
Atmospheric dispersion modeling
Climatological research organizations
Climate of the United Kingdom
Geographic societies
Learned societies of the United Kingdom
Scientific organizations established in 1850
1850 establishments in the United Kingdom | Royal Meteorological Society | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,514 | [
"Atmospheric dispersion modeling",
"Environmental modelling",
"Environmental engineering"
] |
1,039,962 | https://en.wikipedia.org/wiki/Return%20period | A return period, also known as a recurrence interval or repeat interval, is an average time or an estimated average time between events such as earthquakes, floods, landslides, or river discharge flows to occur.
It is a statistical measurement typically based on historic data over an extended period, and is usually used for risk analysis. Examples include deciding whether a project should be allowed to go forward in a zone of a certain risk, or designing structures to withstand events with a certain return period. The following analysis assumes that the probability of the event occurring does not vary over time and is independent of past events.
Estimating a return period
Recurrence interval
n is the number of years on record;
m is the rank of the observed occurrence when the occurrences are arranged in descending order
For floods, the event may be measured in terms of m3/s or height; for storm surges, in terms of the height of the surge, and similarly for other events. This is Weibull's Formula.
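The recurrence-interval formula itself does not appear above; the standard Weibull plotting-position form, consistent with the variable definitions given, is:

```latex
T = \frac{n+1}{m}
```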
Return period as the reciprocal of expected frequency
The theoretical return period between occurrences is the inverse of the average frequency of occurrence. For example, a 10-year flood has a 1/10 = 0.1 or 10% chance of being exceeded in any one year and a 50-year flood has a 0.02 or 2% chance of being exceeded in any one year.
This does not mean that a 100-year flood will happen regularly every 100 years, or only once in 100 years, despite the connotations of the name "return period". In any given 100-year period, a 100-year event may occur once, twice, more times, or not at all, and each outcome has a probability that can be computed as below.
Also, the estimated return period below is a statistic: it is computed from a set of data (the observations), as distinct from the theoretical value in an idealized distribution. One does not actually know that a certain or greater magnitude happens with 1% probability, only that it has been observed exactly once in 100 years.
That distinction is significant because there are few observations of rare events: for instance, if observations go back 400 years, the most extreme event (a 400-year event by the statistical definition) may later be classed, on longer observation, as a 200-year event (if a comparable event immediately occurs) or a 500-year event (if no comparable event occurs for a further 100 years).
Further, one cannot determine the size of a 1000-year event based on such records alone but instead must use a statistical model to predict the magnitude of such an (unobserved) event. Even if the historic return interval is a lot less than 1000 years, if there are a number of less-severe events of a similar nature recorded, the use of such a model is likely to provide useful information to help estimate the future return interval.
Probability distributions
One would like to be able to interpret the return period in probabilistic models. The most logical interpretation for this is to take the return period as the counting rate in a Poisson distribution since it is the expectation value of the rate of occurrences. An alternative interpretation is to take it as the probability for a yearly Bernoulli trial in the binomial distribution. That is disfavoured because each year does not represent an independent Bernoulli trial but is an arbitrary measure of time. This question is mainly academic as the results obtained will be similar under both the Poisson and binomial interpretations.
Poisson
The probability mass function of the Poisson distribution is
where is the number of occurrences the probability is calculated for, the time period of interest, is the return period and is the counting rate.
The probability of no-occurrence can be obtained simply by considering the case for . The formula is
Consequently, the probability of exceedance (i.e. the probability of an event "stronger" than the event with return period to occur at least once within the time period of interest) is
Note that for any event with return period , the probability of exceedance within an interval equal to the return period (i.e. ) is independent of the return period and is equal to . This means, for example, that there is a 63.2% probability that a flood larger than the 50-year flood will occur within any 50-year period.
Example
If the return period of occurrence is 243 years () then the probability of exactly one occurrence in ten years is
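As a rough check of the Poisson reasoning in this section, the following Python sketch computes the probability of exceedance within one return period and the probability of exactly one occurrence of a 243-year event in ten years; it is illustrative only and not part of the source.

```python
import math

def poisson_prob(k, n, T):
    """Probability of exactly k occurrences of a T-year event in n years,
    assuming a Poisson process with rate 1/T per year."""
    mu = n / T                        # expected number of occurrences in n years
    return math.exp(-mu) * mu**k / math.factorial(k)

def prob_exceedance(n, T):
    """Probability of at least one occurrence of a T-year event in n years."""
    return 1.0 - poisson_prob(0, n, T)

print(prob_exceedance(50, 50))    # ~0.632: one full return period gives 1 - 1/e
print(poisson_prob(1, 10, 243))   # ~0.0395: exactly one 243-year event in 10 years
```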
Binomial
In a given period of for a unit time (e.g. ), the probability of a given number r of events of a return period is given by the binomial distribution as follows.
This is valid only if the probability of more than one occurrence per unit time is zero. Often that is a close approximation, in which case the probabilities yielded by this formula hold approximately.
If in such a way that then
Take
where
T is return interval
n is number of years on record.
m is the number of recorded occurrences of the event being considered
Example
Given that the return period of an event is 100 years,
So the probability that such an event occurs exactly once in 10 successive years is:
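A small Python check of this binomial example follows; it is illustrative, not from the source. For a 100-year event the yearly probability is p = 1/100, and the chance of exactly one occurrence in 10 years follows the binomial distribution.

```python
from math import comb

T, n, r = 100, 10, 1            # return period, years considered, occurrences
p = 1 / T                       # yearly probability of the event
prob = comb(n, r) * p**r * (1 - p)**(n - r)
print(prob)                     # ~0.0914, i.e. roughly a 9% chance
```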
Risk analysis
Return period is useful for risk analysis (such as natural, inherent, or hydrologic risk of failure). When dealing with structure design expectations, the return period is useful in calculating the riskiness of the structure.
The probability of at least one event that exceeds design limits during the expected life of the structure is the complement of the probability that no events occur which exceed design limits.
The equation for assessing this parameter is
where
is the expression for the probability of the occurrence of the event in question in a year;
n is the expected life of the structure.
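A minimal sketch of this risk calculation, assuming the usual form R = 1 - (1 - 1/T)^n, is given below; the 100-year event and 30-year design life are example inputs, not values taken from the source.

```python
def risk_of_exceedance(T, n):
    """Probability that a T-year event occurs at least once during an
    n-year design life, assuming independent years."""
    p_yearly = 1.0 / T
    return 1.0 - (1.0 - p_yearly) ** n

print(risk_of_exceedance(100, 30))   # ~0.26: about a 26% chance over 30 years
```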
See also
100-year flood
Cumulative frequency analysis
Frequency of exceedance
Residence time
References
Hydrology
Seismology
Time in science
Durations | Return period | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,186 | [
"Temporal quantities",
"Hydrology",
"Physical quantities",
"Time",
"Environmental engineering",
"Time in science",
"Spacetime",
"Durations"
] |
1,040,128 | https://en.wikipedia.org/wiki/List%20of%20satellites%20in%20geosynchronous%20orbit | This is a list of satellites in geosynchronous orbit (GSO). These satellites are commonly used for communication purposes, such as radio and television networks, back-haul, and direct broadcast. Traditional global navigation systems do not use geosynchronous satellites, but some SBAS navigation satellites do. A number of weather satellites are also present in geosynchronous orbits. Not included in the list below are several more classified military geosynchronous satellites, such as PAN.
A special case of geosynchronous orbit is the geostationary orbit, which is a circular geosynchronous orbit at zero inclination (that is, directly above the equator). A satellite in a geostationary orbit appears stationary, always at the same point in the sky, to ground observers. Popularly or loosely, the term "geosynchronous" may be used to mean geostationary. Specifically, geosynchronous Earth orbit (GEO) may be a synonym for geosynchronous equatorial orbit, or geostationary Earth orbit. To avoid confusion, geosynchronous satellites that are not in geostationary orbit are sometimes referred to as being in an inclined geostationary orbit (IGSO).
Some of these satellites are separated from each other by as little as 0.1° longitude. This corresponds to an inter-satellite spacing of approximately 73 km. The major consideration for spacing of geostationary satellites is the beamwidth at-orbit of uplink transmitters, which is primarily a factor of the size and stability of the uplink dish, as well as what frequencies the satellite's transponders receive; satellites with discontiguous frequency allocations can be much closer together.
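The 73 km figure can be reproduced with a short calculation; this Python sketch assumes the standard geostationary orbital radius of roughly 42,164 km, a value not stated in the source.

```python
import math

GEO_RADIUS_KM = 42_164        # approximate geostationary orbital radius
separation_deg = 0.1          # longitudinal separation between adjacent slots

arc_km = GEO_RADIUS_KM * math.radians(separation_deg)
print(round(arc_km, 1))       # ~73.6 km between neighbouring satellites
```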
As of July 2023, the website UCS Satellite Database lists 6,718 known satellites. Of these, 580 are listed in the database as being at GEO. The website provides a spreadsheet containing details of all the satellites, which can be downloaded.
Listings are from west to east (decreasing longitude in the Western Hemisphere and increasing longitude in the Eastern Hemisphere) by orbital position, starting and ending with the International Date Line. Satellites in inclined geosynchronous orbit are so indicated by a note in the "remarks" columns.
Western hemisphere
Eastern Hemisphere
In transit
Historical
References
External links
SatcoDX, a useful 3rd party resource
Lyngsat, a useful 3rd party resource
Satbeams – satellite footprints
TrackingSat – List of satellites in geostationary orbit
Zarya.info's list of satellites in geosynchronous orbit, updated daily
Geosynchronous orbit
Geo | List of satellites in geosynchronous orbit | [
"Engineering"
] | 544 | [
"Telecommunications engineering",
"Satellite broadcasting"
] |
1,040,286 | https://en.wikipedia.org/wiki/List%20of%20withdrawn%20drugs | Drugs or medicines may be withdrawn from commercial markets because of risks to patients, but also because of commercial reasons (e.g. lack of demand and relatively high production costs). Where risks or harms is the reason for withdrawal, this will usually have been prompted by unexpected adverse effects that were not detected during Phase III clinical trials, i.e. they were only made apparent from postmarketing surveillance data collected from the wider community over longer periods of time.
This list is not limited to drugs that were ever approved by the FDA. Some of them (lumiracoxib, rimonabant, tolrestat, ximelagatran and zimelidine, for example) were approved to be marketed in Europe but had not yet been approved for marketing in the US, when side effects became clear and their developers pulled them from the market. Some drugs in this list (e.g. LSD) were never approved for marketing in the US or Europe.
Significant withdrawals
See also
Adverse drug reaction
Adverse events
European Medicines Agency
Food and Drug Administration
References
External links
CDER Report to the Nation: 2005 Has a list of US withdrawals through 2005.
Withdrawn
Withdrawn | List of withdrawn drugs | [
"Chemistry"
] | 240 | [
"Drug-related lists",
"Drug safety",
"Withdrawn drugs"
] |
1,040,310 | https://en.wikipedia.org/wiki/Bill%20Inmon | William H. Inmon (born 1945) is an American computer scientist, recognized by many as the father of the data warehouse. Inmon wrote the first book, held the first conference (with Arnie Barnett), wrote the first column in a magazine and was the first to offer classes in data warehousing. Inmon created the accepted definition of what a data warehouse is - a subject oriented, nonvolatile, integrated, time variant collection of data in support of management's decisions. Compared with the approach of the other pioneering architect of data warehousing, Ralph Kimball, Inmon's approach is often characterized as a top-down approach.
Biography
William H. Inmon was born July 20, 1945, in San Diego, California. He received his Bachelor of Science degree in mathematics from Yale University in 1967, and his Master of Science degree in computer science from New Mexico State University.
He worked for American Management Systems and Coopers & Lybrand before 1991, when he founded the company Prism Solutions, which he took public. In 1995 he founded Pine Cone Systems, which was renamed Ambeo later on. In 1999, he created a corporate information factory web site for his consulting business.
Inmon coined terms such as the government information factory, as well as data warehousing 2.0. Inmon promotes building, usage, and maintenance of data warehouses and related topics. His books include "Building the Data Warehouse" (1992, with later editions) and "DW 2.0: The Architecture for the Next Generation of Data Warehousing" (2008).
In July 2007, Inmon was named by Computerworld as one of the ten people that most influenced the first 40 years of the computer industry.
Inmon's association with data warehousing stems from the fact that he wrote the first book on data warehousing, held the first conference on data warehousing (with Arnie Barnett), wrote the first column in a magazine on data warehousing, has written over 1,000 articles on data warehousing in journals and newsletters, created the first fold-out wall chart for data warehousing, and conducted the first classes on data warehousing.
In 2012, Inmon developed and made public technology known as "textual disambiguation". Textual disambiguation applies context to raw text and reformats the raw text and context into a standard data base format. Once raw text is passed through textual disambiguation, it can easily and efficiently be accessed and analyzed by standard business intelligence technology. Textual disambiguation is accomplished through the execution of TextualETL.
Inmon owns and operates Forest Rim Technology, a company that applies and implements data warehousing solutions executed through textual disambiguation and TextualETL.
Awards
(2002) DAMA International Professional Achievement Award for, "major contributions as the 'father of data warehousing' and a recognized thought leader in decision support" from DAMA International, The Global Data Management Community.
(2018) Received a Lifetime Achievement Award from Data Modelling Zone.
(December 2020) Received a Lifetime Achievement Award from Project Management Institute (PMI).
Publications
Bill Inmon has published more than 60 books in nine languages and 2,000 articles on data warehousing and data management.
Inmon, William H.; Imhoff, Claudia; Battas, Greg (1996). Building the Operational Data Store. Wiley.
See also
Single version of the truth
The Kimball lifecycle, a high-level sequence tasks used to design, develop and deploy a data warehouse or business intelligence system
References
External links
Corporate Information Factory - Internet Archive's copy of www.inmoncif.com, retrieved on 2016-01-16
1945 births
Data warehousing
Living people
New Mexico State University alumni
People in information technology
Yale University alumni | Bill Inmon | [
"Technology"
] | 788 | [
"People in information technology",
"Information technology"
] |
1,040,367 | https://en.wikipedia.org/wiki/Mineral%20spring | Mineral springs are naturally occurring springs that produce hard water, water that contains dissolved minerals. Salts, sulfur compounds, and gases are among the substances that can be dissolved in the spring water during its passage underground. In this they are unlike sweet springs, which produce soft water with no noticeable dissolved gasses. The dissolved minerals may alter the water's taste. Mineral water obtained from mineral springs, and the precipitated salts such as Epsom salt have long been important commercial products.
Some mineral springs may contain significant amounts of harmful dissolved minerals, such as arsenic, and should not be drunk. Sulfur springs smell of rotten eggs due to hydrogen sulfide (H2S), which is hazardous and sometimes deadly. It is a gas, and it usually enters the body when it is breathed in. The quantities ingested in drinking water are much lower and are not considered likely to cause harm, but few studies on long-term, low-level exposure have been done.
The water of mineral springs is sometimes claimed to have therapeutic value. Mineral spas are resorts that have developed around mineral springs, where (often wealthy) patrons would repair to "take the waters" — meaning that they would drink (see hydrotherapy and water cure) or bathe in (see balneotherapy) the mineral water. Historical mineral springs were often outfitted with elaborate stone-works — including artificial pools, retaining walls, colonnades, and roofs — sometimes in the form of fanciful "Greek temples", gazebos, or pagodas. Others were entirely enclosed within spring houses.
Types
For many centuries, in Europe, North America, and elsewhere, commercial proponents of mineral springs classified them according to the chemical composition of the water produced and according to the medicinal benefits supposedly accruing from each:
Arsenical springs contained arsenic
Lithia springs contained lithium salts.
Chalybeate springs contained salts of iron.
Alum springs contained alum.
Sulfur springs contained hydrogen sulfide gas (see also fumeroles).
Salt (saline) springs contained salts of calcium, magnesium or sodium.
Alkaline springs contained an alkali.
Calcic springs contained lime (calcium hydroxide).
Thermal (hot) springs could contain a high concentration of various minerals.
Soda springs contained carbon dioxide gas (soda water).
Radioactive springs contain traces of radioactive substances such as radium or uranium.
Deposits
Types of sedimentary rock – usually limestone (calcium carbonate) – are sometimes formed by the evaporation, or rapid precipitation, of minerals from spring water as it emerges, especially at the mouths of hot mineral springs. In cold mineral springs, the rapid precipitation of minerals results from the reduction of acidity when the gas bubbles out. (These mineral deposits can also be found in dried lakebeds.) Spectacular formations, including terraces, stalactites, stalagmites and 'frozen waterfalls' can result (see, for example, Mammoth Hot Springs).
One light-colored porous calcite of this type is known as travertine and has been used extensively in Italy and elsewhere as building material. Travertine can have a white, tan, or cream-colored appearance and often has a fibrous or concentric 'grain'.
Another type of spring water deposit, containing siliceous as well as calcareous minerals, is known as tufa. Tufa is similar to travertine but is even softer and more porous.
Chalybeate springs may deposit iron compounds such as limonite. Some such deposits were large enough to be mined as iron ore.
See also
List of hot springs
Sweet springs, those with no detectable sulfur or salt content
References
Cohen, Stan (Revised 1981 edition), Springs of the Virginias: A Pictorial History, Charleston, West Virginia: Quarrier Press.
Bathing
Drinking water
Geomorphology
Natural environment based therapies
Spa waters
Springs (hydrology)
Water chemistry | Mineral spring | [
"Chemistry",
"Environmental_science"
] | 794 | [
"Mineral water",
"Hydrology",
"Springs (hydrology)",
"nan"
] |
1,040,374 | https://en.wikipedia.org/wiki/Babbitt%20%28alloy%29 | Babbitt metal or bearing metal is any of several alloys used for the bearing surface in a plain bearing.
The original Babbitt alloy was invented in 1839 by Isaac Babbitt in Taunton, Massachusetts, United States. He disclosed one of his alloy recipes but kept others as trade secrets. Other formulations were developed later. Like other terms whose eponymous origin is long since deemphasized (such as diesel engine or eustachian tube), the term babbitt metal is frequently styled in lowercase. It is preferred over the term "white metal", because that term is ambiguous: it can refer to zinc die-casting metal, to lead-based alloys, to tin-based alloys, or to bearing metal in general.
Babbitt metal is most commonly used as a thin surface layer in a complex, multi-metal assembly, but its original use was as a cast-in-place bulk bearing material. Babbitt metal is characterized by its resistance to galling. Babbitt metal is soft and easily damaged, which suggests that it might be unsuitable for a bearing surface. However, its structure is made up of small hard crystals dispersed in a softer metal, which makes it, technically, a metal matrix composite. As the bearing wears, the softer metal erodes somewhat, creating paths for lubricant between the hard high spots that provide the actual bearing surface. When tin is used as the softer metal, friction causes the tin to melt and function as a lubricant, protecting the bearing from wear when other lubricants are absent.
Internal combustion engines use Babbitt metal which is primarily tin-based because it can withstand cyclic loading.
Traditional Babbitt bearings
In the traditional style of a babbitt metal bearing, a cast iron pillow block is assembled as a loose fit around the shaft, with the shaft in its approximate final position. The inner face of the cast iron pillow block is often drilled to form a key to locate the bearing metal as it is cast into place. The shaft is coated with soot as a release agent, the ends of the bearing are packed with clay to form a mold, and molten metal is poured into the cavity around the shaft, initially filling the lower half of the pillow block. The bearing is stripped, and the metal trimmed back to the top surface of the pillow block. Solidified babbitt metal is soft enough to be cut with a knife or sharp chisel.
A steel shim is inserted to protect the face of the lower bearing and to space the cap of the pillow block away from the shaft. After resealing the ends with clay, more metal is then poured to fill the cap of the pillow block through the hole in the top of the pillow block cap, which will eventually become a lubrication port.
The two halves of the bearing are then split at the shim, the shim removed, the oil holes cleared of metal and oil ways are cut into the surface of the new bearing. The shaft is smeared with engineer's blue and rotated in the bearing. When the bearing is disassembled the blue fills the hollows and is rubbed off the high spots, making them visible. The high spots are scraped down, and the process repeated, until a uniform and evenly distributed pattern of blue shows when the shaft is removed. The bearing is then cleaned and lubricated, and shimmed up such that the shaft is held firmly but not binding in the bearing. The bearing is then "run in" by being run heavily lubricated at low load and slow revolution, completing the process of exposing the hard bearing surface. After final adjustment of the shimming, a very reliable and high load capability bearing results.
Before the advent of low cost electric motors, power was distributed through factories from a central engine via overhead shafts running in hundreds of Babbitt bearings. Often leather, fabric or rubber belts would be used to transfer this rotating power to working machines.
The expression a "run bearing" also derives from this style of bearing, since failure of lubrication will lead to heat build-up due to friction in the bearing, eventually leading to the bearing metal melting and running out of the pillow block.
Modern Babbitt bearings
Until the mid-1950s, poured Babbitt bearings were common in automotive applications. The Babbitt was poured into the block or caps using a form. Tin-based Babbitts were used, as they could stand up to the impact loads found on the connecting rods and crankshaft. The poured Babbitt bearings were kept thin. The rods and caps would have shims that could be peeled off as the Babbitt wore down. Ford was known to use two 0.002" shims on each cap and a Babbitt that was 86% tin, 7% copper, and 7% antimony (see the KRW catalogs for the Model T). Steel shims were used, as the brass shims used today tend to compress over time, contributing to shorter bearing life. Poured Babbitt bearings commonly last over 50,000 miles of use before needing replacement. Poured Babbitt bearings are also known to fail gracefully, allowing the car to be driven for extended periods of time. The failed bearing is not likely to damage the crankshaft.
The crankshaft and connecting-rod big-end bearings in current automobile engines are made of a replaceable steel shell, keyed to the bearing caps. The inner surface of the steel shell is plated with a coating of bronze, which is in turn coated with a thin layer of Babbitt metal as the bearing surface.
The process of laying down this layer of metal is known as Babbitting.
Alternative bearings
In many applications, rolling-element bearings, such as ball or roller bearings, have replaced Babbitt bearings. Though such bearings can offer a lower coefficient of friction than plain bearings, their key advantage is that they can operate reliably without a continuous pressurized supply of lubricant. Ball and roller bearings can also be used in configurations that are required to carry both radial and axial thrusts. However, rolling-element bearings lack the beneficial damping and shock-load capability provided by fluid-film bearings, such as the Babbitt.
Babbitt alloys
The science of bearing Babbitt selection
The engineering of a bearing's Babbitt lining is usually completed during the design of the machine. In selecting the proper type of Babbitt for a particular job there are a number of factors to take into consideration, the most important of which are as follows:
Surface speed of the shaft
Load that the bearing is required to carry
There is no doubt that if a bearing is to be highly loaded in relation to its size, a high-tin alloy is desirable; whereas for much lower-speed work and less heavily loaded bearings, a lead-based Babbitt may be employed and is far more economical.
Surface speed of the shaft (the number of feet traveled per minute by the shaft circumferentially):
Formula: S = π × D × RPM / 12.
Example: Determine the surface speed of a 2-inch-diameter shaft turning at 1,400 revolutions per minute (RPM):
S = π × D × RPM / 12 = 3.1416 × 2 × 1,400 / 12 = 733.04 ft/min,
where π = 3.1416, D = diameter of shaft in inches, S = surface speed of the shaft.
Load bearing is required to carry (the weight which is being exerted through the combined weights of the shaft and any other direct weights on the shaft and measured in pounds-force per square inch):
Formula: L = W / (I.D. × L.O.B.).
Example: Determine the load on a 2-inch I.D. bearing, 5 inches long, carrying a weight of 3,100 lbf:
L = W / (I.D. × L.O.B.) = 3,100 / (2 × 5) = 310 lbf/in²,
where W = total weight carried by bearing, I.D = inside diameter of bearing, L.O.B = length of bearing, L = load bearing required to carry.
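Both worked examples above can be reproduced with a short script. The Python sketch below simply restates the two formulas (surface speed and bearing load); it is illustrative and not an engineering tool.

```python
import math

def surface_speed_ft_per_min(shaft_diameter_in, rpm):
    """Surface speed S = pi * D * RPM / 12, in feet per minute."""
    return math.pi * shaft_diameter_in * rpm / 12

def bearing_load_psi(total_weight_lbf, bore_in, length_in):
    """Bearing load L = W / (I.D. * length of bearing), in lbf per square inch."""
    return total_weight_lbf / (bore_in * length_in)

print(round(surface_speed_ft_per_min(2, 1400), 2))   # ~733.04 ft/min
print(bearing_load_psi(3100, 2, 5))                  # 310.0 lbf/in^2
```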
Babbitt bearing selection
While not subject to precise calculations, the following considerations must also be taken into account:
Continuity of service
Bonding characteristics
Cooling facilities
Lubrication
Cleanliness
Maintenance schedule for the bearing in use
For example, a bearing in continuous use in a harsh environment without regular maintenance will require different Babbitt and lubrication than a bearing in intermittent use in a clean, light duty environment. This so-called art is really the condensation of the experience of the technician and the experience of the bearing being rebuilt.
If the bearing has performed well in use over many years, the bearing needs simply to be rebuilt to its original specification and formulation. In this case the technician's greatest concerns are:
Bearing shell surface preparation
Bonding characteristics of the tinning compound and the Babbitt layer and,
Load bearing surface preparation and finish
Eco-Babbitt
Eco-Babbitt is an alloy of 90% Sn, 7% Zn, 3% Cu that is not technically a Babbitt metal. See Solder alloys for more information on Eco-Babbitt.
References
Bibliography
.
External links
American inventions
Engine technology
Lead alloys
Tin alloys
Antimony alloys
Copper alloys
Arsenic alloys
"Chemistry",
"Technology"
] | 1,920 | [
"Arsenic alloys",
"Lead alloys",
"Copper alloys",
"Engines",
"Engine technology",
"Tin alloys",
"Alloys",
"Antimony alloys"
] |
1,040,418 | https://en.wikipedia.org/wiki/Phil%20Kaufman%20Award | The Phil Kaufman Award for Distinguished Contributions to EDA honors individuals for their impact on electronic design by their contributions to electronic design automation (EDA). It was established in 1994 by the EDA Consortium (now the Electronic System Design Alliance, a SEMI Technology Community). The IEEE Council on Electronic Design Automation (CEDA) became a co-sponsor of the award. The first Phil Kaufman Award was presented in 1994.
The IEEE has a policy not to issue awards to deceased persons. To honor individuals who made a significant impact on EDA but died before the award was established, the Phil Kaufman Hall of Fame was created by the ESDA in 2020. The first Hall of Fame honor was presented in June 2021. Phil Kaufman awardees are included in the Phil Kaufman Hall of Fame.
Contributions to qualify for the Phil Kaufman Award are evaluated in any of the following categories:
Business
Industry Direction and Promotion
Technology and Engineering
Educational and Mentoring
The award was established to honor Phil Kaufman, the deceased former president of Quickturn Systems.
The award is described as the "Nobel Prize of EDA".
Recipients
All recipients are listed at the ESDA Phil Kaufman Award webpage.
1994 – Hermann Gummel
1995 – Donald Pederson
1996 – Carver Mead
1997 – James Solomon
1998 – Ernest S. Kuh
1999 – Hugo De Man, known for his contributions in creating and driving the development of design automation tools that have had measurable impact on the productivity of electronic design engineers.
2000 – Paul (Yen-Son) Huang
2001 – Alberto Sangiovanni-Vincentelli
2002 – Ronald A. Rohrer, electronic industry pioneer, entrepreneur, researcher and educator, who led student circuit-simulator projects that eventually led to the development of SPICE.
2003 – A. Richard Newton
2004 – Joseph Costello
2005 – Phil Moorby, inventor of Verilog
2006 – Robert Dutton, creator of SUPREM (Stanford University Process Engineering Models) and PISCES (Poisson and Continuity Equation Solver) simulation tools and software used in Technology Computer Aided Design.
2007 – Robert K. Brayton, known for work in logic synthesis, formal verification and formal equivalence checking. Co-developer of Espresso.
2008 – Aart de Geus, Synopsys CEO for contributions to the EDA industry, more specifically the Design Compiler tool.
2009 – Randal Bryant, CMU professor, for his seminal technological breakthroughs in the area of formal verification.
2010 – Pat Pistilli, for pioneering the EDA industry and building the Design Automation Conference as its premier showcase and networking platform
2011 – Chung Laung Liu, for his Distinguished Technical Contributions, Leadership Skills, and Business Acumen in Electronic Design Automation.
2013 – Chenming Hu, for major contributions to transistor modeling enabling the generation of FinFET based design.
2014 – Lucio Lanza, for helping numerous startups to develop innovative technologies.
2015 – Walden C. Rhines, CEO of Mentor Graphics, for his efforts growing the EDA and IC design industries.
2016 – Andrzej Strojwas, CMU professor and chief technologist of PDF Solutions, for his research in the area of design for manufacturing in the semiconductor industry.
2017 – Rob Rutenbar, for his contributions to algorithms and tools for analog and mixed-signal designs.
2018 – Thomas Williams, for his outstanding contributions to test automation and his overall impact on the electronics industry.
2019 – Mary Jane Irwin, Pennsylvania State University, for her extensive contributions to EDA and the community.
2020 – Hiatus due to COVID-19 pandemic
2021 – Anirudh Devgan, Cadence Design Systems CEO for Distinguished Contributions to Electronic System Design.
2022 – Giovanni De Micheli, for his significant impact on the electronic system design industry through pioneering technical contributions.
2023 - Lawrence Pileggi, "for his pioneering contributions to circuit simulation and optimization. These advances have enabled the electronics system design industry to address the challenge of interconnect delay dominated designs."
2024 - Jason Cong, for fundamental contributions to FPGA design automation technology
See also
List of computer-related awards
List of engineering awards
References
Design awards
Computer-related awards
Electrical and electronic engineering awards | Phil Kaufman Award | [
"Engineering"
] | 848 | [
"Electrical and electronic engineering awards",
"Design awards",
"Electronic engineering",
"Electrical engineering",
"Design"
] |
1,040,475 | https://en.wikipedia.org/wiki/Ehrenfeucht%E2%80%93Fra%C3%AFss%C3%A9%20game | In the mathematical discipline of model theory, the Ehrenfeucht–Fraïssé game (also called back-and-forth games)
is a technique based on game semantics for determining whether two structures
are elementarily equivalent. The main application of Ehrenfeucht–Fraïssé games is in proving the inexpressibility of certain properties in first-order logic. Indeed, Ehrenfeucht–Fraïssé games provide a complete methodology for proving inexpressibility results for first-order logic. In this role, these games are of particular importance in finite model theory and its applications in computer science (specifically computer aided verification and database theory), since Ehrenfeucht–Fraïssé games are one of the few techniques from model theory that remain valid in the context of finite models. Other widely used techniques for proving inexpressibility results, such as the compactness theorem, do not work in finite models.
Ehrenfeucht–Fraïssé-like games can also be defined for other logics, such as fixpoint logics and pebble games for finite variable logics; extensions are powerful enough to characterise definability in existential second-order logic.
Main idea
The main idea behind the game is that we have two structures, and two players – Spoiler and Duplicator. Duplicator wants to show that the two structures are elementarily equivalent (satisfy the same first-order sentences), whereas Spoiler wants to show that they are different. The game is played in rounds. A round proceeds as follows: Spoiler chooses any element from one of the structures, and Duplicator chooses an element from the other structure. In simplified terms, the Duplicator's task is to always pick an element "similar" to the one that the Spoiler has chosen, whereas the Spoiler's task is to choose an element for which no "similar" element exists in the other structure. Duplicator wins if there exists an isomorphism between the eventual substructures chosen from the two different structures; otherwise, Spoiler wins.
The game lasts for a fixed number of steps (which is an ordinal – usually a finite number or ).
Definition
Suppose that we are given two structures
and , each with no function symbols and the same set of relation symbols,
and a fixed natural number n. We can then define the Ehrenfeucht–Fraïssé
game to be a game between two players, Spoiler and Duplicator,
played as follows:
The first player, Spoiler, picks either a member of or a member of .
If Spoiler picked a member of , Duplicator picks a member of ; otherwise, Duplicator picks a member of .
Spoiler picks either a member of or a member of .
Duplicator picks an element or in the model from which Spoiler did not pick.
Spoiler and Duplicator continue to pick members of and for more steps.
At the conclusion of the game, we have chosen distinct elements of and of . We therefore have two structures on the set , one induced from via the map sending to , and the other induced from via the map sending to . Duplicator wins if these structures are the same; Spoiler wins if they are not.
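The winning condition can be phrased as a partial-isomorphism check over the chosen elements. The Python sketch below does this for structures carrying a single binary relation; the linear orders and plays in the usage lines are invented examples, not taken from the source.

```python
def duplicator_wins(rel_a, rel_b, moves_a, moves_b):
    """Ehrenfeucht-Fraisse win condition for Duplicator.

    rel_a, rel_b: sets of ordered pairs, one binary relation per structure.
    moves_a, moves_b: elements chosen in each structure, round by round.
    Duplicator wins iff moves_a[i] -> moves_b[i] is a partial isomorphism,
    i.e. equalities and the relation match in both structures."""
    n = len(moves_a)
    for i in range(n):
        for j in range(n):
            if (moves_a[i] == moves_a[j]) != (moves_b[i] == moves_b[j]):
                return False
            if ((moves_a[i], moves_a[j]) in rel_a) != ((moves_b[i], moves_b[j]) in rel_b):
                return False
    return True

# Linear orders 1 < 2 and 1 < 2 < 3, written as "less-than" relations:
less_a = {(1, 2)}
less_b = {(1, 2), (2, 3), (1, 3)}
print(duplicator_wins(less_a, less_b, [1, 2], [1, 3]))   # True: order preserved
print(duplicator_wins(less_a, less_b, [1, 2], [3, 1]))   # False: order reversed
```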
For each n we define a relation if Duplicator wins the n-move game . These are all equivalence relations on the class of structures with the given relation symbols. The intersection of all these relations is again an equivalence relation .
Equivalence and inexpressibility
It is easy to prove that if Duplicator wins this game for all finite n, that is, , then and are elementarily equivalent. If the set of relation symbols being considered is finite, the converse is also true.
If a property is true of but not true of , but and can be shown equivalent by providing a winning strategy for Duplicator, then this shows that is inexpressible in the logic captured by this game.
History
The back-and-forth method used in the Ehrenfeucht–Fraïssé game to verify elementary equivalence was given by Roland Fraïssé
in his thesis;
it was formulated as a game by Andrzej Ehrenfeucht. The names Spoiler and Duplicator are due to Joel Spencer. Other usual names are Eloise [sic] and Abelard (and often denoted by and ) after Heloise and Abelard, a naming scheme introduced by Wilfrid Hodges in his book Model Theory, or alternatively Eve and Adam.
Further reading
Chapter 1 of Poizat's model theory text contains an introduction to the Ehrenfeucht–Fraïssé game, and so do Chapters 6, 7, and 13 of Rosenstein's book on linear orders. A simple example of the Ehrenfeucht–Fraïssé game is given in one of Ivars Peterson's MathTrek columns.
Phokion Kolaitis' slides and Neil Immerman's book chapter on Ehrenfeucht–Fraïssé games discuss applications in computer science, the methodology for proving inexpressibility results, and several simple inexpressibility proofs using this methodology.
Ehrenfeucht–Fraïssé games are the basis for the operation of derivative on modeloids. Modeloids are certain equivalence relations and the derivative provides for a generalization of standard model theory.
References
External links
Six Lectures on Ehrenfeucht-Fraïssé games at MATH EXPLORERS' CLUB, Cornell Department of Mathematics.
Modeloids I, Miroslav Benda, Transactions of the American Mathematical Society, Vol. 250 (Jun 1979), pp. 47 – 90 (44 pages)
Model theory | Ehrenfeucht–Fraïssé game | [
"Mathematics"
] | 1,172 | [
"Mathematical logic",
"Model theory"
] |
1,040,671 | https://en.wikipedia.org/wiki/129%20%28number%29 | 129 (one hundred [and] twenty-nine) is the natural number following 128 and preceding 130.
In mathematics
129 is the sum of the first ten prime numbers. It is the smallest number that can be expressed as a sum of three squares in four different ways: 129 = 10² + 5² + 2² = 11² + 2² + 2² = 8² + 8² + 1² = 8² + 7² + 4².
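Both facts are easy to verify by brute force; the following Python sketch (illustrative only) checks the sum of the first ten primes and enumerates the three-square representations of 129.

```python
from itertools import combinations_with_replacement

# Sum of the first ten prime numbers.
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
print(sum(primes))   # 129

# All ways to write 129 as a sum of three positive squares, order ignored.
reps = [
    (a, b, c)
    for a, b, c in combinations_with_replacement(range(1, 12), 3)
    if a * a + b * b + c * c == 129
]
print(reps)   # four triples: (1, 8, 8), (2, 2, 11), (2, 5, 10), (4, 7, 8)
```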
129 is the product of only two primes, 3 and 43, making 129 a semiprime. Since 3 and 43 are both Gaussian primes, this means that 129 is a Blum integer.
129 is a repdigit in base 6 (333).
129 is a happy number.
129 is a centered octahedral number.
References
Integers | 129 (number) | [
"Mathematics"
] | 135 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
1,040,853 | https://en.wikipedia.org/wiki/Radio%20window | The radio window is the region of the radio spectrum that penetrate the Earth's atmosphere. Typically, the lower limit of the radio window's range has a value of about 10 MHz (λ ≈ 30 m); the best upper limit achievable from optimal terrestrial observation sites is equal to approximately 1 THz (λ ≈ 0.3 mm).
It plays an important role in astronomy; up until the 1940s, astronomers could only use the visible and near infrared spectra for their measurements and observations. With the development of radio telescopes, the radio window became more and more utilizable, leading to the development of radio astronomy that provided astrophysicists with valuable observational data.
Factors affecting lower and upper limits
The lower and upper limits of the radio window's range of frequencies are not fixed; they depend on a variety of factors.
Absorption of mid-IR
The upper limit is affected by the vibrational transitions of atmospheric molecules such as oxygen (O2), carbon dioxide (CO2), and water (H2O), whose energies are comparable to the energies of mid-infrared photons: these molecules largely absorb the mid-infrared radiation that heads towards Earth.
Ionosphere
The radio window's lower frequency limit is greatly affected by the ionospheric refraction of the radio waves whose frequencies are approximately below 30 MHz (λ > 10 m); radio waves with frequencies below the limit of 10 MHz (λ > 30 m) are reflected back into space by the ionosphere. The lower limit increases with the density of the ionosphere's free electrons and coincides with the plasma frequency:
where is the plasma frequency in Hz and the electron density in electrons per cubic meter. Since it is highly dependent on sunlight, the value of changes significantly from daytime to nighttime, usually being higher during the day, which raises the radio window's lower limit, and lower during the night, which lowers the radio window's lower frequency end. However, this also depends on the solar activity and the geographic position.
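The plasma frequency formula referenced above does not survive in the text; a commonly used expression is f_p = (1/2π)·sqrt(N_e·e²/(ε₀·m_e)) ≈ 8.98·√N_e Hz, with N_e in electrons per cubic metre. The Python sketch below evaluates it for an assumed daytime ionospheric density of 10¹² m⁻³ (an illustrative value, not from the source).

```python
import math

E_CHARGE = 1.602e-19     # electron charge, C
E_MASS = 9.109e-31       # electron mass, kg
EPSILON_0 = 8.854e-12    # vacuum permittivity, F/m

def plasma_frequency_hz(n_e):
    """Plasma frequency f_p = sqrt(n_e * e^2 / (eps0 * m_e)) / (2 * pi)."""
    return math.sqrt(n_e * E_CHARGE**2 / (EPSILON_0 * E_MASS)) / (2 * math.pi)

print(plasma_frequency_hz(1e12))   # ~9 MHz, close to the ~10 MHz lower limit quoted above
```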
Troposphere
When performing observations, radio astronomers try to extend the upper limit of the radio window towards the 1 THz optimum, since astronomical objects give spectral lines of greater intensity in the higher frequency range. Tropospheric water vapour greatly affects the upper limit since its resonant absorption frequency bands are 22.3 GHz (λ ≈ 1.32 cm), 183.3 GHz (λ ≈ 1.64 mm) and 323.8 GHz (λ ≈ 0.93 mm). The tropospheric oxygen bands at 60 GHz (λ ≈ 5.00 mm) and 118.74 GHz (λ ≈ 2.52 mm) also affect the upper limit. To tackle the issue of water vapour, many observatories are built at high altitudes where the climate is drier. However, little can be done to avoid oxygen's interference with radio wave propagation.
Radio frequency interference
The width of the radio window is also affected by radio frequency interference which hinders the observations at certain wavelength ranges and undermines the quality of the observational data of radio astronomy.
See also
Infrared window
Optical window
Radio propagation
References
Electromagnetic spectrum | Radio window | [
"Physics"
] | 656 | [
"Spectrum (physical sciences)",
"Electromagnetic spectrum"
] |
1,040,920 | https://en.wikipedia.org/wiki/Sigma%20bond | In chemistry, sigma bonds (σ bonds) or sigma overlap are the strongest type of covalent chemical bond. They are formed by head-on overlapping between atomic orbitals along the internuclear axis. Sigma bonding is most simply defined for diatomic molecules using the language and tools of symmetry groups. In this formal approach, a σ-bond is symmetrical with respect to rotation about the bond axis. By this definition, common forms of sigma bonds are s+s, pz+pz, s+pz and dz2+dz2 (where z is defined as the axis of the bond or the internuclear axis).
Quantum theory also indicates that molecular orbitals (MO) of identical symmetry actually mix or hybridize. As a practical consequence of this mixing in diatomic molecules, the wavefunctions of the s+s and pz+pz molecular orbitals become blended. The extent of this mixing (or hybridization or blending) depends on the relative energies of the MOs of like symmetry.
For homodiatomics (homonuclear diatomic molecules), bonding σ orbitals have no nodal planes at which the wavefunction is zero, either between the bonded atoms or passing through the bonded atoms. The corresponding antibonding, or σ* orbital, is defined by the presence of one nodal plane between the two bonded atoms.
Sigma bonds are the strongest type of covalent bonds due to the direct overlap of orbitals, and the electrons in these bonds are sometimes referred to as sigma electrons.
The symbol σ is the Greek letter sigma. When viewed down the bond axis, a σ MO has a circular symmetry, hence resembling a similarly sounding "s" atomic orbital.
Typically, a single bond is a sigma bond while a multiple bond is composed of one sigma bond together with pi or other bonds. A double bond has one sigma plus one pi bond, and a triple bond has one sigma plus two pi bonds.
Polyatomic molecules
Sigma bonds are obtained by head-on overlapping of atomic orbitals. The concept of sigma bonding is extended to describe bonding interactions involving overlap of a single lobe of one orbital with a single lobe of another. For example, propane is described as consisting of ten sigma bonds, one each for the two C−C bonds and one each for the eight C−H bonds.
Multiple-bonded complexes
Transition metal complexes that feature multiple bonds, such as the dihydrogen complex, have sigma bonds between the multiple bonded atoms. These sigma bonds can be supplemented with other bonding interactions, such as π-back donation, as in the case of W(CO)3(PCy3)2(H2), and even δ-bonds, as in the case of chromium(II) acetate.
Organic molecules
Organic molecules are often cyclic compounds containing one or more rings, such as benzene, and are often made up of many sigma bonds along with pi bonds. According to the sigma bond rule, the number of sigma bonds in a molecule is equal to the number of atoms plus the number of rings minus one.
Nσ = Natoms + Nrings − 1
This rule is a special-case application of the Euler characteristic of the graph which represents the molecule.
A molecule with no rings can be represented as a tree with a number of bonds equal to the number of atoms minus one (as in dihydrogen, H2, with only one sigma bond, or ammonia, NH3, with 3 sigma bonds). There is no more than 1 sigma bond between any two atoms.
Molecules with rings have additional sigma bonds, such as benzene rings, which have 6 C−C sigma bonds within the ring for 6 carbon atoms. The anthracene molecule, C14H10, has three rings so that the rule gives the number of sigma bonds as 24 + 3 − 1 = 26. In this case there are 16 C−C sigma bonds and 10 C−H bonds.
This rule fails for molecules which, when drawn flat on paper, have a different number of rings than the molecule actually has. For example, buckminsterfullerene, C60, has 32 rings, 60 atoms, and 90 sigma bonds (one for each pair of bonded atoms), yet 60 + 32 - 1 = 91, not 90. This is because the sigma rule is a special case of the Euler characteristic, in which each ring is counted as a face, each sigma bond as an edge, and each atom as a vertex. Ordinarily one extra face is assigned to the region not inside any ring, but when buckminsterfullerene is drawn flat without any crossings, one of the rings forms the outer pentagon, and the inside of that ring becomes the outside of the graph. The rule also fails for other topologies: toroidal fullerenes, and likewise nanotubes, instead obey the rule that the number of sigma bonds is exactly the number of atoms plus the number of rings. A nanotube drawn flat, as if looking through it from one end, has a face in the middle corresponding to the far end of the nanotube, which is not a ring, and a face corresponding to the outside.
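The counting rule is simple enough to express directly in code. The sketch below applies it to the molecules discussed above; the atom and ring counts come from the text, and the buckminsterfullerene case shows the one-bond over-count explained in the preceding paragraph:

```python
def sigma_bonds(n_atoms, n_rings):
    """Sigma-bond count from the rule N_sigma = N_atoms + N_rings - 1."""
    return n_atoms + n_rings - 1

examples = {
    "H2":         (2, 0),    # 1 sigma bond
    "NH3":        (4, 0),    # 3 sigma bonds
    "propane":    (11, 0),   # C3H8: 3 C + 8 H -> 10 sigma bonds
    "benzene":    (12, 1),   # C6H6 -> 12 sigma bonds (6 C-C + 6 C-H)
    "anthracene": (24, 3),   # C14H10 -> 26 sigma bonds
}
for name, (atoms, rings) in examples.items():
    print(f"{name}: {sigma_bonds(atoms, rings)} sigma bonds")

# Buckminsterfullerene (60 atoms, 32 rings) actually has 90 sigma bonds;
# the rule over-counts by one because one ring becomes the outer face
# of the flat drawing:
print("C60 rule value:", sigma_bonds(60, 32), "(actual: 90)")
```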
See also
Bond strength
Molecular geometry
References
External links
IUPAC-definition
Chemical bonding | Sigma bond | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,085 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
1,040,953 | https://en.wikipedia.org/wiki/Recuperator | A recuperator is a special purpose counter-flow energy recovery heat exchanger positioned within the supply and exhaust air streams of an air handling system, or in the exhaust gases of an industrial process, in order to recover the waste heat. Generally, they are used to extract heat from the exhaust and use it to preheat air entering the combustion system. In this way they use waste energy to heat the air, offsetting some of the fuel, and thereby improve the energy efficiency of the system as a whole.
Description
In many types of processes, combustion is used to generate heat, and the recuperator serves to recuperate, or reclaim this heat, in order to reuse or recycle it. The term recuperator refers as well to liquid-liquid counterflow heat exchangers used for heat recovery in the chemical and refinery industries and in closed processes such as ammonia-water or LiBr-water absorption refrigeration cycle.
Recuperators are often used in association with the burner portion of a heat engine, to increase the overall efficiency. For example, in a gas turbine engine, air is compressed and mixed with fuel, which is then burned and used to drive a turbine. The recuperator transfers some of the waste heat in the exhaust to the compressed air, thus preheating it before it enters the fuel burner stage. Since the gases have been pre-heated, less fuel is needed to heat the gases up to the turbine inlet temperature. By recovering some of the energy usually lost as waste heat, the recuperator can make a heat engine or gas turbine significantly more efficient.
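A rough sketch of this energy balance, using a simple heat-exchanger effectiveness model; the temperatures and the effectiveness value are assumed, illustrative numbers rather than figures from the article:

```python
def fuel_saving_fraction(t_comp_out, t_exhaust, t_combustor_target, effectiveness):
    """Fraction of combustor heat input saved by recuperative preheating.

    Assumes constant specific heat and equal mass flows on both sides,
    a simplification of a real recuperator design.
    """
    # Air temperature leaving the recuperator (effectiveness definition):
    t_preheated = t_comp_out + effectiveness * (t_exhaust - t_comp_out)
    heat_without = t_combustor_target - t_comp_out
    heat_with = t_combustor_target - t_preheated
    return 1.0 - heat_with / heat_without

# Assumed temperatures in kelvin for a small gas turbine:
saving = fuel_saving_fraction(t_comp_out=450.0, t_exhaust=900.0,
                              t_combustor_target=1200.0, effectiveness=0.85)
print(f"Approximate share of combustor heat input saved: {saving:.0%}")
```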
Energy transfer process
Normally the heat transfer between airstreams provided by the device is termed as "sensible heat", which is the exchange of energy, or enthalpy, resulting in a change in temperature of the medium (air in this case), but with no change in moisture content. However, if moisture or relative humidity levels in the return air stream are high enough to allow condensation to take place in the device, then this will cause "latent heat" to be released and the heat transfer material will be covered with a film of water. Despite a corresponding absorption of latent heat, as some of the water film is evaporated in the opposite airstream, the water will reduce the thermal resistance of the boundary layer of the heat exchanger material and thus improve the heat transfer coefficient of the device, and hence increase efficiency. The energy exchange of such devices now comprises both sensible and latent heat transfer; in addition to a change in temperature, there is also a change in moisture content of the exhaust air stream.
However, the film of condensation will also slightly increase pressure drop through the device, and depending upon the spacing of the matrix material, this can increase resistance by up to 30%. If the unit is not laid to falls, and the condensate not allowed to drain properly, this will increase fan energy consumption and reduce the seasonal efficiency of the device.
Use in ventilation systems
In heating, ventilation and air-conditioning systems, HVAC, recuperators are commonly used to re-use waste heat from exhaust air normally expelled to atmosphere. Devices typically comprise a series of parallel plates of aluminium, plastic, stainless steel, copper, or synthetic fibre, alternate pairs of which are enclosed on two sides to form twin sets of ducts at right angles to each other, and which contain the supply and extract air streams. In this manner heat from the exhaust air stream is transferred through the separating plates, and into the supply air stream. Manufacturers claim gross efficiencies of up to 95% depending upon the specification of the unit.
The characteristics of this device are attributable to the relationship between the physical size of the unit, in particular the air path distance, and the spacing of the plates. For an equal air pressure drop through the device, a small unit will have a narrower plate spacing and a lower air velocity than a larger unit, but both units may be just as efficient. Because of the cross-flow design of the unit, its physical size will dictate the air path length, and as this increases, heat transfer will increase but pressure drop will also increase, and so plate spacing is increased to reduce pressure drop, but this in turn will reduce heat transfer.
As a general rule, a recuperator selected for a pressure drop of between will have a good efficiency while having only a small effect on fan power consumption, and will in turn have a higher seasonal efficiency than a physically smaller, but higher pressure drop, recuperator.
When heat recovery is not required, it is typical for the device to be bypassed by use of dampers arranged within the ventilation distribution system. Assuming the fans are fitted with inverter speed controls, set to maintain a constant pressure in the ventilation system, then the reduced pressure drop leads to a slowing of the fan motor and thus reducing power consumption, and in turn improves the seasonal efficiency of the system.
Use in metallurgical furnaces
Metallic recuperators have also been used for many years to recover heat from waste gases in order to preheat combustion air and fuel, reducing the energy costs and carbon footprint of operation. Compared to alternatives such as regenerative furnaces, initial costs are lower, there are no valves to be switched back and forth, there are no induced-draft fans, and no web of gas ducts spread all over the furnace is required.
Historically the recovery ratios of recuperators compared to regenerative burners were low. However, recent improvements to the technology have allowed recuperators to recover 70-80% of the waste heat, and pre-heating of combustion air up to is now possible.
Gas turbines
Recuperators can be used to increase the efficiency of gas turbines for power generation, provided the exhaust gas is hotter than the compressor outlet temperature. The exhaust heat from the turbine is used to pre-heat the air from the compressor before further heating in the combustor, reducing the fuel input required. The larger the temperature difference between turbine out and compressor out, the greater the benefit from the recuperator. Therefore, microturbines (<1 MW), which typically have low pressure ratios, have the most to gain from the use of a recuperator. In practice, a doubling of efficiency is possible through the use of a recuperator. The major practical challenge for a recuperator in microturbine applications is coping with the exhaust gas temperature, which can exceed .
Other types of gas-to-gas heat exchangers
Heat pipe
Run-around coil
Thermal wheel, or rotary heat exchanger (including enthalpy wheel and desiccant wheel)
Convection recuperator
Radiation recuperator
See also
Air handler
Energy recovery ventilation
Heat recovery ventilation
HVAC (heating, ventilation, and air conditioning)
Indoor air quality
Regenerative heat exchanger
Thermal comfort
References
External links
Energy conservation
Energy recovery
Engineering thermodynamics
Heat exchangers
Heat transfer
Heating, ventilation, and air conditioning
Heating
Industrial equipment
Low-energy building
Mechanical engineering
Sustainable building | Recuperator | [
"Physics",
"Chemistry",
"Engineering"
] | 1,463 | [
"Transport phenomena",
"Sustainable building",
"Physical phenomena",
"Heat transfer",
"Applied and interdisciplinary physics",
"Chemical equipment",
"Building engineering",
"Engineering thermodynamics",
"Construction",
"Thermodynamics",
"Mechanical engineering",
"Heat exchangers",
"nan"
] |
1,041,015 | https://en.wikipedia.org/wiki/Nitrogen%20trichloride | Nitrogen trichloride, also known as trichloramine, is the chemical compound with the formula . This yellow, oily, and explosive liquid is most commonly encountered as a product of chemical reactions between ammonia-derivatives and chlorine (for example, in swimming pools). Alongside monochloramine and dichloramine, trichloramine is responsible for the distinctive 'chlorine smell' associated with swimming pools, where the compound is readily formed as a product from hypochlorous acid reacting with ammonia and other nitrogenous substances in the water, such as urea from urine.
Preparation and occurrence
The compound is generated by treatment of ammonium chloride with calcium hypochlorite. When prepared in an aqueous-dichloromethane mixture, the trichloramine is extracted into the nonaqueous phase. Intermediates in this conversion include monochloramine (NH2Cl) and dichloramine (NHCl2).
Nitrogen trichloride, trademarked as Agene, was at one time used to bleach flour, but this practice was banned in the United States in 1949 due to safety concerns.
Structure and properties
Like ammonia, NCl3 is a pyramidal molecule. The N-Cl distances are 1.76 Å, and the Cl-N-Cl angles are 107°.
Nitrogen trichloride can form in small amounts when public water supplies are disinfected with monochloramine, and in swimming pools by disinfecting chlorine reacting with urea in urine and sweat from bathers.
Reactions and uses
The chemistry of NCl3 has been well explored. It is moderately polar, with a dipole moment of 0.6 D. The nitrogen center is basic but much less so than in ammonia. It is hydrolyzed by hot water to release ammonia and hypochlorous acid.
Concentrated samples of NCl3 can explode to give N2 and chlorine gas.
NCl3 can react with certain organic compounds to produce amines.
Safety
Nitrogen trichloride can irritate mucous membranes; it is a lachrymatory agent, but has never been used as such. The compound (rarely encountered) is a dangerous explosive, being sensitive to light, heat, even moderate shock, and organic compounds. Pierre Louis Dulong first prepared it in 1812, and lost several fingers and an eye in two explosions. In 1813, an explosion blinded Sir Humphry Davy temporarily, inducing him to hire Michael Faraday as a co-worker. They were both injured in another explosion shortly thereafter.
See also
List of food contamination incidents
References
Further reading
External links
OSHA - Nitrogen trichloride
Nitrogen Trichloride - Health References
Inorganic amines
Nitrogen halides
Inorganic chlorine compounds
Inorganic nitrogen compounds
Explosive chemicals
Nitrogen(III) compounds
Liquid explosives | Nitrogen trichloride | [
"Chemistry"
] | 573 | [
"Explosive chemicals",
"Inorganic chlorine compounds",
"Inorganic compounds",
"Inorganic nitrogen compounds"
] |
1,041,023 | https://en.wikipedia.org/wiki/Money%20illusion | In economics, money illusion, or price illusion, is a cognitive bias where money is thought of in nominal, rather than real terms. In other words, the face value (nominal value) of money is mistaken for its purchasing power (real value) at a previous point in time. Viewing purchasing power as measured by the nominal value is false, as modern fiat currencies have no intrinsic value and their real value depends purely on the price level. The term was coined by Irving Fisher in Stabilizing the Dollar. It was popularized by John Maynard Keynes in the early twentieth century, and Irving Fisher wrote an important book on the subject, The Money Illusion, in 1928.
The existence of money illusion is disputed by monetary economists who contend that people act rationally (i.e. think in real prices) with regard to their wealth. Eldar Shafir, Peter A. Diamond, and Amos Tversky (1997) have provided empirical evidence for the existence of the effect and it has been shown to affect behaviour in a variety of experimental and real-world situations.
Shafir et al. also state that money illusion influences economic behaviour in three main ways:
Price stickiness. Money illusion has been proposed as one reason why nominal prices are slow to change even where inflation has caused real prices to fall or costs to rise.
Contracts and laws are not indexed to inflation as frequently as one would rationally expect.
Social discourse, in formal media and more generally, reflects some confusion about real and nominal value.
Money illusion can also influence people's perceptions of outcomes. Experiments have shown that people generally perceive an approximate 2% cut in nominal income with no inflation as unfair, but see a 2% rise in nominal income where there is 4% inflation as fair, despite the two being almost equivalent in real terms. This result is consistent with the 'Myopic Loss Aversion theory'. Furthermore, the money illusion means nominal changes in price can influence demand even if real prices have remained constant.
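The near-equivalence of the two scenarios is a one-line calculation; a minimal sketch, using the percentages quoted above:

```python
def real_income_change(nominal_change, inflation):
    """Change in purchasing power given a nominal income change and inflation."""
    return (1 + nominal_change) / (1 + inflation) - 1

cut_no_inflation = real_income_change(-0.02, 0.00)    # typically judged "unfair"
rise_with_inflation = real_income_change(0.02, 0.04)  # typically judged "fair"
print(f"2% nominal cut, 0% inflation:  real change = {cut_no_inflation:+.2%}")
print(f"2% nominal rise, 4% inflation: real change = {rise_with_inflation:+.2%}")
```

Both cases leave the person roughly 2% worse off in real terms, yet they are judged very differently when framed in nominal terms.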
Explanations and implications
Explanations of money illusion generally describe the phenomenon in terms of heuristics. Nominal prices provide a convenient rule of thumb for determining value and real prices are only calculated if they seem highly salient (e.g. in periods of hyperinflation or in long term contracts).
Some have suggested that money illusion implies that the negative relationship between inflation and unemployment described by the Phillips curve might hold, contrary to more recent macroeconomic theories such as the "expectations-augmented Phillips curve". If workers use their nominal wage as a reference point when evaluating wage offers, firms can keep real wages relatively lower in a period of high inflation as workers accept the seemingly high nominal wage increase. These lower real wages would allow firms to hire more workers in periods of high inflation.
Money illusion is believed to be instrumental in the Friedmanian version of the Phillips curve. Actually, money illusion is not enough to explain the mechanism underlying this Phillips curve; it requires two additional assumptions. First, prices respond differently to modified demand conditions: an increased aggregate demand exerts its influence on commodity prices sooner than it does on labour market prices. Therefore, the drop in unemployment is, after all, the result of decreasing real wages, and an accurate judgement of the situation by employees is the only reason for the return to an initial (natural) rate of unemployment (i.e. the end of the money illusion, when they finally recognize the actual dynamics of prices and wages). The other (arbitrary) assumption refers to a special informational asymmetry: whatever employees are unaware of in connection with the changes in (real and nominal) wages and prices can be clearly observed by employers. The new classical version of the Phillips curve was aimed at removing the puzzling additional presumptions, but its mechanism still requires money illusion.
See also
Behavioural economics
Fiscal Illusion
Framing (social science)
Homo economicus
Map-territory relation
References
Further reading
Thaler, Richard H.(1997) "Irving Fisher: Modern Behavioral Economist" in The American Economic Review Vol 87, No 2, Papers and Proceedings of the Hundred and Fourth Annual Meeting of the American Economic Association (May, 1997)
Huw Dixon (2008), New Keynesian Economics, New Palgrave Dictionary of Economics New Keynesian macroeconomics.
Heuristics
Inflation
Behavioral finance
Cognitive biases | Money illusion | [
"Biology"
] | 876 | [
"Behavioral finance",
"Behavior",
"Human behavior"
] |
1,041,043 | https://en.wikipedia.org/wiki/Nafion | Nafion is a brand name for a sulfonated tetrafluoroethylene based fluoropolymer-copolymer synthesized in 1962 by Dr. Donald J. Connolly at the DuPont Experimental Station in Wilmington Delaware (U.S. Patent 3,282,875). Additional work on the polymer family was performed in the late 1960s by Dr. Walther Grot of DuPont. Nafion is a brand of the Chemours company. It is the first of a class of synthetic polymers with ionic properties that are called ionomers. Nafion's unique ionic properties are a result of incorporating perfluorovinyl ether groups terminated with sulfonate groups onto a tetrafluoroethylene (PTFE) backbone. Nafion has received a considerable amount of attention as a proton conductor for proton exchange membrane (PEM) fuel cells because of its excellent chemical and mechanical stability in the harsh conditions of this application.
The chemical basis of Nafion's ion-conductive properties remain a focus of extensive research. Ion conductivity of Nafion increases with the level of hydration. Exposure of Nafion to a humidified environment or liquid water increases the amount of water molecules associated with each sulfonic acid group. The hydrophilic nature of the ionic groups attract water molecules, which begin to solvate the ionic groups and dissociate the protons from the -SO3H (sulfonic acid) group. The dissociated protons "hop" from one acid site to another through mechanisms facilitated by the water molecules and hydrogen bonding. Upon hydration, Nafion phase-separates at nanometer length scales resulting in formation of an interconnected network of hydrophilic domains which allow movement of water and cations, but the membranes do not conduct electrons and minimally conduct anions due to permselectivity (charge-based exclusion). Nafion can be manufactured with or exchanged to alternate cation forms for different applications (e.g. lithiated for Li-ion batteries) and at different equivalent weights (EWs), alternatively considered as ion-exchange capacities (IECs), to achieve a range of cationic conductivities with trade-offs to other physicochemical properties such as water uptake and swelling.
Nomenclature and molecular weight
Nafion can be produced as both a powder resin and a copolymer. It has various chemical configurations and thus several chemical names in the IUPAC system. Nafion-H, for example, includes the following systematic names:
From Chemical Abstracts: ethanesulfonyl fluoride, 2-[1-[difluoro-[(trifluoroethenyl)oxy]methyl]-1,2,2,2-tetrafluoroethoxy]-1,1,2,2,-tetrafluoro-, with tetrafluoroethylene
acid copolymer
The molecular weight of Nafion is variable due to differences in processing and solution morphology. The structure of a Nafion unit illustrates the variability of the material; for example, the most basic monomer contains chain variation between the ether groups (the z subscript). Conventional methods of determining molecular weight such as light scattering and gel permeation chromatography are not applicable because Nafion is insoluble, although the molecular weight has been estimated at 105–106 Da. Instead, the equivalent weight (EW) and material thickness are used to describe most commercially available membranes. The EW is the number of grams of dry Nafion per mole of sulfonic acid groups when the material is in the acid form. Nafion membranes are commonly categorized in terms of their EW and thickness. For example, Nafion 117 indicates an extrusion-cast membrane with 1100 g/mol EW and 0.007 inches (7 thou) in thickness. In contrast to equivalent weight, conventional ion-exchange resins are usually described in terms of their ion exchange capacity (IEC), which is the multiplicative inverse or reciprocal of the equivalent weight, i.e., IEC = 1000/EW.
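A small sketch of the EW/IEC bookkeeping described above, using the Nafion 117 figures quoted in this section; expressing the IEC in milliequivalents per gram follows from the stated reciprocal relation (IEC = 1000/EW):

```python
def iec_from_ew(equivalent_weight_g_per_mol):
    """Ion-exchange capacity (meq per gram of dry polymer) from the equivalent weight."""
    return 1000.0 / equivalent_weight_g_per_mol

# Nafion 117: EW = 1100 g/mol, 0.007 inch (7 thou) thick
ew = 1100.0
print(f"Nafion 117: EW = {ew:.0f} g/mol -> IEC = {iec_from_ew(ew):.2f} meq/g")
print(f"Thickness: 0.007 inch = {0.007 * 25.4 * 1000:.0f} micrometres")
```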
Preparation
Nafion derivatives are first synthesized by the copolymerization of tetrafluoroethylene (TFE) (the monomer in Teflon) and a derivative of a perfluoro (alkyl vinyl ether) with sulfonyl acid fluoride. The latter reagent can be prepared by the pyrolysis of its respective oxide or carboxylic acid to give the olefinated structure.
The resulting product is an -SO2F-containing thermoplastic that is extruded into films. Hot aqueous NaOH converts these sulfonyl fluoride (-SO2F) groups into sulfonate groups (-SO3−Na+). This form of Nafion, referred to as the neutral or salt form, is finally converted to the acid form containing the sulfonic acid (-SO3H) groups. Nafion can be dispersed into solution by heating in aqueous alcohol at 250 °C in an autoclave for subsequent casting into thin films or use as polymeric binder in electrodes. By this process, Nafion can be used to generate composite films, coat electrodes, or repair damaged membranes.
Properties
The combination of the stable PTFE backbone with the acidic sulfonic groups gives Nafion its characteristics:
It is highly conductive to cations, making it suitable for many membrane applications.
It resists chemical attack. According to Chemours, only alkali metals (particularly sodium) can degrade Nafion under normal temperatures and pressures.
The PTFE backbone interlaced with the ionic sulfonate groups gives Nafion a high chemical stability temperature (e.g. 190 °C), but its softening point in the range of 85-100 °C gives it a moderate operating temperature, e.g. up to 100 °C, with additional challenges in all applications due to the loss of water above 100 °C.
It is a superacid catalyst. The combination of the fluorinated backbone, the sulfonic acid groups, and the stabilizing effect of the polymer matrix makes Nafion a very strong acid, with pKa ~ -6. In this respect Nafion resembles trifluoromethanesulfonic acid, CF3SO3H, although Nafion is a weaker acid by at least three orders of magnitude.
It is selectively and highly permeable to water.
Its proton conductivity is up to 0.2 S/cm, depending on temperature, hydration state, thermal history and processing conditions.
The solid phase and the aqueous phase of Nafion are both permeable to gases, which is a drawback for energy conversion devices such as artificial leaves, fuel cells, and water electrolyzers.
Structure/morphology
The morphology of Nafion membranes is a matter of continuing study to allow for greater control of its properties. Other properties such as water management, hydration stability at high temperatures, electro-osmotic drag, as well as the mechanical, thermal, and oxidative stability, are affected by the Nafion structure. A number of models have been proposed for the morphology of Nafion to explain its unique transport properties.
The first model for Nafion, called the cluster-channel or cluster-network model, consisted of an equal distribution of sulfonate ion clusters (also described as 'inverted micelles') with a 40 Å (4 nm) diameter held within a continuous fluorocarbon lattice. Narrow channels about 10 Å (1 nm) in diameter interconnect the clusters, which explains the transport properties.
The difficulty in determining the exact structure of Nafion stems from inconsistent solubility and crystalline structure among its various derivatives. Advanced morphological models have included a core-shell model where the ion-rich core is surrounded by an ion poor shell, a rod model where the sulfonic groups arrange into crystal-like rods, and a sandwich model where the polymer forms two layers whose sulfonic groups attract across an aqueous layer where transport occurs. Consistency between the models include a network of ionic clusters; the models differ in the cluster geometry and distribution. Although no model has yet been determined fully correct, some scientists have demonstrated that as the membrane hydrates, Nafion's morphology transforms from the cluster-channel model to a rod-like model.
A cylindrical-water channel model was also proposed based on simulations of small-angle X-ray scattering data and solid state nuclear magnetic resonance studies. In this model, the sulfonic acid functional groups self-organize into arrays of hydrophilic water channels, each ~ 2.5 nm in diameter, through which small ions can be easily transported. Interspersed between the hydrophilic channels are hydrophobic polymer backbones that provide the observed mechanical stability. Many recent studies, however, favored a phase-separated nanostructure consisting of locally-flat, or ribbon-like, hydrophilic domains based on evidence from direct-imaging studies and more comprehensive analysis of the structure and transport properties.
Applications
Nafion's properties make it suitable for a broad range of applications. Nafion has found use in fuel cells, electrochemical devices, chlor-alkali production, metal-ion recovery, water electrolysis, plating, surface treatment of metals, batteries, sensors, Donnan dialysis cells, drug release, gas drying or humidification, and superacid catalysis for the production of fine chemicals. Nafion is also often cited for theoretical potential (i.e., thus far untested) in a number of fields. With consideration of Nafion's wide functionality, only the most significant will be discussed below.
Chlor-alkali production cell membrane
Chlorine and sodium/potassium hydroxide are among the most produced commodity chemicals in the world. Modern production methods produce Cl2 and NaOH/KOH from the electrolysis of brine using a Nafion membrane between half-cells. Before the use of Nafion, industries used mercury cells, in which sodium forms an amalgam, to separate sodium metal from the cells, or asbestos diaphragms to allow for transfer of sodium ions between half cells; both technologies were developed in the latter half of the 19th century. The disadvantages of these systems were the worker-safety and environmental concerns associated with mercury and asbestos; economic factors also played a part, and the diaphragm process suffered from chloride contamination of the hydroxide product. Nafion was the direct result of the chlor-alkali industry addressing these concerns; Nafion could tolerate the high temperatures, high electrical currents, and corrosive environment of the electrolytic cells.
The figure to the right shows a chlor-alkali cell where Nafion functions as a membrane between half cells. The membrane allows sodium ions to transfer from one cell to the other with minimal electrical resistance. The membrane was also reinforced with additional membranes to prevent gas product mixing and minimize back transfer of Cl− and OH− ions.
Proton exchange membrane (PEM) for fuel cells
Although fuel cells have been used since the 1960s as power supplies for satellites, recently they have received renewed attention for their potential to efficiently produce clean energy from hydrogen. Nafion was found effective as a membrane for proton exchange membrane (PEM) fuel cells by permitting hydrogen ion transport while preventing electron conduction. Solid Polymer Electrolytes, which are made by connecting or depositing electrodes (usually noble metal) onto both sides of the membrane, conduct the electrons through an external circuit while the hydrogen ions cross the membrane and rejoin them to react with oxygen and produce water. Fuel cells are expected to find strong use in the transportation industry.
Superacid catalyst for fine chemical production
Nafion, as a superacid, has potential as a catalyst for organic synthesis. Studies have demonstrated catalytic properties in alkylation, isomerization, oligomerization, acylation, ketalization, esterification, hydrolysis of sugars and ethers, and oxidation. New applications are constantly being discovered. These processes, however, have not yet found strong commercial use. Several examples are shown below:
Alkylation with alkyl halides
Nafion-H gives efficient conversion whereas the alternative method, which employs Friedel-Crafts synthesis, can promote polyalkylation:
Acylation
The amount of Nafion-H needed to catalyze the acylation of benzene with aroyl chloride is 10–30% less than the Friedel-Crafts catalyst:
Catalysis of protection groups
Nafion-H increases reaction rates of protection via dihydropyran or o-trialkylsilation of alcohols, phenol, and carboxylic acids.
Isomerization
Nafion can catalyze a 1,2-hydride shift.
It is possible to immobilize enzymes within the Nafion by enlarging pores with lipophilic salts. Nafion maintains a structure and pH to provide a stable environment for the enzymes. Applications include catalytic oxidation of adenine dinucleotides.
Sensors
Nafion has found use in the production of sensors, with application in ion-selective, metallized, optical, and biosensors. What makes Nafion especially interesting is its demonstration in biocompatibility. Nafion has been shown to be stable in cell cultures as well as the human body, and there is considerable research towards the production of higher sensitivity glucose sensors.
Antimicrobial surfaces
Nafion surfaces show an exclusion zone against bacteria colonization. Moreover, layer-by-layer coatings comprising Nafion show excellent antimicrobial properties.
Dehumidification in spacecraft
The SpaceX Dragon 2 human-rated spacecraft uses Nafion membranes to dehumidify the cabin air. One side of the membrane is exposed to the cabin atmosphere, the other to the vacuum of space. This results in dehumidification since Nafion is permeable to water molecules but not air. This saves power and complexity since cooling is not required (as needed with a condensing dehumidifier), and the removed water is rejected to space with no additional mechanism needed.
Modified Nafion for PEM fuel cells
Normal Nafion will dehydrate (thus lose proton conductivity) when the temperature is above ~80 °C. This limitation troubles the design of fuel cells because higher temperatures are desirable for better efficiency and CO tolerance of the platinum catalyst. Silica and zirconium phosphate can be incorporated into Nafion water channels through in situ chemical reactions to increase the working temperature to above 100 °C.
References
External links
What Nafion Membrane is Right for an Electrolyzer / Hydrogen Generation?
Homepage of Walther G. Grot
Walther G. Grot: "Fluorinated Ionomers"
Isotopic effects on Nafion conductivity
Membrane thickness on conductivity_of_Nafion
Nafion hydration
Plastics
Fluoropolymers
Polyelectrolytes
DuPont products
Membrane technology | Nafion | [
"Physics",
"Chemistry"
] | 3,148 | [
"Separation processes",
"Unsolved problems in physics",
"Membrane technology",
"Amorphous solids",
"Plastics"
] |
1,041,063 | https://en.wikipedia.org/wiki/K%C3%A1rm%C3%A1n%20vortex%20street | In fluid dynamics, a Kármán vortex street (or a von Kármán vortex street) is a repeating pattern of swirling vortices, caused by a process known as vortex shedding, which is responsible for the unsteady separation of flow of a fluid around blunt bodies.
It is named after the engineer and fluid dynamicist Theodore von Kármán, and is responsible for such phenomena as the "singing" of suspended telephone or power lines and the vibration of a car antenna at certain speeds. Mathematical modeling of von Kármán vortex street can be performed using different techniques including but not limited to solving the full Navier-Stokes equations with k-epsilon, SST, k-omega and Reynolds stress, and large eddy simulation (LES) turbulence models, by numerically solving some dynamic equations such as the Ginzburg–Landau equation, or by use of a bicomplex variable.
Analysis
A vortex street forms only at a certain range of flow velocities, specified by a range of Reynolds numbers (Re), typically above a limiting Re value of about 90. The (global) Reynolds number for a flow is a measure of the ratio of inertial to viscous forces in the flow of a fluid around a body or in a channel, and may be defined as a nondimensional parameter of the global speed of the whole fluid flow:

Re = U L / ν0

where:
U = the free stream flow speed (i.e. the flow speed far from the fluid boundaries like the body speed relative to the fluid at rest, or an inviscid flow speed, computed through the Bernoulli equation), which is the original global flow parameter, i.e. the target to be non-dimensionalised.
L = a characteristic length parameter of the body or channel
ν0 = the free stream kinematic viscosity parameter of the fluid, which in turn is the ratio:

ν0 = μ0 / ρ0

between:
ρ0 = the reference fluid density.
μ0 = the free stream fluid dynamic viscosity
For common flows (those which can usually be considered incompressible or isothermal), the kinematic viscosity is uniform over the whole flow field and constant in time, so there is no choice for the viscosity parameter, which naturally becomes the kinematic viscosity of the fluid being considered at the temperature being considered. On the other hand, the reference length is always an arbitrary parameter, so particular attention should be paid when comparing flows around different obstacles or in channels of different shapes: the global Reynolds numbers should be referred to the same reference length. This is the reason why the most precise sources for airfoil and channel flow data specify the reference length together with the Reynolds number. The reference length can vary depending on the analysis to be performed: for a body with circular sections such as circular cylinders or spheres, one usually chooses the diameter; for an airfoil, a generic non-circular cylinder, a bluff body, or a body of revolution like a fuselage or a submarine, it is usually the profile chord or the profile thickness, or some other given width that is in fact a stable design input; for flow channels it is usually the hydraulic diameter of the channel through which the fluid is flowing.
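A minimal sketch of the Reynolds-number bookkeeping for a circular cylinder, with the diameter as the reference length; the fluid property used (air at roughly room temperature) is an assumed illustrative value:

```python
def reynolds_number(flow_speed, ref_length, kinematic_viscosity):
    """Global Reynolds number Re = U * L / nu."""
    return flow_speed * ref_length / kinematic_viscosity

nu_air = 1.5e-5  # assumed kinematic viscosity of air, m^2/s (~20 degrees C)

# A 5 mm diameter wire in a 10 m/s wind:
re = reynolds_number(flow_speed=10.0, ref_length=0.005, kinematic_viscosity=nu_air)
print(f"Re = {re:.0f}")  # ~3300, well above the ~47 threshold for a cylinder wake
```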
For an aerodynamic profile the reference length depends on the analysis. In fact, the profile chord is usually chosen as the reference length also for aerodynamic coefficient for wing sections and thin profiles in which the primary target is to maximize the lift coefficient or the lift/drag ratio (i.e. as usual in thin airfoil theory, one would employ the chord Reynolds as the flow speed parameter for comparing different profiles). On the other hand, for fairings and struts the given parameter is usually the dimension of internal structure to be streamlined (let us think for simplicity it is a beam with circular section), and the main target is to minimize the drag coefficient or the drag/lift ratio. The main design parameter which becomes naturally also a reference length is therefore the profile thickness (the profile dimension or area perpendicular to the flow direction), rather than the profile chord.
The range of Re values varies with the size and shape of the body from which the eddies are shed, as well as with the kinematic viscosity of the fluid. For the wake of a circular cylinder, for which the reference length is conventionally the diameter d of the circular cylinder, the lower limit of this range is Re ≈ 47. Eddies are shed continuously from each side of the circle boundary, forming rows of vortices in its wake. The alternation leads to the core of a vortex in one row being opposite the point midway between two vortex cores in the other row, giving rise to the distinctive pattern shown in the picture. Ultimately, the energy of the vortices is consumed by viscosity as they move further downstream, and the regular pattern disappears. Above the Re value of 188.5, the flow becomes three-dimensional, with periodic variation along the cylinder. Above Re on the order of 10^5, at the drag crisis, vortex shedding becomes irregular and turbulence sets in.
When a single vortex is shed, an asymmetrical flow pattern forms around the body and changes the pressure distribution. This means that the alternate shedding of vortices can create periodic lateral (sideways) forces on the body in question, causing it to vibrate. If the vortex shedding frequency is similar to the natural frequency of a body or structure, it causes resonance. It is this forced vibration that, at the correct frequency, causes suspended telephone or power lines to "sing" and the antenna on a car to vibrate more strongly at certain speeds.
In meteorology
The flow of atmospheric air over obstacles such as islands or isolated mountains sometimes gives birth to von Kármán vortex streets. When a cloud layer is present at the relevant altitude, the streets become visible. Such cloud layer vortex streets have been photographed from satellites. The vortex street can reach over from the obstacle and the diameter of the vortices are normally .
Engineering problems
In low turbulence, tall buildings can produce a Kármán street, so long as the structure is uniform along its height. In urban areas where there are many other tall structures nearby, the turbulence produced by these can prevent the formation of coherent vortices. Periodic crosswind forces set up by vortices along object's sides can be highly undesirable, due to the vortex-induced vibrations caused, which can damage the structure, hence it is important for engineers to account for the possible effects of vortex shedding when designing a wide range of structures, from submarine periscopes to industrial chimneys and skyscrapers. For monitoring such engineering structures, the efficient measurements of von Kármán streets can be performed using smart sensing algorithms such as compressive sensing.
Even more serious instability can be created in concrete cooling towers, especially when built together in clusters. Vortex shedding caused the collapse of three towers at Ferrybridge Power Station C in 1965 during high winds.
The failure of the original Tacoma Narrows Bridge was originally attributed to excessive vibration due to vortex shedding, but was actually caused by aeroelastic flutter.
Kármán turbulence is also a problem for airplanes, especially when landing.
Solutions
One way to prevent vortex shedding and mitigate the unwanted vibration of cylindrical bodies is the use of a tuned mass damper (TMD). A tuned mass damper is a device consisting of a mass-spring system that is specifically designed and tuned to counteract the vibrations induced by vortex shedding.
When a tuned mass damper is installed on a cylindrical structure, such as a tall chimney or mast, it helps to reduce the vibration amplitudes caused by vortex shedding. The tuned mass damper consists of a mass that is attached to the structure through springs or dampers. In many cases, the spring is replaced by suspending the mass on cables such that it forms a pendulum system with the same resonance frequency. The mass is carefully tuned to have a natural frequency that matches the dominant frequency of the vortex shedding.
As the structure is subjected to vortex shedding-induced vibrations, the tuned mass damper oscillates in an out-of-phase motion with the structure. This counteracts the vibrations, reducing their amplitudes and minimizing the potential for resonance and structural damage.
The effectiveness of a tuned mass damper in mitigating vortex shedding-induced vibrations depends on factors such as the mass of the damper, its placement on the structure, and the tuning of the system. Engineers carefully analyze the structural dynamics and characteristics of the vortex shedding phenomenon to determine the optimal parameters for the tuned mass damper.
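A rough sketch of the tuning step for a pendulum-type damper, using the small-angle pendulum formula; the target shedding frequency here is an assumed illustrative value, and real designs also consider damping and the mass ratio to the structure:

```python
import math

def pendulum_length_for_frequency(target_hz, g=9.81):
    """Cable length (m) giving a simple pendulum a natural frequency of target_hz.

    Small-angle approximation: f = (1 / (2*pi)) * sqrt(g / L).
    """
    return g / (2.0 * math.pi * target_hz) ** 2

# Assumed dominant vortex-shedding frequency of 0.2 Hz for a tall, slender structure:
print(f"Required pendulum length: {pendulum_length_for_frequency(0.2):.1f} m")
```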
Another solution to prevent the unwanted vibration of such cylindrical bodies is a longitudinal fin that can be fitted on the downstream side, which, provided it is longer than the diameter of the cylinder, prevents the eddies from interacting, and consequently they remain attached. Obviously, for a tall building or mast, the relative wind could come from any direction. For this reason, helical projections resembling large screw threads are sometimes placed at the top, which effectively create asymmetric three-dimensional flow, thereby discouraging the alternate shedding of vortices; this is also found in some car antennas.
Another countermeasure with tall buildings is using variation in the diameter with height, such as tapering - that prevents the entire building from being driven at the same frequency.
Formula
This formula generally holds true for the range 250 < Re_d < 200000:

St = f d / U ≈ 0.198 (1 − 19.7 / Re_d)
where:
f = vortex shedding frequency.
d = diameter of the cylinder
U = flow velocity.
This dimensionless parameter St is known as the Strouhal number and is named after the Czech physicist, Vincenc Strouhal (1850–1922) who first investigated the steady humming or singing of telegraph wires in 1878.
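A minimal sketch that ties the Strouhal relation above to the singing-wire example; the wire diameter, wind speed, and air viscosity are assumed illustrative values:

```python
def shedding_frequency(flow_speed, diameter, kinematic_viscosity=1.5e-5):
    """Vortex-shedding frequency (Hz) for a circular cylinder, using the empirical
    fit St = f*d/U ~ 0.198*(1 - 19.7/Re), valid for roughly 250 < Re < 200000."""
    re = flow_speed * diameter / kinematic_viscosity
    if not 250 < re < 2e5:
        raise ValueError(f"Re = {re:.0f} lies outside the validity range of the fit")
    strouhal = 0.198 * (1.0 - 19.7 / re)
    return strouhal * flow_speed / diameter

# A 5 mm telegraph wire in a 10 m/s wind (air viscosity ~1.5e-5 m^2/s assumed):
print(f"Shedding frequency ~ {shedding_frequency(10.0, 0.005):.0f} Hz")
```

The result, a few hundred hertz, falls in the audible range, which is consistent with the humming of wires described above.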
History
Although named after Theodore von Kármán (T. von Kármán and H. Rubach, 1912, Phys. Z., vol. 13, pp. 49–59), he acknowledged that the vortex street had been studied earlier by Arnulph Mallock and Henri Bénard. Kármán tells the story in his book Aerodynamics:
In his autobiography, von Kármán described how his discovery was inspired by an Italian painting of St Christopher carrying the child Jesus whilst wading through water. Vortices could be seen in the water, and von Kármán noted that "The problem for historians may have been why Christopher was carrying Jesus through the water. For me it was why the vortices". It has been suggested by researchers that the painting is one from the 14th century that can be found in the museum of the San Domenico church in Bologna.
See also
References
External links
Vortices
Aerodynamics
Articles containing video clips | Kármán vortex street | [
"Chemistry",
"Mathematics",
"Engineering"
] | 2,174 | [
"Vortices",
"Aerodynamics",
"Aerospace engineering",
"Fluid dynamics",
"Dynamical systems"
] |
1,041,078 | https://en.wikipedia.org/wiki/Lab-on-a-chip | A lab-on-a-chip (LOC) is a device that integrates one or several laboratory functions on a single integrated circuit (commonly called a "chip") of only millimeters to a few square centimeters to achieve automation and high-throughput screening. LOCs can handle extremely small fluid volumes down to less than pico-liters. Lab-on-a-chip devices are a subset of microelectromechanical systems (MEMS) devices and sometimes called "micro total analysis systems" (μTAS). LOCs may use microfluidics, the physics, manipulation and study of minute amounts of fluids. However, strictly regarded "lab-on-a-chip" indicates generally the scaling of single or multiple lab processes down to chip-format, whereas "μTAS" is dedicated to the integration of the total sequence of lab processes to perform chemical analysis.
History
After the invention of microtechnology (≈1954) for realizing integrated semiconductor structures for microelectronic chips, these lithography-based technologies were soon applied in pressure sensor manufacturing (1966) as well. Due to further development of these usually CMOS-compatibility limited processes, a tool box became available to create micrometre or sub-micrometre sized mechanical structures in silicon wafers as well: the microelectromechanical systems (MEMS) era had started.
Next to pressure sensors, airbag sensors and other mechanically movable structures, fluid handling devices were developed. Examples are: channels (capillary connections), mixers, valves, pumps and dosing devices. The first LOC analysis system was a gas chromatograph, developed in 1979 by S.C. Terry at Stanford University. However, only at the end of the 1980s and beginning of the 1990s did the LOC research start to seriously grow as a few research groups in Europe developed micropumps, flowsensors and the concepts for integrated fluid treatments for analysis systems. These μTAS concepts demonstrated that integration of pre-treatment steps, usually done at lab-scale, could extend the simple sensor functionality towards a complete laboratory analysis, including additional cleaning and separation steps.
A big boost in research and commercial interest came in the mid-1990s, when μTAS technologies turned out to provide interesting tooling for genomics applications, like capillary electrophoresis and DNA microarrays. A big boost in research support also came from the military, especially from DARPA (Defense Advanced Research Projects Agency), for their interest in portable systems to aid in the detection of biological and chemical warfare agents. The added value was not only limited to integration of lab processes for analysis but also the characteristic possibilities of individual components and the application to other, non-analysis, lab processes. Hence the term "lab-on-a-chip" was introduced.
Although the application of LOCs is still novel and modest, a growing interest of companies and applied research groups is observed in different fields such as chemical analysis, environmental monitoring, medical diagnostics and cellomics, but also in synthetic chemistry such as rapid screening and microreactors for pharmaceutics. Besides further application developments, research in LOC systems is expected to extend towards downscaling of fluid handling structures as well, by using nanotechnology. Sub-micrometre and nano-sized channels, DNA labyrinths, single cell detection and analysis, and nano-sensors, might become feasible, allowing new ways of interaction with biological species and large molecules. Many books have been written that cover various aspects of these devices, including the fluid transport, system properties, sensing techniques, and bioanalytical applications.
The size of the global lab-on-a-chip market was estimated at US$5,698 million in 2021 and is projected to increase to US$14,772 million by 2030, at a CAGR of 11.5% from 2022 to 2030.
Chip materials and fabrication technologies
The basis for most LOC fabrication processes is photolithography. Initially most processes were in silicon, as these well-developed technologies were directly derived from semiconductor fabrication. Because of demands for e.g. specific optical characteristics, bio- or chemical compatibility, lower production costs and faster prototyping, new processes have been developed such as glass, ceramics and metal etching, deposition and bonding, polydimethylsiloxane (PDMS) processing (e.g., soft lithography), Off-stoichiometry thiol-ene polymers (OSTEmer) processing, thick-film- and stereolithography-based 3D printing as well as fast replication methods via electroplating, injection molding and embossing. The demand for cheap and easy LOC prototyping resulted in a simple methodology for the fabrication of PDMS microfluidic devices: ESCARGOT (Embedded SCAffold RemovinG Open Technology). This technique allows for the creation of microfluidic channels, in a single block of PDMS, via a dissolvable scaffold (made by e.g. 3D printing).
Furthermore, the LOC field more and more exceeds the borders between lithography-based microsystem technology, nanotechnology and precision engineering. Printing is considered as a well-established yet maturing method for rapid prototyping in chip fabrication.
The development of LOC devices using printed circuit board (PCB) substrates is an interesting alternative due to these differentiating characteristics: commercially available substrates with integrated electronics, sensors and actuators; disposable devices at low cost, and very high potential of commercialization. These devices are known as Lab-on-PCBs (LOPs). The following are some of the advantages of PCB technology:
a) PCB-based circuit design offers great flexibility and can be tailored to specific demands.
b) PCB technology enables the integration of electronic and sensing modules on the same platform, reducing device size while maintaining accuracy of detection.
c) The standardized and established PCB manufacturing process allows for cost-effective large-scale production of PCB-based detection devices.
d) The growth of flexible PCB technology has driven the development of wearable detection devices. As a result, over the past decade, there have been numerous reports on the application of Lab-on-PCB to various biomedical fields.
e) PCBs are compatible with wet deposition methods, to allow for the fabrication of sensors using novel nanomaterials (e.g. graphene).
Advantages
LOCs may provide advantages, which are specific to their application. Typical advantages are:
low fluid volumes consumption (less waste, lower reagents costs and less required sample volumes for diagnostics)
faster analysis and response times due to short diffusion distances, fast heating, high surface to volume ratios, small heat capacities.
better process control because of a faster response of the system (e.g. thermal control for exothermic chemical reactions)
compactness of the systems due to integration of much functionality and small volumes
massive parallelization due to compactness, which allows high-throughput analysis
lower fabrication costs, allowing cost-effective disposable chips, fabricated in mass production
part quality may be verified automatically
safer platform for chemical, radioactive or biological studies because of integration of functionality, smaller fluid volumes and stored energies
Disadvantages
The most prominent disadvantages of labs-on-chip are:
The micro-manufacturing process required to make them is complex and labor-intensive, requiring both expensive equipment and specialized personnel. This can be overcome by recent technological advances in low-cost 3D printing and laser engraving.
The complex fluidic actuation network requires multiple pumps and connectors, for which fine control is difficult. This can be overcome by careful simulation, by an intrinsic pump such as an air-bag embedded in the chip, or by using centrifugal force to replace pumping, i.e. a centrifugal micro-fluidic biochip.
Most LOCs are novel proof-of-concept applications that are not yet fully developed for widespread use. More validation is needed before practical deployment.
In the microliter scale that LOCs deal with, surface dependent effects like capillary forces, surface roughness or chemical interactions are more dominant. This can sometimes make replicating lab processes in LOCs quite challenging and more complex than in conventional lab equipment.
Detection principles may not always scale down in a positive way, leading to low signal-to-noise ratios.
Global health
Lab-on-a-chip technology may soon become an important part of efforts to improve global health, particularly through the development of point-of-care testing devices. In countries with few healthcare resources, infectious diseases that would be treatable in a developed nation are often deadly. In some cases, poor healthcare clinics have the drugs to treat a certain illness but lack the diagnostic tools to identify patients who should receive the drugs. Many researchers believe that LOC technology may be the key to powerful new diagnostic instruments. The goal of these researchers is to create microfluidic chips that will allow healthcare providers in poorly equipped clinics to perform diagnostic tests such as microbiological culture assays, immunoassays and nucleic acid assays with no laboratory support.
Global challenges
For the chips to be used in areas with limited resources, many challenges must be overcome. In developed nations, the most highly valued traits for diagnostic tools include speed, sensitivity, and specificity; but in countries where the healthcare infrastructure is less well developed, attributes such as ease of use and shelf life must also be considered. The reagents that come with the chip, for example, must be designed so that they remain effective for months even if the chip is not kept in a climate controlled environment. Chip designers must also keep cost, scalability, and recyclability in mind as they choose what materials and fabrication techniques to use.
Examples of global LOC application
One of the most prominent and well known LOC devices to reach the market is the at home pregnancy test kit, a device that utilizes paper-based microfluidics technology.
Another active area of LOC research involves ways to diagnose and manage common infectious diseases caused by bacteria, e.g. bacteriuria, or viruses, e.g. influenza. A gold standard for diagnosing bacteriuria (urinary tract infections) is microbial culture. A recent study based on lab-on-a-chip technology, Digital Dipstick, miniaturised microbiological culture into a dipstick format and enabled it to be used at the point-of-care. Lab-on-a-chip technology can also be useful for the diagnosis and management of viral infections. In 2023, researchers developed a working prototype of an RT-LAMP lab-on-a-chip system called LoCKAmp, which provided results for SARS-CoV-2 tests within three minutes. Managing HIV infections is another area where lab-on-a-chips may be useful. Around 36.9 million people are infected with HIV in the world today, and 59% of these people receive anti-retroviral treatment. Only 75% of people living with HIV knew their status. Measuring the number of CD4+ T lymphocytes in a person's blood is an accurate way to determine if a person has HIV and to track the progress of an HIV infection. At the moment, flow cytometry is the gold standard for obtaining CD4 counts, but flow cytometry is a complicated technique that is not available in most developing areas because it requires trained technicians and expensive equipment. Recently such a cytometer was developed for just $5.
Another active area of LOC research is controlled separation and mixing. In such devices it is possible to quickly diagnose and potentially treat diseases. As mentioned above, a big motivation for their development is that they can potentially be manufactured at very low cost. One more area of research being looked into with regard to LOC is home security. Automated monitoring of volatile organic compounds (VOCs) is a desired functionality for LOC. If this application becomes reliable, these micro-devices could be installed on a global scale and notify homeowners of potentially dangerous compounds.
Plant sciences
Lab-on-a-chip devices could be used to characterize pollen tube guidance in Arabidopsis thaliana. Specifically, plant on a chip is a miniaturized device in which pollen tissues and ovules could be incubated for plant sciences studies.
See also
Biochemical assays
Dielectrophoresis: detection of cancer cells and bacteria.
Immunoassay: detect bacteria, viruses and cancers based on antigen-antibody reactions.
Ion channel screening (patch clamp)
Microfluidics
Microphysiometry
Organ-on-a-chip
Real-time PCR: detection of bacteria, viruses and cancers.
Testing the safety and efficacy of new drugs, as with lung on a chip
Total analysis system
References
Further reading
Books
Geschke, Klank & Telleman, eds.: Microsystem Engineering of Lab-on-a-chip Devices, 1st ed., John Wiley & Sons.
Gareth Jenkins & Colin D. Mansfield (eds.) (2012): Methods in Molecular Biology – Microfluidic Diagnostics, Humana Press.
Integrated circuits
Laboratory types
Nanotechnology
Microfluidics
Optofluidics | Lab-on-a-chip | [
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 2,745 | [
"Microfluidics",
"Computer engineering",
"Microtechnology",
"Optofluidics",
"Laboratory types",
"Materials science",
"Nanotechnology",
"Integrated circuits"
] |
1,041,142 | https://en.wikipedia.org/wiki/Row%20%28database%29 | In a relational database, a row or "record" or "tuple", represents a single, implicitly structured data item in a table. A database table can be thought of as consisting of rows and columns. Each row in a table represents a set of related data, and every row in the table has the same structure.
For example, in a table that represents companies, each row might represent a single company. Columns might represent things like company name, address, etc. In a table that represents the association of employees with departments, each row would associate one employee with one department.
The implicit structure of a row, and the meaning of the data values in a row, requires that the row be understood as providing a succession of data values, one in each column of the table. The row is then interpreted as a relvar composed of a set of tuples, with each tuple consisting of the two items: the name of the relevant column and the value this row provides for that column.
Each column expects a data value of a particular type.
For example, one column might require a unique identifier, another might require text representing a person's name, another might require an integer representing hourly pay in dollars.
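The correspondence between rows, columns, and data types can be illustrated with a small example. The following sketch uses Python's standard sqlite3 module; the table and column names (and the sample company) are purely illustrative and not taken from any particular system.

<syntaxhighlight lang="python">
import sqlite3

# In-memory database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE company ("
    "  id      INTEGER PRIMARY KEY,"   # unique identifier
    "  name    TEXT NOT NULL,"         # company name
    "  address TEXT"                   # address
    ")"
)

# One row represents one company; the row supplies one value per column,
# and each value must match the column's declared type.
conn.execute(
    "INSERT INTO company (id, name, address) VALUES (?, ?, ?)",
    (1, "Example Corp", "123 Main Street"),
)

# Reading the row back returns its values in column order.
row = conn.execute("SELECT id, name, address FROM company").fetchone()
print(row)  # (1, 'Example Corp', '123 Main Street')
</syntaxhighlight>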
References
Data modeling
Relational model
Database management systems | Row (database) | [
"Engineering"
] | 259 | [
"Data modeling",
"Data engineering"
] |
1,041,167 | https://en.wikipedia.org/wiki/Column%20%28database%29 | In a relational database, a column is a set of data values of a particular type, one value for each row of a table. A column may contain text values, numbers, or even pointers to files in the operating system. Columns typically contain simple types, though some relational database systems allow columns to contain more complex data types, such as whole documents, images, or even video clips. A column can also be called an attribute.
Each row would provide a data value for each column and would then be understood as a single structured data value. For example, a database that represents company contact information might have the following columns: ID, Company Name, Address Line 1, Address Line 2, City, and Postal Code. More formally, a row is a tuple containing a specific value for each column, for example: (1234, 'Big Company Inc.', '123 East Example Street', '456 West Example Drive', 'Big City', 98765).
Field
The word 'field' is normally used interchangeably with 'column'. However, database perfectionists tend to favor using 'field' to signify a specific cell of a given row, to enable precision when communicating with other developers. On this usage, columns (really column names) are referred to as field names (common to every row/record in the table), while a field refers to a single storage location in a specific record (like a cell) that stores one value (the field value). The terms record and field come from the more practical side of database usage and traditional DBMS systems (echoing business terms used in manual databases, e.g. filing-cabinet storage with a record for each customer). The terms row and column come from the more theoretical study of relational theory.
Another distinction between the terms 'column' and 'field' is that the term 'column' does not apply to certain databases, for instance key-value stores, that do not conform to the traditional relational database structure.
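As a rough illustration of the distinction drawn above, the following Python sketch represents the example contact table as a list of rows plus a list of column names (the second row is hypothetical, added only to make the column non-trivial), then extracts one column and one field.

<syntaxhighlight lang="python">
# Column names and the first sample row follow the example above; the second row is hypothetical.
columns = ["ID", "Company Name", "Address Line 1", "Address Line 2", "City", "Postal Code"]

rows = [
    (1234, "Big Company Inc.", "123 East Example Street", "456 West Example Drive", "Big City", 98765),
    (1235, "Small Company Ltd.", "1 Example Lane", "", "Little City", 12345),
]

# A column is the set of values of one attribute, one value per row.
city_index = columns.index("City")
city_column = [row[city_index] for row in rows]
print(city_column)           # ['Big City', 'Little City']

# A field, in the stricter sense, is a single cell: the value one row holds for one column.
first_row_city_field = rows[0][city_index]
print(first_row_city_field)  # 'Big City'
</syntaxhighlight>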
See also
Column-oriented DBMS, optimization for column-centric queries
Column (data store), a similar object used in distributed data stores
Row (database)
SQL
Query language
Column groups and row groups
References
Data modeling
Database management systems | Column (database) | [
"Engineering"
] | 469 | [
"Data modeling",
"Data engineering"
] |
1,041,204 | https://en.wikipedia.org/wiki/Granular%20computing | Granular computing is an emerging computing paradigm of information processing that concerns the processing of complex information entities called "information granules", which arise in the process of data abstraction and derivation of knowledge from information or data. Generally speaking, information granules are collections of entities that usually originate at the numeric level and are arranged together due to their similarity, functional or physical adjacency, indistinguishability, coherency, or the like.
At present, granular computing is more a theoretical perspective than a coherent set of methods or principles. As a theoretical perspective, it encourages an approach to data that recognizes and exploits the knowledge present in data at various levels of resolution or scales. In this sense, it encompasses all methods which provide flexibility and adaptability in the resolution at which knowledge or information is extracted and represented.
Types of granulation
As mentioned above, granular computing is not an algorithm or process; there is no particular method that is called "granular computing". It is rather an approach to looking at data that recognizes how different and interesting regularities in the data can appear at different levels of granularity, much as different features become salient in satellite images of greater or lesser resolution. On a low-resolution satellite image, for example, one might notice interesting cloud patterns representing cyclones or other large-scale weather phenomena, while in a higher-resolution image, one misses these large-scale atmospheric phenomena but instead notices smaller-scale phenomena, such as the interesting pattern that is the streets of Manhattan. The same is generally true of all data: At different resolutions or granularities, different features and relationships emerge. The aim of granular computing is to try to take advantage of this fact in designing more effective machine-learning and reasoning systems.
There are several types of granularity that are often encountered in data mining and machine learning, and we review them below:
Value granulation (discretization/quantization)
One type of granulation is the quantization of variables. It is very common that in data mining or machine-learning applications the resolution of variables needs to be decreased in order to extract meaningful regularities. An example of this would be a variable such as "outside temperature" (), which in a given application might be recorded to several decimal places of precision (depending on the sensing apparatus). However, for purposes of extracting relationships between "outside temperature" and, say, "number of health-club applications" (), it will generally be advantageous to quantize "outside temperature" into a smaller number of intervals.
Motivations
There are several interrelated reasons for granulating variables in this fashion:
Based on prior domain knowledge, there is no expectation that minute variations in temperature (e.g., the difference between ) could have an influence on behaviors driving the number of health-club applications. For this reason, any "regularity" which our learning algorithms might detect at this level of resolution would have to be spurious, as an artifact of overfitting. By coarsening the temperature variable into intervals whose differences we do anticipate (based on prior domain knowledge) might influence the number of health-club applications, we eliminate the possibility of detecting these spurious patterns. Thus, in this case, reducing resolution is a method of controlling overfitting.
By reducing the number of intervals in the temperature variable (i.e., increasing its grain size), we increase the amount of sample data indexed by each interval designation. Thus, by coarsening the variable, we increase sample sizes and achieve better statistical estimation. In this sense, increasing granularity provides an antidote to the so-called curse of dimensionality, which relates to the exponential decrease in statistical power with increase in number of dimensions or variable cardinality.
Independent of prior domain knowledge, it is often the case that meaningful regularities (i.e., which can be detected by a given learning methodology, representational language, etc.) may exist at one level of resolution and not at another.
For example, a simple learner or pattern recognition system may seek to extract regularities satisfying a conditional probability threshold such as In the special case where this recognition system is essentially detecting logical implication of the form or, in words, "if then The system's ability to recognize such implications (or, in general, conditional probabilities exceeding threshold) is partially contingent on the resolution with which the system analyzes the variables.
As an example of this last point, consider the feature space shown to the right. The variables may each be regarded at two different resolutions. Variable may be regarded at a high (quaternary) resolution wherein it takes on the four values or at a lower (binary) resolution wherein it takes on the two values Similarly, variable may be regarded at a high (quaternary) resolution or at a lower (binary) resolution, where it takes on the values or respectively. At the high resolution, there are no detectable implications of the form since every is associated with more than one and thus, for all However, at the low (binary) variable resolution, two bilateral implications become detectable: and , since every occurs iff and occurs iff Thus, a pattern recognition system scanning for implications of this kind would find them at the binary variable resolution, but would fail to find them at the higher quaternary variable resolution.
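As a concrete sketch of value granulation, the snippet below (plain Python with synthetic, hypothetical data, not a method prescribed by any of the cited papers) quantizes a finely measured temperature variable into a handful of labelled intervals, so that the per-interval behaviour of a second variable can be tabulated from many samples per bin.

<syntaxhighlight lang="python">
import random

# Synthetic, hypothetical data: daily outside temperature (two decimal places)
# and the number of health-club applications received that day.
random.seed(0)
days = []
for _ in range(365):
    temp = round(random.uniform(-10.0, 35.0), 2)
    # Assumed toy relationship: applications peak in mild weather, drop in extremes.
    applications = max(0, int(20 - abs(temp - 15) + random.gauss(0, 2)))
    days.append((temp, applications))

def granulate(temp):
    """Map the high-resolution temperature onto a small set of intervals."""
    if temp < 0:
        return "below 0"
    elif temp < 10:
        return "0-10"
    elif temp < 20:
        return "10-20"
    elif temp < 30:
        return "20-30"
    return "30+"

# At the coarse resolution each interval indexes many samples,
# so per-interval averages are statistically meaningful.
totals = {}
for temp, apps in days:
    totals.setdefault(granulate(temp), []).append(apps)

for label, values in sorted(totals.items()):
    print(label, round(sum(values) / len(values), 1))
</syntaxhighlight>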
Issues and methods
It is not feasible to exhaustively test all possible discretization resolutions on all variables in order to see which combination of resolutions yields interesting or significant results. Instead, the feature space must be preprocessed (often by an entropy analysis of some kind) so that some guidance can be given as to how the discretization process should proceed. Moreover, one cannot generally achieve good results by naively analyzing and discretizing each variable independently, since this may obliterate the very interactions that we had hoped to discover.
A number of papers address the problem of variable discretization in general, and multiple-variable discretization in particular.
Variable granulation (clustering/aggregation/transformation)
Variable granulation is a term that could describe a variety of techniques, most of which are aimed at reducing dimensionality, redundancy, and storage requirements. We briefly describe some of the ideas here, and present pointers to the literature.
Variable transformation
A number of classical methods, such as principal component analysis, multidimensional scaling, factor analysis, and structural equation modeling, and their relatives, fall under the genus of "variable transformation." Also in this category are more modern areas of study such as dimensionality reduction, projection pursuit, and independent component analysis. The common goal of these methods in general is to find a representation of the data in terms of new variables, which are a linear or nonlinear transformation of the original variables, and in which important statistical relationships emerge. The resulting variable sets are almost always smaller than the original variable set, and hence these methods can be loosely said to impose a granulation on the feature space. These dimensionality reduction methods are all reviewed in standard texts.
Variable aggregation
A different class of variable granulation methods derives more from data clustering methodologies than from the linear systems theory informing the above methods. It was noted fairly early that one may consider "clustering" related variables in just the same way that one considers clustering related data. In data clustering, one identifies a group of similar entities (using a "measure of similarity" suitable to the domain), and then in some sense replaces those entities with a prototype of some kind. The prototype may be the simple average of the data in the identified cluster, or some other representative measure. But the key idea is that in subsequent operations, we may be able to use the single prototype for the data cluster (along with perhaps a statistical model describing how exemplars are derived from the prototype) to stand in for the much larger set of exemplars. These prototypes are generally such as to capture most of the information of interest concerning the entities.
Similarly, it is reasonable to ask whether a large set of variables might be aggregated into a smaller set of prototype variables that capture the most salient relationships between the variables. Although variable clustering methods based on linear correlation have been proposed, more powerful methods of variable clustering are based on the mutual information between variables. Watanabe has shown that for any set of variables one can construct a polytomic (i.e., n-ary) tree representing a series of variable agglomerations in which the ultimate "total" correlation among the complete variable set is the sum of the "partial" correlations exhibited by each agglomerating subset. Watanabe suggests that an observer might seek to thus partition a system in such a way as to minimize the interdependence between the parts "... as if they were looking for a natural division or a hidden crack."
One practical approach to building such a tree is to successively choose for agglomeration the two variables (either atomic variables or previously agglomerated variables) which have the highest pairwise mutual information. The product of each agglomeration is a new (constructed) variable that reflects the local joint distribution of the two agglomerating variables, and thus possesses an entropy equal to their joint entropy.
(From a procedural standpoint, this agglomeration step involves replacing two columns in the attribute-value table—representing the two agglomerating variables—with a single column that has a unique value for every unique combination of values in the replaced columns. No information is lost by such an operation; however, if one is exploring the data for inter-variable relationships, it would generally not be desirable to merge redundant variables in this way, since in such a context it is likely to be precisely the redundancy or dependency between variables that is of interest; and once redundant variables are merged, their relationship to one another can no longer be studied.)
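One agglomeration step of this kind can be sketched as follows; this is a simplified illustration (with a small hypothetical attribute–value table) rather than the procedure of any particular paper. It computes the empirical mutual information between each pair of variables and merges the most strongly related pair into a single constructed variable whose values are the joint value combinations.

<syntaxhighlight lang="python">
from collections import Counter
from itertools import combinations
from math import log2

def mutual_information(xs, ys):
    """Empirical mutual information (in bits) between two discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_xy = c / n
        mi += p_xy * log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

# Hypothetical attribute-value table: one list per variable, one entry per object.
table = {
    "A": [0, 0, 1, 1, 0, 1, 1, 0],
    "B": [0, 0, 1, 1, 0, 1, 0, 0],  # closely tracks A
    "C": [2, 0, 1, 2, 0, 1, 2, 0],
}

# Choose the pair of variables with the highest pairwise mutual information.
v1, v2 = max(combinations(table, 2),
             key=lambda pair: mutual_information(table[pair[0]], table[pair[1]]))

# The constructed variable takes one value per unique combination of the merged
# columns, so its entropy equals the joint entropy of the two originals.
merged_name = v1 + "+" + v2
table[merged_name] = list(zip(table.pop(v1), table.pop(v2)))
print(merged_name, table[merged_name])
</syntaxhighlight>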
System granulation (aggregation)
In database systems, aggregations (see e.g. OLAP aggregation and Business intelligence systems) result in transforming original data tables (often called information systems) into the tables with different semantics of rows and columns, wherein the rows correspond to the groups (granules) of original tuples and the columns express aggregated information about original values within each of the groups. Such aggregations are usually based on SQL and its extensions. The resulting granules usually correspond to the groups of original tuples with the same values (or ranges) over some pre-selected original columns.
There are also other approaches wherein the groups are defined based on, e.g., physical adjacency of rows. For example, Infobright implemented a database engine wherein data was partitioned into rough rows, each consisting of 64K physically consecutive (or almost consecutive) rows. Rough rows were automatically labeled with compact information about their values on data columns, often involving multi-column and multi-table relationships. It resulted in a higher layer of granulated information where objects corresponded to rough rows and attributes to various aspects of rough information. Database operations could be efficiently supported within such a new framework, with access to the original data pieces still available.
Concept granulation (component analysis)
The origins of the granular computing ideology are to be found in the rough sets and fuzzy sets literatures. One of the key insights of rough set research—although by no means unique to it—is that, in general, the selection of different sets of features or variables will yield different concept granulations. Here, as in elementary rough set theory, by "concept" we mean a set of entities that are indistinguishable or indiscernible to the observer (i.e., a simple concept), or a set of entities that is composed from such simple concepts (i.e., a complex concept). To put it in other words, by projecting a data set (value-attribute system) onto different sets of variables, we recognize alternative sets of equivalence-class "concepts" in the data, and these different sets of concepts will in general be conducive to the extraction of different relationships and regularities.
Equivalence class granulation
We illustrate with an example. Consider the attribute-value system below:
{| class="wikitable" style="text-align:center; width:30%" border="1"
|+ Sample Information System
! Object !! !! !! !! !!
|-
!
| 1 || 2 || 0 || 1 || 1
|-
!
| 1 || 2 || 0 || 1 || 1
|-
!
| 2 || 0 || 0 || 1 || 0
|-
!
| 0 || 0 || 1 || 2 || 1
|-
!
| 2 || 1 || 0 || 2 || 1
|-
!
| 0 || 0 || 1 || 2 || 2
|-
!
| 2 || 0 || 0 || 1 || 0
|-
!
| 0 || 1 || 2 || 2 || 1
|-
!
| 2 || 1 || 0 || 2 || 2
|-
!
| 2 || 0 || 0 || 1 || 0
|}
When the full set of attributes is considered, we see that we have the following seven equivalence classes or primitive (simple) concepts:
Thus, the two objects within the first equivalence class, cannot be distinguished from one another based on the available attributes, and the three objects within the second equivalence class, cannot be distinguished from one another based on the available attributes. The remaining five objects are each discernible from all other objects. Now, let us imagine a projection of the attribute value system onto attribute alone, which would represent, for example, the view from an observer which is only capable of detecting this single attribute. Then we obtain the following much coarser equivalence class structure.
This is in a certain regard the same structure as before, but at a lower degree of resolution (larger grain size). Just as in the case of value granulation (discretization/quantization), it is possible that relationships (dependencies) may emerge at one level of granularity that are not present at another. As an example of this, we can consider the effect of concept granulation on the measure known as attribute dependency (a simpler relative of the mutual information).
To establish this notion of dependency (see also rough sets), let represent a particular concept granulation, where each is an equivalence class from the concept structure induced by attribute set . For example, if the attribute set consists of attribute alone, as above, then the concept structure will be composed of
The dependency of attribute set on another attribute set , is given by
That is, for each equivalence class in we add up the size of its "lower approximation" (see rough sets) by the attributes in , i.e., More simply, this approximation is the number of objects which on attribute set can be positively identified as belonging to target set Added across all equivalence classes in the numerator above represents the total number of objects which—based on attribute set —can be positively categorized according to the classification induced by attributes . The dependency ratio therefore expresses the proportion (within the entire universe) of such classifiable objects, in a sense capturing the "synchronization" of the two concept structures and The dependency "can be interpreted as a proportion of such objects in the information system for which it suffices to know the values of attributes in to determine the values of attributes in " (Ziarko & Shan 1995).
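A minimal sketch of this dependency calculation is given below, assuming the attribute–value table is stored as a dictionary of per-object attribute assignments. The attribute names P1–P5 are placeholders (the symbols used in the text were not preserved), although the values follow the sample information system above.

<syntaxhighlight lang="python">
from collections import defaultdict

def equivalence_classes(table, attrs):
    """Partition the objects into classes that are indiscernible on the given attributes."""
    classes = defaultdict(set)
    for obj, values in table.items():
        classes[tuple(values[a] for a in attrs)].add(obj)
    return list(classes.values())

def dependency(table, p_attrs, q_attrs):
    """Rough-set dependency: proportion of objects whose Q-class is determined by P."""
    p_classes = equivalence_classes(table, p_attrs)
    positive = 0
    for q_class in equivalence_classes(table, q_attrs):
        # lower approximation of the Q-class by P: the P-classes wholly contained in it
        positive += sum(len(c) for c in p_classes if c <= q_class)
    return positive / len(table)

# Values follow the sample information system above; attribute names are placeholders.
table = {
    "O1":  {"P1": 1, "P2": 2, "P3": 0, "P4": 1, "P5": 1},
    "O2":  {"P1": 1, "P2": 2, "P3": 0, "P4": 1, "P5": 1},
    "O3":  {"P1": 2, "P2": 0, "P3": 0, "P4": 1, "P5": 0},
    "O4":  {"P1": 0, "P2": 0, "P3": 1, "P4": 2, "P5": 1},
    "O5":  {"P1": 2, "P2": 1, "P3": 0, "P4": 2, "P5": 1},
    "O6":  {"P1": 0, "P2": 0, "P3": 1, "P4": 2, "P5": 2},
    "O7":  {"P1": 2, "P2": 0, "P3": 0, "P4": 1, "P5": 0},
    "O8":  {"P1": 0, "P2": 1, "P3": 2, "P4": 2, "P5": 1},
    "O9":  {"P1": 2, "P2": 1, "P3": 0, "P4": 2, "P5": 2},
    "O10": {"P1": 2, "P2": 0, "P3": 0, "P4": 1, "P5": 0},
}

# With this (assumed) split of attributes, six of the ten objects are positively
# classifiable, giving a dependency of 0.6.
print(dependency(table, ["P2", "P3"], ["P4", "P5"]))
</syntaxhighlight>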
Having gotten definitions now out of the way, we can make the simple observation that the choice of concept granularity (i.e., choice of attributes) will influence the detected dependencies among attributes. Consider again the attribute value table from above:
{| class="wikitable" style="text-align:center; width:30%" border="1"
|+ Sample Information System
! Object !! !! !! !! !!
|-
!
| 1 || 2 || 0 || 1 || 1
|-
!
| 1 || 2 || 0 || 1 || 1
|-
!
| 2 || 0 || 0 || 1 || 0
|-
!
| 0 || 0 || 1 || 2 || 1
|-
!
| 2 || 1 || 0 || 2 || 1
|-
!
| 0 || 0 || 1 || 2 || 2
|-
!
| 2 || 0 || 0 || 1 || 0
|-
!
| 0 || 1 || 2 || 2 || 1
|-
!
| 2 || 1 || 0 || 2 || 2
|-
!
| 2 || 0 || 0 || 1 || 0
|}
Consider the dependency of attribute set on attribute set That is, we wish to know what proportion of objects can be correctly classified into classes of based on knowledge of The equivalence classes of and of are shown below.
{| class="wikitable"
|-
!
!
|-
|
|
|}
The objects that can be definitively categorized according to concept structure based on are those in the set and since there are six of these, the dependency of on , This might be considered an interesting dependency in its own right, but perhaps in a particular data mining application only stronger dependencies are desired.
We might then consider the dependency of the smaller attribute set on the attribute set The move from to induces a coarsening of the class structure as will be seen shortly. We wish again to know what proportion of objects can be correctly classified into the (now larger) classes of based on knowledge of The equivalence classes of the new and of are shown below.
{| class="wikitable"
|-
!
!
|-
|
|
|}
Clearly, has a coarser granularity than it did earlier. The objects that can now be definitively categorized according to the concept structure based on constitute the complete universe , and thus the dependency of on , That is, knowledge of membership according to category set is adequate to determine category membership in with complete certainty; In this case we might say that Thus, by coarsening the concept structure, we were able to find a stronger (deterministic) dependency. However, we also note that the classes induced in from the reduction in resolution necessary to obtain this deterministic dependency are now themselves large and few in number; as a result, the dependency we found, while strong, may be less valuable to us than the weaker dependency found earlier under the higher resolution view of
In general it is not possible to test all sets of attributes to see which induced concept structures yield the strongest dependencies, and this search must therefore be guided with some intelligence. Papers which discuss this issue, and others relating to intelligent use of granulation, are those by Y.Y. Yao and Lotfi Zadeh listed in the references below.
Component granulation
Another perspective on concept granulation may be obtained from work on parametric models of categories. In mixture model learning, for example, a set of data is explained as a mixture of distinct Gaussian (or other) distributions. Thus, a large amount of data is "replaced" by a small number of distributions. The choice of the number of these distributions, and their size, can again be viewed as a problem of concept granulation. In general, a better fit to the data is obtained by a larger number of distributions or parameters, but in order to extract meaningful patterns, it is necessary to constrain the number of distributions, thus deliberately coarsening the concept resolution. Finding the "right" concept resolution is a tricky problem for which many methods have been proposed (e.g., AIC, BIC, MDL, etc.), and these are frequently considered under the rubric of "model regularization".
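A common way to choose the "grain size" of such a parametric granulation is an information criterion. The sketch below fits Gaussian mixtures with different numbers of components to synthetic data and keeps the model with the lowest BIC; the use of scikit-learn and the data themselves are illustrative assumptions, not something prescribed by the text.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic 1-D data drawn from three overlapping Gaussians (hypothetical example).
rng = np.random.default_rng(0)
data = np.concatenate([
    rng.normal(-4.0, 1.0, 300),
    rng.normal(0.0, 0.8, 300),
    rng.normal(5.0, 1.5, 300),
]).reshape(-1, 1)

# Coarser granulations (fewer components) fit worse but generalize better;
# BIC penalizes the extra parameters of finer granulations.
best_k, best_bic = None, np.inf
for k in range(1, 8):
    model = GaussianMixture(n_components=k, random_state=0).fit(data)
    bic = model.bic(data)
    print(k, round(bic, 1))
    if bic < best_bic:
        best_k, best_bic = k, bic

print("selected number of components:", best_k)
</syntaxhighlight>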
Different interpretations of granular computing
Granular computing can be conceived as a framework of theories, methodologies, techniques, and tools that make use of information granules in the process of problem solving. In this sense, granular computing is used as an umbrella term to cover topics that have been studied in various fields in isolation. By examining all of these existing studies in light of the unified framework of granular computing and extracting their commonalities, it may be possible to develop a general theory for problem solving.
In a more philosophical sense, granular computing can describe a way of thinking that relies on the human ability to perceive the real world under various levels of granularity (i.e., abstraction) in order to abstract and consider only those things that serve a specific interest and to switch among different granularities. By focusing on different levels of granularity, one can obtain different levels of knowledge, as well as a greater understanding of the inherent knowledge structure. Granular computing is thus essential in human problem solving and hence has a very significant impact on the design and implementation of intelligent systems.
See also
Rough Sets, Discretization
Type-2 Fuzzy Sets and Systems
References
Bargiela, A. and Pedrycz, W. (2003) Granular Computing. An introduction, Kluwer Academic Publishers
Yao, Y.Y. (2004) "A Partition Model of Granular Computing", Lecture Notes in Computer Science (to appear)
Zadeh, L.A. (1997) "Toward a Theory of Fuzzy Information Granulation and its Centrality in Human Reasoning and Fuzzy Logic", Fuzzy Sets and Systems, 90:111-127
Theoretical computer science
Machine learning | Granular computing | [
"Mathematics",
"Engineering"
] | 4,523 | [
"Theoretical computer science",
"Applied mathematics",
"Artificial intelligence engineering",
"Machine learning"
] |
1,041,214 | https://en.wikipedia.org/wiki/Proportional%20counter | The proportional counter is a type of gaseous ionization detector device used to measure particles of ionizing radiation. The key feature is its ability to measure the energy of incident radiation, by producing a detector output pulse that is proportional to the radiation energy absorbed by the detector due to an ionizing event; hence the detector's name. It is widely used where energy levels of incident radiation must be known, such as in the discrimination between alpha and beta particles, or accurate measurement of X-ray radiation dose.
A proportional counter uses a combination of the mechanisms of a Geiger–Müller tube and an ionization chamber, and operates in an intermediate voltage region between these. The accompanying plot shows the proportional counter operating voltage region for a co-axial cylinder arrangement.
Operation
In a proportional counter the fill gas of the chamber is an inert gas which is ionized by incident radiation, and a quench gas to ensure each pulse discharge terminates; a common mixture is 90% argon, 10% methane, known as P-10. An ionizing particle entering the gas collides with an atom of the inert gas and ionizes it to produce an electron and a positively charged ion, commonly known as an "ion pair". As the ionizing particle travels through the chamber it leaves a trail of ion pairs along its trajectory, the number of which is proportional to the energy of the particle if it is fully stopped within the gas. Typically a 1 MeV stopped particle will create about 30,000 ion pairs.
The chamber geometry and the applied voltage is such that in most of the chamber the electric field strength is low and the chamber acts as an ion chamber. However, the field is strong enough to prevent re-combination of the ion pairs and causes positive ions to drift towards the cathode and electrons towards the anode. This is the "ion drift" region. In the immediate vicinity of the anode wire, the field strength becomes large enough to produce Townsend avalanches. This avalanche region occurs only fractions of a millimeter from the anode wire, which itself is of a very small diameter. The purpose of this is to use the multiplication effect of the avalanche produced by each ion pair. This is the "avalanche" region.
A key design goal is that each original ionizing event due to incident radiation produces only one avalanche. This is to ensure proportionality between the number of original events and the total ion current. For this reason, the applied voltage, the geometry of the chamber and the diameter of the anode wire are critical to ensure proportional operation. If avalanches start to self-multiply due to UV photons as they do in a Geiger–Muller tube, then the counter enters a region of "limited proportionality" until at a higher applied voltage the Geiger discharge mechanism occurs with complete ionization of the gas enveloping the anode wire and consequent loss of particle energy information.
Therefore, it can be said that the proportional counter has the key design feature of two distinct ionization regions:
Ion drift region: in the outer volume of the chamber – the creation of a number of ion pairs proportional to the incident radiation energy.
Avalanche region: in the immediate vicinity of the anode – charge amplification of ion pair currents, while maintaining localized avalanches.
The process of charge amplification greatly improves the signal-to-noise ratio of the detector and reduces the subsequent electronic amplification required.
In summary, the proportional counter is an ingenious combination of two ionization mechanisms in one chamber which finds wide practical use.
Gas mixtures
Usually the detector is filled with a noble gas; noble gases have the lowest ionization voltages and do not degrade chemically. Typically neon, argon, krypton or xenon is used. Low-energy X-rays are best detected with lighter nuclei (neon), which are less sensitive to higher-energy photons. Krypton or xenon is chosen for higher-energy X-rays or when higher efficiency is desired.
Often the main gas is mixed with a quenching additive. A popular mixture is P10 (10% methane, 90% argon).
Typical working pressure is 1 atmosphere (about 100 kPa).
Signal amplification by multiplication
In the case of a cylindrical proportional counter the multiplication, M, of the signal caused by an avalanche can be modeled as follows:
Where a is the anode wire radius, b is the radius of the counter, p is the pressure of the gas, and V is the operating voltage. K is a property of the gas used and relates the energy needed to cause an avalanche to the pressure of the gas. The final term gives the change in voltage caused by an avalanche.
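Because the equation itself has not survived in the text above, the following sketch assumes the standard Diethorn parameterization of the gas gain, which is written in exactly the quantities listed (anode radius a, cathode radius b, pressure p, operating voltage V, and the gas-dependent constants K and ΔV); the numerical values are only order-of-magnitude illustrations for a P-10 filled counter, not measured data.

<syntaxhighlight lang="python">
from math import exp, log

def gas_gain(V, a, b, p, K, delta_V):
    """Diethorn-style estimate of the multiplication factor M for a cylindrical counter.

    V       : operating voltage (volts)
    a, b    : anode-wire and cathode radii (same length unit)
    p       : gas pressure (atm)
    K       : gas constant relating avalanche onset to pressure (V per length unit per atm)
    delta_V : potential an electron falls through between ionizing collisions (volts)
    """
    ln_ba = log(b / a)
    ln_M = (V / ln_ba) * (log(2) / delta_V) * log(V / (K * p * a * ln_ba))
    return exp(ln_M)

# Illustrative values only: 25 micrometre wire, 1 cm cathode radius, 1 atm of P-10.
print(f"M = {gas_gain(V=2000, a=0.0025, b=1.0, p=1.0, K=4.8e4, delta_V=23.6):.3g}")
</syntaxhighlight>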
Applications
Spectroscopy
The proportionality between the energy of the charged particle traveling through the chamber and the total charge created makes proportional counters useful for charged particle spectroscopy. By measuring the total charge (time integral of the electric current) between the electrodes, we can determine the particle's kinetic energy because the number of ion pairs created by the incident ionizing charged particle is proportional to its energy. The energy resolution of a proportional counter, however, is limited because both the initial ionization event and the subsequent 'multiplication' event are subject to statistical fluctuations characterized by a standard deviation equal to the square root of the average number formed. However, in practice these are not as great as would be predicted due to the effect of the empirical Fano factor which reduces these fluctuations. In the case of argon, this is experimentally about 0.2.
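A rough numerical illustration of this statistical limit can be written as follows; it ignores the additional variance contributed by the avalanche multiplication and assumes roughly 33 eV per ion pair (consistent with the ~30,000 pairs per MeV quoted above), so it should be read as an order-of-magnitude estimate only.

<syntaxhighlight lang="python">
from math import sqrt

def ionisation_resolution(energy_eV, w_eV=33.0, fano=0.2):
    """Approximate FWHM energy resolution set by ion-pair statistics alone.

    energy_eV : energy deposited by the fully stopped particle (eV)
    w_eV      : assumed mean energy expended per ion pair
    fano      : Fano factor (experimentally about 0.2 for argon)
    """
    n_pairs = energy_eV / w_eV               # mean number of ion pairs
    sigma = sqrt(fano * n_pairs)             # reduced (sub-Poissonian) fluctuation
    fwhm_fraction = 2.355 * sigma / n_pairs  # Gaussian FWHM relative to the mean
    return n_pairs, fwhm_fraction

pairs, res = ionisation_resolution(1e6)      # a fully stopped 1 MeV particle
print(f"~{pairs:.0f} ion pairs, statistical resolution limit ~{100 * res:.2f}% FWHM")
</syntaxhighlight>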
Photon detection
Proportional counters are also useful for detection of high energy photons, such as gamma-rays, provided these can penetrate the entrance window. They are also used for the detection of X-rays to below 1 keV energy levels, using thin-walled tubes operating at or around atmospheric pressure.
Radioactive contamination detection
Proportional counters in the form of large area planar detectors are used extensively to check for radioactive contamination on personnel, flat surfaces, tools, and items of clothing. This is normally in the form of installed instrumentation because of the difficulties of providing portable gas supplies for hand-held devices. They are constructed with a large area detection window made from a material such as metalized Mylar, which forms one wall of the detection chamber and is part of the cathode. The anode wire is routed in a convoluted manner within the detector chamber to optimize the detection efficiency. They are normally used to detect alpha and beta particles, and can enable discrimination between them by providing a pulse output proportional to the energy deposited in the chamber by each particle. They have a high efficiency for beta, but lower for alpha. The efficiency reduction for alpha is due to the attenuation effect of the entry window, though distance from the surface being checked also has a significant effect, and ideally a source of alpha radiation should be less than 10 mm from the detector due to attenuation in air.
These chambers operate at very slight positive pressure above ambient atmospheric pressure. The gas can be sealed in the chamber, or can be changed continuously, in which case they are known as "gas-flow proportional counters". Gas flow types have the advantage that they will tolerate small holes in the mylar screen which can occur in use, but they do require a continuous gas supply.
Guidance on application use
In the United Kingdom the Health and Safety Executive (HSE) has issued a user guidance note on selecting the correct radiation measurement instrument for the application concerned. This covers all radiation instrument technologies and is a useful comparative guide to the use of proportional counters.
See also
Gaseous ionization detectors
Micropattern gaseous detector
Multiwire proportional chamber
References
Glenn F Knoll. Radiation Detection and Measurement, third edition 2000. John Wiley and sons, .
E. Mathieson, Induced charge distributions in proportional detectors, https://web.archive.org/web/20081011022244/http://www.inst.bnl.gov/programs/gasnobledet/publications/Mathieson's_Book.pdf
External links
Patents
, S. Fine, "Proportional counter"
, E. W. Molloy, "Air proportional counter"
Particle detectors
Ionising radiation detectors
Counting instruments
Radiation protection
"Mathematics",
"Technology",
"Engineering"
] | 1,691 | [
"Radioactive contamination",
"Counting instruments",
"Particle detectors",
"Measuring instruments",
"Ionising radiation detectors",
"Numeral systems"
] |
1,041,230 | https://en.wikipedia.org/wiki/Beeline%20%28brand%29 | Beeline (), formerly Bee Line GSM () is a telecommunications brand by company PJSC VimpelCom, founded in Russia.
PJSC VimpelCom is Russia's third-largest wireless and second-largest telecommunications operator. Its headquarters is located in Moscow. Since 2009, PJSC VimpelCom has been a subsidiary of VimpelCom Ltd., which became Veon in 2017 and is based in Amsterdam. VimpelCom's main competitors in Russia are Mobile TeleSystems, MegaFon and T2 (telecommunications company).
The commercial service was launched under the Beeline brand, a brand developed by Fabela in late 1993 to differentiate the company as a youthful and fun company, rather than a technical company. The name comes from the English term "beeline", meaning the most direct way between two points.
VimpelCom relaunched Beeline with the current characteristic black-and-yellow striped circle in 2005 with a campaign to associate the brand with the principles of brightness, friendliness, effectiveness, simplicity, and positive emotions; with a new slogan "Живи на яркой стороне" (Live on the vibrant side). The rebranding campaign was hugely successful and the principles associated with the brand "captured hearts and minds", in the words of the company.
History in Russia
OJSC VimpelCom was founded in 1992 and initially operated an AMPS/D-AMPS network in the Moscow area. In 1996 it became the first Russian company listed on the New York Stock Exchange.
In November 2005 OJSC VimpelCom stepped further with foreign acquisitions by acquiring 100% of Ukrainian RadioSystems, a marginal Ukrainian GSM operator operating under the Wellcom and Mobi brands. The deal has been surrounded by a controversy involving two major shareholders of VimpelCom: the Russian Alfa Group and Telenor, the incumbent Norwegian telecommunications company.
The company's current (as of July 2008) license portfolio covers a territory where 97% of Russia's population resides, as well as 100% of the territory of Kazakhstan, Ukraine, Uzbekistan, Tajikistan, Georgia, and Armenia. VimpelCom also has a 49.9% stake in Euroset, the largest mobile retailer in Russia and the Commonwealth of Independent States (CIS). In May 2010 VimpelCom merged with Kyivstar to form VimpelCom Ltd., the largest telecom operator group in the CIS. Alexander Izosimov, CEO of OJSC VimpelCom, was appointed president.
On October 9, 2023, VEON sold Beeline and completed its exit from Russia.
In August 2024, Beeline abandoned the 3G frequency range in Moscow, becoming the first Russian mobile operator to complete a project reallocating its own frequency bands from 3G to 4G. On December 13, 2024, Beeline abandoned the 3G frequency range in St. Petersburg and the Leningrad Region.
Outside Russia
Armenia
On 16 November 2006, PJSC VimpelCom acquired the 90% share in Armentel CJSC held by the Hellenic Telecommunications Organization SA (OTE) for €341.9 million.
Australia
In November 2018 it was observed that Beeline owns a range of 1000 numbers with the +61 country code, from +61497906000 to +61497906999. Numbers in this range have been used for technical support scams, with callers posing as a Windows help desk in cold calls to Australian and New Zealand numbers.
Georgia
Beeline's Georgian operation was bought by a Georgian businessman and is no longer owned by VimpelCom or Veon; it was later rebranded as Cellfie. The first mobile call on the network of Veon Georgia (operating under the Beeline brand) was made on March 15, 2007. The company has been actively developing since then and today provides 1.3 million customers with 2G GSM 900/1800 MHz, 3G 2100 MHz, 4G 800/1800 MHz and 5G wireless services under the name "Cellfie".
Kazakhstan
In 2004, PJSC VimpelCom, in its first move outside Russia's territory, acquired Kazakhstani cellular operator KaR-Tel (brand names K-Mobile and Excess).
Kyrgyzstan
Beeline is active in Kyrgyzstan and is one of the most popular cellular carriers in the country. The first call on a GSM network in Kyrgyzstan was made on August 1, 1998. In 2001 the trade name MobiCard was established, which became the brand Mobi in 2007. In 2009 the company started to provide services under the international Beeline brand. As of 2022, the company provides services in the GSM-900/1800, WCDMA/UMTS 2100/900 (3G) and LTE 800/1800/2100/2600 (4G) standards.
Laos
In 2011 Beeline entered Laos as VimpelCom Lao, replacing the former Tigo; a 22% shareholding remains with the Lao government. 3G HSPA+ services began in January 2012. Numbers on Beeline Laos take the form 020-7xxx-xxxx. A 4G LTE service was being trialled ahead of launch.
Ukraine
Beeline Ukraine (known as Ukrainian RadioSystems (URS) before February 2007) was a mobile operator in Ukraine with 2.22 million GSM subscribers (February 2007). The company operated under the Beeline brand. In 2010 Beeline merged with Kyivstar, and all Beeline Ukraine subscribers became Kyivstar subscribers.
The company was founded in 1995, and Motorola acquired 49% of it in 1996. URS obtained a GSM-900 license in 1997, but Motorola backed out of the venture the same year due to alleged government favoritism toward another mobile operator. The Korean conglomerate Daewoo then took ownership, did little to grow the business, and sold the company to a Ukrainian financial group in 2003. In November 2005, 100% of the company's ownership was acquired by the Russian VimpelCom for $230 million. The deal was surrounded by a controversy involving two major shareholders of VimpelCom: the Russian Alfa Group and Telenor, a Norwegian telecom conglomerate.
Following the acquisition by VimpelCom, all of the company's services were rebranded as Beeline, in line with VimpelCom's major mobile assets in Russia. In 2010 Beeline Ukraine (URS) was merged with Kyivstar, and the company now operates only under the Kyivstar brand.
Uzbekistan
Beeline is active in Uzbekistan and is one of the most popular cellular carriers in the country.
Vietnam
In July 2009 VimpelCom cooperated with a Vietnamese telecommunications company, GTel Telecommunications, to open a new mobile phone network in Vietnam called Beeline Vietnam. However, in 2012, after three years of business losses, Beeline withdrew from the joint venture and the Vietnamese market. GTel Mobile continues to operate the remaining network in Vietnam under a new brand, Gmobile.
References
External links
Investor relations web site
Corporate web site
Russian brands
Telecommunications companies of Russia
Internet service providers of Russia
Streaming television
Mobile phone companies of Russia
Companies based in Moscow
Telecommunications companies established in 1993
VEON
Telecommunications companies established in 1992
Telenor
Russian companies established in 1992
Russian companies established in 1993
Mobile phone companies of Vietnam | Beeline (brand) | [
"Technology"
] | 1,526 | [
"Multimedia",
"Streaming television"
] |
1,041,245 | https://en.wikipedia.org/wiki/Wire%20chamber | A wire chamber or multi-wire proportional chamber is a type of proportional counter that detects charged particles and photons and can give positional information on their trajectory, by tracking the trails of gaseous ionization.
The technique was an improvement over the bubble chamber particle detection method, which used photographic techniques, as it allowed high speed electronics to track the particle path.
Description
The multi-wire chamber uses an array of wires held at a positive DC voltage (anodes), which run through a chamber with conductive walls held at a lower potential (the cathode). The chamber is filled with gas, such as an argon/methane mix, so that any ionizing particle that passes through the tube will ionize surrounding gaseous atoms and produce ion pairs, consisting of positive ions and electrons. These are accelerated by the electric field across the chamber, preventing recombination; the electrons are accelerated to the anode, and the positive ions to the cathode. At the anode a phenomenon known as a Townsend avalanche occurs. This results in a measurable current flow for each original ionising event which is proportional to the ionisation energy deposited by the detected particle. By separately measuring the current pulses from each wire, the particle trajectory can be found.
Adaptations of this basic design are the thin gap, resistive plate and drift chambers. The drift chamber can also be subdivided into ranges of specific use in the chamber designs known as time projection, microstrip gas, and those types of detectors that use silicon.
Development
In 1968, Georges Charpak, while at the European Organization for Nuclear Research (CERN), invented and developed the multi-wire proportional chamber (MWPC). This invention resulted in him winning the Nobel Prize for Physics in 1992. The chamber improved on the earlier bubble chamber's detection rate of only one or two particles every second, allowing up to 1,000 particle detections every second. The MWPC produced electronic signals from particle detection, allowing scientists to examine data via computers. The multi-wire chamber is a development of the spark chamber.
Fill gases
In a typical experiment, the chamber contains a mixture of these gases:
argon (about )
isobutane (just under )
freon (0.5%)
The chamber could also be filled with:
liquid xenon;
liquid tetramethylsilane; or
tetrakis(dimethylamino)ethylene (TMAE) vapour.
Use
For high-energy physics experiments, it is used to observe a particle's path. For a long time, bubble chambers were used for this purpose, but with the improvement of electronics, it became desirable to have a detector with fast electronic read-out. (In bubble chambers, photographic exposures were made and the resulting printed photographs were then examined.) A wire chamber is a chamber with many parallel wires, arranged as a grid and put on high voltage, with the metal casing being on ground potential. As in the Geiger counter, a particle leaves a trace of ions and electrons, which drift toward the case or the nearest wire, respectively. By marking off the wires which had a pulse of current, one can see the particle's path.
The chamber has a very good relative time resolution, good positional accuracy, and self-triggered operation (Ferbel 1977).
The development of the chamber enabled scientists to study the trajectories of particles with much-improved precision, and also for the first time to observe and study the rarer interactions that occur through particle interaction.
Drift chambers
If one also precisely measures the timing of the current pulses of the wires and takes into account that the ions need some time to drift to the nearest wire, one can infer the distance at which the particle passed the wire. This greatly increases the accuracy of the path reconstruction and is known as a drift chamber.
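A much simplified sketch of this idea, assuming a known constant drift velocity and ignoring the left–right ambiguity and diffusion, converts each measured drift time into a distance from the fired wire and then fits a straight track through the corrected points; all numbers are hypothetical.

<syntaxhighlight lang="python">
# Hypothetical hits: (z of wire plane in cm, y of fired wire in cm, drift time in ns).
DRIFT_VELOCITY = 0.005  # cm/ns, a typical order of magnitude for argon-based mixtures

hits = [
    (0.0, 1.20, 40.0),
    (4.0, 1.55, 20.0),
    (8.0, 2.10, 60.0),
    (12.0, 2.45, 10.0),
]

# Convert each drift time into a distance from the wire. Here the track is simply
# assumed to have passed on the +y side of every wire; a real reconstruction must
# resolve the left-right ambiguity, e.g. by trying both signs and keeping the best fit.
points = [(z, y + DRIFT_VELOCITY * t) for z, y, t in hits]

# Least-squares straight-line fit y = m*z + c through the corrected points.
n = len(points)
sz = sum(z for z, _ in points)
sy = sum(y for _, y in points)
szz = sum(z * z for z, _ in points)
szy = sum(z * y for z, y in points)
m = (n * szy - sz * sy) / (n * szz - sz * sz)
c = (sy - m * sz) / n
print(f"reconstructed track: y = {m:.4f} * z + {c:.4f}")
</syntaxhighlight>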
A drift chamber functions by balancing the energy particles lose through collisions with gas molecules against the energy they gain from the strong electric fields used to accelerate them. The design is similar to the multi-wire proportional chamber but with a greater distance between the central-layer wires. Charged particles are detected within the chamber through the ionization of gas molecules caused by the passage of the charged particle.
The Fermilab detector CDF II contains a drift chamber called the Central Outer Tracker. The chamber contains argon and ethane gas, and wires separated by 3.56-millimetre gaps.
If two drift chambers are used with the wires of one orthogonal to the wires of the other, both orthogonal to the beam direction, a more precise detection of the position is obtained. If an additional simple detector (like the one used in a veto counter) is used to detect, with poor or null positional resolution, the particle at a fixed distance before or after the wires, a tri-dimensional reconstruction can be made and the speed of the particle deduced from the difference in time of the passage of the particle in the different parts of the detector. This setup gives us a detector called a time projection chamber (TPC).
For measuring the velocity of the electrons in a gas (drift velocity) there are special drift chambers, velocity drift chambers, which measure the drift time for a known location of ionisation.
See also
Bubble chamber
Gaseous ionization detector
Micropattern gaseous detector
Particle detector
Wilson chamber
References
External links
Heidelberg lecture on research ionisation chambers
Astroparticle physics
CERN
Experimental particle physics
Ionising radiation detectors
Laboratory equipment
Nuclear physics
Particle detectors
French inventions | Wire chamber | [
"Physics",
"Technology",
"Engineering"
] | 1,130 | [
"Nuclear physics",
"Radioactive contamination",
"Astroparticle physics",
"Astrophysics",
"Measuring instruments",
"Particle detectors",
"Ionising radiation detectors",
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
1,041,286 | https://en.wikipedia.org/wiki/Wakame | Wakame (Undaria pinnatifida) is a species of kelp native to cold, temperate coasts of the northwest Pacific Ocean. As an edible seaweed, it has a subtly sweet, but distinctive and strong flavour and satiny texture. It is most often served in soups and salads.
Wakame has long been collected for food in East Asia, and sea farmers in Japan have cultivated wakame since the eighth century (Nara period).
Although native to cold temperate coastal areas of Japan, Korea, China, and Russia, it has established itself in temperate regions around the world, including New Zealand, the United States, Belgium, France, Great Britain, Spain, Italy, Argentina, Australia and Mexico. The Invasive Species Specialist Group has listed the species on its list of the 100 worst invasive species globally.
Wakame, as with all other kelps and brown algae, is plant-like in appearance, but is unrelated to true plants, being, instead, a photosynthetic, multicellular stramenopile protist of the SAR supergroup.
Names
The primary common name is derived from the Japanese name (, , , ).
In English, it can also be called sea mustard.
In Chinese, it is called (裙带菜) or (海帶芽)
In French, it is called or ('sea fern').
In Korean, it is called (미역).
Etymology
In Old Japanese, stood for edible seaweeds in general as opposed to standing for algae. In kanji, such as , and were applied to transcribe the word. Among seaweeds, wakame was likely most often eaten, therefore especially meant wakame. It expanded later to other seaweeds like kajime, hirome (kombu), arame, etc. Wakame is derived from + (, lit. 'young seaweed'). If this is a eulogistic prefix, the same as the of tamagushi, wakame likely stood for seaweeds widely in ancient ages. In the Man'yōshū, in addition to and (both are read as ), (, soft wakame) can be seen. Besides, (, lit. 'beautiful algae'), which often appeared in the , may be wakame depending on poems.
History in the West
The earliest appearance in Western documents is probably in Nippo Jisho (1603), as Vacame.
In 1867 the word wakame appeared in an English-language publication, A Japanese and English Dictionary, by James C. Hepburn.
Starting in the 1960s, the word wakame started to be used widely in the United States, and the product (imported in dried form from Japan) became widely available at natural food stores and Asian-American grocery stores, due to the influence of the macrobiotic movement, and in the 1970s with the growing number of Japanese restaurants and sushi bars.
Aquaculture
Japanese and Korean sea-farmers have grown wakame for centuries, and are still both the leading producers and consumers. Wakame has also been cultivated in France since 1983, in sea fields established near the shores of Brittany.
Wild-grown wakame is harvested in Tasmania, Australia, and sold in restaurants in Sydney; it is also sustainably hand-harvested from the waters of Foveaux Strait in Southland, New Zealand, then freeze-dried for retail and for use in a range of products.
Cuisine
Wakame fronds are green and have a subtly sweet flavour and satiny texture. The leaves should be cut into small pieces as they will expand during cooking.
In Japan and Europe, wakame is distributed either dried or salted, and used in soups (particularly miso soup), and salads (tofu salad), or often simply as a side dish to tofu and a salad vegetable like cucumber. These dishes are typically dressed with soya sauce and vinegar, possibly rice vinegar.
Goma wakame, also known as seaweed salad, is a popular side dish at American and European sushi restaurants. Literally translated, it means "sesame seaweed", as sesame seeds are usually included in the recipe.
In Korea, wakame is used to make seaweed soup called miyeok-guk in which wakame is stir-fried in sesame oil and boiled with meat broth.
Health effects
A study conducted at Hokkaido University found that a compound in wakame known as fucoxanthin may help burn fatty tissue in mice and rats. Studies in mice have shown that fucoxanthin induces expression of the fat-burning protein UCP1 that accumulates in fat tissue around the internal organs. Expression of UCP1 protein was significantly increased in mice fed fucoxanthin. Wakame is also used in topical beauty treatments. See also Fucoidan.
Wakame is a rich source of eicosapentaenoic acid, an omega-3 fatty acid. At over 400 mg/(100 kcal), or almost 1 mg/kJ, it has one of the higher nutrient-to-energy ratios for this nutrient, and among the very highest for a vegetarian source. Wakame is a low-calorie food; a typical 10–20 g (1–2 tablespoon) serving provides 15–30 mg of omega-3 fatty acids. Wakame also has high levels of sodium, calcium, iodine, thiamine and niacin.
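The two densities quoted for eicosapentaenoic acid are consistent with one another, since (taking 1 kcal = 4.184 kJ):

<math display="block">\frac{400\ \text{mg}}{100\ \text{kcal}} = \frac{400\ \text{mg}}{418.4\ \text{kJ}} \approx 0.96\ \text{mg/kJ}.</math>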
In Oriental medicine it has been used for blood purification, intestinal strength, skin, hair, reproductive organs and menstrual regularity.
In Korea, miyeok-guk soup is popularly consumed by women after giving birth as sea mustard () contains a high content of calcium and iodine, nutrients that are important for new nursing mothers. Many women consume it during the pregnancy phase as well. It is also traditionally eaten on birthdays for this reason, a reminder of the first food that the mother has eaten and passed on to her newborn through her milk.
Invasive species
Native to cold temperate coastal areas of Japan, Korea, China, and Russia, in recent decades it has become established in temperate regions around the world, including New Zealand, the United States, Belgium, France, Great Britain, Spain, Italy, Argentina, Australia and Mexico. It has been nominated as one of the 100 worst invasive species in the world. Undaria is commonly initially introduced or recorded on artificial structures, where its r-selected growth strategy facilitates proliferation and spread to natural reef sites. Undaria populations make a significant but inconsistent contribution of food and habitat to intertidal and subtidal reefs. Undaria invasion can cause changes to native community composition at all trophic levels. As well as increasing primary productivity, it can reduce the abundance and diversity of understory algal assemblages, out-compete some native macroalgal species and affect the abundance and composition of associated epibionts and macrofauna, including gastropods, crabs, urchins and fish. Its dense congregations and capability to latch onto any hard surface have made it a major cause of damage to aquaculture apparatus, decreasing the efficiency of fishing industries by clogging underwater equipment and fouling boat hulls.
Eradication of wakame within a localized area usually involves removing the plants underwater, often guided by regular inspection of aquatic environments. To avoid disrupting native flora, divers manually remove the reproductive parts of the wakame to reduce its spread. Proper and regular cleaning of underwater apparatus reduces the potential vectors for wakame spores, further limiting the spread of the plant.
New Zealand
In New Zealand, Undaria pinnatifida was declared an unwanted organism in 2000 under the Biosecurity Act 1993. It was first discovered in Wellington Harbour in 1987 and probably arrived as hull fouling on shipping or fishing vessels from Asia. In 2010, a single Undaria pinnatifida plant was discovered in Fiordland; the species has since quickly spread from that small clump and established itself throughout Fiordland.
Wakame is now found around much of New Zealand, from Stewart Island to as far north as the subtropical waters of Karikari Peninsula. It spreads in two ways: naturally, through the millions of microscopic spores released by each fertile organism, and through human-mediated spread, most commonly via hull fouling and with marine farming equipment. It is a highly successful and fertile species, which makes it a serious invader. Its capability to grow in dense congregations on any hard surface allows it to outcompete native flora and fauna for sunlight and space. Although the effects of wakame in New Zealand are not fully understood and their severity varies by location, its negative impact on the fishing and tourism industries in Fiordland, and the overcrowding of popular diving locations, are projected to be significant.
Even though it is an invasive species, farming of wakame is permitted in already heavily infested areas of New Zealand as part of a control program established in 2010. In 2012, the government allowed the farming of wakame in Wellington, Marlborough and Banks Peninsula. Farmers of wakame must obtain permission from Biosecurity New Zealand under Sections 52 and 53 of the Biosecurity Act 1993, which deal with exceptions to the prohibition on possessing pests and unwanted organisms. Furthermore, any farmed wakame must only be naturally settled on pre-existing marine farms; mussel farms are commonly infested with wakame. As farming is permitted purely as a form of pest control, profiting from wakame is not allowed, with the exception of Ngāi Tahu, whose revenue from harvesting wakame funds further pest control.
United States
The seaweed has been found in several harbors in southern California. In May 2009 it was discovered in San Francisco Bay and aggressive efforts are underway to remove it before it spreads.
See also
Kelp
Kombu
Laverbread
Miyeok guk
References
External links
Wakame Seaweed at About.com
AlgaeBase link
Undaria pinnatifida at the FAO
Undaria pinnatifida at the Joint Nature Conservation Committee, UK
Global Invasive species database
Undaria Management at the Monterey Bay National Marine Sanctuary
Alariaceae
Algae of Korea
Marine biota of Asia
Edible algae
Edible seaweeds
Japanese cuisine terms
Plants described in 1873 | Wakame | [
"Biology"
] | 2,166 | [
"Edible algae",
"Algae"
] |
1,041,421 | https://en.wikipedia.org/wiki/Iron%20sights | Iron sights are a system of physical alignment markers used as a sighting device to assist the accurate aiming of ranged weapons such as firearms, airguns, crossbows, and bows, or less commonly as a primitive finder sight for optical telescopes. Iron sights, which are typically made of metal, are the earliest and simplest type of sighting device. Since iron sights neither magnify nor illuminate the target, they rely completely on the viewer's naked eye and the available light by which the target is visible. In this respect, iron sights are distinctly different from optical sight designs that employ optical manipulation or active illumination, such as telescopic sights, reflector (reflex) sights, holographic sights, and laser sights.
Iron sights are typically composed of two components mounted perpendicularly above the weapon's bore axis: a 'rear sight' nearer (or 'proximal') to the shooter's eye, and a 'front sight' farther forward (or 'distal') near the muzzle. During aiming, the shooter aligns their line of sight past a gap at the center of the rear sight and towards the top edge of the front sight. When the shooter's line of sight, the iron sights, and target are all aligned, a 'line of aim' that points straight at the target has been created.
Front sights vary in design but are often a small post, bead, ramp, or ring. There are two main types of rear iron sight: 'open sights', which use an unenclosed notch, and 'aperture sights', which use a circular hole. Nearly all handguns, as well as most civilian, hunting, and police long guns, feature open sights. By contrast, many military service rifles employ aperture sights.
The earliest and simplest iron sights were fixed and could not be easily adjusted. Many modern iron sights are designed to be adjustable for sighting in firearms by adjusting the sights for elevation or windage. On many firearms it is the rear sight that is adjustable.
For precision shooting applications such as varminting or sniping, the iron sights are usually replaced by a telescopic sight. Iron sights may still be fitted alongside other sighting devices (or in the case of some models of optics, incorporated integrally) for back-up usage, if the primary sights are damaged or lost.
Principles
In the case of firearms, where the projectile follows a curved ballistic trajectory below the bore axis, the only way to ensure it will hit an intended target is by aiming at the precise point on the trajectory at that target's intended distance. To do that, the shooter aligns their line of sight with the front and rear sights, forming a consistent 'line of aim' (known as the 'sight axis') and in turn producing what is known as the 'point of aim' (POA) within their own field of view, which then gets pointed directly (i.e. aimed) at the target. The physical distance between the front and rear sights is known as the 'sight radius', the longer of which produces smaller angular errors when aiming.
"Sighting in" is a process in which the sight axis is adjusted to intersect the trajectory of the bullet at a designated distance (typically at 100 yards/meters), in order to produce a pre-determined point of impact (POI) at that distance, known as a "zero". Using that "zero" as a default reference, the point of aim can be readily re-calibrated to superimpose with the bullet's point of impact when shooting at different distances. Modern iron sights can all provide some horizontal and vertical adjustments for sighting-in, and often have elevation markings that allow the shooter to quickly compensate (though with rather limited precision) for increasing bullet drops at extended distances. Because the sight axis (which is a straight line) and the projectile trajectory (which is a parabolic curve) must be within the same vertical plane to have any chance of intersecting, it will be very difficult to shoot accurately if the sights are not perpendicularly above the gun barrel (a situation known as canting) when aiming or sighting-in.
Rear sights on long guns (such as rifles and carbines) are usually mounted on a dovetail slot on the back part of the barrel or the receiver, closer to the eye of the shooter, allowing for easy visual pick-up of the notch. Front sights are mounted to the front end of the barrel by dovetailing, soldering, screwing, or staking very close to the muzzle, frequently on a "ramp". Some front sight assemblies include a detachable hood intended to reduce glare, and if the hood is circular, then this provides a reference where the eye will naturally align one within the other.
In the case of handguns, the rear sight will be mounted on the frame (for revolvers, derringers, and single-shots) or on the slide (for semi-automatic pistols). Exceptions are possible depending on the type of handgun, e.g. the rear sight on a snub-nose revolver is typically a trench milled into the top strap of the frame, and the front sight is the usual blade. Certain handguns may have the rear sight mounted on a hoop-like bracket that straddles the slide.
With typical blade- or post-type iron sights, the shooter would center the front sight's post in the notch of the rear sight and the tops of both sights should be level. Since the eye is only capable of focusing on one focal plane at a time, and the rear sight, front sight and target are all in separate planes, only one of those three planes can be in focus. Which plane is in focus depends on the type of sight, and one of the challenges to a shooter is to keep the focus on the correct plane to allow for best sight alignment. The general advice, however, is to focus on the front sight.
Due to parallax, even a tiny error in the angle of sight alignment results in a point of impact that diverges from the target in direct proportion to the distance from the target, causing the bullet to miss the target; for example, with a 10 meter air rifle shooter trying to hit the 10 ring, which is merely a diameter dot on the target at and with a diameter pellet, an error of only in sight alignment can mean a complete miss (a point of impact miss). At , that same misalignment would be magnified 100 times, giving an error of over , 1,500 times the sight misalignment. Increasing the sight radius helps to reduce the resulting angular errors and, if the sight has an incremental adjustment mechanism, allows adjustment in smaller increments than an otherwise identical sight with a shorter sighting line. With the front sight on the front end of the barrel, sight radius may be increased by moving the rear sight from the barrel onto the receiver or tang.
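As a rough illustration of how sight radius and target distance interact, the sketch below (with made-up numbers, since the original figures are not given here) applies the small-angle relation: the point-of-impact error is the linear misalignment between the sights divided by the sight radius, multiplied by the distance to the target.

```python
def impact_error(misalignment_m, sight_radius_m, target_distance_m):
    """Approximate point-of-impact error caused by a linear misalignment
    between front and rear sight (small-angle approximation)."""
    angular_error_rad = misalignment_m / sight_radius_m
    return angular_error_rad * target_distance_m

# Hypothetical example: 0.2 mm of misalignment over a 0.5 m sight radius.
for distance in (10, 100, 1000):
    err_mm = impact_error(0.0002, 0.5, distance) * 1000
    print(f"{distance:5d} m: about {err_mm:6.1f} mm of error at the target")
```

Doubling the sight radius halves the angular error produced by the same linear misalignment, which is why moving the rear sight back onto the receiver or tang helps.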
Sights for shotguns used for shooting small, moving targets (such as skeet shooting, trap shooting, and clay pigeon shooting) work quite differently. The rear sight is completely discarded, and the rear reference point is provided by the correct and consistent positioning of the shooter's head. A brightly colored (generally the bead is made of a polished metal such as brass and silver, or a plastic fluorescent material, such as green and orange) round bead is placed at the end of the barrel. Often, this bead will be placed along a raised, flat rib, which is usually ventilated to keep it cool and reduce mirage effects from a hot barrel. Rather than being aimed like a rifle or handgun, the shotgun is pointed with the focus always on the target, and the unfocused image of the barrel and bead are placed below the target (the amount below depends on whether the target is rising or falling) and slightly ahead of the target if there is lateral movement. This method of aiming is not as precise as that of a front sight/rear sight combination, but it is much faster, and the wide spread of shots can allow an effective hit even if there is some aiming error. Some shotguns also provide a mid-bead, which is a smaller bead located halfway down the rib, which allows more feedback on barrel alignment. Some shotguns may also come equipped with rifle-style sights. These types of sights are typically found on shotguns intended for turkey hunting.
Types
Open sights
Open sights generally are used where the rear sight is at significant distance from the shooter's eye. They provide minimum occlusion of the shooter's view, but at the expense of precision. Open sights generally use either a square post or a bead on a post for a front sight. To use the sight, the post or bead is positioned both vertically and horizontally in the center of the rear sight notch. For a center hold, the front sight is positioned on the center of the target, bisecting the target vertically and horizontally. For a 6 o'clock hold, the front sight is positioned just below the target and centered horizontally. A 6 o'clock hold is only good for a known target size at a known distance and will not hold zero without user adjustment if these factors are varied. From the shooter's point of view, there should be a noticeable space between each side of the front sight and the edges of the notch; the spaces are called light bars, and the brightness of the light bars provides the shooter feedback as to the alignment of the post in the notch. Vertical alignment is done by lining up the top of the front post with the top of the rear sight, or by placing the bead just above the bottom of the V or U-notch. If the post is not centered in the V or U notch, the shot will not be accurate. If the post extends over the V or U-notch it will result in a high shot. If the post does not reach the top of the V or U-notch it will result in a low shot.
Patridge sights, named after inventor E. E. Patridge, a 19th-century American sportsman, consist of a square or rectangular post and a flat-bottomed square notch and are the most common form of open sights, being preferred for target shooting, as the majority of shooters find the vertical alignment is more precise than other open sights. V-notch and U-notch sights are a variant of the patridge which substitute a V- or U-shaped rear notch.
Other common open sight types include the buckhorn, semi-buckhorn, and express. Buckhorn sights have extensions protruding from either side of the rear sight forming a large ring which almost meets directly above the "V" of the notch. The semi-buckhorn is similar but has a wider gently curving notch with the more precise "V" at its center and is standard on classic Winchester and Marlin lever-action rifles. Express sights are most often used on heavy caliber rifles intended for the hunting of dangerous big game, and are in the form of a wide and large "V" with a heavy white contrast line marking its bottom and a big white or gold bead front sight. These sights do not occlude the target as much as some other styles which is useful in the case of a charging animal. In cases where the range is close and speed far outweighs accuracy (e.g. the shooter is being charged by dangerous big-game), the front sight is used like a shotgun bead; the rear sight is ignored, and the bead is placed on the target. When more time is available, the bead is placed in the "V" of the rear sight.
Open sights have many advantages: they are very common, inexpensive to produce, uncomplicated to use, sturdy, lightweight, resistant to severe environmental conditions, and they do not require batteries. On the other hand, they are not as precise as other forms of sights, and are difficult or impossible to adjust. Open sights also take much more time to use—the buckhorn type is the slowest, patridge, "U" and "V" type notch sights are only a bit quicker; only the express sight is relatively fast. In addition, open sights tend to block out the lower portion of the shooter's field of view by nature, and because of the depth of field limitations of the human eye, do not work as well for shooters with less than perfect vision.
Shotgun sights
Among those utilizing shotguns for hunting of upland game, directing a shotgun toward its target is considered a slightly different skill than aiming a rifle or pistol. Shotgunners are encouraged to "point" a shotgun versus the accurate aiming of a rifle. Some even espouse a mentality that eliminates the concept of "aim" altogether. Because much of shotgunning involves putting a scatter pattern in the path of moving targets, the concept of a sight is considered a subconscious aid. The front sight of a shotgun is a small spherical "bead" attached to the muzzle that acts as a reference, while the "rear sight" is nothing more than a narrow longitudinal groove on the receiver and barrel rib. When shooting, aligning the rear groove with the front bead is not to be consciously considered, as it comprises only a rough reference allowing the shooter to use their natural point of aim to make the shot.
In the tactical environment, where targets aren't moving across the visual field as quickly, sights do have a role. For many, a fiberoptic front sight is the preferred sighting reference in conjunction with a rear leaf. In this instance, the shotgun is used more like a rifle, allowing intentionally aimed shots. Some even equip their shotguns with open or aperture sights akin to a rifle.
Many shotgun bead sights are designed for a "figure 8" configuration, where a proper sight picture uses a bead mounted at the midpoint of the barrel in conjunction with a front bead mounted toward the muzzle. Many shotgun manufacturers, such as Browning, calibrate these sighting systems to produce a shotgun pattern that is "dead-on" when the front bead is stacked just above the mid-bead, producing the figure-8 sight picture.
Aperture sights
Aperture sights, also known as "peep sights", range from the "ghost ring" sight, whose thin ring blurs to near invisibility (hence "ghost"), to target aperture sights that use large disks or other occluders with pinhole-sized apertures. In general, the thicker the ring, the more precise the sight, and the thinner the ring, the faster the sight.
The theory of operation behind the aperture sight is often stated that the human eye will automatically center the front sight when looking through the rear aperture, thus ensuring accuracy. However, aperture sights are accurate even if the front sight is not centered in the rear aperture due to a phenomenon called parallax suppression. This is because, when the aperture is smaller than the eye's pupil diameter, the aperture itself becomes the entrance pupil for the entire optical system of target, front sight post, rear aperture, and eye. As long as the aperture's diameter is completely contained within the eye's pupil diameter, the exact visual location of the front sight post within the rear aperture ring does not affect the accuracy, and accuracy only starts to degrade slightly due to parallax shift as the aperture's diameter begins to encroach on the outside of the eye's pupil diameter. An additional benefit to aperture sights is that smaller apertures provide greater depth of field, making the target less blurry when focusing on the front sight.
In low light conditions the parallax suppression phenomenon is markedly better. The depth of field looking through the sight remains the same as in bright conditions. This is in contrast to open sights, where the eye's pupil will become wider in low light conditions, meaning a larger aperture and a blurrier target. The downside to this is that the image through an aperture sight is darker than with an open sight.
These sights are used on target rifles of several disciplines and on several military rifles such as the Pattern 1914 Enfield and M1917 Enfield, M1 Garand, the No. 4 series Lee–Enfields, M14 rifle, Stgw 57, G3 and the M16 series of weapons along with several others.
Rifle aperture sights for military combat or hunting arms are not designed for maximal attainable precision like target aperture sights, as these must be usable under suboptimal field conditions.
Ghost ring
The ghost ring sight is considered by some to be the fastest type of aperture sight. It is fairly accurate, easy to use, and obscures the target less than nearly all other non-optical sights. Because of this, ghost ring sights are commonly installed on riot and combat shotguns and customized handguns, and they are also gaining ground as a backup sighting system on rifles. The ghost ring is a fairly recent innovation, and differs from traditional aperture sights in the extreme thinness of the rear ring and the slightly thicker front sight. The thin ring minimizes the occlusion of the target, while the thicker front post makes it easy to find quickly. Factory Mossberg ghost ring sights also have thick steel plates on either side of the extremely thin ring. These plates protect the sight's integrity if the shotgun falls and impacts a surface in a way that could otherwise damage or distort the shape of the ring.
Target aperture sights
Target aperture sights are designed for maximum precision. The rear sight element (often called "diopter") is usually a large disk (up to 1 inch or 2.5 cm in diameter) with a small hole in the middle, of approximately or less, and is placed close to the shooter's eye. High-end target diopters normally accept accessories like adjustable diopter aperture and optical filter systems to ensure optimal sighting conditions for match shooters. Typical modern target shooting diopters offer windage and elevation corrections in to increments at . Some International Shooting Sport Federation (ISSF) (Olympic) shooting events require this precision level for sighting lines, since the final score of the top competitors' last series of shots is expressed in tenths of scoring-ring points.
The complementing front sight element may be a simple bead or post, but is more often a "globe"-type sight, which consists of a cylinder with a threaded cap, which allows differently shaped removable front sight elements to be used. Most common are posts of varying widths and heights or rings of varying diameter—these can be chosen by the shooter for the best fit to the target being used. Tinted transparent plastic insert elements may also be used, with a hole in the middle; these work the same way as an opaque ring, but provide a less obstructed view of the target. High end target front sight tunnels normally also accept accessories like adjustable aperture and optical systems to ensure optimal sighting conditions for match shooters. Some high end target sight line manufacturers also offer front sights with integrated aperture mechanisms.
The use of round rear and front sighting elements for aiming at round targets, like used in ISSF match shooting, takes advantage of the natural ability of the eye and brain to easily align concentric circles. Even for the maximum precision, there should still be a significant area of white visible around the bullseye and between the front and rear sight ring (if a front ring is being used). Since the best key to determining center is the amount of light passing through the apertures, a narrow, dim ring of light can actually be more difficult to work with than a larger, brighter ring. The precise sizes are quite subjective, and depend on both shooter preference and ambient lighting, which is why target rifles come with easily replaceable front sight inserts, and adjustable aperture mechanisms.
Front aperture size selection
Front aperture size is a compromise between a tight enough aperture to clearly define the aiming point and a loose enough aperture so as to not cause 'flicker'. When the aperture is too small, the boundary between the target and front aperture outline becomes indistinct, requiring the shooter to consciously or subconsciously generate small eye movements to measure the distance around the target. USA Shooting recommends a front aperture that creates at least 3 Minutes of Angle (MOA) of boundary space. In research performed by Precision Shooting, it was found that this increased shooter confidence, reduced hold times, and created more decisive shots. There may be an upper bound to the front aperture size that improves performance, however. In 2013, researchers performed experiments with the game of golf, specifically the skill of putting which is another skill that combines visual alignment with motor skills. They found that by manipulating the perceived size of the target (the golf hole) by surrounding it with concentric rings of various sizes, there was a phenomenon that improved performance when the target was surrounded by smaller circles thereby increasing its perceived size. They found that when the target was perceived as larger, performance increased.
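The MOA recommendation can be checked with simple small-angle arithmetic. The sketch below uses hypothetical dimensions (ring insert diameter, eye-to-front-sight distance, bull diameter, target distance) purely to show the conversion; it is not a prescription for any particular discipline.

```python
import math

MOA_PER_RADIAN = 180 / math.pi * 60  # roughly 3438 minutes of angle per radian

def apparent_moa(diameter_m, distance_m):
    """Apparent angular size, in minutes of angle, of a circle of the given
    diameter seen at the given distance (small-angle approximation)."""
    return (diameter_m / distance_m) * MOA_PER_RADIAN

# Hypothetical 10 m air-rifle setup: front ring element viewed about 0.7 m
# from the eye, aiming bull on a target 10 m away.
ring_moa = apparent_moa(0.0040, 0.7)   # 4.0 mm ring insert
bull_moa = apparent_moa(0.030, 10.0)   # 30 mm aiming bull
boundary = (ring_moa - bull_moa) / 2   # white space on each side of the bull
print(f"ring {ring_moa:.1f} MOA, bull {bull_moa:.1f} MOA, "
      f"boundary {boundary:.1f} MOA per side")
```

In this made-up setup the white boundary on each side of the bull comes out above the 3 MOA guideline; a smaller ring insert or a larger bull would shrink it.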
Non-target aperture sights
Aperture sights on military rifles use a larger aperture with a thinner ring, and generally a simple post front sight.
Rifles from the late 19th century often featured one of two types of aperture sight called a "tang sight" or a "ladder sight". Since the black powder used in muzzleloaders and early cartridges was not capable of propelling a bullet at high speed, these sights had very large ranges of vertical adjustments, often on the order of several degrees, allowing very long shots to be made accurately. The .45-70 cartridge, for example, was tested by the military for accuracy at ranges of up to , which required 3 degrees of elevation. Both ladder and tang sights folded down when not in use to reduce the chance of damage to the sights. Ladder sights were mounted on the barrel, and could be used as sights in both the folded and unfolded states. Tang sights were mounted behind the action of the rifle, and provided a very long sight radius, and had to be unfolded for use, though rifles with tang sights often had open sights as well for close range use. Tang sights often had vernier scales, allowing adjustment down to a single minute of arc over the full range of the sight.
Flip up sights
Assault rifles and sporterized semi-automatic rifles can have foldable rear and front sight elements that can be readily flipped up or down by the user. Such iron sights are often used as secondary sighting systems in case the main weapon sight (typically an optical sight such as a telescopic sight or red dot sight) malfunctions or becomes unsuitable for the tactical situation at hand, and are therefore referred to as backup iron sights (BUIS). Backup sights are usually mounted via Rail Integration Systems (most often Picatinny rails) in tandem with optical aiming devices, although "offset" BUISs that are mounted obliquely from the bore axis also exist. When used with non-magnifying optics (e.g. reflex or holographic sights), the flip-up rear and front elements often are designed to appear in the same sight picture, known as cowitnessing, as the primary optical sights.
Adjustment
Fixed sights are sights that are not adjustable. For instance, on many revolvers, the rear sight consists of a fixed sight that is a groove milled into the top of the gun's receiver. Adjustable sights are designed to be adjustable for different ranges, for the effect of wind, or to compensate for varying cartridge bullet weights or propellant loadings, which alter the round's velocity and external ballistics and thus its trajectory and point of impact. Sight adjustments are orthogonal, so the windage can be adjusted without impacting the elevation, and vice versa. If the firearm is held canted instead of level when fired, the adjustments are no longer orthogonal, so it is essential to keep the firearm level for best accuracy.
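The cross-coupling introduced by canting can be pictured as a rotation of the adjustment axes. The following sketch, with arbitrary sign conventions and hypothetical click values, projects adjustments made in a canted sight frame onto the true horizontal and vertical directions.

```python
import math

def canted_adjustment(windage_clicks, elevation_clicks, cant_deg):
    """Project sight adjustments made in the (canted) sight frame onto true
    horizontal and vertical axes. With zero cant the mapping is the identity."""
    c = math.radians(cant_deg)
    true_horizontal = windage_clicks * math.cos(c) - elevation_clicks * math.sin(c)
    true_vertical = windage_clicks * math.sin(c) + elevation_clicks * math.cos(c)
    return true_horizontal, true_vertical

# A 10-click elevation correction applied with the rifle canted 5 degrees:
h, v = canted_adjustment(0, 10, 5)
print(f"true horizontal: {h:+.2f} clicks, true vertical: {v:+.2f} clicks")
# -> roughly -0.87 clicks of unintended windage and only ~9.96 clicks of elevation
```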
The downside to adjustable sights is the inherent fragility of the moving parts. A fixed sight is a solid piece of metal, usually steel, and if firmly attached to the gun, little is going to be able to damage it beyond usefulness. Adjustable sights, on the other hand, are bulkier, and have parts that must move relative to the gun. Solid impact on an adjustable sight will usually knock it out of adjustment, if not knock it right off the gun. Because of this, guns for self defense or military use either have fixed sights, or sights with "wings" on the sides for protection (such as those on the M4 carbine).
Iron sights used for hunting guns tend to be a compromise. They will be adjustable, but only with tools—generally either a small screwdriver or an Allen wrench. They will be compact and heavily built, and designed to lock securely into position. Target sights, on the other hand, are much bulkier and easier to adjust. They generally have large knobs to control horizontal and vertical movement without tools, and often they are designed to be quickly and easily detachable from the gun so they can be stored separately in their own protective case.
The most common arrangement is a rear sight that adjusts in both directions, though military rifles often have a tangent sight at the rear, in which a slider on the rear sight provides pre-calibrated elevation adjustments for different ranges. With tangent sights, the rear sight is often used to adjust the elevation, and the front the windage. The M16A2 and later M16 series rifles have a dial-adjustable, range-calibrated rear sight, and use an elevation-adjustable front sight to "zero" the rifle at a given range. The rear sight is used for windage adjustment and to change the zero range.
Enhancements
While iron sights are very simple, that simplicity also leads to a staggering variety of different implementations. In addition to the purely geometric considerations of the front blade and rear notch, there are some factors that need to be considered when choosing a set of iron sights for a particular purpose.
Glare reduction
Glare, particularly from the front sight, can be a significant problem with iron sights. The glare from the front sight can increase the apparent brightness of the light bar on one side of the sight, causing windage errors in aiming, or lower the apparent height of the front sight, causing elevation errors in aiming. Since the direction of the ambient light is rarely constant for a shooter, the resulting changing glare can significantly affect the point of aim.
The most common solution to the problem of glare is a matte finish on the sights. Serrating or bead blasting the sight is a common solution for brightly finished sights, such as blued steel or stainless steel. Matte finishes such as parkerizing or matte black paint can also help. "Smoking" a sight by holding a match or cigarette lighter under the sight to deposit a fine layer of soot is a technique used by many shooters, and special soot-producing lighters are sold for use by competition shooters. Even a thin layer of mud or dirt applied to the sight will help kill the glare, as long as the coating is thin and consistent enough not to change the shape of the sights.
Many target sights are designed with vertical or even undercut front sight blades, which reduces the angles at which light will produce glare off the sight—the downside of these sights is that they tend to snag on clothing, branches, and other materials, so they are common only on target guns. Sight hoods reduce the chances of snagging an undercut sight and are common on some types of rifles, particularly lever-action rifles, but they are prohibited in some shooting disciplines.
Contrast enhancements
While target shooters generally prefer a matte black finish to their sights, to reduce the chance of glare and increase the contrast between the sights and the light bars, black sights don't offer good visibility with dark targets or in low light conditions, such as those often encountered in hunting, military, or self-defense situations. A variety of different contrast enhancements to the basic Patridge type sight and others have been developed to address this deficiency. The contrast enhancement on the front sight has to be somewhat larger than the contrast enhancement(s) used on the rear sight if all of them are to appear about equally large from the shooter's perspective.
Three-dot: On semi-automatic handguns, the most common type of enhancement is a bright white dot painted on the front sight near the top of the blade, and a dot on each side of the rear sight notch. In low lighting conditions the front sight dot is centered horizontally between the rear sight dots, with the target placed above the middle (front) dot. Some sight vendors offer differently colored dots for the front and rear sights.
White outline rear: A contrast variation which uses a dot front sight with a thick and bright white outline around the rear sight notch.
Straight eight: Heinie Specialty Products produces a variant of high visibility sights in which a single dot front sight and a rear notch with a dot below can be lined up vertically to form a figure eight.
Sight inserts: Popular on revolvers, this enhancement consists of a colored plastic insert in the front sight blade, usually red or orange in color.
Bar / dot or express sight: Similar to the straight eight type, this type of sight is traditional on express rifles and is also found on some handguns. The open, V-shaped rear allows for faster acquisition and wider field of view, though less accurate for longer range precision type shooting. The dot on the front sight is aligned or set directly above the vertical bar on the rear sight, commonly referred to as "dotting the 'I'".
Gold bead: Preferred by many competitors in IPSC and IDPA shooting.
Night sights: On tactical firearms, the contrast enhancements can consist of small vials containing tritium gas whose radioactive decay causes a fluorescent material to glow. Self-luminous tritium sights provide vital visibility in extremely low light situations where normal sights would be degraded or even useless. The tritium glow is not noticeable in bright conditions such as daylight however. As a result, some manufacturers have started to integrate fiber optic sights with tritium vials to provide bright, high-contrast firearms sights in both bright and dim conditions.
Fiber optic: A growing trend, started on air rifles and muzzleloaders, is the use of short pieces of optical fiber for the dots, made in such a way that ambient light falling on the length of the fiber is concentrated at the tip, making the dots slightly brighter than the surroundings. This method is most commonly used in front sights, but many makers offer sights that use fiber optics on front and rear sights. Fiber optic sights can now be found on handguns, rifles, and shotguns, both as aftermarket accessories and a growing number of factory guns.
See also
Laser sight
List of telescope parts and construction
Reflex sight
Telescopic sight
Notes
References
External links
Additional BRNO target sight images: 1
Additional BRNO target sight images: 2
Additional BRNO target sight images: 3
Firearm components
Firearm sights
Artillery components | Iron sights | [
"Technology"
] | 6,324 | [
"Firearm components",
"Artillery components",
"Components"
] |
1,041,473 | https://en.wikipedia.org/wiki/Kavli%20Institute%20for%20Theoretical%20Physics | The Kavli Institute for Theoretical Physics (KITP) is a research institute of the University of California, Santa Barbara dedicated to theoretical physics. KITP is one of 20 Kavli Institutes.
The National Science Foundation has been the principal supporter of the institute since it was founded as the Institute for Theoretical Physics in 1979. In a 2007 article in the Proceedings of the National Academy of Sciences, KITP was given the highest impact index in a comparison of nonbiomedical research organizations across the United States.
About
In the early 2000s, the institute, formerly known as the Institute for Theoretical Physics, or ITP, was named after businessman and philanthropist Fred Kavli, in recognition of his donation of $7.5 million to the institute.
Kohn Hall, which houses KITP, is located just beyond the Henley Gate at the East Entrance of the UCSB campus. The building was designed by the Driehaus Prize winner and New Classical architect Michael Graves, and a new wing designed by Graves was added in 2003–2004.
Members
The directors of the KITP have been:
Walter Kohn, 1979–1984 (Nobel Prize in Chemistry, 1998)
Robert Schrieffer, 1984–1989 (Nobel Prize for Physics, 1972)
James S. Langer, 1989–1995 (Oliver Buckley Prize (APS), 1997)
James Hartle, 1995–1997 (Einstein Prize (APS), 2009)
David Gross, 1997–2012 (Nobel Prize in Physics, 2004)
Lars Bildsten, 2012–present (Helen B. Warner Prize (AAS), 1999; Dannie Heineman Prize for Astrophysics (AAS & American Institute of Physics), 2017)
The Director, Deputy Director Mark Bowick, and Permanent Members of the KITP (Leon Balents, Lars Bildsten, David Gross, and Boris Shraiman) are also on the faculty of the UC Santa Barbara Physics Department. Former Permanent Members include Joseph Polchinski and Physics Nobel laureate Frank Wilczek.
See also
Institute for Theoretical Physics (disambiguation)
Center for Theoretical Physics (disambiguation)
Kavli Institute for Particle Astrophysics and Cosmology
Kavli Institute for the Physics and Mathematics of the Universe
References
External links
The KITP web site
University of California, Santa Barbara
Research institutes in California
Physics research institutes
Michael Graves buildings
Kavli Institutes
Theoretical physics institutes
New Classical architecture in the United States | Kavli Institute for Theoretical Physics | [
"Physics"
] | 496 | [
"Theoretical physics",
"Theoretical physics institutes"
] |
1,041,520 | https://en.wikipedia.org/wiki/Peter%20Chen | Peter Pin-Shan Chen (; born 3 January 1947) is a Taiwanese-American computer scientist and applied mathematician. He is a retired distinguished career scientist and faculty member at Carnegie Mellon University and Distinguished Chair Professor Emeritus at Louisiana State University. He is known for the development of the entity–relationship model in 1976.
Biography
Born in Taichung, Taiwan, Chen received a Bachelor of Science (B.S.) in electrical engineering in 1968 from National Taiwan University and earned a Ph.D. in computer science and applied mathematics at Harvard University in 1973. In 1970, he worked one summer at IBM. After graduating from Harvard, he spent one year at Honeywell and a summer at Digital Equipment Corporation.
From 1974 to 1978 Chen was an assistant professor at the MIT Sloan School of Management. From 1978 to 1983 he was an associate professor at the University of California, Los Angeles (UCLA Management School). From 1983 to 2011 Chen held the position of M. J. Foster Distinguished Chair Professor of Computer Science at Louisiana State University and, for several years, adjunct professor in its Business School and Medical School (Shreveport). During this period, he was a visiting professor once at Harvard in '89-'90 and three times at Massachusetts Institute of Technology (EECS Dept. in '86-'87, Sloan School in '90-'91, and Division of Engineering Systems in '06-'07). From 2010 to 2020, Chen was a Distinguished Career Scientist and faculty member at Carnegie Mellon University, U.S.A.
Besides lecturing around the world, he has also served as an (honorary) professor outside of the U.S. In 1984, under the sponsorship of the United Nations, he taught a one-month short course on databases at Huazhong University of Science and Technology in Wuhan, China, and was awarded as Honorary Professor there. Then, he went to Beijing as a member of the IEEE delegation of the First International Conference on Computers and Applications (the first major IEEE computer conference held in China). From 2008 to 2014, he was an Honorary Chair Professor at the Institute of Service Science at National Tsing Hua University, Taiwan. Starting in 2016, he is an Honorary Chair Professor in the Department of Bioengineering and Bioinformatics, Asia University (Taiwan).
Chen has served as an advisor for government agencies and corporations. He is a member of the advisory board of the Computer and Information Science and Engineering Directorate of National Science Foundation (2004-2006) and the United States Air Force Scientific Advisory Board (2005-2009).
Awards and honors
Chen's original paper is one of the most influential papers in the computer software field based on a survey of more than 1,000 computer science professors documented in a book on "Great Papers in Computer Science". Chen's work is also cited in the book Software Challenges published by Time-Life Books in 1993 in the series on "Understanding Computers." Chen is recognized as one of the pioneers in a book on "Software Pioneers". He is listed in Who's Who in America and Who's Who in the World.
Chen has received many awards in the fields of Information Technology. He received the Data Resource Management Technology Award from the Data Administration Management Association (DAMA International) in New York City in 1990. He was elected as a Fellow of the Association for Computing Machinery (ACM), American Association for the Advancement of Science (AAAS), IEEE, and ER. He won the Achievement Award in Information Management in 2000 from DAMA International. He was an inductee into the Data Management Hall of Fame in 2000. He received the Stevens Award in Software Method Innovation in 2001. In 2003, Chen received the IEEE Harry H. Goode Memorial Award at the IEEE-CS Board of Governors meeting in San Diego. He was presented with the ACM - AAAI Allen Newell Award at the ACM Banquet in San Diego in June 2003 and International Joint Conference on Artificial Intelligence (IJCAI) in Acapulco in August 2003. Chen is also the recipient of the Pan Wen-Yuan Outstanding Research Award in 2004. In June 2011 in Jeju Island, Korea, Chen received the Transformative Achievement Medal from the Software Engineering Society and the Society for Design and Process Science. In 2021, he received the Leadership Award from the IEEE Technical Committee of Service Computing (TCSVC).
His innovative work initiated/accelerated a new field of research and practice called "Conceptual Modeling" based on conceptual model (computer science) or Entity–Relationship model. In 1979, he founded an annual international professional meeting, the International Conference on Conceptual Modeling, which has been held in different countries. He also founded the Data & Knowledge Engineering journal for publishing and disseminating scholarly research results.
Peter P. Chen Award
To recognize Chen's pioneering leadership role, the "Peter P. Chen Award" was established in 2008, to honor excellent researchers/educators for outstanding contributions to the field of conceptual modeling each year. The recipients of the Peter P. Chen Award are:
2008: Bernhard Thalheim, professor, University of Kiel, Germany
2009: David W. Embley , professor, Brigham Young University (BYU), U.S.A.
2010: John Mylopoulos, professor, University of Toronto, Canada, and University of Trento, Italy
2011: Tok Wang Ling, professor, National University of Singapore (NUS), Singapore
2012: Stefano Spaccapietra, honorary professor, Swiss Federal Institute of Technology (EPFL), Switzerland
2013: Carlo Batini , professor, University of Milano-Bicocca, Italy
2014: Antonio L. Furtado, professor, PUC-Rio, Brazil
2015: Il-Yeol Song , professor, Drexel University, U.S.A.
2016: Óscar Pastor, professor, Universitat Politècnica de València, Spain
2017: Yair Wand, CANFOR Professor in MIS, University of British Columbia, CANADA
2018: Veda C. Storey, Tull Professor of Computer Information Systems, Georgia State University, Atlanta, U.S.A.
2019: Eric Yu, professor, University of Toronto, Canada.
2020: Matthias Jarke, professor, RWTH Aachen University, Germany and Chairman, Fraunhofer ICT Group.
2021: Sudha Ram, Anheuser-Busch Professor of MIS, Entrepreneurship and Innovation, University of Arizona, U.S.A.
2022: Maurizio Lenzerini, Professor of Computer Science and Engineering, Sapienza University of Rome, Italy.
2023: Nicola Guarino, the head of the Laboratory for Applied Ontology (LOA), part of the Italian National Research Council (CNR) in Trento, Italy.
Peter Chen Big Data Young Researcher Award
To recognize Chen's pioneering role and contributions in building the foundation for big data modeling and analysis, the "Peter Chen Big Data Young Researcher Award" was established in 2015 by the Service Society and the steering committee of eight co-located IEEE Conferences (IEEE ICWS/SCC/CLOUD/MS/BigDataCongress/SERVICES), to honor a very promising young big data researcher each year in the IEEE Big Data Congress and co-located conferences, starting from IEEE BigData 2015 Congress. The Peter Chen Big Data Young Researcher Award winners are:
2015: Yi Chen, associate professor, New Jersey Institute of Technology, U.S.A.
2016: Wei Tan, Research Staff Member, IBM Thomas J. Watson Research Center, Yorktown Heights, NY USA.
2017: Ilkay Altintas, Chief Data Science Officer, San Diego Supercomputer Center, Univ. of California, San Diego, USA.
Work
Entity–relationship modeling and conceptual data modeling
The entity–relationship model serves as the foundation of many systems analysis and design methodologies, computer-aided software engineering (CASE) tools, and repository systems. The ER model is the basis for IBM's Repository Manager/MVS and DEC's CDD/Plus.
Chen's original paper is commonly cited as the definitive reference for entity–relationship modeling, though the concept of object relationship had been developed a year earlier by Schmid and Swenson as reported in the 1975 ACM SIGMOD Proceedings. Chen is one of the pioneers of using entity–relationship concepts in software and information system modeling and design. Before Chen's paper, the basic entity–relationship ideas were used mostly informally by practitioners. Chen first published an abstract and presented his ER model at the First Very Large Database Conference in September 1975, the same year as a paper with similar concepts written by A. P. G. Brown. Chen's main contributions are formalizing the concepts, developing a theory with a set of data definition and manipulation operations, and specifying the translation rules from the ER model to several major types of databases (including the Relational Database). He also popularized the model and introduced it to the academic literature.
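To give a flavor of what "translation rules from the ER model to the relational model" mean in practice, here is a deliberately simplified sketch; the entity, relationship, and attribute names are invented for illustration, and the mapping shown (entity becomes a table keyed by its identifier, many-to-many relationship becomes a table keyed by the linked entities' identifiers) covers only the most basic case, not Chen's full set of rules.

```python
# Entities with their attributes (the first attribute is taken as the key).
entities = {
    "Employee": ["emp_no", "name"],
    "Project":  ["proj_no", "title"],
}

# A many-to-many relationship between two entities, with its own attribute.
relationships = {
    "WorksOn": {"links": ["Employee", "Project"], "attributes": ["hours"]},
}

def to_relational(entities, relationships):
    """Each entity becomes a table keyed by its first attribute; each
    relationship becomes a table whose key combines the linked entities' keys."""
    tables = {name: list(attrs) for name, attrs in entities.items()}
    for name, rel in relationships.items():
        keys = [entities[e][0] for e in rel["links"]]
        tables[name] = keys + rel["attributes"]
    return tables

for table, columns in to_relational(entities, relationships).items():
    print(f"{table}({', '.join(columns)})")
# Employee(emp_no, name)
# Project(proj_no, title)
# WorksOn(emp_no, proj_no, hours)
```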
The ER model was adopted as the meta-model ANSI Standard in Information Resource Directory System (IRDS), and the ER approach has been ranked at the top methodology for database design and one of the top methodologies in systems development by several surveys of Fortune 500 companies.
Computer-aided software engineering
Chen's work is a cornerstone of software engineering, in particular computer-aided software engineering (CASE). In the late 1980s and early 1990s, IBM's Application Development Cycle (AD/Cycle) framework and DB2 repository (RM/MVS) were based on the ER model. Other vendors’ repository systems such as Digital's CDD+ were also based on the ER model. Chen has had a significant impact on the CASE industry through his research and his lecturing around the world on structured system development methodologies. The ER model has influenced most of the major CASE tools, including Computer Associates’ ERWIN, Oracle Corporation’s Designer/2000, and Sybase’s PowerDesigner (and even a general drawing tool like Microsoft Visio), as well as the IDEF1X standard. The ER model is also the basis for Microsoft's ADO.NET Entity Framework.
The hypertext concept, which makes the World Wide Web extremely popular, is very similar to the main concept in the ER model. Chen investigated this linkage as an invited expert of several XML working groups of the World Wide Web Consortium (W3C).
The ER model also serves as the foundation of some of the recent work on Object-oriented analysis and design methodologies and Semantic Web. The UML modeling language has its roots in the ER model.
Computer performance modeling
In his early career, he was active in R&D activities in computer system performance. He was the program chair of an ACM SIGMETRICS conference. He developed a computer performance model for a major computer vendor. His innovative research results were adopted in commercial computer performance tuning and capacity planning.
Memory and storage hierarchy, storage technology, CD-ROM, firmware, and micro-programming
His Ph.D. thesis at Harvard was one of the first studies of cost-performance optimization models of multi-level memory/storage hierarchies. He was also one of the early micro-programmers developing the firmware for a file control unit for an IBM mainframe computer. His article on "CD-ROM" in IEEE Proceedings journal in the 1980s was one of the first articles explaining how CD-ROM worked when CD-ROMs became popular. He was a co-author of the storage technology article in early versions of a computer encyclopedia book published by McGraw-Hill.
Cyber security and terrorist detection
In recent years, he led a multidisciplinary research team in developing new efficient and effective techniques in identifying terrorists and malicious cyber transactions. At CMU, he is active in the R&D activities of CERT Coordination Center and Software Engineering Institute (SEI).
Big data, web services, blockchain and Internet of Things (IoT)
He is active in research and lecturing on Big Data and emerging technologies. He was a keynote speaker and a keynote panelist on Big Data at IEEE Conferences. He was the 2014 program chair, the 2015–16 conference chair, and the 2017 honorary chair of the IEEE BigData Congresses. He was the chair of the 2018 IEEE ICWS Conference and the chair of the Blockchain Panel. He was the general chair of the 2019 IEEE Service Congress, Milan, Italy. He received the 2002 ACM - AAAI Allen Newell Award, one of the top awards jointly sponsored by a computer and an AI professional societies.
Publications
Peter P. Chen has published many books, papers, and articles.
Books (a selection)
2007. Active Conceptual Modeling of Learning: Next Generation Learning-Base System Development. With Leah Y. Wong (Eds.). Springer.
1999. Advances in Conceptual Modeling: ER'99 Workshops on Evolution and Change in Data Management, Reverse Engineering in Information Systems, and the World ... (Lecture Notes in Computer Science). With David W. Embley, Jacques Kouloumdjian, Stephen W. Liddle and John F. Roddick (Eds.) Springer Verlag.
1999. Conceptual Modeling: Current Issues and Future Directions (Lecture Notes in Computer Science) With Jacky Akoka, Hannu Kangassalo, and Bernhard Thalheim.
1985. Data & Knowledge Engineering, Volume 1, Number 1, 1985.
1981. Entity–Relationship Approach to Information Modeling and Analysis.
1980. Entity–Relationship Approach to Systems Analysis and Design. North-Holland.
Articles (a selection)
1976 (March). "The Entity–Relationship Model—Toward a Unified View of Data". In: ACM Transactions on Database Systems, Vol. 1, No. 1. ISSN 0362-5915.
2002. "Entity–Relationship Modeling: Historical Events, Future Trends, and Lessons Learned". In: Software Pioneers: Contributions to Software Engineering. Broy M. and Denert, E. (eds.), Berlin: Springer-Verlag. Lecture Notes in Computer Sciences, June 2002. pp. 100–114.
References
External links
Home page of Dr. Peter Chen at Louisiana State University
Living people
American computer scientists
American people of Taiwanese descent
Data modeling
1998 fellows of the Association for Computing Machinery
Fellows of the American Association for the Advancement of Science
Fellows of the IEEE
Carnegie Mellon University faculty
Louisiana State University faculty
MIT Sloan School of Management faculty
National Taiwan University alumni
Harvard University alumni
Software engineering researchers
Database researchers
Scientists from Taichung
Members of the European Academy of Sciences and Arts
1947 births
20th-century Taiwanese educators
21st-century Taiwanese educators | Peter Chen | [
"Engineering"
] | 2,942 | [
"Data modeling",
"Data engineering"
] |
1,041,641 | https://en.wikipedia.org/wiki/Radiation%20hardening | Radiation hardening is the process of making electronic components and circuits resistant to damage or malfunction caused by high levels of ionizing radiation (particle radiation and high-energy electromagnetic radiation), especially for environments in outer space (especially beyond low Earth orbit), around nuclear reactors and particle accelerators, or during nuclear accidents or nuclear warfare.
Most semiconductor electronic components are susceptible to radiation damage, and radiation-hardened (rad-hard) components are based on their non-hardened equivalents, with some design and manufacturing variations that reduce the susceptibility to radiation damage. Due to the low demand and the extensive development and testing required to produce a radiation-tolerant design of a microelectronic chip, the technology of radiation-hardened chips tends to lag behind the most recent developments. They also typically cost more than their commercial counterparts.
Radiation-hardened products are typically tested to one or more resultant-effects tests, including total ionizing dose (TID), enhanced low dose rate effects (ELDRS), neutron and proton displacement damage, and single event effects (SEEs).
Problems caused by radiation
Environments with high levels of ionizing radiation create special design challenges. A single charged particle can knock thousands of electrons loose, causing electronic noise and signal spikes. In the case of digital circuits, this can cause results which are inaccurate or unintelligible. This is a particularly serious problem in the design of satellites, spacecraft, future quantum computers, military aircraft, nuclear power stations, and nuclear weapons. In order to ensure the proper operation of such systems, manufacturers of integrated circuits and sensors intended for the military or aerospace markets employ various methods of radiation hardening. The resulting systems are said to be rad(iation)-hardened, rad-hard, or (within context) hardened.
Major radiation damage sources
Typical sources of exposure of electronics to ionizing radiation are the Van Allen radiation belts for satellites, nuclear reactors in power plants for sensors and control circuits, particle accelerators for control electronics (particularly particle detector devices), residual radiation from isotopes in chip packaging materials, cosmic radiation for spacecraft and high-altitude aircraft, and nuclear explosions for potentially all military and civilian electronics.
Secondary particles result from interaction of other kinds of radiation with structures around the electronic devices.
Van Allen radiation belts contain electrons (up to about 10 MeV) and protons (up to 100s MeV) trapped in the geomagnetic field. The particle flux in the regions farther from the Earth can vary wildly depending on the actual conditions of the Sun and the magnetosphere. Due to their position they pose a concern for satellites.
Nuclear reactors produce gamma radiation and neutron radiation which can affect sensor and control circuits in nuclear power plants.
Particle accelerators produce high energy protons and electrons, and the secondary particles produced by their interactions produce significant radiation damage on sensitive control and particle detector components, of the order of magnitude of 10 MRad[Si]/year for systems such as the Large Hadron Collider.
Chip packaging materials were an insidious source of radiation that was found to be causing soft errors in new DRAM chips in the 1970s. Traces of radioactive elements in the packaging of the chips were producing alpha particles, which were then occasionally discharging some of the capacitors used to store the DRAM data bits. These effects have been reduced today by using purer packaging materials, and employing error-correcting codes to detect and often correct DRAM errors.
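As an illustration of the kind of error-correcting codes mentioned above, the sketch below implements a plain Hamming(7,4) code, which can correct any single flipped bit in a seven-bit word. Real ECC memory typically uses wider SECDED codes over 64-bit words, so this is an illustrative toy, not the scheme used in any particular DRAM product.

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit Hamming codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3       # 1-based position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1              # repair the single-bit upset
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
stored = hamming74_encode(word)
stored[5] ^= 1                            # an alpha particle flips one stored bit
assert hamming74_decode(stored) == word   # the error is corrected on read
```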
Cosmic rays come from all directions and consist of approximately 85% protons, 14% alpha particles, and 1% heavy ions, together with X-ray and gamma-ray radiation. Most effects are caused by particles with energies between 0.1 and 20 GeV. The atmosphere filters most of these, so they are primarily a concern for spacecraft and high-altitude aircraft, but can also affect ordinary computers on the surface.
Solar particle events come from the direction of the sun and consist of a large flux of high-energy (several GeV) protons and heavy ions, again accompanied by X-ray radiation.
Nuclear explosions produce a short and extremely intense surge through a wide spectrum of electromagnetic radiation, an electromagnetic pulse (EMP), neutron radiation, and a flux of both primary and secondary charged particles. In case of a nuclear war they pose a potential concern for all civilian and military electronics.
Radiation effects on electronics
Fundamental mechanisms
Two fundamental damage mechanisms take place:
Lattice displacement
Lattice displacement is caused by neutrons, protons, alpha particles, heavy ions, and very high energy gamma photons. They change the arrangement of the atoms in the crystal lattice, creating lasting damage, and increasing the number of recombination centers, depleting the minority carriers and worsening the analog properties of the affected semiconductor junctions. Counterintuitively, higher doses over a short time cause partial annealing ("healing") of the damaged lattice, leading to a lower degree of damage than with the same doses delivered in low intensity over a long time (LDR or Low Dose Rate). This type of problem is particularly significant in bipolar transistors, which are dependent on minority carriers in their base regions; increased losses caused by recombination cause loss of the transistor gain (see neutron effects). Components certified as ELDRS (Enhanced Low Dose Rate Sensitive)-free do not show damage with fluxes below 0.01 rad(Si)/s = 36 rad(Si)/h.
Ionization effects
Ionization effects are caused by charged particles, including ones with energy too low to cause lattice effects. The ionization effects are usually transient, creating glitches and soft errors, but can lead to destruction of the device if they trigger other damage mechanisms (e.g., a latchup). Photocurrent caused by ultraviolet and X-ray radiation may belong to this category as well. Gradual accumulation of holes in the oxide layer in MOSFET transistors leads to worsening of their performance, up to device failure when the dose is high enough (see total ionizing dose effects).
The effects can vary wildly depending on all the parameters – type of radiation, total dose and radiation flux, combination of types of radiation, and even the kind of device load (operating frequency, operating voltage, actual state of the transistor during the instant it is struck by the particle) – which makes thorough testing difficult, time-consuming, and requiring many test samples.
Resultant effects
The "end-user" effects can be characterized in several groups:
Neutron effects
A neutron interacting with a semiconductor lattice will displace the atoms in the lattice. This leads to an increase in the count of recombination centers and deep-level defects, reducing the lifetime of minority carriers, thus affecting bipolar devices more than CMOS ones. Bipolar devices on silicon tend to show changes in electrical parameters at levels of 10^10 to 10^11 neutrons/cm^2, while CMOS devices aren't affected until 10^15 neutrons/cm^2. The sensitivity of devices may increase together with increasing level of integration and decreasing size of individual structures. There is also a risk of induced radioactivity caused by neutron activation, which is a major source of noise in high energy astrophysics instruments. Induced radiation, together with residual radiation from impurities in component materials, can cause all sorts of single-event problems during the device's lifetime. GaAs LEDs, common in optocouplers, are very sensitive to neutrons. The lattice damage influences the frequency of crystal oscillators. Kinetic energy effects (namely lattice displacement) of charged particles belong here too.
Total ionizing dose effects
Total ionizing dose effects represent the cumulative damage caused by exposure to ionizing radiation over time. It is measured in rads and causes slow gradual degradation of the device's performance. A total dose greater than 5000 rads delivered to silicon-based devices in a timespan on the order of seconds to minutes will cause long-term degradation. In CMOS devices, the radiation creates electron–hole pairs in the gate insulation layers, which cause photocurrents during their recombination, and the holes trapped in the lattice defects in the insulator create a persistent gate biasing and influence the transistors' threshold voltage, making the N-type MOSFET transistors easier and the P-type ones more difficult to switch on. The accumulated charge can be high enough to keep the transistors permanently open (or closed), leading to device failure. Some self-healing takes place over time, but this effect is not too significant. This effect is the same as hot carrier degradation in high-integration high-speed electronics. Crystal oscillators are somewhat sensitive to radiation doses, which alter their frequency. The sensitivity can be greatly reduced by using swept quartz. Natural quartz crystals are especially sensitive. Radiation performance curves for TID testing may be generated for all resultant effects testing procedures. These curves show performance trends throughout the TID test process and are included in the radiation test report.
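The influence of trapped oxide charge on the threshold voltage can be illustrated with a first-order estimate: treating the trapped holes as a sheet of charge at the oxide-silicon interface, the threshold shift is roughly the trapped areal charge divided by the oxide capacitance per unit area. The sketch below is only an illustrative calculation; the parameter values (trapped-hole density, oxide thickness) are hypothetical and not taken from any particular device.

```python
# Simplified, illustrative estimate of threshold-voltage shift in a MOSFET
# caused by holes trapped in the gate oxide (treated as a sheet of charge at
# the oxide-silicon interface). Parameter values are hypothetical.

Q_E = 1.602e-19        # elementary charge, C
EPS_0 = 8.854e-12      # vacuum permittivity, F/m
EPS_SIO2 = 3.9         # relative permittivity of SiO2

def delta_vth(trapped_density_cm2: float, t_ox_nm: float) -> float:
    """Threshold shift (V) for a given trapped-hole areal density (cm^-2)
    and oxide thickness (nm), using delta_Vth = -Q_trapped / C_ox."""
    n_ot = trapped_density_cm2 * 1e4            # convert cm^-2 -> m^-2
    c_ox = EPS_0 * EPS_SIO2 / (t_ox_nm * 1e-9)  # oxide capacitance per area, F/m^2
    return -Q_E * n_ot / c_ox                   # negative shift: NMOS turns on earlier

if __name__ == "__main__":
    # Hypothetical example: 1e12 trapped holes/cm^2 in a 10 nm oxide
    print(f"delta Vth = {delta_vth(1e12, 10):.2f} V")   # about -0.46 V
```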
Transient dose effects
Transient dose effects result from a brief high-intensity pulse of radiation, typically occurring during a nuclear explosion. The high radiation flux creates photocurrents in the entire body of the semiconductor, causing transistors to randomly open, changing logical states of flip-flops and memory cells. Permanent damage may occur if the duration of the pulse is too long, or if the pulse causes junction damage or a latchup. Latchups are commonly caused by the X-rays and gamma radiation flash of a nuclear explosion. Crystal oscillators may stop oscillating for the duration of the flash due to prompt photoconductivity induced in quartz.
Systems-generated EMP effects
SGEMP effects are caused by the radiation flash traveling through the equipment and causing local ionization and electric currents in the material of the chips, circuit boards, electrical cables and cases.
Digital damage: SEE
Single-event effects (SEE) have been studied extensively since the 1970s. When a high-energy particle travels through a semiconductor, it leaves an ionized track behind. This ionization may cause a highly localized effect similar to the transient dose one - a benign glitch in output, a less benign bit flip in memory or a register or, especially in high-power transistors, a destructive latchup and burnout. Single event effects have importance for electronics in satellites, aircraft, and other civilian and military aerospace applications. Sometimes, in circuits not involving latches, it is helpful to introduce RC time constant circuits that slow down the circuit's reaction time beyond the duration of an SEE.
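The RC-filtering idea mentioned above can be quantified: a first-order RC network has time constant τ = RC, and a transient much shorter than τ is strongly attenuated before it can propagate. The snippet below is an illustrative calculation with made-up component values and an assumed transient duration, not a prescription from any standard.

```python
import math

def rc_time_constant(r_ohm: float, c_farad: float) -> float:
    """Time constant tau = R * C of a first-order low-pass filter."""
    return r_ohm * c_farad

def step_response_fraction(pulse_width_s: float, tau_s: float) -> float:
    """Fraction of a rectangular pulse's amplitude that appears at the filter
    output by the end of the pulse: 1 - exp(-t/tau)."""
    return 1.0 - math.exp(-pulse_width_s / tau_s)

if __name__ == "__main__":
    tau = rc_time_constant(10e3, 10e-12)       # hypothetical 10 kOhm, 10 pF -> 100 ns
    set_pulse = 1e-9                           # assume a ~1 ns single-event transient
    print(f"tau = {tau * 1e9:.0f} ns")
    print(f"peak seen downstream ~ {step_response_fraction(set_pulse, tau):.1%} of the glitch")
```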
Single-event transient
An SET happens when the charge collected from an ionization event discharges in the form of a spurious signal traveling through the circuit, in effect much like an electrostatic discharge. It is considered a soft error, and is reversible.
Single-event upset
Single-event upsets (SEU) or transient radiation effects in electronics are state changes of memory or register bits caused by a single ion interacting with the chip. They do not cause lasting damage to the device, but may cause lasting problems to a system which cannot recover from such an error. It is otherwise a reversible soft error. In very sensitive devices, a single ion can cause a multiple-bit upset (MBU) in several adjacent memory cells. SEUs can become single-event functional interrupts (SEFI) when they upset control circuits, such as state machines, placing the device into an undefined state, a test mode, or a halt, which would then need a reset or a power cycle to recover.
Single-event latchup
An SEL can occur in any chip with a parasitic PNPN structure. A heavy ion or a high-energy proton passing through one of the two inner-transistor junctions can turn on the thyristor-like structure, which then stays "shorted" (an effect known as latch-up) until the device is power-cycled. As the effect can happen between the power source and substrate, destructively high current can be involved and the part may fail. This is a hard error, and is irreversible. Bulk CMOS devices are most susceptible.
Single-event snapback
A single-event snapback is similar to an SEL but does not require the PNPN structure; it can be induced in N-channel MOS transistors switching large currents, when an ion hits near the drain junction and causes avalanche multiplication of the charge carriers. The transistor then opens and stays open, a hard error which is irreversible.
Single-event induced burnout
An SEB may occur in power MOSFETs when the substrate right under the source region gets forward-biased and the drain-source voltage is higher than the breakdown voltage of the parasitic structures. The resulting high current and local overheating then may destroy the device. This is a hard error, and is irreversible.
Single-event gate rupture
Single-event gate ruptures (SEGR) are observed in power MOSFETs when a heavy ion hits the gate region while a high voltage is applied to the gate. A local breakdown then happens in the insulating layer of silicon dioxide, causing local overheating and destruction (looking like a microscopic explosion) of the gate region. It can occur even in EEPROM cells during write or erase, when the cells are subjected to a comparatively high voltage. This is a hard error, and is irreversible.
SEE testing
While proton beams are widely used for SEE testing due to availability, at lower energies proton irradiation can often underestimate SEE susceptibility. Furthermore, proton beams expose devices to risk of total ionizing dose (TID) failure which can cloud proton testing results or result in premature device failure. White neutron beams—ostensibly the most representative SEE test method—are usually derived from solid target-based sources, resulting in flux non-uniformity and small beam areas. White neutron beams also have some measure of uncertainty in their energy spectrum, often with high thermal neutron content.
The disadvantages of both proton and spallation neutron sources can be avoided by using mono-energetic 14 MeV neutrons for SEE testing. A potential concern is that mono-energetic neutron-induced single event effects will not accurately represent the real-world effects of broad-spectrum atmospheric neutrons. However, recent studies have indicated that, to the contrary, mono-energetic neutrons—particularly 14 MeV neutrons—can be used to quite accurately understand SEE cross-sections in modern microelectronics.
Radiation-hardening techniques
Physical
Hardened chips are often manufactured on insulating substrates instead of the usual semiconductor wafers. Silicon on insulator (SOI) and silicon on sapphire (SOS) are commonly used. While normal commercial-grade chips can withstand between 50 and 100 gray (5 and 10 krad), space-grade SOI and SOS chips can survive doses between 1000 and 3000 gray (100 and 300 krad). At one time many 4000 series chips were available in radiation-hardened versions (RadHard). While SOI eliminates latchup events, TID and SEE hardness are not guaranteed to be improved.
Choosing a substrate with a wide band gap, such as silicon carbide or gallium nitride, gives higher tolerance to deep-level defects.
Use of a special process node provides increased radiation resistance. Due to the high development costs of new radiation-hardened processes, the smallest "true" rad-hard (RHBP, Rad-Hard By Process) process was 150 nm as of 2016; however, rad-hard 65 nm FPGAs were available that used some of the techniques used in "true" rad-hard processes (RHBD, Rad-Hard By Design). As of 2019, 110 nm rad-hard processes are available.
Bipolar integrated circuits generally have higher radiation tolerance than CMOS circuits. The low-power Schottky (LS) 5400 series can withstand 1000 krad, and many ECL devices can withstand 10,000 krad. Using edgeless CMOS transistors, which have an unconventional physical construction, together with an unconventional physical layout, can also be effective.
Magnetoresistive RAM, or MRAM, is considered a likely candidate to provide radiation-hardened, rewritable, non-volatile memory. Physical principles and early tests suggest that MRAM is not susceptible to ionization-induced data loss.
Capacitor-based DRAM is often replaced by more rugged (but larger, and more expensive) SRAM. Radiation-hardened SRAM cells use more transistors per cell than the usual 4T or 6T designs, which makes the cells more tolerant to SEUs at the cost of higher power consumption and size.
Shielding
Shielding the package against radioactivity reduces the exposure of the bare device and is a comparatively straightforward measure.
To protect against neutron radiation and the neutron activation of materials, it is possible to shield the chips themselves by using depleted boron (consisting almost entirely of the isotope boron-11) in the borophosphosilicate glass passivation layer protecting the chips, as the boron-10 present in natural boron readily captures neutrons and undergoes an (n,α) reaction (see soft error).
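The reason boron-10 must be avoided is its very large thermal-neutron capture cross section; the capture reaction produces an energetic alpha particle and a lithium recoil inside the chip, which can themselves cause soft errors:

```latex
{}^{10}\mathrm{B} + n \;\longrightarrow\; {}^{7}\mathrm{Li} + {}^{4}\mathrm{He}\;(\alpha)
```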
Logical
Error correcting code memory (ECC memory) uses redundant bits to check for and possibly correct corrupted data. Since radiation's effects damage the memory content even when the system is not accessing the RAM, a "scrubber" circuit must continuously sweep the RAM; reading out the data, checking the redundant bits for data errors, then writing back any corrections to the RAM.
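As an illustration of the scrubbing idea, the sketch below pairs a classic Hamming(7,4) single-error-correcting code with a periodic scrub pass over a simulated memory. Real ECC memories use wider SECDED codes and hardware scrubbers; the nibble-wide code, the in-Python "memory", and the function names here are simplifications chosen only to show the read-check-correct-write-back cycle.

```python
# Illustrative memory scrubber using a Hamming(7,4) code: each 4-bit nibble is
# stored as a 7-bit codeword that can correct any single flipped bit.
# This is a teaching sketch, not how real SECDED DRAM controllers are built.

def encode(nibble: int) -> int:
    """Encode 4 data bits (d1..d4) into a 7-bit Hamming codeword.
    Bit positions 1..7 hold p1, p2, d1, p3, d2, d3, d4."""
    d1, d2, d3, d4 = (nibble >> 3) & 1, (nibble >> 2) & 1, (nibble >> 1) & 1, nibble & 1
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    bits = [p1, p2, d1, p3, d2, d3, d4]          # positions 1..7
    return sum(b << i for i, b in enumerate(bits))

def correct(word: int) -> int:
    """Return the codeword with any single-bit error corrected."""
    def bit(pos: int) -> int:                    # read bit at position 1..7
        return (word >> (pos - 1)) & 1
    s1 = bit(1) ^ bit(3) ^ bit(5) ^ bit(7)
    s2 = bit(2) ^ bit(3) ^ bit(6) ^ bit(7)
    s3 = bit(4) ^ bit(5) ^ bit(6) ^ bit(7)
    syndrome = s1 + 2 * s2 + 4 * s3              # 0 = no error, else error position
    if syndrome:
        word ^= 1 << (syndrome - 1)
    return word

def scrub(memory: list) -> int:
    """One scrubber sweep: read every word, correct it, write it back.
    Returns the number of corrections made."""
    fixes = 0
    for addr, word in enumerate(memory):
        fixed = correct(word)
        if fixed != word:
            memory[addr] = fixed
            fixes += 1
    return fixes

if __name__ == "__main__":
    mem = [encode(n) for n in (0b1011, 0b0110, 0b1111)]
    mem[1] ^= 1 << 4                   # simulate an SEU flipping one stored bit
    print("corrections:", scrub(mem))  # -> 1
    print("intact:", mem == [encode(n) for n in (0b1011, 0b0110, 0b1111)])  # -> True
```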
Redundant elements can be used at the system level. Three separate microprocessor boards may independently compute an answer to a calculation and compare their answers. Any system that produces a minority result will recalculate. Logic may be added such that if repeated errors occur from the same system, that board is shut down.
Redundant elements may be used at the circuit level. A single bit may be replaced with three bits and separate "voting logic" for each bit to continuously determine its result (triple modular redundancy). This increases the area of a chip design by a factor of 5, so it must be reserved for smaller designs. But it has the secondary advantage of also being "fail-safe" in real time. In the event of a single-bit failure (which may be unrelated to radiation), the voting logic will continue to produce the correct result without resorting to a watchdog timer. System-level voting between three separate processor systems will generally still need some circuit-level voting logic to perform the votes between the three processor systems.
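The per-bit "voting logic" can be extremely simple. The sketch below shows a bitwise two-out-of-three majority vote, which returns the correct word as long as no more than one of the three copies has been corrupted; it is an illustration of the principle, not any particular flight design.

```python
def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority: each output bit equals the value held by at
    least two of the three redundant copies."""
    return (a & b) | (a & c) | (b & c)

if __name__ == "__main__":
    word = 0b1011_0010
    corrupted = word ^ 0b0100_0000     # one copy suffers a single-bit upset
    assert majority_vote(word, corrupted, word) == word
    print("voted value matches original:", hex(majority_vote(word, word, corrupted)))
```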
Hardened latches may be used.
A watchdog timer will perform a hard reset of a system unless some sequence is performed that generally indicates the system is alive, such as a write operation from an onboard processor. During normal operation, software schedules a write to the watchdog timer at regular intervals to prevent the timer from running out. If radiation causes the processor to operate incorrectly, it is unlikely the software will work correctly enough to clear the watchdog timer. The watchdog eventually times out and forces a hard reset to the system. This is considered a last resort to other methods of radiation hardening.
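The watchdog pattern can be sketched in a few lines. The simulation below is purely illustrative (the timeout value, tick loop, and "reset" are stand-ins for hardware): the main loop must keep "kicking" the timer, and if it stops doing so for any reason, the countdown expires and a reset is forced.

```python
import time

class Watchdog:
    """Toy software model of a hardware watchdog timer."""
    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.kick()

    def kick(self):
        """Called periodically by healthy application code."""
        self.last_kick = time.monotonic()

    def expired(self) -> bool:
        return time.monotonic() - self.last_kick > self.timeout_s

def main_loop(watchdog: Watchdog, iterations: int, hang_at: int):
    for i in range(iterations):
        # ... real work would happen here ...
        if i == hang_at:
            time.sleep(0.5)          # simulate a radiation-induced hang
        else:
            watchdog.kick()          # normal case: prove the system is alive
        if watchdog.expired():
            print(f"watchdog expired at iteration {i}: forcing hard reset")
            return
    print("completed without reset")

if __name__ == "__main__":
    main_loop(Watchdog(timeout_s=0.1), iterations=10, hang_at=5)
```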
Military and space industry applications
Radiation-hardened and radiation tolerant components are often used in military and aerospace applications, including point-of-load (POL) applications, satellite system power supplies, step down switching regulators, microprocessors, FPGAs, FPGA power sources, and high efficiency, low voltage subsystem power supplies.
However, not all military-grade components are radiation hardened. For example, the US MIL-STD-883 features many radiation-related tests, but has no specification for single event latchup frequency. The Fobos-Grunt space probe may have failed due to a similar assumption.
The market size for radiation hardened electronics used in space applications was estimated to be $2.35 billion in 2021. A new study has estimated that this will reach approximately $4.76 billion by the year 2032.
Nuclear hardness for telecommunication
In telecommunication, the term nuclear hardness has the following meanings:
1) an expression of the extent to which the performance of a system, facility, or device is expected to degrade in a given nuclear environment, 2) the physical attributes of a system or electronic component that will allow survival in an environment that includes nuclear radiation and electromagnetic pulses (EMP).
Notes
Nuclear hardness may be expressed in terms of either susceptibility or vulnerability.
The extent of expected performance degradation (e.g., outage time, data lost, and equipment damage) must be defined or specified. The environment (e.g., radiation levels, overpressure, peak velocities, energy absorbed, and electrical stress) must be defined or specified.
The physical attributes of a system or component that will allow a defined degree of survivability in a given environment created by a nuclear weapon.
Nuclear hardness is determined for specified or actual quantified environmental conditions and physical parameters, such as peak radiation levels, overpressure, velocities, energy absorbed, and electrical stress. It is achieved through design specifications and it is verified by test and analysis techniques.
Examples of rad-hard computers
The System/4 Pi, made by IBM and used on board the Space Shuttle (AP-101 variant), is based on the System/360 architecture.
The RCA1802 8-bit CPU, introduced in 1976, was the first serially-produced radiation-hardened microprocessor.
PIC based:
The 1886VE, a Russian 50 MHz microcontroller designed by Milandr and manufactured by Sitronics-Mikron on 180 nm bulk-silicon technology.
m68k based:
The Coldfire M5208 used by General Dynamics is a low power (1.5 W) radiation hardened alternative.
MIL-STD-1750A based:
The RH1750 manufactured by GEC-Plessey.
The Proton 100k SBC by Space Micro Inc., introduced in 2003, uses an updated voting scheme called TTMR which mitigates single event upset (SEU) in a single processor. The processor is Equator BSP-15.
The Proton200k SBC by Space Micro Inc, introduced in 2004, mitigates SEU with its patented time triple modular redundancy (TTMR) technology, and single event function interrupts (SEFI) with H-Core technology. The processor is the high speed Texas Instruments 320C6Xx series digital signal processor. The Proton200k operates at 4000 MIPS while mitigating SEU.
MIPS based:
The RH32 is produced by Honeywell Aerospace.
The Mongoose-V used by NASA is a 32-bit microprocessor for spacecraft onboard computer applications (e.g., New Horizons).
The KOMDIV-32 is a 32-bit microprocessor, compatible with MIPS R3000, developed by NIISI, manufactured by Kurchatov Institute, Russia.
PowerPC / POWER based:
The RAD6000 single-board computer (SBC), produced by BAE Systems, includes a rad-hard POWER1 CPU.
The RHPPC, produced by Honeywell Aerospace, is based on a hardened PowerPC 603e.
The SP0 and SP0-S, produced by Aitech Defense Systems, are 3U cPCI SBCs which utilize the SOI PowerQUICC-III MPC8548E, PowerPC e500 based, capable of processing speeds ranging from 833 MHz to 1.18 GHz.
The RAD750 SBC, also produced by BAE Systems, and based on the PowerPC 750 processor, is the successor to the RAD6000.
The SCS750, built by Maxwell Technologies, votes three PowerPC 750 cores against each other to mitigate radiation effects. Seven of these computers are used by the Gaia spacecraft.
The Boeing Company, through its Satellite Development Center, produces a radiation hardened space computer variant based on the PowerPC 750.
The BRE440 by Moog Inc., an IBM PPC440 core based system-on-a-chip: 266 MIPS, PCI, 2x Ethernet, 2x UARTs, DMA controller, L1/L2 cache.
The RAD5500 processor is the successor to the RAD750, based on the PowerPC e5500.
SPARC based:
The ERC32 and LEON 2, 3, 4 and 5 are radiation hardened processors designed by Gaisler Research and the European Space Agency. They are described in synthesizable VHDL available under the GNU Lesser General Public License and GNU General Public License respectively.
The Gen 6 single-board computer (SBC), produced by Cobham Semiconductor Solutions (formerly Aeroflex Microelectronics Solutions), enabled for the LEON microprocessor.
ARM based:
The Vorago VA10820, a 32-bit ARMv6-M Cortex-M0.
NASA and the United States Air Force are developing HPSC, a Cortex-A53 based processor for future spacecraft use
ESA DAHLIA, a Cortex-R52 based processor
RISC-V based:
Cobham Gaisler NOEL-V 64-bit.
NASA Jet Propulsion Laboratory has selected Microchip Technology to develop a new HPSC processor, based on SiFive Intelligence X280
See also
Communications survivability
EMC-aware programming
Institute for Space and Defense Electronics, Vanderbilt University
Mars Reconnaissance Orbiter
MESSENGER Mercury probe
Mars rovers
Tempest (codename)
Juno Radiation Vault
References
Books and Reports
External links
Federal Standard 1037C
(I)ntegrated Approach with COTS Creates Rad-Tolerant (SBC) for Space – By Chad Thibodeau, Maxwell Technologies; COTS Journal, Dec 2003
Sandia Labs to develop (...) radiation-hardened Pentium (...) for space and defense needs – Sandia press release, 8 Dec 1998 (also includes a general "backgrounder" section on Sandia's manufacturing processes for radiation-hardening of microelectronics)
Radiation effects on quartz crystals
Vanderbilt University Institute for Space and Defense Electronics
Military communications
Integrated circuits
Avionics computers
Electronics manufacturing
Spaceflight
Radiation effects
Semiconductor device defects | Radiation hardening | [
"Physics",
"Materials_science",
"Astronomy",
"Technology",
"Engineering"
] | 5,408 | [
"Physical phenomena",
"Telecommunications engineering",
"Outer space",
"Computer engineering",
"Technological failures",
"Semiconductor device defects",
"Materials science",
"Military communications",
"Electronic engineering",
"Radiation",
"Condensed matter physics",
"Radiation effects",
"El... |
1,041,645 | https://en.wikipedia.org/wiki/Pentalene | Pentalene is a polycyclic hydrocarbon composed of two fused cyclopentadiene rings. It has the chemical formula C8H6. It is antiaromatic, because it has 4n π electrons (eight, with n = 2). For this reason it dimerizes even at temperatures as low as −100 °C. The derivative 1,3,5-tri-tert-butylpentalene was synthesized in 1973. Because of the tert-butyl substituents this compound is thermally stable. Pentalenes can also be stabilized by benzannulation, for example in the compounds benzopentalene and dibenzopentalene.
Dilithium pentalenide was isolated in 1962, long before pentalene itself in 1997. It is prepared from the reaction of dihydropentalene (obtained by pyrolysis of an isomer of dicyclopentadiene) with n-butyllithium in solution and forms a stable salt. In accordance with its structure, proton NMR shows two signals in a 2 to 1 ratio. The addition of two electrons removes the antiaromaticity; the dianion becomes a planar 10π-electron aromatic species and is thus a bicyclic analogue of the cyclooctatetraene (COT) dianion.
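The aromaticity argument is just electron counting against Hückel's rules: neutral pentalene has 8 π electrons (4n, antiaromatic), while adding two electrons gives the 10 π-electron dianion (4n + 2, aromatic). A trivial check, taking the electron counts stated in the text as given:

```python
def huckel_aromatic(pi_electrons: int) -> bool:
    """Hückel 4n+2 rule for a planar, cyclic, conjugated pi system."""
    return pi_electrons % 4 == 2

def huckel_antiaromatic(pi_electrons: int) -> bool:
    """4n pi electrons in a planar, cyclic, conjugated system."""
    return pi_electrons > 0 and pi_electrons % 4 == 0

print(huckel_antiaromatic(8))   # pentalene: True (antiaromatic)
print(huckel_aromatic(10))      # pentalenide dianion: True (aromatic)
```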
The dianion can also be considered as two fused cyclopentadienyl rings, and has been used as a ligand in organometallic chemistry to stabilise many types of mono- and bimetallic complexes, including those containing multiple metal-metal bonds, and anti-bimetallics with extremely high levels of electronic communication between the centers.
See also
Cyclooctatetraene
Benzocyclobutadiene
Acepentalene
Butalene
Heptalene
Octalene
References
Antiaromatic compounds
Hydrocarbons
Bicyclic compounds | Pentalene | [
"Chemistry"
] | 386 | [
"Organic compounds",
"Hydrocarbons"
] |
1,041,739 | https://en.wikipedia.org/wiki/Lebedev%20Institute%20of%20Precision%20Mechanics%20and%20Computer%20Engineering | Lebedev Institute of Precision Mechanics and Computer Engineering (IPMCE) is a Russian research institution, formerly an organization of the Soviet Academy of Sciences. The institute specializes in the development of:
Computer systems for national security
Hardware and software for digital telecommunication
Multimedia systems for control and training
Positioning and navigational systems
In August 2009 IPMCE became a joint-stock company.
Computers developed by IPMCE
BESM-1
BESM-2
BESM-4
BESM-6
Elbrus-1
Elbrus-2
Elbrus-3
Software developed by IPMCE
Эль-76 (El-76)
External links
IPMCE
References
Computing in the Soviet Union
Institutes of the Russian Academy of Sciences
Research institutes in the Soviet Union
Computer science institutes
Cultural heritage monuments in Moscow | Lebedev Institute of Precision Mechanics and Computer Engineering | [
"Technology"
] | 165 | [
"Computing in the Soviet Union",
"History of computing"
] |
1,041,756 | https://en.wikipedia.org/wiki/For%20all%20practical%20purposes | For all practical purposes (sometimes abbreviated FAPP) is a slogan used in physics to express a pragmatic attitude. A physical theory might be ambiguous in some ways — for example, being founded on untested assumptions or making unclear predictions about what might happen in certain situations — and yet still be successful in practice. Such a theory is said to be successful FAPP.
FAPP has also gained currency in mathematics education, notably as the title of the textbook "For All Practical Purposes: Mathematical Literacy in Today's World".
A joke about FAPP illustrates the idea:
An elementary physics professor was teaching about how close one could get to the sun. He laid out the limits imposed by heat and distance and said that this was as close as one could get FAPP. A student asked what that meant.
The professor replied: "All the girls in the room, line up on the right side; all the boys, line up on the left side. Now halve the distance between the two sides. Now do it again." After about five rounds of this, with their noses nearly touching, he said: "You are all close enough for all practical purposes."
See also
Hand waving
Philosophy of science
Metaphysics
Limit (mathematics)
Phenomenalism
Empiricism
References
Rhetoric
Philosophy of physics | For all practical purposes | [
"Physics"
] | 267 | [
"Philosophy of physics",
"Applied and interdisciplinary physics"
] |
1,041,812 | https://en.wikipedia.org/wiki/Superpotential | In theoretical physics, the superpotential is a function in supersymmetric quantum mechanics. Given a superpotential, two "partner potentials" are derived that can each serve as a potential in the Schrödinger equation. The partner potentials have the same spectrum, apart from a possible eigenvalue of zero, meaning that the physical systems represented by the two potentials have the same characteristic energies, apart from a possible zero-energy ground state.
One-dimensional example
Consider a one-dimensional, non-relativistic particle with a two-state internal degree of freedom called "spin". (This is not quite the usual notion of spin encountered in nonrelativistic quantum mechanics, because "real" spin applies only to particles in three-dimensional space.) Let b and its Hermitian adjoint b† signify operators which transform a "spin up" particle into a "spin down" particle and vice versa, respectively. Furthermore, take b and b† to be normalized such that the anticommutator {b,b†} equals 1, and take that b² equals 0. Let p represent the momentum of the particle and x represent its position with [x,p]=i, where we use natural units so that ℏ = 1. Let W (the superpotential) represent an arbitrary differentiable function of x and define the supersymmetric operators Q1 and Q2 as combinations of p, W(x), b, and b† (one standard construction is sketched below).
The operators Q1 and Q2 are self-adjoint. Let the Hamiltonian be the anticommutator H = {Q1,Q1} = {Q2,Q2},
which, written out in terms of the superpotential, involves both W and its derivative W′ (see the sketch below). Also note that {Q1,Q2}=0. Under these circumstances, the above system is a toy model of N=2 supersymmetry. The spin down and spin up states are often referred to as the "bosonic" and "fermionic" states, respectively, in an analogy to quantum field theory. With these definitions, Q1 and Q2 map "bosonic" states into "fermionic" states and vice versa. Restricting to the bosonic or fermionic sectors gives two partner potentials, V+ and V−, determined by W and W′ as sketched below.
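One standard construction consistent with the definitions above (natural units with ℏ = 1, {b,b†} = 1, b² = 0) is the following; the particular factor and sign conventions here are one common choice rather than the only possible one:

```latex
Q_1 = \tfrac{1}{2}\bigl[(p - iW)\,b + (p + iW)\,b^\dagger\bigr], \qquad
Q_2 = \tfrac{i}{2}\bigl[(p - iW)\,b - (p + iW)\,b^\dagger\bigr]

H = \{Q_1, Q_1\} = \{Q_2, Q_2\}
  = \frac{p^2}{2} + \frac{W^2}{2} + \frac{W'}{2}\,\bigl(b\,b^\dagger - b^\dagger b\bigr)

V_{\pm}(x) = \frac{W(x)^2 \pm W'(x)}{2}
```

On the two spin sectors the operator bb† − b†b takes the values ±1, which is how the two partner potentials V± arise.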
In four spacetime dimensions
In supersymmetric quantum field theories with four spacetime dimensions, which might have some connection to nature, it turns out that scalar fields arise as the lowest component of a chiral superfield, which tends to automatically be complex valued. We may identify the complex conjugate of a chiral superfield as an anti-chiral superfield. There are two possible ways to obtain an action from a set of superfields:
Integrate a superfield on the whole superspace, spanned by x, θ and θ̄,
or
Integrate a chiral superfield on the chiral half of a superspace, spanned by x and θ, not on θ̄.
The second option tells us that an arbitrary holomorphic function of a set of chiral superfields can show up as a term in a Lagrangian which is invariant under supersymmetry. In this context, holomorphic means that the function can only depend on the chiral superfields, not their complex conjugates. We may call such a function W, the superpotential. The fact that W is holomorphic in the chiral superfields helps explain why supersymmetric theories are relatively tractable, as it allows one to use powerful mathematical tools from complex analysis. Indeed, it is known that W receives no perturbative corrections, a result referred to as the perturbative non-renormalization theorem. Note that non-perturbative processes may correct this, for example through contributions to the beta functions due to instantons.
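As a standard textbook illustration (not specific to any particular model discussed here), a single chiral superfield Φ with a cubic, Wess-Zumino-type superpotential gives rise, through its F-term, to a scalar potential equal to the squared modulus of the derivative of W with respect to the scalar component φ:

```latex
W(\Phi) = \frac{m}{2}\,\Phi^2 + \frac{g}{3}\,\Phi^3,
\qquad
V(\phi, \bar{\phi}) = \left|\frac{\partial W}{\partial \phi}\right|^2
                    = \bigl|\,m\,\phi + g\,\phi^2\,\bigr|^2
```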
See also
Komar superpotential
References
Stephen P. Martin, A Supersymmetry Primer.
B. Mielnik and O. Rosas-Ortiz, "Factorization: Little or great algorithm?", J. Phys. A: Math. Gen. 37: 10007-10035, 2004
Supersymmetry
Supersymmetric quantum field theory
Potentials | Superpotential | [
"Physics"
] | 829 | [
"Supersymmetric quantum field theory",
"Unsolved problems in physics",
"Physics beyond the Standard Model",
"Supersymmetry",
"Symmetry"
] |
1,041,955 | https://en.wikipedia.org/wiki/Project%2025 | Project 25 (P25 or APCO-25) is a suite of standards for interoperable digital two-way radio products. P25 was developed by public safety professionals in North America and has gained acceptance for public safety, security, public service, and commercial applications worldwide. P25 radios are a direct replacement for analog UHF (typically FM) radios, adding the ability to transfer data as well as voice for more natural implementations of encryption and text messaging. P25 radios are commonly implemented by dispatch organizations, such as police, fire, ambulance and emergency rescue service, using vehicle-mounted radios combined with repeaters and handheld walkie-talkie use.
Starting around 2012, products became available with the newer Phase 2 modulation protocol; the older protocol, previously known simply as P25, became P25 Phase 1. P25 Phase 2 products use the more advanced AMBE+2 vocoder, which allows audio to pass through a more compressed bitstream and provides two TDMA voice channels in the same RF bandwidth (12.5 kHz), while Phase 1 can provide only one voice channel. The two protocols are not compatible. However, P25 Phase 2 infrastructure can provide a "dynamic transcoder" feature that translates between Phase 1 and Phase 2 as needed. In addition to this, Phase 2 radios are backwards compatible with Phase 1 modulation and analog FM modulation, per the standard. The European Telecommunications Standards Institute (ETSI) created the Terrestrial Trunked Radio (TETRA) and Digital mobile radio (DMR) protocol standards, which fill a similar role to Project 25.
Suite of standards overview
History
Public safety radios have been upgraded from analog FM to digital since the 1990s because of an increased use of data on radio systems for such features as GPS location, trunking, text messaging, metering, and encryption.
Various user protocols and different public safety radio spectrum made it difficult for Public Safety agencies to achieve interoperability and widespread acceptance. However, lessons learned during disasters the United States faced in the past decades have forced agencies to assess their requirements during a disaster when basic infrastructure has failed. To meet the growing demands of public safety digital radio communication, the United States Federal Communications Commission (FCC) at the direction of the United States Congress initiated a 1988 inquiry for recommendations from users and manufacturers to improve existing communication systems. Based on the recommendations, to find solutions that best serve the needs of public safety management, in October 1989 APCO Project 25 came into existence in a coalition with:
Association of Public-Safety Communications Officials-International (APCO)
National Association of State Telecommunications Directors (NASTD)
National Telecommunications and Information Administration (NTIA)
National Communications System (NCS)
National Security Agency (NSA)
Department of Defense (DoD)
A steering committee consisting of representatives from the above-mentioned agencies along with FPIC (Department of Homeland Security Federal Partnership for Interoperable Communication), Coast Guard and the Department of Commerce's National Institute of Standards and Technology (NIST), Office of Law Enforcement Standards was established to decide the priorities and scope of technical development of P25.
Introduction
Interoperable emergency communication is integral to initial response, public health, community safety, national security and economic stability. Of all the problems experienced during disaster events, one of the most serious is poor communication due to lack of appropriate and efficient means to collect, process, and transmit important information in a timely fashion. In some cases, radio communication systems are incompatible and inoperable not just within a jurisdiction but within departments or agencies in the same community. Non-operability occurs due to use of outdated equipment, limited availability of radio frequencies, isolated or independent planning, lack of coordination and cooperation between agencies, community priorities competing for resources, and funding, ownership, and control of communications systems. Recognizing and understanding this need, Project 25 (P25) was initiated collaboratively by public safety agencies and manufacturers to address the issue with emergency communication systems. P25 is a collaborative project to ensure that two-way radios are interoperable. The goal of P25 is to enable public safety responders to communicate with each other and, thus, achieve enhanced coordination, timely response, and efficient and effective use of communications equipment.
P25 was established to address the need for common digital public safety radio communications standards for first-responders and homeland security/emergency response professionals. The Telecommunications Industry Association's TR-8 engineering committee facilitates such work through its role as an ANSI-accredited standards development organization (SDO) and has published the P25 suite of standards as the TIA-102 series of documents, which now include 49 separate parts on Land Mobile Radio and TDMA implementations of the technology for public safety.
P25-compliant systems are being increasingly adopted and deployed throughout the United States, as well as other countries. Radios can communicate in analog mode with legacy radios, and in either digital or analog mode with other P25 radios. Additionally, the deployment of P25-compliant systems will allow for a high degree of equipment interoperability and compatibility.
P25 standards use the proprietary Improved Multi-Band Excitation (IMBE) and Advanced Multi-Band Excitation (AMBE+2) voice codecs which were designed by Digital Voice Systems, Inc. to encode/decode the analog audio signals. It is rumored that the licensing cost for the voice-codecs that are used in P25 standard devices is the main reason that the cost of P25 compatible devices is so high.
P25 may be used in "talk around" mode without any intervening equipment between two radios, in conventional mode, where two radios communicate through a repeater or base station without trunking, or in a trunked mode, where traffic is automatically assigned to one or more voice channels by a repeater or base station.
The protocol supports the use of Data Encryption Standard (DES) encryption (56 bit), 2-key Triple-DES encryption, three-key Triple-DES encryption, Advanced Encryption Standard (AES) encryption at up to 256 bits keylength, RC4 (40 bits, sold by Motorola as Advanced Digital Privacy), or no encryption.
The protocol also supports the ACCORDION 1.3, BATON, Firefly, MAYFLY and SAVILLE Type 1 ciphers.
P25 open interfaces
P25's Suite of Standards specify eight open interfaces between the various components of a land mobile radio system. These interfaces are:
Common Air Interface (CAI) – standard specifies the type and content of signals transmitted by compliant radios. One radio using CAI should be able to communicate with any other CAI radio, regardless of manufacturer
Subscriber Data Peripheral Interface – standard specifies the port through which mobiles and portables can connect to laptops or data networks
Fixed Station Interface – standard specifies a set of mandatory messages supporting digital voice, data, encryption and telephone interconnect necessary for communication between a Fixed Station and P25 RF Subsystem
Console Subsystem Interface – standard specifies the basic messaging to interface a console subsystem to a P25 RF Subsystem
Network Management Interface – standard specifies a single network management scheme which will allow all network elements of the RF subsystem to be managed
Data Network Interface – standard specifies the RF Subsystem's connections to computers, data networks, or external data sources
Telephone Interconnect Interface – standard specifies the interface to Public Switched Telephone Network (PSTN) supporting both analog and ISDN telephone interfaces.
Inter RF Subsystem Interface (ISSI) – standard specifies the interface between RF subsystems which will allow them to be connected into wide area networks
P25 phases
P25-compliant technology has been deployed over two main phases with future phases yet to be finalized.
Phase 1
Phase 1 radio systems operate in 12.5 kHz digital mode using a single user per channel access method. Phase 1 radios use Continuous 4 level FM (C4FM) modulation—a special type of 4FSK modulation—for digital transmissions at 4,800 baud and 2 bits per symbol, yielding 9,600 bits per second total channel throughput. Of this 9,600, 4,400 is voice data generated by the IMBE codec, 2,800 is forward error correction, and 2,400 is signalling and other control functions. Receivers designed for the C4FM standard can also demodulate the "Compatible quadrature phase shift keying" (CQPSK) standard, as the parameters of the CQPSK signal were chosen to yield the same signal deviation at symbol time as C4FM. Phase 1 uses the IMBE voice codec.
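The Phase 1 bit budget quoted above can be checked with simple arithmetic; the snippet below merely restates the figures from the text (4,800 symbols per second, 2 bits per symbol, and the 4,400/2,800/2,400 bit/s split between voice, error correction, and signalling).

```python
# Sanity check of the P25 Phase 1 channel bit budget quoted in the text.
SYMBOL_RATE = 4800          # C4FM symbols per second
BITS_PER_SYMBOL = 2         # 4-level FSK carries 2 bits per symbol

total_bps = SYMBOL_RATE * BITS_PER_SYMBOL
voice_bps, fec_bps, signalling_bps = 4400, 2800, 2400

assert total_bps == 9600
assert voice_bps + fec_bps + signalling_bps == total_bps
print(f"gross rate: {total_bps} bit/s "
      f"({voice_bps} voice + {fec_bps} FEC + {signalling_bps} signalling)")
```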
These systems involve standardized service and facility specifications, ensuring that any manufacturers' compliant subscriber radio has access to the services described in such specifications. Abilities include backward compatibility and interoperability with other systems, across system boundaries, and regardless of system infrastructure. In addition, the P25 suite of standards provides an open interface to the radio frequency (RF) subsystem to facilitate interlinking of different vendors' systems.
Phase 2
To improve spectrum use, P25 Phase 2 was developed for trunking systems using a 2-slot TDMA scheme and is now required for all new trunking systems in the 700 MHz band. Phase 2 uses the AMBE+2 voice codec to reduce the needed bitrate so that one voice channel will only require 6,000 bits per second (including error correction and signalling). Phase 2 is not backwards compatible with Phase 1 (due to the TDMA operation), although multi-mode TDMA radios and systems are capable of operating in Phase 1 mode when required, if enabled. A subscriber radio cannot use TDMA transmission without a synchronization source; therefore direct radio to radio communication resorts to conventional FDMA digital operation. Multi-band subscriber radios can also operate on narrow-band FM as a lowest common denominator between almost any two way radios. This makes analog narrow-band FM the de facto "interoperability" mode for some time.
Originally the implementation of Phase 2 was planned to split the 12.5 kHz channel into two 6.25 kHz slots, or Frequency-Division Multiple Access (FDMA). However it proved more advantageous to use existing 12.5 kHz frequency allocations in Time Division Multiple Access (TDMA) mode for a number of reasons. It allowed subscriber radios to save battery life by only transmitting half the time which also yields the ability for the subscriber radio to listen and respond to system requests between transmissions.
Phase 2 is what is known as 6.25 kHz "bandwidth equivalent" which satisfies an FCC requirement for voice transmissions to occupy less bandwidth. Voice traffic on a Phase 2 system transmits with the full 12.5 kHz per frequency allocation, as a Phase 1 system does, however it does so at a faster data rate of 12 kbit/s allowing two simultaneous voice transmissions. As such subscriber radios also transmit with the full 12.5 kHz, but in an on/off repeating fashion resulting in half the transmission and thus an equivalent of 6.25 kHz per each radio. This is accomplished using the AMBE voice coder that uses half the rate of the Phase 1 IMBE voice coders.
Beyond Phase 2
From 2000 to 2009, the European Telecommunications Standards Institute (ETSI) and TIA were working collaboratively on the Public Safety Partnership Project or Project MESA (Mobility for Emergency and Safety Applications), which sought to define a unified set of requirements for a next-generation aeronautical and terrestrial digital wideband/broadband radio standard that could be used to transmit and receive voice, video, and high-speed data in wide-area, multiple-agency networks deployed by public safety agencies.
The final functional and technical requirements have been released by ETSI and were expected to shape the next phases of American Project 25 and European DMR, dPMR, and TETRA, but no interest from the industry followed, since the requirements could not be met by available commercial off-the-shelf technology, and the project was closed in 2010.
During the United States 2008 wireless spectrum auction, the FCC allocated 20 MHz of the 700 MHz UHF radio band spectrum freed in the digital TV transition to public safety networks. The FCC expects providers to employ LTE for high-speed data and video applications.
Conventional implementation
P25 systems do not have to resort to using in band signaling such as Continuous Tone-Coded Squelch System (CTCSS) tone or Digital-Coded Squelch (DCS) codes for access control. Instead they use what is called a Network Access Code (NAC) which is included outside of the digital voice frame. This is a 12-bit code that prefixes every packet of data sent, including those carrying voice transmissions.
The NAC is a feature similar to CTCSS or DCS for analog radios. That is, radios can be programmed to only pass audio when receiving the correct NAC. NACs are programmed as a three-hexadecimal-digit code that is transmitted along with the digital signal being transmitted.
Since the NAC is a three-hexadecimal-digit number (12 bits), there are 4,096 possible NACs for programming, far more than all analog methods combined.
Three of the possible NACs have special functions:
0x293 ($293) – the default NAC
0xf7e ($F7E) – a receiver set for this NAC will pass audio on any decoded signal received
0xf7f ($F7F) – a repeater receiver set for this NAC will allow all incoming decoded signals and the repeater transmitter will retransmit the received NAC.
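A receiver's or repeater's NAC gating amounts to comparing a 12-bit field against a programmed value, with the two monitor/repeat codes above treated specially. The sketch below is a hypothetical illustration of that logic; the function names and structure are invented, and real radios implement this in firmware or hardware.

```python
# Illustrative 12-bit NAC gating logic for a P25 receiver or repeater.
NAC_DEFAULT = 0x293   # default NAC
NAC_RX_ANY = 0xF7E    # receiver passes audio regardless of received NAC
NAC_RPT_ANY = 0xF7F   # repeater accepts any NAC and retransmits it unchanged

def receiver_passes_audio(programmed_nac: int, received_nac: int) -> bool:
    """True if a receiver programmed with programmed_nac should unmute."""
    received_nac &= 0xFFF                      # NAC is a 12-bit field
    if programmed_nac == NAC_RX_ANY:
        return True                            # monitor mode: pass everything
    return received_nac == programmed_nac

def repeater_tx_nac(programmed_nac: int, received_nac: int):
    """NAC the repeater should transmit with, or None if traffic is rejected."""
    received_nac &= 0xFFF
    if programmed_nac == NAC_RPT_ANY:
        return received_nac                    # repeat with the incoming NAC
    return programmed_nac if received_nac == programmed_nac else None

if __name__ == "__main__":
    print(receiver_passes_audio(NAC_DEFAULT, 0x293))      # True
    print(receiver_passes_audio(NAC_DEFAULT, 0x2F0))      # False
    print(receiver_passes_audio(NAC_RX_ANY, 0x2F0))       # True
    print(hex(repeater_tx_nac(NAC_RPT_ANY, 0x45A) or 0))  # 0x45a
```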
Adoption
Adoption of these standards has been slowed by budget problems in the US; however, funding for communications upgrades from the Department of Homeland Security usually requires migrating to Project 25. It is also being used in other countries worldwide including Australia, New Zealand, Brazil, Canada, India and Russia. As of mid-2004 there were 660 networks with P25 deployed in 54 countries. At the same time, in 2005, the European Terrestrial Trunked Radio (TETRA) was deployed in sixty countries, and it is the preferred choice in Europe, China, and other countries. This was largely based on TETRA systems being many times cheaper than P25 systems ($900 vs $6,000 for a radio) at the time. However P25 radio prices are rapidly approaching parity with TETRA radio prices through increased competition in the P25 market. The majority of P25 networks are based in Northern America where it has the advantage that a P25 system has the same coverage and frequency bandwidth as the earlier analog systems that were in use so that channels can be easily upgraded one by one. Some P25 networks also allow intelligent migration from the analog radios to digital radios operating within the same network. Both P25 and TETRA can offer varying degrees of functionality, depending on available radio spectrum, terrain and project budget.
While interoperability is a major goal of P25, many P25 features present interoperability challenges. In theory, all P25 compliant equipment is interoperable. In practice, interoperable communications isn't achievable without effective governance, standardized operating procedures, effective training and exercises, and inter-jurisdictional coordination. The difficulties inherent in developing P25 networks using features such as digital voice, encryption, or trunking sometimes result in feature-backlash and organizational retreat to minimal "feature-free" P25 implementations which fulfill the letter of any Project 25 migration requirement without realizing the benefits thereof. Additionally, while not a technical issue per se, frictions often result from the unwieldy bureaucratic inter-agency processes that tend to develop in order to coordinate interoperability decisions.
Naming of P25 technology in regions
Statewide P25 systems in Australia were deployed using the name Government Radio Network (GRN) in New South Wales, South Australia, and Tasmania; Government Wireless Network (GWN) in Queensland; Territory Radio Network (TRN) in the Australian Capital Territory; and Melbourne Metropolitan Radio (MMR) and Rural Mobile Radio (RMR) in Victoria. In New South Wales, the GRN is now called the Public Safety Network (PSN).
Project 25 Compliance Assessment Program (P25 CAP)
The United States DHS's Project 25 Compliance Assessment Program (P25 CAP) aims for interoperability among different vendors by testing to P25 Standards. P25 CAP, a voluntary program, allows suppliers to publicly attest to their products' compliance.
Independent, accredited labs test vendors' P25 radios for compliance with P25 Standards, derived from TIA-102 Standards and following TIA-TR8 testing procedures. Only approved products may be purchased using US federal grant dollars. Generally, non-approved products should not be trusted to meet P25 standards for performance, conformance, and interoperability.
P25 product labeling varies: the labels "P25" and "P25 compliant" by themselves carry no guarantee, whereas high standards apply before a vendor may claim a product is "P25 CAP compliant" or "P25 compliant with the Statement of Requirements (P25 SOR)".
Security flaws
OP25 Project—Encryption flaws in DES-OFB and ADP ciphers
At the Securecomm 2011 conference in London, security researcher Steve Glass presented a paper, written by himself and co-author Matt Ames, that explained how DES-OFB and Motorola's proprietary ADP (RC4 based) ciphers were vulnerable to brute force key recovery. This research was the result of the OP25 project which uses GNU Radio and the Ettus Universal Software Radio Peripheral (USRP) to implement an open source P25 packet sniffer and analyzer. The OP25 project was founded by Steve Glass in early 2008 while he was performing research into wireless networks as part of his PhD thesis.
The paper is available for download from the NICTA website.
University of Pennsylvania research
In 2011, the Wall Street Journal published an article describing research into security flaws of the system, including a user interface that makes it difficult for users to recognize when transceivers are operating in secure mode. According to the article, "(R)esearchers from the University of Pennsylvania overheard conversations that included descriptions of undercover agents and confidential informants, plans for forthcoming arrests and information on the technology used in surveillance operations." The researchers found that the messages sent over the radios are sent in segments, and blocking just a portion of these segments can result in the entire message being jammed. "Their research also shows that the radios can be effectively jammed (single radio, short range) using a highly modified pink electronic child's toy and that the standard used by the radios 'provides a convenient means for an attacker' to continuously track the location of a radio's user. With other systems, jammers have to expend a lot of power to block communications, but the P25 radios allow jamming at relatively low power, enabling the researchers to prevent reception using a $30 toy pager designed for pre-teens."
The report was presented at the 20th USENIX Security Symposium in San Francisco in August 2011. The report noted a number of security flaws in the Project 25 system, some specific to the way it has been implemented and some inherent in the security design.
Encryption lapses
The report did not find any breaks in the P25 encryption; however, the researchers observed large amounts of sensitive traffic being sent in the clear due to implementation problems. They found switch markings for secure and clear modes difficult to distinguish (∅ vs. o). This is exacerbated by the fact that P25 radios set to secure mode continue to operate without issuing a warning if another party switches to clear mode. In addition, the report authors said many P25 systems change keys too often, increasing the risk that an individual radio on a net may not be properly keyed, forcing all users on the net to transmit in the clear to maintain communications with that radio.
Jamming vulnerability
One design choice was to use lower levels of error correction for portions of the encoded voice data that are deemed less critical for intelligibility. As a result, bit errors may be expected in typical transmissions, and while harmless for voice communication, the presence of such errors force the use of stream ciphers, which can tolerate bit errors, and prevents the use of a standard technique, message authentication codes (MACs), to protect message integrity from stream cipher attacks. The varying levels of error correction are implemented by breaking P25 message frames into subframes. This allows an attacker to jam entire messages by transmitting only during certain short subframes that are critical to reception of the entire frame. As a result, an attacker can effectively jam Project 25 signals with average power levels much lower than the power levels used for communication. Such attacks can be targeted at encrypted transmissions only, forcing users to transmit in the clear.
Because Project 25 radios are designed to work in existing two-way radio frequency channels, they cannot use spread spectrum modulation, which is inherently jam-resistant. An optimal spread spectrum system can require an effective jammer to use 1,000 times as much power (30 dB more) as the individual communicators. According to the report, a P25 jammer could effectively operate at 1/25th the power (14 dB less) than the communicating radios. The authors developed a proof-of-concept jammer using a Texas Instruments CC1110 single chip radio, found in an inexpensive toy.
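The power ratios quoted above are straightforward decibel conversions, as the short calculation below shows; the 1,000× and 1/25 figures are taken directly from the text, and the formula is the standard dB definition for power ratios.

```python
import math

def db(power_ratio: float) -> float:
    """Decibel value of a power ratio: 10 * log10(ratio)."""
    return 10 * math.log10(power_ratio)

print(f"spread-spectrum jammer needs ~{db(1000):.0f} dB more power")   # ~30 dB
print(f"P25 jammer can use ~{db(25):.1f} dB less power (1/25th)")      # ~14 dB
```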
Traffic analysis and active tracking
Certain metadata fields in the Project 25 protocol are not encrypted, allowing an attacker to perform traffic analysis to identify users. Because Project 25 radios respond to bad data packets addressed to them with a retransmission request, an attacker can deliberately send bad packets forcing a specific radio to transmit even if the user is attempting to maintain radio silence. Such tracking by authorized users is considered a feature of P25, referred to as "presence".
The report's authors concluded by saying "It is reasonable to wonder why this protocol, which was developed over many years and is used for sensitive and critical applications, is so difficult to use and so vulnerable to attack." The authors separately issued a set of recommendations for P25 users to mitigate some of the problems found. These include disabling the secure/clear switch, using Network Access Codes to segregate clear and encrypted traffic, and compensating for the unreliability of P25 over-the-air rekeying by extending key life.
Comparison between P25 and TETRA
P25 and TETRA are used in more than 53 countries worldwide for both public safety and private sector radio networks. There are some differences in features and capacities:
TETRA is optimized for high population density areas, and has spectral efficiency of 4 time slots in 25 kHz. (Four communications channels per 25 kHz channel, an efficient use of spectrum). It supports full-duplex voice communication, data, and messaging. It does not provide simulcast.
P25 is optimized for wider area coverage with low population density, and also supports simulcast. It is, however, limited with respect to data support. There is a major subdivision within P25 radio systems: Phase I P25 operates analogue, digital, or mixed mode in a single 12.5 kHz channel. Phase II uses a 2-timeslot TDMA structure in each 12.5 kHz channel.
See also
APCO-16, an earlier standard that specified trunking formats and radio operation
Digital Audio Broadcasting
Digital terrestrial television
Government radio networks in Australia, examples deployment of P25 technology
NXDN, a two-way digital radio standard with similar characteristics (Optional TDMA)
Terrestrial Trunked Radio, TETRA, the European (EU) standard equivalent to P25
Notes
External links
P25 Overview TIA Standards Development Activities for Public Safety
https://web.archive.org/web/20110223005820/http://www.apco911.org/frequency/project25.php APCO International Project 25 page
http://www.apco.ca/ APCO Canada
http://www.dvsinc.com/papers/p25_training_guide.pdf Daniels' P25 Radio System Training Guide
https://valid8.com/solutions/p25-issi-cssi-conformance P25 Compliance Test Tools for ISSI & CSSI
https://web.archive.org/web/20170611161725/http://www.dvsinc.com/prj25.htm DVSI P25 Vocoder Software and Hardware
http://www.p25phase2.com Radio users and experts discuss P25 Phase 2
Trunked radio systems
Telecommunications standards
Computer security exploits | Project 25 | [
"Technology"
] | 5,070 | [
"Computer security exploits"
] |
1,042,053 | https://en.wikipedia.org/wiki/Neutron%20scattering | Neutron scattering, the irregular dispersal of free neutrons by matter, can refer to either the naturally occurring physical process itself or to the man-made experimental techniques that use the natural process for investigating materials. The natural/physical phenomenon is of elemental importance in nuclear engineering and the nuclear sciences. Regarding the experimental technique, understanding and manipulating neutron scattering is fundamental to the applications used in crystallography, physics, physical chemistry, biophysics, and materials research.
Neutron scattering is practiced at research reactors and spallation neutron sources that provide neutron radiation of varying intensities. Neutron diffraction (elastic scattering) techniques are used for analyzing structures; where inelastic neutron scattering is used in studying atomic vibrations and other excitations.
Scattering of fast neutrons
"Fast neutrons" (see neutron temperature) have a kinetic energy above 1 MeV. They can be scattered by condensed matter—nuclei having kinetic energies far below 1 eV—as a valid experimental approximation of an elastic collision with a particle at rest. With each collision, the fast neutron transfers a significant part of its kinetic energy to the scattering nucleus (condensed matter), the more so the lighter the nucleus. And with each collision, the "fast" neutron is slowed until it reaches thermal equilibrium with the material in which it is scattered.
Neutron moderators are used to produce thermal neutrons, which have kinetic energies below 1 eV (T < 500K). Thermal neutrons are used to maintain a nuclear chain reaction in a nuclear reactor, and as a research tool in neutron scattering experiments and other applications of neutron science (see below). The remainder of this article concentrates on the scattering of thermal neutrons.
Neutron-matter interaction
Because neutrons are electrically neutral, they penetrate more deeply into matter than electrically charged particles of comparable kinetic energy, and thus are valuable as probes of bulk properties.
Neutrons interact with atomic nuclei and with magnetic fields from unpaired electrons, causing pronounced interference and energy transfer effects in neutron scattering experiments. Unlike an x-ray photon with a similar wavelength, which interacts with the electron cloud surrounding the nucleus, neutrons interact primarily with the nucleus itself, as described by Fermi's pseudopotential. Neutron scattering and absorption cross sections vary widely from isotope to isotope.
Neutron scattering can be incoherent or coherent, also depending on isotope. Among all isotopes, hydrogen has the highest scattering cross section. Important elements like carbon and oxygen are quite visible in neutron scattering—this is in marked contrast to X-ray scattering where cross sections systematically increase with atomic number. Thus neutrons can be used to analyze materials with low atomic numbers, including proteins and surfactants. This can be done at synchrotron sources but very high intensities are needed, which may cause the structures to change. The nucleus provides a very short range, as isotropic potential varies randomly from isotope to isotope, which makes it possible to tune the (scattering) contrast to suit the experiment.
Scattering almost always presents both elastic and inelastic components. The fraction of elastic scattering is determined by the Debye-Waller factor or the Mössbauer-Lamb factor. Depending on the research question, most measurements concentrate on either elastic or inelastic scattering.
Achieving a precise velocity, i.e. a precise energy and de Broglie wavelength, of a neutron beam is important. Such single-energy beams are termed 'monochromatic', and monochromaticity is achieved either with a crystal monochromator or with a time of flight (TOF) spectrometer. In the time-of-flight technique, neutrons are sent through a sequence of two rotating slits such that only neutrons of a particular velocity are selected. Spallation sources have been developed that can create a rapid pulse of neutrons. The pulse contains neutrons of many different velocities or de Broglie wavelengths, but separate velocities of the scattered neutrons can be determined afterwards by measuring the time of flight of the neutrons between the sample and neutron detector.
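The relationship between a neutron's time of flight, velocity, and de Broglie wavelength follows from λ = h/(m·v). The snippet below illustrates it; only the physical constants and the standard formula are taken as given, while the 10 m flight path and the flight time are hypothetical example values.

```python
# Relate a neutron's time of flight over a known path to its de Broglie wavelength.
H_PLANCK = 6.62607e-34     # Planck constant, J*s
M_NEUTRON = 1.67493e-27    # neutron mass, kg

def wavelength_from_tof(path_m: float, tof_s: float) -> float:
    """de Broglie wavelength (m) for a neutron covering path_m in tof_s."""
    velocity = path_m / tof_s
    return H_PLANCK / (M_NEUTRON * velocity)

if __name__ == "__main__":
    # Hypothetical example: 10 m flight path, 4.55 ms flight time
    # (about 2200 m/s, a typical thermal-neutron speed)
    lam = wavelength_from_tof(10.0, 4.55e-3)
    print(f"wavelength ~ {lam * 1e10:.2f} angstrom")   # roughly 1.8 Å
```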
Magnetic scattering
The neutron has a net electric charge of zero, but has a significant magnetic moment, although only about 0.1% of that of the electron. Nevertheless, it is large enough to scatter from local magnetic fields inside condensed matter, providing a weakly interacting and hence penetrating probe of ordered magnetic structures and electron spin fluctuations.
Inelastic neutron scattering
Inelastic neutron scattering is an experimental technique commonly used in condensed matter research to study atomic and molecular motion as well as magnetic and crystal field excitations. It distinguishes itself from other neutron scattering techniques by resolving the change in kinetic energy that occurs when the collision between neutrons and the sample is an inelastic one. Results are generally communicated as the dynamic structure factor (also called inelastic scattering law) S(Q,ω), sometimes also as the dynamic susceptibility χ″(Q,ω), where the scattering vector Q is the difference between incoming and outgoing wave vector, and ħω is the energy change experienced by the sample (negative that of the scattered neutron). When results are plotted as a function of ω, they can often be interpreted in the same way as spectra obtained by conventional spectroscopic techniques; in this sense, inelastic neutron scattering can be seen as a special kind of spectroscopy.
Inelastic scattering experiments normally require a monochromatization of the incident or outgoing beam and an energy analysis of the scattered neutrons. This can be done either through time-of-flight techniques (neutron time-of-flight scattering) or through Bragg reflection from single crystals (neutron triple-axis spectroscopy, neutron backscattering). Monochromatization is not needed in echo techniques (neutron spin echo, neutron resonance spin echo), which use the quantum mechanical phase of the neutrons in addition to their amplitudes.
History
The first neutron diffraction experiments were performed in the 1930s. However it was not until around 1945, with the advent of nuclear reactors, that high neutron fluxes became possible, leading to the possibility of in-depth structure investigations. The first neutron-scattering instruments were installed in beam tubes at multi-purpose research reactors. In the 1960s, high-flux reactors were built that were optimized for beam-tube experiments. The development culminated in the high-flux reactor of the Institut Laue-Langevin (in operation since 1972) that achieved the highest neutron flux to this date. Besides a few high-flux sources, there were some twenty medium-flux reactor sources at universities and other research institutes. Starting in the 1980s, many of these medium-flux sources were shut down, and research concentrated at a few world-leading high-flux sources.
Facilities
Today, most neutron scattering experiments are performed by research scientists who apply for beamtime at neutron sources through a formal proposal procedure. Because of the low count rates involved in neutron scattering experiments, relatively long periods of beam time (on the order of days) are usually required for usable data sets. Proposals are assessed for feasibility and scientific interest.
Techniques
Neutron diffraction
Small angle neutron scattering
Spin Echo Small angle neutron scattering
Neutron reflectometry
Inelastic neutron scattering
Neutron triple-axis spectrometry
Neutron time-of-flight scattering
Neutron backscattering
Neutron spin echo
See also
Neutron transport
LARMOR neutron microscope
Born approximation
References
External links
Free, EU-sponsored e-learning resource for neutron scattering
Neutron scattering - a case study
Neutron Scattering - A primer (LANL-hosted black-and-white version) - An introductory article written by Roger Pynn (Los Alamos National Laboratory)
Podcast Interview with two ILL scientists about neutron science/scattering at the ILL
YouTube video explaining the activities of the Jülich Centre for Neutron Scattering
Neutronsources.org
Science and Innovation with Neutrons in Europe in 2020 (SINE2020)
IAEA neutron beam instrument database
Crystallography
scattering
Neutron
Scattering
| Neutron scattering | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,595 | [
"Neutron scattering",
"Materials science",
"Crystallography",
"Scattering",
"Condensed matter physics",
"Particle physics",
"Nuclear physics"
] |
1,042,082 | https://en.wikipedia.org/wiki/Dragon%20%28Dragonriders%20of%20Pern%29 | The Dragons of Pern are a fictional race created by Anne McCaffrey as an integral part of the science fiction world depicted in her Dragonriders of Pern novels.
In creating the Pern setting, McCaffrey set out to subvert the clichés associated with dragons in European folklore and in modern fantasy fiction. Pernese dragons are similar to traditional European dragons in that they can breathe fire and resemble great lizards or dinosaurs with wings, but the resemblance ends there. Unlike most dragons in previous Western literature, Pernese dragons are entirely friendly to humanity. Furthermore, they are not magical at all. Instead, they are a heavily genetically modified species based on one of Pern's native life-forms, the fire-lizard.
History
In Dragonsdawn, the race was intentionally engineered to fight Thread, a deadly mycorrhizoid spore that devours all organic matter that it touches, after it first caught the human colonists on Pern unawares with devastating results. Geneticist Kitti Ping Yung designed the dragons by manipulating the genetic code of the indigenous fire-lizards that had been acquired as pets by the colonists. The dragons were named after their resemblance to European dragons from the legends of old Earth. Later genetic manipulation by Ping's granddaughter, Wind Blossom, also resulted in the watch-whers, ungainly, nocturnal creatures who bore a slight resemblance to dragons. The later novels, set during the Third Pass, have shown the watch-whers are more useful than commonly thought in the novels set in the First Pass. On Pern, time is measured in "Turns", or years, and "Passes", which are about fifty Turns long, and occur when a planet named the "Red Star" is close enough to Pern for Thread to traverse space between the Red Star and Pern. Thread only falls during Passes. Periods between Passes, when the Red Star's orbit takes it away from Pern, are referred to as "Intervals"; usually lasting about two hundred Turns. In the novels, "Long Intervals", of about four hundred and fifty Turns, have occurred twice. These Long Intervals have led the inhabitants of Pern to believe that Thread will never return.
Physiology
Dragons are described as carnivorous, oviparous, warm-blooded creatures. Like all of Pern's native large fauna, they have six limbs – four feet and two wings. Their blood, referred to as ichor, is copper-based and green in color. Their head and general body type is described by McCaffrey as being similar in shape to those of horses. On their heads they have small headknobs, similar to those of giraffes, and no visible ears. They have multifaceted eyes that change color depending on the dragon's mood. Unlike the dragons of Terran legend, they have a smooth hide rather than scales; the texture of their skin is described as being reminiscent of suede with a spicy, sweet scent when clean. They are described as having forked tail ends with a defecation opening between the forks; however, most artistic renderings depict their tails as having spade-shaped tips. The dragons usually get from one place to another by utilizing a teleportation process known as "going between".
Kitti Ping designed the dragons to gradually increase in size with every generation until they reached pre-programmed final dimensions. The dragons of the first Hatchings were not much bigger than horses. By the Sixth Pass (1500 years later) they had reached their programmed size and remained at that size until a single isolated dragon population suffered severe inbreeding, resulting in much larger beasts. In the Ninth Pass, in which most of the novels have been set, the largest Pernese dragon on record, Ramoth, hatched. According to the novel All the Weyrs of Pern, these dragons were roughly three times the size of the largest first-generation dragons. Ramoth's great size is often attributed to mutation and the genetic isolation of Benden Weyr for over 400 years (or Turns). The Dragonlover's Guide to Pern gives Ramoth's full length as forty-five meters; however, the guide is erroneous on this point, and McCaffrey has said herself on multiple occasions that her dragons are measured in feet, not meters. Ramoth is intended by the author to be 45 feet long. This is referenced in a quote from the 1979 documentary series "Time Out of Mind - Episode 4: Anne McCaffrey", available on YouTube: "...well you'd want to catch a parasite that just burrowed and fed on any living thing, before it hit the ground and there was your rationale for dragons. But you couldn't have small dragons you'd have to have big ones so I decided 35-40ft that seems an economical size." In All the Weyrs of Pern, AIVAS, an artificial intelligence still in operation after all this time, notes that all of the primary Benden dragons, Ramoth, Mnementh and Canth, are notably larger than Kitti Ping's specified end-size of the dragon species. Newly hatched dragons are the size of very large dogs or small ponies, and reach their full size after eighteen months. Because young dragons grow so fast, their riders must regularly apply oil to their hides to prevent the skin from cracking or drying out.
Dragons, like their fire-lizard ancestors, can breathe fire by chewing a phosphine-bearing rock, called "firestone" in the novels, which reacts with an acid in a special "second stomach" organ. This forms a volatile gas that can be exhaled at will and ignites upon contact with air. The flame is used to burn Thread from the sky before it reaches the ground. However, the chewed firestone must be expelled from the body after it is used up, for the dragons cannot digest it.
Psychic abilities
Despite their relatively low intelligence, fire-lizards communicate through a form of weak telepathy. They also imprint on the first individual who feeds them after they hatch, creating a telepathic bond with them; the Pernese call this phenomenon "Impression". In creating dragons, Kitti Ping intensified the creatures' telepathy, greatly increased their intelligence, and gave them a strong instinctive drive to Impress to a human. Upon hatching, each dragonet chooses one of the humans present (usually) and Impresses to that person; from that moment on, the pair are in a constant state of telepathic contact for as long as they both live. Dragons also use telepathy to communicate with each other and with fire-lizards. They are capable of speaking telepathically to humans besides their own riders, but not all of them will do so except under unusual circumstances.
Dragons and fire-lizards can also teleport. They do this by briefly entering a hyperspace dimension known as between. Both humans and dragons experience between as an extremely cold, sensory-deprived, black void. After spending no more than eight seconds in between, the dragon or fire-lizard can re-emerge anywhere on Pern, along with any passengers or cargo they carried. This ability is explained as having evolved in fire-lizards as a defense against Thread; not only does it allow them to quickly escape from Threadfall, but the intense cold of between kills any Thread that has already burrowed into them. If a dragon attempts to teleport without a clear mental image of the place where they intend to reappear, they may simply fail to emerge from between and thus, be gone forever.
Going between allows dragons to travel through time as well as space, as long as they have a clear picture of what a particular place looked like (or will look like) at the desired time. However, the practice is highly dangerous to both dragon and rider and is severely restricted. Existing in two places at once for extended periods of time, or in close proximity, causes severe weakness and psychological disturbance for humans but not for dragons, the effects of which are discussed in several novels. In addition, while teleporting through space always takes the same amount of time, when a dragon travels through time, the amount of time they spend in between increases depending on how far away in time the destination is. Thus, traveling to remote times poses severe dangers from hypothermia and oxygen deprivation. In the first Pern novel, Dragonflight, Lessa passes out after having travelled back more than four hundred Turns.
The Dragonlover's Guide to Pern states that dragons defecate while between. This idea originated with a statement by Anne McCaffrey herself, in answer to a fan's question about the subject at a con. However, McCaffrey may have been joking when she first said this. As the idea has never been referenced in any of the Pern novels (in fact, defecation was mentioned only a few times in all of the books ever written in the Pern series), it cannot be considered definitively canonical. The Skies of Pern references the use of dragon dung as a repellent against the large felines inhabiting the southern continent.
Dragons are also capable of telekinesis, though the ability goes unrecognized and is used only unconsciously (to augment flight) until it is discovered as a conscious ability by the green dragon Zaranth and her rider Tai in the Thirty-first Turn of the Ninth Pass. It is speculated that the undersized wings were intentionally created in the dragons by Kitti Ping to reduce the surface area of a dragon that is exposed to possible Thread injury, and that the telekinetic abilities were intended to make up for the loss of wingsail. It is said in many books that a dragon can carry whatever it thinks it can carry. This is likely an extension of the telekinesis, mentally "lifting" the extra load. This is the most likely explanation for the great loads that dragons sometimes carry during emergencies.
Psychology
Unlike their fire-lizard ancestors, dragons are fully sentient. They can communicate fluently in human language (although only telepathically), and have personalities and opinions distinct from those of their riders. However, their intelligence does seem to be somewhat lower than that of the average human. In particular, their long-term memory is severely limited.
Dragons' telepathic communication is usually limited to contact with their rider and with other dragons; however, a dragon sometimes communicates well with a person with whom their rider has close emotional ties. They do understand spoken human language and occasionally reply telepathically to people to whom they choose to speak.
As a safeguard against the possible damage that could be caused by such powerful creatures, Ping engineered dragons to be profoundly psychologically dependent on their riders. Any dragonet that fails to Impress to a human shortly after hatching will die. If a dragon's rider dies, the dragon immediately suicides by going between without a destination. The only exception is when the rider of a queen dragon dies while the queen is gravid; the dragon waits just long enough to lay her eggs and see them hatch before disappearing between. (Humans who lose their dragons typically commit suicide as well. However, some do survive, although the experience leaves profound psychological trauma.)
Ping also designed the dragons to be fairly calm in temperament. They never fight one another, unless two queens come into estrus at the same time. They are also not dangerous to humans except shortly after hatching, when it is common for confused and frightened dragonets to maul or even kill humans hoping to Impress.
When dragons hatch, they announce their names to their new riders upon Impression. Pernese dragons' names always end in -th. A watch-wher's name will end in "sk".
Colors
On canon Pern, barring rare mutations, female dragons and fire-lizards are always either green or gold in color, while males are blue, brown or bronze.
Gold dragons, also called queens, are the largest dragons (40–45 feet or meters long) and the only fertile females. Gold dragons are by far the rarest dragons on Pern, at just less than one percent of the population. They are dominant over all other colors; any non-gold dragon will invariably obey a queen's orders, even against the wishes of its own rider. Queens are incapable of digesting firestone and producing flame (see below); however, they do fight Thread – they fly in the lowest wing, with their riders armed with specially-designed flamethrowers to flame any Thread missed by the wings flying above. An egg that is going to hatch a gold dragon is notable: it is gold-colored and larger than other eggs. A gold dragon will always Impress a heterosexual female, and golds are believed by most Weyrfolk to prefer young women who were not raised in the Weyr.
Bronze dragons are the largest males (35–45 feet or meters long), although they are generally significantly smaller than the queens. Bronzes account for about five percent of all dragons. They are almost always the ones to mate with queens, as the smaller colors generally lack the stamina to chase and catch the gold dragons when they rise to mate. Due to the 5-1 bronze/gold ratio and the infrequency of gold mating flights, they often mate with greens (the losers of a gold flight almost always seek a green for their needs), but their size often puts them at a disadvantage in chasing the agile, smaller females. The senior bronze of a Weyr is determined through which bronze wins the mating flight of the senior gold. In canon Pern, the rider of a bronze dragon is always a heterosexual male.
Brown dragons are the next largest color (30–40 feet or meters long). About fifteen percent of all dragons are brown. They may occasionally mate with queens, although this is rare, and becomes even more rare as the dragons increase in size; by Ramoth's time in the Ninth Pass it is unheard of. All brown riders in the Pern novels are men; most are heterosexual, but bisexual or "masculine" homosexual brown riders are not rare.
Blue dragons are the smallest males (24–30 feet or meters long) and make up about a third (thirty percent) of all dragons on Pern. They are nearly as agile as greens, but unlike the greens, they often have enough stamina to last for an entire Threadfall. They mate only with greens, as they are simply too small to keep up with a massive queen over a long mating flight. There are few prominent blue dragons or blue riders in the books. Some assume the position of teaching the new riders after their Impression. Canon blue riders are typically homosexual or bisexual, though some are heterosexual. In interviews, McCaffrey stated that homosexual women may be able to Impress a blue dragon. In the later books written with her son, Todd McCaffrey, blue Tazith is ridden by Xhinna, the first female weyrleader and the first female rider of a blue dragon.
Green dragons are the smallest normal color (20–24 feet or meters long), and make up about half of all dragons on Pern (fifty percent). They are female, but unlike the queens, they are infertile—due to the chronic use of firestone—and can produce flame. They are extremely valuable in Threadfall because of their agility, but they lack the stamina to last an entire Fall and generally fly in two or three shifts. Originally, greens Impressed only girls; however, after various natural disasters and plagues decimated Pern's population, women were needed to help repopulate the planet. Since going between during pregnancy can induce miscarriage and because removing pregnant riders from the Wings reduced the effectiveness of the Wings, it became impractical to present large numbers of women as candidates for Impression. Thus, green dragons began Impressing homosexual boys; by the time of the end of the Second Pass, female green riders were becoming rare. By the time of Moreta in the Sixth Pass, female green riders were entirely forgotten, although greens gradually begin Impressing to women again in the Ninth Pass. Females of any sexual orientation may Impress green.
There is only one white dragon mentioned anywhere in the Pern novels: Ruth, whose rider is Lord Jaxom of Ruatha Hold. He is not an albino, as his hide contains very faint patches of all the normal dragon colors. Ruth's egg would not have hatched if Jaxom had not forced it open and released the dragonet from a thick membrane sac; thus, it seems likely that white coloration in dragons is normally a lethal mutation. Although his parents are the largest queen, Ramoth, and largest bronze, Mnementh, in the history of Pern, Ruth is smaller than even a normal green dragon in his time; he is only slightly larger than the largest dragons of the first generation. While his exact length is never specifically mentioned in the books, they do state that he stands higher than a runnerbeast (horse) at the shoulder; extrapolations suggest that he might be 18 feet or meters long. He is male or neuter (undetermined), and assumed sterile, with no urge to mate. Ruth also has the unusual ability to intuitively orient himself in time.
The larger a color is, the less common it is. For instance, there are more blues than browns, and there are more browns than bronzes. Half the dragon population is female, with green dragons being roughly fifty percent of the population and golds being one percent or slightly less.
Riding a larger color of dragon confers higher social status in Pern's extremely hierarchical society, color rankings following the dragons' own strict instinctual hierarchical organization based on fire-lizard structures. Perhaps as a result of this, it is commonly believed that the larger colors are more intelligent, although recent novels imply that this may not be true.
The Pernese believe that chewing firestone makes female dragons sterile; they therefore refuse to allow queens to use it. Greens, on the other hand, are so common that if they produced offspring it would quickly lead to overpopulation. They always chew firestone, and because of their numbers and agility they are vital to any Thread-fighting force. However, Dragonsdawn suggests that Kitti Ping—possibly motivated by old-fashioned ideas about gender roles—deliberately engineered greens to be infertile and gold dragons to be incapable of producing flame in order to protect the gold dragons, the only reproductively fertile females, from the dangers of Thread fighting.
Mating and reproduction
Mating
Both gold and green dragons experience a periodic mating urge. During a Pass a gold dragon will rise roughly once per Turn, and more often at the beginning of a Pass, yet less often towards the end of a Pass. During an Interval a gold dragon may rise to mate only once every four or five Turns. Greens will rise to mate three or four times a Turn; whether this frequency increases or decreases during a Pass is unknown. Greens will mate with any male, usually blues or browns. As they are smaller and have less stamina, a green mating flight is much shorter than a gold one.
When a female comes into estrus, interested males compete to catch her in a mating flight. Usually, the female chooses the male who impresses her the most with his skill in the flight, although inexperienced females may be caught before making their choice. The pair actually mate in midair; thus, the higher they get during the flight, the longer their mating can last. The Pernese commonly believe that longer matings result in larger clutches. For this reason, queen riders are strongly encouraged to restrain their dragons from eating heavily just before a flight, instructing them to drink blood instead for a quick burst of energy.
Effects on rider sexuality
Due to the intense psychic bond between rider and dragon, dragonriders are overcome by the powerful emotions and sensations associated with mating flights. The riders of the mating pair engage in sex themselves, to varying degrees unaware of what they are doing. This contributes to a much looser attitude toward sexuality in general among dragonriders than in the rest of Pernese society.
For much of Pern's history, all green riders were male. During these periods, all green mating flights resulted in homosexual intercourse between the riders of the dragons involved. This homosexual intercourse is accepted in the Weyr as being separate from the rider's sexual orientation unless the rider has shown otherwise. Mating flight sex between two riders, one of whom is not the other's chosen partner (known as a weyrmate) is not considered to be "cheating". It is understood within the Weyr that sex during mating flights is not optional for the rider. Anne McCaffrey stated that "The dragon decides, the rider complies." Dragons do not usually consider the orientation of their riders when considering what female they wish to chase, or, for a female dragon, what male dragon might catch her. A primary example of this behavior is between Weyrleader T'gellan, Weyrwoman Talina, and green rider Mirrim. T'gellan and Mirrim are weyrmates, but T'gellan's bronze dragon must mate with Weyrwoman Talina's gold dragon at least yearly in order for T'gellan to maintain his position as Weyrleader. Mirrim, known to be an extremely acerbic and temperamental rider, shows no jealousy or other problem with her weyrmate's regular sexual contact with Talina.
Effects on non-rider sexuality
Both green and gold dragons broadcast their sexual feelings on a wide band during mating flights. Weyrfolk tend to become somewhat inured to this and therefore can hold their sexual reactions until an appropriate place and time. However, flights are usually not over the Weyr itself and sometimes the flight path of the mating flight brings the mating dragons over Holds or Farmholds where the average people occasionally find themselves engaged in unexpected activities. This is especially common among young teens working out in the fields who react to the sudden, unexpected and overwhelming urges with potentially embarrassing results.
Riders of the losing dragons usually seek sexual relief after the intense flight; if they do not have a chosen partner they may seek the comfort of any willing and available partner of their sexual orientation. The weyrfolk tend to happily accommodate these riders, especially if they have been affected by the flight's sexual urgency. This is one of the major reasons for the Weyr's reputation for being sexually very open.
Fandom considerations
Anne McCaffrey has stated in a number of documents and interviews that dragonets use pheromones to determine the sexual orientation of the humans to whom they Impress. According to these statements, greens Impress only to women or to "effeminate" homosexual men. Blues Impress primarily to homosexual or bisexual men with "masculine" temperaments, or possibly to masculine or lesbian women; browns similarly Impress primarily to heterosexual men, but sometimes to bisexual men. Bronzes and golds Impress exclusively to heterosexual men and heterosexual women, respectively.
However, these ideas have never been made explicit in the books (although it is clear, at least, that most male green and blue riders are homosexual). Many members of online Pern fandom find McCaffrey's ideas about sexuality highly questionable for a number of reasons, both scientific and ethical. (Most infamously, she claimed in an interview that science has proven that being the receptive partner in anal sex triggers a hormonal change that will make a previously heterosexual man become homosexual and effeminate. Thus, she argues, even if a male green rider were originally heterosexual, he would not stay that way.) In later interviews McCaffrey claims that green dragons merely pick up on psychological clues from homosexual boys before they themselves know that they are homosexual. "A green Hatchling is unlikely to be impressed (pun intended) by a heterosexual boy." - Anne McCaffrey 1998 on The Kitchen Table BB.
Pern-based roleplaying games thus sometimes ignore McCaffrey's restrictions on who can Impress to a given color of dragon. MUDs and fanzine-based clubs often ignore everything except the basic rule that only women Impress gold and only men Impress bronze; PBEM games are more likely to accept the restrictions on sexual orientation. Most clubs post their policy on canon strictness. While some accept more liberal thoughts on color/gender/sexual orientation matches, many are very strict on this issue.
For the purposes of roleplaying games, McCaffrey has also officially allowed females (masculine lesbians) to ride browns or blues, though she insists that this could never happen on her (canon) Pern.
Also in fandom, if a rider has strong objections to sex with someone involved in a mating flight, or the writer has objections to writing a homosexual encounter or objects to their character being involved in a sexual encounter with a person other than their "significant other," they may sequester themselves with a more acceptable partner during the flight. This idea is called "Stand-Ins" and is based on a concept McCaffrey introduced in Dragonseye/Red Star Rising, in which a female green rider objects to the idea of a specific bronze rider winning her green's mating flight. However, this concept is not seen in other books and clearly does not exist in the Ninth Pass, as several problems arise regarding green and gold riders who object to the random nature of mating flights and end up raped by the male winner of the flight (most notably in Skies of Pern).
Significance
As the primary line of defense against the Thread, dragons are a requirement for the survival and prosperity of humans and other land life on Pern. However, the great beasts require a good deal of maintenance, to the degree of requiring a large part of Pernese infrastructure—especially cattle farming—to be centered around their upkeep. This has been known to cause resentment among those doing the supporting, especially in times when Thread is not falling.
Significant Pernese Dragons
Ramoth (gold), ridden by Ninth Pass Benden Weyrwoman Lessa. Ramoth is the largest dragon in Pern's history, and together with Lessa rediscovers the lost knowledge that dragons are capable of time travel. She is described as having a beautiful mental voice in the short story The Girl Who Heard Dragons.
Mnementh (bronze), ridden by Ninth Pass Benden Weyrleader F'lar. Although significantly smaller than Ramoth, Mnementh is the largest bronze in the history of Pern. He is described as having a deep, rich mental voice.
Canth (brown), ridden by F'nor of Benden Weyr (F'lar's half-brother) during the Ninth Pass. He and F'nor make the first known attempt by the Pernese to teleport to the surface of another planet. Canth is also unusually large for his color – large enough to rival a small bronze in size – and is the first brown in centuries to attempt participation in a queen's mating flight.
Ruth (white), ridden by Lord Jaxom of Ruatha Hold, during the Ninth Pass. Ruth is the only known white dragon in Pern's history, and the only dragon to be ridden by a reigning Lord Holder. He has a much higher intelligence compared to that of many other dragons, and always knows when as well as where he is in time and space.
Orlith (gold), ridden by Sixth Pass Fort Weyrwoman Moreta, who is one of the most famous dragonriders of Pernese history. Moreta's deeds are celebrated in the well-known song "The Ballad of Moreta's Ride."
Faranth (gold), ridden by First Pass Fort Weyrwoman Sorka Hanrahan; Faranth is not the first queen to hatch, mate or lay eggs (she was actually one of the last of her clutch) but she is the first Senior Gold, ridden by the first Weyrwoman. By later Passes, her status is confused with being the first gold in existence. Although Pern has no real religious beliefs, Faranth comes to occupy ambiguous status in later Pernese culture; "by the egg of Faranth" is a common oath.
Path (green), ridden by Mirrim of Benden Weyr during the Ninth Pass. Path is the first green in several centuries to Impress to a female rider, picking Mirrim when she was not a Candidate.
Zaranth (green), ridden by Tai of Monaco Bay after the ending of the Ninth Pass. Zaranth is the first Pernese dragon to consciously use telekinesis and communicate this ability to other dragons.
Golanth (bronze), ridden by F'lessan, son of Lessa and F'lar at the end of the Ninth Pass. F'lessan and Golanth are the first to create a Weyrhold (specifically Honshū Weyrhold) where dragons and their riders can live in smaller groups once Thread no longer falls on Pern, without relying on tithes.
Notes
Further reading
Anne McCaffrey: A Critical Companion by Robin Roberts, Greenwood Press (1996)
Of Modern Dragons and Other Essays on Genre Fiction by John Lennard, Humanities-Ebooks (2008)
Dragonholder: The Life and Dreams (So Far) of Anne McCaffrey by Todd McCaffrey, Open Road Media (2014)
The Dragonlover's Guide to Pern by Jody Lynn Nye, Random House (1997)
Magill's Guide to Science Fiction and Fantasy Literature: The absolute at large ed by T.A. Schippey, Salem Press (1996)
Dragons of Fantasy: The Scaly Villains & Heroes of Tolkien, Rowling, Mccaffrey, Pratchett & Other Fantasy Greats! by Anne C. Petty, Cold Spring Press (2004)
References
External links
—one of McCaffrey's best-known statements regarding dragon and rider sexuality.
An Interview with Anne McCaffrey—among other topics, McCaffrey explains her beliefs about homosexuality.
Dragonriders of Pern
Fictional dragons
Fictional lizards
Fictional extraterrestrial species and races
Fiction about genetic engineering
Sexuality in science fiction | Dragon (Dragonriders of Pern) | [
"Engineering",
"Biology"
] | 6,202 | [
"Genetic engineering",
"Fiction about genetic engineering"
] |
1,042,164 | https://en.wikipedia.org/wiki/Glossary%20of%20mathematical%20jargon | The language of mathematics has a wide vocabulary of specialist and technical terms. It also has a certain amount of jargon: commonly used phrases which are part of the culture of mathematics, rather than of the subject. Jargon often appears in lectures, and sometimes in print, as informal shorthand for rigorous arguments or precise ideas. Much of this uses common English words, but with a specific non-obvious meaning when used in a mathematical sense.
Some phrases, like "in general", appear below in more than one section.
Philosophy of mathematics
abstract nonsenseA tongue-in-cheek reference to category theory, using which one can employ arguments that establish a (possibly concrete) result without reference to any specifics of the present problem. For that reason, it is also known as general abstract nonsense or generalized abstract nonsense.
canonicalA reference to a standard or choice-free presentation of some mathematical object (e.g., canonical map, canonical form, or canonical ordering). The same term can also be used more informally to refer to something "standard" or "classic". For example, one might say that Euclid's proof is the "canonical proof" of the infinitude of primes.
deepA result is called "deep" if its proof requires concepts and methods that are advanced beyond the concepts needed to formulate the result. For example, the prime number theorem — originally proved using techniques of complex analysis — was once thought to be a deep result until elementary proofs were found. On the other hand, the fact that π is irrational is usually considered to be a deep result, because it requires a considerable development of real analysis before the proof can be established — even though the claim itself can be stated in terms of simple number theory and geometry.
elegantAn aesthetic term referring to the ability of an idea to provide insight into mathematics, whether by unifying disparate fields, introducing a new perspective on a single field, or by providing a technique of proof which is either particularly simple, or which captures the intuition or imagination as to why the result it proves is true. In some occasions, the term "beautiful" can also be used to the same effect, though Gian-Carlo Rota distinguished between elegance of presentation and beauty of concept, saying that for example, some topics could be written about elegantly although the mathematical content is not beautiful, and some theorems or proofs are beautiful but may be written about inelegantly.
elementaryA proof or a result is called "elementary" if it only involves basic concepts and methods in the field, and is to be contrasted with deep results which require more development within or outside the field. The concept of "elementary proof" is used specifically in number theory, where it usually refers to a proof that does not resort to methods from complex analysis.
folklore A result is called "folklore" if it is non-obvious and non-published, yet generally known to the specialists within a field. In many scenarios, it is unclear as to who first obtained the result, though if the result is significant, it may eventually find its way into the textbooks, whereupon it ceases to be folklore.
naturalSimilar to "canonical" but more specific, and which makes reference to a description (almost exclusively in the context of transformations) which holds independently of any choices. Though long used informally, this term has found a formal definition in category theory.
pathologicalAn object behaves pathologically (or, somewhat more broadly used, in a degenerated way) if it either fails to conform to the generic behavior of such objects, fails to satisfy certain context-dependent regularity properties, or simply disobeys mathematical intuition. In many occasions, these can be and often are contradictory requirements, while in other occasions, the term is more deliberately used to refer to an object artificially constructed as a counterexample to these properties. A simple example is that from the definition of a triangle having angles which sum to π radians, a single straight line conforms to this definition pathologically.
Note, however, that differentiable functions are meagre in the space of continuous functions, as Banach found out in 1931; colloquially speaking, differentiable functions are a rare exception among the continuous ones. Thus it can hardly be defended any more to call non-differentiable continuous functions pathological.
rigor (rigour)The act of establishing a mathematical result using indisputable logic, rather than informal descriptive argument. Rigor is a cornerstone quality of mathematics, and can play an important role in preventing mathematics from degenerating into fallacies.
well-behavedAn object is well-behaved (in contrast with being Pathological) if it satisfies certain prevailing regularity properties, or if it conforms to mathematical intuition (even though intuition can often suggest opposite behaviors as well). In some occasions (e.g., analysis), the term "smooth" can also be used to the same effect.
Descriptive informalities
Although ultimately every mathematical argument must meet a high standard of precision, mathematicians use descriptive but informal statements to discuss recurring themes or concepts with unwieldy formal statements. Note that many of the terms are completely rigorous in context.
almost all A shorthand term for "all except for a set of measure zero", when there is a measure to speak of. For example, "almost all real numbers are transcendental" because the algebraic real numbers form a countable subset of the real numbers with measure zero. One can also speak of "almost all" integers having a property to mean "all except finitely many", despite the integers not admitting a measure for which this agrees with the previous usage. For example, "almost all prime numbers are odd". There is a more complicated meaning for integers as well, discussed in the main article. Finally, this term is sometimes used synonymously with generic, below.
arbitrarily large Notions which arise mostly in the context of limits, referring to the recurrence of a phenomenon as the limit is approached. A statement such as that predicate P is satisfied by arbitrarily large values, can be expressed in more formal notation by ∀x : ∃y ≥ x : P(y). See also frequently. The statement that quantity f(x) depending on x "can be made" arbitrarily large, corresponds to ∀y : ∃x : f(x) ≥ y.
arbitrary A shorthand for the universal quantifier. An arbitrary choice is one which is made unrestrictedly, or alternatively, a statement holds of an arbitrary element of a set if it holds of any element of that set. Also much in general-language use among mathematicians: "Of course, this problem can be arbitrarily complicated".
eventuallyIn the context of limits, this is shorthand meaning for sufficiently large arguments; the relevant argument(s) are implicit in the context. As an example, "the function log(log(x)) eventually becomes larger than 100"; in this context, "eventually" means "for sufficiently large x."
factor through A term in category theory referring to composition of morphisms. If for three objects A, B, and C a map f : A → C can be written as a composition f = h ∘ g with g : A → B and h : B → C, then f is said to factor through any (and all) of B, g, and h.
finite When said of the value of a variable assuming values from the non-negative extended reals the meaning is usually "not infinite". For example, if the variance of a random variable is said to be finite, this implies it is a non-negative real number, possibly zero. In some contexts though, for example in "a small but finite amplitude", zero and infinitesimals are meant to be excluded. When said of the value of a variable assuming values from the extended natural numbers the meaning is simply "not infinite". When said of a set or a mathematical object whose main component is a set, it means that the cardinality of the set is less than ℵ₀, the cardinality of the set of natural numbers.
frequently In the context of limits, this is shorthand for arbitrarily large arguments and its relatives; as with eventually, the intended variant is implicit. As an example, the sequence (−1)^n is frequently in the interval (1/2, 3/2), because there are arbitrarily large n for which the value of the sequence is in the interval.
formal, formally Qualifies anything that is sufficiently precise to be translated straightforwardly in a formal system. For example, a formal proof, a formal definition.
generic This term has similar connotations as almost all but is used particularly for concepts outside the purview of measure theory. A property holds "generically" on a set if the set satisfies some (context-dependent) notion of density, or perhaps if its complement satisfies some (context-dependent) notion of smallness. For example, a property which holds on a dense Gδ (intersection of countably many open sets) is said to hold generically. In algebraic geometry, one says that a property of points on an algebraic variety that holds on a dense Zariski open set is true generically; however, it is usually not said that a property which holds merely on a dense set (which is not Zariski open) is generic in this situation.
in general In a descriptive context, this phrase introduces a simple characterization of a broad class of objects, with an eye towards identifying a unifying principle. This term introduces an "elegant" description which holds for "arbitrary" objects. Exceptions to this description may be mentioned explicitly, as "pathological" cases.
left-hand side, right-hand side (LHS, RHS) Most often, these refer simply to the left-hand or the right-hand side of an equation; for example, has on the LHS and on the RHS. Occasionally, these are used in the sense of lvalue and rvalue: an RHS is primitive, and an LHS is derivative.
nice A mathematical object is colloquially called nice or sufficiently nice if it satisfies hypotheses or properties, sometimes unspecified or even unknown, that are especially desirable in a given context. It is an informal antonym for pathological. For example, one might conjecture that a differential operator ought to satisfy a certain boundedness condition "for nice test functions," or one might state that some interesting topological invariant should be computable "for nice spaces X."
object Anything that can be assigned to a variable and for which equality with another object can be considered. The term was coined when variables began to be used for sets and mathematical structures.
onto A function (which in mathematics is generally defined as mapping the elements of one set A to elements of another B) is called "A onto B" (instead of "A to B" or "A into B") only if it is surjective; it may even be said that "f is onto" (i. e. surjective). Not translatable (without circumlocutions) to some languages other than English.
proper If, for some notion of substructure, objects are substructures of themselves (that is, the relationship is reflexive), then the qualification proper requires the objects to be different. For example, a proper subset of a set S is a subset of S that is different from S, and a proper divisor of a number n is a divisor of n that is different from n. This overloaded word is also non-jargon for a proper morphism.
regular A function is called regular if it satisfies satisfactory continuity and differentiability properties, which are often context-dependent. These properties might include possessing a specified number of derivatives, with the function and its derivatives exhibiting some nice property (see nice above), such as Hölder continuity. Informally, this term is sometimes used synonymously with smooth, below. These imprecise uses of the word regular are not to be confused with the notion of a regular topological space, which is rigorously defined.
resp. (Respectively) A convention to shorten parallel expositions. "A (resp. B) [has some relationship to] X (resp. Y)" means that A [has some relationship to] X and also that B [has (the same) relationship to] Y. For example, squares (resp. triangles) have 4 sides (resp. 3 sides); or compact (resp. Lindelöf) spaces are ones where every open cover has a finite (resp. countable) open subcover.
sharp Often, a mathematical theorem will establish constraints on the behavior of some object; for example, a function will be shown to have an upper or lower bound. The constraint is sharp (sometimes optimal) if it cannot be made more restrictive without failing in some cases. For example, for arbitrary non-negative real numbers x, the exponential function e^x, where e = 2.7182818..., gives an upper bound on the values of the quadratic function x². This is not sharp; the gap between the functions is everywhere at least 1. Among the exponential functions of the form α^x, setting α = e^(2/e) = 2.0870652... results in a sharp upper bound; the slightly smaller choice α = 2 fails to produce an upper bound, since then α³ = 8 < 3² = 9. In applied fields the word "tight" is often used with the same meaning.
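The sharpness claim in the example above is easy to check numerically; the short Python sketch below samples arbitrary points (illustrative choices, not part of the original example) and confirms that α = e^(2/e) bounds x² from above with equality at x = e, while α = 2 does not.

```python
import math

# alpha = e**(2/e) is the smallest base for which alpha**x >= x**2 on x >= 0;
# the bound touches x**2 at x = e.  The sample points below are arbitrary.
alpha_sharp = math.e ** (2 / math.e)     # ~2.0870652

xs = [i / 100 for i in range(1, 2001)]   # x in (0, 20]
assert all(alpha_sharp ** x >= x ** 2 - 1e-12 for x in xs)

print(alpha_sharp ** math.e - math.e ** 2)  # ~0: equality at x = e, up to rounding
print(2 ** 3 < 3 ** 2)                      # True: alpha = 2 fails, since 8 < 9
```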
smooth Smoothness is a concept which mathematics has endowed with many meanings, from simple differentiability to infinite differentiability to analyticity, and still others which are more complicated. Each such usage attempts to invoke the physically intuitive notion of smoothness.
strong, stronger A theorem is said to be strong if it deduces restrictive results from general hypotheses. One celebrated example is Donaldson's theorem, which puts tight restraints on what would otherwise appear to be a large class of manifolds. This (informal) usage reflects the opinion of the mathematical community: not only should such a theorem be strong in the descriptive sense (below) but it should also be definitive in its area. A theorem, result, or condition is further called stronger than another one if a proof of the second can be easily obtained from the first but not conversely. An example is the sequence of theorems: Fermat's little theorem, Euler's theorem, Lagrange's theorem, each of which is stronger than the last; another is that a sharp upper bound (see sharp above) is a stronger result than a non-sharp one. Finally, the adjective strong or the adverb strongly may be added to a mathematical notion to indicate a related stronger notion; for example, a strong antichain is an antichain satisfying certain additional conditions, and likewise a strongly regular graph is a regular graph meeting stronger conditions. When used in this way, the stronger notion (such as "strong antichain") is a technical term with a precisely defined meaning; the nature of the extra conditions cannot be derived from the definition of the weaker notion (such as "antichain").
sufficiently large, suitably small, sufficiently close In the context of limits, these terms refer to some (unspecified, even unknown) point at which a phenomenon prevails as the limit is approached. A statement such as that predicate P holds for sufficiently large values, can be expressed in more formal notation by ∃x : ∀y ≥ x : P(y). See also eventually.
upstairs, downstairs A descriptive term referring to notation in which two objects are written one above the other; the upper one is upstairs and the lower, downstairs. For example, in a fiber bundle, the total space is often said to be upstairs, with the base space downstairs. In a fraction, the numerator is occasionally referred to as upstairs and the denominator downstairs, as in "bringing a term upstairs".
up to, modulo, mod out by An extension to mathematical discourse of the notions of modular arithmetic. A statement is true up to a condition if the establishment of that condition is the only impediment to the truth of the statement. Also used when working with members of equivalence classes, especially in category theory, where the equivalence relation is (categorical) isomorphism; for example, "The tensor product in a weak monoidal category is associative and unital up to a natural isomorphism."
vanish To assume the value 0. For example, "The function sin(x) vanishes for those values of x that are integer multiples of π." This can also apply to limits: see Vanish at infinity.
weak, weaker The converse of strong.
well-defined Accurately and precisely described or specified. For example, sometimes a definition relies on a choice of some object; the result of the definition must then be independent of this choice.
Proof terminology
The formal language of proof draws repeatedly from a small pool of ideas, many of which are invoked through various lexical shorthands in practice.
aliter An obsolescent term which is used to announce to the reader an alternative method, or proof of a result. In a proof, it therefore flags a piece of reasoning that is superfluous from a logical point of view, but has some other interest.
by way of contradiction (BWOC), or "for, if not, ..." The rhetorical prelude to a proof by contradiction, preceding the negation of the statement to be proved.
if and only if (iff) An abbreviation for logical equivalence of statements.
in general In the context of proofs, this phrase is often seen in induction arguments when passing from the base case to the induction step, and similarly, in the definition of sequences whose first few terms are exhibited as examples of the formula giving every term of the sequence.
necessary and sufficient A minor variant on "if and only if"; "A is necessary (sufficient) for B" means "A if (only if) B". For example, "For a field K to be algebraically closed it is necessary and sufficient that it have no finite field extensions" means "K is algebraically closed if and only if it has no finite extensions". Often used in lists, as in "The following conditions are necessary and sufficient for a field to be algebraically closed...".
need to show (NTS), required to prove (RTP), wish to show, want to show (WTS) Proofs sometimes proceed by enumerating several conditions whose satisfaction will together imply the desired theorem; thus, one needs to show just these statements.
one and only one A statement of the existence and uniqueness of an object; the object exists, and furthermore, no other such object exists.
Q.E.D. (Quod erat demonstrandum): A Latin abbreviation, meaning "which was to be demonstrated", historically placed at the end of proofs, but less common currently, having been supplanted by the Halmos end-of-proof mark, a square sign ∎.
sufficiently nice A condition on objects in the scope of the discussion, to be specified later, that will guarantee that some stated property holds for them. When working out a theorem, the use of this expression in the statement of the theorem indicates that the conditions involved may be not yet known to the speaker, and that the intent is to collect the conditions that will be found to be needed in order for the proof of the theorem to go through.
the following are equivalent (TFAE) Often several equivalent conditions (especially for a definition, such as normal subgroup) are equally useful in practice; one introduces a theorem stating an equivalence of more than two statements with TFAE.
transport of structure It is often the case that two objects are shown to be equivalent in some way, and that one of them is endowed with additional structure. Using the equivalence, we may define such a structure on the second object as well, via transport of structure. For example, any two vector spaces of the same dimension are isomorphic; if one of them is given an inner product and if we fix a particular isomorphism, then we may define an inner product on the other space by factoring through the isomorphism.
without (any) loss of generality (WLOG, WOLOG, WALOG), we may assume (WMA) Sometimes a proposition can be more easily proved with additional assumptions on the objects it concerns. If the proposition as stated follows from this modified one with a simple and minimal explanation (for example, if the remaining special cases are identical but for notation), then the modified assumptions are introduced with this phrase and the altered proposition is proved.
Proof techniques
Mathematicians have several phrases to describe proofs or proof techniques. These are often used as hints for filling in tedious details.
angle chasing Used to describe a geometrical proof that involves finding relationships between the various angles in a diagram.
back-of-the-envelope calculation An informal computation omitting much rigor without sacrificing correctness. Often this computation is "proof of concept" and treats only an accessible special case.
brute force Rather than finding underlying principles or patterns, this is a method where one would evaluate as many cases as needed to sufficiently prove or provide convincing evidence that the thing in question is true. Sometimes this involves evaluating every possible case (where it is also known as proof by exhaustion).
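As a toy illustration of this style of argument, the Python sketch below proves by exhaustion the bounded claim that every even number from 4 to 1000 is a sum of two primes; the bound of 1000 is an arbitrary choice, and checking finitely many cases says nothing about the general conjecture.

```python
# Brute-force / proof-by-exhaustion check of a bounded claim:
# every even n with 4 <= n <= 1000 is a sum of two primes.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_sum_of_two_primes(n):
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

assert all(is_sum_of_two_primes(n) for n in range(4, 1001, 2))
print("verified by exhaustion for even n in [4, 1000]")
```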
by example A proof by example is an argument whereby a statement is not proved but instead illustrated by an example. If done well, the specific example would easily generalize to a general proof.
by inspection A rhetorical shortcut made by authors who invite the reader to verify, at a glance, the correctness of a proposed expression or deduction. If an expression can be evaluated by straightforward application of simple techniques and without recourse to extended calculation or general theory, then it can be evaluated by inspection. It is also applied to solving equations; for example to find roots of a quadratic equation by inspection is to 'notice' them, or mentally check them. 'By inspection' can play a kind of gestalt role: the answer or solution simply clicks into place.
by intimidation Style of proof where claims believed by the author to be easily verifiable are labelled as 'obvious' or 'trivial', which often results in the reader being confused.
clearly, can be easily shown A term which shortcuts around calculation the mathematician perceives to be tedious or routine, accessible to any member of the audience with the necessary expertise in the field; Laplace used obvious (French: évident).
complete intuition commonly reserved for jokes (puns on complete induction).
diagram chasing Given a commutative diagram of objects and morphisms between them, if one wishes to prove some property of the morphisms (such as injectivity) which can be stated in terms of elements, then the proof can proceed by tracing the path of elements of various objects around the diagram as successive morphisms are applied to it. That is, one chases elements around the diagram, or does a diagram chase.
handwaving A non-technique of proof mostly employed in lectures, where formal argument is not strictly necessary. It proceeds by omission of details or even significant ingredients, and is merely a plausibility argument.
in general In a context not requiring rigor, this phrase often appears as a labor-saving device when the technical details of a complete argument would outweigh the conceptual benefits. The author gives a proof in a simple enough case that the computations are reasonable, and then indicates that "in general" the proof is similar.
index battle For proofs involving objects with multiple indices which can be solved by going to the bottom (if anyone wishes to take up the effort). Similar to diagram chasing.
morally true Used to indicate that the speaker believes a statement should be true, given their mathematical experience, even though a proof has not yet been put forward. As a variation, the statement may in fact be false, but instead provide a slogan for or illustration of a correct principle. Hasse's local-global principle is a particularly influential example of this.
obviously See clearly.
the proof is left as an exercise to the reader Usually applied to a claim within a larger proof when the proof of that claim can be produced routinely by any member of the audience with the necessary expertise, but is not so simple as to be obvious.
trivial Similar to clearly. A concept is trivial if it holds by definition, is an immediate corollary to a known statement, or is a simple special case of a more general concept.
Miscellaneous
This section features terms used across different areas in mathematics, or terms that do not typically appear in more specialized glossaries. For the terms used only in some specific areas of mathematics, see glossaries in :Category:Glossaries of mathematics.
See also
Glossary of areas of mathematics
List of mathematical constants
List of mathematical symbols
:Category:Mathematical terminology
Notes
References
Bibliography
Encyclopedia of Mathematics
Jargon
Wikipedia glossaries using description lists | Glossary of mathematical jargon | [
"Mathematics"
] | 5,152 | [
"nan"
] |
1,042,263 | https://en.wikipedia.org/wiki/Poynting%27s%20theorem | In electrodynamics, Poynting's theorem is a statement of conservation of energy for electromagnetic fields developed by British physicist John Henry Poynting. It states that in a given volume, the stored energy changes at a rate given by the work done on the charges within the volume, minus the rate at which energy leaves the volume. It is only strictly true in media which are not dispersive, but it can be extended to the dispersive case.
The theorem is analogous to the work-energy theorem in classical mechanics, and mathematically similar to the continuity equation.
Definition
Poynting's theorem states that the rate of energy transfer per unit volume from a region of space equals the rate of work done on the charge distribution in the region, plus the energy flux leaving that region.
Mathematically:
$$-\frac{\partial u}{\partial t} = \nabla \cdot \mathbf{S} + \mathbf{J} \cdot \mathbf{E}$$
where:
$\partial u/\partial t$ is the rate of change of the energy density in the volume.
∇•S is the energy flow out of the volume, given by the divergence of the Poynting vector S.
J•E is the rate at which the fields do work on charges in the volume (J is the current density corresponding to the motion of charge, E is the electric field, and • is the dot product).
Integral form
Using the divergence theorem, Poynting's theorem can also be written in integral form:
$$-\frac{d}{dt}\int_V u \, dV = \oint_{\partial V} \mathbf{S} \cdot d\mathbf{A} + \int_V \mathbf{J} \cdot \mathbf{E} \, dV$$
where
S is the energy flow, given by the Poynting vector,
$u$ is the energy density in the volume,
$\partial V$ is the boundary of the volume. The shape of the volume is arbitrary but fixed for the calculation.
Continuity equation analog
In an electrical engineering context the theorem is sometimes written with the energy density term u expanded as shown. This form resembles the continuity equation:
$$\nabla \cdot \mathbf{S} + \varepsilon_0 \mathbf{E} \cdot \frac{\partial \mathbf{E}}{\partial t} + \frac{\mathbf{B}}{\mu_0} \cdot \frac{\partial \mathbf{B}}{\partial t} + \mathbf{J} \cdot \mathbf{E} = 0,$$
where
$\varepsilon_0$ is the vacuum permittivity and $\mu_0$ is the vacuum permeability.
$\varepsilon_0 \mathbf{E} \cdot \partial\mathbf{E}/\partial t$ is the density of reactive power driving the build-up of electric field,
$(\mathbf{B}/\mu_0) \cdot \partial\mathbf{B}/\partial t$ is the density of reactive power driving the build-up of magnetic field, and
$\mathbf{J} \cdot \mathbf{E}$ is the density of electric power dissipated by the Lorentz force acting on charge carriers.
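As an illustrative numerical check (an addition here, not part of the original article), consider a vacuum plane wave: one simple consequence of this energy balance is that the time-averaged Poynting flux equals c times the time-averaged energy density. The field amplitude in the Python sketch below is an arbitrary assumed value.

```python
import math

# Plane-wave check: time-averaged |E x H| should equal c times the
# time-averaged energy density u = (eps0*E^2 + B^2/mu0)/2.
eps0 = 8.8541878128e-12      # vacuum permittivity, F/m
mu0 = 4e-7 * math.pi         # vacuum permeability, H/m (classical value)
c = 1.0 / math.sqrt(eps0 * mu0)

E0 = 100.0                   # assumed peak electric field amplitude, V/m
B0 = E0 / c                  # peak magnetic field of the plane wave

S_avg = 0.5 * E0 * B0 / mu0                                # time-averaged Poynting flux
u_avg = 0.5 * (0.5 * eps0 * E0**2 + 0.5 * B0**2 / mu0)     # time-averaged energy density

print(S_avg, c * u_avg)      # the two values agree to rounding error
```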
Derivation
The rate of work done by the electromagnetic field on an infinitesimal charge dq is given by the Lorentz force law as:
$$dP = \mathbf{F} \cdot \mathbf{v} = dq \left( \mathbf{E} + \mathbf{v} \times \mathbf{B} \right) \cdot \mathbf{v} = dq\, \mathbf{E} \cdot \mathbf{v}$$
(the dot product with the magnetic term vanishes because, from the definition of the cross product, $\mathbf{v} \times \mathbf{B}$ is perpendicular to $\mathbf{v}$).
Here $dq = \rho\, dV$, where $\rho$ is the volume charge density and $\mathbf{J} = \rho \mathbf{v}$ is the current density at the point and time in question, $\mathbf{v}$ being the velocity of the charge dq; the rate of work per unit volume is therefore $\mathbf{E} \cdot \mathbf{J}$.
The rate of work done on all the charges in the volume V will be the volume integral
$$\frac{\partial W}{\partial t} = \int_V \mathbf{E} \cdot \mathbf{J} \, dV$$
By Ampère's circuital law:
$$\mathbf{J} = \nabla \times \mathbf{H} - \frac{\partial \mathbf{D}}{\partial t}$$
(Note that the H and D forms of the magnetic and electric fields are used here. The B and E forms could also be used in an equivalent derivation.)
Substituting this into the expression for rate of work gives:
$$\int_V \mathbf{E} \cdot \mathbf{J} \, dV = \int_V \left[ \mathbf{E} \cdot (\nabla \times \mathbf{H}) - \mathbf{E} \cdot \frac{\partial \mathbf{D}}{\partial t} \right] dV$$
Using the vector identity $\nabla \cdot (\mathbf{E} \times \mathbf{H}) = \mathbf{H} \cdot (\nabla \times \mathbf{E}) - \mathbf{E} \cdot (\nabla \times \mathbf{H})$:
$$\int_V \mathbf{E} \cdot \mathbf{J} \, dV = \int_V \left[ \mathbf{H} \cdot (\nabla \times \mathbf{E}) - \nabla \cdot (\mathbf{E} \times \mathbf{H}) - \mathbf{E} \cdot \frac{\partial \mathbf{D}}{\partial t} \right] dV$$
By Faraday's law:
$$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}$$
giving:
$$\int_V \mathbf{E} \cdot \mathbf{J} \, dV = -\int_V \left[ \nabla \cdot (\mathbf{E} \times \mathbf{H}) + \mathbf{H} \cdot \frac{\partial \mathbf{B}}{\partial t} + \mathbf{E} \cdot \frac{\partial \mathbf{D}}{\partial t} \right] dV$$
Continuing the derivation requires the following assumptions:
the charges are moving in a medium which is not dispersive.
the total electromagnetic energy density, even for time-varying fields, is given by
$$u = \frac{1}{2}\left( \mathbf{E} \cdot \mathbf{D} + \mathbf{B} \cdot \mathbf{H} \right)$$
It can be shown that:
$$\frac{\partial}{\partial t}\left( \frac{\mathbf{E} \cdot \mathbf{D}}{2} \right) = \mathbf{E} \cdot \frac{\partial \mathbf{D}}{\partial t}$$
and
$$\frac{\partial}{\partial t}\left( \frac{\mathbf{B} \cdot \mathbf{H}}{2} \right) = \mathbf{H} \cdot \frac{\partial \mathbf{B}}{\partial t}$$
and so:
$$\frac{\partial u}{\partial t} = \mathbf{E} \cdot \frac{\partial \mathbf{D}}{\partial t} + \mathbf{H} \cdot \frac{\partial \mathbf{B}}{\partial t}$$
Returning to the equation for the rate of work,
$$\int_V \mathbf{E} \cdot \mathbf{J} \, dV = -\int_V \left[ \nabla \cdot (\mathbf{E} \times \mathbf{H}) + \frac{\partial u}{\partial t} \right] dV$$
Since the volume is arbitrary, this can be cast in differential form as:
$$\frac{\partial u}{\partial t} = -\nabla \cdot \mathbf{S} - \mathbf{J} \cdot \mathbf{E}$$
where $\mathbf{S} = \mathbf{E} \times \mathbf{H}$ is the Poynting vector.
Poynting vector in macroscopic media
In a macroscopic medium, electromagnetic effects are described by spatially averaged (macroscopic) fields. The Poynting vector in a macroscopic medium can be defined self-consistently with microscopic theory, in such a way that the spatially averaged microscopic Poynting vector is exactly predicted by a macroscopic formalism. This result is strictly valid in the limit of low-loss and allows for the unambiguous identification of the Poynting vector form in macroscopic electrodynamics.
Alternative forms
It is possible to derive alternative versions of Poynting's theorem. Instead of the flux vector $\mathbf{E} \times \mathbf{B}$ as above, it is possible to follow the same style of derivation, but instead choose the Abraham form $\mathbf{E} \times \mathbf{H}$, the Minkowski form $\mathbf{D} \times \mathbf{B}$, or perhaps $\mathbf{D} \times \mathbf{H}$. Each choice represents the response of the propagation medium in its own way: the $\mathbf{E} \times \mathbf{B}$ form above has the property that the response happens only due to electric currents, while the $\mathbf{D} \times \mathbf{H}$ form uses only (fictitious) magnetic monopole currents. The other two forms (Abraham and Minkowski) use complementary combinations of electric and magnetic currents to represent the polarization and magnetization responses of the medium.
Modification
The derivation of the statement is dependent on the assumption that the materials the equation models can be described by a set of susceptibility properties that are linear, isotropic, homogeneous and independent of frequency. The assumption that the materials have no absorption must also be made. A modification to Poynting's theorem to account for variations includes a term for the rate of non-Ohmic absorption in a material, which can be calculated by a simplified approximation based on the Drude model.
Complex Poynting vector theorem
This form of the theorem is useful in antenna theory, where one often has to consider harmonic fields propagating in space.
In this case, using phasor notation, $\mathbf{E}(t) = \operatorname{Re}\{\mathbf{E}\, e^{j\omega t}\}$ and $\mathbf{H}(t) = \operatorname{Re}\{\mathbf{H}\, e^{j\omega t}\}$.
Then the following mathematical identity holds, where $\mathbf{J}$ is the current density:
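(The equation is missing from this copy of the article; a standard textbook form of the time-harmonic Poynting theorem for a lossless medium, written with the engineering $e^{j\omega t}$ convention — the sign of the reactive term flips under the $e^{-i\omega t}$ convention — is given below as a stand-in.)
$$\oint_{\partial V} \frac{1}{2}\left(\mathbf{E} \times \mathbf{H}^{*}\right) \cdot d\mathbf{A} = -\int_{V} \frac{1}{2}\,\mathbf{J}^{*} \cdot \mathbf{E}\, dV \;-\; 2 j \omega \int_{V} \left( w_m - w_e \right) dV, \qquad w_e = \tfrac{1}{4}\varepsilon |\mathbf{E}|^{2}, \quad w_m = \tfrac{1}{4}\mu |\mathbf{H}|^{2}$$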
Note that in free space, $\varepsilon_0$ and $\mu_0$ are real; thus,
taking the real part of the above formula, it expresses the fact that the time-averaged radiated power flowing out through the closed surface is equal to the work done on the charges.
References
External links
Eric W. Weisstein "Poynting Theorem" From ScienceWorld – A Wolfram Web Resource.
Electrodynamics
Eponymous theorems of physics
Circuit theorems | Poynting's theorem | [
"Physics",
"Mathematics"
] | 1,121 | [
"Equations of physics",
"Eponymous theorems of physics",
"Circuit theorems",
"Electrodynamics",
"Physics theorems",
"Dynamical systems"
] |
1,042,310 | https://en.wikipedia.org/wiki/Ground%20station | A ground station, Earth station, or Earth terminal is a terrestrial radio station designed for extraplanetary telecommunication with spacecraft (constituting part of the ground segment of the spacecraft system), or reception of radio waves from astronomical radio sources. Ground stations may be located either on the surface of the Earth, or in its atmosphere. Earth stations communicate with spacecraft by transmitting and receiving radio waves in the super high frequency (SHF) or extremely high frequency (EHF) bands (e.g. microwaves). When a ground station successfully transmits radio waves to a spacecraft (or vice versa), it establishes a telecommunications link. A principal telecommunications device of the ground station is the parabolic antenna.
Ground stations may have either a fixed or itinerant position. Article 1 § III of the International Telecommunication Union (ITU) Radio Regulations describes various types of stationary and mobile ground stations, and their interrelationships.
Specialized satellite Earth stations or satellite tracking stations are used to telecommunicate with satellites — chiefly communications satellites. Other ground stations communicate with crewed space stations or uncrewed space probes. A ground station that primarily receives telemetry data, or that follows space missions, or satellites not in geostationary orbit, is called a ground tracking station, or space tracking station, or simply a tracking station.
When a spacecraft or satellite is within a ground station's line of sight, the station is said to have a view of the spacecraft (see pass). A spacecraft can communicate with more than one ground station at a time. A pair of ground stations are said to have a spacecraft in mutual view when the stations share simultaneous, unobstructed, line-of-sight contact with the spacecraft.
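As a rough illustration of the line-of-sight idea (added here, not from the original article), a spacecraft can be treated as being in view of a station when its elevation above the station's local horizon exceeds some mask angle. In the Python sketch below, the function names, the 5-degree mask angle and the Earth-centred (ECEF) example coordinates in kilometres are all assumed values, and Earth is approximated as a sphere with no terrain obstructions.

```python
import numpy as np

def elevation_deg(station_ecef: np.ndarray, sat_ecef: np.ndarray) -> float:
    """Elevation angle of the satellite above the station's local horizon (spherical-Earth approximation)."""
    los = sat_ecef - station_ecef                      # line-of-sight vector
    up = station_ecef / np.linalg.norm(station_ecef)   # local "up" direction
    sin_el = np.dot(los, up) / np.linalg.norm(los)
    return float(np.degrees(np.arcsin(sin_el)))

def in_view(station_ecef, sat_ecef, mask_deg: float = 5.0) -> bool:
    return elevation_deg(np.asarray(station_ecef, float),
                         np.asarray(sat_ecef, float)) >= mask_deg

# Two stations have a spacecraft in mutual view when both see it at the same time.
station_a = [6371.0, 0.0, 0.0]       # hypothetical station position, km
station_b = [0.0, 6371.0, 0.0]       # hypothetical station 90 degrees away, km
satellite = [7000.0, 1500.0, 0.0]    # hypothetical low-Earth-orbit position, km
print(in_view(station_a, satellite), in_view(station_b, satellite))  # True False
```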
Telecommunications port
A telecommunications port — or, more commonly, teleport — is a satellite ground station that functions as a hub connecting a satellite or geocentric orbital network with a terrestrial telecommunications network, such as the Internet.
Teleports may provide various broadcasting services among other telecommunications functions, such as uploading computer programs or issuing commands over an uplink to a satellite.
In May 1984, the Dallas/Fort Worth Teleport became the first American teleport to commence operation.
Earth terminal complexes
In Federal Standard 1037C, the United States General Services Administration defined an Earth terminal complex as the assemblage of equipment and facilities necessary to integrate an Earth terminal (ground station) into a telecommunications network. FS-1037C has since been subsumed by the ATIS Telecom Glossary, which is maintained by the Alliance for Telecommunications Industry Solutions (ATIS), an international, business-oriented, non-governmental organization. The Telecommunications Industry Association also acknowledges this definition.
Satellite communications standards
The ITU Radiocommunication Sector (ITU-R), a division of the International Telecommunication Union, codifies international standards agreed-upon through multinational discourse. From 1927 to 1932, the International Consultative Committee for Radio administered standards and regulations now governed by the ITU-R.
In addition to the body of standards defined by the ITU-R, each major satellite operator provides technical requirements and standards that ground stations must meet in order to communicate with the operator's satellites. For example, Intelsat publishes the Intelsat Earth Station Standards (IESS) which, among other things, classifies ground stations by the capabilities of their parabolic antennas, and pre-approves certain antenna models. Eutelsat publishes similar standards and requirements, such as the Eutelsat Earth Station Standards (EESS). The Interagency Operations Advisory Group offers a Service Catalog describing standard services, Spacecraft Emergency Cross Support Standard, and Consultative Committee for Space Data Systems data standards.
The Teleport (originally called a Telecommunications Satellite Park) innovation was conceived and developed by Joseph Milano in 1976 as part of a National Research Council study entitled "Telecommunications for Metropolitan Areas: Near-Term Needs and Opportunities".
Networks
A network of ground stations is a group of stations located to support spacecraft communication, tracking, or both. A network is established to provide dedicated support to a specific mission, function, program or organization.
Ground station networks include:
United States Space Force Satellite Control Network (SCN)
NASA Near Space Network
NASA Deep Space Network
Russia tracking network
European Space Tracking (ESTRACK) network
ISRO Telemetry, Tracking and Command Network (ISTRAC)
JAXA Near-Earth Tracking and Control Network
China Satellite Launch and Tracking Control (CLTC)
Norway Kongsberg Satellite Services (KSAT)
Swedish Space Corporation (SSC) CONNECT ground station network
RBC Signals Global Ground Station Network
Leaf Space ground station network
Amazon Web Services Ground Station network
SatNOGS Network
Other historical networks have included:
Smithsonian Astrophysical Observatory (SAO) Optical Tracking Network
US Minitrack
Applied Physics Laboratory Transit Network (Tranet)
Interkosmos network
Major Earth stations and Earth terminal complexes
See also
References
External links
UplinkStation.com, a corporate directory of commercial teleports, satellite television operators, et al.
World Teleport Association
Satellite broadcasting
Telecommunications infrastructure | Ground station | [
"Engineering"
] | 1,026 | [
"Telecommunications engineering",
"Satellite broadcasting"
] |
1,042,498 | https://en.wikipedia.org/wiki/Spontaneous%20parametric%20down-conversion | Spontaneous parametric down-conversion (also known as SPDC, parametric fluorescence or parametric scattering) is an instantaneous nonlinear optical process that converts one photon of higher energy (namely, a pump photon) into a pair of photons (namely, a signal photon and an idler photon) of lower energy, in accordance with the law of conservation of energy and law of conservation of momentum. It is an important process in quantum optics, for the generation of entangled photon pairs, and of single photons.
Basic process
A nonlinear crystal is used to produce pairs of photons from a photon beam. In accordance with the law of conservation of energy and law of conservation of momentum, the pairs have combined energies and momenta equal to the energy and momentum of the original photon. Because the index of refraction changes with frequency (dispersion), only certain triplets of frequencies will be phase-matched so that simultaneous energy and momentum conservation can be achieved. Phase-matching is most commonly achieved using birefringent nonlinear materials, whose index of refraction changes with polarization. As a result of this, different types of SPDC are categorized by the polarizations of the input photon (the pump) and the two output photons (signal and idler). If the signal and idler photons share the same polarization with each other and with the destroyed pump photon it is deemed Type-0 SPDC; if the signal and idler photons share the same polarization to each other, but are orthogonal to the pump polarization, it is Type-I SPDC; and if the signal and idler photons have perpendicular polarizations, it is deemed Type II SPDC.
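Written in terms of vacuum wavelengths, energy conservation for the process reads 1/λ_pump = 1/λ_signal + 1/λ_idler (momentum conservation, i.e. phase matching, constrains the wave vectors analogously). The short Python sketch below is an added illustration; the 405 nm pump and 810 nm signal are example values for a common configuration.

```python
# Energy conservation in SPDC, in terms of vacuum wavelengths:
# 1/lambda_pump = 1/lambda_signal + 1/lambda_idler
def idler_wavelength_nm(pump_nm: float, signal_nm: float) -> float:
    return 1.0 / (1.0 / pump_nm - 1.0 / signal_nm)

print(idler_wavelength_nm(405.0, 810.0))  # 810.0 nm: the degenerate (equal-split) case
```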
The conversion efficiency of SPDC is typically very low, with the highest efficiencies, on the order of 4×10⁻⁶ pairs per incoming pump photon, obtained for PPLN in waveguides. However, if one half of the pair is detected at any time then its partner is known to be present. The degenerate portion of the output of a Type I down converter is a squeezed vacuum that contains only even photon number terms. The nondegenerate output of the Type II down converter is a two-mode squeezed vacuum.
Example
In a commonly used SPDC apparatus design, a strong laser beam, termed the "pump" beam, is directed at a BBO (beta-barium borate) or lithium niobate crystal. Most of the photons continue straight through the crystal. However, occasionally, some of the photons undergo spontaneous down-conversion with Type II polarization correlation, and the resultant correlated photon pairs have trajectories that are constrained along the sides of two cones whose axes are symmetrically arranged relative to the pump beam. Due to the conservation of momentum, the two photons are always symmetrically located on the sides of the cones, relative to the pump beam. In particular, the trajectories of a small proportion of photon pairs will lie simultaneously on the two lines where the surfaces of the two cones intersect. This results in entanglement of the polarizations of the pairs of photons emerging on those two lines. The photon pairs are in an equal-weight quantum superposition of the unentangled states $|H\rangle_{1}|V\rangle_{2}$ and $|V\rangle_{1}|H\rangle_{2}$, where H and V denote horizontal and vertical polarization and the subscripts 1 and 2 label the left-hand-side and right-hand-side photon.
Another crystal is KDP (potassium dihydrogen phosphate) which is mostly used in Type I down conversion, where both photons have the same polarization.
Some of the characteristics of effective parametric down-converting nonlinear crystals include:
Nonlinearity: The refractive index of the crystal changes with the intensity of the incident light. This is known as the nonlinear optical response.
Periodicity: The crystal has a regular, repeating structure. This is known as the lattice structure, which is responsible for the regular arrangement of the atoms in the crystal.
Optical anisotropy: The crystal has different refractive indices along different crystallographic axes.
Temperature and pressure sensitivity: The nonlinearity of the crystal can change with temperature and pressure, and thus the crystal should be kept in a stable temperature and pressure environment.
High nonlinear coefficient: A large nonlinear coefficient is desirable, as it allows a high rate of entangled-photon generation.
High optical damage threshold: A crystal with a high optical damage threshold can endure a high-intensity pump beam.
Transparency in the desired wavelength range: It is important for the crystal to be transparent in the wavelength range of the pump beam for efficient nonlinear interactions.
High optical quality and low absorption: The crystal should be of high optical quality and have low absorption, to minimize loss of the pump beam and of the generated entangled photons.
History
SPDC was demonstrated as early as 1967 by S. E. Harris, M. K. Oshman, and R. L. Byer, as well as by D. Magde and H. Mahr. It was first applied to experiments related to coherence by two independent pairs of researchers in the late 1980s: Carroll Alley and Yanhua Shih, and Rupamanjari Ghosh and Leonard Mandel. The duality between incoherent (Van Cittert–Zernike theorem) and biphoton emissions was found.
Applications
SPDC allows for the creation of optical fields containing (to a good approximation) a single photon. As of 2005, this is the predominant mechanism for an experimenter to create single photons (also known as Fock states). The single photons as well as the photon pairs are often used in quantum information experiments and applications like quantum cryptography and Bell test experiments.
SPDC is widely used to create pairs of entangled photons with a high degree of spatial correlation. Such pairs are used in ghost imaging, in which information is combined from two light detectors: a conventional, multi-pixel detector that does not view the object, and a single-pixel (bucket) detector that does view the object.
Alternatives
The newly observed effect of two-photon emission from electrically driven semiconductors has been proposed as a basis for more efficient sources of entangled photon pairs. Other than SPDC-generated photon pairs, the photons of a semiconductor-emitted pair usually are not identical but have different energies. Until recently, within the constraints of quantum uncertainty, the pair of emitted photons were assumed to be co-located: they are born from the same location. However, a new nonlocalized mechanism for the production of correlated photon pairs in SPDC has highlighted that occasionally the individual photons that constitute the pair can be emitted from spatially separated points.
See also
Photon upconversion
References
Quantum optics
Articles containing video clips
Light | Spontaneous parametric down-conversion | [
"Physics"
] | 1,349 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Quantum optics",
"Quantum mechanics",
"Electromagnetic spectrum",
"Waves",
"Light"
] |
1,042,514 | https://en.wikipedia.org/wiki/Mongolian%20gerbil | The Mongolian gerbil or Mongolian jird (Meriones unguiculatus) is a rodent belonging to the subfamily Gerbillinae. Their body size is typically , with a tail, and body weight , with adult males larger than females. The animal is used in science and research or kept as a small house pet. Their use in science dates back to the latter half of the 19th century, but they only started to be kept as pets in the English-speaking world after 1954, when they were brought to the United States. However, their use in scientific research has fallen out of favor.
Taxonomy and evolution
The first known mention of gerbils came in 1866, by Father Armand David, who sent "yellow rats" to the French National Museum of Natural History in Paris, from northern China. They were named Gerbillus unguiculatus by the scientist Alphonse Milne-Edwards in 1867.
There is a popular misconception about the meaning of this scientific name, appearing both in printed works and in websites, due to the genus Meriones sharing the name with Greek warrior Meriones in Homer's Iliad; however, translations like "clawed warrior" are incorrect. The genus was named by Johann Karl Wilhelm Illiger in 1811, deriving from the Greek word μηρος (femur). Combined with 'unguiculate', meaning to have claws or nails in Latin, the name can be loosely translated as 'clawed femur'.
Habitat
Mongolian gerbils inhabit grassland, shrubland and desert, including semidesert and steppes in China, Mongolia, and the Russian Federation.
Soil on the steppes is sandy and is covered with grasses, herbs, and shrubs. The steppes have cool, dry winters and hot summers. The temperature can get up to , but the average temperature for most of the year is around .
In the wild, these gerbils live in patriarchal groups generally consisting of one parental pair, the most recent litter, and a few older pups; sometimes the dominant female's sister(s) also live with them. Only the dominant female will produce pups, and she will mostly mate with the dominant male while in estrus (heat); female gerbils are generally more loyal than male gerbils. One group of gerbils generally ranges over .
A group lives in a central burrow with 10–20 exits. Some deeper burrows with only one to three exits in their territory may exist. These deeper burrows are used to escape from predators when they are too far from the central burrow. A group's burrows often interconnect with other groups.
In science
Gerbils have a long history of use in scientific research, although nowadays they are rarely used. For example, in the United Kingdom in 2017, only around 300 Mongolian gerbils were used in experimental procedures, compared to over 2 million mice.
Tumblebrook Farm
Most gerbils used in scientific research are derived from the Tumblebrook Farm strain, which has its origins in 20 pairs of wild-caught Mongolian gerbils sent to Japan in 1935. Eleven of these animals were subsequently sent to Dr. V. Schwentker's Tumblebrook Farm in Brant Lake, New York, United States in 1954, with additional animals later sent to Charles River Ltd in Italy in 1996.
Hearing
Gerbils have a wide hearing range, from detection of low frequency foot drumming to higher frequency chirps and therefore may be a more suitable model of human hearing loss than mice and rats, which are high-frequency specialists.
Vocal
Male gerbils can produce ultrasonic sounds with frequencies ranging from approximately 27 to 35 kHz and amplitudes ranging from approximately 0 to 70 dBa. Their larynx is involved in the production of these ultrasonic sounds. Experimentation revealed five findings of interest, which are that adults only emit ultrasonic sounds when stimulated socially, males signal more frequently than females, dominant males are more active in vocalizations than subordinate males, ultrasounds are triggered by conspecific odors, and that d-amphetamine, a central nervous system stimulant, contributes high levels of ultrasounds while chlorpromazine, an antipsychotic medication, lowers the emission rate.
Epilepsy
10–20% of gerbils exhibit spontaneous epileptiform seizures, typically in response to a stressor such as handling or cage cleaning. Epilepsy in gerbils has a genetic basis, and seizure-prone and seizure-resistant lines have been bred.
Diabetes
Like other desert rodents such as fat sandrats, Mongolian gerbils are susceptible to diet-induced diabetes, although incidence is low. A diabetes-prone line has recently been generated, showing that gerbil diabetes has at least some genetic basis.
Genetics and genomics
Laboratory gerbils are derived from a small number of founders, and so genetic diversity was generally assumed to be low. Initial genetic studies based on small numbers of genetic markers appeared to support this, but more recent genome-wide Genotyping-by-Sequencing (GBS) data has shown that genetic diversity is actually quite high. It has been suggested that laboratory gerbils should be considered domesticated, and designated "M. unguiculatus forma domestica" to differentiate them from wild animals. A Mongolian gerbil genome sequence was published in 2018 and a genetic map comprising 22 linkage groups (one per chromosome) in 2019.
Reproduction
In the wild, Mongolian gerbils breed in February and October. Males do not become sexually mature for about 70–80 days, while the vaginal opening occurs in females about 33–50 days after birth. Other gerbils, such as the hairy-footed gerbil, reach sexual maturity in a slightly earlier and longer window of 60–90 days, compared with the later and shorter window of 70–84 days for Mongolian gerbils. Females reach sexual maturity shortly after this opening occurs. They experience oestrus cycles every 4–6 days. Mongolian gerbils are regarded as monogamous in the scientific literature. Even so, in laboratory studies of their reproductive behaviour, many Mongolian gerbils have shown signs of promiscuity, mating with other females while their monogamous partner is absent.
Gerbils are for the most part selective when picking a mate. An average litter for the Mongolian gerbil is around 4–8 pups. If the litter contains only 1–2 young, the mother will neglect them and they will die of starvation. Mongolian gerbils are monogamous and mate with their selected partner for life. When their mate dies, many gerbils refrain from seeking other mates to reproduce with; males generally find new mates, whereas females may not, and when older females lose their mate they almost always give up on seeking reproduction. Their behavior tends to vary with setting: in the wild, the large population of gerbils means that finding and selecting a mate is not a problem, whereas in a laboratory setting many gerbils tend to keep to themselves and refrain from copulation.
Behavior
Gerbils are social animals, and live in groups in the wild. They rely on their sense of smell to identify other members of their clan. Gerbils are known to attack and often kill those carrying an unfamiliar scent. Groups of gerbils often have a "dominant" gerbil which may "bully" the others by humping them.
Relationship with humans
As pets
A gentle and hardy animal, the Mongolian gerbil has become a popular small house pet. It was first brought from China to Paris in the 19th century, and became a popular house pet there. It was later brought to the United States in 1954 by Dr. Victor Schwentker for use in research. Dr. Schwentker soon recognized their potential as pet animals. Selective breeding for the pet trade has resulted in a wide range of different color and pattern varieties. Gerbils became popular pets in the US around the late 1950s and were imported to the United Kingdom in 1964, where they became popular pets too. They are now found in pet shops throughout the UK and the US.
However, due to the threat they pose to indigenous ecosystems and existing agricultural operations, it is illegal to purchase, import, or keep a gerbil as a pet in the U.S. state of California. It is also illegal to import the animal into New Zealand and Australia.
Gerbils are typically not aggressive, and they rarely bite unprovoked or without stress. They are small and easy to handle, since they are sociable creatures that enjoy the company of humans and other gerbils. Gerbils also have adapted their kidneys to produce a minimum of waste to conserve body fluids, which makes them very clean with little odor. Gerbils have many different aesthetic coat patterns, such as pied slate, described below.
Health concerns
Misalignment of incisors due to injury or malnutrition may result in overgrowth, which can cause injury to the roof of the mouth. Symptoms include a dropped or loss of appetite, drooling, weight loss, or foul breath.
Common injuries are caused by gerbils being dropped or falling, often while inside of a hamster ball, which can cause broken limbs or a fractured spine (for which there is no treatment).
A common problem for all small rodents is neglect, which can cause the gerbils to not receive adequate food and water, causing serious health concerns, including dehydration, starvation, stomach ulcers, eating of bedding material, and cannibalism.
Between 20 and 50% of pet gerbils have the seizure disorder epilepsy. The seizures are thought to be caused by fright, handling, or a new environment. The attacks can be mild to severe, but do not typically appear to have any long-term effects, except for rare cases where death results from very severe seizures. A way to prevent a gerbil from having a seizure is to refrain from blowing in the animal's face (often used to "train" the pet not to bite). This technique is used in a lab environment to induce seizures for medical research.
Tumors, both benign and malignant, are fairly common in pet gerbils, and are most common in females over the age of two. Usually, the tumors involve the ovaries, causing an extended abdomen, or the skin, with tumors most often developing around the ears, feet, midabdomen, and base of the tail, appearing as a lump or abscess.
Gerbils can lose their tails due to improper handling, being attacked by another animal, or getting their tails stuck. The first sign is a loss of fur from the tip of the tail, then, the skinless tail dies off and sloughs, with the stump usually healing without complications.
The most common infectious disease in gerbils is Tyzzer's disease, a bacterial disease, which stress can make animals more susceptible to. It produces symptoms such as ruffled fur, lethargy, hunched posture, poor appetite, diarrhoea, and often death. It quickly spreads between gerbils in close contact.
A problem with the inner ear may cause a gerbil to lean noticeably to one side. This may be caused by ear infections. Gerbils with "extreme white spotting" colouring are susceptible to deafness; this is thought to be due to the lack of pigmentation in and around the ear.
Captive-bred gerbils
Many color varieties of gerbils are available in pet shops today, generally the result of years of selective breeding.
Over 20 different coat colors occur in the Mongolian gerbil, which has been captive-bred the longest.
The fat-tailed gerbil or duprasi is also kept as a pet. They are smaller than the common Mongolian gerbils, and have long, soft coats and short, fat tails, appearing more like a hamster. The variation on the normal duprasi coat is more gray in color, which may be a mutation, or it may be the result of hybrids between the Egyptian and Algerian subspecies of duprasi.
White spotting has been reported in not only the Mongolian gerbil, but also the pallid gerbil and possibly Sundervall's Jird.
A long-haired mutation, a grey agouti or chinchilla mutation, white spotting, and possibly a dilute mutation have also appeared in Shaw's jirds, and white spotting and a dilute mutation have shown up in bushy-tailed jirds.
References
External links
The National Gerbil Society (U.K.)
The American Gerbil Society
The Gerbils.com – Everything about the gerbil
The Underwhite/Underwhite Dense gene
eGerbil - For everything gerbil!
Gerbil Care
Meriones (rodent)
Rodents as pets
Rodents of Asia
Mammals described in 1867
Space-flown life | Mongolian gerbil | [
"Biology"
] | 2,686 | [
"Space-flown life"
] |
1,042,529 | https://en.wikipedia.org/wiki/Air%20launch | Air launching is the practice of releasing a rocket, missile, parasite aircraft or other aircraft payload from a mother ship or launch aircraft. The payload craft or missile is often tucked under the wing of the larger mother ship and then "dropped" while in flight. It may also be stored within a bomb bay, beneath the main fuselage or even on the back of the carrier aircraft, as in the case of the D-21 drone. Air launching provides several advantages over ground launching, giving the smaller craft an altitude and range boost, while saving it the weight of the fuel and equipment needed to take off on its own.
History
One of the earliest uses of air launching used an airship as a carrier and docking station for biplane parasite fighters. These planes would connect to their mothership through a trapeze-like rig, mounted to the top of the upper wing, that attached to a hook dangling from the bottom of the dirigible above. Fighters could be both launched and retrieved this way, giving the airship the speed and striking power of fixed-wing craft, while giving the fighters the range and lingering time of an airship. With advances in airplane technology, especially in range, the value of a dirigible mothership was reduced and the concept became obsolete.
The parasite fighter concept was later revived several times, in an attempt to solve the problem of how to protect bombers from fighter attack. The Convair B-36 was used to air launch several prototype fighters for defense, but none offered performance that could match ground-launched fighters. Even the largest bomber ever mass-produced was too small a mothership for the jet age. Docking also presented its own plethora of problems.
Air launch is the standard way to launch air-to-surface missiles and air-to-air missiles and is mainly used for rocket-powered craft, allowing them to conserve their fuel until lifted to altitude by a larger aircraft. The B-29, B-50, and B-52 have all served in the carrier role for research programs such as the Bell X-1 and X-15.
In the 1960s the Lockheed M-21, a variant of the A-12 from which the SR-71 was derived, was used to launch the D-21 drone at speeds of up to Mach 3. However, this added a degree of difficulty due to the shock wave pattern around an aircraft at supersonic speeds. After three successful tests, the fourth resulted in a collision with the carrier aircraft, in which both craft were destroyed and one crew member drowned. The project was subsequently abandoned.
During the development of the Space Shuttle orbiter in the 1970s, NASA used two modified Boeing 747 airliners, known as the Shuttle Carrier Aircraft, to launch the Space Shuttle Enterprise, a crewed atmospheric test vehicle used to test the orbiter's approach and landing capabilities. These aircraft were subsequently used throughout the Space Shuttle Program to transport the shuttles across long distances.
The Pegasus launch vehicle became the first air-launched orbital rocket when it was launched on April 5, 1990 by the private company Orbital Sciences Corporation (now a part of Northrop Grumman), from a NASA-owned B-52 Stratofortress. It has flown more than 40 times since, launched mostly from the company's own Lockheed L-1011 known as Stargazer. Orbital Sciences was developing the Pegasus II launcher that would have dropped from the purpose-built Scaled Composites Stratolaunch. Pegasus II's capacity to low Earth orbit was projected to be 13,500 pounds (6,100 kg).
In the early 2000s, the B-52 was used to launch the X-43 hypersonic testbed aircraft. More recently, the air launch method has gained popularity with commercial launch providers. The Ansari X Prize's $10 million purse was won by a team led by Burt Rutan and Paul Allen for successfully launching SpaceShipOne twice within two weeks, as the prize criteria required.
In 2010, the SpaceShipTwo launch vehicle began flight testing. It was the successor to the SpaceShipOne launch vehicle, which was retired in 2004. The first SpaceShipTwo aircraft, the VSS Enterprise, flew 35 flights successfully from 2010 to 2014. However, on 31 October 2014, the vehicle was destroyed in flight after an unintentional feathering. Michael Alsbury was killed and Peter Siebold was severely injured. In 2016, another ship, the VSS Unity, conducted its maiden flight, which went smoothly. The VSS Unity conducted 20 test flights; the last one reached space, but without commercial crew on board. On 11 July 2021, SpaceShipTwo launched six people to space: two pilots and four passengers. Virgin Galactic has repeatedly announced plans to refly the SpaceShipTwo, but these plans have been delayed by more than a year and a half, and it is currently expected to fly no earlier than the second quarter of 2023.
In 2021, Virgin Galactic announced plans for a SpaceShipThree. This would provide point-to-point transportation across the world. Two ships, the VSS Imagine and VSS Inspire, are planned. In 2022, Virgin Galactic announced plans for the new Delta-class launch vehicle. This would be able to reach orbit. It is expected to launch in 2026.
See also
Composite aircraft
Parasite aircraft
Mother ship
Pegasus (rocket)
Air launch to orbit
Airborne aircraft carrier
Rocket sled launch
Megascale engineering
SpaceShipOne
SpaceShipTwo
LauncherOne
List of Virgin Galactic launches
References
External links
A Study of Air Launch Methods for RLVs (AIAA 2001)
Launch
Missile operation
Rocketry | Air launch | [
"Engineering"
] | 1,120 | [
"Rocketry",
"Aerospace engineering"
] |
1,042,649 | https://en.wikipedia.org/wiki/History%20of%20the%20bicycle | Vehicles that have two wheels and require balancing by the rider date back to the early 19th century. The first means of transport making use of two wheels arranged consecutively, and thus the archetype of the bicycle, was the German draisine dating back to 1817. The term bicycle was coined in France in the 1860s, and the descriptive title "penny farthing", used to describe an "ordinary bicycle", is a 19th-century term.
Earliest unverified bicycle
There are several early claims regarding the invention of the bicycle, but many remain unverified.
A sketch from around 1500 AD is attributed to Gian Giacomo Caprotti, a pupil of Leonardo da Vinci, but it was described by Hans-Erhard Lessing in 1998 as a purposeful fraud, a description now generally accepted. However, the authenticity of the bicycle sketch is still vigorously maintained by followers of Augusto Marinoni, a lexicographer and philologist, who was entrusted by the Commissione Vinciana of Rome with the transcription of Leonardo's Codex Atlanticus.
Later, and equally unverified, is the contention that a certain "Comte de Sivrac" developed a célérifère in 1792, demonstrating it at the Palais-Royal in France. The célérifère supposedly had two wheels set on a rigid wooden frame and no steering, directional control being limited to that attainable by leaning. A rider was said to have sat astride the machine and pushed it along using alternate feet. It is now thought that the two-wheeled célérifère never existed (though there were four-wheelers) and it was instead a misinterpretation by the well-known French journalist Louis Baudry de Saunier in 1891.
In Japan, a pedal-powered tricycle called '陸舟奔車 (Rikushu-honsha)' was described in '新製陸舟奔車之記 (Records of a Newly Made Rikushu-honsha)' (owned by the Hikone Public Library, Hikone, Japan), written in 1732 by 平石久平次時光 (Hiraishi Kuheiji Tokimitsu) (1696-1771), a retainer of the Hikone domain. However, it was not further developed, and the practical use of bicycles in Japan did not occur until modern bicycles were imported from Europe.
19th century
1817 to 1819: The Draisine or Velocipede
The first verifiable claim for a practically used bicycle belongs to German Baron Karl von Drais Sauerbronn, a civil servant to the Grand Duke of Baden in Germany. Drais invented his Laufmaschine (German for "running machine") in 1817, that was called Draisine (English) or draisienne (French) by the press. Karl von Drais patented this design in 1818, which was the first commercially successful two-wheeled, steerable, human-propelled machine, commonly called a velocipede, and nicknamed hobby-horse or dandy horse. It was initially manufactured in Germany and France.
Hans-Erhard Lessing (Drais's biographer) found from circumstantial evidence that Drais's interest in finding an alternative to the horse was the starvation and death of horses caused by crop failure in 1816, the Year Without a Summer (following the volcanic eruption of Tambora in 1815).
On his first reported ride from Mannheim on June 12, 1817, he covered 13 km (eight miles) in less than an hour. Constructed almost entirely of wood, the draisine weighed 22 kg (48 pounds), had brass bushings within the wheel bearings, iron shod wheels, a rear-wheel brake and 152 mm (6 inches) of trail of the front-wheel for a self-centering caster effect. This design was welcomed by mechanically minded men daring to balance, and several thousand copies were built and used, primarily in Western Europe and in North America. Its popularity rapidly faded when, partly due to increasing numbers of accidents, some city authorities began to prohibit its use. However, in 1866 Paris a Chinese visitor named Bin Chun could still observe foot-pushed velocipedes. The Draisine is regarded as the first bicycle and Karl von Drais is seen as the "father of the bicycle".
The concept was picked up by a number of British cartwrights; the most notable was Denis Johnson of London, who announced in late 1818 that he would sell an improved model. Johnson called his machine a pedestrian curricle or velocipede, but the public preferred nicknames like "hobby-horse," after the children's toy, or, worse still, "dandyhorse," after the foppish men, then called dandies, who often rode them. Johnson's machine was an improvement on Drais's, being notably more elegant: his wooden frame had a serpentine shape instead of Drais's straight one, allowing the use of larger wheels without raising the rider's seat, but was still the same design.
During the summer of 1819, the "hobby-horse", thanks in part to Johnson's marketing skills and better patent protection, became the craze and fashion in London society. The dandies, the Corinthians of the Regency, adopted it, and therefore the poet John Keats referred to it as "the nothing" of the day. Riders wore out their boots surprisingly rapidly, and the fashion ended within the year, after riders on pavements (sidewalks) were fined two pounds.
1820s to 1850s: An Era of 3 and 4-Wheelers
The intervening decades of the 1820s–1850s witnessed many developments concerning human-powered vehicles often using technologies similar to the draisine, even if the idea of a workable two-wheel design, requiring the rider to balance, had been dismissed. These new machines had three wheels (tricycles) or four (quadracycles) and came in a very wide variety of designs, using pedals, treadles, and hand-cranks, but these designs often suffered from high weight and high rolling resistance. However, Willard Sawyer in Dover successfully manufactured a range of treadle-operated 4-wheel vehicles and exported them worldwide in the 1850s.
1830s: The Reported Scottish Inventions
The first mechanically propelled two-wheel vehicle is believed by some to have been built by Kirkpatrick Macmillan, a Scottish blacksmith, in 1839. A nephew later claimed that his uncle developed a rear-wheel-drive design using mid-mounted treadles connected by rods to a rear crank, similar to the transmission of a steam locomotive. Proponents associate him with the first recorded instance of a bicycling traffic offense, when a Glasgow newspaper reported in 1842 an accident in which an anonymous "gentleman from Dumfries-shire... bestride a velocipede... of ingenious design" knocked over a pedestrian in the Gorbals and was fined five shillings. However, the evidence connecting this with Macmillan is weak, since it is unlikely that the artisan Macmillan would have been termed a gentleman, nor is the report clear on how many wheels the vehicle had.
A similar machine was said to have been produced by Gavin Dalzell of Lesmahagow, circa 1845. There is no record of Dalzell ever having laid claim to inventing the machine. It is believed that he copied the idea having recognized the potential to help him with his local drapery business and there is some evidence that he used the contraption to take his wares into the rural community around his home. A replica still exists today in the Riverside Museum in Glasgow. The museum holds the honor of exhibiting the oldest bike in existence today. The first documented producer of rod-driven two-wheelers, treadle bicycles, was Thomas McCall, of Kilmarnock in 1869. The design was inspired by the French front-crank velocipede of the Lallement/Michaux type.
1853 and the invention of the first bicycle with pedal crank "Tretkurbelfahrrad" by Philipp Moritz Fischer
Philipp Moritz Fischer, who used the draisine to get to school from the age of 9, invented the pedal crank in 1853. After years of living all over Europe, he left London to go back to his native town of Schweinfurt, Bavaria, when his first son died at a young age. He built the very first bicycle with pedals in 1853; however, he did not make the invention public. The Tretkurbelfahrrad from 1853 has been preserved and is on public display in the municipal museum in Schweinfurt.
1860s and the Michaux "Velocipede", aka "Boneshaker"
The first widespread and commercially successful design was French. An example is at the Canada Science and Technology Museum, in Ottawa, Ontario. Initially developed around 1863, it sparked a fashionable craze briefly during 1868–70. Its design was simpler than the Macmillan bicycle; it used rotary cranks and pedals mounted to the front wheel hub. Pedaling made it easier for riders to propel the machine at speed, but the rotational speed limitation of this design created stability and comfort concerns which would lead to the large front wheel of the "penny farthing". It was difficult to pedal the wheel that was used for steering. The use of metal frames reduced the weight and provided sleeker, more elegant designs, and also allowed mass-production. Different braking mechanisms were used depending on the manufacturer. In England, the velocipede earned the name of "bone-shaker" because of its rigid frame and iron-banded wheels that resulted in a "bone-shaking experience for riders".
The velocipede's renaissance began in Paris during the late 1860s. Its early history is complex and has been shrouded in some mystery, not least because of conflicting patent claims: all that has been stated for sure is that a French metalworker attached pedals to the front wheel; at present, the earliest year bicycle historians agree on is 1864. The identity of the person who attached cranks is still an open question at International Cycling History Conferences (ICHC). The claims of Ernest Michaux and of Pierre Lallement, and the lesser claims of rear-pedaling Alexandre Lefebvre, have their supporters within the ICHC community.
Bicycle historian David V. Herlihy documents that Lallement claimed to have created the pedal bicycle in Paris in 1863. He had seen someone riding a draisine in 1862 then originally came up with the idea to add pedals to it. It is a fact that he filed the earliest and only patent for a pedal-driven bicycle, in the US in 1866. Lallement's patent drawing shows a machine which looks exactly like Johnson's draisine, but with the pedals and rotary cranks attached to the front wheel hub, and a thin piece of iron over the top of the frame to act as a spring supporting the seat, for a slightly more comfortable ride.
By the early 1860s, the blacksmith Pierre Michaux, besides producing parts for the carriage trade, was producing "vélocipède à pédales" on a small scale. The wealthy Olivier brothers Aimé and René were students in Paris at this time, and these shrewd young entrepreneurs adopted the new machine. In 1865 they travelled from Paris to Avignon on a velocipede in only eight days. They recognized the potential profitability of producing and selling the new machine. Together with their friend Georges de la Bouglise, they formed a partnership with Pierre Michaux, Michaux et Cie ("Michaux and company"), in 1868, avoiding use of the Olivier family name and staying behind the scenes, lest the venture prove to be a failure. This was the first company which mass-produced bicycles, replacing the early wooden frame with one made of two pieces of cast iron bolted together—otherwise, the early Michaux machines look exactly like Lallement's patent drawing. Together with a mechanic named Gabert in his hometown of Lyon, Aimé Olivier created a diagonal single-piece frame made of wrought iron which was much stronger, and as the first bicycle craze took hold, many other blacksmiths began forming companies to make bicycles using the new design. Velocipedes were expensive, and when customers soon began to complain about the Michaux serpentine cast-iron frames breaking, the Oliviers realized by 1868 that they needed to replace that design with the diagonal one which their competitors were already using, and the Michaux company continued to dominate the industry in its first years.
On the new macadam paved boulevards of Paris it was easy riding, although initially still using what was essentially horse coach technology. It was still called "velocipede" in France, but in the United States, the machine was commonly called the "bone-shaker". Later improvements included solid rubber tires and ball bearings. Lallement had left Paris in July 1865, crossed the Atlantic, settled in Connecticut and patented the velocipede, and the number of associated inventions and patents soared in the US. The popularity of the machine grew on both sides of the Atlantic and by 1868–69 the velocipede craze was strong in rural areas as well. Even in a relatively small city such as Halifax, Nova Scotia, Canada, there were five velocipede rinks, and riding schools began opening in many major urban centers. Essentially, the velocipede was a stepping stone that created a market for bicycles that led to the development of more advanced and efficient machines.
However, the Franco-Prussian war of 1870 destroyed the velocipede market in France, and the "bone-shaker" enjoyed only a brief period of popularity in the United States, which ended by 1870. There is debate among bicycle historians about why it failed in the United States, but one explanation is that American road surfaces were much worse than European ones, and riding the machine on these roads was simply too difficult. Certainly another factor was that Calvin Witty had purchased Lallement's patent, and his royalty demands soon crippled the industry. The UK was the only place where the bicycle never fell completely out of favour.
In 1869, William Van Anden of Poughkeepsie, New York, USA, invented the freewheel for the bicycle. His design placed a ratchet device in the hub of the front wheel (the driven wheel on the 'velocipede' designs of the time), which allowed the rider to propel himself forward without pedaling constantly. Initially, bicycle enthusiasts rejected the idea of a freewheel because they believed it would complicate the mechanical functions of the bicycle, which in their view was supposed to remain as simple as possible, without additional mechanisms such as the freewheel.
1870s: the high-wheel bicycle
The high-bicycle was the logical extension of the boneshaker, the front wheel enlarging to enable higher speeds (limited by the inside leg measurement of the rider), the rear wheel shrinking and the frame being made lighter. Frenchman Eugène Meyer is now regarded as the father of the high bicycle by the ICHC in place of James Starley. Meyer invented the wire-spoke tension wheel in 1869 and produced a classic high bicycle design until the 1880s.
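A rough back-of-the-envelope calculation (an illustration added here, not from the original text) shows why wheel diameter set the speed of such a direct-drive machine: one pedal revolution turns the wheel exactly once, so road speed is simply cadence times wheel circumference. The wheel size and cadence in the Python sketch below are assumed example values.

```python
import math

# Direct front-wheel drive: one pedal revolution = one wheel revolution,
# so speed = cadence * wheel circumference.
wheel_diameter_m = 60 * 0.0254   # an assumed large "ordinary" front wheel of 60 in, ~1.52 m
cadence_rpm = 60                 # an assumed comfortable pedalling rate

speed_m_per_s = (cadence_rpm / 60.0) * math.pi * wheel_diameter_m
print(round(speed_m_per_s * 3.6, 1), "km/h")  # roughly 17 km/h
```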
James Starley in Coventry added the tangent spokes and the mounting step to his famous bicycle named "Ariel". He is regarded as the father of the British cycling industry. Ball bearings, solid rubber tires and hollow-section steel frames became standard, reducing weight and making the ride much smoother. Depending on the rider's leg length, the front wheel could now have a diameter up to 60 in (1.5 m).
Much later, when this type of bicycle was beginning to be replaced by a later design, it came to be referred to as the "ordinary bicycle". (While it was in common use no such distinguishing adjective was used, since there was then no other kind.) and was later nicknamed "penny-farthing" in England (a penny representing the front wheel, and a coin smaller in size and value, the farthing, representing the rear). They were fast, but unsafe. The rider was high up in the air and traveling at a great speed. If he hit a bad spot in the road he could easily be thrown over the front wheel and be seriously injured (two broken wrists were common, in attempts to break a fall) or even killed. "Taking a header" (also known as "coming a cropper"), was not at all uncommon.
The rider's legs were often caught underneath the handlebars, so falling free of the machine was often not possible. The dangerous nature of these bicycles (as well as Victorian mores) made cycling the preserve of adventurous young men. The risk averse, such as elderly gentlemen, preferred the more stable tricycles or quadracycles. In addition, women's fashion of the day made the "ordinary" bicycle inaccessible. Queen Victoria owned Starley's "Royal Salvo" tricycle, though there is no evidence she actually rode it.
Although French and English inventors modified the velocipede into the high-wheel bicycle, the French were still recovering from the Franco-Prussian war, so English entrepreneurs put the high-wheeler on the English market, and the machine became very popular there, Coventry, Oxford, Birmingham and Manchester being the centers of the English bicycle industry (and of the arms or sewing machine industries, which had the necessary metalworking and engineering skills for bicycle manufacturing, as in Paris and St. Etienne, and in New England). Soon bicycles found their way across the English Channel. By 1875, high-wheel bicycles were becoming popular in France, though ridership expanded slowly.
In 1877, Joseph Henry Hughes' provisional patent application was allowed, titled "Improvements in the bearings of bicycles and velocipedes or carriages". Hughes, a local of Birmingham, described a ball bearing race for bicycle and carriage wheels which allowed for initial adjustment of the system to ensure optimal contacts between components, and for subsequent adjustments to compensate for wear of components from use. William Bown, an already successful owner of Bown Manufacturing Company, persuaded Hughes to sell rights to this patent to him. Having patented improvements to sewing machines and horse clippers himself, Bown also persuaded Hughes join him on further bearing innovations for the next decade. This turned into the successful Aeolus brand of ball bearings, used in the first ball-race-pedals and wheel-bearings for bicycles and carriage wheels.
In the United States, Bostonians such as Frank Weston started importing bicycles in 1877 and 1878, and Albert Augustus Pope started production of his "Columbia" high-wheelers in 1878, and gained control of nearly all applicable patents, starting with Lallement's 1866 patent. Pope lowered the royalty (licensing fee) previous patent owners charged, and took his competitors to court over the patents. The courts supported him, and competitors either paid royalties ($10 per bicycle), or he forced them out of business. There seems to have been no patent issue in France, where English bicycles still dominated the market. In 1880, G.W. Pressey invented the high-wheeler American Star Bicycle, whose smaller front wheel was designed to decrease the frequency of "headers". By 1884 high-wheelers and tricycles were relatively popular among a small group of upper-middle-class people in all three countries, the largest group being in England. Their use also spread to the rest of the world, chiefly because of the extent of the British Empire.
Pope also introduced mechanization and mass production (later copied and adopted by Ford and General Motors), vertically integrated (also later copied and adopted by Ford), advertised aggressively (as much as ten percent of all advertising in U.S. periodicals in 1898 was by bicycle makers), promoted the Good Roads Movement (which had the side benefit of acting as advertising, and of improving sales by providing more places to ride), and litigated on behalf of cyclists. (It would, however, be Western Wheel Works of Chicago which would drastically reduce production costs, and thus prices, by introducing stamping to the production process in place of machining.) In addition, bicycle makers adopted the annual model change (later derided as planned obsolescence, and usually credited to General Motors), which proved very successful.
Even so, bicycling remained the province of the urban well-to-do, and mainly men, until the 1890s, and was an example of conspicuous consumption.
The safety bicycle and the bike bubble: 1880s and 1890s
The development of the safety bicycle was arguably the most important change in the history of the bicycle. It shifted their use and public perception from being a dangerous toy for sporting young men to being an everyday transport tool for men and women of all ages.
Aside from the obvious safety problems, the high-wheeler's direct front wheel drive limited its top speed. One attempt to solve both problems with a chain-driven front wheel was the dwarf bicycle, exemplified by the Kangaroo. Inventors also tried a rear wheel chain drive. Although Harry John Lawson invented a rear-chain-drive bicycle in 1879 with his "bicyclette", it still had a huge front wheel and a small rear wheel. Detractors called it "The Crocodile", and it failed in the market.
John Kemp Starley, James Starley's nephew, produced the first successful "safety bicycle", the "Rover," in 1885, which he never patented. It featured a steerable front wheel that had significant caster, equally sized wheels and a chain drive to the rear wheel.
Widely imitated, the safety bicycle completely replaced the high-wheeler in North America and Western Europe by 1890. Meanwhile, John Dunlop's reinvention of the pneumatic bicycle tire in 1888 had made for a much smoother ride on paved streets; the previous type was quite smooth-riding when used on the dirt roads common at the time. As with the original velocipede, safety bicycles had been much less comfortable than high-wheelers precisely because of the smaller wheel size, and frames were often buttressed with complicated bicycle suspension spring assemblies. The pneumatic tire made all of these obsolete, and frame designers found a diamond pattern to be the strongest and most efficient design.
On 10 October 1899, Isaac R Johnson, an African-American inventor, lodged his patent for a folding bicycle – the first with a recognisably modern diamond frame, the pattern still used in 21st-century bicycles.
The chain drive improved comfort and speed, as the drive was transferred to the non-steering rear wheel and allowed for smooth, relaxed and injury free pedaling (earlier designs that required pedalling the steering front wheel were difficult to pedal while turning, due to the misalignment of rotational planes of leg and pedal). With easier pedaling, the rider more easily turned corners.
The pneumatic tire and the diamond frame improved rider comfort but do not form a crucial design or safety feature. A hard rubber tire on a bicycle is just as rideable but is bone jarring. The frame design allows for a lighter weight, and more simple construction and maintenance, hence lower price.
Most likely the first electric bicycle was built in 1897 by Hosea W. Libbey.
In the middle of the decade, bicycle sales were one of the few areas of the economy where sales were growing despite a severe economic depression, leading hundreds of manufacturers to enter business. This resulted in a downward spiral of market saturation, over-supply and intense price competition, eventually leading to the collapse of many manufacturers as the bicycle bubble burst.
20th century
The roadster
The ladies' version of the roadster's design was very much in place by the 1890s. It had a step-through frame rather than the diamond frame of the gentlemen's model so that ladies, with their dresses and skirts, could easily mount and ride their bicycles, and commonly came with a skirt guard to prevent skirts and dresses becoming entangled in the rear wheel and spokes. As with the gents' roadster, the frame was of steel construction and the positioning of the frame and handlebars gave the rider a very upright riding position. Though they originally came with front spoon-brakes, technological advancements meant that later models were equipped with the much-improved coaster brakes or rod-actuated rim or drum-brakes.
The Dutch cycle industry grew rapidly from the 1890s onwards. Since by then it was the British who had the strongest and best-developed market in bike design, Dutch framemakers either copied British designs or imported bicycles from England. In 1895, 85 percent of all bikes bought in the Netherlands were from Britain; the vestiges of that influence can still be seen in the solid, gentlemanly shape of a traditional Dutch bike even now.
Though the ladies' version of the roadster largely fell out of fashion in England and many other Western nations as the 20th century progressed, it remains popular in the Netherlands; this is why some people refer to bicycles of this design as Dutch bikes. In Dutch the name of these bicycles is Omafiets ("grandma's bike").
Popularity in Europe, decline in US
Cycling steadily became more important in Europe over the first half of the twentieth century, but it dropped off dramatically in the United States between 1900 and 1910. Automobiles became the preferred means of transportation. Over the 1920s, bicycles gradually became considered children's toys, and by 1940 most bicycles in the United States were made for children. In Europe cycling remained an adult activity, and bicycle racing, commuting, and "cyclotouring" were all popular activities. In addition, specialist bicycles for children appeared before 1916.
From the early 20th century until after World War II, the roadster constituted most adult bicycles sold in the United Kingdom and in many parts of the British Empire. For many years after the advent of the motorcycle and automobile, they remained a primary means of adult transport. Major manufacturers in England were Raleigh and BSA, though Carlton, Phillips, Triumph, Rudge-Whitworth, Hercules, and Elswick Hopper also made them.
Technical innovations
Bicycles continued to evolve to suit the varied needs of riders. The derailleur developed in France between 1900 and 1910 among cyclotourists, and was improved over time. Only in the 1930s did European racing organizations allow racers to use gearing; until then they were forced to use a two-speed bicycle. The rear wheel had a sprocket on either side of the hub. To change gears, the rider had to stop, remove the wheel, flip it around, and remount the wheel. When racers were allowed to use derailleurs, racing times immediately dropped.
World War II
Although multiple-speed bicycles were widely known by this time, most or all military bicycles used in the Second World War were single-speed. Bicycles were used by paratroopers during the war to help them with transportation, creating the term "bomber bikes" to refer to US planes dropping bikes for troops to use. The German Volksgrenadier units each had a battalion of bicycle infantry attached. The Invasion of Poland saw many bicycle-riding scouts in use, with each bicycle company using 196 bicycles and 1 motorcycle. By September 1939, there were 41 bicycle companies mobilized.
During the Second Sino-Japanese War, Japan used around 50,000 bicycle troops. The Malayan Campaign saw many bicycles used. The Japanese confiscated bicycles from civilians due to the abundance of bicycles among the civilian population. Japanese bicycle troops were efficient in both speed and carrying capacity, as they could carry a heavier load of equipment than a normal British soldier could.
China and the Flying Pigeon
The Flying Pigeon was at the forefront of the bicycle phenomenon in the People's Republic of China. The vehicle was the government approved form of transport, and the nation became known as zixingche wang guo (自行车王国) the 'Kingdom of Bicycles'. A bicycle was regarded as one of the three "must-haves" of every citizen, alongside a sewing machine and watch – essential items in life that also offered a hint of wealth. The Flying Pigeon bicycle became a symbol of an egalitarian social system that promised little comfort but a reliable ride through life.
Throughout the 1960s and 1970s, the logo became synonymous with almost all bicycles in the country. The Flying Pigeon became the single most popular mechanized vehicle on the planet, becoming so ubiquitous that Deng Xiaoping — the post-Mao leader who launched China's economic reforms in the 1970s — defined prosperity as "a Flying Pigeon in every household".
In the early 1980s, Flying Pigeon was the country's biggest bike manufacturer, selling 3 million cycles in 1986. Its 20-kilo black single-speed models were popular with workers, and there was a waiting list of several years to get one, and even then buyers needed good guanxi (relationship) in addition to the purchase cost, which was about four months' wages for most workers.
North America: Cruiser vs. racer
At mid-century there were two predominant bicycle styles for recreational cyclists in North America. Heavyweight cruiser bicycles, preferred by the typical (hobby) cyclist, featuring balloon tires, pedal-driven "coaster" brakes and only one gear, were popular for their durability, comfort, streamlined appearance, and a significant array of accessories (lights, bells, springer forks, speedometers, etc.). Lighter cycles, with hand brakes, narrower tires, and a three-speed hub gearing system, often imported from England, first became popular in the United States in the late 1950s. These comfortable, practical bicycles usually offered generator-powered headlamps, safety reflectors, kickstands, and frame-mounted tire pumps. In the United Kingdom, like the rest of Europe, cycling was seen as less of a hobby, and lightweight but durable bikes had been preferred for decades.
In the United States, the sports roadster was imported after World War II, and was known as the "English racer". It quickly became popular with adult cyclists seeking an alternative to the traditional youth-oriented cruiser bicycle. While the English racer was no racing bike, it was faster and better for climbing hills than the cruiser, thanks to its lighter weight, tall wheels, narrow tires, and internally geared rear hubs. In the late 1950s, U.S. manufacturers such as Schwinn began producing their own "lightweight" version of the English racer.
In the late 1960s, Americans' increasing consciousness of the value of exercise and later the advantage of energy efficient transportation led to the American bike boom of the 1970s. Annual U.S. sales of adult bicycles doubled between 1960 and 1970, and doubled again between 1971 and 1975, the peak years of the adult cycling boom in the United States, eventually reaching nearly 17 million units.
Most of these sales were to new cyclists, who overwhelmingly preferred models imitating popular European derailleur-equipped racing bikes, variously called sports models, sport/tourers, or simply ten-speeds, to the older roadsters with hub gears which remained much the same as they had been since the 1930s. These lighter bicycles, long used by serious cyclists and by racers, featured dropped handlebars, narrow tires, derailleur gears, five to fifteen speeds, and a narrow 'racing' type saddle. By 1980, racing and sport/touring derailleur bikes dominated the market in North America. The fatbike was invented for off-road use in 1980.
Europe
In Britain, the utility roadster declined noticeably in popularity during the early 1970s, as a boom in recreational cycling caused manufacturers to concentrate on lightweight, affordable derailleur sport bikes, which were in fact slightly modified versions of the racing bicycles of the era.
In the early 1980s, Swedish company Itera invented a new type of bicycle, made entirely of plastic. It was a commercial failure.
In the 1980s, UK cyclists began to shift from road-only bicycles to all-terrain models such as the mountain bike. The mountain bike's sturdy frame and load-carrying ability gave it additional versatility as a utility bike, usurping the role previously filled by the roadster. By 1990, the roadster was almost dead; while annual UK bicycle sales reached an all-time record of 2.8 million, almost all of them were mountain and road/sport models.
BMX bikes
BMX bikes are specially designed bicycles that usually have 16 to 24-inch wheels (the norm being the 20-inch wheel), which originated in the state of California in the early 1970s when teenagers imitated their motocross heroes on their bicycles. Children were racing standard road bikes off-road, around purpose-built tracks in the Netherlands. The 1971 motorcycle racing documentary On Any Sunday is generally credited with inspiring the movement nationally in the US. In the opening scene, kids are shown riding their Schwinn Sting-Rays off-road. It was not until the middle of the decade that the sport achieved critical mass, and manufacturers began creating bicycles designed specially for the sport.
It has grown into an international sport with several different disciplines such as Freestyle, Racing, Street, and Flatland.
Mountain bikes
In 1981, the first mass-produced mountain bike appeared, intended for use off-pavement over a variety of surfaces. It was an immediate success, and examples flew off retailers' shelves during the 1980s, their popularity spurred by the novelty of all-terrain cycling and the increasing desire of urban dwellers to escape their surroundings via mountain biking and other extreme sports. These cycles featured sturdier frames, wider tires with large knobs for increased traction, a more upright seating position (to allow better visibility and shifting of body weight), and increasingly, various front and rear suspension designs. By 2000, mountain bike sales had far outstripped that of racing, sport/racer, and touring bicycles.
21st century
The 21st century has seen a continued application of technology to bicycles (which started in the 20th century): in designing them, building them, and using them. Bicycle frames and components continue to get lighter and more aerodynamic without sacrificing strength largely through the use of computer aided design, finite element analysis, and computational fluid dynamics. Recent discoveries about bicycle stability have been facilitated by computer simulations. Once designed, new technology is applied to manufacturing such as hydroforming and automated carbon fiber layup. Finally, electronic gadgetry has expanded from just cyclocomputers to now include cycling power meters and electronic gear-shifting systems.
Hybrid and commuter bicycles
In recent years, bicycle designs have trended towards increased specialization, as the number of casual, recreational and commuter cyclists has grown. For these groups, the industry responded with the hybrid bicycle, sometimes marketed as a city bike, cross bike, or commuter bike. Hybrid bicycles combine elements of road racing and mountain bikes, though the term is applied to a wide variety of bicycle types.
Hybrid bicycles and commuter bicycles can range from fast and light racing-type bicycles with flat bars and other minimal concessions to casual use, to wider-tired bikes designed primarily for comfort, load-carrying, and increased versatility over a range of different road surfaces. Enclosed hub gears have become popular again – now with up to 8, 11 or 14 gears – for such bicycles due to ease of maintenance and improved technology.
Recumbent bicycle
The recumbent bicycle was invented in 1893. In 1934, the Union Cycliste Internationale banned recumbent bicycles from all forms of officially sanctioned racing, at the behest of the conventional bicycle industry, after relatively little-known Francis Faure beat world champion Henri Lemoine and broke Oscar Egg's hour record by half a mile while riding Mochet's Velocar. Some authors assert that this resulted in the stagnation of the upright racing bike's frame geometry which has remained essentially unchanged for 70 years. This stagnation finally started to reverse with the formation of the International Human Powered Vehicle Association which holds races for "banned" classes of bicycle. Sam Whittingham set a human powered speed record of 132 km/h (82 mph) on level ground in a faired recumbent streamliner in 2009 at Battle Mountain.
While historically most bike frames have been steel, recent designs, particularly of high-end racing bikes, have made extensive use of carbon and aluminum frames.
Recent years have also seen a resurgence of interest in balloon tire cruiser bicycles for their low-tech comfort, reliability, and style.
In addition to influences derived from the evolution of American bicycling trends, European, Asian and African cyclists have also continued to use traditional roadster bicycles, as their rugged design, enclosed chainguards, and dependable hub gearing make them ideal for commuting and utility cycling duty.
See also
Bicycling and feminism
Bike boom, also known as "bicycle craze", a name used for several periods in cycling history
Cyclability
Hour record
Timeline of transportation technology
Electric bicycle
References
Further reading
Bijker, Wiebe E. (1995). Of bicycles, bakelites, and bulbs: toward a theory of sociotechnical change. Cambridge, Massachusetts: MIT Press. .
Cycle History vol. 1–24, Proceedings of the International Cycling History Conference (ICHC), 1990–2014
Friss, Evan. The Cycling City: Bicycles and Urban America in the 1890s (University of Chicago Press, 2015). x, 267 pp.
Tony Hadland & Hans-Erhard Lessing: Bicycle Design – An Illustrated History. The MIT-Press, Cambridge (USA) 2014,
David Gordon Wilson Bicycling Science 3rd ed. 2004
David V. Herlihy Bicycle – The History. 2004
Hans-Erhard Lessing Automobilitaet – Karl Drais und die unglaublichen Anfaenge, 2003 (in German)
Pryor Dodge The Bicycle 1996 (French ed 1996, German eds 1997, 2002, 2007)
How I Saved The British Empire. Reminiscences of a Bicycling Tour of Great Britain in the Year 1901 A novel released by Ailemo Books in July 2015. Author Michael Waldock. . Library of Congress: 2015909543.
External links
International Cycling History Conference (ICHC)
Karl-Drais memorial
Karl Drais seen by ADFC Mannheim – Focus on events in Mannheim, being the place of his invention. A 3-page Drais biography is available in more than 15 languages.
Menotomy Vintage Bicycles – Antique bicycle photos, features, price guide and research tools.
Metz Bicycle Museum in Freehold, NJ
Myths and Milestones in Bicycle Evolution by William Hudson (accessed 2005-11-17)
A Quick History of Bicycles from the Pedaling History Bicycle Museum (accessed 2005-01-06)
Bicyclette of Harry John Lawson
VeloPress has published dozens of books on the history of cycling and the bicycle.
The Wheelmen organization
History of technology | History of the bicycle | [
"Technology"
] | 8,032 | [
"Science and technology studies",
"History of science and technology",
"History of technology"
] |
1,042,670 | https://en.wikipedia.org/wiki/IBM%208100%20DPCX | DPCX (Distributed Processing Control eXecutive) was an operating system for the IBM 8100 small computer system. IBM hoped it would help their installed base of IBM 3790 customers migrate to the 8100 and the DPPX operating system. It was mainly deployed to support a word processing system, Distributed Office Support Facility (DOSF) which was derived from the earlier IBM 3730 word processing system.
Like DPPX, it was written in the PL/S-like PL/DS language. The applications, including much of DOSF, however, were written in an interpreted language that was "compiled" using the System/370 assembler macro facility.
The 8100/DPCX/DOSF system was the first type of distributed system to connect to the IBM Distributed Office Support System (DISOSS) running on a data host. Later versions of DISOSS relied on SNA Distribution System (SNADS) and eventually became peer-to-peer communication of documents which complied with Document Interchange Architecture (DIA) and Document Content Architecture (DCA) as other types of distributed system gained DISOSS support – Scanmaster, Displaywriter, and 5520 Office System.
References
8100 DPCX | IBM 8100 DPCX | [
"Technology"
] | 245 | [
"Operating system stubs",
"Computing stubs"
] |
1,042,720 | https://en.wikipedia.org/wiki/Universal%20Media%20Disc | The Universal Media Disc (UMD) is a discontinued optical disc medium developed by Sony for use on its PlayStation Portable handheld gaming and multimedia platform. It can hold up to 1.8 gigabytes of data and is capable of storing video games, feature-length films, and music. UMD is the trademark of Sony Computer Entertainment for their optical disk cartridge (ODC).
Video storage format
While the primary application for UMD discs is as a storage medium for PSP games, the format is also used for the storage of motion pictures and, to a lesser degree, television shows for playback on the PSP. The video is encoded in the H.264/MPEG-4 AVC format, with the audio in ATRAC3plus or PCM. Video stored on UMD is typically encoded in 720×480 resolution, but is scaled down when displayed on the PSP. To date, there are around 1,500 films released on UMD (around 1,000 are common for all regions and around 500 are region exclusives).
The American punk rock band The Offspring released their Complete Music Video Collection on the format. The BBC released a number of its programmes on UMD in the UK, including The Office, The Mighty Boosh, Doctor Who and Little Britain. WWE also released some wrestler highlights and documentary content on UMD format, such as the Monday Night War, Jake "The Snake" Roberts: Pick Your Poison, and WWE Raw Homecoming (a special episode of WWE Raw celebrating the return to USA Network); the only WWE pay-per-view released on UMD format was WrestleMania XXIV.
Tupac's performance, Live at the House of Blues, was also released on the UMD, which also included several music videos, including Hit 'Em Up.
Some adult films have been released on UMD in Japan. Sony reportedly took offence at adult film studios publishing pornography on the medium, but claimed that they were unable to restrict films on UMD like with games and other software for the PSP.
Specifications
ECMA-365: Data Interchange on 60 mm Read-Only ODC – Capacity: 1.8 GB (UMD)
Dimensions: approx. 64 mm (diameter) × 4.2 mm (thickness)
Maximum capacity: 1.80 GB (dual layer), 900 MB (single-layer)
Laser wavelength: 660 nm (red laser)
Numerical aperture: 0.64
Track pitch: 0.70 μm
Minimum pit length: 0.1384 μm
Modulation: 8-to-16 RLL(2,10)
Encryption: AES 128-bit
The case dimensions for UMD discs are 177×104×14mm.
Provisions
According to the official ECMA specification Sony designed the UMD to support two possible future enhancements and products.
Protective Shutter: Similar to the MiniDisc and 3-inch floppy disk, this protective shutter would shield the inner disc from accidental contact.
Auto-Loading: UMDs were designed for possible future slot loading devices with Auto-Loading mechanisms. These would be very similar to the auto-loading mechanism used in slot loading MiniDisc home and car decks. It would also be similar to the Sony U-Matic auto-loading mechanism. Unlike the current clamshell loading design the PSP uses, a slot loading device using an Auto-Loading mechanism would be motorized and completely automatic. The user would insert the disc into the device slot, the motorized mechanism would then take over and draw the disc inside the drive completing the loading process. The disc would also be ejected fully automatically by the motorized mechanism, like a VCR. This would also mean that power would be required in order to insert or eject a disc.
Region coding
DVD region coding has been applied to most UMD movies and music. However, all PSP games are region-free, although some require pay-to-continue.
Region ALL: Worldwide (region-free)
Region 1: North America, Central America, Latin America
Region 2: Europe (without Russia or Belarus), Japan, Middle East, South Africa, Greenland
Region 3: Southeast Asia, Taiwan, South Korea, Hong Kong
Region 4: Oceania, South America
Region 5: Russia, Ukraine, Belarus, India, Pakistan, Africa (without Egypt or South Africa), North Korea, Mongolia
Region 6: China
Availability and support
UMD offers large capacity and the capability to store quality audio/video content; however, the format's proprietary nature and the lack of writers and accompanying blank media made adoption difficult. The UMD format never saw implementation on any device other than the PlayStation Portable, and as a result the market was very limited compared to those of other optical media formats. The high price of UMD movie releases was another contributing factor: they often retailed at prices comparable to DVD, but lacked extra content. Poor sales of UMD movies early in the format's life caused major studios like Universal and Paramount to rescind their support. Retail support of the format experienced similar troubles, and in 2006, Wal-Mart began phasing out shelf space devoted to UMD movies.
In late 2009, Sony began pushing developers away from the UMD format and towards digital distribution on the PlayStation Network in preparation for the launch of the digital-download-only PSP Go, which was the first (and only) PSP model to not include a UMD drive. However, the system experienced lackluster sales compared to previous models, with most consumers still choosing the UMD-compatible PSP-3000 model, which continued to be sold alongside the PSP Go. Despite the earlier push for PlayStation Network releases around the PSP Go's launch, over half of the PSP's library was only made available in UMD format including Crisis Core: Final Fantasy VII and Kingdom Hearts Birth by Sleep. There have been a few PlayStation Network exclusive releases since the PSP Go's launch, such as LocoRoco Midnight Carnival. Still, most new games continued to be distributed via UMD, and, aside from those published by SCE, not all have been released on PlayStation Network.
The successor of the PlayStation Portable, the PlayStation Vita, did not include UMD support, nor was it added during the system's lifespan. In a move similar to the PSP Go, Sony focused on digital downloads and opted for low-profile flash-based cartridges as the system's main media format. UMD releases of films ended in 2011. Games were published on UMD up until 2013.
UMD discs can be dumped into disc image files (.iso or .cso) using a modified PSP. These image files can then be loaded by a modified PSP from the Memory Stick, similar to titles that were distributed through the PlayStation Network.
See also
List of optical disc manufacturers
MiniDisc—a similar Sony format
References
External links
Sony PSP Movie Sales Strong Article on the early success of movie sales on UMD from MP3 Newswire
Breaking news: Sony's UMDs aren't selling well News story about the disappointing UMD sales.
UMD Movie Database
Partial archive of film releases on UMD
Audiovisual introductions in 2004
Audio storage
Discontinued media formats
Ecma standards
Japanese inventions
PlayStation Portable
Products and services discontinued in 2016
Video game distribution
Video storage | Universal Media Disc | [
"Technology"
] | 1,476 | [
"Computer standards",
"Ecma standards"
] |
1,042,722 | https://en.wikipedia.org/wiki/Resource%20Access%20Control%20Facility | RACF [pronounced Rack-Eff], short for Resource Access Control Facility, is an IBM software product. It is a security system that provides access control and auditing functionality for the z/OS and z/VM operating systems. RACF was introduced in 1976. Originally called RACF it was renamed to z/OS Security Server (RACF), although most mainframe folks still refer to it as RACF.
Its main features are:
Identification and verification of a user via user id and password check (authentication)
Identification, classification and protection of system resources
Maintenance of access rights to the protected resources (access control)
Controlling the means of access to protected resources
Logging of accesses to a protected system and protected resources (auditing)
RACF establishes security policies rather than just permission records. It can set permissions for file patterns—that is, set the permissions even for files that do not yet exist. Those permissions are then used for the file (or other object) created at a later time.
Community
There is a long-established technical support community for RACF based around a LISTSERV operated out of the University of Georgia. The list is called RACF-L and is described as the RACF Discussion List. The email address of the listserv is RACF-L@LISTSERV.UGA.EDU, and the list can also be viewed via a web portal at https://listserv.uga.edu/scripts/wa-UGA.exe
Books
The first text book published (first printing December 2007) aimed at giving security professionals an introduction to the concepts and conventions of how RACF is designed and administered was Mainframe Basics for Security Professionals: Getting Started with RACF by Ori Pomerantz, Barbara Vander Weele, Mark Nelson, and Tim Hahn.
Evolution
RACF has continuously evolved to support such modern security features as digital certificates/public key infrastructure services, LDAP interfaces, and case sensitive IDs/passwords. The latter is a reluctant concession to promote interoperability with other systems, such as Unix and Linux. The underlying zSeries (now IBM Z) hardware works closely with RACF. For example, digital certificates are protected within tamper-proof cryptographic processors. Major mainframe subsystems, especially Db2, use RACF to provide multi-level security (MLS).
Its primary competitors have been ACF2 and TopSecret, both now produced by CA Technologies.
References
External links
What is RACF?
RACF - An Overview
IBM mainframe operating systems
Operating system security
IBM mainframe technology | Resource Access Control Facility | [
"Technology"
] | 548 | [
"Computer security stubs",
"Computing stubs"
] |
1,042,727 | https://en.wikipedia.org/wiki/Data%20set%20%28IBM%20mainframe%29 | In the context of IBM mainframe computers in the S/360 line, a data set (IBM preferred) or dataset is a computer file having a record organization. Use of this term began with, e.g., DOS/360, OS/360, and is still used by their successors, including the current z/OS. Documentation for these systems historically preferred this term rather than file.
A data set is typically stored on a direct access storage device (DASD) or magnetic tape, however unit record devices, such as punch card readers, card punches, line printers and page printers can provide input/output (I/O) for a data set (file).
Data sets are not unstructured streams of bytes, but rather are organized in various logical record and block structures determined by the DSORG (data set organization), RECFM (record format), and other parameters. These parameters are specified at the time of the data set allocation (creation), for example with Job Control Language DD statements. Within a running program they are stored in the Data Control Block (DCB) or Access Control Block (ACB), which are data structures used to access data sets using access methods.
Records in a data set may be fixed, variable, or “undefined” length.
Data set organization
For OS/360, the DCB's DSORG parameter specifies how the data set is organized. It may be
CQ – Queued Telecommunications Access Method (QTAM) in a Message Control Program (MCP)
CX – Communications line group
DA – Basic Direct Access Method (BDAM)
GS – Graphics device for the Graphics Access Method (GAM)
IS – Indexed Sequential Access Method (ISAM)
MQ – QTAM message queue in an application program
PO – Partitioned Organization
PS – Physical Sequential
among others.
Data sets on tape may only be DSORG=PS. The choice of organization depends on how the data is to be accessed, and in particular, how it is to be updated.
Programmers utilize various access methods (such as QSAM or VSAM) in programs for reading and writing data sets. Access method depends on the given data set organization.
Record format (RECFM)
Regardless of organization, the physical structure of each record is essentially the same, and is uniform throughout the data set. This is specified in the DCB RECFM parameter. RECFM=F means that the records are of fixed length, specified via the LRECL parameter. RECFM=V specifies a variable-length record. V records when stored on media are prefixed by a Record Descriptor Word (RDW) containing the integer length of the record in bytes and flag bits. With RECFM=FB and RECFM=VB, multiple logical records are grouped together into a single physical block on tape or DASD. FB and VB are fixed-blocked, and variable-blocked, respectively. RECFM=U (undefined) is also variable length, but the length of the record is determined by the length of the block rather than by a control field.
The BLKSIZE parameter specifies the maximum length of the block. RECFM=FBS can also be specified, meaning fixed-blocked standard: all blocks except possibly the last are required to be of the full BLKSIZE length. RECFM=VBS, or variable-blocked spanned, means a logical record can be spanned across two or more blocks, with flags in the RDW indicating whether a record segment is continued into the next block and/or was continued from the previous one.
This mechanism eliminates the need for using any "delimiter" byte value to separate records. Thus data can be of any type, including binary integers, floating-point, or characters, without introducing a false end-of-record condition. The data set is an abstraction of a collection of records, in contrast to files as unstructured streams of bytes.
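As a rough sketch of the variable-length layout just described (an illustrative assumption, not IBM's published control-block definition; the struct and field names are invented and big-endian byte order is ignored), a RECFM=V record in a buffer can be modelled in C as follows:
#include <stdio.h>
#include <stdint.h>

/* Illustrative model of the 4-byte Record Descriptor Word that prefixes a
   RECFM=V logical record: a halfword record length (which includes the RDW
   itself) followed by a halfword that is zero for plain V/VB records and
   carries segment-control flags for spanned (VBS) records. */
struct rdw {
    uint16_t length;   /* logical record length in bytes, RDW included */
    uint16_t control;  /* 0 for V/VB; spanning flags for VBS segments */
};

int main(void) {
    struct rdw r = { 80, 0 };   /* a hypothetical 80-byte logical record */
    printf("user data bytes in this record: %u\n", r.length - 4u);
    return 0;
}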
Partitioned data set
A partitioned data set (PDS)
is a data set containing multiple members, each of which holds a separate sub-data set, similar to a directory in other types of file systems. This type of data set is often used to hold load modules (old format bound executable programs), source program libraries (especially Assembler macro definitions), ISPF screen definitions, and Job Control Language. A PDS may be compared to a Zip file or COM Structured Storage.
A Partitioned Data Set can only be allocated on a single volume and have a maximum size of 65,535 tracks.
Besides members, a PDS contains also a directory. Each member can be accessed indirectly via the directory structure. Once a member is located, the data stored in that member are handled in the same manner as a PS (sequential) data set.
Whenever a member is deleted, the space it occupied is unusable for storing other data. Likewise, if a member is re-written, it is stored in a new spot at the back of the PDS and leaves wasted “dead” space in the middle. The only way to recover “dead” space is to perform file compression. Compression, which is done using the IEBCOPY utility,
moves all members to the front of the data space and leaves free usable space at the back. (Note that in modern parlance, this kind of operation might be called defragmentation or garbage collection; data compression nowadays refers to a different, more complicated concept.) PDS files can only reside on DASD, not on magnetic tape, in order to use the directory structure to access individual members. Partitioned data sets are most often used for storing multiple job control language files, utility control statements, and executable modules.
An improvement of this scheme is a Partitioned Data Set Extended (PDSE or PDS/E, sometimes just libraries) introduced with DFSMSdfp for MVS/XA and MVS/ESA systems. A PDS/E library can store program objects or other types of members, but not both. BPAM cannot process a PDS/E containing program objects.
PDS/E structure is similar to PDS and is used to store the same types of data. However, PDS/E files have a better directory structure which does not require pre-allocation of directory blocks when the PDS/E is defined (and therefore does not run out of directory blocks if not enough were specified). Also, PDS/E automatically stores members in such a way that compression operation is not needed to reclaim "dead" space. PDS/E files can only reside on DASD in order to use the directory structure to access individual members.
Generation Data Group
A Generation Data Group (GDG) is a group of non-VSAM data sets that are successive generations of historically-related data stored on an IBM mainframe (running OS or DOS/VSE).
A GDG is usually cataloged.
An individual member of the GDG collection is called a "Generation Data Set." The latter may be identified by an absolute generation number of the form GxxxxVyy (for example G0001V00), or a relative number: (-1) for the previous generation, (0) for the current one, and (+1) for the next generation.
GDG JCL & features
Generation Data Groups are defined using either the BLDG statement of the IEHPROGM utility or the DEFINE GENERATIONGROUP statement of the newer IDCAMS utility, which allows setting various parameters.
For example, a generation limit parameter would restrict the group to 10 generations, and a retention parameter would keep each generation, up to the generation limit, for at least 91 days.
IDCAMS can also delete (and optionally uncatalog) a GDG.
References
Introduction to the New Mainframe: z/OS Basics , Ch. 5, "Working with data sets", March 29, 2011.
Data management
IBM mainframe operating systems
Computer file systems
Computer files | Data set (IBM mainframe) | [
"Technology"
] | 1,606 | [
"Data management",
"Data"
] |
1,042,735 | https://en.wikipedia.org/wiki/IEFBR14 | IEFBR14 is an IBM mainframe utility program. It runs in all IBM mainframe environments derived from OS/360, including z/OS. It is a placeholder that returns the exit status zero, similar to the true command on UNIX-like systems.
Purpose
Allocation (also called Initiation)
On OS/360 and derived mainframe systems, most programs never specify files (usually called datasets) directly, but instead reference them indirectly through the Job Control Language (JCL) statements that invoke the programs. These data definition (or "DD") statements can include a "disposition" (DISP=...) parameter that indicates how the file is to be managed — whether a new file is to be created or an old one re-used; and whether the file should be deleted upon completion or retained; etc.
IEFBR14 was created because while DD statements can create or delete files easily, they cannot do so without a program to be run due to a certain peculiarity of the Job Management system, which always requires that the Initiator actually execute a program, even if that program is effectively a null statement. The program used in the JCL does not actually need to use the files to cause their creation or deletion — the DD DISP=... specification does all the work. Thus a very simple do-nothing program was needed to fill that role.
IEFBR14 can thus be used to create or delete a data set using JCL.
Deallocation (also called Termination)
A secondary reason to run IEFBR14 was to unmount devices (usually tapes or disks) that had been left mounted from a previous job, perhaps because of an error in that job's JCL or because the job ended in error. In either event, the system operators would often need to demount the devices, and a started task – DEALLOC – was often provided for this purpose.
Simply entering the command
S DEALLOC
at the system console would run the started task, which consisted of just one step. However, due to the design of Job Management, DEALLOC must actually exist in the system's procedure library, SYS1.PROCLIB, lest the start command fail.
Also, all such started tasks must be a single jobstep as the "Started Task Control" (STC) module within the Job Management component of the operating system only accepts single-step jobs, and it fails all multi-step jobs, without exception.
//STEP01 EXEC PGM=IEFBR14
Parsing and validation
At least on z/OS, branching off to execute another program would cause the calling program to be evaluated for syntax errors at that point.
Naming
The "IEF" derives from a convention on mainframe computers that programs supplied by IBM were grouped together by function or creator and that each group shared a three-letter prefix. In OS/360, the first letter was almost always "I", and the programs produced by the Job Management group (including IEFBR14) all used the prefix "IEF". Other common prefixes included "IEB" for dataset utility programs, "IEH" for system utility programs, and "IEW" for program linkage and loading. Other major components were (and still are) "IEA" (Operating System Supervisor) and "IEC" (Input/Output Supervisor).
As explained below, "BR 14" was the essential function of the program, to simply return to the operating system. This portion of a program name was often mnemonic — for example, IEBUPDTE was the dataset utility (IEB) that applied updates (UPDTE) to source code files, and IEHINITT was the system utility (IEH) that initialized (INIT) magnetic tape labels (T).
As explained further in "Usage" below, the name "BR14" comes from the IBM assembler-language instruction "Branch (to the address in) Register 14", which by convention is used to "return from a subroutine". Most early users of OS/360 were familiar with IBM Assembler Language and would have recognized this at once.
Usage
Example JCL would be :
//IEFBR14 JOB ACCT,'DELETE DATASET',MSGCLASS=J,CLASS=A
//STEP0001 EXEC PGM=IEFBR14
//DELDD DD DSN=xxxxx.yyyyy.zzzzz,
// DISP=(MOD,DELETE,DELETE),UNIT=DASD
To create a Partitioned Data Set:
//TZZZ84R JOB NOTIFY=&SYSUID,MSGCLASS=X
//STEP01 EXEC PGM=IEFBR14
//DD1 DD DSN=TKOL084.DEMO,DISP=(NEW,CATLG,DELETE),
// DCB=(RECFM=FB,LRECL=80,BLKSIZE=80,DSORG=PO),
// SPACE=(TRK,(1,1,1),RLSE),
// UNIT=SYSDA
Implementation
IEFBR14 consisted initially of a single instruction, a "Branch to Register 14". The mnemonic used in the IBM Assembler was BR, and hence the name: IEF BR 14. BR 14 is identically equivalent to BCR 15,14 (Branch Always [ mask = 15 = always ] to the address contained in general purpose register 14). BR is a pseudo instruction for BCR 15. The system assembler accepts many cases of such pseudo-instructions, as logical equivalents to the canonical System/360 instructions. The canonical instance of BR 14 is BCR 15,14.
The linkage convention for OS/360 and its descendants requires that a program be invoked with register 14 containing the address to return control to when complete, and register 15 containing the address at which the called program is loaded into memory; at completion, the program loads a return code in register 15, and then branches to the address contained in register 14. But, initially IEFBR14 was not coded with these characteristics in mind, as IEFBR14 was initially used as a dummy control section, one which simply returned to the caller, not as an executable module.
The original version of the program did not alter register 15 at all, as its original application was as a placeholder in certain load modules which were generated during Sysgen (system generation), not as an executable program per se. Since IEFBR14 was always invoked by the functional equivalent of the canonical BALR 14,15 instruction, the return code in register 15 was always non-zero. Later, a second instruction was added to clear the return code so that the program would exit with a determinate status, namely zero. Initially, programmers were not using all properties of the Job Control Language anyway, so an indeterminate return code was not a problem. However, subsequently programmers were indeed using these properties, so a determinate status became mandatory. This modification to IEFBR14 did not in any way impact its original use as a placeholder.
The machine code for the modified program is:
SR R15,R15 put zero completion code into register 15
BR R14 branch to the address in register 14 (which is actually an SVC 3 instruction in the Communications Vector Table)
The equivalent machine code, eliminating the BR for clarity, is:
SR R15,R15 put zero completion code into register 15
SVC 3 issue EXIT SVC to terminate the jobstep
This makes perfect sense as the OS/360 Initiator initially "attaches" the job-step task using the ATTACH macro-instruction (SVC 42), and "unwinding" the effect of this ATTACH macro (it being a Type 2 SVC instruction) must be a complementary instruction, namely an EXIT macro (necessarily a Type 1 SVC instruction, SVC 3).
See also
true - the UNIX-equivalent "do nothing" program
References
Trombetta, Michael & Finkelstein Sue Carolyn (1985). "OS JCL and utilities". Addison Wesley. page 152.
IBM mainframe operating systems | IEFBR14 | [
"Technology",
"Engineering"
] | 1,718 | [
"Software engineering",
"Computer science",
"Software",
"nan"
] |
1,042,798 | https://en.wikipedia.org/wiki/Jettying | Jettying (jetty, jutty, from Old French getee, jette) is a building technique used in medieval timber-frame buildings in which an upper floor projects beyond the dimensions of the floor below. This has the advantage of increasing the available space in the building without obstructing the street. Jettied floors are also termed jetties. In the U.S., the most common surviving colonial version of this is the garrison house. Most jetties are external, but some early medieval houses were built with internal jetties.
Structure
A jetty is an upper floor that depends on a cantilever system in which a horizontal beam, the jetty bressummer, supports the wall above and projects forward beyond the floor below (a technique also called oversailing). The bressummer (or breastsummer) itself rests on the ends of a row of jetty beams or joists which are supported by jetty plates. Jetty joists in their turn were slotted sideways into the diagonal dragon beams at angle of 45° by means of mortise and tenon joints.
The overhanging corner posts are often reinforced by curved jetty brackets.
The origins of jettying are unclear but some reasons put forward for their purpose are:
to gain space.
the structural advantage of the jettied wall counteracting forces in the joists or tying a stone wall together
to shelter the lower walls of the house from the weather.
to simplify joinery.
uses shorter timbers, a benefit due to timber shortages and difficult handling of long timbers especially in city streets.
as a "symbol of wealth and status."
Jetties were popular in the 16th century, but were banned in Rouen in 1520 over concerns about air circulation and the plague, and in London in 1667 in the aftermath of the Great Fire. They are considered a Gothic style.
Structurally, jetties are of several types:
framed on multiple joists.
framed on a few beams.
framed on brackets added to the posts.
hewn jetty also called a false jetty: Framed on projections of the posts rather than on cantilevered beams (or brackets).
Vertical elements
The vertical elements of jetties can be summarized as:
the more massive corner posts of the timber frame that support the dragon beam from the floor below and are supported in their turn by the dragon beam for the extended floor above.
the less substantial studs of the close studding along the walls above and below the jetty.
Horizontal elements
The horizontal elements of jetties are:
the jetty breastsummer (or bressummer), the sill on which the projecting wall above rests; the bressummer stretches across the whole width of the jetty wall
the dragon-beam which runs diagonally from one corner to another, and supports the corner posts above and is supported in turn by the corner posts below
the jetty beams or joists which conform to the greater dimensions of the floor above but rest at right angles on the jetty-plates that conform to the shorter dimensions of the floor below. The jetty beams are morticed at 45° into the sides of the dragon beams. They are the main constituents of the cantilever system and they determine how far the jetty projects
the jetty-plates, designed to carry the jetty-beams or joints. The jetty-plate itself is supported by the corner posts of the recessed floor below.
Cantilever
Jettying was used for timber-framed buildings, but was succeeded by the cantilever, which is used for the same reason as jettying: to maximise space in buildings. This is often utilised on buildings which are on a narrow plot where space is at a premium.
Forebay
The Pennsylvania barn in the U.S. has a distinctive cantilever called a forebay, not a jetty.
Mediterranean area
The traditional Turkish house is a half-timbered house with a cantilevered or supported overhang called a cumba.
In the North African Maghreb, houses in medieval city kasbahs often featured jetties. Contemporary examples still survive in the Casbah of Algiers.
The House of Opus Craticum, built before AD 79 in Roman Herculaneum, has a supported cantilever.
See also
Cantilever – modern buildings still use cantilevered floors, but the term jettying is rarely used. See for example 945 Madison Avenue in New York.
Machicolation
Overhang (architecture)
Corbels, brackets that may be under a jetty
References
Timber framing
Architectural elements
Medieval architecture
"Technology",
"Engineering"
] | 931 | [
"Timber framing",
"Building engineering",
"Structural system",
"Architectural elements",
"Components",
"Architecture"
] |
1,042,859 | https://en.wikipedia.org/wiki/131%20%28number%29 | 131 (one hundred thirty one) is the natural number following 130 and preceding 132.
In mathematics
131 is a Sophie Germain prime, an irregular prime, the second 3-digit palindromic prime, and also a permutable prime with 113 and 311. It can be expressed as the sum of three consecutive primes, 131 = 41 + 43 + 47. 131 is an Eisenstein prime with no imaginary part and real part of the form 3n − 1. Because the next odd number, 133, is a semiprime, 131 is a Chen prime. 131 is an Ulam number.
131 is a full reptend prime in base 10 (and also in base 2). The decimal expansion of 1/131 repeats the digits 007633587786259541984732824427480916030534351145038167938931 297709923664122137404580152671755725190839694656488549618320 6106870229 indefinitely.
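As an illustrative check (not drawn from the article's sources), the full reptend property can be verified by computing the multiplicative order of 10 modulo 131, which equals the period of the decimal expansion of 1/131 and must be 130 for a full reptend prime:
#include <stdio.h>

int main(void) {
    const int p = 131;
    int remainder = 1, period = 0;
    /* Simulate the long division of 1/131: the expansion repeats once a
       remainder recurs, and starting from 1 the first recurrence is back
       at 1 after ord_131(10) steps. */
    do {
        remainder = (remainder * 10) % p;
        period++;
    } while (remainder != 1);
    printf("period of 1/%d in base 10 = %d\n", p, period);   /* prints 130 */
    return 0;
}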
131 is the fifth discriminant of imaginary quadratic fields with class number 5, where the 131st prime number 739 is the fifteenth such discriminant. Meanwhile, there are conjectured to be a total of 131 discriminants of class number 8 (only one more discriminant could exist).
References
Integers | 131 (number) | [
"Mathematics"
] | 299 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
1,042,866 | https://en.wikipedia.org/wiki/Acute%20toxicity | Acute toxicity describes the adverse effects of a substance that result either from a single exposure or from multiple exposures in a short period of time (usually less than 24 hours). To be described as acute toxicity, the adverse effects should occur within 14 days of the administration of the substance.
Acute toxicity is distinguished from chronic toxicity, which describes the adverse health effects from repeated exposures, often at lower levels, to a substance over a longer time period (months or years).
It is widely considered unethical to use humans as test subjects for acute (or chronic) toxicity research. However, some information can be gained from investigating accidental human exposures (e.g., factory accidents). Otherwise, most acute toxicity data comes from animal testing or, more recently, in vitro testing methods and inference from data on similar substances.
Measures of acute toxicity
Regulatory values
Limits for short-term exposure, such as STELs or CVs, are defined only if there is a particular acute toxicity associated with a substance. These limits are set by the American Conference of Governmental Industrial Hygienists (ACGIH) and the Occupational Safety and Health Administration (OSHA), based on experimental data. The values set by these organizations do not always coincide exactly, and in the chemical industry it is general practice to choose the most conservative value in order to ensure the safety of employees. The values can typically be found in a material safety data sheet. There are also different values based on the method of entry of the compound (oral, dermal, or inhalation).
Threshold limit value-time-weighted-average: The maximum concentration to which a worker can be exposed every work day (8 hours) and experience no adverse health effects.
Short-Term Exposure Limit, STEL or Threshold limit value-short-term exposure limit, TLV-STEL: The concentration which no person should be exposed to for more than 15 minutes during an 8-hour work day.
Ceiling value, CV or Threshold limit value-ceiling, TLV-C: The concentration which no person should ever be exposed to.
Experimental values
No-observed-adverse-effect level, NOAEL
Lowest-observed-adverse-effect level, LOAEL
Maximum tolerable concentration, MTC, LC0; Maximum tolerable dose, MTD, LD0
Minimum lethal concentration, LCmin; Minimum lethal dose, LDmin
Median lethal concentration, LC50; Median lethal dose, LD50; Median lethal time, LT50
Absolute lethal concentration, LC100; Absolute lethal dose, LD100
The most referenced value in the chemical industry is the median lethal dose, or LD50. This is the dose of a substance which results in the death of 50% of test subjects (typically mice or rats) in the laboratory.
Responses and treatments
When a person has been exposed to an acutely toxic dose of a substance, they can be treated in a number of ways in order to minimize the harmful effects. The severity of the response is related to the severity of the toxic response exhibited. These treatment methods include (but are not limited to):
Emergency showers used for removing irritating or hazardous chemicals from the skin.
Emergency eye washes used for removing any irritating or hazardous chemicals from the eyes.
Activated charcoal used to bind and remove harmful substances consumed orally. This is used as an alternative to conventional stomach pumping.
References
Toxicology | Acute toxicity | [
"Environmental_science"
] | 690 | [
"Toxicology"
] |
1,042,902 | https://en.wikipedia.org/wiki/GNU%20MPFR | The GNU Multiple Precision Floating-Point Reliable Library (GNU MPFR) is a GNU portable C library for arbitrary-precision binary floating-point computation with correct rounding, based on GNU Multi-Precision Library.
Library
MPFR's computation is efficient and has well-defined semantics: the functions are completely specified on all possible operands, and the results do not depend on the platform. This is done by copying the ideas from the ANSI/IEEE-754 standard for fixed-precision floating-point arithmetic (correct rounding and exceptions, in particular). More precisely, its main features are:
Support for special numbers: signed zeros (+0 and −0), infinities and not-a-number (a single NaN is supported: MPFR does not differentiate between quiet NaNs and signaling NaNs).
Each number has its own precision (in bits since MPFR uses radix 2). The floating-point results are correctly rounded to the precision of the target variable, in one of the five supported rounding modes (including the four from IEEE 754-1985).
Supported functions: MPFR implements all mathematical functions from C99 and other usual mathematical functions: the logarithm and exponential in natural base, base 2 and base 10, the log(1+x) and exp(x)−1 functions (log1p and expm1), the six trigonometric and hyperbolic functions and their inverses, the gamma, zeta and error functions, the arithmetic–geometric mean, the power (xy) function. All those functions are correctly rounded over their complete range.
Subnormal numbers are not supported, but can be emulated with the mpfr_subnormalize function.
MPFR is not able to track the accuracy of numbers in a whole program or expression; this is not its goal. Interval arithmetic packages like Arb, MPFI, or Real RAM implementations like iRRAM, which may be based on MPFR, can do that for the user.
MPFR is dependent upon the GNU Multiple Precision Arithmetic Library (GMP).
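A minimal usage sketch (assuming MPFR and GMP are installed and the program is linked with something like gcc example.c -lmpfr -lgmp) illustrates the per-variable precision and explicit rounding modes described above:
#include <stdio.h>
#include <mpfr.h>

int main(void) {
    mpfr_t x, e;
    mpfr_init2(x, 200);              /* each variable carries its own precision, here 200 bits */
    mpfr_init2(e, 200);
    mpfr_set_d(x, 1.0, MPFR_RNDN);   /* MPFR_RNDN: round to nearest */
    mpfr_exp(e, x, MPFR_RNDN);       /* e = exp(1), correctly rounded to 200 bits */
    mpfr_printf("e = %.50Rf\n", e);  /* print 50 decimal digits */
    mpfr_clear(x);
    mpfr_clear(e);
    return 0;
}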
MPFR is needed to build the GNU Compiler Collection (GCC). Other software uses MPFR, such as ALGLIB, CGAL, FLINT, GNOME Calculator, the Julia language implementation, the Magma computer algebra system, Maple, GNU MPC, and GNU Octave.
References
External links
Official MPFR web site
C (programming language) libraries
Computer arithmetic
Free software programmed in C
GNU Project software
Numerical libraries
Software using the GNU Lesser General Public License | GNU MPFR | [
"Mathematics"
] | 524 | [
"Computer arithmetic",
"Arithmetic"
] |
1,043,036 | https://en.wikipedia.org/wiki/Bertrand%27s%20ballot%20theorem | In combinatorics, Bertrand's ballot problem is the question: "In an election where candidate A receives p votes and candidate B receives q votes with p > q, what is the probability that A will be strictly ahead of B throughout the count under the assumption that votes are counted in a randomly picked order?" The answer is
The result was first published by W. A. Whitworth in 1878, but is named after Joseph Louis François Bertrand who rediscovered it in 1887.
In Bertrand's original paper, he sketches a proof based on a general formula for the number of favourable sequences using a recursion relation. He remarks that it seems probable that such a simple result could be proved by a more direct method. Such a proof was given by Désiré André, based on the observation that the unfavourable sequences can be divided into two equally probable cases, one of which (the case where B receives the first vote) is easily computed; he proves the equality by an explicit bijection. A variation of his method is popularly known as André's reflection method, although André did not use any reflections.
Bertrand's ballot theorem is related to the cycle lemma. They give similar formulas, but the cycle lemma considers circular shifts of a given ballot counting order rather than all permutations.
Example
Suppose there are 5 voters, of whom 3 vote for candidate A and 2 vote for candidate B (so p = 3 and q = 2). There are ten equally likely orders in which the votes could be counted:
AAABB
AABAB
ABAAB
BAAAB
AABBA
ABABA
BAABA
ABBAA
BABAA
BBAAA
For the order AABAB, the tally of the votes as the election progresses is:
For each column the tally for A is always larger than the tally for B, so A is always strictly ahead of B. For the order AABBA the tally of the votes as the election progresses is:
Vote counted: A A B B A
Tally for A:  1 2 2 2 3
Tally for B:  0 0 1 2 2
For this order, B is tied with A after the fourth vote, so A is not always strictly ahead of B.
Of the 10 possible orders, A is always ahead of B only for AAABB and AABAB. So the probability that A will always be strictly ahead is 2/10 = 1/5,
and this is indeed equal to (3 - 2)/(3 + 2) = 1/5, as the theorem predicts.
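This small count is easy to verify by brute force. The C sketch below (purely illustrative; it assumes a GCC- or Clang-style __builtin_popcount and is practical only for small p + q) enumerates every counting order of p votes for A and q votes for B and checks whether A stays strictly ahead:

```c
#include <stdio.h>

int main(void)
{
    const int p = 3, q = 2, n = p + q;
    int favourable = 0, total = 0;

    for (unsigned mask = 0; mask < (1u << n); mask++) {
        if (__builtin_popcount(mask) != p)
            continue;                              /* need exactly p votes for A */
        total++;
        int lead = 0, ok = 1;
        for (int i = 0; i < n; i++) {
            lead += ((mask >> i) & 1) ? 1 : -1;    /* set bit = vote for A */
            if (lead <= 0) { ok = 0; break; }      /* A must stay strictly ahead */
        }
        favourable += ok;
    }
    /* For p = 3, q = 2 this prints "2 of 10", i.e. (p - q)/(p + q) = 1/5. */
    printf("%d of %d orders keep A strictly ahead\n", favourable, total);
    return 0;
}
```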
Equivalent problems
Favourable orders
Rather than computing the probability that a random vote counting order has the desired property, one can instead compute the number of favourable counting orders, then divide by the total number of ways in which the votes could have been counted. (This is the method that was used by Bertrand.) The total number of ways is the binomial coefficient C(p + q, p), the number of ways to choose which of the p + q positions in the count are votes for A; Bertrand's proof shows that the number of favourable orders in which to count the votes is ((p - q)/(p + q)) C(p + q, p) (though he does not give this number explicitly). And indeed after division this gives (p - q)/(p + q).
Random walks
Another equivalent problem is to calculate the number of random walks on the integers that consist of n steps of unit length, beginning at the origin and ending at the point m, that never become negative. As n and m have the same parity and , this number is
When and is even, this gives the Catalan number . Thus the probability that a random walk is never negative and returns to origin at time is . By Stirling's formula, when , this probability is .
[Note that n and m have the same parity, as follows: let P be the number of "positive" moves, i.e., to the right, and let N be the number of "negative" moves, i.e., to the left. Since P + N = n and P - N = m, we have P = (n + m)/2 and N = (n - m)/2. Since P and N are integers, n and m have the same parity.]
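The connection to Catalan numbers is easy to check by direct enumeration. The following C sketch (illustrative only, and practical only for small n) counts +1/-1 walks of n steps that start at the origin, never become negative, and end at m; for m = 0 and n = 2k it reproduces the Catalan numbers 1, 2, 5, 14, ...:

```c
#include <stdio.h>

/* Count n-step walks of +1/-1 moves that start at 0, never go negative,
 * and end at position m, by enumerating all 2^n step sequences. */
static long count_walks(int n, int m)
{
    long count = 0;
    for (unsigned long mask = 0; mask < (1ul << n); mask++) {
        int pos = 0, ok = 1;
        for (int i = 0; i < n && ok; i++) {
            pos += ((mask >> i) & 1) ? 1 : -1;   /* set bit = step to the right */
            if (pos < 0)
                ok = 0;                          /* walk became negative */
        }
        if (ok && pos == m)
            count++;
    }
    return count;
}

int main(void)
{
    /* Prints 1, 2, 5, 14 for n = 2, 4, 6, 8 -- the Catalan numbers. */
    for (int k = 1; k <= 4; k++)
        printf("n = %d, m = 0: %ld walks\n", 2 * k, count_walks(2 * k, 0));
    return 0;
}
```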
Proof by reflection
For A to be strictly ahead of B throughout the counting of the votes, there can be no ties. Separate the counting sequences according to the first vote. Any sequence that begins with a vote for B must reach a tie at some point, because A eventually wins. For any sequence that begins with A and reaches a tie, reflect the votes up to the point of the first tie (so any A becomes a B, and vice versa) to obtain a sequence that begins with B; for example, AABBA first ties after the fourth vote, and reflecting those four votes gives the sequence BBAAA. Hence every sequence that begins with A and reaches a tie is in one-to-one correspondence with a sequence that begins with B, and the probability that a sequence begins with B is q/(p + q), so the probability that A always leads the vote is
1 - (the probability of sequences that tie at some point)
= 1 - (the probability of sequences that tie at some point and begin with A or B)
= 1 - 2 × (the probability of sequences that tie at some point and begin with B)
= 1 - 2 × (the probability that a sequence begins with B)
= 1 - 2q/(p + q) = (p - q)/(p + q).
Proof by induction
Another method of proof is by mathematical induction:
We loosen the condition to p ≥ q. Clearly, the theorem is correct when p = q, since in this case the first candidate will not be strictly ahead after all the votes have been counted (so the probability is 0).
Clearly the theorem is true if p > 0 and q = 0 when the probability is 1, given that the first candidate receives all the votes; it is also true when p = q > 0 as we have just seen.
Assume it is true both when p = a − 1 and q = b, and when p = a and q = b − 1, with a > b > 0. (We don't need to consider the case here, since we have already disposed of it before.) Then considering the case with p = a and q = b, the last vote counted is either for the first candidate with probability a/(a + b), or for the second with probability b/(a + b). So the probability of the first being ahead throughout the count to the penultimate vote counted (and also after the final vote) is:
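Spelling out that weighted sum with the two inductive hypotheses, which give probabilities (a - 1 - b)/(a - 1 + b) and (a - b + 1)/(a + b - 1) for the two cases, the omitted algebra can be carried out as follows (a sketch):

\[
\frac{a}{a+b}\cdot\frac{a-1-b}{a-1+b} + \frac{b}{a+b}\cdot\frac{a-b+1}{a+b-1}
= \frac{a(a-b-1)+b(a-b+1)}{(a+b)(a+b-1)}
= \frac{(a-b)(a+b-1)}{(a+b)(a+b-1)}
= \frac{a-b}{a+b}.
\]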
And so it is true for all p and q with p > q > 0.
Proof by the cycle lemma
A simple proof is based on the cycle lemma of Dvoretzky and Motzkin.
Call a ballot sequence dominating if A is strictly ahead of B throughout the counting of the votes. The cycle lemma asserts that any sequence of p A's and q B's, where p > q, has precisely p - q dominating cyclic permutations. To see this, just arrange the given sequence of p A's and q B's in a circle and repeatedly remove adjacent pairs AB until only p - q A's remain. Each of these A's was the start of a dominating cyclic permutation before anything was removed. So p - q out of the p + q cyclic permutations of any arrangement of p A votes and q B votes are dominating.
Proof by martingales
Let . Define the "backwards counting" stochastic process
where is the lead of candidate A over B, after votes have come in.
Claim: is a martingale process. Given , we know that , so of the first votes, were for candidate A, and were for candidate B. So, with probability , we have , and . Similarly for the other one. Then compute to find . Define the stopping time as either the minimum such that , or if there's no such . Then the probability that candidate A leads all the time is just , which by the optional stopping theorem is
Bertrand's and André's proofs
Bertrand expressed the solution as
where is the total number of voters and is the number of voters for the first candidate. He states that the result follows from the formula
where is the number of favourable sequences, but "it seems probable that such a simple result could be shown in a more direct way". Indeed, a more direct proof was soon produced by Désiré André. His approach is often mistakenly labelled "the reflection principle" by modern authors but in fact uses a permutation. He shows that the "unfavourable" sequences (those that reach an intermediate tie) consist of an equal number of sequences that begin with A as those that begin with B. Every sequence that begins with B is unfavourable, and there are C(p + q - 1, p) such sequences: a B followed by an arbitrary sequence of (q-1) B's and p A's. Each unfavourable sequence that begins with A can be transformed to an arbitrary sequence of (q-1) B's and p A's by finding the first B that violates the rule (by causing the vote counts to tie) and deleting it, and interchanging the order of the remaining parts. To reverse the process, take any sequence of (q-1) B's and p A's and search from the end to find where the number of A's first exceeds the number of B's, and then interchange the order of the parts and place a B in between. For example, the unfavourable sequence AABBABAA corresponds uniquely to the arbitrary sequence ABAAAAB. From this, it follows that the number of favourable sequences of p A's and q B's is C(p + q, p) - 2 C(p + q - 1, p),
and thus the required probability is (p - q)/(p + q),
as expected.
Variant: ties allowed
The original problem is to find the probability that the first candidate is always strictly ahead in the vote count. One may instead consider the problem of finding the probability that the second candidate is never ahead (that is, ties are allowed). In this case, the answer is (p - q + 1)/(p + 1).
The variant problem can be solved by the reflection method in a similar way to the original problem. The number of possible vote sequences is C(p + q, q). Call a sequence "bad" if the second candidate is ever ahead, and if the number of bad sequences can be enumerated then the number of "good" sequences can be found by subtraction and the probability can be computed.
Represent a voting sequence as a lattice path on the Cartesian plane as follows:
Start the path at (0, 0)
Each time a vote for the first candidate is received move right 1 unit.
Each time a vote for the second candidate is received move up 1 unit.
Each such path corresponds to a unique sequence of votes and will end at (p, q). A sequence is 'good' exactly when the corresponding path never goes above the diagonal line y = x; equivalently, a sequence is 'bad' exactly when the corresponding path touches the line y = x + 1.
For each 'bad' path P, define a new path P′ by reflecting the part of P up to the first point it touches the line across it. P′ is a path from (−1, 1) to (p, q). The same operation applied again restores the original P. This produces a one-to-one correspondence between the 'bad' paths and the paths from (−1, 1) to (p, q). The number of these paths is C(p + q, q - 1), and so that is the number of 'bad' sequences. This leaves the number of 'good' sequences as C(p + q, q) - C(p + q, q - 1).
Since there are C(p + q, q) sequences altogether, the probability of a sequence being good is 1 - C(p + q, q - 1)/C(p + q, q) = (p - q + 1)/(p + 1).
In fact, the solutions to the original problem and the variant problem are easily related. For candidate A to be strictly ahead throughout the vote count, they must receive the first vote and for the remaining votes (ignoring the first) they must be either strictly ahead or tied throughout the count. Hence the solution to the original problem is
(p/(p + q)) × (((p - 1) - q + 1)/((p - 1) + 1)) = (p/(p + q)) × ((p - q)/p) = (p - q)/(p + q),
as required.
Conversely, the tie case can be derived from the non-tie case. Note that the number of non-tie sequences with p + 1 votes for A is equal to the number of tie sequences with p votes for A. The number of non-tie sequences with p + 1 votes for A is ((p + 1 - q)/(p + 1 + q)) C(p + 1 + q, q), which by algebraic manipulation is ((p + 1 - q)/(p + 1)) C(p + q, q), so the fraction of sequences with p votes for A (with ties allowed) is (p + 1 - q)/(p + 1).
Notes
References
Ballot theorems, old and new, L. Addario-Berry, B.A. Reed, 2007, in Horizons of combinatorics, Editors Ervin Győri, G. Katona, Gyula O. H. Katona, László Lovász, Springer, 2008,
External links
The Ballot Problem (includes scans of the original French articles and English translations)
Bernard Bru, Les leçons de calcul des probabilités de Joseph Bertrand, history of the problem (in French)
Probability problems
Enumerative combinatorics
Theorems in combinatorics
Probability theorems
Articles containing proofs
Voting theory | Bertrand's ballot theorem | [
"Mathematics"
] | 2,477 | [
"Mathematical theorems",
"Theorems in combinatorics",
"Enumerative combinatorics",
"Combinatorics",
"Theorems in discrete mathematics",
"Theorems in probability theory",
"Probability problems",
"Articles containing proofs",
"Mathematical problems"
] |
7,378,933 | https://en.wikipedia.org/wiki/Vitamin%20A%20deficiency | Vitamin A deficiency (VAD) or hypovitaminosis A is a lack of vitamin A in blood and tissues. It is common in poorer countries, especially among children and women of reproductive age, but is rarely seen in more developed countries. Vitamin A plays a major role in phototransduction, so this deficiency impairs vision, often presenting with nyctalopia (night blindness). In more severe VAD cases, it can progress to xerophthalmia, keratomalacia, and complete blindness.
Vitamin A deficiency is the leading cause of preventable childhood blindness worldwide and is a major cause of childhood mortality. Each year, approximately 250,000 to 500,000 malnourished children in the developing world go blind from VAD, and about half of them die within a year of losing their sight. Addressing VAD has been a critical focus of global health initiatives, including Sustainable Development Goal 2: to end hunger, achieve food security and improved nutrition and promote sustainable agriculture.
In pregnant women, VAD is associated with a high prevalence of night blindness and poor maternal health outcomes including an increased risk of maternal mortality and complications during pregnancy and lactation. VAD also affects the immune system and diminishes the body's ability to fight infections. In countries where children are not immunized, VAD is linked to higher fatality rates from infectious diseases such as measles. Even mild, subclinical deficiency can also be a problem, as it may increase children's risk of developing respiratory and diarrheal infections, decrease growth, impair bone development, and reduce their likelihood of surviving serious illnesses.
Globally, VAD is estimated to affect about one-third of children under the age of five, causing an estimated 670,000 deaths in children under five annually. It is most prevalent in sub-Saharan Africa (48 percent) and South Asia (44 percent). Although VAD is well-managed in many high income nations, it remains a significant concern in resource-poor settings. Public health interventions, such as vitamin A supplementation, reached 59% of targeted children in 2022, highlighting the ongoing need for comprehensive efforts to combat VAD.
Signs and symptoms
Vitamin A deficiency is the most common cause of blindness in developing countries. The WHO estimated in 1995 that 13.8 million children had some degree of visual loss related to VAD. Night blindness and its more severe form, xerophthalmia, are markers of vitamin A deficiency; collections of keratin in the conjunctiva, known as Bitot's spots, and ulceration and necrosis of the cornea (keratomalacia) can be seen. Conjunctival epithelial defects occur around the lateral aspect of the limbus in the subclinical stage of VAD. These conjunctival epithelial defects are not visible on a biomicroscope, but they take up black stain and become readily visible after instillation of kajal (surma); this is called "Imtiaz's sign".
Night blindness
A process called dark adaptation typically causes an increase in photopigment amounts in response to low levels of illumination. This occurs to an enormous magnitude, increasing light sensitivity by up to 100,000 times its sensitivity in normal daylight conditions. VAD affects vision by inhibiting the production of rhodopsin, the photopigment responsible for sensing low-light situations. Rhodopsin is found in the retina and is composed of retinal (an active form of vitamin A) and opsin (a protein).
Night blindness caused by VAD has been associated with the loss of goblet cells in the conjunctiva, a membrane covering the outer surface of the eye. Goblet cells are responsible for secretion of mucus, and their absence results in xerophthalmia, a condition where the eyes fail to produce tears. Dead epithelial and microbial cells accumulate on the conjunctiva and form debris that can lead to infection and possibly blindness.
Decreasing night blindness requires the improvement of vitamin A status in at-risk populations. Supplements and fortification of food have been shown to be effective interventions. Supplement treatment for night blindness includes massive doses of vitamin A (200,000 IU) in the form of retinyl palmitate to be taken by mouth, which is administered two to four times a year. Intramuscular injections are poorly absorbed and are ineffective in delivering sufficient bioavailable vitamin A. Fortification of food with vitamin A is costly, but can be done in wheat, sugar, and milk. Households may circumvent expensive fortified food by altering dietary habits. Consumption of yellow-orange fruits and vegetables rich in carotenoids, specifically beta-carotene, provides provitamin A precursors that can prevent VAD-related night blindness. However, the conversion of carotene to retinol varies from person to person and bioavailability of carotene in food varies.
Infection
Along with poor diet, infection and disease are common in many developing communities. Infection depletes vitamin A reserves, which in turn makes the affected individual more susceptible to further infection. Increased incidence of xerophthalmia has been observed after an outbreak of measles, with mortality correlated with severity of eye disease. In longitudinal studies of preschool children, susceptibility to disease increased substantially when severe VAD was present. While VAD can make measles worse, vitamin A supplements do not prevent measles, high doses may be dangerous, and vaccines remain the most effective way to prevent the disease.
The reason for the increased infection rate in vitamin A deficient individuals is that killer T-cells require the retinol metabolite retinoic acid to proliferate correctly. Retinoic acid is a ligand for nuclear retinoic acid receptors that bind the promoter regions of specific genes, thus activating transcription and stimulating T cell replication. Vitamin A deficiency will often entail deficient retinol intake, resulting in a reduced number of T-cells and lymphocytes, leading to an inadequate immune response and consequently a greater susceptibility to infections. In the presence of dietary deficiency of vitamin A, VAD and infections reciprocally aggravate each other.
Causes
In addition to dietary problems, other causes of VAD are known. Iron deficiency can affect vitamin A uptake. Other causes include fibrosis, pancreatic insufficiency, inflammatory bowel disease, and small-bowel bypass surgery. Protein energy malnutrition is often seen in VAD. This is because suppressed synthesis of retinol binding protein (RBP) due to protein deficiency leads to reduced retinol uptake.
Excess alcohol consumption can deplete vitamin A, and a stressed liver may be more susceptible to vitamin A toxicity. People who consume large amounts of alcohol should seek medical advice before taking vitamin A supplements.
Other causes of vitamin A deficiency are inadequate intake, fat malabsorption, or liver disorders. Deficiency impairs immunity and hematopoiesis and causes rashes and typical ocular effects (e.g., xerophthalmia, night blindness). In general, people should also seek medical advice before taking vitamin A supplements if they have any condition associated with fat malabsorption such as pancreatitis, cystic fibrosis, tropical sprue, and biliary obstruction.
Diagnosis
Initial assessment may be made based on clinical signs of VAD. The most common sign of VAD is night blindness, but VAD might also present with conjunctival xerosis, Bitot spots (foamy lesions), corneal xerosis, or corneal ulcerations.
A VAD diagnosis is confirmed with laboratory findings. Several methods of assessing bodily vitamin A levels are available, with plasma retinol levels being the most common method of assessing VAD in individuals. A plasma or serum retinol level below 0.70 μmol/L suggests subclinical vitamin A deficiency in both children and adults, while a level below 0.35 μmol/L indicates a severe deficiency of vitamin A.
Other biochemical assessments include measuring serum retinol levels, serum zinc, plasma retinol ester levels, plasma and urinary retinoic acid levels, and vitamin A in breast milk. While liver biopsies are regarded as the gold standard for assessing total body vitamin A, they are rarely used outside of research settings because of the risks associated with the procedure. Additionally, conjunctival impression cytology can be used to assess the presence of xerophthalmia, which is strongly correlated with VAD status (and can be used to monitor recovery progress).
Vitamin A sources
Prevention and treatment
Treatment of VAD can be undertaken with both oral vitamin A and injectable forms, generally as vitamin A palmitate.
High dose vitamin A supplementation has been proven to be an effective and cost effective treatment. Current World Health Organization guidance recommends biannual vitamin A supplementation for children aged 6 to 59 months in areas with high levels of retinol deficiency. Children aged 6-12 months should receive a dose of 100,000 IU and children aged 1-5 years should receive a dose of 200,000 IU each time. This significantly reduces the risk of morbidity, especially from severe diarrhea, and reduces mortality from measles and all-cause mortality. Vitamin A supplementation of children under five who are at risk of VAD has been found to reduce all‐cause mortality by 12 to 24%.
Side effects of vitamin A supplements are rare. Vitamin A toxicity is a rare concern associated with high levels of vitamin A over prolonged periods of time. Symptoms may include nausea, vomiting, headache, dizziness, irritability, blurred vision, and a lack of muscle coordination. However, when administered in the correct dose, vitamin A is generally safe and effective.
The World Health Organization also recommends vitamin A supplementation during pregnancy and lactation in areas where VAD is prevalent. Maternal high supplementation benefits both mother and breast-fed infant: high-dose vitamin A supplementation of the lactating mother in the first month postpartum can provide the breast-fed infant with an appropriate amount of vitamin A through breast milk. It also reduces the risk of infection, night blindness, and anemia in the mother. However, vitamin A supplementation in developed countries and high-dose supplementation of pregnant women should be avoided because it can cause miscarriage and birth defects.
Although synthetic vitamin A supplementation is a useful and effective treatment for VAD, a 2017 review (updated in 2022) reported that synthetic vitamin A supplementation may not be the best long‐term solution for vitamin A deficiency, but rather food fortification, improved food distribution programs, and crop improvement, such as for fortified rice or vitamin A-rich sweet potato, may be more effective in eradicating vitamin A deficiency.
Food fortification is also useful for improving VAD. A variety of oily and dry forms of the retinol esters, retinyl acetates, and retinyl palmitate are available for food fortification of vitamin A. Margarine and oil are the ideal food vehicles for vitamin A fortification. They protect vitamin A from oxidation during storage and prompt absorption of vitamin A. Beta-carotene and retinyl acetate or retinyl palmitate are used as a form of vitamin A for vitamin A fortification of fat-based foods. Fortification of sugar with retinyl palmitate as a form of vitamin A has been used extensively throughout Central America. Cereal flours, milk powder, and liquid milk are also used as food vehicles for vitamin A fortification.
In addition to adding synthetic vitamin A to foods, researchers have explored fortifying foods such as rice and corn through genetic engineering.
Research on rice began in 1982, and the first field trials of golden rice cultivars were conducted in 2004. The result was "Golden Rice", a variety of Oryza sativa rice produced through genetic engineering to biosynthesize beta-carotene, a precursor of retinol, in the edible parts of rice. In May 2018, regulatory agencies in the United States, Canada, Australia and New Zealand concluded that Golden Rice met food safety standards. On 21 July 2021, the Philippines became the first country to officially issue the biosafety permit for commercially propagating Golden Rice. In 2023, however, the Supreme Court of the Philippines ordered the agriculture department to stop commercial propagation of golden rice in relation to a petition filed by MASIPAG (a group of farmers and scientists), who claimed that golden rice poses risk to the health of consumers and to the environment.
Researchers at the U.S. Agricultural Research Service have been able to identify genetic sequences in corn that are associated with higher levels of beta-carotene, the precursor to vitamin A. They found that breeders can cross certain variations of corn to produce a crop with an 18-fold increase in beta-carotene.
Dietary diversification can also reduce risk of VAD. Non-animal sources of vitamin A like fruits and vegetables contain pro-vitamin A and account for greater than 80% of intake for most individuals in the developing world. The increase in consumption of vitamin A-rich foods of animal origin such as liver, milk, cheese, or eggs, also has beneficial effects on VAD.
Public health initiatives
Some countries where VAD is a public-health problem address its elimination by including vitamin A supplements available in capsule form with national immunization days (NIDs) for polio eradication or measles. When the correct dosage is given, vitamin A is safe and has no negative effect on seroconversion rates for oral polio or measles vaccines. Additionally, the delivery of vitamin A supplements, during integrated child health events such as child health days, has helped ensure high coverage of vitamin A supplementation in a large number of least developed countries. Child health events enable many countries in West and Central Africa to achieve over 80% coverage of vitamin A supplementation. According to UNICEF data, in 2013 worldwide, 65% of children between the ages of 6 and 59 months were fully protected with two high-dose vitamin A supplements. Since NIDs provide only one dose per year, NIDs-linked vitamin A distribution must be complemented by other programs to maintain vitamin A in children.
Global efforts to support national governments in addressing VAD are led by the Global Alliance for Vitamin A (GAVA), which is an informal partnership between Nutrition International, Helen Keller International, UNICEF, WHO, and CDC. About 75% of the vitamin A required for supplementation of preschool-aged children in low- and middle-income countries is supplied through a partnership between Nutrition International and UNICEF, with support from Global Affairs Canada. An estimated 1.25 million deaths due to vitamin A deficiency have been averted in 40 countries since 1998.
References
Further reading
UNICEF, Vitamin A Supplementation: A Decade of Progress, UNICEF, New York, 2007.
Flour Fortification Initiative, GAIN, Micronutrient Initiative, USAID, The World Bank, UNICEF, Investing in the Future: A United Call to Action on Vitamin and Mineral Deficiencies, 2009.
UNICEF, Improving Child Nutrition: The achievable imperative for global progress, UNICEF, New York, 2013.
External links
Nutrition International
Global Alliance for Vitamin A
UNICEF Data on Vitamin A Deficiency and Supplementation
Helen Keller International
A2Z
World Health Organization Database on Vitamin A Deficiency
Vitamin A Deficiency on IAPB
Vitamin deficiencies
Blindness | Vitamin A deficiency | [
"Chemistry"
] | 3,213 | [
"Vitamin A",
"Biomolecules"
] |
7,378,996 | https://en.wikipedia.org/wiki/Phlebopus%20marginatus | Phlebopus marginatus, commonly known as the salmon gum mushroom in Western Australia, is a member of the Boletales or pored fungi. An imposing sight in forests of south-eastern and south-western Australia, it is possibly Australia's largest terrestrial mushroom, with the weight of one specimen from Victoria recorded at 29 kg (64 lb). Initially described in 1845 as Boletus marginatus, and also previously known by scientific names such as Phaeogyroporus portentosus and Boletus portentosus, it is not as closely related to typical boletes as previously thought.
Taxonomy
English naturalist Miles Joseph Berkeley initially described Boletus marginatus in 1845, from the writings and specimens of James Drummond, from the vicinity of the Swan River Colony in Western Australia. Berkeley and Broome described Boletus portentosus in a report published in 1873 of the fungi of Ceylon, from a specimen with a 25 cm (8 in) diameter cap collected on June 15, 1869. They held it to be related to Boletus aestivalis. Microscopic differences led to it being reclassified; Boedijn noted the shape of its spores, lack of cystidia and short tubes and allocated it to the genus Phlebopus in 1951. New Zealand botanist Robert McNabb followed Rolf Singer who had determined Phlebopus was a nomen dubium (though conceding Singer was likely in error), and coined the binomial Phaeogyroporus portentosus, by which it was known for some years. In his 1982 review of the genus, mycologist Paul Heinemann used this latter designation. The generic name is derived from the Greek Φλεψ/Φλεβο- "vein", and πους "foot".
Considering the two taxa to be the same, mycologist Roy Watling proposed the name Phlebopus marginatus over P. portentosus in 2001, pointing out that the former name predated the latter. He noted specimens across its range conform to the species description, although queried whether a single species occurs over so wide a range.
It is not as closely related to typical boletes as was previously thought. The genus Phlebopus is a member of the suborder Sclerodermatineae, which makes it more closely related to earth balls than to typical boletes. Within this suborder, Phlebopus makes up the family Boletinellaceae with Boletinellus. Boletus brevitubus, described from Cephalocitrus grandis and Delonix regia forests of Yunnan, China in 1991, was placed into synonymy with Phlebopus marginatus in 2009.
A common name in Western Australia is salmon gum mushroom. Common names in Asia include hed har and hed tub tao dum in Thailand, or tropical black bolete.
Description
Possibly Australia's largest terrestrial mushroom, Phlebopus marginatus produces fruit bodies that can reach huge proportions. The weight of one specimen from western Victoria was recorded at 29 kg (64 pounds). John Burton Cleland reported finding specimens with a cap diameter of , weighing over , but reports about specimens with caps over in diameter also exist. The cap is convex to flat, occasionally with a depressed centre. It is brown to olive and covered in fine hair. Records between countries vary as to the colour change on cutting or bruising of flesh, with those of Western Australia indicating no change, while New Zealand and Indonesian collections are reported to have some bluish discoloration on bruising at the top of the stem. The spores are yellow-brown. Mature specimens are very attractive to insects and often infested with them.
Distribution and habitat
Phlebopus marginatus is an example of a Gondwanan fungus, being found in Indonesia, Malaysia and Sri Lanka as well as Australia and New Zealand, with related species found in South America. In fact, it is a pantropical species.
Within Australia it has been recorded from the southeast of South Australia to New South Wales, where it occurs in eucalypt forests and may be found any time after rain. Fruit bodies may be isolated or spring up in groups or even fairy rings. It occurs in rainforest in the Cooloola section of the Great Sandy National Park in Queensland.
In New Zealand it is possibly associated with Nothofagus truncata. McNabb was unsure of whether it was introduced or indigenous to New Zealand though suspected it was the latter due to it being found in dense native forest near Rotorua. Other collections were in the vicinity of Auckland.
It is common in Java and Sumatra.
In China it is found in Yunnan, Guangxi and Hainan provinces. In China, it grows in association with poinciana (Delonix regia), mango (Mangifera indica), coffee (Coffea arabica), pomelo (Citrus grandis), jackfruit (Artocarpus heterophyllus) and oak (Quercus) species.
Edibility
As with many Australian mushrooms, Phlebopus marginatus is not widely eaten although recorded in several publications as edible and mild tasting or bland. Australian mushroom expert Bruce Fuhrer warns of its propensity to be maggot-ridden.
It is consumed in Laos, northern Thailand, Myanmar and southern China, namely the tropical areas of Yunnan province, where excessive picking for markets has depleted wild populations. Its large size and flavour make it a desired mushroom in markets in the Xishuangbanna region.
It is also consumed on the island of Réunion.
Since 2003, efforts have been made to cultivate it.
References
Cited texts
External links
Australia's Gondwanan and Asian connections - Fungi
IMC8 Fungus of the Month - July 2004, Phlebopus marginatus
Boletales
Fungi of Asia
Fungi native to Australia
Fungi of New Zealand
Fungi found in fairy rings
Taxa named by Roy Watling
Fungus species | Phlebopus marginatus | [
"Biology"
] | 1,231 | [
"Fungi",
"Fungus species"
] |
7,379,193 | https://en.wikipedia.org/wiki/Rutinose | Rutinose is the disaccharide also known as 6-O-α-L-rhamnosyl-D-glucose (C12H22O10) that is present in some flavonoid glycosides. It is prepared from rutin by hydrolysis with the enzyme rhamnodiastase.
References
Disaccharides
Deoxy sugars | Rutinose | [
"Chemistry"
] | 83 | [
"Carbohydrates",
"Deoxy sugars",
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
7,379,205 | https://en.wikipedia.org/wiki/Genetix | Genetix is a virtual machine created by theoretical physicist Bernard Hodson containing only 34 executable instructions. It was inspired by the principles of Alan Turing and allows an entire operating system, including a word processor and utilities, to run in 32 kilobytes.
"Genes" are sequences of 50 to 100 pointers that either point directly to one of the 34 basic instructions or to another gene. The 700 genes take up approximately 26 kilobytes in size all together. The "gene pool" consists of a closed section and an open section where the users can add their own made genes.
The claimed advantages are security and efficiency.
Hodson suggested that a simple compiler could process any application and that the rules were so simple that an application could be developed without the need for a compiler at all. He also suggested that embedded systems might be a good market for Genetix.
See also
Turing machine
von Neumann machine (disambiguation)
References
External links
Bernard Hodson's page on Genetix
Virtual machines | Genetix | [
"Technology"
] | 209 | [
"Computing stubs"
] |
7,379,355 | https://en.wikipedia.org/wiki/McCabe%E2%80%93Thiele%20method | The McCabe–Thiele method is a technique that is commonly employed in the field of chemical engineering to model the separation of two substances by a distillation column. It uses the fact that the composition at each theoretical tray is completely determined by the mole fraction of one of the two components. This method is based on the assumptions that the distillation column is isobaric—i.e., the pressure remains constant—and that the flow rates of liquid and vapor do not change throughout the column (i.e., constant molar overflow). The assumption of constant molar overflow requires that:
The heat needed to vaporize a given molar amount of either feed component is the same,
For every mole of liquid vaporized, a mole of vapor is condensed, and
Heat effects such as heat needed to dissolve the substance(s) are negligible.
The method was first published by Warren L. McCabe and Ernest Thiele in 1925, both of whom were working at the Massachusetts Institute of Technology (MIT) at the time.
Construction and use
A McCabe–Thiele diagram for the distillation of a binary (two-component) feed is constructed using the vapor-liquid equilibrium (VLE) data—which is how vapor is concentrated when in contact with its liquid form—for the component with the lower boiling point.
On a planar graph, both axes represent the mole fractions of the lighter (lower boiling) component; the horizontal (x) and vertical (y) axes represent the liquid and vapor phase compositions, respectively. The x = y line (see Figure 1) represents the scenarios where the compositions of liquid and vapor are the same. The vapor-liquid equilibrium line (the curved line from (0,0) to (1,1) in Figure 1) represents the vapor phase composition for a given liquid phase composition at equilibrium. Vertical lines drawn from the horizontal axis up to the x = y line indicate the composition of the inlet feed stream, the composition of the top (distillate) product stream, and the composition of the bottoms product (shown in red in Figure 1).
The rectifying section operating line for the section above the inlet feed stream of the distillation column (shown in green in Figure 1) starts at the intersection of the distillate composition line and the x = y line and continues at a downward slope of L / (D + L), where L is the molar flow rate of reflux and D is the molar flow rate of the distillate product, until it intersects the q-line.
The stripping section operating line for the section below the feed inlet (shown in magenta in Figure 1) starts at the intersection of the red bottoms composition line and the x = y line and continues up to the point where the blue q-line intersects the green rectifying section operating line.
The q-line (depicted in blue in Figure 1) intersects the point of intersection of the feed composition line and the x = y line and has a slope of q / (q - 1), where the parameter q denotes mole fraction of liquid in the feed. For example, if the feed is a saturated liquid, q = 1 and the slope of the q-line is infinite (drawn as a vertical line). As another example, if the feed is saturated vapor, q = 0 and the slope of the q-line is 0 (a horizontal line). The typical McCabe–Thiele diagram in Figure 1 uses a q-line representing a partially vaporized feed. Example q-line slopes are presented in Figure 2.
The number of steps between the operating lines and the equilibrium line represents the number of theoretical plates (or equilibrium stages) required for the distillation. For the binary distillation depicted in Figure 1, the required number of theoretical plates is 6.
Constructing a McCabe–Thiele diagram is not always straightforward. In continuous distillation with a varying reflux ratio, the mole fraction of the lighter component in the top part of the distillation column will decrease as the reflux ratio decreases. Each new reflux ratio will alter the gradient of the rectifying section curve.
When the assumption of constant molar overflow is not valid, the operating lines will not be straight. Using mass and enthalpy balances in addition to vapor-liquid equilibrium data and enthalpy-concentration data, operating lines can be constructed using the Ponchon–Savarit method.
If the mixture can form an azeotrope, its vapor-liquid equilibrium line will cross the x = y line, preventing further separation no matter the number of theoretical plates.
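The stage-stepping construction can also be carried out numerically rather than graphically. The following C sketch is illustrative only: it assumes a constant relative volatility for the equilibrium curve, a saturated-liquid feed (vertical q-line), and made-up compositions and reflux ratio, and it simply alternates between the equilibrium relation and the operating lines in the same way the graphical steps do:

```c
#include <stdio.h>

/* Liquid composition in equilibrium with vapor composition y, for a
 * constant relative volatility alpha: y = alpha*x / (1 + (alpha - 1)*x). */
static double equilibrium_x(double y, double alpha)
{
    return y / (alpha - (alpha - 1.0) * y);
}

int main(void)
{
    const double alpha = 2.5;                       /* relative volatility (assumed) */
    const double xF = 0.50, xD = 0.95, xB = 0.05;   /* feed, distillate, bottoms (assumed) */
    const double R = 2.0;                           /* reflux ratio L/D (assumed) */

    /* Rectifying line: y = R/(R+1)*x + xD/(R+1).  For a saturated-liquid feed
     * the q-line is vertical at x = xF, so the stripping line runs from
     * (xB, xB) to the rectifying line evaluated at x = xF. */
    const double y_feed = (R / (R + 1.0)) * xF + xD / (R + 1.0);
    const double s_slope = (y_feed - xB) / (xF - xB);

    double x = xD, y = xD;                          /* start stepping at (xD, xD) */
    int stages = 0;
    while (x > xB && stages < 100) {
        x = equilibrium_x(y, alpha);                /* horizontal step to the VLE curve */
        stages++;
        if (x <= xB)
            break;
        if (x > xF)                                 /* above the feed: rectifying line */
            y = (R / (R + 1.0)) * x + xD / (R + 1.0);
        else                                        /* below the feed: stripping line */
            y = xB + s_slope * (x - xB);            /* vertical step to the operating line */
    }
    printf("Theoretical stages stepped off: %d\n", stages);
    return 0;
}
```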
See also
Fractional distillation
Azeotropic distillation
Batch distillation
References
External links
More detailed information on how to draw a McCabe–Thiele Diagram
Detailed discussion of McCabe–Thiele method by Tore Haug-Warberg, Norwegian University of Science and Technology, Norway
Interactive McCabe–Thiele Diagram
Distillation | McCabe–Thiele method | [
"Chemistry"
] | 1,033 | [
"Distillation",
"Separation processes"
] |
7,379,631 | https://en.wikipedia.org/wiki/List%20of%20lakes%20by%20depth | This article lists the world's deepest lakes.
Lakes ranked by maximum depth
This list contains all lakes whose maximum depth is reliably known to exceed
Geologically, the Caspian Sea, like the Black and Mediterranean seas, is a remnant of the ancient Tethys Ocean. The deepest area is oceanic rather than continental crust. However, it is generally regarded by geographers as a large endorheic salt lake. Of these registered lakes, 10 have a deepest point above sea level. These are: Issyk-Kul, Crater Lake, Quesnel, Sarez, Toba, Tahoe, Kivu, Nahuel Huapi, Van and Poso.
Lakes ranked by mean depth
Mean depth can be a more useful indicator than maximum depth for many ecological purposes. Unfortunately, accurate mean depth figures are only available for well-studied lakes, as they must be calculated by dividing the lake's volume by its surface area. A reliable volume figure requires a bathymetric survey. Therefore, mean depth figures are not available for many deep lakes in remote locations. The average lake on Earth has a mean depth of 41.8 meters (137.14 feet).
The Caspian Sea ranks much further down the list on mean depth, as it has a large continental shelf (significantly larger than the oceanic basin that contains its greatest depths).
Of the 127 registered lakes, 67 are known to be cryptodepressions. These include: Vostok (subglacial surface), Concordia (subglacial surface), the Caspian Sea (subsea surface), Dead Sea (subsea surface) and Jökulsárlón (glacial lagoon estuary). The remaining 60 lakes have their entire basins above sea level.
This list contains all lakes whose mean depth is reliably known to exceed 100 metres (328 ft).
Greatest maximum depth by continent
Africa — 1: Tanganyika, 2: Malawi, 3: Kivu
Antarctica — surface lakes: 1: Radok; subglacial lakes: 1: Vostok, 2: Concordia, 3: Ellsworth
Asia — 1: Baikal, 2: Issyk Kul, 3: Matano
Eurasia — 1: Baikal, 2: Caspian Sea, 3: Issyk Kul
Europe — 1: Hornindalsvatnet, 2: Salvatnet, 3: Lake Tinn
North America — 1: Great Slave, 2: Crater, 3: Quesnel
Central America — 1: Atitlán, 2: Chicabal, 3: Ilopango
Oceania — 1: Hauroko, 2: Manapouri, 3: Te Anau
Australia — 1: St Clair
South America — 1: Viedma, 2: O'Higgins/San Martín, 3: Argentino
Greatest mean depth by continent
Africa — 1: Tanganyika, 2: Malawi, 3: Kivu
Antarctica — 1: Vostok (subglacial lake), 2: Concordia (subglacial lake), 3: Ellsworth (subglacial lake).
Asia — 1: Baikal, 2: Tazawa, 3: Issyk-Kul
Europe — 1: Crveno, 2: Hornindalsvatnet, 3: Lake Tinn
North America — 1: Crater, 2: Tahoe, 3: Adams
Oceania — 1: Te Anau, 2: Manapouri, 3: Wakatipu
South America — 1: General Carrera-Buenos Aires, 2: Quilotoa, 3: Fagnano
See also
List of lakes by area
List of lakes by volume
List of largest lakes of Europe
Notes
Note: Lake depths often vary depending on sources. The depths used here are the most reliable figures available in recent sources. See the articles on individual lakes for more details and data sources.
Sources
Worldlakes.org, Deepest lakes
External links
Environmentalgraffiti.com - 10 deepest lakes with pictures
Lakes
Vertical position | List of lakes by depth | [
"Physics"
] | 830 | [
"Vertical position",
"Physical quantities",
"Distance"
] |
7,379,750 | https://en.wikipedia.org/wiki/Bor%20S.%20Luh%20International%20Award | The Bor S. Luh International Award has been awarded every year since 1956. Before 2005, this award was named the International Award. It is given to an individual or institution that has made outstanding efforts in one of the following areas in food technology: 1) International exchange of ideas, 2) better international understanding, and/or 3) practical successful technology transfer to an economically depressed area in a developed or developing area.
The award was renamed for Bor S. Luh (1916-2001), who was born and educated in China before completing his education in the United States. Luh was the first president of the Chinese American Food Society in 1974-5 and received its Professional Achievement Award in 1984.
Award winners receive a plaque from the Bor S. Luh Endowment Fund of the Institute of Food Technologists Foundation and a USD 3000 honorarium.
Winners
References
List of past winners - Official site
Information on Bor S. Luh
Food technology awards | Bor S. Luh International Award | [
"Technology"
] | 194 | [
"Science and technology awards",
"Food technology awards"
] |
7,379,899 | https://en.wikipedia.org/wiki/Seest%20fireworks%20disaster | The fireworks accident in Seest was a disaster that occurred on 3 November 2004, when the N. P. Johnsens Fyrværkerifabrik fireworks warehouse exploded in Seest, a suburb of Kolding, Denmark. One firefighter died; seven from the rescue team as well as 17 locals were injured. In addition 34 rescuers, 8 police officers, and 27 from the Danish Emergency Management Agency were treated for smoke inhalation. The evacuation of 2,000 people from the immediate surrounding area saved many lives. Eight fire and rescue vehicles were also destroyed.
The surrounding area was hit hard by the explosion, with 355 houses reported damaged, and 176 of them rendered uninhabitable. Altogether, 2,107 buildings were damaged by the explosion, with the cost of the damage rounding to an estimated € 100 million.
N. P. Johnsens fyrværkerifabrik was the main importer of fireworks in Denmark at the time, accounting for 25% of the total trade. At the time of the disaster, the company was storing 284 net tons (net explosive mass) of fireworks in its warehouse; the maximum it was allowed to store was 300 tons.
Following the disaster, there was an investigation. Initially, it was thought that the factory had stored significantly more than it was allowed to. However, this was later rejected by the authorities, who concluded that the disaster was due to an accident for which the factory was not responsible. While working inside a container, two employees had accidentally dropped a box of fireworks, causing them to ignite, and both had to flee the container. When the fire crew arrived, they initially thought that they were dealing with a simple container fire, but the blaze was too intense and they were unable to put it out. Once it became apparent that the container would explode, the firefighters had to flee the scene as well; the rest of the stored fireworks eventually ignited, causing further violent explosions.
References
Kolding Municipality homepage about the accident (in Danish)
Amateur video of the fire and explosions
2004 disasters in Denmark
Explosions in 2004
Explosions in Denmark
Fireworks accidents and incidents
2004 industrial disasters
2004 in Denmark
Industrial fires and explosions
Kolding Municipality
November 2004 events in Europe
2004 fires in Europe | Seest fireworks disaster | [
"Chemistry"
] | 466 | [
"Industrial fires and explosions",
"Explosions"
] |
7,380,068 | https://en.wikipedia.org/wiki/Bor%20S.%20Luh | Bor Shium Luh (January 13, 1916 – June 4, 2001) was a Chinese-born American food scientist who was known for his research in fruit and vegetable products and in developing food science and technology in Asia, Latin America, and the Middle East. He was particularly noted for his contributions to rice research and development.
Early life and education
Born in Shanghai, he earned a Bachelor of Science in chemistry in 1938 at Chiao Tung University, then earned his Master of Science degree in food science (1948) and Doctor of Philosophy degree in agricultural chemistry (1952) both at the University of California, Berkeley.
Family
He was married to Bai Tsain Luh (died 2010).
Career
Luh joined the University of California, Davis faculty in 1952 as a researcher and lecturer in food chemistry, working his way to professor rank until his 1986 retirement. He would be named a Fellow of the Institute of Food Technologists (IFT) that same year. Luh's career would involve mentoring over 100 graduate students, many of whom would have successful careers of their own.
Legacy and professional career
The UC Davis food science department would dedicate his food chemistry lab as the Bor S. Luh Food Laboratory in May 2001, less than a month prior to his death while visiting Hilo, Hawaii. Memorials were held in Hawaii and Davis, California on June 8 and June 13, 2001, respectively.
Luh was also active in the Chinese American Food Society being named its first president in 1974-5 and receiving its Professional Achievement award in 1984.
IFT would rename their International Award in his honor starting in 2005.
Published works
Rice, Volume 2: Utilization
Notes and references
"Death Notices: Bor Shium Luh." Food Technology. September 2001: p. 16.
External links
Rice Production Volume I by Bor S. Luh
List of IFT Awards
List of IFT Fellows
1971 Plant Physiology journal article co-authored by Luh
1916 births
2001 deaths
American food chemists
American food scientists
Chinese food scientists
Fellows of the Institute of Food Technologists
University of California, Davis faculty
Educators from Shanghai
Scientists from Shanghai
Chinese emigrants to the United States
20th-century American chemists | Bor S. Luh | [
"Chemistry"
] | 447 | [
"Food chemists",
"American food chemists"
] |
7,380,112 | https://en.wikipedia.org/wiki/Laser%20TV | Laser color television (laser TV), or laser color video display, is a type of television that utilizes two or more individually modulated optical (laser) rays of different colors to produce a combined spot that is scanned and projected across the image plane by a polygon-mirror system or less effectively by optoelectronic means to produce a color-television display. The systems work either by scanning the entire picture a dot at a time and modulating the laser directly at high frequency, much like the electron beams in a cathode ray tube, or by optically spreading and then modulating the laser and scanning a line at a time, the line itself being modulated in much the same way as with digital light processing (DLP).
The special case of one ray reduces the system to a monochrome display as, for example, in black and white television. This principle applies to a direct view display as well as to a (front or rear) laser projector system.
Laser TV technology began to appear in the 1990s. In the 21st century, the rapid development and maturity of semiconductor lasers and other technologies gave it new advantages.
History
The laser source for television or video display was originally proposed by Helmut K.V. Lotsch in the German Patent 1 193 844. In December 1977 H.K.V. Lotsch and F. Schroeter explained laser color television for conventional as well as projection-type systems and gave examples of potential applications. 18 years later the German-based company Schneider AG presented a functional laser-TV prototype at IFA'95 in Berlin, Germany. Due to the bankruptcy of Schneider AG, however, the prototype was never developed further to a market-ready product.
Proposed in 1966, laser illumination technology remained too costly to be used in commercially viable consumer products.
At the Las Vegas Consumer Electronics Show in 2006, Novalux Inc., developer of Necsel semiconductor laser technology, demonstrated their laser illumination source for projection displays and a prototype rear-projection "laser" TV.
First reports on the development of a commercial Laser TV were published as early as February 16, 2006 with a decision on the large-scale availability of laser televisions expected by early 2008.
On January 7, 2008, at an event associated with the Consumer Electronics Show 2008, Mitsubishi Digital Electronics America, a key player in high-performance red-laser and large-screen HDTV markets, unveiled their first commercial Laser TV, a 65" 1080p model.
A Popular Science writer was impressed by the color rendering of a Mitsubishi laser video display at CES 2008.
Some even described it as being too intense to the point of seeming artificial.
This laser TV, branded "Mitsubishi LaserVue TV", went on sale, November 16, 2008 for $6,999, but Mitsubishi's entire laser TV project was killed in 2012.
LG introduced a front-projection laser TV in 2013 as a consumer product that displays images and videos measuring 100 inches (254 centimeters) with a full high-definition resolution of 1920 x 1080 pixels. It can project images onto the screen from a distance of 22 inches (56 centimeters).
In China, the Sixth Session of the Seventh Council of the China Electronic Video Industry Association formally approved the establishment of a laser TV industry branch. The branch links the upstream and downstream segments of the laser TV industrial chain, with the aim of strengthening the laser TV industry. Sales of laser TVs in the Chinese market were projected to exceed 1 million units and 11.8 billion CNY by 2022.
Principle
Laser TV images are reflected by the screen and enter the viewer's eye to form the picture. Laser TVs use DLP technology for image display. In a DMD-based design, the DMD chip is the imaging core component: it carries millions of microscopic mirrors, each of which can tilt between its on and off positions tens of thousands of times per second. Light reflected off these mirrors onto the screen forms the image, and because of the persistence of human vision, the three primary colors projected onto the same pixel in rapid succession are perceived as a single mixed color.
Technology
Lasers may become an ideal replacement for the UHP lamps which are currently in use in projection display devices such as rear-projection TV and front projectors. LG claims a lifetime of 25,000 hours for their laser projector, compared to 10,000 hours for a UHP.
Current televisions are capable of displaying only 40% of the color gamut that humans can potentially perceive.
Laser TVs utilize a laser light source, which offers several advantages over traditional LED and OLED technologies. The lasers typically use specific wavelengths of light, resulting in a wider color gamut and superior brightness. Unlike LED or OLED, laser light sources can produce purer colors, enhancing the viewing experience with more vibrant and accurate color reproduction. Additionally, laser light sources generally have a longer lifespan and are more energy-efficient, contributing to lower operational costs and environmental impact.
Color television requires light in three distinct wavelengths—red, green, and blue. While red laser diodes are commercially available, there are no commercially available green laser diodes which can provide the required power at room temperature with an adequate lifetime. Instead, frequency doubling can be used to provide the green wavelengths. Several types of lasers can be used as the frequency doubled sources: fibre lasers, inter-cavity doubled lasers, external cavity doubled lasers, eVCSELs, and OPSLs (Optically Pumped Semiconductor Lasers). Among the inter-cavity doubled lasers, VCSELs have shown much promise and potential to be the basis for a mass-produced frequency doubled laser.
The blue laser diodes became openly available around 2010.
A VECSEL is a vertical-cavity laser composed of two mirrors, with a diode on top of one of them serving as the active medium. These lasers combine high overall efficiency with good beam quality. The light from the high-power IR laser diodes is converted into visible light by means of extra-cavity waveguided second-harmonic generation. Laser pulses with a repetition rate of about 10 kHz and various lengths are sent to a digital micromirror device, where each mirror directs the pulse either onto the screen or into a beam dump. Because the wavelengths are known, all coatings can be optimized to reduce reflections and therefore speckle.
Characteristics
Laser TV images are reflected by the screen and enter the human eye. According to ophthalmologists and professional evaluations, laser TV products are display products that are harmless to the naked eye; the screen produces no electromagnetic radiation, making viewing comfortable, and reading comfort is reported to be about 20% higher than for paper. Laser TVs are mainly large-sized, use pure light sources with bright, accurate colors, and support 4K display resolution.
Laser TVs have lower power consumption than LCD TVs of the same size. For example, a 100-inch laser TV consumes less than 300 watts, one-half to one-third of the consumption of an LCD TV of the same size. Laser TVs are about one-tenth the weight of LCD TVs of the same size, and an 80-inch laser TV can be watched at a viewing distance of 3 meters.
Assembly
Laser signal modulation
The video signal is introduced to the laser beam by an acousto-optic modulator (AOM), which uses a crystal to diffract the beam at distinct angles. The beam must enter the crystal at the specific Bragg angle of that AOM crystal. A piezoelectric transducer converts the video signal into acoustic vibrations in the crystal, modulating the beam to create an image.
Horizontal and vertical refresh
A rapidly rotating polygonal mirror gives the laser beam the horizontal refresh modulation. It reflects off of a curved mirror onto a galvanometer-mounted mirror which provides the vertical refresh. Another way is to optically spread the beam and modulate each entire line at once, much like in a DLP, reducing the peak power needed in the laser and keeping power consumption constant.
Display characteristics
Maintain full power output for the lifespan of the laser; the picture quality will not degrade
Have a very wide color gamut, which can produce up to 90% of the colors a human eye can perceive by adjusting the wavelength of the laser
Capable of displaying 3D stereoscopic video
Can be projected onto any depth or shape surface while maintaining focus.
Applications
There are several realizations of laser projectors, one example being based on the principle of a flying light spot writing the image directly onto a screen. A laser projector of this type consists of three main components — a laser source uses the video signal to provide modulated light composed of the three sharp spectral colors — red, green, and blue — which a flexible, fiber-optic waveguide then transports to a relatively small projection head. The projection head deflects the beam according to the pixel clock and emits it onto a screen at an arbitrary distance. Such laser projection techniques are used in handheld projectors, planetariums, and for flight simulators and other virtual reality applications.
Due to the special features of laser projectors, such as a high depth of field, it is possible to project images or data onto any kind of projection surface, even non-flat. Typically, the sharpness, color space, and contrast ratio are higher than those of other projection technologies. For example, the on-off contrast of a laser projector is typically 50,000:1 and higher, while modern DLP and LCD projectors range from 1000:1 to 40,000:1. In comparison to conventional projectors, laser projectors provide a lower luminous flux output, but because of the extremely high contrast the brightness actually appears to be greater.
Development status
In order to further accelerate the adoption of laser displays, the China Ministry of Science and Technology has prioritized the "engineering and development of next-generation laser display technology" as one of the eight major industrial development directions. As related technical problems are gradually resolved, the popularization of laser TV products in households remains a major goal.
At the end of December 2019, the CESI Laboratory of the China National Institute of Electronic Standardization and a team of ophthalmologists from Peking Union Medical College Hospital conducted a research project on the visual perception and eye strain associated with laser displays. In the study, 32 subjects viewed a laser TV and an LCD TV under the same environmental conditions. Eye blinking frequency and subjective perception scores were compared and analyzed between the displays. The results indicated that watching the LCD TV for an extended period produced symptoms such as eye swelling, eye pain, photophobia, dry eyes, and blurred vision, whereas watching the laser TV produced no obvious visual changes or eye discomfort.
On January 16, 2020, the Laser Television Industry Branch of the China Electronic Video Industry Association released the industry's first White Paper on Laser TV Eye Care in Shanghai. The white paper published the eye-care evaluation data of laser TVs and traditional LCD TVs by ophthalmology experts of China Electronics Technology Standardization Institute's CESI Laboratory and Peking Union Medical College Hospital, and made scientific suggestions on how to protect the visual health of adolescents. The market for laser TVs has seen an overall compound growth rate of 281% from 2014 to 2019. In 2019, the Hisense Laser TV 80L5 ranked first in the annual TV bestseller list. According to user survey data, more than 93% of users chose laser TVs because of the claimed benefits of eye health protection.
Prospect
Compared with LED-backlit LCD TVs, laser TVs have many advantages in large-screen imaging. In terms of technical composition, a laser TV consists of a laser light source, an imaging module, a circuit control system, and a display screen. Technological progress in each of these units will help laser TVs gain market share against competing display technologies. Additionally, laser light sources offer lower manufacturing carbon emissions, a wider color gamut, and higher energy efficiency. The advancement of laser television, combined with better optical imaging technology, could prove lucrative in the future home display market.
Technical challenges
Lasers are the most expensive components of laser televisions. More advanced laser diodes generally require more semiconductor material to manufacture, so cost reduction will remain an issue for the industrialization of laser TV for the foreseeable future. Existing laser TV products generally use imported semiconductor devices. In current large-screen display solutions, there are a variety of competing technologies such as LCD, OLED, and upcoming Micro LED displays. Laser TVs must continue to develop to maintain a competitive advantage in order to occupy a larger market share.
References
Display technology | Laser TV | [
"Engineering"
] | 2,599 | [
"Electronic engineering",
"Display technology"
] |
7,380,371 | https://en.wikipedia.org/wiki/Computer%20programming%20in%20the%20punched%20card%20era | From the invention of computer programming languages up to the mid-1970s, most computer programmers created, edited and stored their programs line by line on punch cards.
Punched cards
A punched card is a flexible write-once medium that encodes data, most commonly 80 characters. Groups or "decks" of cards form programs and collections of data. The term is often used interchangeably with punch card, the difference being that an unused card is a "punch card", while a card into which information has been encoded by punching holes is a "punched card". For simplicity, this article will use the term punched card to refer to either.
Often programmers first wrote their program out on special forms called coding sheets, taking care to distinguish the digit zero from the letter O, the digit one from the letter I, eight from B, two from Z, and so on using local conventions such as the "slashed zero". These forms were then taken by keypunch operators, who punched the deck using a keypunch machine such as the IBM 026 (later the IBM 029). Often another keypunch operator would then re-punch the deck from the same coding sheets, using a "verifier" such as the IBM 059 that checked that the original punching had no errors.
A typing error generally necessitated re-punching an entire card. The editing of programs was facilitated by reorganizing the cards, and removing or replacing the lines that had changed; programs were backed up by duplicating the deck, or writing it to magnetic tape.
In smaller organizations programmers might do their own punching, and in all cases would often have access to a keypunch to make small changes to a deck.
Work environment
The description below describes an all-IBM shop (a "shop" is programmer jargon for a programming site), but shops using other brands of mainframes (or minicomputers) would have similar equipment, although because of cost or availability they might use a different manufacturer's equipment; for example, an NCR, ICL, Hewlett-Packard (HP) or Control Data shop would have NCR, ICL, HP, or Control Data computers, printers and so forth, yet might still have IBM 029 keypunches. IBM's huge size and industry footprint often caused many of its conventions to be adopted by other vendors, so the example below is fairly similar to most sites, even non-IBM shops.
A typical corporate or university computer installation would have a suite of rooms, with a large, access-restricted, air-conditioned room for the computer (similar to today's server room) and a smaller quieter adjacent room for submitting jobs. Nearby would be a room full of keypunch machines for programmer use. An IBM 407 Accounting Machine might be set up to allow newly created or edited programs to be listed (printed out on fan-fold paper) for proofreading. An IBM 519 might be provided to reproduce program decks for backup or to punch sequential numbers in columns 73-80.
In such mainframe installations, known as "closed shops," programmers submitted the program decks, often followed by data cards to be read by the program, to a person working behind a counter in the computer room. During peak times, it was common to stand in line waiting to submit a deck. To solve that problem, the card reader could be reinstalled (or initially installed) outside of the computer room to allow programmers to do "self-service" job submission.
Many computer installations used cards with the opposite corner cut (sometimes no corner cut) as "job separators", so that an operator could stack several job decks in the card reader at the same time and be able to quickly separate the decks manually when they removed them from the stacker. These cards (e.g., a JCL "JOB" card to start a new job) were often pre-punched in large quantities in advance. This was especially useful when the main computer did not read the cards directly, but instead read their images from magnetic tape that was prepared offline by smaller computers such as the IBM 1401. After reading the cards in, the computer operator would return the card deck – typically to one of a set of alphabetically labelled cubby holes, based on the programmer's last initial. Because programs were run in batch-mode processing it might be a considerable time before any hardcopy printed or punched output was produced, and put into these same cubby holes – however, on a lightly-used system, it was possible to make alterations and rerun a program in less than an hour.
Dedicated programmers might stay up well past midnight to get a few quick turnarounds. Use of this expensive equipment was often charged to a user's account. A mainframe computer could cost millions of dollars and usage was measured in seconds per job.
Smaller computers like the IBM 1620 and 1130, and minicomputers such as the PDP-11 were less expensive, and often run as an "open shop", where programmers had exclusive use of the computer for a block of time. A keypunch was usually located nearby for quick corrections – although many of these smaller machines ran from punched tape.
Identification and sequence
Many early programming languages, including FORTRAN, COBOL and the various IBM assembler languages, used only the first 72 columns of a card – a tradition that traces back to the IBM 711 card reader used on the IBM 704/709/7090/7094 series (especially the IBM 704, the first mass-produced computer with floating-point arithmetic hardware), which could only read 72 of the 80 columns in one pass.
Columns 73-80 were ignored by the compilers and could be used for identification or a sequence number, so that if the card deck was dropped it could be restored to its proper order using a card sorter. Depending on the programming language, debugging output statements could be quickly activated and "commented out" by using cards with such statements punched with the comment character (e.g., 'C' in Fortran) in column 80 of the card; turning the card end-for-end would put the 'C' in the leading column, which transformed the now backwards card's contents into a comment while leaving the physical card in place in the deck.
(An alternative, imperfect but commonly employed technique to maintain proper card order was to draw one or more diagonal stripes across the top edge of all the cards in a deck.)
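As a concrete illustration of the column conventions described above, here is a small, hypothetical Python sketch that splits an 80-column card image into the fields a FORTRAN fixed-form compiler would see; the field boundaries follow the standard fixed-form layout, and the sample card contents are invented.

```python
def split_card(card_image: str):
    """Split an 80-column card image into FORTRAN fixed-form fields.

    Columns 1-5: statement label, column 6: continuation mark,
    columns 7-72: statement text, columns 73-80: sequence/identification
    (ignored by the compiler, but useful for re-sorting a dropped deck).
    """
    card = card_image.ljust(80)[:80]
    return {
        "label": card[0:5],
        "continuation": card[5:6],
        "statement": card[6:72],
        "sequence": card[72:80],
    }

card = "C     COMPUTE AREA OF A CIRCLE".ljust(72) + "00000010"
fields = split_card(card)
print(fields["sequence"])   # '00000010' - the sequence field in columns 73-80
print(card[0] == "C")       # True: a 'C' in column 1 marks a comment line
```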
In later years, as punched card data was converted to magnetic tape files, the sequence numbers were often kept as an index column that could be correlated with time sequences, such as in the natural sciences where the data on the cards corresponded to the periodic output of a measuring device, for example water stage level recorders for rivers and streams in hydrology, or temperatures in meteorology. Entire vaults full of card decks could be reduced to much smaller racks of nine-track tapes.
See also
Unit record equipment history
Footnotes
References
External links
: Article about the programming culture that developed around use of the punched card, following Fisk's experience of "learning the craft" from people around him.
Gallery
Punched card era
History of software | Computer programming in the punched card era | [
"Technology",
"Engineering"
] | 1,487 | [
"Computer programming",
"Software engineering",
"History of software",
"Computers",
"History of computing"
] |
7,380,835 | https://en.wikipedia.org/wiki/Autonomation | Autonomation describes a feature of machine design to effect the principle of (じどうか jidōka), used in the Toyota Production System (TPS) and lean manufacturing. It may be described as "intelligent automation" or "automation with a human touch". This type of automation implements some supervisory functions rather than production functions. At Toyota, this usually means that if an abnormal situation arises, the machine stops and the worker will stop the production line. It is a quality control process that applies the following four principles:
Autonomation aims to prevent the production of defective products, eliminate overproduction, and focus attention on understanding problems so that they do not recur.
Purpose and implementation
Shigeo Shingo calls autonomation "pre-automation". It separates workers from machines through mechanisms that detect production abnormalities (many machines at Toyota have these). He says there are twenty-three stages between purely manual and fully automated work. To be fully automated, machines must be able to detect and correct their own operating problems, which is currently not cost-effective. However, ninety percent of the benefits of full automation can be gained by autonomation.
The purpose of autonomation is to make possible the rapid or immediate identification and correction of mistakes that occur in a process. Autonomation relieves the worker of the need to continuously judge whether the operation of the machine is normal; their attention is engaged only when the machine signals a problem. As well as making the work more interesting, this is a necessary step if the worker is later to be asked to supervise several machines. The first example of this at Toyota was the auto-activated loom of Sakichi Toyoda, which automatically and immediately stopped the loom if the vertical or lateral threads broke or ran out.
For instance, rather than waiting until the end of a production line to inspect a finished product, autonomation may be employed at early steps in the process to reduce the amount of work that is added to a defective product. A worker who is self-inspecting their own work, or source-inspecting the work produced immediately before their work station, is encouraged to stop the line when a defect is found. This detection is the first step in jidoka. A machine performing the same defect-detection process is engaged in autonomation.
Once the line is stopped a supervisor or person designated to help correct problems gives immediate attention to the problem the worker or machine has discovered. To complete Jidoka, not only is the defect corrected in the product where discovered, but the process is evaluated and changed to remove the possibility of making the same mistake again. One solution to the problems can be to insert a "mistake-proofing" device somewhere in the production line. Such a device is known as poka-yoke.
Relationship with just-in-time
Taiichi Ohno and Sakichi Toyoda, originators of the TPS and of its practices in the manufacturing of textiles, machinery and automobiles, considered just-in-time manufacturing and autonomation to be the pillars upon which TPS is built. Jeffrey Liker and David Meier indicate that jidoka, or "the decision to stop and fix problems as they occur rather than pushing them down the line to be resolved later", is a large part of the difference between the effectiveness of Toyota and other companies who have tried to adopt lean manufacturing. Autonomation, therefore, can be said to be a key element in successful lean manufacturing implementations.
For just-in-time (JIT) systems, it is absolutely vital to produce with zero defects, or else these defects can disrupt the production process – or the orderly flow of work.
JIT and lean manufacturing are always searching for targets for continuous improvement in their quest for quality, finding and eliminating the causes of problems so they do not continually crop up.
Jidoka involves the automatic detection of errors or defects during production. When a defect is detected, halting production forces immediate attention to the problem.
The halting slows production, but it is believed that this helps detect problems earlier and avoids the spread of bad practices.
Etymology
The word "autonomation" 自働化, a loan word from the Sino-Japanese vocabulary, is a portmanteau of "autonomous" and "automation" 自動化, which is written using three kanji characters: 自(じ ji) "self", 動(どう dou)movement, and 化(か ka)"-ization". In the Toyota Production System, the second character is replaced with 働(どう dou) "work", which is a character derived by adding a radical representing "human" to the original 動.
Zenjidoka
Zenjidoka (全自働化) is described as "taking jidoka all the way to the customer" and refers to extended practices in which sales, service and technical staff also have power to interrupt production to correct faults.
See also
Andon – a method of signaling a problem in order to get help immediately, typically in the form of an "andon team", to avoid halting the production line
Kaizen – continuous improvement
Semi-automation – a process or procedure that is performed by the combined activities of man and machine
References
Lean manufacturing
Toyota Production System | Autonomation | [
"Engineering"
] | 1,050 | [
"Lean manufacturing"
] |
7,381,179 | https://en.wikipedia.org/wiki/Avrami%20equation | The Avrami equation describes how solids transform from one phase to another at constant temperature. It can specifically describe the kinetics of crystallisation, can be applied generally to other changes of phase in materials, like chemical reaction rates, and can even be meaningful in analyses of ecological systems.
The equation is also known as the Johnson–Mehl–Avrami–Kolmogorov (JMAK) equation. It was first derived by Johnson, Mehl, Avrami and Kolmogorov, with Avrami's treatment appearing in a series of articles published in the Journal of Chemical Physics between 1939 and 1941. Kolmogorov had earlier treated the crystallization of a solid statistically in 1937 (in Russian: Kolmogorov, A. N., Izv. Akad. Nauk. SSSR., 1937, 3, 355).
Transformation kinetics
Transformations are often seen to follow a characteristic s-shaped, or sigmoidal, profile where the transformation rates are low at the beginning and the end of the transformation but rapid in between.
The initial slow rate can be attributed to the time required for a significant number of nuclei of the new phase to form and begin growing. During the intermediate period the transformation is rapid as the nuclei grow into particles and consume the old phase while nuclei continue to form in the remaining parent phase.
Once the transformation approaches completion, there remains little untransformed material for further nucleation, and the production of new particles begins to slow. Additionally, the previously formed particles begin to touch one another, forming a boundary where growth stops.
Derivation
The simplest derivation of the Avrami equation makes a number of significant assumptions and simplifications:
Nucleation occurs randomly and homogeneously over the entire untransformed portion of the material.
The growth rate does not depend on the extent of transformation.
Growth occurs at the same rate in all directions.
If these conditions are met, then a transformation of $\alpha$ into $\beta$ will proceed by the nucleation of new particles at a rate $\dot{N}$ per unit volume, which grow at a rate $\dot{G}$ into spherical particles and only stop growing when they impinge upon each other. During a time interval $0 < \tau < t$, nucleation and growth can only take place in untransformed material. However, the problem is more easily solved by applying the concept of an extended volume – the volume of the new phase that would form if the entire sample was still untransformed. During the time interval $\tau$ to $\tau + d\tau$ the number of nuclei N that appear in a sample of volume V will be given by
$N = \dot{N} V \, d\tau,$
where $\dot{N}$ is one of two parameters in this simple model: the nucleation rate per unit volume, which is assumed to be constant. Since growth is isotropic, constant and unhindered by previously transformed material, each nucleus will grow into a sphere of radius $\dot{G}(t - \tau)$, and so the extended volume of $\beta$ due to nuclei appearing in the time interval will be
$dV^e_\beta = \frac{4\pi}{3} \dot{G}^3 (t - \tau)^3 \dot{N} V \, d\tau,$
where $\dot{G}$ is the second of the two parameters in this simple model: the growth velocity of a crystal, which is also assumed constant. The integration of this equation between $\tau = 0$ and $\tau = t$ will yield the total extended volume that appears in the time interval:
$V^e_\beta = \frac{\pi}{3} \dot{N} V \dot{G}^3 t^4.$
Only a fraction of this extended volume is real; some portion of it lies on previously transformed material and is virtual. Since nucleation occurs randomly, the fraction of the extended volume that forms during each time increment that is real will be proportional to the volume fraction of untransformed $\alpha$. Thus
$dV_\beta = \left(1 - \frac{V_\beta}{V}\right) dV^e_\beta,$
rearranged
$\frac{1}{1 - V_\beta / V}\, dV_\beta = dV^e_\beta,$
and upon integration:
$\ln(1 - Y) = -\frac{V^e_\beta}{V},$
where Y is the volume fraction of $\beta$ ($V_\beta / V$).
Given the previous equations, this can be reduced to the more familiar form of the Avrami (JMAK) equation, which gives the fraction of transformed material after a hold time at a given temperature:
$Y = 1 - \exp(-K t^n),$
where $K = \frac{\pi}{3} \dot{N} \dot{G}^3$ and $n = 4$.
This can be rewritten as
$\ln(-\ln(1 - Y)) = \ln K + n \ln t,$
which allows the determination of the constants n and $K$ from a plot of $\ln(-\ln(1 - Y))$ vs $\ln t$. If the transformation follows the Avrami equation, this yields a straight line with slope n and intercept $\ln K$.
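As a minimal sketch of this linearization, the Python snippet below generates synthetic transformed-fraction data with assumed values of K and n, then recovers both constants from a straight-line fit of ln(−ln(1−Y)) against ln t; the parameter values and time range are illustrative only.

```python
import numpy as np

# Synthetic transformed-fraction data from Y = 1 - exp(-K t^n), assumed values.
K_true, n_true = 2.0e-4, 4.0
t = np.linspace(2.0, 15.0, 40)
Y = 1.0 - np.exp(-K_true * t**n_true)

# Keep points away from Y = 0 and Y = 1, where the double logarithm blows up.
mask = (Y > 0.01) & (Y < 0.99)
x = np.log(t[mask])
y = np.log(-np.log(1.0 - Y[mask]))

# Straight-line fit: slope is the Avrami exponent n, intercept is ln K.
n_fit, lnK_fit = np.polyfit(x, y, 1)
print(f"n ~ {n_fit:.2f}, K ~ {np.exp(lnK_fit):.2e}")
```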
Final crystallite (domain) size
Crystallization is largely over when $Y$ reaches values close to 1, which will be at a crystallization time $t_X$ defined by $K t_X^n \approx 1$, as then the exponential term in the above expression for $Y$ will be small. Thus crystallization takes a time of order
$t_X \sim \left(\dot{N} \dot{G}^3\right)^{-1/4},$
i.e., crystallization takes a time that decreases as one over the one-quarter power of the nucleation rate per unit volume, $\dot{N}$, and one over the three-quarters power of the growth velocity, $\dot{G}$. Typical crystallites grow for some fraction of the crystallization time and so have a linear dimension $\dot{G} t_X$, or
$\text{crystallite size} \sim \left(\frac{\dot{G}}{\dot{N}}\right)^{1/4},$
i.e., the one-quarter power of the ratio of the growth velocity to the nucleation rate per unit volume. Thus the size of the final crystals only depends on this ratio, within this model, and as we should expect, fast growth rates and slow nucleation rates result in large crystals. The average volume of the crystallites is of order this typical linear size cubed.
This all assumes an exponent of $n = 4$, which is appropriate for uniform (homogeneous) nucleation in three dimensions. Thin films, for example, may be effectively two-dimensional, in which case, if nucleation is again uniform, the exponent is $n = 3$. In general, for uniform nucleation and growth, $n = d + 1$, where $d$ is the dimensionality of the space in which crystallization occurs.
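The scaling relations above can be turned into rough numbers. The short sketch below does so for assumed, order-of-magnitude values of the nucleation rate and growth velocity; these inputs are illustrative and not tied to any particular material.

```python
def crystallization_time(N_dot, G_dot):
    """t_X ~ 1 / (N_dot * G_dot**3) ** 0.25  (order-of-magnitude estimate, n = 4)."""
    return 1.0 / (N_dot * G_dot**3) ** 0.25

def crystallite_size(N_dot, G_dot):
    """Typical linear size ~ G_dot * t_X ~ (G_dot / N_dot) ** 0.25."""
    return (G_dot / N_dot) ** 0.25

# Assumed illustrative values: 1e12 nuclei per m^3 per second, growth at 1 micron/s.
N_dot, G_dot = 1.0e12, 1.0e-6
print(f"t_X ~ {crystallization_time(N_dot, G_dot):.3g} s")              # ~32 s
print(f"crystallite size ~ {crystallite_size(N_dot, G_dot) * 1e6:.3g} um")  # ~32 um
```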
Interpretation of Avrami constants
Originally, n was held to have an integer value between 1 and 4, which reflected the nature of the transformation in question. In the derivation above, for example, the value of 4 can be said to have contributions from three dimensions of growth and one representing a constant nucleation rate. Alternative derivations exist, where n has a different value.
If the nuclei are preformed, and so all present from the beginning, the transformation is only due to the 3-dimensional growth of the nuclei, and n has a value of 3.
An interesting case arises when nucleation occurs on specific sites (such as grain boundaries or impurities) that rapidly saturate soon after the transformation begins. Initially, nucleation may be random, and growth unhindered, leading to high values for n (3 or 4). Once the nucleation sites are consumed, the formation of new particles will cease.
Furthermore, if the distribution of nucleation sites is non-random, then the growth may be restricted to 1 or 2 dimensions. Site saturation may lead to n values of 1, 2 or 3 for surface, edge and point sites respectively.
Applications in biophysics
The Avrami equation was applied in cancer biophysics in two aspects. The first aspect is connected with tumor growth and cancer-cell kinetics, which can be described by a sigmoidal curve. In this context the Avrami function was discussed as an alternative to the widely used Gompertz curve. In the second aspect, the Avrami nucleation and growth theory was used together with the multi-hit theory of carcinogenesis to show how a cancer cell is created. The number of oncogenic mutations in cellular DNA can be treated as nucleation particles which can transform the whole DNA molecule into a cancerous one (neoplastic transformation). This model was applied to clinical data of gastric cancer, and shows that Avrami's constant n is between 4 and 5, which suggests a fractal geometry of carcinogenic dynamics. Similar findings were published for breast and ovarian cancers, where n = 5.3.
Multiple Fitting of a Single Dataset (MFSDS)
The Avrami equation was used by Ivanov et al. to fit a dataset generated by another model, the so-called αDg model, multiple times to a sequence of upper values of α, always starting from α = 0, in order to generate a sequence of values of the Avrami parameter n. This approach was shown to be effective for a given experimental dataset, and the n values obtained follow the general direction predicted by fitting the α21 model multiple times.
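A rough sketch of this repeated-fitting idea is given below, assuming SciPy is available: the Avrami model is fitted to progressively longer initial portions of a single (here synthetic) dataset, always starting from the first point, and the fitted exponent n is recorded at each step. The data, parameter values, and window sizes are illustrative assumptions, not those of Ivanov et al.

```python
import numpy as np
from scipy.optimize import curve_fit

def avrami(t, K, n):
    """Transformed fraction Y(t) = 1 - exp(-K t^n)."""
    return 1.0 - np.exp(-K * t**n)

# Synthetic "experimental" dataset (assumed), ordered by increasing fraction.
t_data = np.linspace(1.0, 20.0, 60)
alpha = avrami(t_data, 5.0e-4, 3.0) + np.random.default_rng(0).normal(0, 0.005, 60)

# Fit repeatedly to growing initial windows, recording the exponent n each time.
n_sequence = []
for i in range(20, len(t_data) + 1, 10):
    (K_fit, n_fit), _ = curve_fit(avrami, t_data[:i], alpha[:i],
                                  p0=(1e-3, 2.0), maxfev=10000)
    n_sequence.append(n_fit)
print(np.round(n_sequence, 2))
```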
References
External links
IUPAC Compendium of Chemical Terminology 2nd ed. (the "Gold Book"), Oxford (1997)
Crystallography
Equations | Avrami equation | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,656 | [
"Mathematical objects",
"Materials science",
"Equations",
"Crystallography",
"Condensed matter physics"
] |
7,381,530 | https://en.wikipedia.org/wiki/Diorama%20%28Efteling%29 | The Diorama is a miniature world in Efteling amusement park in the Netherlands. The highly detailed mountainous world, or Diorama, was designed by Anton Pieck and opened in 1971, in honour of the 20th birthday of Efteling.
Visitors can walk around a 60-metre-long showcase with mountains, little villages, castles and churches, moving trains and automobiles, and flowing water. Most of the Diorama is set in daytime, but a smaller part is devoted to night-time.
The landscape has been built entirely out of styrofoam. The Diorama was the first attraction with contributions from Ton van de Ven, the creative director of Efteling at that time. He made some sketches for it, but they weren't used for the Diorama; years later they were used for one of the scenes in the dark ride Dreamflight.
The attraction was completely renovated in 2007: all rail tracks were replaced, the switching mechanisms for the signals were removed, and seven small attractions from the "real Efteling" were added.
Trivia
One of the wooden bridges over the railroad tracks in the first scene has collapsed, and a new wooden bridge has been built next to it. This looks like a deliberate creative touch, but the original bridge was actually stepped on by one of the builders by accident.
Exceptionally, Märklin manufactured the Minex steam trains specially for Efteling for many years.
References
Scale modeling
Efteling | Diorama (Efteling) | [
"Physics"
] | 305 | [
"Scale modeling"
] |