| id (int64) | url (string) | text (string) | source (string) | categories (string) | token_count (int64) |
|---|---|---|---|---|---|
2,670,644 | https://en.wikipedia.org/wiki/Kappa%20Sculptoris | The Bayer designation Kappa Sculptoris (κ Scl, κ Sculptoris) is shared by two star systems, κ¹ Sculptoris and κ² Sculptoris, in the constellation Sculptor. They are separated by 0.53° in the sky.
κ¹ Sculptoris (HR 24), a binary containing two F-type giants
κ² Sculptoris (HR 34), a K-type giant
Sculptor (constellation) | Kappa Sculptoris | Astronomy | 88 |
39,643,921 | https://en.wikipedia.org/wiki/Markov%E2%80%93Krein%20theorem | In probability theory, the Markov–Krein theorem gives the best upper and lower bounds on the expected values of certain functions of a random variable where only the first moments of the random variable are known. The result is named after Andrey Markov and Mark Krein.
The theorem can be used to bound average response times in the M/G/k queueing system.
References
Probability theorems | Markov–Krein theorem | Mathematics | 83 |
78,147,827 | https://en.wikipedia.org/wiki/Electrostatic%20solitary%20wave | In space physics, an electrostatic solitary wave (ESW) is a type of electromagnetic soliton occurring during short time scales (when compared to the general time scales of variations in the average electric field) in plasma. When a rapid change occurs in the electric field in a direction parallel to the orientation of the magnetic field, and this perturbation is caused by a unipolar or dipolar electric potential, it is classified as an ESW.
Since the creation of ESWs is largely associated with turbulent fluid interactions, some experiments use them to compare how chaotic a measured plasma's mixing is. As such, many studies which involve ESWs are centered around turbulence, chaos, instabilities, and magnetic reconnection.
History
The discovery of solitary waves in general is attributed to John Scott Russell in 1834, with their first mathematical conceptualization being finalized in 1871 by Joseph Boussinesq (and later refined and popularized by Lord Rayleigh in 1876). However, these observations and solutions were for oscillations of a physical medium (usually water), and not describing the behavior of non-particle waves (including electromagnetic waves). For solitary waves outside of media, which ESWs are classified as, the first major framework was likely developed by Louis de Broglie in 1927, though his work on the subject was temporarily abandoned and was not completed until the 1950s.
Electrostatic structures were first observed near Earth's polar cusp by Donald Gurnett and Louis A. Frank using data from the Hawkeye 1 satellite in 1978. However, it is Michael Temerin, William Lotko, Forrest Mozer, and Keith Cerny who are credited with the first observation of electrostatic solitary waves in Earth's magnetosphere in 1982. Since then, a wide variety of magnetospheric satellites have observed and documented ESWs, allowing for analysis of them and the surrounding plasma conditions.
Detection
Electrostatic solitary waves, by their nature, are a phenomenon occurring in the electric field of a plasma. As such, ESWs are technically detectable by any instrument that can measure changes to the electric field during a sufficiently short time window. However, since a given plasma's electric field can vary widely depending on the properties of the plasma and since ESWs occur in short time windows, detection of ESWs can require additional screening of the data beyond the measurement of the electric field itself. One solution to this obstacle, implemented by NASA's Magnetospheric Multiscale Mission (MMS), is to use a digital signal processor to analyze the electric field data and isolate short-duration spikes as candidates for ESWs. Though the following detection algorithm is specific to MMS, other ESW-detecting algorithms function on similar principles.
To detect an ESW, the data from a device measuring the electric field is sent to the digital signal processor. This data is analyzed across a short time window (in the case of MMS, 1 millisecond), taking both the average electric field magnitude and the largest electric field magnitude during that window. If the peak field strength exceeds some multiple of the average field strength (4 times the average in the case of MMS), the time window is considered to contain an ESW. The ESW can then be associated with the peak electric field strength and categorized accordingly. These algorithms vary in their detection success, since both the time window and the detection multiplier are chosen by scientists based on the parameters they wish to detect; as such, the algorithms often produce false positives and false negatives.
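The thresholding step described above can be sketched in a few lines of code. The window length and the peak-to-average multiplier below follow the MMS values quoted in the text; the sampling rate and the synthetic field data are illustrative placeholders, not parameters of the actual MMS digital signal processor.

```python
# Sketch of the peak-vs-average thresholding described above.
# Window length (1 ms) and multiplier (4x) follow the MMS values in the text;
# the sampling rate and the synthetic data are illustrative assumptions.
import numpy as np

SAMPLE_RATE_HZ = 65_536          # assumed sampling rate of the field instrument
WINDOW_S = 1e-3                  # 1 millisecond analysis window
THRESHOLD_MULTIPLIER = 4.0       # peak must exceed 4x the window-average magnitude

def find_esw_candidates(e_field):
    """Return the start indices of windows flagged as ESW candidates."""
    window = int(SAMPLE_RATE_HZ * WINDOW_S)
    candidates = []
    for start in range(0, len(e_field) - window + 1, window):
        segment = np.abs(e_field[start:start + window])
        if segment.max() > THRESHOLD_MULTIPLIER * segment.mean():
            candidates.append(start)
    return candidates

# Example: weak background noise with one sharp spike inserted by hand.
rng = np.random.default_rng(0)
field = 0.05 * rng.standard_normal(655)   # roughly 10 ms of synthetic data
field[300] += 2.0                         # artificial spike
print(find_esw_candidates(field))         # flags the window containing the spike
```

Choosing the window length and multiplier trades sensitivity against false alarms, which mirrors the tuning issue described in the paragraph above.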
Interactions
One of the primary physical consequences of ESWs is their creation of electron phase-space holes, a type of structure which prevents low velocity electrons from remaining close to the source of the ESW. These phase-space holes, like the ESWs themselves, can travel stably through the surrounding plasma. Since most plasmas are overall electrically neutral, these phase-space holes often end up behaving as a positive pseudoparticle.
In general, in order to form an electron phase-space hole, the electric potential energy associated with the ESW's potential needs to exceed the kinetic energy of electrons in the plasma (behavior analogous to potential hills). Research has shown that kinetic instabilities are one set of situations where this occurs naturally. One observed example is the increased occurrence of these holes near Earth's bow shock and magnetopause, where the incoming solar wind collides with Earth's magnetosphere to produce large amounts of turbulence in the plasma.
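As a schematic restatement of the trapping condition described above (the symbols here are generic and not drawn from a specific source), a hole forms when the electrostatic potential energy of the structure exceeds the kinetic energy of the electrons it is to trap:

$$ e\,\phi_{\mathrm{ESW}} \;\gtrsim\; \tfrac{1}{2}\, m_e v_e^{2}, $$

where $e$ is the elementary charge, $\phi_{\mathrm{ESW}}$ is the amplitude of the solitary-wave potential, $m_e$ is the electron mass, and $v_e$ is the electron speed relative to the moving structure.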
Forms
The definition of an ESW is broad enough that, on occasion, research distinguishes between different types:
Ion-acoustic solitary waves: A type of ESW that occurs when the electric potential that causes the ESW produces an ion acoustic wave.
Electron-acoustic solitary waves: A type of ESW that produces an acoustic wave associated with electrons. These tend to be substantially faster and higher frequency than ion-acoustic solitary waves.
Supersolitary waves: A type of ESW whose electric potential includes pulses on even smaller time scales than the ESW itself.
See also
Soliton
Interplanetary magnetic field
Solar wind
Electric potential
Turbulence
Time domain electromagnetics
Notes
a. An ESW itself is strictly an electromagnetic phenomenon, and as such is technically not dependent on a medium. However, this technicality should be treated with caution: nearly all conditions that give rise to an ESW are theorized to depend on the plasma medium in which it resides.
b. Though the identity of the other three co-authors is known for certain, the career of K. Cerny after the publication of their paper is poorly documented. The first name, date, school, and major associated with graduation heavily suggest that Keith Cerny is the K. Cerny credited on the paper, but this is as yet unconfirmed.
References
Physics
Wave mechanics
Space physics
Solitons
1982 in science
Waves in plasmas
Quasiparticles | Electrostatic solitary wave | Physics,Materials_science,Astronomy | 1,224 |
342,457 | https://en.wikipedia.org/wiki/Cellulase | Cellulase (; systematic name 4-β-D-glucan 4-glucanohydrolase) is any of several enzymes produced chiefly by fungi, bacteria, and protozoans that catalyze cellulolysis, the decomposition of cellulose and of some related polysaccharides:
Endohydrolysis of (1→4)-β-D-glucosidic linkages in cellulose, lichenin and cereal β-D-glucan
The name is also used for any naturally occurring mixture or complex of various such enzymes that act serially or synergistically to decompose cellulosic material.
Cellulases break down the cellulose molecule into monosaccharides ("simple sugars") such as β-glucose, or shorter polysaccharides and oligosaccharides. Cellulose breakdown is of considerable economic importance, because it makes a major constituent of plants available for consumption and use in chemical reactions. The specific reaction involved is the hydrolysis of the 1,4-β-D-glycosidic linkages in cellulose, hemicellulose, lichenin, and cereal β-D-glucans. Because cellulose molecules bind strongly to each other, cellulolysis is relatively difficult compared to the breakdown of other polysaccharides such as starch.
Most mammals have only very limited ability to digest dietary fibres like cellulose by themselves. In many herbivorous animals such as ruminants like cattle and sheep and hindgut fermenters like horses, cellulases are produced by symbiotic bacteria. Endogenous cellulases are produced by a few types of animals, such as some termites, snails, and earthworms.
Cellulases have also been found in green microalgae (Chlamydomonas reinhardtii, Gonium pectorale and Volvox carteri), and their catalytic domains (CDs), belonging to the GH9 family, show the highest sequence homology to metazoan endogenous cellulases. Algal cellulases are modular, consisting of putative novel cysteine-rich carbohydrate-binding modules (CBMs) and proline/serine-rich (PS) linkers, in addition to putative Ig-like and unknown domains in some members. The cellulase from Gonium pectorale consists of two CDs separated by linkers, with a C-terminal CBM.
Several different kinds of cellulases are known, which differ structurally and mechanistically. Synonyms, derivatives, and specific enzymes associated with the name "cellulase" include endo-1,4-β-D-glucanase (β-1,4-glucanase, β-1,4-endoglucan hydrolase, endoglucanase D, 1,4-(1,3;1,4)-β-D-glucan 4-glucanohydrolase), carboxymethyl cellulase (CMCase), avicelase, celludextrinase, cellulase A, cellulosin AP, alkali cellulase, cellulase A 3, 9.5 cellulase, celloxylanase and pancellase SS. Enzymes that cleave lignin have occasionally been called cellulases, but this old usage is deprecated; they are lignin-modifying enzymes.
Types and action
There are five general types of cellulases, based on the type of reaction catalyzed:
Endocellulases (EC 3.2.1.4) randomly cleave internal bonds at amorphous sites that create new chain ends.
Exocellulases or cellobiohydrolases (EC 3.2.1.91) cleave two to four units from the ends of the exposed chains produced by endocellulase, resulting in tetrasaccharides or disaccharides, such as cellobiose. Exocellulases are further classified into type I, that work processively from the reducing end of the cellulose chain, and type II, that work processively from the nonreducing end.
Cellobiases (EC 3.2.1.21) or β-glucosidases hydrolyse the exocellulase product into individual monosaccharides.
Oxidative cellulases depolymerize cellulose by radical reactions, as for instance cellobiose dehydrogenase (acceptor).
Cellulose phosphorylases depolymerize cellulose using phosphates instead of water.
Within the above types there are also progressive (also known as processive) and nonprogressive types. Progressive cellulase will continue to interact with a single polysaccharide strand, while nonprogressive cellulase will interact once, then disengage and engage another polysaccharide strand.
Cellulase action is considered to be synergistic, as the three classes of cellulase acting together can yield much more sugar than the sum of what each yields separately. Aside from ruminants, most animals (including humans) do not produce cellulase in their bodies and can only partially break down cellulose through fermentation, limiting their ability to use energy in fibrous plant material.
Structure
Most fungal cellulases have a two-domain structure, with one catalytic domain and one cellulose binding domain, that are connected by a flexible linker. This structure is adapted for working on an insoluble substrate, and it allows the enzyme to diffuse two-dimensionally on a surface in a caterpillar-like fashion. However, there are also cellulases (mostly endoglucanases) that lack cellulose binding domains.
Both binding of substrates and catalysis depend on the three-dimensional structure of the enzyme, which arises as a consequence of protein folding. The amino acid sequence and the arrangement of residues within the active site, the position where the substrate binds, may influence factors such as binding affinity of ligands, stabilization of substrates within the active site, and catalysis. The substrate structure is complementary to the precise active-site structure of the enzyme, and changes in the position of residues may distort one or more of these interactions. Additional factors such as temperature, pH and metal ions influence the non-covalent interactions within the enzyme structure. The species Thermotoga maritima makes cellulases consisting of two β-sheets surrounding a central catalytic region that contains the active site (Cheng et al. 2011, Proteins 79(4):1193–1204). The enzyme is categorised as an endoglucanase, which internally cleaves β-1,4-glycosidic bonds in cellulose chains, facilitating further degradation of the polymer. Different species in the same family as T. maritima make cellulases with different structures. Cellulases produced by the species Coprinopsis cinerea consist of seven protein strands in the shape of an enclosed tunnel called a β/α barrel. These enzymes hydrolyse the substrate carboxymethyl cellulose. Binding of the substrate in the active site induces a change in conformation which allows degradation of the molecule.
Cellulase complexes
In many bacteria, cellulases in vivo are complex enzyme structures organized into supramolecular complexes, the cellulosomes. They can contain, but are not limited to, five different enzymatic subunits, namely endocellulases, exocellulases, cellobiases, oxidative cellulases and cellulose phosphorylases, wherein only exocellulases and cellobiases participate in the actual hydrolysis of the β(1→4) linkage. The number of subunits making up cellulosomes can also determine the rate of enzyme activity.
Multidomain cellulases are widespread among many taxonomic groups; however, cellulases from anaerobic bacteria, found in cellulosomes, have the most complex architecture, consisting of different types of modules. For example, Clostridium cellulolyticum produces 13 GH9 modular cellulases containing different numbers and arrangements of catalytic domains (CD), carbohydrate-binding modules (CBM), dockerins, linkers and Ig-like domains.
The cellulase complex from Trichoderma reesei, for example, comprises a component labeled C1 (57,000 daltons) that separates the chains of crystalline cellulose, an endoglucanase (about 52,000 daltons), an exoglucanase (about 61,000 daltons), and a β-glucosidase (76,000 daltons).
Numerous "signature" sequences known as dockerins and cohesins have been identified in the genomes of bacteria that produce cellulosomes. Depending on their amino acid sequence and tertiary structures, cellulases are divided into clans and families.
Multimodular cellulases are more efficient than free enzymes (with only a CD) due to the synergism created by the close proximity between the enzyme and the cellulosic substrate. CBMs are involved in the binding of cellulose, whereas glycosylated linkers provide flexibility to the CD for higher activity and protease protection, as well as increased binding to the cellulose surface.
Mechanism of cellulolysis
Uses
Cellulase is used for commercial food processing in coffee. It performs hydrolysis of cellulose during drying of beans. Furthermore, cellulases are widely used in textile industry and in laundry detergents. They have also been used in the pulp and paper industry for various purposes, and they are even used for pharmaceutical applications.
Cellulase is used in the fermentation of biomass into biofuels, although this process is relatively experimental at present.
Paper and pulp
Cellulases have a wide variety of applications in the paper and pulp industry. In the production and recycling processes, cellulases can be applied to improve debarking, pulping, bleaching, drainage or deinking.
The use of cellulase can also improve the quality of the paper. Cellulases affect the fiber morphology, which may lead to improved fibre-fibre bonding, resulting in increased fibre cohesion. Additional effects on the paper may include increased tensile strength, higher bulk, porosity and tissue softness.
Pharmaceutical
Cellulase is used in medicine as a treatment for phytobezoars, a form of cellulose bezoar found in the human stomach, and it has exhibited efficacy in degrading polymicrobial bacterial biofilms by hydrolyzing the β(1-4) glycosidic linkages within the structural, matrix exopolysaccharides of the extracellular polymeric substance (EPS).
Textiles
Various uses of cellulases in the textile industry include biostoning of jeans, polishing of textile fibres, softening of garments, removal of excess dye or the restoration of colour brightness.
Agriculture
Cellulases can be used in the agricultural sector for plant pathogen and disease control. They are also applied to enhance seed germination and improve the root system, and may lead to improved soil quality and reduced dependence on mineral fertilisers.
Measurement
As the native substrate, cellulose, is a water-insoluble polymer, traditional reducing sugar assays using this substrate cannot be employed for the measurement of cellulase activity. Analytical scientists have developed a number of alternative methods.
DNSA method: Cellulase activity can be determined by incubating 0.5 ml of enzyme supernatant with 0.5 ml of 1% carboxymethylcellulose (CMC) in 0.05 M citrate buffer (pH 4.8) at 50 °C for 30 minutes. The reaction is terminated by the addition of 3 ml of dinitrosalicylic acid (DNS) reagent, and the absorbance is read at 540 nm.
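As a hypothetical worked example of how such a reading is turned into an activity value, the sketch below assumes a linear glucose standard curve and the common unit definition of one µmol of reducing sugar released per minute per ml of enzyme; the absorbance and standard-curve slope are placeholder numbers, while the volumes and incubation time follow the protocol above.

```python
# Hypothetical conversion of a DNS absorbance reading into a cellulase activity
# estimate. The standard-curve slope is a placeholder that would normally come
# from a glucose calibration series run with the same DNS reagent.
A540 = 0.42               # blank-corrected absorbance at 540 nm (placeholder)
slope = 0.75              # assumed standard curve: absorbance per (mg/ml) glucose
reaction_volume_ml = 1.0  # 0.5 ml enzyme + 0.5 ml CMC substrate
enzyme_volume_ml = 0.5    # volume of enzyme supernatant in the reaction
time_min = 30.0           # incubation time from the protocol
glucose_mw = 180.16       # molar mass of glucose [g/mol]

glucose_mg_per_ml = A540 / slope                              # reducing sugar released
glucose_umol = glucose_mg_per_ml * reaction_volume_ml / glucose_mw * 1000

# Activity in U/ml: µmol reducing sugar released per minute per ml of enzyme.
activity_U_per_ml = glucose_umol / (time_min * enzyme_volume_ml)
print(f"estimated cellulase activity: {activity_U_per_ml:.3f} U/ml")
```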
A viscometer can be used to measure the decrease in viscosity of a solution containing a water-soluble cellulose derivative such as carboxymethyl cellulose upon incubation with a cellulase sample. The decrease in viscosity is directly proportional to the cellulase activity. While such assays are very sensitive and specific for endo-cellulase (exo-acting cellulase enzymes produce little or no change in viscosity), they are limited by the fact that it is hard to define activity in conventional enzyme units (micromoles of substrate hydrolyzed or product produced per minute).
Cellooligosaccharide substrates
The lower-DP cello-oligosaccharides (DP 2-6) are sufficiently soluble in water to act as viable substrates for cellulase enzymes. However, as these substrates are themselves reducing sugars, they are not suitable for use in traditional reducing sugar assays because they generate a high 'blank' value. Their cellulase-mediated hydrolysis can nevertheless be monitored by HPLC or IC methods to gain valuable information on the substrate requirements of a particular cellulase enzyme.
Reduced cello-oligosaccharide substrates
Cello-oligosaccharides can be chemically reduced through the action of sodium borohydride to produce their corresponding sugar alcohols. These compounds do not react in reducing sugar assays, but their hydrolysis products do. This makes borohydride-reduced cello-oligosaccharides valuable substrates for the assay of cellulase using traditional reducing sugar assays such as the Nelson–Somogyi method.
Dyed polysaccharide substrates
These substrates can be subdivided into two classes:
Insoluble chromogenic substrates: An insoluble cellulase substrate such as AZCL-HE-cellulose absorbs water to create gelatinous particles when placed in solution. This substrate is gradually depolymerised and solubilised by the action of cellulase. The reaction is terminated by adding an alkaline solution to stop enzyme activity and the reaction slurry is filtered or centrifuged. The colour in the filtrate or supernatant is measured and can be related to enzyme activity.
Soluble chromogenic substrates: A cellulase sample is incubated with a water-soluble substrate such as azo-CM-cellulose, the reaction is terminated and high molecular weight, partially hydrolysed fragments are precipitated from solution with an organic solvent such as ethanol or methoxyethanol. The suspension is mixed thoroughly, centrifuged, and the colour in the supernatant solution (due to small, soluble, dyed fragments) is measured. With the aid of a standard curve, the enzyme activity can be determined.
Enzyme coupled reagents
New reagents have been developed that allow for the specific measurement of endo-cellulase. These methods involve the use of functionalised oligosaccharide substrates in the presence of an ancillary enzyme. In one such design, a cellulase enzyme is able to recognise the trisaccharide fragment of cellulose and cleave this unit. The ancillary enzyme present in the reagent mixture (β-glucosidase) then acts to hydrolyse the fragment containing the chromophore or fluorophore. The assay is terminated by the addition of a basic solution that stops the enzymatic reaction and deprotonates the liberated phenolic compound to produce the phenolate species. The cellulase activity of a given sample is directly proportional to the quantity of phenolate liberated, which can be measured using a spectrophotometer. The acetal functionalisation on the non-reducing end of the trisaccharide substrate prevents the action of the ancillary β-glucosidase on the parent substrate.
See also
Cellulose 1,4-beta-cellobiosidase, an efficient cellulase
Cellulase unit, a unit for quantifying cellulase activity
References
Further reading
The Merck Manual of Diagnosis and Therapy, Chapter 24
Carbohydrate metabolism
Cellulose
Enzymes | Cellulase | Chemistry | 3,561 |
35,604,835 | https://en.wikipedia.org/wiki/Internet%20Hall%20of%20Fame | The Internet Hall of Fame is an honorary lifetime achievement award administered by the Internet Society (ISOC) in recognition of individuals who have made significant contributions to the development and advancement of the Internet.
Overview
The Internet Hall of Fame was established in 2012, on the 20th anniversary of ISOC. Its stated purpose is to "publicly recognize a distinguished and select group of visionaries, leaders and luminaries who have made significant contributions to the development and advancement of the global Internet".
Nominations may be made by anyone through an applications process. The Internet Hall of Fame Advisory Board is responsible for the final selection of inductees. The advisory board is made up of professionals in the Internet industry.
History
In 2012, there were 33 inaugural inductees into the Hall of Fame, announced on April 23, 2012, at the Internet Society's Global INET conference in Geneva, Switzerland.
There were 32 inductees in 2013. They were announced on June 26, 2013, and the induction ceremony was held on August 3, 2013, in Berlin, Germany. The ceremony was originally to be held in Istanbul, but the venue was changed due to the ongoing anti-government protests in Turkey.
The class of 2014 inducted 24 people. They were announced at an event in Hong Kong.
There were no inductees in 2015 or 2016, while the ISOC worked to create an Advisory Board to provide leadership on the program's direction. This Advisory Board would be responsible for the selection of the inductees going forward.
On September 18, 2017, the Internet Society gathered to honor the fourth class of Internet Hall of Fame Inductees at UCLA, where nearly 50 years before, the first electronic message was sent over the Internet's predecessor, the ARPANET.
On September 27, 2019, 11 new members were inducted into the Internet Hall of Fame in a ceremony in San Jose, Costa Rica. The inductees included Larry Irving, the first African-American to be inducted.
In February 2021, the Internet Hall of Fame announced that nominations were open for 2021 inductees until April 23, 2021, which was later extended to May 7, 2021.
Inductees
From 2012 to 2017, inductees were considered in three categories:
Pioneers: "Individuals who were instrumental in the early design and development of the Internet."
Global Connectors: "Individuals from around the world who have made significant contributions to the global growth and use of the Internet."
Innovators: "Individuals who made outstanding technological, commercial, or policy advances and helped to expand the Internet's reach."
An asterisk (*) indicates a posthumous recipient. Since 2019, inductees are not assigned categories.
Pioneers
2012
Paul Baran*
Vint Cerf
Danny Cohen
Steve Crocker
Donald Davies*
Elizabeth J. Feinler
Charles Herzfeld*
Robert Kahn
Peter Kirstein
Leonard Kleinrock
John Klensin
Jon Postel*
Louis Pouzin
Lawrence Roberts
2013
David Clark
David Farber
Howard Frank
Kanchana Kanchanasut
J.C.R. Licklider*
Bob Metcalfe
Jun Murai
Kees Neggers
Nii Quaynor
Glenn Ricart
Robert Taylor
Stephen Wolff
Werner Zorn
2014
Douglas Engelbart*
Susan Estrada
Frank Heart*
Dennis Jennings
Rolf Nordhagen*
Radia Perlman
Global connectors
2012
Randy Bush
Kilnam Chon
Al Gore
Nancy Hafkin
Geoff Huston
Brewster Kahle
Daniel Karrenberg
Toru Takahashi
Tan Tin Wee
2013
Karen Banks
Gihan Dias
Anriette Esterhuysen
Steve Goldstein
Teus Hagen
Ida Holz
Qiheng Hu
Haruhisa Ishida*
Barry Leiner*
George Sadowsky
2014
Dai Davies
Demi Getschko
Masaki Hirabaru*
Erik Huizer
Steve Huter
Abhaya Induruwa
Dorcas Muthoni
Mahabir Pun
Srinivasan Ramani
Michael Roberts
Ben Segal
Douglas Van Houweling
2017
Nabil Bukhalid
Ira Fuchs
Shigeki Goto
Mike Jensen
Ermanno Pietrosemoli
Tadao Takahashi
Florencio Utreras
Jianping Wu
Innovators
2012
Mitchell Baker
Tim Berners-Lee
Robert Cailliau
Van Jacobson
Lawrence Landweber
Paul Mockapetris
Craig Newmark
Raymond Tomlinson
Linus Torvalds
Philip Zimmermann
2013
Marc Andreessen
John Perry Barlow
François Flückiger
Stephen Kent
Anne-Marie Eklund Löwinder
Henning Schulzrinne
Richard Stallman
Aaron Swartz*
Jimmy Wales
2014
Eric Allman
Eric Bina
Karlheinz Brandenburg
John Cioffi
Hualin Qian
Paul Vixie
2017
Jaap Akkerhuis
Yvonne Marie Andrés
Alan Emtage
Ed Krol
Tracy LaQuey Parker
Craig Partridge
Inductees since 2019
2019
Adiel Akplogan
Kimberly Claffy
Douglas Comer
Elise Gerich
Larry Irving
Daniel C. Lynch
Jean Armour Polly
José Soriano
Michael Stanton
Klaas Wierenga
Suguru Yamaguchi*
2021
Carlos Afonso
Rob Blokzijl*
Hans-Werner Braun
Frode Greisen
Jan Gruntorad
Saul Hahn
Kim Hubbard
Rafael (Lito) Ibarra
Xing Li
Yngvar G. Lundh*
Dan Kaminsky*
DaeYoung Kim
Kenneth J. Klingenstein
Alejandro Pisanty
Yakov Rekhter
Philip Smith
Pål Spilling*
Liane Tarouco
Virginia Travers
George Varghese
Lixia Zhang
2023
Abhay Bhushan
Laura Breeden
Ivan Moura Campos
Steve Cisler
Peter Eckersley*
Hartmut Richard Glaser
Simon S. Lam
William L. Schrader
Guy F. de Téramond
Advisory board
2012
Lishan Adam
Joi Ito
Mark Mahaney
Chris Anderson
Mike Jensen
Alejandro Pisanty
Alex Corenthin
Aleks Krotoski
Lee Rainie
William H. Dutton
Loïc Le Meur
2013 and 2014
Lishan Adam
Raúl Echeberría
C.L. Liu
Hessa Al Jaber
Hartmut Glaser
Alejandro Pisanty
Grace Chng
Katie Hafner
Oliver Popov
Alex Corenthin
Mike Jensen
Lee Rainie
William H. Dutton
Aleks Krotoski
Andreu Veà Baró
2017
Randy Bush
Steven Huter
Srinivasan Ramani
Kilnam Chon
Abhaya Induruwa
Glenn Ricart
Gihan Dias
Dennis Jennings
Lawrence Roberts
Anriette Esterhuysen
John Klensin
George Sadowsky
Susan Estrada
Lawrence Landweber
Douglas Van Houweling
Demi Getschko
Paul Mockapetris
Paul Vixie
Nancy Hafkin
Radia Perlman
See also
Internet celebrity
Internet pioneers
IEEE Internet Award
SIGCOMM Award
References
External links
Q&As with the living inductees, from Wired, 2012
Lifetime achievement awards
Awards established in 2012
Science and technology halls of fame | Internet Hall of Fame | Technology | 1,370 |
34,994,976 | https://en.wikipedia.org/wiki/Martin%20Zirnbauer | Martin R. Zirnbauer (born 25 April 1958) is a professor of theoretical physics at the University of Cologne.
Zirnbauer studied at the Technical University of Munich and Oxford University, where he earned his PhD in 1982. In 1987 he was appointed at age 29 to Cologne. In 1996 he acquired his professorial chair. Among his foreign research sabbaticals, he visited the California Institute of Technology in Pasadena. His research specialty is the mathematical physics of mesoscopic systems.
Awards
In 2009 he received the prestigious Leibniz Prize from the Deutsche Forschungsgemeinschaft for his research, which granted him 2.5 million € over a period of seven years. In 2012 he was awarded the Max Planck Medal.
External links
Zirnbauer website at U. of Cologne
1958 births
20th-century German physicists
Theoretical physicists
Technical University of Munich alumni
Living people
Winners of the Max Planck Medal
21st-century German physicists
Academic staff of the University of Cologne | Martin Zirnbauer | Physics | 201 |
42,566,408 | https://en.wikipedia.org/wiki/Infrared%20Science%20Archive | The Infrared Science Archive (IRSA) is the primary archive for the infrared and submillimeter astronomical projects of NASA, the space agency of the United States. IRSA curates the science products of over 15 missions, including the Spitzer Space Telescope, the Wide-field Infrared Survey Explorer (WISE), the Infrared Astronomical Satellite (IRAS), and the Two Micron All-Sky Survey (2MASS). It also serves data from infrared and submillimeter European Space Agency missions with NASA participation, including the Infrared Space Observatory (ISO), Planck, and the Herschel Space Observatory. , IRSA provides access to more than 1 petabyte of data consisting of roughly 1 trillion astronomical measurements, which span wavelengths from 1 micron to 10 millimeters and include all-sky coverage in 24 bands. Approximately 10% of all refereed astronomical journal articles cite data sets curated by IRSA.
IRSA is part of the Infrared Processing and Analysis Center (IPAC) and is located on the campus of the California Institute of Technology. It is one of NASA's Astrophysics Data Centers, along with the High Energy Astrophysics Science Archive Research Center (HEASARC), the Mikulski Archive for Space Telescopes (MAST), and others.
References
External links
Infrared Science Archive
Infrared Processing and Analysis Center
Astronomical databases
Government databases in the United States
Infrared imaging | Infrared Science Archive | Astronomy | 279 |
40,547,030 | https://en.wikipedia.org/wiki/Multibody%20simulation | Multibody simulation (MBS) is a method of numerical simulation in which multibody systems are composed of various rigid or elastic bodies. Connections between the bodies can be modeled with kinematic constraints (such as joints) or force elements (such as spring dampers). Unilateral constraints and Coulomb-friction can also be used to model frictional contacts between bodies.
Multibody simulation is a useful tool for conducting motion analysis. It is often used during product development to evaluate characteristics of comfort, safety, and performance. For example, multibody simulation has been widely used since the 1990s as a component of automotive suspension design. It can also be used to study issues of biomechanics, with applications including sports medicine, osteopathy, and human-machine interaction.
The heart of any multibody simulation software program is the solver. The solver is a set of computation algorithms that solve equations of motion. Types of components that can be studied through multibody simulation range from electronic control systems to noise, vibration and harshness. Complex models such as engines are composed of individually designed components, e.g. pistons and crankshafts.
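To make the role of the solver concrete, the sketch below integrates the equations of motion for a deliberately tiny "multibody system": two point masses coupled by spring-damper force elements. It is a minimal illustration of the numerical task, not a representation of any particular MBS package, and all parameter values are invented.

```python
# Minimal sketch of what an MBS solver does: numerically integrate the
# equations of motion of a multibody system. Here the bodies are two point
# masses coupled by spring-damper force elements; all values are illustrative.
from scipy.integrate import solve_ivp

m1, m2 = 2.0, 1.0        # body masses [kg]
k1, k2 = 400.0, 200.0    # spring stiffnesses [N/m]
c1, c2 = 5.0, 2.0        # damping coefficients [N*s/m]

def equations_of_motion(t, y):
    """State y = [x1, v1, x2, v2]; return its time derivative."""
    x1, v1, x2, v2 = y
    f_ground = -k1 * x1 - c1 * v1                    # spring-damper to ground
    f_coupling = -k2 * (x2 - x1) - c2 * (v2 - v1)    # force element between bodies
    a1 = (f_ground - f_coupling) / m1                # reaction of the coupling on body 1
    a2 = f_coupling / m2
    return [v1, a1, v2, a2]

# Initial conditions: body 2 displaced by 0.1 m, everything else at rest.
sol = solve_ivp(equations_of_motion, (0.0, 5.0), [0.0, 0.0, 0.1, 0.0], max_step=1e-3)
print("final positions:", sol.y[0, -1], sol.y[2, -1])
```

A full MBS solver performs the same task at much larger scale, with constraint equations for joints handled alongside the force elements, typically as differential-algebraic equations.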
The MBS process can often be divided into five main activities. The first activity is the "3D CAD master model", in which product developers, designers and engineers use a CAD system to generate a CAD model and its assembly structure according to given specifications. This 3D CAD master model is converted during the "data transfer" activity to an MBS input data format, e.g. STEP. "MBS modeling" is the most complex activity in the process chain. Following rules and experience, the 3D model in MBS format, together with boundaries, kinematics, forces, moments and degrees of freedom, is used as input to generate the MBS model. Engineers have to use MBS software and their knowledge and skills in engineering mechanics and machine dynamics to build the MBS model, including joints and links. The generated MBS model is used during the next activity, "simulation". Simulations, which are specified by time increments and boundaries such as starting conditions, are run by the MBS software. It is also possible to perform MBS simulations using free and open-source packages. The last activity is "analysis and evaluation". Engineers use case-dependent directives to analyze and evaluate moving paths, speeds, accelerations, forces or moments. The results are used to grant releases or to improve the MBS model if the results are insufficient. One of the most important benefits of the MBS process chain is that the results can be used to optimize the components of the 3D CAD master model. Because the process chain enables the optimization of component design, the resulting loops can be used to achieve a high level of design and MBS model optimization in an iterative process.
References
Computational physics
Dynamical systems | Multibody simulation | Physics,Mathematics | 592 |
9,915,335 | https://en.wikipedia.org/wiki/HIV%20Vaccine%20Trials%20Network | The HIV Vaccine Trials Network (HVTN) is a non-profit organization which connects physicians and scientists with activists and community educators for the purpose of conducting clinical trials seeking a safe and effective HIV vaccine. Collaboratively, researchers and laypeople review potential vaccines for safety, immune response, and efficacy. The HVTN is a network for testing vaccines, and while its members may also work in vaccine development for other entities, the mission of the HVTN does not include vaccine design.
The HVTN is the only HIV vaccine research network sponsored by the American government. It also manages the only large-scale HIV vaccine research trial network in Africa. The HVTN collaborates with the Division of Acquired Immunodeficiency Syndrome (DAIDS). Funding comes from the National Institute of Allergy and Infectious Diseases and National Institutes of Health, which oversee DAIDS. HVTN is headquartered at the Fred Hutchinson Cancer Research Center in Seattle. The vaccines being tested come from various producers, both commercial and non-profit.
Community involvement
Typically, researchers conduct clinical research on human subjects by asking volunteers to give informed consent to participate in an experiment by taking drugs that have not always been proven safe or effective in humans, though their safety has been tested (usually in animals) prior to any human trials. At the HVTN, many current vaccine studies are using products with a safety record that has been established in previous human trials.
The Nuremberg Code, the Declaration of Helsinki, and the Belmont Report are legal documents written in layman's terms which local governments use to model their laws for establishing rules for conducting clinical trials, and all contemporary clinical trials of international worth follow all the rules set by these precedents.
However, HIV vaccine research requires more than just these protections, and because of this, from the inception of their research the HVTN has instituted a "community advisory board" (CAB) system in addition to the usual controls. The CAB is similar to an Institutional Review Board (IRB) in that the researchers facilitate the granting of public data to both entities, but the difference is that the IRB consists of a professional ethics committee and the CAB consists of any community member who wants to supervise the safety, ethics, efficacy, or any other aspect of the research.
The researchers of the HVTN deemed the creation of the CAB necessary for HIV vaccine research when it has not been necessary for other clinical research because the HIV epidemic is especially urgent, new research techniques are available now that did not exist before recent major advances in genetic engineering, the public is generally overly willing to volunteer to receive experimental vaccines for this cause, and yet the educational infrastructure already in place to disseminate information about the inherent risk in participating in vaccine research is lacking in society. For too many reasons, there is no precedent for research of this sort on this scale, and without integrating educational programs about this research into existing community institutions, the HVTN simply could not educate people to the required level to make such a fast-moving, expensive, inherently non-commercial research project possible.
African HIV vaccine trials
In 2003 the HVTN partnered with Harvard University in establishing a small-scale vaccine trials unit in Botswana. A major reason for this project was gathering data about the HIV prevalence in Africa and assessing the feasibility of getting grassroots support for vaccine trials.
In 2007 the HVTN started the first large-scale HIV vaccine trials in Africa, called HVTN 503/Phambili, with financial assistance from the SA Aids Vaccine Initiative. Phambili was halted in 2007 due to its similarity to the ineffective vaccine used in the American STEP study.
In 2011, the HVTN collaborated with South African researcher Glenda Gray on a trial called HVTN 097 which is the only study outside Thailand to test the pox-protein vaccine regimen that had been found to be partially efficacious in the RV144 trial.
Since 2016, the HVTN is collaborating with African researchers and communities on multiple HIV vaccine efficacy trials in Africa, including HVTN 702 which tests a pox-protein vaccine regimen, HVTN 703 (AMP) which tests passive immunisation, and HVTN 705 (Imbokodo) which tests a global antigen vaccine. On August 31, 2021, Johnson & Johnson announced the results of the primary analysis of the Imbokodo study showing that the vaccine did not provide sufficient protection against HIV in the cohort of young women enrolled in the trial. Based on these results, the study was discontinued.
American HIV vaccine trials
HVTN 502/ STEP was the name for a double-blind randomized controlled trial conducted in the US and managed by the HVTN. It was thoroughly reviewed when more participants in the experimental group contracted HIV than participants in the control group. The vaccine contained no HIV and no one could have contracted HIV from the vaccine, but there was intense discussion as to whether the vaccine could have increased anyone's risk of contracting HIV. Because the study was stopped early, it probably will never be possible to determine why more participants in the experimental group contracted HIV, but various theories have been proposed.
In October 2009, the HVTN began a clinical trial in the USA called HVTN 505. HVTN 505 tested whether two vaccines, a DNA plasmid vaccine plus a recombinant adenovirus type 5 vector vaccine (DNA/rAd5), could prevent HIV. The vaccines were developed for HIV subtypes A, B and C by the Vaccine Research Center at the National Institutes of Health (NIH). In April 2013, the data and safety monitoring board recommended stopping vaccinations because there was no evidence that the vaccines could prevent people from getting HIV. The vaccines could also not treat HIV. The vaccines were, however, found to be safe and well tolerated.
In 2019, Johnson & Johnson announced that its subsidiary, Janssen Vaccines, would launch a Phase 3 clinical trial of a mosaic-based HIV vaccine candidate under the trial name HVTN 706/HPX3002/MOSAICO with a target enrollment of 3800 participants for 55 clinical sites in Argentina, Brazil, Italy, Mexico, Peru, Poland, Spain, and the United States. The focus of MOSAICO is high-risk men who have sex with men and transgender people, with results expected in 2023.
See also
World AIDS Vaccine Day
Julie McElrath
References
External links
HIV Vaccine Trials Network
Hope Takes Action
Fred Hutchinson Cancer Research Center
Division of Acquired Immunodeficiency Syndrome, NIH
HIV/AIDS organizations in the United States
HIV/AIDS in South Africa
HIV vaccine research
Vaccination-related organizations
Research institutes in Seattle | HIV Vaccine Trials Network | Chemistry,Biology | 1,391 |
45,398,316 | https://en.wikipedia.org/wiki/Energy%20Technology%20%28journal%29 | Energy Technology is a monthly peer-reviewed scientific journal covering applied energy research. It was established in 2013 and is published by Wiley-VCH.
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2022 impact factor of 3.8, ranking it 75th out of 121 journals in the category "Energy & Fuels".
References
External links
Academic journals established in 2013
Energy and fuel journals
Wiley-VCH academic journals
English-language journals
Monthly journals | Energy Technology (journal) | Environmental_science | 105 |
36,172,886 | https://en.wikipedia.org/wiki/Uranium%20disulfide | Uranium disulfide is an inorganic chemical compound of uranium in oxidation state +4 and sulfur in oxidation state -2. It is radioactive and appears in the form of black crystals.
Uranium disulfide has two allotropic forms: α-uranium disulfide, which is stable above the transition temperature (about 1350 °C) and metastable below it, and β-uranium disulfide which is stable below this temperature. The tetragonal crystal structure of α-US2 is identical to α-USe2.
Uranium disulfide can be synthesized by reduction of gaseous hydrogen sulfide with uranium metal powder at elevated temperatures.
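One plausible balanced equation for this route, assuming hydrogen gas is the by-product (the text above does not state the products beyond the disulfide), is:

$$ \mathrm{U} + 2\,\mathrm{H_2S} \;\xrightarrow{\;\Delta\;}\; \mathrm{US_2} + 2\,\mathrm{H_2} $$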
References
Uranium(IV) compounds
Sulfides
Dichalcogenides | Uranium disulfide | Chemistry | 147 |
5,452,760 | https://en.wikipedia.org/wiki/Protein%20precursor | A protein precursor, also called a pro-protein or pro-peptide, is an inactive protein (or peptide) that can be turned into an active form by post-translational modification, such as breaking off a piece of the molecule or adding on another molecule. The name of the precursor for a protein is often prefixed by pro-. Examples include proinsulin and proopiomelanocortin, which are both prohormones.
Protein precursors are often used by an organism when the subsequent protein is potentially harmful, but needs to be available on short notice and/or in large quantities. Enzyme precursors are called zymogens or proenzymes. Examples are enzymes of the digestive tract in humans.
Some protein precursors are secreted from the cell. Many of these are synthesized with an N-terminal signal peptide that targets them for secretion. Like other proteins that contain a signal peptide, their name is prefixed by pre. They are thus called pre-pro-proteins or pre-pro-peptides. The signal peptide is cleaved off in the endoplasmic reticulum. An example is preproinsulin.
Pro-sequences are areas in the protein that are essential for its correct folding, usually in the transition of a protein from an inactive to an active state. Pro-sequences may also be involved in pro-protein transport and secretion.
The pro-domain (or prodomain) is the domain of a proprotein that corresponds to the pro-sequence and is removed when the protein is converted to its active form.
References
External links | Protein precursor | Chemistry | 309 |
1,249,261 | https://en.wikipedia.org/wiki/Bulk%20micromachining | Bulk micromachining is a process used to produce micromachinery or microelectromechanical systems (MEMS).
Unlike surface micromachining, which uses a succession of thin film deposition and selective etching, bulk micromachining defines structures by selectively etching inside a substrate. Whereas surface micromachining creates structures on top of a substrate, bulk micromachining produces structures inside a substrate.
Usually, silicon wafers are used as substrates for bulk micromachining, as they can be anisotropically wet etched, forming highly regular structures. Wet etching typically uses alkaline solutions, such as potassium hydroxide (KOH) or tetramethylammonium hydroxide (TMAH), to dissolve silicon which has been left exposed by the photolithography masking step. These alkaline etchants dissolve the silicon in a highly anisotropic way, with some crystallographic orientations dissolving up to 1000 times faster than others. Such an approach is often used with very specific crystallographic orientations in the raw silicon to produce V-shaped grooves. The surface of these grooves can be atomically smooth if the etch is carried out correctly, and the dimensions and angles can be precisely defined. Pressure sensors are usually created using this bulk micromachining technique.
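The groove geometry mentioned above follows directly from the crystallography: on a (100) wafer the slow-etching {111} sidewalls meet the surface at arccos(1/√3) ≈ 54.74°, so a self-terminating V-groove has a depth fixed by the mask opening. The sketch below is a back-of-the-envelope calculation; the mask opening width is an illustrative value.

```python
# Back-of-the-envelope geometry of an anisotropic KOH etch on (100) silicon.
# The sidewalls follow the slow-etching {111} planes, which meet the (100)
# surface at arccos(1/sqrt(3)) ~ 54.74 degrees; the mask opening is illustrative.
import math

theta_deg = math.degrees(math.acos(1.0 / math.sqrt(3.0)))  # sidewall angle
mask_opening_m = 200e-6                                     # mask opening width [m]

# A self-terminated V-groove forms where the two sidewalls meet; its depth is
# set purely by the opening width and the sidewall angle.
depth_m = (mask_opening_m / 2.0) * math.tan(math.radians(theta_deg))

print(f"sidewall angle: {theta_deg:.2f} degrees")
print(f"self-terminated V-groove depth: {depth_m * 1e6:.1f} micrometres")
```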
Bulk micromachining starts with a silicon wafer or other substrates which is selectively etched, using photolithography to transfer a pattern from a mask to the surface. Like surface micromachining, bulk micromachining can be performed with wet or dry etches, although the most common etch in silicon is the anisotropic wet etch. This etch takes advantage of the fact that silicon has a crystal structure, which means its atoms are all arranged periodically in lines and planes. Certain planes have weaker bonds and are more susceptible to etching. The etch results in pits that have angled walls, with the angle being a function of the crystal orientation of the substrate. This type of etching is inexpensive and is generally used in early, low-budget research.
See also
Deep reactive-ion etching
References
External links
Bulk micromachining from microfab Service GmbH
Nanotechnology
Microtechnology
Microelectronic and microelectromechanical systems | Bulk micromachining | Materials_science,Engineering | 481 |
60,786,392 | https://en.wikipedia.org/wiki/Nanotechnology%20in%20warfare | Nanotechnology in warfare is a branch of nano-science in which molecular systems are designed, produced and created to fit a nano-scale (1-100 nm). The application of such technology, specifically in the area of warfare and defence, has paved the way for future research in the context of weaponisation. Nanotechnology unites a variety of scientific fields including material science, chemistry, physics, biology and engineering.
Advancements in this area have led to the development of nano-weapons in several categories, including small robotic machines, hyper-reactive explosives, and electromagnetic super-materials. With this technological growth have emerged the associated risks and repercussions, as well as regulation to combat these effects. These impacts give rise to issues concerning global security, the safety of society, and the environment. Nanotechnology has the ability to dramatically escalate the destructive capacity of preexisting weaponry. Legislation may need to be constantly monitored to keep up with the dynamic growth and development of nano-science, given the potential benefits or dangers of its use. Anticipating such impacts through regulation would 'prevent irreversible damages' of implementing defence-related nanotechnology in warfare.
Origins
The use of nanotechnology in the area of warfare and defence has developed rapidly and expansively. Over the past two decades, numerous countries have funded military applications of this technology, including China, the United Kingdom, Russia, and most notably the United States. The US government has been considered a national leader of research and development in this area, though it is now rivalled by international competition as appreciation of nanotechnology's eminence increases. The growth of this field has therefore placed it at the front line of military interests.
U.S. National Nanotechnology Initiative
In 2000, the United States government established the National Nanotechnology Initiative to focus funding on the development of nano-science and its technology, with a heavy emphasis on utilizing the potential of nano-weapons. This initial US proposal has since grown to coordinate the application of nanotechnology in numerous defence programs, as well as across all military branches, including the Air Force, Army and Navy. From the financial year 2001 through to 2014, the US government contributed around $19.4 billion to nano-science, including the development and manufacture of nano-weapons for military defence. The 21st Century Nanotechnology Research and Development Act (2003) envisions the United States continuing its leadership in the field of nanotechnology through national collaboration, productivity and competitiveness, to maintain this dominance.
Developments
Successful transitions of nanotechnology into defence products:
Lifetime of material coatings increased from hours to years, with further development continuing (see below).
Nano-structured silicate manipulation reducing insulation weight by 980 lbs.
High Power Microwave (HPM) devices with reduced weight, size and power consumption.
The United States government kept the military-oriented development of nanotechnology at the forefront of its national budget and policy throughout the Clinton and Bush administrations, with the Department of Defense planning to continue this priority throughout the 21st century. In response to America's assertive public funding of defence-purposed nanotechnology, numerous global actors have since created similar programmes.
China
In the sub-category of nano-materials, China secures second place behind the United States in the number of research publications released. Conjecture stands over the purpose of China's quick development to rival the U.S., with 1/5 of their government budget spent on research (US$337 million). In 2018, Tsinghua University, Beijing, released findings in which carbon nanotubes were enhanced to withstand a weight of over 800 tonnes on just 1 cm² of material. The nanotechnology team hinted at aerospace and armour-boosting applications, showing promise for defence-related nano-weapons. The Chinese Academy of Sciences' Vice President Chunli Bai has stated the need to focus on closing the gap between "basic research and application" in order for China to advance its global competitiveness in nanotechnology.
Between 2001 and 2004, approximately 60 countries globally implemented national nanotechnology programmes. According to R. D. Shelton, an international technology assessor, research and development in this area "has now become a socio-economic target...an area of intense international collaboration and competition." As of 2017, data showed 4,725 patents published in the USPTO by the USA alone, maintaining its position as a leader in nanotechnology for over 20 years.
Current research
Recent research into military nanotechnological weapons includes the production of defensive military apparatus, with the objective of enhancing existing designs with lightweight, flexible and durable materials. These designs are also equipped with features to enhance offensive strategy, such as sensing devices and the manipulation of electromechanical properties.
Soldier battlesuit
The Institute for Soldier Nanotechnologies (ISN), deriving from a partnership between the United States Army and MIT, provided an opportunity to focus funding and research activities purely on developing armour to increase soldier survival. Each of seven teams produces innovative enhancements for different aspects of a future U.S. soldier bodysuit. These additional characteristics include energy-absorbing material protecting from blasts or ammunition shocks, engineered sensors to detect chemicals and toxins, as well as built-in nano-devices to identify personal medical issues such as haemorrhages and fractures. This suit would be made possible with advanced nano-materials such as carbon nanotubes woven into fibres, allowing strengthened structural capacity and flexibility; however, fabrication remains an issue due to the inability to use automated manufacturing.
Adaptive concealment and stealth
With the use of nanoparticles, it is evidently possible to produce "invisible" suits for soldiers that would act as the ultimate camouflage. This possibility is rooted in the fact that objects are only visible because of how light reflects off them; if a particle is smaller than the wavelength of the light, it is not visible. Visible light wavelengths fall within the range of approximately 400-700 nanometers. In order to achieve this invisible cloak, the particles that make up the suit would need to vary in size. Another approach to concealment involves cloaks that act like a chameleon: instead of being completely invisible, the nanoparticle-coated cloak adapts to the surrounding colors in order to blend in.
Enhanced materials
The creation of sol-gel ceramic coatings has protected metals from wear, fractures and moisture, allowing adjustability to numerous shapes and sizes, as well as aiding "materials that cannot withstand high temperature". Current research focuses on resolving durability issues, where stress cracks between the coating and material set limitations on use and longevity. The drive for this research is finding more efficient and cost-effective applications of nanotechnology for the Air Force and Navy. Integration of fibre-reinforced nano-materials in structural features, such as missile casings, can limit overheating and increase the reliability, strength and ductility of the materials used.
Communication devices
Nanotechnology designed for advanced communication is expected to equip soldiers and vehicles with micro antenna arrays, tags for remote identification, acoustic arrays, micro GPS receivers and wireless communication. Nanotech facilitates defence-related communications through lower energy consumption, lighter weight, greater power efficiency, and smaller, cheaper manufacturing. Specific military uses of this technology include aerospace applications such as solid oxide fuel cells providing three times the energy, surveillance cameras on microchips, performance monitors, and cameras as light as 18 g.
Mini-nukes
The United States, along with countries such as Russia and Germany, is using the convenience of small nanotechnologies by applying them to nuclear "mini-nuke" explosive devices. Such a weapon would weigh 5 lbs, with a force equivalent to 100 tonnes of TNT, giving it the potential to annihilate and threaten humanity. The structural principle would remain the same as in larger nuclear bombs, but the device would be manufactured with nano-materials to allow production at a smaller scale.
Engineers and scientists alike realise that some of these proposed developments may not be feasible within the next two decades, as more research is needed to make models quicker and more efficient. Molecular nanotechnology in particular requires a further understanding of manipulation and reaction in order to adapt it to a military arena.
Implications
Nanotechnology and its use in warfare promise economic growth but come with an increased threat to international security and peacekeeping. The rapid emergence of new nanotechnologies has sparked discussion surrounding the impacts such developments will have on geo-politics, ethics, and the environment.
Geo-political
Difficulty in the categorisation of nano-weapons and their intended purposes (defensive or offensive) compromises the balance of stability and trust in the global environment. "A lack of transparency about an emerging technology not only negatively effects public perception but also negatively impacts the perceived balance of powers in the existing security environment." The peace and cohesion of the international structure may be negatively affected by a continuing military-focused development of nanotechnology in warfare. Ambiguity and a lack of transparency in research increase the difficulty of regulation in this area. Similarly, arguments put forward from a scientific standpoint highlight how little is known about the implications of creating such powerful technology, particularly with regard to the behaviour of the nano-particles themselves. "Although great scientific and technological progress has been made, many questions about the behaviour of matter at the nanoscale level remain, and considerable scientific knowledge has yet to be learned."
Environmental
The introduction of nanotechnology into everyday life offers potential benefits, yet carries the possibility of unknown consequences for the environment and safety. Possible positive developments include the creation of nano-devices to decrease residual radioactivity in affected areas, as well as sensors to detect pollutants and adjust fuel-air mixtures. Associated risks may involve military personnel inhaling nanoparticles added to fuel, possible absorption of nanoparticles from sensors into the skin, water, air or soil, dispersion of particles from blasts through the environment (via wind), and disposal of nano-tech batteries potentially affecting ecosystems. In materials or explosive devices, a greater volume of nano-powders can be packed into a smaller weapon, resulting in a stronger and possibly lethal toxic effect.
Social and ethical
The full extent of the social and ethical consequences that may arise is unknown. Estimates can be made of the associated impacts, as they may mirror the progression of similar technological developments and affect all areas. The main ethical uncertainties concern the degree to which modern nanotechnology will threaten privacy, global equity and fairness, while giving rise to patent and property right disputes. An overarching social and humanitarian issue stems from the intended purpose of these developments: the 'power to kill or capture' debate highlights the unethical purpose and destructive function these nanotechnological weapons supply to the user.
Controversy surrounding the innovation and application of nanotechnology in warfare highlights the dangers of not pre-determining risks, or accounting for possible impacts, of such technology. "The threat of nuclear weapons led to the cold war. The same trend is foreseen with nanotechnology, which may lead to the so-called nanowars, a new age of destruction", stated the U.S. Department of Defense. Similarly, a report released by Oxford University warns of the possible extinction of the human race, estimating a 5% risk of this occurring due to the development of 'molecular nanotech weapons'.
Regulation
International regulation addressing concerns about nanotechnology and its military application is non-existent. There is currently no framework to enforce or support international cooperation to limit production or to monitor research and development of nanotechnology for defensive use. "Even if a transnational regulatory framework is established, it is impossible to determine if a nation is non-compliant if one is unable to determine the entire scope of research, development, or manufacturing."
The rapid development of products and new materials in the scientific sphere makes it difficult to produce legislation that keeps pace, hindering the construction of workable and relevant regulation. Productive regulation should assure public health and safety and account for environmental and international concerns, yet not restrict innovation of emerging ideas and applications for nanotechnology.
Proposed regulation
Approaches to developing legislation could include moving towards classifying and withholding information pertaining to the military use of nanotechnology. A paper in the Harvard Journal of Law and Technology discusses laws that would revolve around specific export controls and discourage civilian or private research into nano-materials. This proposal suggests mimicking the U.S. Atomic Energy Act of 1954, restricting any distribution of information regarding the properties and features of the nanotechnology at creation.
The Nanomaterial Registry
A United States national registry for nanotechnology has created a public resource where reports of curated data on the physico-chemical characteristics and interactions of nanomaterials are available. With further development and more frequent voluntary additions, the registry could initiate global regulation and cooperation regarding nanotechnology in warfare.
The registry was developed to assist in the standardisation, formatting, and sharing of data. With more compliance and cooperation this data sharing model may "simplify the community level of effort in assessing nanomaterial data from environmental and biological interaction studies." Analysis of such a registry would be carried out by professional nano-scientists, creating a filtering mechanism for any newly developed or potentially dangerous materials.
However, the idea of a dedicated nanomaterial registry is not original, as several databases have been developed previously, including caNanoLab and InterNano, which are both engaging and accessible to the public, informatively curated by experts, and detail tools of nano-manufacturing. The National Nanomaterial Registry is a more updated version in which information is collated from a range of these sources and multiple additional data resources. It offers a greater range of content, including comparison tools with other materials, encouragement of standard methods, and compliance rating features.
References | Nanotechnology in warfare | Materials_science,Engineering | 2,842 |
21,403,146 | https://en.wikipedia.org/wiki/Hydroxymetronidazole | Hydroxymetronidazole is the main metabolite of metronidazole. Both have antibiotic and antiprotozoal activity.
References
Nitroimidazoles
Human drug metabolites | Hydroxymetronidazole | Chemistry | 46 |
66,265,679 | https://en.wikipedia.org/wiki/Vesatolimod | Vesatolimod (GS-9620) is an antiviral drug developed by Gilead Sciences, which acts as a potent and selective agonist of Toll-like receptor 7 (TLR7), a receptor involved in the regulation of the immune system. It is used to stimulate the immune system, which can increase its ability to combat chronic viral infections. Vesatolimod is in clinical trials to determine whether it is safe and effective in patients with Hepatitis B and HIV/AIDS, and has also shown activity against other viral diseases such as norovirus and enterovirus 71.
See also
Imiquimod
Motolimod
References
Antiviral drugs | Vesatolimod | Chemistry,Biology | 140 |
72,207,575 | https://en.wikipedia.org/wiki/Metric%20fixation | Metric fixation refers to a tendency for decision-makers to place excessively large emphases on selected metrics.
In management (and many other social science fields), decision makers typically use metrics to measure how well a person or an organization attains desired goals. For example, a company might use "the number of new customers gained" as a metric to evaluate the success of a marketing campaign. Metric fixation is said to arise when decision makers focus excessively on the metrics, often to the point of treating "attaining desired values on the metrics" as a core goal (instead of simply an indicator of success). For example, a school may want to improve the number of students who pass a certain test (metric = "number of students who pass"). This is based on the assumption that the test truly evaluates students' ability to succeed in the real world (assuming there already is a good definition of what "success" means). If the test fails to evaluate the students' ability to function in the working world, focusing solely on increasing their scores might cause the school to ignore other learning goals that are also crucial for real-world functioning. As a result, the students' development might be impaired.
Although related to several similar, older concepts, the term "metric fixation" was first mentioned in the 2018 book The Tyranny of Metrics, and has since drawn the attention of some management researchers and data scientists.
See also
References
Management
Measurement | Metric fixation | Physics,Mathematics | 316 |
48,336,080 | https://en.wikipedia.org/wiki/Emily%20A.%20Carter | Emily A. Carter is the Gerhard R. Andlinger Professor in Energy and the Environment and a professor of Mechanical and Aerospace Engineering (MAE), the Andlinger Center for Energy and the Environment (ACEE), and Applied and Computational Mathematics at Princeton University. She is also a member of the executive management team at the Princeton Plasma Physics Laboratory (PPPL), serving as Senior Strategic Advisor and Associate Laboratory Director for Applied Materials and Sustainability Sciences.
The author of over 475 publications and patents, Carter has delivered over 600 invited and plenary lectures worldwide and has served on advisory boards spanning a wide range of disciplines. Among other honors, Carter is an elected foreign member of The Royal Society (2024), and fellow of the Royal Society of Chemistry (2022), the National Academy of Inventors (2014), the American Academy of Arts and Sciences (2008), the Institute of Physics (2004), American Association for the Advancement of Science (2000), the American Vacuum Society (1995), the American Physical Society (1994), and the American Chemical Society. She is also an elected member of the European Academy of Sciences (2020), the National Academy of Engineering (2016), International Academy of Quantum Molecular Science (2009), the National Academy of Sciences (2008).
Biography
Emily Carter received a Bachelor of Science in chemistry from the University of California, Berkeley, in 1982 (graduating Phi Beta Kappa). She earned her PhD in physical chemistry in 1987 from the California Institute of Technology, where she worked with William Andrew Goddard III, studying homogeneous and heterogeneous catalysis. During her postdoc at the University of Colorado, Boulder, she worked with James T. Hynes carrying out studies on the dynamics of (photo-induced) electron transfer in solution. She also worked with James Hynes, Giovanni Ciccotti, and Ray Kapral to develop the widely used Blue Moon ensemble, a rare-event sampling method for condensed matter simulations.
From 1988 to 2004, she held professorships in chemistry and materials science and engineering at the University of California, Los Angeles. During those years, she was the Dr. Lee's visiting research fellow in the Sciences at Christ Church, Oxford (1996), a visiting scholar in the department of physics at Harvard University (1999), and a visiting associate in aeronautics at the California Institute of Technology (2001). She moved to Princeton University in 2004. In 2006, she was named Arthur W. Marks ’19 Professor. From 2009 to 2014, she was co-director of the Department of Energy Frontier Research Center on Combustion Science. She was the founding director of the Andlinger Center for Energy and the Environment from 2010 to 2016 and was named Gerhard R. Andlinger Professor in Energy and the Environment in 2011. Following a national search, she served from 2016 to 2019 as dean of the Princeton University School of Engineering and Applied Science. She was also a professor in the department of mechanical and aerospace engineering and the Program in Applied and Computational Mathematics at Princeton University, and an associated faculty member in the Andlinger Center for Energy and the Environment, the department of chemistry, the department of chemical and biological engineering, the Princeton Institute for Computational Science and Engineering (PICSciE), the High Meadows Environmental Institute (HMEI), and the Princeton Institute for the Science and Technology of Materials (PRISM). She served as UCLA's Executive Vice Chancellor and Provost (EVCP) from 2019 to 2021 and was Distinguished Professor of Chemical and Biomolecular Engineering there. She is currently a member of the executive management team at the Princeton Plasma Physics Laboratory (PPPL), serving as Senior Strategic Advisor and Associate Laboratory Director for Applied Materials and Sustainability Sciences.
Research
Carter has made significant contributions to theoretical and computational chemistry and physics, including the development of ab initio quantum chemistry methods, methods for accurate description of molecules at the quantum level, and an algorithm for identifying transition states in chemical reactions. She pioneered the combination of ab initio quantum chemistry with kinetic Monte Carlo simulations (KMC), molecular dynamics (MD), and quasi-continuum solid mechanics simulations relevant to the study of surfaces and interfaces of materials. She has extensively investigated the chemical and mechanical causes and mechanisms of failure in materials such as silicon, germanium, iron and steel, and proposed methods for protecting materials from failure.
She has developed fast methods for orbital-free density functional theory (OF-DFT) that can be applied to large numbers of atoms as well as embedded correlated wavefunction theory for the study of local condensed matter electronic structure. This work has relevance to the understanding of photoelectrocatalysis. Her current research focuses on the understanding and design of materials for sustainable energy. Applications include conversion of sunlight to electricity, clean and efficient use of biofuels and solid oxide fuel cells, and development of materials for use in fuel-efficient vehicles and fusion reactors.
Carter's research is supported by multiple grants from the U.S. Department of Defense and the Department of Energy. She was elected as a member into the National Academy of Engineering (2016) for the development of quantum chemistry computational methods for the design of molecules and materials for sustainable energy.
Selected publications
E. A. Carter, S. Atsumi, M. Byron, J. Chen, S. Comello, M. Fan, B, Freeman, M. Fry, S. Jordaan, H. Mahgerefteh, A.-H. Park, J. Powell, A. R. Ramirez, V. Sick, S. Stewart, J. Trembly, J. Yang, J. Yuan, C. Wise, and E. Zeitler, “Carbon Utilization Infrastructure, Markets, and Research and Development: A Final Report,” National Academies of Sciences, Engineering, and Medicine (NASEM). Washington DC: The National Academies Press. ISBN: 978-0-309-71775-5 (2024).
E. A. Carter, “Our Role in Solving Global Challenges: An Opinion,” J. Am. Chem. Soc., 146, 21193-21195 (2024)
X. Wen, J.-N. Boyn, J. M. P. Martirez, Q. Zhao, and E. A. Carter, “Strategies to obtain reliable energy landscapes from embedded multireference correlated wavefunction methods for surface reactions,” Journal of Chemical Theory and Computation, 20, 6037-6048 (2024).
B. Bobell, J.-N. Boyn, J. M. P. Martirez, and E. A. Carter, "Modeling Bicarbonate Formation in an Alkaline Solution with Multi-Level Quantum Mechanics/Molecular Dynamics Simulations," Molecular Physics Special Issue in Honour of Giovanni Ciccotti, e2375370 (2024).
X. Wen, J. M. P. Martirez, and E. A. Carter, "Plasmon-driven ammonia decomposition on Pd(111): Hole transfer’s role in changing rate-limiting steps," ACS Catalysis, 14, 9539 (2024).
Z. Wei, J. M. P. Martirez, and E. A. Carter, “First-Principles Insights into the Thermodynamics of Variable-Temperature Ammonia Synthesis on Transition-Metal-Doped Cu (100) and (111),” ACS Energy Lett., 9, 3012 (2024).
A. G. Rajan, J. M. P. Martirez, and E. A. Carter, “Strongly Facet-Dependent Activity of Iron-Doped β-Nickel Oxyhydroxide for the Oxygen Evolution Reaction,” Phys. Chem. Chem. Phys. 25th Anniversary Special Issue, 26, 14721 (2024).
J.-N. Boyn and E. A. Carter, “Probing pH-Dependent Dehydration Dynamics of Mg and Ca Cations in Aqueous Solutions with Multi-Level Quantum Mechanics/Molecular Dynamics Simulations,” J. Am. Chem. Soc., 145, 20462 (2023).
R. B. Wexler, G. S. Gautam, R. Bell, S. Shulda, N. A. Strange, J. A. Trindell, J. D. Sugar, E. Nygren, S. Sainio, A. H. McDaniel, D. Ginley, E. A. Carter, and E. B. Stechel, “Multiple and nonlocal cation redox in Ca–Ce–Ti–Mn oxide perovskites for solar thermochemical applications,” Energy Environ. Sci., 16, 2550 (2023).
J. Cai, Q. Zhao, W.-Y. Hsu, C. Choi, J. M. P. Martirez, C. Chen, J. Huang, E. A. Carter, and Y. Huang, “Highly Selective Electrochemical Reduction of CO2 into Methane on Nanotwinned Cu,” J. Am. Chem. Soc., 145, 9136 (2023).
Y. Yuan, L. Zhou, J. L. Bao, J. Zhou, A. Bayles, L. Yuan, M. Lou, M. Lou, S. Khatiwada, H. Robatjazi, E. A. Carter, P. Nordlander, and N. J. Halas, “Earth-abundant photocatalyst for H2 generation from NH3 with light-emitting diode illumination,” Science, 378, 889 (2022).
J. M. P. Martirez and E. A. Carter, “First-Principles Insights into the Thermocatalytic Cracking of Ammonia-Hydrogen Blends on Fe(110). 1. Thermodynamics,” J. Phys. Chem. C, 126, 19733 (2022). (Virtual Special Issue: Honoring Michael R. Berman)
E. A. Carter, “Autobiography of Emily A. Carter,” J. Phys. Chem. A, 125, 1671 (2021); J. Phys. Chem. C, 125, 4333 (2021).
Recent awards and honors
2019 – John Scott Award, Board of City Trusts, Philadelphia, PA
2019 – Camille & Henry Dreyfus Lectureship, University of Basel, Switzerland
2019 – Inaugural WiSE Presidential Distinguished Lecturer, University of Southern California
2019 – 18th NCCR MARVEL Distinguished Lecturer, L’École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
2019 – Graduate Mentoring Award, McGraw Center for Teaching and Learning, Princeton University
2019 – Distinguished Alumni Award, California Institute of Technology
2019 – Eyring Lecturer in Molecular Sciences, Arizona State University
2019 – Mildred Dresselhaus Memorial Lecturer, Ras Al Khaimah Centre for Advanced Materials, United Arab Emirates
2019 – Dow Foundation Distinguished Lecturer, University of California, Santa Barbara
2020 – Brumley D. Pritchett Lecturer, Georgia Institute of Technology, School of Materials Science and Engineering
2020 – Member, European Academy of Sciences
2020 – UCLA Chemistry & Biochemistry Distinguished Lecturer, University of California, Los Angeles
2021 – Materials Theory Award, Materials Research Society
2022 – Fellow, Royal Society of Chemistry
2022 – Paint Branch Distinguished Lecturer in Applied Physics, University of Maryland, Institute for Research in Electronics and Applied Physics
2022 – Richard S. H. Mah Lecturer, Northwestern University, Department of Chemical and Biological Engineering
2022 – Harrison Shull Distinguished Lecturer, Indiana University Bloomington, Department of Chemistry
2023 – Gilbert Newton Lewis Memorial Lecturer, University of California, Berkeley
2023 – Robert S. Mulliken Award, University of Chicago
2023 – 27th John Stauffer Lecturer in Chemistry, Stanford University
2024 – William H. Nichols Medal, American Chemical Society (New York Section)
2024 – Foreign Member of the Royal Society
2024 – Marsha I. Lester Award for Exemplary Impact in Physical Chemistry, American Chemical Society, Physical Chemistry Division
2024 – Annabelle Lee Lecturer, VA Tech, Blacksburg
News stories related to Carter
"2024 Marsha I. Lester Award for Exemplary Impact in Physical Chemistry" — American Chemical Society, Physical Chemistry Section (August 2024)
"Emily Carter elected to the Royal Society" — The Royal Society (May 16, 2024)
"PPPL Envisions a Future of Fusion Energy Solutions and Plasma Science Progress" — US 1 News (May 15, 2024)
"Andlinger Center meeting spotlights next-decade technologies and design approaches for the clean energy transition" — Princeton University Engineering (Dec 06, 2023)
"Dr. Emily Carter: International Leader in Sustainability Science at Princeton University" — Girl Power Gurus Podcast (Nov 25, 2023)
"Ammonia fuel offers great benefits but demands careful action" — Princeton University Engineering (Nov 7, 2023); Andlinger Center for Energy and the Environment (Nov 7, 2023)
"Is Energy Efficiency our Panacea for Power?" — Forbes (Oct 29, 2023)
C-Change Conversation Interview with Kathleen Biggins — C-Change Conversation (May 5, 2023)
References
1960 births
Living people
21st-century American chemists
California Institute of Technology alumni
Computational chemists
Princeton University faculty
Theoretical chemists
UC Berkeley College of Chemistry alumni
American women chemists
American women academics
21st-century American women scientists
Fellows of the American Physical Society
Foreign members of the Royal Society | Emily A. Carter | Chemistry | 2,827 |
10,822,838 | https://en.wikipedia.org/wiki/Rail%20profile | The rail profile is the cross sectional shape of a railway rail, perpendicular to its length.
Early rails were made of wood, cast iron or wrought iron. All modern rails are hot rolled steel with a cross section (profile) approximating an I-beam, but asymmetric about a horizontal axis (however, see grooved rail below). The head is profiled to resist wear and to give a good ride, and the foot profiled to suit the fixing system.
Unlike some other uses of iron and steel, railway rails are subject to very high stresses and are made of very high quality steel. It took many decades to improve the quality of the materials, including the change from iron to steel. Minor flaws in the steel that may pose no problems in other applications can lead to broken rails and dangerous derailments when used on railway tracks.
By and large, the heavier the rails and the rest of the track work, the heavier and faster the trains these tracks can carry.
Rails represent a substantial fraction of the cost of a railway line. Only a small number of rail sizes are made by steelworks at one time, so a railway must choose the nearest suitable size. Worn, heavy rail from a mainline is often reclaimed and downgraded for re-use on a branch line, siding or yard.
History
The earliest rails used on horse-drawn wagonways were wooden. In the 1760s strap-iron rails were introduced, with thin strips of cast iron fixed onto the top of the wooden rails. This increased the durability of the rails. Both wooden and strap-iron rails were relatively inexpensive, but could only carry a limited weight. The metal strips of strap-iron rails sometimes separated from the wooden base and speared into the floor of the carriages above, creating what was referred to as a "snake head". The long-term maintenance expense involved outweighed the initial savings in construction costs.
Cast-iron rails with vertical flanges were introduced by Benjamin Outram of B. Outram & Co., which later became the Butterley Company in Ripley. The wagons that ran on these plateway rails had wheels with a flat profile. Outram's partner William Jessop preferred the use of "edge rails", where the wheels were flanged and the rail heads were flat - this configuration proved superior to plateways. Jessop's first (fish-bellied) edge rails were cast by the Butterley Company.
The earliest of these in general use were the so-called cast iron fishbelly rails, named after their shape. Rails made from cast iron were brittle and broke easily. They could only be made in short lengths which would soon become uneven. As rolling techniques improved, John Birkinshaw's 1820 patent introduced wrought iron rails in longer lengths, which replaced cast iron and contributed significantly to the explosive growth of railroads in the period 1825–40. The cross-section varied widely from one line to another, but rails were of three basic types. The parallel cross-section which developed in later years was referred to as bullhead.
Meanwhile, in May 1831, the first flanged T rail (also called T-section) arrived in America from Britain and was laid into the Pennsylvania Railroad by the Camden and Amboy Railroad. Flanged T rails were also used by Charles Vignoles in Britain.
The first steel rails were made in 1857 by Robert Forester Mushet, who laid them at Derby station in England. Steel is a much stronger material, which steadily replaced iron for use on railway rail and allowed much longer lengths of rails to be rolled.
The American Railway Engineering Association (AREA) and the American Society for Testing Materials (ASTM) specified carbon, manganese, silicon and phosphorus content for steel rails. Tensile strength increases with carbon content, while ductility decreases. AREA and ASTM specified 0.55 to 0.77 percent carbon in rail, 0.67 to 0.80 percent in rail weights from , and 0.69 to 0.82 percent for heavier rails. Manganese increases strength and resistance to abrasion. AREA and ASTM specified 0.6 to 0.9 percent manganese in 70 to 90 pound rail and 0.7 to 1 percent in heavier rails. Silicon is preferentially oxidised by oxygen and is added to reduce the formation of weakening metal oxides in the rail rolling and casting procedures. AREA and ASTM specified 0.1 to 0.23 percent silicon. Phosphorus and sulfur are impurities causing brittle rail with reduced impact-resistance. AREA and ASTM specified maximum phosphorus concentration of 0.04 percent.
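A minimal sketch of how the quoted AREA/ASTM composition limits could be checked programmatically. The weight-class boundaries for carbon are not fully reproduced above, so the ranges below are those quoted for the lighter rail classes (carbon 0.55–0.77 percent, manganese 0.6–0.9 percent for 70 to 90 pound rail, silicon 0.1–0.23 percent, phosphorus at most 0.04 percent); the function and its name are illustrative, not part of any standard.

```python
# Illustrative only: composition ranges (in weight percent) quoted above
# for the lighter AREA/ASTM rail classes.
LIMITS = {
    "carbon": (0.55, 0.77),
    "manganese": (0.60, 0.90),
    "silicon": (0.10, 0.23),
    "phosphorus": (0.00, 0.04),
}

def within_spec(composition: dict) -> bool:
    """Return True if every listed element falls inside its quoted range."""
    return all(lo <= composition.get(element, 0.0) <= hi
               for element, (lo, hi) in LIMITS.items())

# Example: a composition near the middle of each range passes the check.
print(within_spec({"carbon": 0.65, "manganese": 0.75,
                   "silicon": 0.15, "phosphorus": 0.03}))  # True
```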
The use of welded rather than jointed track began in around the 1940s and had become widespread by the 1960s.
Types
Strap rail
The earliest rails were simply lengths of timber. To resist wear, a thin iron strap was laid on top of the timber rail. This saved money as wood was cheaper than metal. The system had the flaw that every so often the passage of the wheels on the train would cause the strap to break away from the timber. The problem was first reported by Richard Trevithick in 1802. The use of strap rails in the United States (for instance on the Albany and Schenectady Railroad in 1837) led to passengers being threatened by "snake-heads" when the straps curled up and penetrated the carriages.
T rail
T-rail was a development of strap rail which had a 'T' cross-section formed by widening the top of the strap into a head. This form of rail was generally short-lived, being phased out in America by 1855.
Plate rail
Plate rail was an early type of rail and had an 'L' cross-section in which the flange kept an unflanged wheel on the track. The flanged rail saw a minor revival in the 1950s, as guide bars, with the Paris Métro (Rubber-tyred metro or French Métro sur pneus) and more recently as the Guided bus. In the Cambridgeshire Guided Busway the rail is a thick concrete beam with a lip to form the flange. The buses run on normal road wheels with side-mounted guidewheels to run against the flanges. Buses are steered normally when off the busway, analogous to the 18th-century wagons which could be manoeuvred around pitheads before joining the track for the longer haul.
Bridge rail
Bridge rail is a rail with an inverted-U profile. Its simple shape is easy to manufacture, and it was widely used before more sophisticated profiles became cheap enough to make in bulk. It was notably used on the Great Western Railway's gauge baulk road, designed by Isambard Kingdom Brunel.
Barlow rail
Barlow rail was invented by William Henry Barlow in 1849. It was designed to be laid straight onto the ballast, but the lack of sleepers (ties) meant that it was difficult to keep it in gauge.
Flat bottomed rail
Flat bottomed rail is the dominant rail profile in worldwide use.
Flanged T rail
Flanged T rail (also called T-section) is the name for flat bottomed rail used in North America.
Iron-strapped wooden rails were used on all American railways until 1831. Col. Robert L. Stevens, the President of the Camden and Amboy Railroad, conceived the idea that an all-iron rail would be better suited for building a railroad. There were no steel mills in America capable of rolling long lengths, so he sailed to the United Kingdom which was the only place where his flanged T rail (also called T-section) could be rolled. Railways in the UK had been using rolled rail of other cross-sections which the ironmasters had produced.
In May 1831, the first 500 rails, each long and weighing , reached Philadelphia and were placed in the track, marking the first use of the flanged T rail. Afterwards, the flanged T rail became employed by all railroads in the United States.
Col. Stevens also invented the hooked spike for attaching the rail to the crosstie (or sleeper). In 1860, the screw spike was introduced in France where it was widely used. Screw spikes are the most common form of spike in use worldwide in the 21st century.
Flat-bottom or Vignoles rail
Vignoles rail is the popular name for flat-bottomed rail, recognising engineer Charles Vignoles who introduced it to Britain.
Charles Vignoles observed that wear was occurring with wrought iron rails and cast iron chairs on stone blocks, the most common system at that time. In 1836 he recommended flat-bottomed rail to the London and Croydon Railway for which he was consulting engineer.
His original rail had a smaller cross-section than the Stevens rail, with a wider base than modern rail, fastened with screws through the base. Other lines which adopted it were the Hull and Selby, the Newcastle and North Shields, and the Manchester, Bolton and Bury Canal Navigation and Railway Company.
When it became possible to preserve wooden sleepers with mercuric chloride (a process called Kyanising) and creosote, they gave a much quieter ride than stone blocks and it was possible to fasten the rails directly using clips or rail spikes. Their use, and Vignoles's name, spread worldwide.
The joint where the ends of two rails are connected to each other is the weakest part of a rail line. The earliest iron rails were joined by a simple fishplate or bar of metal bolted through the web of the rail. Stronger methods of joining two rails together have been developed. When sufficient metal is put into the rail joint, the joint is almost as strong as the rest of the rail length. The noise generated by trains passing over the rail joints, described as "the clickity clack of the railroad track", can be eliminated by welding the rail sections together. Continuously welded rail has a uniform top profile even at the joints.
Double-headed rail
In the late 1830s, Britain's railways used a range of different rail patterns. The London and Birmingham Railway, which had offered a prize for the best design, was one of the earliest lines to use double-headed rail, in which the head and foot of the rail had the same profile. These rails were supported by chairs fastened to the sleepers.
The advantage of double-headed rails was that, when the rail head became worn, they could be turned over and re-used. In 1835 Peter Barlow of the London and Birmingham Railway expressed concern that this would not be successful because the supporting chair would cause indentations in the lower surface of the rail, making it unsuitable as the running surface. Although the Great Northern Railway did experience this problem, double-headed rails were successfully used and turned by the London and South Western Railway, the North Eastern Railway, the London, Brighton and South Coast Railway and the South Eastern Railway. Double-headed rails continued in widespread use in Britain until the First World War.
Bullhead rail
Bullhead rail was developed from double-headed rail. The profile of the head of the rail is not the same as that of the foot. Because the profile is not symmetrical, it was not possible to turn bullhead rail over and use the foot as the head. It was an expensive method of laying track, as heavy cast iron chairs were needed to support the rail, which was secured in the chairs by wooden (later steel) wedges or "keys", which required regular attention.
Bullhead rail was the standard for the British railway system from the mid-19th until the mid-20th century. In 1954, bullhead rail was used on of new track and flat-bottom rail on . One of the first British Standards, BS 9, was for bullhead rail - it was originally published in 1905, and revised in 1924. Rails manufactured to the 1905 standard were referred to as "O.B.S." (Original), and those manufactured to the 1924 standard as "R.B.S." (Revised).
Bullhead rail has been almost completely replaced by flat-bottom rail on the British rail system, although it survives on some branch lines and sidings. It can also be found on heritage railways, due both to the desire to maintain an historic appearance, and the use of old track components salvaged from main lines. The London Underground continued to use bullhead rail after it had been phased out elsewhere in Britain but, in the last few years, there has been a concerted effort to replace it with flat-bottom rail. However, the process of replacing track in tunnels is a slow one, due to the difficulty of using heavy plant and machinery.
Grooved rail
Where a rail is laid in a road surface (pavement) or within grassed surfaces, there has to be accommodation for the flange. This is provided by a slot called the flangeway. The rail is then known as grooved rail, groove rail, or girder rail. The flangeway has the railhead on one side and the guard on the other. The guard carries no weight, but may act as a checkrail.
Grooved rail was invented in 1852 by Alphonse Loubat, a French inventor who developed improvements in tram and rail equipment, and helped develop tram lines in New York City and Paris. The invention of grooved rail enabled tramways to be laid without causing a nuisance to other road users, except unsuspecting cyclists, who could get their wheels caught in the groove. The grooves may become filled with gravel and dirt (particularly if infrequently used or after a period of idleness) and need clearing from time to time, this being done by a "scrubber" vehicle (either a specialised tram, or a maintenance road-rail vehicle). Failure to clear the grooves can lead to a bumpy ride for the passengers, damage to either wheel or rail and possibly derailing.
Girder guard rail
The traditional form of grooved rail is the girder guard section illustrated to the left. This rail is a modified form of flanged rail and requires a special mounting for weight transfer and gauge stabilisation. If the weight is carried by the roadway subsurface, steel ties are needed at regular intervals to maintain the gauge. Installing these means that the whole surface needs to be excavated and reinstated.
Block rail
Block rail is a lower profile form of girder guard rail with the web eliminated. In profile it is more like a solid form of bridge rail, with a flangeway and guard added. Simply removing the web and combining the head section directly with the foot section would result in a weak rail, so additional thickness is required in the combined section.
A modern block rail with a further reduction in mass is the LR55 rail which is polyurethane grouted into a prefabricated concrete beam. It can be set in trench grooves cut into an existing asphalt road bed for Light Rail (trams).
Rail weights and sizes
The weight of a rail per length is an important factor in determining rail strength and hence axleloads and speeds.
Weights are measured in pounds per yard (in Canada, the United Kingdom and the United States) and in kilograms per metre (in mainland Europe and Australia).
In rail terminology, "pound" is commonly a metonym for "pounds per yard"; hence a 132-pound rail is a rail weighing 132 pounds per yard.
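Since rail weights in this section are quoted in both pounds per yard and kilograms per metre, a short conversion helper makes the figures easier to compare. The factor 1 lb/yd ≈ 0.496 kg/m follows directly from 1 lb = 0.45359237 kg and 1 yd = 0.9144 m; the helper functions are purely illustrative.

```python
LB_PER_YD_TO_KG_PER_M = 0.45359237 / 0.9144  # about 0.496

def lb_yd_to_kg_m(pounds_per_yard: float) -> float:
    """Convert a rail weight from pounds per yard to kilograms per metre."""
    return pounds_per_yard * LB_PER_YD_TO_KG_PER_M

def kg_m_to_lb_yd(kg_per_metre: float) -> float:
    """Convert a rail weight from kilograms per metre to pounds per yard."""
    return kg_per_metre / LB_PER_YD_TO_KG_PER_M

# A "132-pound rail" (132 lb/yd) is roughly 65.5 kg/m,
# and a 60 kg/m rail is roughly 121 lb/yd.
print(round(lb_yd_to_kg_m(132), 1))  # 65.5
print(round(kg_m_to_lb_yd(60), 1))   # 121.0
```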
Europe
Rails are made in a large number of different sizes. Some common European rail sizes include:
In the countries of the former USSR, rails and rails (not thermally hardened) are common. Thermally hardened rails also have been used on heavy-duty railroads like Baikal–Amur Mainline, but have proven themselves deficient in operation and were mainly rejected in favor of rails.
North America
The American Society of Civil Engineers (or ASCE) specified rail profiles in 1893 for increments from . Height of rail equaled width of foot for each ASCE tee-rail weight; and the profiles specified fixed proportion of weight in head, web and foot of 42%, 21% and 37%, respectively. ASCE profile was adequate; but heavier weights were less satisfactory. In 1909, the American Railway Association (or ARA) specified standard profiles for increments from . The American Railway Engineering Association (or AREA) specified standard profiles for , and rails in 1919, for and rails in 1920, and for rails in 1924. The trend was to increase rail height/foot-width ratio and strengthen the web. Disadvantages of the narrower foot were overcome through use of tie plates. AREA recommendations reduced the relative weight of rail head down to 36%, while alternative profiles reduced head weight to 33% in heavier weight rails. Attention was also focused on improved fillet radii to reduce stress concentration at the web junction with the head. AREA recommended the ARA profile. Old ASCE rails of lighter weight remained in use, and satisfied the limited demand for light rail for a few decades. AREA merged into the American Railway Engineering and Maintenance-of-Way Association in 1997.
By the mid-20th century, most rail production was medium heavy () and heavy (). Sizes under rail are usually for lighter duty freight, low use trackage, or light rail. Track using rail is for lower speed freight branch lines or rapid transit; for example, most of the New York City Subway system track is constructed with rail. Main line track is usually built with rail or heavier. Some common North American rail sizes include:
Crane rails
Some common North American crane rail sizes include:
Australia
Some common Australian rail sizes include:
rails are used on the heavy-haul iron ore railways in the north-west of the state of Western Australia.
50 kg/m and 60 kg/m are the current standard on mainlines elsewhere, although some other sizes are still manufactured.
Rail lengths
Advances in rail lengths produced by rolling mills include the following:
1825: Stockton and Darlington Railway
1830: Liverpool and Manchester Railway fish-belly rails
1850: United States, to fit gondola cars
1895: London and North Western Railway (UK) – four times and twice
2003: (Railtrack (UK) rail delivery train)
2010: Bhilai Steel Plant, India – four times and twice
2016: at Bhilai Steel Plant.
Welding of rails into longer lengths was first introduced around 1893. Welding can be done in a central depot or in the field.
1895 Hans Goldschmidt, Thermit welding
1935 Charles Cadwell, non-ferrous Thermit welding.
Conical or cylindrical wheels
It has long been recognised that conical wheels, and rails that are sloped by the same amount, follow curves better than cylindrical wheels and vertical rails. A few railways, such as Queensland Railways, had cylindrical wheels for a long time, until much heavier traffic required a change.
Cylindrical wheel treads have to "skid" on track curves, so they increase both drag and rail and wheel wear. On very straight track a cylindrical wheel tread rolls more freely and does not "hunt". The gauge is narrowed slightly and the flange fillets keep the flanges from rubbing the rails. United States practice is a 1 in 20 cone when new. As the tread wears it approaches an unevenly cylindrical tread, at which time the wheel is trued on a wheel lathe or replaced.
Manufacturers
ArcelorMittal Steelton, United States
ArcelorMittal Ostrava, Czech Republic
ArcelorMittal Gijón, Spain
ArcelorMittal Huta Katowice, Poland
ArcelorMittal Rodange, Luxembourg
British Steel, UK
Evraz, Pueblo, Colorado, United States
Evraz, Russia
JFE Steel, Japan
Kardemir, Turkey
Liberty Steel Group, Whyalla, Australia – formerly OneSteel and Arrium
Metinvest, Ukraine
Nippon Steel and Sumitomo Metal, Japan
Steel Authority of India, India
Steel Dynamics, United States
Voestalpine, Austria
Defunct manufacturers
Algoma Steel Company, Canada
Australian Iron & Steel, Australia
Barrow Steel Works, England
Bethlehem Steel, United States
Călărași steel works, Romania
Dowlais Ironworks, Wales
Lackawanna Steel Company, United States
Sydney Steel Corporation, Canada
Standards
EN 13674-1 - Railway applications - Track - Rail - Part 1: Vignole railway rails 46 kg/m and above EN 13674-1
EN 13674-4 - Railway applications - Track - Rail - Part 4: Vignole railway rails from 27 kg/m to, but excluding 46 kg/m EN 13674-4
See also
Common structural shapes
Difference between train and tram rails
Hunting oscillation
History of rail transport
Iron rails
Permanent way (history)
Plateway
Rail lengths
Rail Squeal
Rail tracks
Railway guide rail
Structural steel
Tramway track
References
External links
British Steel rail, Vignoles rail
British Steel crane rail, Crane rails
Table of North American tee rail (flat bottom) sections
ArcelorMittal Crane Rails
Track components and materials
Wirth Girder Rail
MRT Track & Services Co., Inc / Krupp, T and girder rails, scroll down.
Railroad Facts… Construction, Safety and More.
Permanent way
Rail technologies
Structural steel | Rail profile | Engineering | 4,286 |
77,585,390 | https://en.wikipedia.org/wiki/NGC%205260 | NGC 5260 is a barred spiral galaxy in the constellation of Hydra. Its velocity with respect to the cosmic microwave background is 6789 ± 21 km/s, which corresponds to a Hubble distance of 100.13 ± 7.02 Mpc (∼327 million light-years). It was discovered by American astronomer Lewis Swift on 6 April 1885.
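The quoted Hubble distance follows from Hubble's law, d = v / H0. Below is a minimal check of the figures above, assuming a Hubble constant of roughly 67.8 km/s/Mpc; the exact value used by the source is not stated here, so the constant is an assumption chosen to reproduce the quoted numbers.

```python
H0 = 67.8              # km/s per Mpc; assumed value, not stated in the text
MPC_IN_LY = 3.2616e6   # light-years per megaparsec

v = 6789               # km/s, recession velocity relative to the CMB
d_mpc = v / H0         # Hubble's law: distance = velocity / Hubble constant

print(round(d_mpc, 1))                       # ~100.1 Mpc
print(round(d_mpc * MPC_IN_LY / 1e6))        # ~327 (million light-years)
```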
According to the SIMBAD database, NGC 5260 is a Seyfert II galaxy, i.e. it has a quasar-like nucleus with very high surface brightness whose spectrum reveals strong, high-ionisation emission lines, but unlike quasars, the host galaxy is clearly detectable.
NGC 5260 forms a physical pair with galaxy ESO 509- G 093, collectively named RR 254, with an optical separation of between them.
Supernovae
Two supernovae have been observed in NGC 5260:
SN 2022jkx (type Ib, mag. 18.819) was discovered by ATLAS on 3 May 2022.
SN 2023dtd (type II, mag. 18.516) was discovered by ATLAS on 20 March 2023.
See also
List of NGC objects (5001–6000)
References
External links
5260
048371
-04-32-050
509-092
13375-2336
Hydra (constellation)
18850406
Discoveries by Lewis Swift
Barred spiral galaxies
Seyfert galaxies | NGC 5260 | Astronomy | 294 |
11,881,862 | https://en.wikipedia.org/wiki/RADiations%20Effects%20on%20Components%20and%20Systems | The RADECS association is a non-profit professional organization that promotes basic and applied research in the field of radiation and its effects on materials, components and systems. The acronym RADECS stands for "RADiations Effects on Components and Systems".
History
The first “Radiation and its Effects on Components and Systems" (RADECS) conference was held in Montpellier, France in 1989 as a French national conference. In 1991, the members of the organizing committee expanded the scope of RADECS to become a European conference. Since then, the RADECS Conference and RADECS Workshop have run in alternate years.
The activities of the RADECS association are as follows:
RADECS biannual European Conference
Biannual Technical Workshop
Promote research activities on radiation effects due to charged, un-charged particles and ionizing radiation
Scientific publications or promotion of scientific publications
Cooperation and exchange with other organizations (e.g. IEEE Nuclear and Plasma Sciences Society)
RADECS conferences
The RADECS conference and workshops address technical issues related to radiation effects on devices, integrated circuits, sensors, and systems, as well as radiation hardening, testing, and environmental modeling methods. Papers from the events are published in a biennial issue of the IEEE Transactions on Nuclear Science journal.
References
External links
EASii IC – Company for RADECS website
International professional associations
Radiation Effects
Nuclear organizations
International organizations based in France
Professional associations based in France
Organizations based in Montpellier | RADiations Effects on Components and Systems | Astronomy,Engineering | 292 |
45,358,106 | https://en.wikipedia.org/wiki/LTE%20in%20unlicensed%20spectrum | LTE in unlicensed spectrum (LTE-Unlicensed, LTE-U) is an extension of the Long-Term Evolution (LTE) wireless standard that allows cellular network operators to offload some of their data traffic by accessing the unlicensed 5 GHz frequency band. LTE-Unlicensed is a proposal, originally developed by Qualcomm, for the use of the 4G LTE radio communications technology in unlicensed spectrum, such as the 5 GHz band used by 802.11a and 802.11ac compliant Wi-Fi equipment. It would serve as an alternative to carrier-owned Wi-Fi hotspots. Currently, there are a number of variants of LTE operation in the unlicensed band, namely LTE-U, License Assisted Access (LAA), MulteFire, sXGP and CBRS.
LTE in Unlicensed spectrum (LTE-U)
The first version of LTE-Unlicensed is called LTE-U and is developed by the LTE-U Forum to work with the existing 3GPP Releases 10/11/12. LTE-U was designed for quick launch in countries, such as the United States and China, that do not mandate implementing the listen-before-talk (LBT) technique. LTE-U would allow cellphone carriers to boost coverage in their cellular networks, by using the unlicensed 5 GHz band already populated by Wi-Fi devices.
LTE-U is intended to let cell networks boost data speeds over short distances, without requiring the user to use a separate Wi-Fi network as they normally would. It differs from Wi-Fi calling; there remains a control channel using LTE, but all data (not just phone calls) flows over the unlicensed 5 GHz band, instead of the carrier's frequencies.
In 2014, the LTE-U Forum was created by Verizon, in conjunction with Alcatel-Lucent, Ericsson, Qualcomm, and Samsung as members. The forum collaborates and creates technical specifications for base stations and consumer devices passing LTE-U on the unlicensed 5 GHz band, as well as coexistence specs to handle traffic contention with existing Wi-Fi devices.
T-Mobile and Verizon Wireless have indicated early interest in deploying such a system as soon as 2016. While cell providers ordinarily rely on the radio spectrum to which they have exclusive licenses, LTE-U would share space with Wi-Fi equipment already inhabiting that band – smartphones, laptops and tablets connecting to home broadband networks, free hotspots provided by businesses, and so on.
As of late January 2019, there were three LTE-U deployed/launched networks in three countries; eight further operators are investing in the technology in the form of trials or pilots in seven countries.
License Assisted Access (LAA)
The second variant of LTE-Unlicensed is Licensed Assisted Access (LAA) and has been standardized by the 3GPP in Rel-13. LAA adheres to the requirements of the LBT protocol, which is mandated in Europe and Japan. It promises to provide a unified global framework that complies with the regulatory requirements in the different regions of the world.
3GPP Rel-13 defines LAA only for the downlink (DL).
3GPP Rel-14 defines enhanced-Licensed Assisted Access (eLAA), which includes uplink (UL) operation in the unlicensed channel.
3GPP Rel-15 The technology continued to be developed in 3GPP's release 15 under the title Further Enhanced LAA (feLAA).
Ericsson uses the term License Assisted Access (LAA) to describe similar technology. LAA is the 3rd Generation Partnership Project's (3GPP) effort to standardize operation of LTE in the Wi-Fi bands. It uses a contention protocol known as listen-before-talk (LBT), mandated in some European countries, to coexist with other Wi-Fi devices on the same band.
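A rough sketch of the listen-before-talk idea described above: the transmitter senses the shared channel and only transmits after finding it idle, backing off for a random period when it is busy. This is a simplified illustration, not the actual LAA channel-access procedure standardised by 3GPP; the energy threshold, timings, and function names are invented for the example.

```python
import random
import time

ENERGY_DETECT_THRESHOLD_DBM = -72.0  # illustrative clear-channel threshold

def channel_energy_dbm() -> float:
    """Stand-in for a radio's energy measurement on the shared channel."""
    return random.uniform(-95.0, -50.0)

def listen_before_talk(transmit, max_attempts: int = 10) -> bool:
    """Transmit only after sensing the channel idle; back off if busy."""
    for _ in range(max_attempts):
        if channel_energy_dbm() < ENERGY_DETECT_THRESHOLD_DBM:
            transmit()          # channel judged idle: send the burst
            return True
        # Channel busy: wait a random back-off interval before sensing again.
        time.sleep(random.uniform(0.001, 0.010))
    return False                # gave up after repeated contention

listen_before_talk(lambda: print("burst sent"))
```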
MulteFire
MulteFire is another variant of LTE in unlicensed bands and has been proposed as a standalone version of LTE for small cells. This variant will use only the unlicensed spectrum as the primary and only carrier, and it will provide an opportunity for neutral hosts to deploy LTE in the future. The idea of standalone operation of LTE in unlicensed bands was originally proposed by a small minority of vendors in 3GPP but rejected by the network operators, who wanted the technology to be reliant on their licensed spectrum holdings. This technology is now developed by the MulteFire Alliance.
Controversy
The proposed use of LTE-U by mobile phone network operators is the subject of controversy in the telecommunications industry. In June 2015, Google sent the Federal Communications Commission (FCC) of the United States a 25-page protest, making an argument against LTE-U in highly technical detail. Since Google's study did not use actual LTE-U equipment in the tests, some industry experts have called its conclusions into question, with one commenter calling the study "utterly artificial and speculative" and "embarrassing".
In August 2015, the Wi-Fi Alliance and National Cable & Telecommunications Association (NCTA) also voiced opposition to LTE-U approval before more testing can be done, citing concerns that it would severely degrade performance of other Wi-Fi devices. Also in August 2015, Qualcomm responded to the allegations made in Google's whitepaper in a detailed filing with the FCC. Qualcomm stated that it conducted tests that were "specifically designed to replicate (to the fullest extent possible) the test scenarios cited in Google’s FCC filing, in particular", and that they "collectively showed that LTE-U coexists very well with Wi-Fi when LTE-U is operating either above or below Wi-Fi’s Energy Detect ('ED') level." Qualcomm explained that the divergence in results was caused by the fact that "the testing the opposing parties conducted for LTE-U/Wi-Fi coexistence below the ED level utilized extremely pessimistic and impractical technical assumptions", whereas Qualcomm's tests were conducted "using a far more realistic setup", including actual LTE-U equipment (versus signal generators in Google's study).
In May 2016, the New York City Mayor's Office sent a letter to the FCC, 3GPP, Wi-Fi Alliance, and IEEE, expressing concern over LTE-U interference with Wi-Fi, given the City's broad investment in the technology. These concerns were discussed at a public event.
In June 2016 the Wi-Fi Alliance announced its co-existence test plan would be ready in August. In FCC filings, Qualcomm, Verizon and T-Mobile said they plan to use this plan, some with the aim of full implementation before the end of 2016. However, in August 2016, Qualcomm demurred. “The latest version of the test plan released by the Wi-Fi Alliance lacks technical merit, is fundamentally biased against LTE-U, and rejects virtually all the input that Qualcomm provided for the last year, even on points that were not controversial,” said Dean Brenner, senior vice president of government affairs. Qualcomm asserts that the plan biased in favor of Wi-Fi, and also that the testing regimen is extended to cover not just LTE-U, but also LAA, despite it already being a 3GPP standard. Verizon also opposed the test plan, saying it was "fundamentally unfair and biased".
Research from the University of Chicago in 2021 also showed a marked decrease in Wi-FI performance when LAA was in active use.
Deployments
In November 2016 Verizon, separately from the Wi-Fi Alliance coexistence plan, filed a Special Temporary Authority (STA) application with the FCC to test 40 small cells in the 5 GHz band. According to a separate filing, Verizon will conduct the tests in Oklahoma City, Raleigh and Cary, North Carolina, and Irving, Texas.
In February 2017, the FCC approved the use of LTE-U on base stations manufactured by Ericsson and Nokia.
As of June 26, 2017, T-Mobile declared that they have successfully launched LTE-U in Bellevue, Washington; Brooklyn, New York; Dearborn, Michigan; Las Vegas, Nevada; Richardson, Texas; and Simi Valley, California.
In January 2019, the Global Mobile Suppliers Association reported that 32 operators are investing in LAA across 21 countries; this had increased to 37 operators in 21 countries by July 2019. Eight of these have announced LAA network launches in six countries, while 29 operators are trialling or deploying the technology in 18 countries. The GSA also identified 21 chipsets containing modems that support one or more of LTE-U, LAA, LWA or CBRS from vendors including GCT, Intel, Mediatek, Qualcomm, and Samsung.
See also
Mobile data offloading
MulteFire
White spaces (radio)
LTE Advanced Pro
LTE-WLAN Aggregation, an alternative proposal that explicitly uses Wi-Fi access instead of competing with it for spectrum
References
LTE (telecommunication)
Mobile technology | LTE in unlicensed spectrum | Technology | 1,954 |
33,639,322 | https://en.wikipedia.org/wiki/Philadelphus%20%C3%97%20lemoinei | Philadelphus × lemoinei is a shrub in the genus Philadelphus. In 1884, Victor Lemoine crossed Philadelphus microphyllus with Philadelphus coronarius and produced this hybrid plant which he named P. lemoinei. The following cultivars have gained the Royal Horticultural Society's Award of Garden Merit:-
Philadelphus 'Manteau d'Hermine'
Philadelphus 'Belle Étoile'
References
Arnold Arboretum Bulletin of Popular Information, vol VI no. 10, June 24, 1920.
Arnold Arboretum Bulletin of Popular Information, vol XXV no. 5, June 18, 1965.
External links
lemoinei
Hybrid plants | Philadelphus × lemoinei | Biology | 147 |
484,824 | https://en.wikipedia.org/wiki/American%20alligator | The American alligator (Alligator mississippiensis), sometimes referred to as a gator, or common alligator is a large crocodilian reptile native to the Southeastern United States and a small section of northeastern Mexico. It is one of the two extant species in the genus Alligator, and is larger than the only other living alligator species, the Chinese alligator.
Adult male American alligators measure in length, and can weigh up to , with unverified sizes of up to and weights of making it the second largest member by length and the heaviest of the family Alligatoridae, after the black caiman. Females are smaller, measuring in length. The American alligator inhabits subtropical and tropical freshwater wetlands, such as marshes and cypress swamps, from southern Texas to North Carolina. It is distinguished from the sympatric American crocodile by its broader snout, with overlapping jaws and darker coloration, and is less tolerant of saltwater but more tolerant of cooler climates than the American crocodile, which is found only in tropical and warm subtropical climates.
American alligators are apex predators and consume fish, amphibians, reptiles, birds, and mammals. Hatchlings feed mostly on invertebrates. They play an important role as ecosystem engineers in wetland ecosystems through the creation of alligator holes, which provide both wet and dry habitats for other organisms. Throughout the year (in particular during the breeding season), American alligators bellow to declare territory, and locate suitable mates. Male American alligators use infrasound to attract females. Eggs are laid in a nest of vegetation, sticks, leaves, and mud in a sheltered spot in or near the water. Young are born with yellow bands around their bodies and are protected by their mother for up to one year. This species displays parental care, which is rare for most reptiles. Mothers protect their eggs during the incubation period and move the hatchlings to the water in their mouths.
The conservation status of the American alligator is listed as Least Concern by the International Union for Conservation of Nature. Historically, hunting had decimated their population, and the American alligator was listed as an endangered species by the Endangered Species Act of 1973. Subsequent conservation efforts have allowed their numbers to increase and the species was removed from endangered status in 1987. The species is the official state reptile of three states: Florida, Louisiana, and Mississippi.
History and taxonomy
The American alligator was first classified in 1801 by French zoologist François Marie Daudin as Crocodilus mississipiensis. In 1807, Georges Cuvier created the genus Alligator for it, based on the English common name alligator (derived from Spanish word , "the lizard").
The American alligator and its closest living relative, the Chinese alligator, belong to the subfamily Alligatorinae. Alligatorinae is the sister group to the caimans of Caimaninae; together these two subfamilies comprise the family Alligatoridae.
Evolution
Fossils identical to the existing American alligator are found throughout the Pleistocene, from 2.5 million to 11.7 thousand years ago. In 2016, a Late Miocene fossil skull of an alligator, dating to approximately seven or eight million years ago, was discovered in Marion County, Florida. Unlike the other extinct alligator species of the same genus, the fossil skull was virtually indistinguishable from that of the modern American alligator. This alligator and the American alligator are now considered to be sister taxa, suggesting that the A. mississippiensis lineage has existed in North America for seven to eight million years.
The alligator's full mitochondrial genome was sequenced in the 1990s, and it suggests the animal evolved at a rate similar to mammals and greater than birds and most cold-blooded vertebrates. However, the full genome, published in 2014, suggests that the alligator evolved much more slowly than mammals and birds.
Characteristics
Domestic American alligators range from long and slender to short and robust, possibly in response to variations in factors such as growth rate, diet, and climate.
Size
The American alligator is a relatively large species of crocodilian. On average, it is the largest species in the family Alligatoridae, with only the black caiman being possibly larger. Weight varies considerably depending on length, age, health, season, and available food sources. Similar to many other reptiles that range expansively into temperate zones, American alligators from the northern end of their range, such as southern Arkansas, Alabama, and northern North Carolina, tend to reach smaller sizes. Large adult American alligators tend to be relatively robust and bulky compared to other similar-length crocodilians; for example, captive males measuring were found to weigh , although captive specimens may outweigh wild specimens due to lack of hunting behavior and other stressors.
Large male American alligators reach an expected maximum size up to in length and weigh up to , while females reach an expected maximum of . However, the largest free-ranging female had a total length of and weighed . On rare occasions, a large, old male may grow to an even greater length.
Largest
During the 19th and 20th centuries, larger males reaching were reported. The largest reported individual was a male killed in 1890 by Edward McIlhenny on Marsh Island, Louisiana, reportedly measuring in length, but no voucher specimen was available, since the animal, too massive to relocate, was left on the muddy bank where it had been measured. If the size of this animal was correct, it would have weighed about . In Arkansas, a man killed an American alligator that was and . The largest American alligator ever killed in Florida was , as reported by the Everglades National Park, although this record is unverified. The largest American alligator scientifically verified in Florida for the period from 1977 to 1993 was reportedly and weighed , although another specimen (size estimated from skull) may have measured . A specimen that was long and weighed is the largest American alligator killed in Alabama and was declared the SCI world record in 2014.
Reported sizes
Average
American alligators do not normally reach such extreme sizes. In mature males, most specimens grow up to about in length, and weigh up to , while in females, the mature size is normally around , with a body weight up to . In Newnans Lake, Florida, adult males averaged in weight and in length, while adult females averaged and measured . In Lake Griffin State Park, Florida, adults weighed on average . Weight at sexual maturity per one study was stated as averaging while adult weight was claimed as .
Relation to age
There is a common belief stated throughout reptilian literature that crocodilians, including the American alligator, exhibit indeterminate growth, meaning the animal continues to grow for the duration of its life. However, these claims are largely based on assumptions and observations of juvenile and young adult crocodilians, and recent studies are beginning to contradict this claim. For example, one long-term mark-recapture study (1979–2015) done at the Tom Yawkey Wildlife Center in South Carolina found evidence to support patterns of determinate growth, with growth ceasing upon reaching a certain age (43 years for males and 31 years for females).
Sexual dimorphism
While noticeable in very mature specimens, the sexual dimorphism in size of the American alligator is relatively modest among crocodilians. For contrast, the sexual dimorphism of saltwater crocodiles is much more extreme, with mature males nearly twice as long as and at least four times as heavy as female saltwater crocodiles. Given that female American alligators have relatively higher survival rates at an early age and a large percentage of given populations consists of immature or young breeding American alligators, relatively few large mature males of the expected mature length of or more are typically seen.
Color
Dorsally, adult American alligators may be olive, brown, gray, or black. However, they are on average one of the most darkly colored modern crocodilians (although other alligatorid family members are also fairly dark), and can reliably be distinguished from crocodiles by their more blackish dorsal scales. Meanwhile, their undersides are cream-colored. Some American alligators are missing or have an inhibited gene for melanin, which makes them albino. These American alligators are extremely rare and almost impossible to find in the wild. They can survive only in captivity, as they are very vulnerable to the sun and predators.
Jaws, teeth, and snout
American alligators have 74–80 teeth. As they grow and develop, the morphology of their teeth and jaws changes significantly. Juveniles have small, needle-like teeth and relatively narrow snouts; as individuals develop, their teeth become much more robust and their snouts broader. These morphological changes correspond to shifts in the American alligator's diet, from smaller prey items such as fish and insects to larger prey items such as turtles, birds, and other large vertebrates. American alligators have broad snouts, especially in captive individuals. When the jaws are closed, the edges of the upper jaws cover the lower teeth, which fit into the jaws' hollows. Like the spectacled caiman, this species has a bony nasal ridge, though it is less prominent. American alligators are often mistaken for a similar animal, the American crocodile; an easy way to distinguish the two is the fourth tooth of the lower jaw, which in the American alligator is enclosed in a pocket in the upper jaw and is not visible when the mouth is closed.
Bite
Adult American alligators held the record as having the strongest laboratory-measured bite of any living animal, measured at up to . This experiment had not, at the time the paper was published, been replicated in any other crocodilians, and the same laboratory was able to measure a greater bite force of in saltwater crocodiles. Notwithstanding this very high biting force, the muscles opening the American alligator's jaw are quite weak, and the jaws can be held closed by hand or tape when an American alligator is captured. No significant difference is noted between the bite forces of male and female American alligators of equal size. Another study noted that as the American alligator increases in size, the force of its bite also increases.
Movement
When on land, an American alligator moves either by sprawling or walking, the latter involving the reptile lifting its belly off the ground. The sprawling of American alligators and other crocodylians is not similar to that of salamanders and lizards, being similar to walking. Therefore, the two forms of land locomotion can be termed the "low walk" and the "high walk". Unlike most other land vertebrates, American alligators increase their speed through the distal rather than proximal ends of their limbs.
In the water, American alligators swim like fish, moving their pelvic regions and tails from side to side. Swimming is assisted by webbed rear feet as well, which bear four toes in contrast to the five toes of the front feet. During respiration, air flow is unidirectional, looping through the lungs during inhalation and exhalation; the American alligator's abdominal muscles can alter the position of the lungs within the torso, thus shifting the center of buoyancy, which allows the American alligator to dive, rise, and roll within the water.
Distribution
American alligators, being native both to the Nearctic and Neotropical realms, are found in the wild in the Southeastern United States, from the Lowcountry in South Carolina, south to Everglades National Park in Florida, and west to the southeastern region of Texas. They are found in parts of North Carolina, South Carolina, Georgia, Florida, Louisiana, Alabama, Mississippi, Arkansas, Oklahoma, and Texas. Some of these locations appear to be relatively recent introductions, with often small but reproductive populations. Louisiana has the largest American alligator population of any U.S. state. In the future, possible American alligator populations may be found in areas of Mexico adjacent to the Texas border. The range of the American alligator is slowly expanding northwards, including areas where they were once found, such as Virginia. American alligators have been naturally expanding their range into Tennessee, and have established a small population in the southwestern part of that state via inland waterways, according to the state's wildlife agency. They have been extirpated from Virginia, and occasional vagrants from North Carolina wander into the Great Dismal Swamp.
In 2021, an individual was found in Calvert County, Maryland, near Chesapeake Bay, where it was shot and killed by a hunter using a crossbow. Additional reports of American alligators from this region exist, though they are believed to be escaped or released exotic pets.
Conservation status
American alligators are currently listed as Least Concern on the IUCN Red List, although from the 1800s to the mid-1900s they were hunted and poached by humans unsustainably.
Historically, hunting and habitat loss have severely affected American alligator populations throughout their range, and whether the species would survive was in doubt. In 1967, the American alligator was listed as an endangered species (under a law that was the precursor to the Endangered Species Act of 1973), since it was believed to be in danger of extinction throughout all or a significant portion of its range.
Both the United States Fish and Wildlife Service (USFWS) and state wildlife agencies in the South contributed to the American alligator's recovery. Protection under the Endangered Species Act allowed the species to recuperate in many areas where it had been depleted. States began monitoring their American alligator populations to ensure that they would continue to grow. In 1987, the USFWS removed the animal from the endangered species list, as it was considered to be fully recovered. The USFWS still regulates the legal trade in American alligators and their products to protect still endangered crocodilians that may be passed off as American alligators during trafficking.
American alligators are listed under Appendix II of the Convention on International Trade in Endangered Species (CITES) meaning that international trade in the species (including parts and derivatives) is regulated.
Habitat
They inhabit swamps, streams, rivers, ponds, and lakes as well as wetland prairies interspersed with shallow open water and canals with associated levees. A lone American alligator was spotted for over 10 years living in a river north of Atlanta, Georgia. Females and juveniles are also found in Carolina Bays and other seasonal wetlands. While they prefer fresh water, American alligators may sometimes wander into brackish water, but are less tolerant of salt water than American crocodiles, as the salt glands on their tongues do not function. One study of American alligators in north-central Florida found the males preferred open lake water during the spring, while females used both swampy and open-water areas. During summer, males still preferred open water, while females remained in the swamps to construct their nests and lay their eggs. Both sexes may den underneath banks or clumps of trees during the winter.
In some areas of their range, American alligators are an unusual example of urban wildlife; golf courses are often favored by the species due to an abundance of water and a frequent supply of prey animals such as fish and birds.
Cold tolerance
American alligators are less vulnerable to cold than American crocodiles. Unlike an American crocodile, which would immediately succumb to the cold and drown in water at or less, an American alligator can survive in such temperatures for some time without displaying any signs of discomfort. This adaptiveness is thought to be why American alligators are widespread further north than the American crocodile. In fact, the American alligator is found farther from the equator and is more equipped to handle cooler conditions than any other crocodilian. When the water begins to freeze, American alligators go into a period of brumation; they stick their snouts through the surface, which allows them to breathe above the ice, and they can remain in this state for several days.
Ecology and behavior
Basking
American alligators primarily bask on shore, but also climb into and perch on tree limbs to bask if no shoreline is available. This is not often seen, since if disturbed, they quickly retreat back into the water by jumping from their perch.
Holes
American alligators modify wetland habitats, most dramatically in flat areas such as the Everglades, by constructing small ponds known as alligator holes. This behavior has qualified the American alligator to be considered a keystone species. Alligator holes retain water during the dry season and provide a refuge for aquatic organisms, which survive the dry season by seeking refuge in alligator holes, so are a source of future populations. The construction of nests along the periphery of alligator holes, as well as a buildup of soils during the excavation process, provides drier areas for other reptiles to nest and a place for plants that are intolerant of inundation to colonize. Alligator holes are an oasis during the Everglades dry season, so are consequently important foraging sites for other organisms. In the limestone depressions of cypress swamps, alligator holes tend to be large and deep, while those in marl prairies and rocky glades are usually small and shallow, and those in peat depressions of ridge and slough wetlands are more variable.
Feeding
Bite and mastication
The teeth of the American alligator are designed to grip prey, but cannot rip or chew flesh like the teeth of some other predators (such as canids and felids); the animals instead depend on their gizzards to masticate their food. The attainment of adulthood enables the consumption of large mammals and the crushing of large turtles. The American alligator is capable of biting through a turtle's shell or a moderately sized mammal bone.
Possible tool use
American alligators have been documented using lures to hunt prey such as birds, making them among the first reptiles recorded to use tools. By balancing sticks and branches on their heads, American alligators are able to lure birds looking for suitable nesting material, which they then kill and consume. This strategy, which is shared by the mugger crocodile, is particularly effective during the nesting season, when birds are more likely to gather appropriate nesting materials. It has been documented in two Florida zoos, occurring multiple times a day in peak nesting season, and in some parks in Louisiana. The use of tools was documented primarily during the peak rookery season, when birds were primarily looking for sticks.
However, a three-day experiment to reproduce the use of sticks as lures, published in 2019, failed to document the behavior. Researchers placed sticks at densities of 30 to 35 sticks per square meter near four captive populations, two near rookeries and two at sites without rookeries. While stick-displaying behavior was observed several times, it was not more frequent near rookeries; in fact, in some comparisons it was associated with the sites without rookeries. This implies American alligators do not tailor this behavior to specific contexts, leaving the purpose, if any, of stick-displaying ambiguous.
Aquatic vs terrestrial prey
Fish and other aquatic prey taken in the water or at the water's edge form the major part of the American alligator's diet and may be eaten at any time of the day or night. Adult American alligators also spend considerable time hunting on land, up to from water, ambushing terrestrial animals on trailsides and road shoulders. Usually, terrestrial hunting occurs on nights with warm temperatures. When hunting terrestrial prey, American alligators may also ambush them from the edge of the water by grabbing them and pulling the prey into the water, the preferred method of predation of larger crocodiles.
Additionally, American alligators have recently been filmed and documented killing and eating sharks and rays; four incidents documented indicated that bonnetheads, lemon sharks, Atlantic stingrays, and nurse sharks are components of the animal's diet. Sharks are also known to prey on American alligators, in turn, indicating that encounters between the two predators are common.
Common prey
American alligators are considered an apex predator throughout their range. They are opportunists and their diet is determined largely by both their size and age and the size and availability of prey. Most American alligators eat a wide variety of animals, including invertebrates, fish, birds, turtles, snakes, amphibians, and mammals. Hatchlings mostly feed on invertebrates such as insects, insect larvae, snails, spiders, and worms, as well as small fish and frogs. As they grow, American alligators gradually expand to larger prey. Once an American alligator reaches full size and power in adulthood, any animal living in the water or coming to the water to drink is potential prey. Most animals captured by American alligators are considerably smaller than the alligator itself. A few examples of animals consumed are largemouth bass, spotted gar, freshwater pearl mussels, American green tree frogs, yellow mud turtles, cottonmouths, common moorhens, and feral wild boars. Stomach contents show that, among native mammals, muskrats and raccoons are some of the most commonly eaten species. In Louisiana, where introduced nutria are common, they are perhaps the most regular prey for adult American alligators, although only larger adults commonly eat this species. It has also been reported that large American alligators prey on medium-sized American alligators, which had preyed on hatchlings and smaller juveniles.
If an American alligator's primary food resource is not available, it will sometimes feed on carrion and non-prey items such as rocks and artificial objects, like bottle caps. These items help the American alligator in the process of digestion by crushing up the meat and bones of animals, especially animals with shells.
Large animals
Other animals may occasionally be eaten, even large deer or feral wild boars, but these are not normally part of the diet. American alligators occasionally prey on large mammals, but usually do so when fish and smaller prey levels go down. Rarely, American alligators have been observed killing and eating bobcats, but such events are not common and have little effect on bobcat populations. Although American alligators have been listed as predators of the nilgai and the West Indian manatee, very little evidence exists of such predation. In the 2000s, when invasive Burmese pythons first occupied the Everglades, American alligators were recorded preying on sizable snakes, possibly controlling populations and preventing the invasive species from spreading northwards. However, the python is also known to occasionally prey on alligators, a form of both competition and predation. American alligator predation on Florida panthers is rare, but has been documented. Such incidents usually involve a panther trying to cross a waterway or coming down to a swamp or river to get a drink. American alligator predation on American black bears has also been recorded.
Domestic animals
Occasionally, domestic animals, including dogs, cats, and calves, are taken as available, but are secondary to wild and feral prey. Other prey, including snakes, lizards, and various invertebrates, are eaten occasionally by adults.
Birds
Water birds, such as herons, egrets, storks, waterfowl and large dabbling rails such as gallinules or coots, are taken when possible. Occasionally, unwary adult birds are grabbed and eaten by American alligators, but most predation on bird species occurs with unsteady fledgling birds in late summer, as fledgling birds attempt to make their first flights near the water's edge.
Fruit
In 2013, American alligators and other crocodilians were reported to also eat fruit. Such behavior has been witnessed, as well as documented from stomach contents, with the American alligators eating such fruit as wild grapes, elderberries, and citrus fruits directly from the trees. Thirty-four families and 46 genera of plants were represented among seeds and fruits found in the stomach contents of American alligators. The discovery of this unexpected part of the American alligator diet further reveals that they may be responsible for spreading seeds from the fruit they consume across their habitat.
Cooperative hunting
Additionally, American alligators engage in what seems to be cooperative hunting. In one observation of cooperative hunting, some American alligators pushed fish toward others that caught them, and the animals were observed taking turns in each position. In another observation, about 60 American alligators gathered in an area; roughly half of them formed a semicircle and pushed the fish closer to the bank. Once one of the American alligators caught a fish, another would move into its spot, while the successful hunter took the fish to the resting area. This was reported to have occurred two days in a row.
In Florida and East Texas
The diet of adult American alligators from central Florida lakes is dominated by fish, but the species is highly opportunistic based upon local availability. In Lake Griffin, fish made up 54% of the diet by weight, with catfish being most commonly consumed, while in Lake Apopka, fish made up 90% of the food and mostly shad were taken; in Lake Woodruff, the diet was 84% fish and largely consisted of bass and sunfish. Unusually in these regions, reptiles and amphibians were the most important non-fish prey, mostly turtles and water snakes. In southern Louisiana, crustaceans (largely crawfish and crabs) were found to be present in the diet of southeastern American alligators, but largely absent in southwestern American alligators, which consumed a relatively high proportion of reptiles, although fish were the most recorded prey for adults, and adult males consumed a large portion of mammals.
In East Texas, diets were diverse and adult American alligators took mammals, reptiles, amphibians, and invertebrates (e.g. snails) in often equal measure as they did fish.
Vocalizations
Mechanism
An American alligator is able to abduct and adduct the vocal folds of its larynx, but not to elongate or shorten them; yet in spite of this, it can modulate fundamental frequency very well. The vocal folds consist of epithelium, lamina propria, and muscle. Recorded sounds ranged from 50 to 1,200 Hz. In one experiment conducted on the larynx, the fundamental frequency depended on both the glottal gap and the stiffness of the larynx tissues; higher frequencies were associated with higher tension and larger strains. The fundamental frequency is influenced by the glottal gap size and subglottal pressure, and vocal fold vibration occurs once the phonation threshold pressure is exceeded.
Calls
Crocodilians are the most vocal of all non-avian reptiles and have a variety of different calls depending on the age, size, and sex of the animal. The American alligator can perform specific vocalizations to declare territory, signal distress, threaten competitors, and locate suitable mates. Juveniles can perform a high-pitched hatchling call (a "yelping" trait common to many crocodilian species' hatchling young) to alert their mothers when they are ready to emerge from the nest. Juveniles also make a distress call to alert their mothers if they are being threatened. Adult American alligators can growl, hiss, or cough to threaten others and declare territory.
Bellowing
Both males and females bellow loudly by sucking air into their lungs and blowing it out in intermittent, deep-toned roars to attract mates and declare territory. Males are known to use infrasound during mating bellows. Their bellowing initiates the beginning of the courtship period for American alligators. Bellowing is performed in a "head oblique, tail arched" posture. Infrasonic waves from a bellowing male can cause the surface of the water directly over and to either side of his back to literally "sprinkle", in what is commonly called the "water dance". Large bellowing "choruses" of American alligators during the breeding season are commonly initiated by females and perpetuated by males. Observers of large bellowing choruses have noted they are often felt more than they are heard due to the intense infrasound emitted by males. American alligators bellow in B flat (specifically "B♭1", defined as an audio frequency of 58.27 Hz), and bellowing choruses can be induced by tuba players, sonic booms, and large aircraft.
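As an illustrative check (a standard equal-temperament calculation, not taken from the sources on alligator bellowing), the quoted frequency matches the quoted pitch: with concert tuning at A4 = 440 Hz, B♭1 lies 35 semitones below A4, giving

$$f_{\mathrm{B}\flat 1} = 440 \times 2^{-35/12} \approx 58.27\ \mathrm{Hz}.$$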
Lifespan
American alligators typically live to the age of 50, and possibly over 70 years old. Males reach sexual maturity at around 11.6 years, and females at around 15.8 years. Although it was originally thought that American alligators never stop growing, studies have now found that males stop growing at around the age of 43 years, and females stop growing at around the age of 31 years.
Reproduction
Breeding season
The breeding season begins in the spring. On spring nights, American alligators gather in large numbers for group courtship, in the aforementioned "water dances". A study conducted in the 1980s at an alligator farm showed that homosexual courtship is common, with two-thirds of the recorded instances of sexual behaviour having been between two males. The female builds a nest of vegetation, sticks, leaves, and mud in a sheltered spot in or near the water.
Eggs
After the female lays her 20 to 50 white eggs, about the size of a goose egg, she covers them with more vegetation, which heats as it decays, helping to keep the eggs warm. This differs from Nile crocodiles, which lay their eggs in pits. The temperature at which American alligator eggs develop determines their sex (see temperature-dependent sex determination). Studies have found that eggs hatched at a temperature below or a temperature above will produce female offspring, while those at a temperature between will produce male offspring. The nests built on levees are warmer, thus produce males, while the cooler nests of wet marsh produce females. The female remains near the nest throughout the 65-day incubation period, protecting it from intruders. When the young begin to hatch — their "yelping" calls can sometimes even be heard just before hatching commences — the mother quickly digs them out and carries them to the water in her mouth, as some other crocodilian species are known to do.
Young
The young are tiny replicas of adults, with a series of yellow bands around their bodies that serve as camouflage. Hatchlings gather into pods and are guarded by their mother and keep in contact with her through their "yelping" vocalizations. Young American alligators eat small fish, frogs, crayfish, and insects. They are preyed on by large fish, birds, raccoons, Florida panthers, and adult American alligators. Mother American alligators eventually become more aggressive towards their young, which encourages them to disperse. Young American alligators grow a year and reach adulthood at .
Parasites
American alligators are commonly infected with parasites. In a 2016 Texas study, 100% of the specimens collected were infected, with at least 20 different species of parasites identified, including lung pentastomids, gastric nematodes, and intestinal helminths. When compared to American alligators from different states, there was no significant difference in prevalence.
Interactions with exotic species
Nutria were introduced into coastal marshes from South America in the mid-20th century, and their population has since exploded into the millions. They cause serious damage to coastal marshes and may dig burrows in levees. Hence, Louisiana has had a bounty to try to reduce nutria numbers. Large American alligators feed heavily on nutria, so American alligators may not only control nutria populations in Louisiana, but also prevent them spreading east into the Everglades. Since hunting and trapping preferentially take the large American alligators that are the most important in eating nutria, some changes in harvesting may be needed to capitalize on their ability to control nutria.
Recently, a population of Burmese pythons became established in Everglades National Park. Substantial American alligator populations in the Everglades might be a contributing factor, as a competitor, in keeping the python populations low, preventing the spread of the species north. While events of predation by Burmese pythons on sizable American alligators have been observed, no evidence of a net negative effect has been seen on overall American alligator populations.
Indicators of environmental restoration
American alligators play an important role in the restoration of the Everglades as biological indicators of restoration success. American alligators are highly sensitive to changes in the hydrology, salinity, and productivity of their ecosystems; all are factors that are expected to change with Everglades restoration. American alligators also may control the long-term vegetation dynamics in wetlands by reducing the population of small mammals, particularly nutria, which may otherwise overgraze marsh vegetation. In this way, the vital ecological service they provide may be important in reducing rates of coastal wetland losses in Louisiana. They may provide a protection service for water birds nesting on islands in freshwater wetlands. American alligators prevent predatory mammals from reaching island-based rookeries and in return eat spilled food and birds that fall from their nests. Wading birds appear to be attracted to areas with American alligators and have been known to nest at heavily trafficked tourist attractions with large numbers of American alligators, such as the St. Augustine Alligator Farm in St. Augustine, Florida.
Relationship with humans
Attacks on humans
American alligators are capable of killing humans, but fatal attacks are rare. Mistaken identity leading to an attack is always possible, especially in or near cloudy waters. American alligators are often less aggressive towards humans than larger crocodile species, a few of which (mainly the Nile and saltwater crocodiles) may prey on humans with some regularity. Alligator bites are serious injuries, due to the reptile's sheer bite force and risk of infection. Even with medical treatment, an American alligator bite may still result in a fatal infection.
As human populations increase, and as they build houses in low-lying areas, or fish or hunt near water, incidents are inevitable where humans intrude on American alligators and their habitats. Since 1948, 257 documented attacks on humans in Florida (about five incidents per year) have been reported, of which an estimated 23 resulted in death. Only nine fatal attacks occurred in the United States throughout the 1970s–1990s, but American alligators killed 12 people between 2001 and 2007. An additional report recorded a total of 376 injuries and 15 deaths from alligator attacks between 1948 and 2004, a rise attributed to the increase in the alligator population. In May 2006, American alligators killed three Floridians in less than a week. At least 28 fatal attacks by American alligators have occurred in the United States since 1970.
Wrestling
Since the late 1880s, alligator wrestling has been a source of entertainment for some. Created by the Miccosukee and Seminole tribes before its rise as a tourist attraction, the tradition remains popular despite criticism from animal-rights activists.
Farming
Today, alligator farming is a large, growing industry in Georgia, Florida, Texas, and Louisiana. These states produce a combined annual total of some 45,000 alligator hides. Alligator hides bring good prices and hides in the 6- to 7-ft range have sold for $300 each. The market for alligator meat is growing, and about of meat are produced annually. According to the Florida Department of Agriculture and Consumer Services, raw alligator meat contains roughly 200 Calories (840 kJ) per 3-oz (85-g) portion, of which 27 Calories (130 kJ) come from fat.
Culture and film
The American alligator is the official state reptile of Florida, Louisiana, and Mississippi. Several organizations and products from Florida have been named after the animal.
"Gators" has been the nickname of the University of Florida's sports teams since 1911. In 1908, a printer made a spur-of-the-moment decision to print an alligator emblem on a shipment of the school's football pennants. The mascot stuck, and was made official in 1911, perhaps because the team captain's nickname was Gator. Allegheny College and San Francisco State University both have Gators as their mascots, as well.
The Gator Bowl is a college football game held in Jacksonville annually since 1946, with Gator Bowl Stadium hosting the event until the 1993 edition. The Gatornationals is a NHRA drag race held at the Gainesville Raceway in Gainesville since 1970.
See also
Chinese alligator, the other living species of alligator
Muja, the oldest living American alligator in captivity, lived in Belgrade Zoo, Serbia
Saturn, an American alligator that survived the destruction of the Berlin Zoological Garden during World War II
The Alligator People
Gatorland
Brazos Bend State Park
Sewer alligator
References
Further reading
Boulenger GA (1889). Catalogue of the Chelonians, Rhynchocephalians, and Crocodiles in the British Museum (Natural History). New Edition. London: Trustees of the British Museum (Natural History). (Taylor and Francis, printers). x + 311 pp. + Plates I-VI. (Alligator mississippiensis, p. 290).
Daudin FM (1802). Histoire Naturelle, Générale et Particulière des Reptiles; Ouvrage faisant suite à l'Histoire Naturelle générale et particulière, composée par Leclerc de Buffon; et rédigée par C.S. Sonnini, membre de plusieurs sociétés savantes. Tome Second [Volume 2]. 432 pp. Paris: F. Dufart. ("Crocodilus mississipiensis [sic]", new species, pp. 412–416). (in French and Latin).
Powell R, Conant R, Collins JT (2016). Peterson Field Guide to Reptiles and Amphibians of Eastern and Central North America, Fourth Edition. Boston and New York: Houghton Mifflin Harcourt. xiv + 494 pp., 47 Plates, 207 Figures. . (Alligator mississippiensis, p. 170 + Plate 13 + photographs on pp. 166–167, 465).
Smith, Hobart M.; Brodie, Edmund D., Jr. (1982). Reptiles of North America: A Guide to Field Identification. New York: Golden Press. 240 pp. . (Alligator mississippiensis, pp. 208–209).
External links
Crocodilian Online
Photo exhibit on alligators in Florida from State Archives of Florida
Why the Gulf Coast needs more big alligators
Alligator bellows and hisses – sound clips from the U.S. Fish and Wildlife Service
Alligator
Apex predators
Articles containing video clips
Crocodilians of North America
Cuisine of the Southern United States
Extant Tortonian first appearances
Miocene reptiles of North America
Fauna of the Southeastern United States
Native American cuisine of the Southeastern Woodlands
Reptiles described in 1802
Reptiles of the United States
Symbols of Florida
Symbols of Louisiana
Symbols of Mississippi
Tool-using animals
Natural history of Florida | American alligator | Biology | 7,898 |
13,567,507 | https://en.wikipedia.org/wiki/Saxon%20Shore | The Saxon Shore () was a military command of the Late Roman Empire, consisting of a series of fortifications on both sides of the Channel. It was established in the late 3rd century and was led by the "Count of the Saxon Shore". In the late 4th century, his functions were limited to Britain, while the fortifications in Gaul were established as separate commands. Several well-preserved Saxon Shore forts survive in east and south-east England.
Background
During the latter half of the 3rd century, the Roman Empire faced a grave crisis: Weakened by civil wars, the rapid succession of short-lived emperors, and secession in the provinces, the Romans now faced new waves of attacks by barbarian tribes. Most of Britain had been part of the empire since the mid-1st century. It was protected from raids in the north by the Hadrianic and Antonine Walls, while a fleet of some size was also available.
However, as the frontiers came under increasing external pressure, fortifications were built throughout the Empire in order to protect cities and guard strategically important locations. It is in this context that the forts of the Saxon Shore were constructed. Already in the 230s, under Severus Alexander, several units had been withdrawn from the northern frontier and garrisoned at locations in the south, and had built new forts at Brancaster and Caister-on-Sea in Norfolk and Reculver in Kent. Dover was already fortified in the early 2nd century, and the other forts in this group were constructed in the period between the 270s and 290s.
Meaning of the term and role
The only contemporary reference we possess that mentions the name "Saxon Shore" comes in the late 4th-century Notitia Dignitatum, which lists its commander, the Comes Litoris Saxonici per Britanniam ("Count of the Saxon Shore in Britain"), and gives the names of the sites under his command and their respective complements of military personnel. However, due to the absence of further evidence, theories have varied among scholars as to the exact meaning of the name, and also the nature and purpose of the chain of forts it refers to.
Two interpretations were put forward as to the meaning of the adjective "Saxon": either a shore attacked by Saxons, or a shore settled by Saxons. Some argue that the latter hypothesis is supported by Eutropius, who states that during the 280s the sea along the coasts of Belgica and Armorica was "infested with Franks and Saxons", and that this was why Carausius was first put in charge of the fleet there. It also receives support from archaeological finds, as artefacts of a Germanic style have been found in burials, while there is evidence of the presence of Saxons in southern England and the northern coasts of Gaul around Boulogne-sur-Mer and Bayeux from the middle of the 5th century onwards. This, in turn, could mirror a well documented practice of deliberately settling Germanic tribes (Franks became foederati in 358 AD under Emperor Julian) to strengthen Roman defences. Nevertheless, the evidence for extensive Saxon settlement in Britain typically dates to the 5th century, later than the channel defences of the late 3rd and 4th century associated with the Saxon Shore.
The other interpretation holds that the forts fulfilled a coastal defence role against seaborne invaders, mostly Saxons and Franks, and acted as bases for the naval units operating against them. This view is reinforced by the parallel chain of fortifications across the Channel on the northern coasts of Gaul, which complemented the British forts, suggesting a unified defensive system, although this could also be accounted for by Saxons having been settled on both sides of the Channel, as the archaeological evidence presented earlier suggests.
Other scholars, such as John Cotterill, however, consider the threat posed by Germanic raiders, at least in the 3rd and early 4th centuries, to be exaggerated. They interpret the construction of the forts at Brancaster, Caister-on-Sea and Reculver in the early 3rd century and their location at the estuaries of navigable rivers as pointing to a different role: fortified points for transport and supply between Britain and Gaul, without any relation (at least at that time) to countering seaborne piracy. This view is supported by contemporary references to the supplying of the army of Julian the Apostate, then Caesar, with grain from Britain during his campaign in Gaul in 359, and by the use of the forts as secure landing places by Count Theodosius during the suppression of the Great Conspiracy a few years later.
Another theory, proposed by D.A. White, was that the extended system of large stone forts was disproportionate to any threat by seaborne Germanic raiders, and that it was actually conceived and constructed during the secession of Carausius and Allectus (the Carausian Revolt) in 289–296, and with an entirely different enemy in mind: they were to guard against an attempt at reconquest by the Empire. This view, although widely disputed, has found recent support from archaeological evidence at Pevensey, which dates the fort's construction to the early 290s.
Whatever their original purpose, it is virtually certain that in the late 4th century the forts and their garrisons were employed in operations against Frankish and Saxon pirates. Britain was abandoned by Rome in 410, with Armorica following soon after. The forts on both sides continued to be inhabited in the following centuries, and in Britain in particular several continued in use well into the Anglo-Saxon period.
The forts
In Britain
The nine forts mentioned in the Notitia Dignitatum for Britain are listed here, from north to south, with their garrisons.
Branodunum (Brancaster, Norfolk). One of the earliest forts, dated to the 230s. It was built to guard the Wash approaches and is of a typical rectangular castrum layout. It was garrisoned by the Equites Dalmatae Brandodunenses, although evidence exists suggesting that its original garrison was the cohors I Aquitanorum.
Gariannonum (Burgh Castle, Norfolk). Established between 260 and the mid-270s to guard the River Yare (Gariannus Fluvius), it was garrisoned by the Equites Stablesiani Gariannoneses. There is some discussion as to whether the Gariannonum of the Notitia is actually the fort at Caister-on-Sea, which lies on the opposite bank of the same estuary as Burgh Castle.
Othona (Bradwell-on-Sea, Essex). Garrisoned by the Numerus Fortensium.
Regulbium (Reculver, Kent). Together with Brancaster one of the earliest forts, built in the 210s to guard the Thames estuary, it is likewise a castrum. It was garrisoned by the cohors I Baetasiorum from the 3rd century.
Rutupiae (Richborough, Kent), garrisoned by parts of the Legio II Augusta.
Dubris (Dover Castle, Kent), garrisoned by the Milites Tungrecani.
Portus Lemanis (Lympne, Kent), garrisoned by the Numerus Turnacensium.
Anderitum (Pevensey Castle, East Sussex), garrisoned by the Numerus Abulcorum.
Portus Adurni (Portchester Castle, Hampshire), garrisoned by a Numerus Exploratorum.
There are a few other sites that clearly belonged to the system of the British branch of the Saxon Shore (the so-called "Wash-Solent limes"), although they are not included in the Notitia, such as the forts at Walton Castle, Suffolk, which has by now sunk into the sea due to erosion, and at Caister-on-Sea in Norfolk. In the south, Carisbrooke Castle on the Isle of Wight and Clausentum (Bitterne, in modern Southampton) are also regarded as westward extensions of the fortification chain. Other sites probably connected to the Saxon Shore system are the sunken fort at Skegness, and the remains of possible signal stations at Thornham in Norfolk, Corton in Suffolk and Hadleigh in Essex.
Further north on the coast, the precautions took the form of central depots at Lindum (Lincoln) and Malton with roads radiating to coastal signal stations. When an alert was relayed to the base, troops could be dispatched along the road. Further up the coast in North Yorkshire, a series of coastal watchtowers (at Huntcliff, Filey, Ravenscar, Goldsborough, and Scarborough) was constructed, linking the southern defences to the northern military zone of the Wall. Similar coastal fortifications are also found in Wales, at Cardiff and Caer Gybi. The only fort in this style in the northern military zone is Lancaster, Lancashire, built sometime in the mid-late 3rd century replacing an earlier fort and extramural community, which may reflect the extent of coastal protection on the north-west coast from invading tribes from Ireland.
In Gaul
The Notitia also includes two separate commands for the northern coast of Gaul, both of which belonged to the Saxon Shore system. However, when the list was compiled, Britain had already been abandoned by Roman forces. The first command controlled the shores of the province Belgica Secunda (roughly between the estuaries of the Scheldt and the Somme), under the dux Belgicae Secundae with headquarters at Portus Aepatiaci:
Marcae (unidentified location near Calais, possibly Marquise or Marck), garrisoned by the Equites Dalmatae. In the Notitia, together with Grannona, it is the only site on the Gallic shore to be explicitly referred to as lying in litore Saxonico.
Locus Quartensis sive Hornensis (probably at the mouth of the Somme), the port of the classis Sambrica ("Fleet of the Somme")
Portus Aepatiaci (possibly Étaples), garrisoned by the milites Nervii.
Although not mentioned in the Notitia, the port of Gesoriacum or Bononia (Boulogne-sur-Mer), which until 296 was the main base of the Classis Britannica, would also have come under the dux Belgicae Secundae.
To this group also belongs the Roman fort at Oudenburg in Belgium.
Further west, under the dux tractus Armoricani et Nervicani, were mainly the coasts of Armorica, nowadays Normandy and Brittany. The Notitia lists the following sites:
Grannona (disputed location, either at the mouths of the Seine or at Port-en-Bessin), the seat of the dux, garrisoned by the cohors prima nova Armoricana. In the Notitia, it is explicitly mentioned as lying in litore Saxonico.
Rotomagus (Rouen), garrisoned by the milites Ursariensii
Constantia (Coutances), garrisoned by the legio I Flavia Gallicana Constantia
Abricantis (Avranches), garrisoned by the milites Dalmati
Grannona (uncertain whether this is a different location than the first Grannona, perhaps Granville), garrisoned by the milites Grannonensii
Aleto or Aletum (Aleth, near Saint-Malo), garrisoned by the milites Martensii
Osismis (Brest), garrisoned by the milites Mauri Osismiaci
Blabia (perhaps Hennebont), garrisoned by the milites Carronensii
Benetis (possibly Vannes), garrisoned by the milites Mauri Beneti
Manatias (Nantes), garrisoned by the milites superventores
In addition, there are several other sites where a Roman military presence has been suggested. At Alderney, the fort known as "The Nunnery" is known to date to Roman times, and the settlement at Longy Common has been cited as evidence of a Roman military establishment, though the archaeological evidence there is, at best, scant.
In popular culture
In 1888, Alfred Church wrote a historical novel entitled The Count of the Saxon Shore. It is available online.
The American band Saxon Shore takes its name from the region.
The Saxon Shore is the fourth book in Jack Whyte's Camulod Chronicles.
Since 1980, the "Saxon Shore Way" exists, a coastal footpath in Kent which passes by many of the forts.
David Rudkin's play The Saxon Shore takes place near Hadrian's Wall as the Romans are withdrawing from Britain.
References
Notes
Sources
Cottrell, Leonard (1964). The Roman Forts of the Saxon Shore, London: HMSO.
Myers John N.L. (1986) The English Settlements, Oxford University Press
Strugnell, Kenneth Wenham (1973). Seagates to the Saxon Shore, Terence Dalton Ltd.
External links
The Saxon Shore forts on "Roman Britain"
Sites of the Litus Saxonicum forts on Google Maps
History of Pevensey Castle
Fortifications in France
Fortification lines
4th century in Roman Gaul
Roman Britain
Roman fortifications in England
Roman fortifications in France
Military history of the English Channel | Saxon Shore | Engineering | 2,703 |
39,315,122 | https://en.wikipedia.org/wiki/Taxuyunnanine | Taxuyunnanines are a class of taxoids isolated from plants of the genus Taxus.
References
Taxanes | Taxuyunnanine | Chemistry | 25 |
15,523,181 | https://en.wikipedia.org/wiki/Riemannian%20Penrose%20inequality | In mathematical general relativity, the Penrose inequality, first conjectured by Sir Roger Penrose, estimates the mass of a spacetime in terms of the total area of its black holes and is a generalization of the positive mass theorem. The Riemannian Penrose inequality is an important special case. Specifically, if (M, g) is an asymptotically flat Riemannian 3-manifold with nonnegative scalar curvature and ADM mass m, and A is the area of the outermost minimal surface (possibly with multiple connected components), then the Riemannian Penrose inequality asserts
$$m \geq \sqrt{\frac{A}{16\pi}}.$$
This is purely a geometrical fact, and it corresponds to the case of a complete three-dimensional, space-like, totally geodesic submanifold of a (3 + 1)-dimensional spacetime. Such a submanifold is often called a time-symmetric initial data set for a spacetime. The condition of (M, g) having nonnegative scalar curvature is equivalent to the spacetime obeying the dominant energy condition.
This inequality was first proved by Gerhard Huisken and Tom Ilmanen in 1997 in the case where A is the area of the largest component of the outermost minimal surface. Their proof relied on the machinery of weakly defined inverse mean curvature flow, which they developed. In 1999, Hubert Bray gave the first complete proof of the above inequality using a conformal flow of metrics. Both of the papers were published in 2001.
Physical motivation
The original physical argument that led Penrose to conjecture such an inequality invoked the Hawking area theorem and the cosmic censorship hypothesis.
Case of equality
Both the Bray and Huisken–Ilmanen proofs of the Riemannian Penrose inequality state that under the hypotheses, if equality holds, that is, if
$$m = \sqrt{\frac{A}{16\pi}},$$
then the manifold in question is isometric to a slice of the Schwarzschild spacetime outside its outermost minimal surface, which is a sphere of Schwarzschild radius.
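For illustration (a standard computation, not drawn from the proofs cited above), a time-symmetric slice of the Schwarzschild spacetime of mass m saturates the inequality: its outermost minimal surface is the horizon sphere of areal radius 2m, so

$$A = 4\pi (2m)^2 = 16\pi m^2 \quad\Longrightarrow\quad \sqrt{\frac{A}{16\pi}} = m.$$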
Penrose conjecture
More generally, Penrose conjectured that an inequality as above should hold for spacelike submanifolds of spacetimes that are not necessarily time-symmetric. In this case, nonnegative scalar curvature is replaced with the dominant energy condition, and one possibility is to replace the minimal surface condition with an apparent horizon condition. Proving such an inequality remains an open problem in general relativity, called the Penrose conjecture.
In popular culture
In episode 6 of season 8 of the television sitcom The Big Bang Theory, Dr. Sheldon Cooper claims to be in the process of solving the Penrose Conjecture while at the same time composing his Nobel Prize acceptance speech.
References
Riemannian geometry
Geometric inequalities
General relativity
Theorems in geometry | Riemannian Penrose inequality | Physics,Mathematics | 551 |
20,183,055 | https://en.wikipedia.org/wiki/Psilocybe%20stuntzii | Psilocybe stuntzii, also known as Stuntz's blue legs and blue ringers, is a psilocybin mushroom of the family Hymenogastraceae, with psilocybin and psilocin as its main active compounds.
It is in the section Stuntzae, other members of the section include Psilocybe caeruleoannulata, Psilocybe meridionalis, Psilocybe mescaleroensis, Psilocybe ovoideocystidiata, Psilocybe rostrata, Psilocybe subaeruginascens, Psilocybe subaeruginascens var. septentrionalis and Psilocybe uruguayensis.
Etymology and history
The mushroom is named in honor of mycologist Daniel Stuntz of the University of Washington. It was originally identified growing on the University of Washington campus.
Description
The pileus is .5–3.5 cm, obtusely conic to convex, expanding to convex-umbonate or flat with age. The margin is translucent-striate when moist and uplifted in age. It is hygrophanous, glabrous, dark chestnut brown while lighter towards the center. The pileus is olive-greenish at times, fading to a pale yellowish brown or pale yellow. It is viscid when moist from a gelatinous pellicle, staining slightly greenish-blue when injured or with age.
The gills are adnate or sinuate or adnexed, close to sub-distant and moderately broad, yellowish brown at first, soon violet brown or chocolate brown to blackish violet, and uniform or somewhat mottled, with whitish edges. The spore print is dark violaceous brown.
The spores are 8.2–13.5 x 6 – 7.1–7.7 x 5.5–6.6 μm, subrhomboid in face view, subellipsoid in side view, with a hilar appendage visible and a truncate apex with a broad germ pore, thick walled, and dingy yellow brown.
The stipe is 2–7.5 cm x 1.5–6 mm, equal or slightly enlarged at the base, cylindric or subcylindric, twisted striate at times, flexuous, glabrous to slightly fibrillose, dry, stuffed with a pith and becoming hollow, and white or whitish silky to ochraceous or brownish fibrillose. The partial veil is thinly membranous, leaving a fragile annulus that becomes more noticeable as it darkens with spores. It stains blue-green when injured, most noticeably on the ring.
The taste and odor of Psilocybe stuntzii are farinaceous.
Microscopic features: The basidia are 16.5–33 x 5.5–8.8 μm, 4-spored, and hyaline. Pleurocystidia are absent and cheilocystidia are 22–30 x 4.4–6.6 μm, abundant, forming a sterile band, hyaline, lageniform, fusiform-lanceolate or fusoid-ampullaceous, with an elongate and flexuous neck, and are 1–2.2 μm in diameter, sometimes irregularly branched. Clamp connections are present.
Habitat and distribution
Psilocybe stuntzii is found growing scattered to gregarious to cespitose, rarely solitary, in conifer wood chips and bark mulch, in soils rich in woody debris, and in new lawns of freshly laid sod or any newly mulched garden throughout the western region of the Pacific Northwest. It appears from late July through December, and has been observed all year long in the Seattle area; it also reportedly appears in California, rarely as far south as Santa Cruz. There was a time when this mushroom appeared in over 40 percent of all new lawns and mulched areas in the Puget Sound region of the Pacific Northwest. Due to the disappearance of pastures south of Seattle in the Tukwila-Kent-Auburn areas, this mushroom now appears only sporadically in certain new lawns which are well fertilized and manicured.
Edibility
This mushroom is hallucinogenic. Additionally, it closely resembles the highly toxic Galerina marginata, and several poisonings have been attributed to collectors consuming G. marginata after mistaking them for hallucinogenic P. stuntzii.
See also
Psilocybin mushrooms
List of Psilocybin mushrooms
Notes
References
Mycologia 68(6): 1261 (1977)
Guzmán, G. The Genus Psilocybe: A Systematic Revision of the Known Species Including the History, Distribution and Chemistry of the Hallucinogenic Species. Beihefte zur Nova Hedwigia Heft 74. J. Cramer, Vaduz, Germany (1983) [now out of print].
Entheogens
Psychoactive fungi
stuntzii
Psychedelic tryptamine carriers
Fungi of North America
Taxa named by Gastón Guzmán
Fungus species | Psilocybe stuntzii | Biology | 1,064 |
26,843,343 | https://en.wikipedia.org/wiki/Morphy%20number | The Morphy number is a measure of how closely a chess player is connected to Paul Morphy (1837–1884) by way of playing chess games.
Description
People who played a chess game with Morphy have a Morphy number of 1. Players who did not play Morphy but played someone with a Morphy number of 1 have a Morphy number of 2. People who played someone with a Morphy number of 2 have a Morphy number of 3, et cetera.
There are very few known living players with Morphy number 3. Many ordinary players have a Morphy number of 6 or more.
The idea is similar to the Erdős number for mathematicians, the Bacon number for actors, and the Shusaku number, the equivalent for the board game of Go.
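As an illustrative sketch (not part of the encyclopedic text), a player's Morphy number is simply the shortest-path distance from Morphy in the graph whose vertices are players and whose edges join players who have contested a game, so it can be computed with a breadth-first search. The tiny game list below follows the Euwe–Tarrasch–Paulsen–Morphy chain quoted later in this article and is for demonstration only.

from collections import deque

def morphy_numbers(games, source="Paul Morphy"):
    """Return a dict mapping each player reachable from `source` to their
    Morphy number (0 for Morphy himself), computed as the shortest-path
    distance in the undirected "played a game" graph."""
    # Build an adjacency list from the (player_a, player_b) game records.
    adjacency = {}
    for a, b in games:
        adjacency.setdefault(a, set()).add(b)
        adjacency.setdefault(b, set()).add(a)

    distances = {source: 0}
    queue = deque([source])
    while queue:
        player = queue.popleft()
        for opponent in adjacency.get(player, ()):
            if opponent not in distances:  # first visit gives the shortest distance
                distances[opponent] = distances[player] + 1
                queue.append(opponent)
    return distances

# Demonstration data: the chain Morphy-Paulsen-Tarrasch-Euwe-Krabbé described in this article.
games = [
    ("Paul Morphy", "Louis Paulsen"),
    ("Louis Paulsen", "Siegbert Tarrasch"),
    ("Siegbert Tarrasch", "Max Euwe"),
    ("Max Euwe", "Tim Krabbé"),
]
print(morphy_numbers(games))
# {'Paul Morphy': 0, 'Louis Paulsen': 1, 'Siegbert Tarrasch': 2, 'Max Euwe': 3, 'Tim Krabbé': 4}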
Origin
Taylor Kingston states that the idea of the Morphy number may have originated in a June 2000 note by Tim Krabbé, who has Morphy number 4. Krabbé wrote "I once played an official game with Euwe, who played Tarrasch, who played Paulsen, who played Morphy."
Morphy number of famous players
These are players who are important in making links for Morphy numbers.
Morphy number 1
Morphy is known to have played about 100 people, but prior to 2010 all of the known links for players with Morphy number 2 went through just four players. A few years after the early lists of Morphy numbers were tabulated, it was discovered that a fifth player, James Mortimer, was Morphy's friend and played casual games with him. This gives Mortimer a Morphy number of 1, creating a need to drastically revise those previous lists to include many more players. Mortimer had a very long, if not particularly successful, career, including the Ostende-B 1907 tournament, which enabled many famous younger players to gain a Morphy number of 2, including Mieses, Tartakower, Znosko-Borovsky, and Bernstein, who played beyond World War II, enabling still younger players to gain a Morphy number of 3, and so on.
Adolf Anderssen
Henry Bird
James Mortimer
John Owen
Louis Paulsen
Other opponents
Morphy number 2
Everyone in this group played someone in the group above. The Australian champion Frederick Esling achieved MN2 by beating Anderssen in an offhand game, and another Australian champion, Julius Leigh Jacobsen (1862–1916), achieved MN2 by winning a casual match against Bird (+4−2=1), enabling many Australian players of the early 20th century to achieve MN3.
The following are some of the most important players who have achieved MN2.
Semyon Alapin
Ossip Bernstein
Joseph Blackburne
Amos Burn
Mikhail Chigorin
Eugene Ernest Colman
Oldřich Duras
Frederick Esling
Isidor Gunsberg
David Janowski
Emanuel Lasker
S. Lipschütz
George Mackenzie
Frank Marshall
James Mason
Jacques Mieses
Géza Maróczy
Reginald Michell
Aron Nimzowitsch
Harry Pillsbury
Akiba Rubinstein
Carl Schlechter
Edward Guthlac Sergeant
Jackson Showalter
Rudolf Spielmann
Wilhelm Steinitz
Siegbert Tarrasch
Savielly Tartakower
Richard Teichmann
Sir George Thomas
Szymon Winawer
Eugene Znosko-Borovsky
Johannes Zukertort
Morphy number 3
Most of the masters in this group played several members of the previous group. This group includes some of the most important players for making connections to later generations. Botvinnik and Reshevsky played older masters such as Lasker and Janowski, had long careers, and played many younger players. Najdorf was Tartakower's pupil and they played a number of published games together, and Najdorf played blitz right into his 80s, allowing many younger players to achieve MN4. Smyslov and Keres had very long careers, so much younger players achieved MN4 by playing them. Gligorić also played Tartakower, allowing many Yugoslav players to achieve MN4. C.J.S. Purdy played Tartakower, enabling many Australian players to achieve MN4. Fairhurst, who also played Tartakower, was many times champion of Scotland and later moved to New Zealand, so a number of players in those countries achieved MN4 by playing him.
As of August 2024, living players with Morphy number 3 are Leonard Barden, Bernard Cafferty, Owen Hindle, Christian Langeweg, Friðrik Ólafsson, Oliver Penrose, Stewart Reuben and Jim Walsh.
James Macrae Aitken
Alexander Alekhine
Conel Hugh O'Donel Alexander
Leonard Barden
Pal Benko
Arthur Bisguier
Efim Bogolyubov
Fedor Bogatyrchuk
Isaac Boleslavsky
Mikhail Botvinnik
David Bronstein
Bernard Cafferty
José Raúl Capablanca
Martin Christoffel
Arthur Dake
Arnold Denker
Jan Hein Donner
Marcel Duchamp
Erich Eliskases
Max Euwe
William Fairhurst
Reuben Fine
Salo Flohr
Svetozar Gligorić
Owen Hindle
Borislav Ivkov
Paul Keres
George Koltanowski
Alexander Kotov
Čeněk Kottnauer
Franciscus Kuijpers
Christian Langeweg
Bent Larsen
Edward Lasker
Andor Lilienthal
Aleksandar Matanović
Vera Menchik
Stuart Milner-Barry
Vladimir Nabokov
Miguel Najdorf
Friðrik Ólafsson
Frank Parr
Jonathan Penrose
Oliver Penrose
Arturo Pomar
Lodewijk Prins
David Pritchard
C.J.S. Purdy
Samuel Reshevsky
Stewart Reuben
Friedrich Sämisch
Vasily Smyslov
Rudolf Spielmann
Herman Steiner
László Szabó
Wolfgang Unzicker
Milan Vidmar
Robert Wade
Jim Walsh
Norman Whitaker
Baruch Harold Wood
Morphy number 4
As of 2024, many of these players are still alive; a few (such as Anand, Adams, Nakamura, Svidler and Ivanchuk) are still active.
Michael Adams
Viswanathan Anand
Ulf Andersson
Lev Aptekar
Keith Arkell
Yuri Averbakh
Alexander Beliavsky
Harold Bloom
Walter Browne
Donald Byrne
Murray Chandler
Maia Chiburdanidze
Nigel Davies
Mark Dvoretsky
Ben Finegold
Bobby Fischer
Semyon Furman
Nona Gaprindashvili
Paul Garbett
Efim Geller
Florin Gheorghiu
Ewen Green
Vlastimil Hort
Robert Hübner
Vassily Ivanchuk
Gata Kamsky
Anatoly Karpov
Garry Kasparov
Allen Kaufman
Alexander Khalifman
Ratmir Kholmov
Viktor Korchnoi
Gary Lane
Ljubomir Ljubojević
Sergio Mariotti
Tony Miles
Hikaru Nakamura
Predrag Nikolić
John Nunn
Bruce Pandolfini
Tigran Petrosian
Judit Polgár
Susan Polgar
Ruslan Ponomariov
Lajos Portisch
Lev Polugaevsky
Hans Ree
Zoltán Ribli
Ian Rogers
Valery Salov
Ortvin Sarapu
Jonathan Sarfati
Yasser Seirawan
Alexei Shirov
Nigel Short
Vernon Small
Boris Spassky
Peter Svidler
Richard John Sutton
Mark Taimanov
Mikhail Tal
Jan Timman
Veselin Topalov
Anna Ushenina
Rafael Vaganian
John L. Watson
Morphy number 5
When Morphy numbers were first tabulated, many of the top grandmasters were thought to be in this group (along with a large number of lower-rated players). However, several players initially thought to be in this group were actually MN4s, for instance by virtue of having played Smyslov, who played Tartakower and Bernstein.
Magnus Carlsen
Boris Gelfand
Mikhail Gurevich
Igor Ivanov
Rustam Kasimdzhanov
Vladimir Kramnik
Joël Lautier
Péter Lékó
Karsten Müller
Levy Rozman
Jon Speelman
Artur Yusupov
See also
List of chess players
References
External links
Paul Morphy at Chess Games
Chess terminology
Separation numbers | Morphy number | Mathematics | 1,603 |
38,394,034 | https://en.wikipedia.org/wiki/4-DAMP | 4-DAMP (1,1-dimethyl-4-diphenylacetoxypiperidinium iodide) is a selective muscarinic acetylcholine receptor (mAChR) M3 antagonist. It is also able to antagonize M1 receptors but has preferential activity at the M3 receptor. It competitively binds to the acetylcholine binding site on mAChRs, causing a rightward shift in the dose–response curves for mAChR agonists.
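The magnitude of such a rightward shift is classically described by the Gaddum–Schild relation for competitive antagonism, a standard pharmacological identity rather than a measurement specific to 4-DAMP: the agonist concentration ratio $r$ (the factor by which the agonist EC$_{50}$ increases) grows linearly with antagonist concentration,

$$r = \frac{\mathrm{EC}_{50}'}{\mathrm{EC}_{50}} = 1 + \frac{[B]}{K_B},$$

where $[B]$ is the antagonist concentration and $K_B$ is the dissociation constant of the antagonist–receptor complex.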
References
Carboxylate esters
Iodides
M1 receptor antagonists
M2 receptor antagonists
M3 receptor antagonists
M4 receptor antagonists
M5 receptor antagonists
Piperidines
Quaternary ammonium compounds | 4-DAMP | Chemistry | 145 |
11,700,262 | https://en.wikipedia.org/wiki/TRAU | A Transcoder and Rate Adaptation Unit (TRAU) performs the transcoding function for speech channels and rate adaptation (RA) for data channels in the GSM network; it is the data rate conversion unit. The PSTN/ISDN switch is a switch for 64 kbit/s voice. Current technology permits lower bit rates (on the GSM radio interface, 16 kbit/s for full rate and 8 kbit/s for half rate). Since the MSC is basically a PSTN/ISDN switch, its bit rate is still 64 kbit/s, which is why rate conversion is required between the BSC and the MSC.
Transcoding is the compression of speech data from 64 kbit/s to 13, 12.2 or 6.5 kbit/s in the case of FR, EFR or HR speech coding, respectively.
Rate adaptation without transcoding allows Tandem Free Operation (TFO), allowing the original encoded speech data to be carried in a 64 kbit/s channel. TFO offers benefits because transcoding can lead to a degradation of speech quality and requires computational resources.
TRAU was also the term used for the frame format used in transport of the compressed bits from these speech coders.
Brief explanation
For an MS-to-MS call, the transmission path covers the radio access network (RAN) as well as the core network (CN). Since the transmission modes and coding standards are different for the RAN and the CN, speech data is converted/transcoded at the transition points from RAN to CN. This conversion is performed in the TRAU network element, which connects RAN and CN.
Full Rate (FR): 13 kbit/s of speech data plus 9.8 kbit/s of redundancy (channel coding) gives a gross data rate of 22.8 kbit/s on the radio interface; on the Abis interface the FR stream is carried in a 16 kbit/s TRAU subchannel.
Enhanced Full Rate (EFR): 12.2 kbit/s of speech data plus 10.6 kbit/s of redundancy likewise gives a gross data rate of 22.8 kbit/s.
Half Rate (HR): the gross data rate after channel coding is 11.4 kbit/s.
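As a back-of-the-envelope check of the figures above: the gross rates are just the net speech rates plus channel-coding redundancy, and the Abis submultiplexing follows from the 16 kbit/s subchannel size. A minimal sketch (the constants simply restate the GSM rates given above; nothing here is TRAU software):

```python
# Net speech rate and channel-coding redundancy per GSM codec, in kbit/s.
codecs = {
    "FR":  {"net": 13.0, "redundancy": 9.8},
    "EFR": {"net": 12.2, "redundancy": 10.6},
}

for name, c in codecs.items():
    gross = c["net"] + c["redundancy"]
    print(f"{name}: {c['net']} + {c['redundancy']} = {gross:.1f} kbit/s gross")

# A full-rate TRAU frame occupies a 16 kbit/s Abis subchannel, so one
# 64 kbit/s PCM channel carries 64 // 16 = 4 compressed speech channels.
print(64 // 16, "full-rate speech channels per 64 kbit/s channel")
```

The 4:1 submultiplexing in the last line is what makes placing the TRAU near the MSC attractive: four compressed speech channels ride in each 64 kbit/s link between BSC and MSC.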
See also
GSM Full Rate
GSM Half Rate
GSM Enhanced Full Rate
GSM Adaptive Multi-Rate
References
Audio format converters | TRAU | Technology | 455 |
4,253,959 | https://en.wikipedia.org/wiki/Chinese%20Academy%20of%20Engineering | The Chinese Academy of Engineering (CAE, ) is the national academy of the People's Republic of China for engineering. It was established in 1994 and is an institution of the State Council of China. The CAE and the Chinese Academy of Sciences are often referred to together as the "Two Academies". Its current president is Li Xiaohong.
Since the establishment of the CAE, and entrusted by the relevant ministries and commissions, the academy has offered consultancy to the State on major programs, planning, guidelines, and policies. At the instigation of various ministries of the central government as well as local governments, the academy has organized its members to survey the forefront of engineering and to put forward strategic opinions and proposals. These entrusted projects have played an important role in maximizing the participation of the members in the macro decision-making of the State. Meanwhile, the members, drawing on experience and perspectives accumulated over the long term and on international trends in the development of engineering science and technology, have regularly and actively put forward their opinions and suggestions.
List of presidents
Zhu Guangya (1994–1998)
Song Jian (1998–2002)
Xu Kuangdi (2002–2010)
Zhou Ji (2010–2018)
Li Xiaohong (2018–present)
Structure
The CAE is composed of elected members holding the highest honor in the nation's community of engineering and technological sciences. The General Assembly of the CAE is the highest decision-making body of the academy and is held biennially during the first week of June.
Membership
Membership of Chinese Academy of Engineering is the highest academic title in engineering science and technology in China. It is a lifelong honor and must be elected by existing members.
The academy consists of members, senior members and foreign members, who are distinguished and recognized for their respective field of engineering.
As of January 2020, the academy had 920 Chinese members and 93 foreign members. Deng Zhonghan was elected to the Chinese Academy of Engineering in 2009 at the age of 41, making him the youngest academician in the history of the CAE. The members are distributed across the following divisions:
Division of Mechanical and Vehicle Engineering: 130 members
Division of Information and Electronic Engineering: 131 members
Division of Chemical, Metallurgical and Materials Engineering: 115 members
Division of Energy and Mining Engineering: 125 members
Division of Civil and Hydraulic Engineering and Architecture: 110 members
Division of Light Industry and Environmental Engineering: 61 members
Division of Agriculture: 84 members
Division of Medicine and Health: 125 members
Division of Engineering Management: 39 members
Criteria and qualifications
Senior engineers, professors, and other scholars or specialists who hold Chinese citizenship (including those residing in Taiwan, the Hong Kong and Macao Special Administrative Regions, and overseas) and who have made significant and creative achievements and contributions in the fields of engineering and technological sciences are qualified for membership of the academy.
Elections of members
The election of new members (academicians) is conducted biennially. The total number of members to be elected in each election is decided by the governing body of the academy. Examination and election of candidates are carried out in each Academy Division, and the voting is anonymous. The results of the voting are then examined and validated by the governing board.
Publications
Engineering Sciences (journal)
ISSN Print 1009-1724
Engineering (journal)
ISSN Print 2095-8099
ISSN Online 2096-0026
Collaborations
The Chinese Academy of Engineering has collaborated with other major academies (in policy development, engineering research projects, etc.), such as those
from UK and USA:
Royal Academy of Engineering
National Academy of Engineering
See also
Chinese Academy of Sciences
Scientific publishing in China
References
External links
National academies of engineering
Research institutes in China
Science and technology in the People's Republic of China
1994 establishments in China | Chinese Academy of Engineering | Engineering | 764 |
19,344,297 | https://en.wikipedia.org/wiki/Desorption%20atmospheric%20pressure%20photoionization | Desorption atmospheric pressure photoionization (DAPPI) is an ambient ionization technique for mass spectrometry that uses hot solvent vapor for desorption in conjunction with photoionization. Ambient ionization techniques allow for direct analysis of samples without pretreatment. Direct analysis techniques such as DAPPI eliminate the extraction steps required for most nontraditional samples. DAPPI can be used to analyze bulkier samples, such as tablets, powders, resins, plants, and tissues. The first step of this technique utilizes a jet of hot solvent vapor, which thermally desorbs the sample from a surface. The vaporized sample is then ionized by vacuum ultraviolet light and consequently sampled into a mass spectrometer. DAPPI can detect a range of both polar and non-polar compounds, but is most sensitive when analyzing neutral or non-polar compounds. The technique also offers selective and soft ionization for highly conjugated compounds.
History
The history of desorption atmospheric pressure photoionization is relatively recent, but it can be traced through developments in ambient ionization techniques dating back to the 1970s. DAPPI combines established approaches, such as atmospheric pressure photoionization (APPI), with surface desorption techniques. Photoionization techniques were first developed in the late 1970s and began being used in atmospheric pressure experiments in the mid 1980s. Early developments in the desorption of open-surface, matrix-free samples were first reported in the literature in 1999, in an experiment using desorption/ionization on silicon (DIOS). DAPPI followed techniques such as desorption electrospray ionization (DESI) and direct analysis in real time (DART); this generation of techniques all emerged in the 21st century. DESI was introduced in 2004 at Purdue University, while DART was introduced in 2005 by Laramee and Cody. DAPPI was developed soon after, in 2007, at the University of Helsinki, Finland. The development of DAPPI widened the range of detection for nonpolar compounds and added the dimension of thermal desorption to the direct analysis of samples.
Principle of operation
The first operation to occur during desorption atmospheric pressure photoionization is desorption, initiated by a hot jet of solvent vapor that a nebulizer microchip directs onto the sample. The nebulizer microchip is a glass device bonded together from Pyrex wafers, with flow channels leading to a nozzle at the edge of the chip. The microchip is heated to 250–350 °C in order to vaporize the entering solvent and create dopant molecules, which are added to help facilitate ionization of the sample. Common solvents include nitrogen, toluene, acetone, and anisole. Desorption can occur by two mechanisms: thermal desorption or momentum transfer (liquid spray). Thermal desorption uses heat to volatilize the sample and raise the surface temperature of the substrate; the higher the substrate's surface temperature, the higher the sensitivity of the instrument. Studies of substrate temperature found that the solvent had no noticeable effect on the final temperature or heating rate of the substrate. Momentum-transfer (liquid spray) desorption is based on the solvent's interaction with the sample, causing the release of specific ions; it proceeds through collision of the solvent with the sample along with the transfer of ions to the sample. Transfer of positive ions, such as protons and charge transfer, is seen with the solvents toluene and anisole: toluene undergoes a charge-exchange mechanism with the sample, while acetone promotes a proton-transfer mechanism. A beam of 10 eV photons given off by a UV lamp is directed at the newly desorbed molecules, as well as at the dopant molecules. Photoionization then ejects an electron from the molecule, producing an ion. On its own this process is not highly efficient for many kinds of molecules, particularly those that are not easily protonated or deprotonated; to ionize such samples completely, dopant molecules must help. The gaseous solvent can also undergo photoionization and act as an intermediate for ionization of the sample molecules: once dopant ions are formed, proton transfer to the sample can occur, creating more sample ions. The ions are then sent to the mass analyzer for analysis.
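The energetics just described can be summarized in a short decision sketch. This is an illustration only: the 10 eV photon energy is taken from the text above, the toluene ionization energy (≈8.83 eV) and the anthracene ionization energy (≈7.4 eV) are standard literature values, and the function itself is hypothetical rather than part of any instrument's software.

```python
PHOTON_EV = 10.0       # photon energy of the DAPPI UV lamp (from the text above)
TOLUENE_IE_EV = 8.83   # ionization energy of the toluene dopant (literature value)

def likely_pathways(analyte_ie_ev, easily_protonated, dopant="toluene"):
    """Rough decision logic for which DAPPI ion types to expect."""
    pathways = []
    if analyte_ie_ev < PHOTON_EV:
        # A 10 eV photon can eject an electron directly, giving M+.
        pathways.append("direct photoionization -> M+.")
    if dopant == "toluene" and analyte_ie_ev < TOLUENE_IE_EV:
        # Photoionized toluene can hand its charge to a lower-IE analyte.
        pathways.append("charge exchange with dopant -> M+.")
    if dopant == "acetone" and easily_protonated:
        # Protonated acetone promotes proton transfer instead.
        pathways.append("proton transfer from dopant -> [M + H]+")
    return pathways

# Anthracene (IE ~ 7.4 eV), a nonpolar analyte, with a toluene dopant:
print(likely_pathways(7.4, easily_protonated=False))
```

For anthracene the sketch reports both direct photoionization and dopant charge exchange, consistent with the M+. radical cations described in the next section.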
Ionization mechanisms
The main desorption mechanism in DAPPI is thermal desorption due to rapid heating of the surface. Therefore, DAPPI only works well for surfaces of low thermal conductivity. The ionization mechanism depends on the analyte and solvent used. For example, the following analyte (M) ions may be formed: [M + H]+, [M - H]−, M+•, M−•.
Types of component geometries
Reflection geometry
Considered the normal or conventional geometry of DAPPI, this mode is ideal for solid samples that do not need any prior preparation. The microchip is parallel to the MS inlet, and the microchip heater is aimed so that the hot vapor jet strikes the sample surface at an angle. The UV lamp is directly above the sample and releases photons to interact with the newly desorbed molecules. The conventional method generally uses a higher heating power and gas flow rate for the nebulizer gas, while also increasing the amount of dopant used during the technique. These increases can cause higher background noise, analyte interference, substrate impurities, and more ion reactions from excess dopant ions.
Transmission geometry
This mode is specialized for analyzing liquid samples, with a metal or polymer mesh replacing the sample plate of the reflection geometry. The mesh is oriented between the nebulizer microchip and the mass spectrometer inlet, with the lamp directing photons to the area where the mesh releases newly desorbed molecules. The analyte is thermally desorbed as both the dopant vapor and the nebulizer gas are directed through the mesh. Steel mesh with low density and narrow strands has been found to produce better signal intensities, since this type of mesh allows for larger openings in the surface and quicker heating of the strands. Transmission mode uses a lower microchip heating power, which eliminates some of the issues seen with the reflection geometry above, including background noise. This method can also improve the S/N ratio of smaller non-polar compounds.
Instrument coupling
Separation techniques
Thin layer chromatography (TLC) is a simple separation technique that can be coupled with DAPPI-MS to identify lipids. Some of the lipids that were seen to be separated and ionized include: cholesterol, triacylglycerols, 1,2-diol diesters, wax esters, hydrocarbons, and cholesterol esters. TLC is normally coupled with instruments in vacuum or atmospheric pressure, but vacuum pressure gives poor sensitivity for more volatile compounds and has minimal area in the vacuum chambers. DAPPI was used for its ability to ionize neutral and non-polar compounds, and was seen to be a fast and efficient method for lipid detection as it was coupled with both NP-TLC and HPTLC plates.
Laser desorption is normally used in the presence of a matrix, as in matrix-assisted laser desorption ionization (MALDI), but research has combined laser desorption with atmospheric pressure conditions to produce a method that uses neither a matrix nor a discharge. This method helps with smaller compounds and generates both positive and negative ions for detection. A transmission geometry is adopted, with the beam and spray guided at an angle into the coupled MS. Studies have shown the detection of organic compounds such as farnesene, squalene, tetradecahydroanthracene, 5-alpha cholestane, perylene, benzoperylene, coronene, tetradecylprene, dodecyl sulfide, benzodiphenylene sulfide, dibenzosuberone, carbazole, and ellipticine. This method has also been coupled with the mass spectrometry technique FTICR to detect shale oils and some smaller nitrogen-containing aromatics.
Mass spectrometry
Fourier transform ion cyclotron resonance (FTICR) is a technique normally coupled with electrospray ionization (ESI), DESI, or DART, which allows for the detection of polar compounds. DAPPI allows a broader range of polarities, and a range of molecular weights, to be detected. Without separation or sample preparation, DAPPI is able to thermally desorb compounds such as oak biochars. The study did cite an issue with DAPPI: if the sample is not homogeneous, only the surface is ionized, which does not provide an accurate analysis of the substance as a whole. The scanning of the FTICR allows for the detection of complex compounds with high resolution, which in turn allows elemental composition to be analyzed.
Applications
DAPPI can analyze both polar (e.g. verapamil) and nonpolar (e.g. anthracene) compounds. The technique has an upper detection limit of 600 Da. Compared to desorption electrospray ionization (DESI), DAPPI is less likely to be contaminated by biological matrices. DAPPI has also been shown to be more sensitive, with less background noise, than popular techniques such as direct analysis in real time (DART). The performance of DAPPI has also been demonstrated in the direct analysis of illicit drugs. Other applications include lipid detection and drug analysis sampling. Lipids can be detected through a coupling procedure with Orbitrap mass spectrometry. DAPPI has also been coupled with liquid chromatography and gas chromatography mass spectrometry for the analysis of drugs and aerosol compounds. Studies have also used DAPPI to find harmful organic compounds in the environment and in food, such as polycyclic aromatic hydrocarbons (PAHs) and pesticides.
See also
Orbitrap
Atmospheric pressure chemical ionization
Desorption atmospheric pressure chemical ionization
References
Mass spectrometry
Ion source | Desorption atmospheric pressure photoionization | Physics,Chemistry | 2,148 |
3,605,764 | https://en.wikipedia.org/wiki/Device%20Bay | Device Bay was a standard jointly developed by Compaq, Intel and Microsoft in 1997, as a simple way to add, remove, and share hardware devices. Originally intended to be introduced in the second half of 1998, Device Bay was never finalized and has long since been abandoned. The official website disappeared in mid-2001.
Making use of new technologies at the time, such as USB and FireWire, Device Bay was intended to make adding and removing devices from the PC easier, through the use of plug-n-play. It allowed peripherals such as hard drives, CD/DVD-ROM drives, audio devices, and modems, to be added to the PC without having to open the case or even turn the PC off. Devices could also be removed from the PC while it was still turned on; this could also be done through software in the operating system. Another advantage of Device Bay was that it allowed certain devices to be swapped between a desktop and laptop computer.
HP released a line of PCs that used the idea of Device Bay to expand personal storage on a personal computer, marketed as "HP Personal Media Drives". These drives/bays were primarily available on HP Media Center PCs.
External links
Device Bay Technology To Enable Easy-To-Configure, More Affordable PCs / Intel Press-release, March 31, 1997
"Device Bay" on Webopedia
Device bay whitepaper
Device bay official site www.device-bay.org (on archive.org)
Computer peripherals | Device Bay | Technology | 306 |
46,177,880 | https://en.wikipedia.org/wiki/Oxyselenide | Oxyselenides are a group of chemical compounds that contain oxygen and selenium atoms. Oxyselenides can form a wide range of structures in compounds containing various transition metals, and thus can exhibit a wide range of properties. Most importantly, oxyselenides have a wide range of thermal conductivity, which can be controlled with changes in temperature in order to adjust their thermoelectric performance. Current research on oxyselenides indicates their potential for significant application in electronic materials.
Synthesis
The first oxyselenide to be crystallized was manganese oxyselenide in 1900. In 1910, oxyselenides containing phosphate were created by treating P2Se5 with metal hydroxides. Uranium oxyselenide was formed next by treating H2Se with uranium dioxides at 1000 °C. This technique was also utilized in synthesizing oxyselenides of rare-earth elements in the mid-1900s. Synthesis of oxyselenide compounds currently involves treating oxides with aluminum powder and selenium at high temperatures.
Recent discoveries in iron oxyarsenides and their superconductivity have highlighted the importance of mixed anion systems. Mixed copper oxychalcogenides came about when the electronic properties of both chalcogenides and oxides were taken into account. Chemists began pursuing the synthesis of a compound with metallic and charge density wave properties as well as high temperature superconductivity. Upon synthesizing the copper oxyselenide Na1.9Cu2Se2·Cu2O by reacting Na2Se3.6 with Cu2O, they concluded that a new type of oxychalcogenides could be synthesized by reacting metal oxides with polychalcogenide fluxes.
Derivatives
New oxyselenides of the formula Sr2AO2M2Se2 (A=Co, Mn; M=Cu, Ag) have been synthesized. They crystallize into structures consisting of alternating perovskite-like (metal oxide) and antifluorite (metal selenide) layers. The optical band gap of each oxyselenide is very narrow, indicating semiconductivity.
Another derivative that reveals oxyselenide properties is β-La2O2MSe2 (M = Fe, Mn). This compound possesses an orthorhombic structure, opening up the possibility of different packing arrangements of oxyselenides. These materials are ferromagnetic at low temperatures (~27 K) and show high resistivity at room temperature. The Mn analogue, diluted in NaCl solution, suggests an optical band gap of 1.6 eV at room temperature, making it an insulator. Meanwhile, the band gap for the Fe analogue is approximately 0.7 eV between 150 K and 300 K, making it a semiconductor. In contrast, the cobalt oxyselenide La2Co2O3Se2 is antiferromagnetically ordered, suggesting that although the different transition metals are responsible for the changes in an oxyselenide's magnetic properties, the compound's overall lattice structure may also influence its conductivity.
The magnetic and conducting properties of different metal compounds coordinated with oxyselenide are affected not only by the transition metal used, but also by the synthesis conditions. For example, the percentage of aluminium used as an oxygen getter during the synthesis of Ce2O2ZnSe2 affected the band gap, as indicated by the varying product colours. Various structures allow for many potential configurations. For example, as observed before in La2Co2O3Se2, Sr2F2Mn2Se2O exhibits a frustrated magnetic correlation in the structure, resulting in an antiferromagnetic lattice.
In 2010, p-type polycrystalline BiCuSeO oxyselenides were reported as possible thermoelectric materials.
The weak bonds between the [Cu2Se2]−2 conducting and [Bi2O2]+2 insulating layers, as well as the anharmonic crystal lattice structure, may account for the substance's low thermal conductivity and high thermoelectric performance. Recently, BiCuSeO's ZT value, a dimensionless figure of merit indicating thermoelectric performance, has been increased from 0.5 to 1.4. Experiment has shown that Ca doping can improve electrical conductivity, thereby increasing the ZT value. Additionally, replacing 15% of the Bi3+ ions with the group 2 metal ions Ca2+, Sr2+, or Ba2+ also optimizes the charge carrier concentration.
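For reference, the ZT value quoted above is the standard dimensionless thermoelectric figure of merit, a general definition rather than anything specific to BiCuSeO:

$$ZT = \frac{S^{2}\sigma}{\kappa}\,T,$$

where $S$ is the Seebeck coefficient, $\sigma$ the electrical conductivity, $\kappa$ the thermal conductivity, and $T$ the absolute temperature. Raising electrical conductivity or lowering thermal conductivity therefore raises ZT, which is why the intrinsically low thermal conductivity of BiCuSeO, combined with doping that improves its electrical conductivity, makes it attractive.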
References
Mixed anion compounds
Materials science
Oxygen compounds
Selenium(−II) compounds | Oxyselenide | Physics,Chemistry,Materials_science,Engineering | 978 |
21,502,793 | https://en.wikipedia.org/wiki/Urban%20planner | An urban planner (also known as town planner) is a professional who practices in the field of town planning, urban planning or city planning.
An urban planner may focus on a specific area of practice and have a title such as city planner, town planner, regional planner, long-range planner, transportation planner, infrastructure planner, environmental planner, parks planner, physical planner, health planner, planning analyst, urban designer, community development director, economic development specialist, or other similar combinations. The Royal Town Planning Institute, founded in 1914, is the oldest professional body of town and urban planners; the University of Liverpool established the first dedicated planning school in the world in 1909, followed by Harvard University in 1924. There is also evidence of urban planners in ancient cities in Egypt, China, India, and the Mediterranean world. For instance, Hippodamus has often been credited as "the father of city planning", a title given to him in Book 2 of Aristotle's Politics.
Education and training
Urban planning as a profession is a relatively young discipline; it is an interdisciplinary field closely related to civil engineering. Few government agencies restrict or license the profession. As a result, a number of other related disciplines actively claim the training, expertise and professional scope to practise urban planning. While organizations such as the American Planning Association, the Canadian Institute of Planners and the Royal Town Planning Institute certify professional planners, others in related fields such as landscape architecture also claim professional autonomy in urban planning. Efforts internationally have attempted to define the role of urban planners through licensure acts. The US state of New Jersey and the Canadian province of Nova Scotia license professional planners. All Canadian provinces and territories except Newfoundland and Labrador and Quebec restrict the use of the term 'Registered Professional Planner' to licensed urban planners. In Quebec, urban planners must be licensed by the Ordre des urbanistes du Québec.
Urban planners by country
Canada
Urban planners in Canada usually hold a bachelor's degree in planning or a master's degree, typically accredited as an M.Pl. (Master of Planning), MUP (Master of Urban Planning), MCP (Master of City Planning), MSc.Pl. (Master of Science in Planning), MURP (Master of Urban and Regional Planning), MES (Master of Environmental Studies) or simply an MA (Master of Arts).
Professional certification is offered by the Canadian Institute of Planners and its provincial and territorial affiliates. The Institute accredits planning education programs, and sets standards for entry into the profession. Each provincial or territorial body is responsible for licensing and regulating members within its borders. Provincial and territorial affiliates may allow certified members to use the title of Registered Professional Planner (RPP).
Greece
Urban planners in Greece typically graduate from Engineering faculties. Aristotle University of Thessaloniki and University of Thessaly are the two universities that provide undergraduate studies in urban planning in Greece.
Hong Kong
The Hong Kong Institute of Planners is the statutory corporation in Hong Kong regulating professional town planners' accreditation and development. Full members of the institute are eligible to register as a Registered Professional Planner through Planners Registration Board in Hong Kong.
India
Though planning is not a recognized profession under Indian law, planning education began in 1941 at the Delhi College of Engineering (now the Delhi Technological University). It was later integrated with the School of Town and Country Planning, which was established in 1955 by the Government of India to provide facilities for rural, urban and regional planning. On integration, the school was renamed the School of Planning and Architecture in 1959. Today, it is one of the premier schools for pursuing planning studies at bachelor's, master's and post-doctoral levels.
The Institute of Town Planners, India (ITPI), set up on the lines of the Royal Town Planning Institute in London, is the body representing planning professionals in India. A small group formed itself into an Indian Board of Town Planners, which after three years of continuous work formed the ITPI. The institute, established in July 1951, today has a membership of over 2,800, apart from a sizable number of student members, many of whom have passed the Associateship Examination (AITP) conducted by the ITPI. As of 2012, the institute has 21 regional chapters across India.
The School of Planning and Architecture (SPA), Delhi is one of the premier institutes disseminating knowledge of architecture and planning in India. It was established in 1941. In 1979, the Government of India, through the then Ministry of Education and Culture, conferred on the School of Planning and Architecture the status of "Deemed to be a University" (http://spa.ac.in/Home.aspx?ReturnUrl=%2f).
The School of Planning and Architecture, Bhopal (M.P.) and the School of Planning and Architecture, Vijayawada were established in 2008 by the Ministry of Human Resource Development, Government of India.
The Centre for Environment Planning and Technology (CEPT) University in Ahmedabad, the Malaviya National Institute of Technology (NIT) Jaipur, and the Maulana Azad National Institute of Technology (NIT) Bhopal, along with NIT Patna and state universities such as IGDTUW, Delhi, are among the pioneering institutes in India where urban planning is taught. Postgraduate degrees such as the Master of Engineering and Master of Technology are also available in India.
Israel
The Israel Planners Association was founded in 1965. Urban planning is taught by two institutions. The first is the Technion Faculty of Architecture and Urban Planning in Haifa, with its Center for Urban and Regional Studies.
The second institution is the Hebrew University of Jerusalem, which offers a graduate degree (M.A.) in Geography and Urban and Regional Planning.
Italy
The role of urban planners has a long-standing tradition in Italy. The exclusive skills of this profession were originally attributed to engineers and architects, as established by law and related implementing regulation. Later, in 1992, agronomists and foresters were also granted prerogatives related to territorial and urban planning. Since 2001, following the implementation of the reform of university regulations and professional orders (DPR 328/2001), the skills of territorial planners and urban planners have been the exclusive competence of territorial planners enrolled in the Order of Architects, Planners, and Landscapers - Section A. Additionally, agronomists and foresters enrolled in the Order of Agronomists and Foresters - Section A are also eligible for this role. This is without prejudice to the skills previously acquired by civil engineers and architects enrolled in the Order prior to the DPR 328/2001 reform.
Malaysia
In Malaysia, urban planners typically hold a bachelor's degree or master's degree in planning. Several universities in Malaysia, such as Universiti Teknologi Malaysia (UTM), Universiti Malaya (UM), Universiti Teknologi MARA (UiTM), International Islamic University Malaysia (UIAM) and Universiti Sains Malaysia (USM) offer programs in urban and regional planning at both the undergraduate and postgraduate levels. In addition, there are also colleges that offer diploma and certificate courses in urban planning. These educational institutions provide students with the knowledge and skills necessary to pursue careers in the field of urban planning.
Professional certification is available through the Malaysian Institute of Planners (MIP) and its state-level affiliates. The Institute accredits planning education programs and establishes standards for entry into the profession. Each state-level body is responsible for licensing and regulating members within its jurisdiction. State-level affiliates may allow certified members to use the title of Registered Town Planner (RTP).
Mexico
Urban planners in Mexico typically graduate from an Architecture background provided by major universities in the country. Most of such degrees can be awarded at Masters' graduate studies, although there are also bachelor's degrees available.
New Zealand
A planner brings professional expertise and knowledge to the development and implementation of policy in the interests of productive, liveable and sustainable environments. Planners support communities and provide leadership in making informed choices about the consequences of human actions and in bridging the gap between the present and the future. Planners must consider and balance a range of strategic, policy, technical, legal, administrative, community and environmental factors in their contributions to informed decision-making.
Planners are employed in diverse public and private roles. They use their knowledge and experience in various institutional and community settings to provide leadership, undertake research, solve problems, evaluate alternatives and outcomes, manage change, and envision, advise on and enact desirable future directions.
In applying their expertise, planners must be aware of and responsive to cultural, social, economic, environmental, ethical and political values. In New Zealand, these include the bicultural mandate for planning, including the partnership relationships established by the Treaty of Waitangi/te Tiriti o Waitangi, and New Zealand's increasingly multicultural society.
A key attribute of a planner is the ability to work across disciplinary and institutional boundaries and to integrate knowledge from a range of disciplines within the distinctive framework of the discipline of planning.
A professional planner is someone who has gained a professional qualification through tertiary study, continues to learn post qualification, undertakes continuing professional development, is a member or is working towards becoming a member of the New Zealand Planning Institute (NZPI), contributes to the planning profession, and is committed to upholding the principles and ethical practices of the planning profession.
Nigeria
In Nigeria, the Nigerian Institute of Town Planners (NITP) and the Town Planners Registration Council (TOPREC) are the leading bodies tasked with the responsibility of improving the training, education and professional practice of planning in Nigeria.
To become a town planner in Nigeria, one must first complete a degree in urban and regional planning or a relevant discipline, followed by a master's degree in urban and regional planning accredited by the Town Planners Registration Council (TOPREC), or a four-year degree encapsulating all aspects. Graduates are then eligible for membership of the Nigerian Institute of Town Planners (NITP), but must first complete two years of work-based training to become full members, and subsequently register and sit the TOPREC professional examination to become registered town planners.
Palestine
Planners in Palestine took on responsibility after the Palestinian Authority assumed governance in the West Bank and Gaza. Planners were initially trained by the Norwegian consultancy Asplan Viak as part of an institutional capacity-building project funded by the Norwegian government. Both Birzeit and An-Najah universities run bachelor's and master's degrees in planning, and planners can specialize in different fields.
South Africa
The South African Council for Planners (SACPLAN) is the statutory council of nominated members appointed by the Minister of Rural Development and Land Reform (Department of Rural Development and Land Reform) in terms of the Planning Profession Act, 2002 (Act 36 of 2002) to regulate the planning profession under the Act. The planning profession's principles apply to all registered planners. Through the Act, SACPLAN assures quality in the planning profession by identifying planning work that only registered planners may undertake. The functions of SACPLAN are contained in Section 7 of the Act, and its powers and duties in Section 8. The Act further prescribes a professional code of conduct for registered planners.
United Kingdom
Those wishing to be a town planner in the United Kingdom must first complete a degree in a relevant discipline and then a final year in the form of a master's in town and country planning accredited by the Royal Town Planning Institute (RTPI), or a four-year degree encapsulating all aspects. They can then become eligible for membership of the RTPI, but must first complete two years of work-based training to become full, chartered members.
Town planners in the UK are responsible for all aspects of the built environment; wherever one is within the UK, a town planner will at some time have planned the built aspects of the environment. Local planning authorities grant planning permission to individuals, private builders and corporations, and employed officers of these authorities (usually a specific council for an area) involved in the decision-making process are referred to as planning officers (though those employed with a specialisation may have a different role title, such as conservation officer or landscape officer).
United States
Planners in the U.S. typically complete an undergraduate or graduate degree from a university offering the program of study. Professional certification is only offered through the American Institute of Certified Planners (AICP), a branch of the American Planning Association. To gain AICP certification, a planner must meet specific educational and experience requirements, as well as pass an exam covering the nature and practice of the discipline. Although AICP certification is not required to be a practicing planner, it does serve as a means in which a planner can verify his or her professional expertise. Although most states do not require any sort of certification to be appointed as a planner, the state of New Jersey requires all professional planners to undergo certification from the State Board of Professional Planners. This requirement does not apply to other planning-adjacent professions, such as land surveyors, engineers, or registered architects.
Urban planners in the United States are responsible for devising a development plan for cities. The American Planning Association (APA) oversees planners across the USA, although regional and national planning is not a well-defined field as planners’ responsibilities vary from state to state. In Hawaii, planners manage state land use while in Vermont, regional planners exist only to monitor local planners.
In 1954, the District of Columbia Redevelopment Land Agency (DCRLA) won the case Berman v. Parker, which set a nationwide precedent for using eminent domain as a means of revitalizing urban cores. However, planning commissions still need to contend with the interests of private corporations and wealthy residents, who expect compensation from governments. This is in contrast to the planning culture in the United Kingdom, where such legal battles are rare.
Urban planning in media
In the Seinfeld episode "The Van Buren Boys," George offers a scholarship to an average student who is initially interested in becoming an architect, but later decides they are more interested in urban planning.
See also
List of urban planners
List of urban theorists
Municipal engineering
Urban planning education
Royal Town Planning Institute
Footnotes
Further reading
Alexander, D. & Calliou, S. (1991). "Planner as educator: A vision of a new practitioner". Plan Canada, 31(6), 38–45. Retrieved from VIUSpace.
External links
Planning | Urban planner | Engineering | 2,963 |
32,717,007 | https://en.wikipedia.org/wiki/Flora%20Huayaquilensis | Flora Huayaquilensis is the popular name for the body of work produced by botanist Juan José Tafalla Navascués while he was in South America.
Navascués made one of the first expeditions to South America with a Spaniard who documented plants of the area. His unpublished works were kept in the archives for 200 years.
In 1985, Eduardo Estrella was researching in the archives of the Royal Botanical Gardens in Madrid, Spain, when he found the documents of the "Fourth Division," for the expedition of Ruiz and Pavon in Peru and Chile. Estrella found descriptions of plants whose origins correspond to the places belonging to the Royal Audience of Quito.
The folios were numbered and contained the mysterious initials FH. Other folios that did not correspond to the flora of the Royal Court had the initials FP. The work was eventually published, credited to Navascués' expedition. The Botanical Expedition to the Viceroyalty of Peru was very similar to Navascués' expedition. Estrella founded the Ecuador National Museum of Medicine.
Not all early explorers of Ecuador had their documents survive. Theodor Wolf (February 13, 1841 - June 22, 1924) was a German naturalist who studied the Galápagos Islands during the late nineteenth century. Wolf Island (Wenman Island) is named after him. Wolf had performed a geologic survey of mainland Ecuador, but his collections were lost in storage.
Flora Huayaquilensis
Flora Huayaquilensis Juan Tafalla. (Research, Annotations and Historical Study Edition). Madrid: Institute for the Conservation of Nature (ICONA)-Royal Botanical Gardens, 1989. 2 volumes. Second Edition Guayaquil: Guayaquil Botanical Garden-Banco del Progreso, 1995.
Flora Huayaquilensis Matriti : Editio facta ab Instituto ad Conservandam Naturam (ICONA, M.A.P.A.) ; et ab Horto Regio Matritense (C.S.I.C.) 1989
References
External links
Flora Huayaquilensis sive descriptiones et icones plantarum Huayaquilensium secundum systema Linnaeanum digestae / Auctore Johanne Tafalla . 1989 vol. I: Icona, Real jardín botánico. Estudio introductorio Eduardo Estrella.
Flora Huayaquilensis sive descriptiones et icones plantarum Huayaquilensium secundum systema Linnaeanum digestae / Auctore Johanne Tafalla . 1991 vol. II: Icona, Real jardín botánico.
Florae (publication)
Herbalism
Botany in South America | Flora Huayaquilensis | Biology | 549 |
25,446,917 | https://en.wikipedia.org/wiki/Enumerations%20of%20specific%20permutation%20classes | In the study of permutation patterns, there has been considerable interest in enumerating specific permutation classes, especially those with relatively few basis elements. This area of study has turned up unexpected instances of Wilf equivalence, where two seemingly-unrelated permutation classes have the same numbers of permutations of each length.
Classes avoiding one pattern of length 3
There are two symmetry classes and a single Wilf class for single permutations of length three.
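The single Wilf class here is counted by the Catalan numbers 1, 2, 5, 14, 42, …, which is easy to verify by brute force for small n. A minimal Python sketch follows (it enumerates all n! permutations, so it is usable only for small lengths; the containment test checks order isomorphism directly):

```python
from itertools import combinations, permutations

def contains(perm, pattern):
    """Return True if perm contains a subsequence order-isomorphic to pattern."""
    k = len(pattern)
    for idx in combinations(range(len(perm)), k):
        sub = [perm[i] for i in idx]
        # The subsequence matches if every pair is ordered the same way
        # in the subsequence as in the pattern.
        if all((sub[a] < sub[b]) == (pattern[a] < pattern[b])
               for a in range(k) for b in range(a + 1, k)):
            return True
    return False

def count_avoiders(n, pattern):
    """Count permutations of 1..n containing no occurrence of pattern."""
    return sum(1 for p in permutations(range(1, n + 1))
               if not contains(p, pattern))

print([count_avoiders(n, (1, 3, 2)) for n in range(1, 6)])
# [1, 2, 5, 14, 42] -- the Catalan numbers
```

Replacing (1, 3, 2) with any other length-3 pattern produces the same counts, which is exactly the Wilf equivalence the section describes.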
Classes avoiding one pattern of length 4
There are seven symmetry classes and three Wilf classes for single permutations of length four.
No non-recursive formula counting 1324-avoiding permutations is known, although a recursive formula has been given.
A more efficient algorithm using functional equations was later given and has since been enhanced twice, yielding the first 50 terms of the enumeration.
Lower and upper bounds for the growth of this class have also been provided.
Classes avoiding two patterns of length 3
There are five symmetry classes and three Wilf classes, all of which have been enumerated.
Classes avoiding one pattern of length 3 and one of length 4
There are eighteen symmetry classes and nine Wilf classes, all of which have been enumerated.
Classes avoiding two patterns of length 4
There are 56 symmetry classes and 38 Wilf equivalence classes. Only 3 of these remain unenumerated, and their generating functions are conjectured not to satisfy any algebraic differential equation (ADE); in particular, this conjecture would imply that these generating functions are not D-finite.
Heatmaps of each of the non-finite classes are shown on the right. The lexicographically minimal symmetry is used for each class, and the classes are ordered lexicographically. To create each heatmap, one million permutations of length 300 were sampled uniformly at random from the class. The color of a point represents how many of the sampled permutations take the corresponding value at the corresponding index. Higher-resolution versions can be obtained at PermPal.
See also
Baxter permutation
Riffle shuffle permutation
References
External links
The Database of Permutation Pattern Avoidance, maintained by Bridget Tenner, contains details of the enumeration of many other permutation classes with relatively few basis elements.
Enumerative combinatorics
Permutation patterns | Enumerations of specific permutation classes | Mathematics | 514 |
1,951,419 | https://en.wikipedia.org/wiki/Critical%20point%20%28thermodynamics%29 | In thermodynamics, a critical point (or critical state) is the end point of a phase equilibrium curve. One example is the liquid–vapor critical point, the end point of the pressure–temperature curve that designates conditions under which a liquid and its vapor can coexist. At higher temperatures, the gas passes into a supercritical phase and so cannot be liquefied by pressure alone. At the critical point, defined by a critical temperature Tc and a critical pressure pc, phase boundaries vanish. Other examples include the liquid–liquid critical points in mixtures, and the ferromagnet–paramagnet transition (Curie temperature) in the absence of an external magnetic field.
Liquid–vapor critical point
Overview
For simplicity and clarity, the generic notion of critical point is best introduced by discussing a specific example, the vapor–liquid critical point. This was the first critical point to be discovered, and it is still the best known and most studied one.
The figure shows the schematic P-T diagram of a pure substance (as opposed to mixtures, which have additional state variables and richer phase diagrams, discussed below). The commonly known phases solid, liquid and vapor are separated by phase boundaries, i.e. pressure–temperature combinations where two phases can coexist. At the triple point, all three phases can coexist. However, the liquid–vapor boundary terminates in an endpoint at some critical temperature Tc and critical pressure pc. This is the critical point.
The critical point of water occurs at a temperature of 647.096 K (373.946 °C) and a pressure of 22.064 MPa (217.75 atm).
In the vicinity of the critical point, the physical properties of the liquid and the vapor change dramatically, with both phases becoming even more similar. For instance, liquid water under normal conditions is nearly incompressible, has a low thermal expansion coefficient, has a high dielectric constant, and is an excellent solvent for electrolytes. Near the critical point, all these properties change into the exact opposite: water becomes compressible, expandable, a poor dielectric, a bad solvent for electrolytes, and mixes more readily with nonpolar gases and organic molecules.
At the critical point, only one phase exists. The heat of vaporization is zero. There is a stationary inflection point in the constant-temperature line (critical isotherm) on a PV diagram. This means that at the critical point:

$$\left(\frac{\partial p}{\partial V}\right)_T = 0, \qquad \left(\frac{\partial^2 p}{\partial V^2}\right)_T = 0.$$
Above the critical point there exists a state of matter that is continuously connected with (can be transformed without phase transition into) both the liquid and the gaseous state. It is called supercritical fluid. The common textbook knowledge that all distinction between liquid and vapor disappears beyond the critical point has been challenged by Fisher and Widom, who identified a p–T line that separates states with different asymptotic statistical properties (Fisher–Widom line).
Sometimes the critical point does not manifest in most thermodynamic or mechanical properties, but is "hidden" and reveals itself in the onset of inhomogeneities in elastic moduli, marked changes in the appearance and local properties of non-affine droplets, and a sudden enhancement in defect pair concentration.
History
The existence of a critical point was first discovered by Charles Cagniard de la Tour in 1822 and named by Dmitri Mendeleev in 1860 and Thomas Andrews in 1869. Cagniard showed that CO2 could be liquefied at 31 °C at a pressure of 73 atm, but not at a slightly higher temperature, even under pressures as high as 3000 atm.
Theory
Solving the above conditions for the van der Waals equation of state, one can compute the critical point as

$$T_c = \frac{8a}{27Rb}, \qquad p_c = \frac{a}{27b^2}, \qquad V_c = 3nb,$$

where a and b are the van der Waals constants and n is the amount of substance.
However, the van der Waals equation, based on a mean-field theory, does not hold near the critical point. In particular, it predicts wrong scaling laws.
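As a numerical illustration of these formulas, here is a minimal Python sketch using common textbook van der Waals constants for carbon dioxide (a ≈ 3.640 L²·atm·mol⁻², b ≈ 0.04267 L·mol⁻¹; these tabulated values are assumptions taken from standard tables, not derived here):

```python
R = 0.082057  # gas constant, L·atm/(mol·K)

# Textbook van der Waals constants for carbon dioxide.
a = 3.640     # L^2·atm/mol^2
b = 0.04267   # L/mol

T_c = 8 * a / (27 * R * b)   # critical temperature (per mole, n = 1)
p_c = a / (27 * b ** 2)      # critical pressure
V_c = 3 * b                  # critical molar volume

print(f"T_c ≈ {T_c:.0f} K, p_c ≈ {p_c:.0f} atm, V_c ≈ {V_c:.3f} L/mol")
# T_c ≈ 308 K, p_c ≈ 74 atm
```

The results sit close to the CO2 values quoted in the History section (31 °C, i.e. about 304 K, and 73 atm), about as good an agreement as one can expect from a mean-field estimate.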
To analyse properties of fluids near the critical point, reduced state variables are sometimes defined relative to the critical properties:

$$T_r = \frac{T}{T_c}, \qquad p_r = \frac{p}{p_c}, \qquad V_r = \frac{V}{V_c}.$$
The principle of corresponding states indicates that substances at equal reduced pressures and temperatures have equal reduced volumes. This relationship is approximately true for many substances, but becomes increasingly inaccurate for large values of pr.
For some gases, there is an additional correction factor, called Newton's correction, added to the critical temperature and critical pressure calculated in this manner. These are empirically derived values and vary with the pressure range of interest.
Table of liquid–vapor critical temperature and pressure for selected substances
Mixtures: liquid–liquid critical point
The liquid–liquid critical point of a solution, which occurs at the critical solution temperature, occurs at the limit of the two-phase region of the phase diagram. In other words, it is the point at which an infinitesimal change in some thermodynamic variable (such as temperature or pressure) leads to separation of the mixture into two distinct liquid phases, as shown in the polymer–solvent phase diagram to the right. Two types of liquid–liquid critical points are the upper critical solution temperature (UCST), which is the hottest point at which cooling induces phase separation, and the lower critical solution temperature (LCST), which is the coldest point at which heating induces phase separation.
Mathematical definition
From a theoretical standpoint, the liquid–liquid critical point represents the temperature–concentration extremum of the spinodal curve (as can be seen in the figure to the right). Thus, the liquid–liquid critical point in a two-component system must satisfy two conditions: the condition of the spinodal curve (the second derivative of the free energy with respect to concentration must equal zero), and the extremum condition (the third derivative of the free energy with respect to concentration must also equal zero or the derivative of the spinodal temperature with respect to concentration must equal zero).
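In symbols, writing G for the Gibbs free energy of mixing and x for the concentration of one component, the two conditions stated above read (this is a direct restatement of the text in equation form):

$$\left(\frac{\partial^{2} G}{\partial x^{2}}\right)_{T,p} = 0, \qquad \left(\frac{\partial^{3} G}{\partial x^{3}}\right)_{T,p} = 0.$$

The first condition places the point on the spinodal curve; the second selects the extremum of that curve, which is the critical point itself.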
See also
Conformal field theory
Critical exponent
Critical phenomena (more advanced article)
Critical points of the elements (data page)
Curie point
Joback method, Klincewicz method, Lydersen method (estimation of critical temperature, pressure, and volume from molecular structure)
Liquid–liquid critical point
Lower critical solution temperature
Néel point
Percolation thresholds
Phase transition
Rushbrooke inequality
Scale invariance
Self-organized criticality
Supercritical fluid, Supercritical drying, Supercritical water oxidation, Supercritical fluid extraction
Tricritical point
Triple point
Upper critical solution temperature
Widom scaling
References
Further reading
Conformal field theory
Critical phenomena
Phase transitions
Renormalization group
Threshold temperatures
Gases | Critical point (thermodynamics) | Physics,Chemistry,Materials_science,Mathematics | 1,311 |
12,110,590 | https://en.wikipedia.org/wiki/HDR%2C%20Inc. | HDR, Inc. is an American design and engineering company based in Omaha, Nebraska.
History
In 1917, the Henningson Engineering Company started as a civil engineering firm in Omaha, where HDR's headquarters remain today. Willard Richardson and Charles W. "Chuck" Durham joined the firm in 1939 as interns. Circa 1950, Richardson and Durham had purchased shares in the firm, and it became known as Henningson, Durham and Richardson, Inc.
The company's first project was designing a power station for the city of Ogallala, Nebraska. Similar projects followed as the firm built water, sewer, electric, and road systems for cities and towns throughout the Midwestern United States, emerging from frontier status.
In 1983, Bouygues SA, France's largest construction company, purchased HDR for $60 million. An employee group bought back HDR in 1996 for $55 million. The company has since grown from 1,100 employees to over 12,000.
Acquisitions
Since the employee buyout in 1996 from the French conglomerate Bouygues, HDR has acquired over 60 firms around the world. In February 2011, HDR acquired Cooper Medical, an Oklahoma City, Oklahoma-based firm providing integrated design and construction services for healthcare facilities throughout the U.S. The new alliance, HDR Cooper Medical, provides a service design and construction delivery model to healthcare clients. In the same month, HDR acquired Schiff Associates, a recognized leader in corrosion engineering headquartered in Claremont, California, with offices in Houston, Las Vegas, and San Diego.
In January 2011, HDR acquired HydroQual, Inc., which specializes in water resource management. Based in Mahwah, N.J., HydroQual had nine offices in New Jersey, New York, Massachusetts, Florida, Utah and Dubai. HydroQual is now conducting business as HDR. Also in January 2011, HDR acquired Amnis Engineering Ltd., based in Vancouver, British Columbia. The firm provides engineering and consulting services in British Columbia and a number of international locations for hydropower and water resources infrastructure.
In March 2013, HDR acquired TMK Architekten • Ingenieure, a German healthcare architecture firm. The merged company was the hub for HDR's healthcare and science design programs in Europe. In 2023, HDR sold its subsidiary HDR GmbH to a small group of employees in the country. The new operating company is Telluride Architektur GmbH.
In April 2013, HDR acquired Salva Resources, a global provider of technical and commercial services for mining exploration and investment in Brisbane, Australia.
In July 2013, HDR acquired the business and assets of Sharon Greene + Associates, a firm specializing in transportation economics and financial analysis.
In November 2013, HDR acquired Rice Daubney Architects, a firm in Sydney, Australia. The merged company is the hub for HDR's healthcare, defence, retail, and commercial work in Australia and HDR's retail and commercial work throughout the globe.
In January 2015, HDR acquired the assets of MEI, LLC, a liquid natural gas engineering and consulting firm based in Pooler, Georgia.
In July 2015, HDR acquired CEI Architecture of Vancouver, British Columbia, an architectural, planning and interior design consultancy.
In September 2017, HDR acquired its long-time partner Maintenance Design Group, a firm specializing in the planning and design of vehicle and fleet operations and maintenance facilities. HDR sought to add MDG's strengths in facility planning and design to complement its asset life-cycle approach to infrastructure development.
In 2018, HDR expanded its water resources services by acquiring the assets of David Ford Consulting Engineers, a firm based in Sacramento, California. The firm specializes in water hydraulics, flood risk analysis, reservoir systems and operations, water resource planning and hydro-economics.
Hurley Palmer Flatt
In July 2019, HDR expanded its footprint in Europe and Asia by acquiring the British firm Hurley Palmer Flatt, as well as its subsidiaries; Hurley Palmer Flatt rebranded to HDR in early 2022.
Hurley Palmer Flatt was a multi-disciplinary engineering consultancy based in London. It provided mechanical and electrical engineering consultancy and associated services. It was established in 1968 in the UK by John Hurley as a building services consultancy. It expanded into a global company operating in Dubai, India, Australia, Singapore and the US, engaging in both public and private sector development across various fields.
In 2009 Hurley Palmer Flatt acquired ATCO Consulting, expanding its reach in Scotland.
In 2014 it acquired the London-based mechanical and electrical engineering firm Andrew Reid and Partners (AR&P) and took a majority controlling share of the business. AR&P had been established in 1970 by Andrew Reid. Its core services included diagnostic assessments of under-performing buildings and the management or independent validation of building engineering services commissioning. Its work in commissioning ranged from the Barbican Arts Centre, through all 14 phases of London's Broadgate development, to the present day, including the 750,000 sq ft headquarters for UBS at 5 Broadgate in London. From the late 1990s, AR&P successfully commissioned data centres for several of the world's most successful brands. Its design engineers were involved in work at The National Gallery from the mid-1980s, when the company designed the building services for the new Sainsbury Wing, and have been involved in other museums and galleries including Dulwich Picture Gallery, the National Maritime Museum and the Imperial War Museum.
In 2016, Hurley Palmer Flatt acquired a majority share in the civil and structural engineering business, Bradbrook Consulting, which had UK offices in London, Kingston, Watford, and Manchester, and a Dubai office.
In 2016, the company moved its central London office to 240 Blackfriars at the South Bank Tower on a 10-year lease as part of its expansion plan.
Notable Hurley Palmer Flatt projects
Sea Containers House, UK
Taymouth Castle, UK
195-197 Kings Road, UK, for Martins Properties
Weston Library, Oxford University, UK, shortlisted for the 2016 Stirling Prize.
Renovated Archive & Book Storage Facility, Bodleian Library, Oxford University, UK
Aberdeen Exhibition And Conference Centre Energy Centre, UK, for Henry Boot Developments
Singapore Changi Airport, Singapore
New Data Centre Australian Securities Exchange, Australia
1 Knightsbridge, for JP Morgan, UK
Croydon Data Centre, for Morgan Stanley, UK
Dundee railway station, UK
10 Upper Bank Street & The Zig Zag Building, Victoria, for Deutsche Bank, UK
131 Sloane Street, for Marshall Wace, UK
Notable Andrew Reid and Partners projects
The National Gallery
The Imperial War Museum
National Maritime Museum
Broadgate development, London
Deutsche Bank
Mary Rose Trust
Ashmolean Museum
University of Greenwich
Guildhall School of Music
Hurley Palmer Flatt awards and recognition
Number 34 in Top 150 Consultants 2016 by Building
View58 (58 Victoria Embankment) was Highly Commended at the 2016 Property Awards in the Sustainability category
Nominated for the Training Initiative of the Year at the Consultancy and Engineering Awards 2016
Weston Library won AJ100 Building of the Year
Weston Library also won the RIBA National Award 2016, RIBA South Award 2016 and the RIBA South Building of the Year 2016
Winner in the Affordable Housing category at the Scottish Design Awards 2008 for Fyne Homes & CP Architects Gigha project in West Scotland
Nominated for the CIBSE Employer of the Year 2017 Award
Number 24 in Top 150 Consultants 2019 by Building
In June 2021, HDR acquired WKE, a multimodal transportation engineering firm based in Santa Ana, California. Its practice complements HDR's collaborative, full life cycle approach to infrastructure development and delivery of critical transportation programs in the Southwest region of the United States.
In September 2021, HDR acquired WRECO, a leader in innovative engineering solutions for communities throughout California based in Walnut Creek. The addition expands transportation and water resources services in the region.
In February 2022, HDR acquired SPF Water Engineering, a water, wastewater and hydrogeologic consulting firm based in Boise, Idaho. As part of the asset acquisition, SPF includes MDS Drafting, which provides value-added services in BIM.
Controversies
Prison design
HDR Architecture's jail and prison design projects have faced criticism from advocates in communities where the projects are proposed. In 2019, advocates in Travis County, TX opposed the construction of a new women's jail, arguing the resources would be better spent on programs to address concerns like addiction and mental health. Following community pressure, Travis County commissioners indefinitely paused HDR's $4.6 million contract to design the women's jail in June 2021. HDR also faced criticism from advocates in Massachusetts after being selected in 2021 to design a new women's prison for the Massachusetts Department of Correction. Advocates opposed all new prison construction and particularly argued against HDR's proposed “trauma-informed” design, saying it was not possible in a prison environment.
Monitoring of activists
In August 2021, a Motherboard story detailed HDR's monitoring services provided to government agencies conducting controversial projects. The report highlighted HDR's "corporate counterinsurgency" work, especially social media monitoring, to anticipate and disrupt public opposition to projects, including highways built through sacred Indigenous sites and prison and jail construction.
Awards
In 2018, the American Council of Engineering Companies awarded the Grand Conceptor Award to HDR and joint venture partner WSP USA for the design and construction of a new roadway within the steel-arch Bayonne Bridge—64 feet above an existing highway it was to replace. The Grand Conceptor Award signifies the year's most outstanding engineering achievement. The recognition marked HDR's fourth Grand Conceptor in the company's 100-year history, and the second time that HDR received the award two years in a row. In 2017, the State Route 520 floating bridge earned the American Council of Engineering Companies' Grand Conceptor Award.
HDR also won back-to-back Grand Conceptor Awards in 2010 and 2011. The 2011 award winner was the Hoover Dam Bypass. HDR was the project manager for this project. The Hoover Dam Bypass won several other industry awards. The 2010 winner was the Gills Onions Advanced Energy Recovery System in Oxnard, California, which uses onion waste to produce renewable energy.
Miscellaneous
HDR has worked on projects in all 50 U.S. states and in 60 countries, including notable projects such as the Hoover Dam Bypass, Alexander T. Augusta Military Medical Center, and Roslin Institute building. The firm employs over 12,000 professionals and represents hundreds of disciplines in various markets. HDR is the 5th largest employee-owned company in the United States with revenues of $2.8 billion in 2022. Engineering News-Record ranked HDR as the 6th largest design firm in the United States in 2023.
References
Design companies established in 1917
Architecture firms based in Nebraska
Construction and civil engineering companies of the United States
Engineering consulting firms of the United States
International engineering consulting firms
Companies based in Omaha, Nebraska
1917 establishments in Nebraska
Construction and civil engineering companies established in 1917
Business services companies established in 1917 | HDR, Inc. | Engineering | 2,233 |
578,365 | https://en.wikipedia.org/wiki/MIRACL | MIRACL, or Mid-Infrared Advanced Chemical Laser, is a directed energy weapon developed by the US Navy. It is a deuterium fluoride laser, a type of chemical laser.
The MIRACL laser first became operational in 1980. It can produce over a megawatt of output for up to 70 seconds, making it the most powerful continuous wave (CW) laser in the US. Its original goal was to be able to track and destroy anti-ship cruise missiles, but in later years it was used to test phenomenologies associated with national anti-ballistic and anti-satellite laser weapons. Originally tested at a contractor facility in California, it was located, as of the late 1990s and early 2000s, at the former MAR-1 facility in the White Sands Missile Range in New Mexico.
The beam size in the resonator is about wide. The beam is then reshaped to a square.
Amid much controversy, in October 1997 MIRACL was tested against MSTI-3, a US Air Force satellite at the end of its original mission, in orbit at a distance of . MIRACL failed during the test and was damaged; the Pentagon claimed mixed results for other portions of the test. A second, lower-powered chemical laser was able to temporarily blind the MSTI-3 sensors during the test.
References
Further reading
Chemical lasers
Military lasers
Directed-energy weapons of the United States
Military equipment introduced in the 1980s | MIRACL | Chemistry | 290 |
14,761,759 | https://en.wikipedia.org/wiki/CRX%20%28gene%29 | Cone-rod homeobox protein is a protein that in humans is encoded by the CRX gene.
Function
The protein encoded by this gene is a photoreceptor-specific transcription factor which plays a role in the differentiation of photoreceptor cells. This homeodomain protein is necessary for the maintenance of normal cone and rod function. Mutations in this gene are associated with photoreceptor degeneration, Leber's congenital amaurosis type III and the autosomal dominant cone-rod dystrophy 2. Several alternatively spliced transcript variants of this gene have been described, but the full-length nature of some variants has not been determined.
Mammalian CRX encodes a 299 amino acid protein containing a DNA binding homeodomain (HD) near its N-terminus followed by glutamine rich (Gln), and basic amino acid regions, then a C-terminal transactivation domain (AD). While structural biochemistry has demonstrated that the CRX HD adopts a canonical homeodomain protein fold, the AD is predicted to be flexible and disordered. The structural attributes of the CRX AD have yet to be solved.
Evolution
CRX is a divergent duplicate of OTX produced during the 2 rounds of vertebrate whole genome duplication.
In the eutherian mammals, CRX has again duplicated by tandem gene duplication, with six ancestral duplicates, which are collectively referred to as ETCHbox genes.
References
Further reading
External links
GeneReviews/NCBI/NIH/UW entry on Retinitis Pigmentosa Overview
Transcription factors | CRX (gene) | Chemistry,Biology | 335 |
2,784,687 | https://en.wikipedia.org/wiki/Foot%20plough | The foot plough is a type of plough used like a spade with the foot in order to cultivate the ground.
New Zealand
Before the widespread use of metal farm tools from Europe, the Māori people used the , a version of the foot plough made entirely of wood.
Scotland
The foot plough was prevalent in northwest Scotland, and the Scottish Gaelic language contains many terms for its varieties, for example 'straight foot' for the straighter variety and so on, but 'bent foot', which refers to the crooked spade, is the most common variety. The cas-chrom went out of use in the Hebrides in the early years of the 20th century.
Describing the Scottish Highlands around 1760, Samuel Smiles wrote: The plough had not yet penetrated into the Highlands; an instrument called the cas-chrom, literally the "crooked foot" – the use of which had been forgotten for hundreds of years in every other country in Europe, was almost the only tool employed in tillage in those parts of the Highlands which were separated by almost impassable mountains from the rest of the United Kingdom.
The cas-chrom was a rude combination of a lever for the removal of rocks, a spade to cut the earth, and a foot-plough to turn it. ... It weighed about eighteen pounds. In working it, the upper part of the handle, to which the left hand was applied, reached the workman's shoulder, and being slightly elevated, the point, shod with iron, was pushed into the ground horizontally; the soil being turned over by inclining the handle to the furrow side, at the same time making the heel act as a fulcrum to raise the point of the instrument. In turning up unbroken ground, it was first employed with the heel uppermost, with pushing strokes to cut the breadth of the sward to be turned over; after which, it was used horizontally as above described. We are indebted to a Parliamentary Blue Book for the following representation of this interesting relic of ancient agriculture. It is given in the appendix to the 'Ninth Report of the Commissioners for Highland Roads and Bridges,' ordered by the House of Commons to be printed, 19th April, 1821. It was an implement of tillage peculiar to the Highlands, used for turning the ground where an ordinary plough could not work on account of the rough, stony, uneven ground. It is of great antiquity and is described as follows by Armstrong:
In the Western Isles, with a foot plough, one man can perhaps do the work of four men with an ordinary spade, and while it is disadvantaged compared to a horse-plough, it is well suited to the country.
Andes
The most advanced agricultural tool known in the New World before the coming of the Europeans was the Andean footplough, also known as the or simply . It evolved from the digging stick and combined three advantages: metal point, curved handle, and footrest. No other indigenous tool utilized the pressure of the foot in digging up the sod which made it different from all farming implements known elsewhere in the Americas in pre-Columbian times. Although is a relatively simple instrument, it has persisted long after more sophisticated technology was introduced into the Central Andes, and its enduring presence demonstrates that more advanced innovations do not necessarily displace primitive forms that under certain conditions may be more efficient.
Historic distribution and the current diversity of forms point to the mountainous region of Southern Peru as the likely place of origin of the . With the expansion of the Inca Empire, the was carried north to Ecuador and south to Bolivia where early colonial writings confirmed its presence. It probably never occurred in Southern Chile, either before or after the conquest by the Spaniards.
It is probable, nevertheless, that agricultural peoples living on the Peruvian coast long before the Incas contributed to the idea of the . Copper-shod digging sticks known by the Mochica culture () may have been a forerunner of the . Pottery representations and remains of proto- tools from the Chimu culture (1300 CE) on the coast verifies its development by at least that time. However, the friable soils of the coastal desert were easily turned without the , and the incentive to develop such a tool probably came from the adjacent Highlands.
Men wielded the plow, called a . It was made of a pole about long with a pointed end of wood or bronze, a handle or curvature at the top, and a foot rest lashed near the bottom.
The Inca Emperor and accompanying provincial lords used foot ploughs in the "opening of the earth" ceremony at the beginning of the agricultural cycle. Incan agriculture used the or , a type of foot plough.
are still used by peasant farmers of native heritage in some parts of the Peruvian and Bolivian Andes. Modern have a steel point.
See also
Laia - the Basque h-shaped tool, also described as a foot plough.
Lazybed, a form of agriculture
Loy
References
External links
Foot plough at National Museums Scotland
Agriculture in New Zealand
Agriculture in Peru
Agriculture in Scotland
Economic history of Scotland
Gardening tools
Mechanical hand tools
Ploughs | Foot plough | Physics | 1,033 |
575,920 | https://en.wikipedia.org/wiki/Oncomouse | The OncoMouse or Harvard mouse is a type of laboratory mouse (Mus musculus) that has been genetically modified using modifications designed by Philip Leder and Timothy A Stewart of Harvard University to carry a specific gene called an activated oncogene (v-Ha-ras under the control of the mouse mammary tumor virus promoter). The activated oncogene significantly increases the mouse's susceptibility to cancer, and thus makes the mouse a suitable model for cancer research.
OncoMouse was not the first transgenic mouse to be developed for use in cancer research. Ralph L. Brinster and Richard Palmiter had developed such mice previously. However, OncoMouse was the first mammal to be patented. Because DuPont had funded Philip Leder's research, Harvard University agreed to give DuPont exclusive rights to any inventions commercialized as a result of the funding. Patent applications on the OncoMouse were filed in the mid-1980s in numerous jurisdictions, including the United States, Canada, Europe (through the European Patent Office (EPO)) and Japan. Initially the rights to the OncoMouse invention were owned by DuPont. However, in 2011 the USPTO decided that the final patent actually expired in 2005, which meant that the OncoMouse became free for use by other parties (although the name is not, as "OncoMouse" is a registered trademark).
The patenting of OncoMouse had a significant effect on mouse geneticists, who had previously shared their information and mice from their colonies openly. Once a strain of mice had been first described in published research, mice were stored and acquired through Jackson Laboratory, a nonprofit research institute. The patenting of OncoMouse, and the breadth of the claims made in those patents, were considered to be unreasonable by many of their contemporaries. More broadly, the patenting of OncoMouse was a first step in shifting academic research away from a culture of open and free (or very inexpensive) shared resources towards a commercial culture of expensive proprietary purchase and licensing requirements. This shift was felt far beyond the mouse genetics community. Harvard later said that it regretted the handling of the OncoMouse patents.
Patent procedures
Canada
In Canada, the Supreme Court in 2002 rejected the patent in Harvard College v. Canada (Commissioner of Patents), overturning a Federal Court of Appeal verdict which ruled in favor of the patent. However, on 7 October 2003, Canadian patent 1,341,442 was granted to Harvard College. The patent was amended to omit the "composition of matter" claims on the transgenic mice. The Supreme Court had rejected the entire patent application on the basis of these claims, but Canadian patent law allowed the amended claims to grant under rules that predated the General Agreement on Tariffs and Trade, and the patent was valid until 2020.
Europe (through the EPO)
European patent application 85304490.7 was filed in June 1985 by "The President and Fellows of Harvard College". It was initially refused in 1989 by an Examining Division of the European Patent Office (EPO), among other things on the grounds that the European Patent Convention (EPC) excludes patentability of animals per se. The decision was appealed, and the Board of Appeal held that animal varieties were excluded from patentability by the EPC (and especially its ), while animals (as such) were not excluded from patentability. The Examining Division then granted the patent in 1992 (its publication number is ).
The European patent was then opposed by several third parties, more precisely by 17 opponents, notably on the grounds laid out in , according to which "inventions, the publication or exploitation of which would be contrary to "ordre public" or morality" are excluded from patentability. After oral proceedings took place in November 2001, the patent was maintained in amended form. This decision was then appealed and the appeal decision was taken on July 6, 2004. The case was eventually remitted to the first instance, i.e. the Opposition Division, with the order to maintain the patent in a newly amended form. However, revocation of the patent was eventually published on August 16, 2006, more than 20 years after the filing date (the normal term of a European patent under ), for failure to pay the fees and to file the translations of the amended claims under .
United States
In 1988, the United States Patent and Trademark Office (USPTO) granted (filed Jun 22, 1984, issued Apr 12, 1988, expired April 12, 2005) to Harvard College claiming "a transgenic non-human mammal whose germ cells and somatic cells contain a recombinant activated oncogene sequence introduced into said mammal..." The claim explicitly excluded humans, apparently reflecting moral and legal concerns about patents on human beings, and about modification of the human genome. Remarkably, no US court was ever called to decide on the validity of this patent. Two separate patents were issued to Harvard College covering methods for providing a cell culture from a transgenic non-human animal (; filed Mar 22, 1988, issued Feb 11, 1992, expired Feb 11, 2009) and testing methods using transgenic mice expressing an oncogene (; filed Sep 19, 1991, issued Jul 20, 1999, expires July 20, 2016). Both these patents were found to expire in 2005 by the USPTO due to a terminal disclaimer. DuPont is currently bringing suit in the Eastern District of Virginia.
See also
Biobreeding rat
Biological patent
Knockout mouse
Animal testing
References
Further reading
Bioethics
Genetically modified organisms
Patent law
Laboratory mouse strains
Harvard University
Cancer research | Oncomouse | Technology,Engineering,Biology | 1,149 |
71,743,947 | https://en.wikipedia.org/wiki/Trinovid | Trinovid is the protected model designation of a roof prism binoculars series from the company Leitz (optics) (since 1986 Leica Camera) based in Wetzlar, a German centre for optics as well as an important location for the precision engineering industry.
The Trinovid binoculars were introduced in 1958 based on a patent request filed in 1953 and featured:
special pentaprisms (so-called Uppendahl roof prism systems that allow the construction of compact optical instruments);
a built-in diopter adjustment, and
a fully internal focusing system that moves internal optical lenses to prevent intrusion of dust and moisture into the binocular body, with centrally located adjusting means to compensate for vision differences between the two eyes.
Because of these three innovations, novel in binoculars at the time, the binoculars series was named Trinovid.
The series included both larger and smaller (compact) models; the early binoculars were practically unsuitable for people who wear glasses, and they were weatherproof but not waterproof. The binoculars series was updated and modified several times throughout its production history and switched to Schmidt-Pechan roof prism systems around 1990, which also brought a new series onto the market. These compact binoculars, which have been on the market as high-quality compact binoculars for a long time, had the optical parameters 8×20 and 10×25. The "B" designation added to updated models means that there is sufficient eye relief for eyeglass [Brille in German] wearers. The Trinovid series was supplemented in 2004 by the Ultravid series and in 2016 by the Noctivid series, which have higher-quality optical glass, better optical coatings and completely recalculated optical imaging qualities, but the Trinovid is still available as the entry-level binoculars series offered by Leica.
References
Roof-Prism Leica Binoculars
Leica “Leitz” Trinovid 10 × 40 B
Leica “Leitz” Trinovid 7 × 35 B
Leitz Trinovid 10×40 (1963-1975 model)
Binoculars
1958 introductions | Trinovid | Astronomy | 406 |
40,344,067 | https://en.wikipedia.org/wiki/Clavulinopsis%20fusiformis | Clavulinopsis fusiformis is a clavarioid fungus in the family Clavariaceae. In the UK, it has been given the recommended English name of golden spindles. In North America it has also been called spindle-shaped yellow coral or golden fairy spindle. Clavulinopsis fusiformis forms cylindrical, bright yellow fruit bodies that grow in dense clusters on the ground in agriculturally unimproved grassland or in woodland litter. It was originally described from England and is part of a species complex as yet unresolved.
Taxonomy and etymology
The species was first described in 1799 by English botanist and mycologist James Sowerby from collections made in Hampstead Heath in London. It was transferred to Clavulinopsis by English mycologist E.J.H. Corner in 1950. Initial molecular research, based on cladistic analysis of DNA sequences, indicates that C. fusiformis is part of a complex of related species.
The specific epithet fusiformis, derived from Latin, means "spindle-shaped".
Description
The fruit bodies of Clavulinopsis fusiformis are cylindrical, bright yellow, up to 150 x 10 mm, growing in fasciculate (densely crowded) clusters. Microscopically, the hyphae are hyaline, up to 12 μm diameter, with clamp connections. The basidiospores are hyaline, smooth, globose to subglobose, 4.5 to 7.5 μm, with a large apiculus.
Similar species
In European grasslands, Clavulinopsis helvola, C. laeticolor, and C. luteoalba have similarly coloured, simple fruit bodies but are typically smaller and grow singly or sparsely clustered. The uncommon Clavaria amoenoides produces densely clustered fruit bodies but they are pale yellow and, microscopically, lack clamp connections.
Distribution and habitat
The species was initially described from England and is common throughout Europe. Its distribution outside Europe is uncertain because of confusion with similar, closely related species in the complex. Clavulinopsis fusiformis sensu lato has been reported from North America, Central and South America, and Asia, including Iran, China, Nepal, and Japan.
The species typically occurs in large, dense clusters on the ground and is presumed to be saprotrophic. In Europe it generally occurs in agriculturally unimproved, short-sward grassland (pastures and lawns). Such waxcap grasslands are a declining and threatened habitat, but Clavulinopsis fusiformis is one of the commoner species and is not currently considered of conservation concern. Elsewhere, C. fusiformis sensu lato occurs in woodland. In China it is one of the dominant macrofungal species found in Fargesia spathacea-dominated community forest at an elevation of .
Economic usage
Fruit bodies are commonly collected and consumed in Nepal, where the fungus is known locally as Kesari chyau.
Chemistry
Extracts of "Clavulinopsis fusiformis" from Japan have been found to contain anti-B red blood cell agglutinin.
References
External links
Fungi described in 1799
Fungi of Asia
Fungi of Europe
Fungi of North America
Clavariaceae
Taxa named by James Sowerby
Fungus species | Clavulinopsis fusiformis | Biology | 685 |
2,466,815 | https://en.wikipedia.org/wiki/Plug%20compatible | Plug compatible refers to "hardware that is designed to perform exactly like another vendor's product." The term PCM was originally applied to manufacturers who made replacements for IBM peripherals. Later this term was used to refer to IBM-compatible computers.
PCM and peripherals
Before the rise of the PCM peripheral industry, computing systems were either configured with peripherals designed and built by the CPU vendor, or designed to use vendor-selected rebadged devices.
The first examples of plug-compatible IBM subsystems were tape drives and controls offered by Telex beginning in 1965. Memorex in 1968 was the first to enter the IBM plug-compatible disk market, followed shortly thereafter by a number of suppliers such as CDC, Itel, and Storage Technology Corporation. This was boosted by the world's largest user of computing equipment in both directions.
Ultimately plug-compatible products were offered for most peripherals and system main memory.
PCM and computer systems
A plug-compatible machine is one that has been designed to be backward compatible with a prior machine. In particular, a new computer system that is plug-compatible has not only the same connectors and protocol interfaces to peripherals, but also binary-code compatibility—it runs the same software as the old system. A plug compatible manufacturer or PCM is a company that makes such products.
One recurring theme in plug-compatible systems is the ability to be bug compatible as well. That is, if the forerunner system had software or interface problems, then the successor must have (or simulate) the same problems. Otherwise, the new system may generate unpredictable results, defeating the full compatibility objective. Thus, it is important for customers to understand the difference between a "bug" and a "feature", where the latter is defined as an intentional modification to the previous system (e.g. higher speed, lighter weight, smaller package, better operator controls, etc.).
PCM and IBM mainframes
The original example of PCM mainframes was the Amdahl 470 mainframe computer which was plug-compatible with the IBM System 360 and 370, costing millions of dollars to develop. Similar systems were available from Comparex, Fujitsu, and Hitachi. Not all were large systems. Most of these system vendors eventually left the PCM market. In late 1981, there were eight PCM companies, and collectively they had 36 IBM-compatible models.
Non-computer usage of the term
The term may also be used to define replacement criteria for other components available from multiple sources. For example, a plug-compatible cooling fan may need to have not only the same physical size and shape, but also similar capability, run from the same voltage, use similar power, attach with a standard electrical connector, and have similar mounting arrangements. Some non-conforming units may be re-packaged or modified to meet plug-compatible requirements, as where an adapter plate is provided for mounting, or a different tool and instructions are supplied for installation, and these modifications would be reflected in the bill of materials for such components. Similar issues arise for computer system interfaces when competitors wish to offer an easy upgrade path.
In general, plug-compatible systems are designed where industry or de facto standards have rigorously defined the environment, and there is a large installed population of machines that can benefit from third-party enhancements. Plug compatible does not mean identical replacement. However, nothing prevents a company from developing follow-on products that are backward-compatible with its own early products.
See also
Bug compatibility
Clone (computing)
Computer compatibility
Drop-in replacement
Hercules (emulator)
Pin compatibility
Proprietary hardware
Vendor lock-in
Honeywell 200, chasing the IBM 1401 market
Xerox 530, chasing the IBM 1130 market
References
Classes of computers
Computer hardware
Interoperability | Plug compatible | Technology,Engineering | 762 |
25,801,759 | https://en.wikipedia.org/wiki/Silicone%20foam | Silicone foam is a synthetic rubber product used in gasketing, sheets and firestops. It is available in solid, cured form as well as in individual liquid components for field installations.
Uses
Gaskets
Sheets
High temperature tubes for autoclaves
Firestops
Surfactants
Vulcanisation
When the constituent components of silicone foam are mixed together, they evolve hydrogen gas, which causes bubbles to form within the rubber, as it changes from liquid to solid. This results in an outward pressure. Temperature and humidity can influence the rate of expansion.
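A minimal sketch of one commonly described hydrogen-evolving crosslinking step in condensation-cure silicone foams (actual formulations vary by manufacturer, so this equation is illustrative rather than specific to any product): a silicon hydride group condenses with a silanol group, forming a siloxane link and releasing the hydrogen gas that inflates the curing rubber,

\equiv\mathrm{Si{-}H} \; + \; \mathrm{HO{-}Si}\equiv \;\longrightarrow\; \equiv\mathrm{Si{-}O{-}Si}\equiv \; + \; \mathrm{H_{2}}\uparrow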
See also
Firestop
Vulcanisation
Rubber
Silicone resin
References
External links
YouTube video showing silicone foam being mixed and expanded
Silicone foam used as high temperature tube for autoclave
Stockwell Elastomerics silicone foam used in gasketing
Silicone Engineering Silicone Foam used in Rail and Mass Transit Systems
Silicone sponge sheets
Silicone foam surfactant, the Marangioni effect on cell stabilisation
Silicone foam without use of toxic blowing agents
Foams
Rubber
Passive fire protection
Firestops | Silicone foam | Chemistry | 211 |
61,603,260 | https://en.wikipedia.org/wiki/Armin%20Moczek | Armin Moczek (born 8 July 1969 in Munich) is a German evolutionary biologist and full professor at Indiana University Bloomington.
Biography
Moczek studied biology at the University of Würzburg, where he graduated in 1996 with a master's degree in zoology. Joining Fred Nijhout’s lab at Duke University, he developed a deep interest in evolutionary developmental biology, receiving his PhD in 2002. From 2002 to 2004 he was a postdoctoral fellow at the University of Arizona in the Postdoctoral Excellence in Research and Teaching (PERT) program. In 2004, he assumed the position of assistant professor in the Department of Biology at Indiana University, where he was promoted to associate professor in 2009 and full professor in 2014. His research focuses on the genetic, developmental, and ecological mechanisms, and the interactions among them, that facilitate innovation in living systems.
Awards
Guggenheim Fellow (2017)
Fulbright Distinguished Chair in Science, Technology, and Innovation Award, Australia (2017)
Fellow of the American Association for the Advancement of Science (2015)
American Society of Naturalists (ASN) Young Investigator Prize (2004)
References
External links
Armin Moczek, Indiana University
Moczek Lab
Armin Moczek, local coordinator of the Extended Evolutionary Synthesis research program
Evolution Evolving: Armin Moczek (YouTube)
John Simon Guggenheim Memorial Foundation: Armin P. Moczek
The Fulbright Program
Google Scholar citations
1969 births
Extended evolutionary synthesis
Theoretical biologists
Evolutionary biologists
Indiana University
Living people | Armin Moczek | Biology | 333 |
1,075,071 | https://en.wikipedia.org/wiki/Transcriptome | The transcriptome is the set of all RNA transcripts, including coding and non-coding, in an individual or a population of cells. The term can also sometimes be used to refer to all RNAs, or just mRNA, depending on the particular experiment. The term transcriptome is a portmanteau of the words transcript and genome; it is associated with the process of transcript production during the biological process of transcription.
The early stages of transcriptome annotations began with cDNA libraries published in the 1980s. Subsequently, the advent of high-throughput technology led to faster and more efficient ways of obtaining data about the transcriptome. Two biological techniques are used to study the transcriptome, namely DNA microarray, a hybridization-based technique and RNA-seq, a sequence-based approach. RNA-seq is the preferred method and has been the dominant transcriptomics technique since the 2010s. Single-cell transcriptomics allows tracking of transcript changes over time within individual cells.
Data obtained from the transcriptome is used in research to gain insight into processes such as cellular differentiation, carcinogenesis, transcription regulation and biomarker discovery among others. Transcriptome-obtained data also finds applications in establishing phylogenetic relationships during the process of evolution and in in vitro fertilization. The transcriptome is closely related to other -ome based biological fields of study; it is complementary to the proteome and the metabolome and encompasses the translatome, exome, meiome and thanatotranscriptome which can be seen as ome fields studying specific types of RNA transcripts. There are quantifiable and conserved relationships between the Transcriptome and other -omes, and Transcriptomics data can be used effectively to predict other molecular species, such as metabolites. There are numerous publicly available transcriptome databases.
Etymology and history
The word transcriptome is a portmanteau of the words transcript and genome. It appeared along with other neologisms formed using the suffixes -ome and -omics to denote all studies conducted on a genome-wide scale in the fields of life sciences and technology. As such, transcriptome and transcriptomics were among the first such words to emerge, along with genome and proteome. The first study to present a collection of a cDNA library, for silk moth mRNA, was published in 1979. The first seminal study to mention and investigate the transcriptome of an organism was published in 1997 and it described 60,633 transcripts expressed in S. cerevisiae using serial analysis of gene expression (SAGE). With the rise of high-throughput technologies and bioinformatics and the subsequent increase in computational power, it became increasingly efficient and easy to characterize and analyze enormous amounts of data. Attempts to characterize the transcriptome became more prominent with the advent of automated DNA sequencing during the 1980s. During the 1990s, expressed sequence tag sequencing was used to identify genes and their fragments. This was followed by techniques such as serial analysis of gene expression (SAGE), cap analysis of gene expression (CAGE), and massively parallel signature sequencing (MPSS).
Transcription
The transcriptome encompasses all the ribonucleic acid (RNA) transcripts present in a given organism or experimental sample. RNA is the main carrier of genetic information that is responsible for the process of converting DNA into an organism's phenotype. A gene can give rise to a single-stranded messenger RNA (mRNA) through a molecular process known as transcription; this mRNA is complementary to the strand of DNA it originated from. The enzyme RNA polymerase II attaches to the template DNA strand and catalyzes the addition of ribonucleotides to the 3' end of the growing sequence of the mRNA transcript.
In order to initiate its function, RNA polymerase II needs to recognize a promoter sequence, located upstream (5') of the gene. In eukaryotes, this process is mediated by transcription factors, most notably Transcription factor II D (TFIID), which recognizes the TATA box and aids in the positioning of RNA polymerase at the appropriate start site. To finish the production of the RNA transcript, termination usually takes place several hundred nucleotides beyond the termination sequence, after which the transcript is cleaved. This process occurs in the nucleus of a cell along with RNA processing, by which mRNA molecules are capped, spliced and polyadenylated to increase their stability before being subsequently taken to the cytoplasm. The mRNA gives rise to proteins through the process of translation that takes place in ribosomes.
Types of RNA transcripts
Almost all functional transcripts are derived from known genes. The only exceptions are a small number of transcripts that might play a direct role in regulating gene expression near the promoters of known genes. (See Enhancer RNA.)
Genes occupy most of prokaryotic genomes, so most of their genomes are transcribed. Many eukaryotic genomes are very large and known genes may take up only a fraction of the genome. In mammals, for example, known genes only account for 40-50% of the genome. Nevertheless, identified transcripts often map to a much larger fraction of the genome, suggesting that the transcriptome contains spurious transcripts that do not come from genes. Some of these transcripts are known to be non-functional because they map to transcribed pseudogenes or degenerate transposons and viruses. Others map to unidentified regions of the genome that may be junk DNA.
Spurious transcription is very common in eukaryotes, especially those with large genomes that might contain a lot of junk DNA. Some scientists claim that if a transcript has not been assigned to a known gene then the default assumption must be that it is junk RNA until it has been shown to be functional. This would mean that much of the transcriptome in species with large genomes is probably junk RNA. (See Non-coding RNA)
The transcriptome includes the transcripts of protein-coding genes (mRNA plus introns) as well as the transcripts of non-coding genes (functional RNAs plus introns).
Ribosomal RNA/rRNA: Usually the most abundant RNA in the transcriptome.
Long non-coding RNA/lncRNA: Non-coding RNA transcripts that are more than 200 nucleotides long. Members of this group comprise the largest fraction of the non-coding transcriptome other than introns. It is not known how many of these transcripts are functional and how many are junk RNA.
transfer RNA/tRNA
micro RNA/miRNA: 19-24 nucleotides (nt) long. Micro RNAs up- or downregulate expression levels of mRNAs by the process of RNA interference at the post-transcriptional level.
small interfering RNA/siRNA: 20-24 nt
small nucleolar RNA/snoRNA
Piwi-interacting RNA/piRNA: 24-31 nt. They interact with Piwi proteins of the Argonaute family and have a function in targeting and cleaving transposons.
enhancer RNA/eRNA:
Scope of study
In the human genome, all genes get transcribed into RNA because that's how the molecular gene is defined. (See Gene.) The transcriptome consists of coding regions of mRNA plus non-coding UTRs, introns, non-coding RNAs, and spurious non-functional transcripts.
Several factors render the content of the transcriptome difficult to establish. These include alternative splicing, RNA editing and alternative transcription among others. Additionally, transcriptome techniques are capable of capturing transcription occurring in a sample at a specific time point, although the content of the transcriptome can change during differentiation. The main aims of transcriptomics are the following: "catalogue all species of transcript, including mRNAs, non-coding RNAs and small RNAs; to determine the transcriptional structure of genes, in terms of their start sites, 5′ and 3′ ends, splicing patterns and other post-transcriptional modifications; and to quantify the changing expression levels of each transcript during development and under different conditions".
The term can be applied to the total set of transcripts in a given organism, or to the specific subset of transcripts present in a particular cell type. Unlike the genome, which is roughly fixed for a given cell line (excluding mutations), the transcriptome can vary with external environmental conditions. Because it includes all mRNA transcripts in the cell, the transcriptome reflects the genes that are being actively expressed at any given time, with the exception of mRNA degradation phenomena such as transcriptional attenuation. The study of transcriptomics (which includes expression profiling, splice variant analysis, etc.) examines the expression level of RNAs in a given cell population, often focusing on mRNA, but sometimes including others such as tRNAs and sRNAs.
Methods of construction
Transcriptomics is the quantitative science that encompasses the assignment of a list of strings ("reads") to objects ("transcripts" in the genome). To calculate the expression strength, the density of reads corresponding to each object is counted. Initially, transcriptomes were analyzed and studied using expressed sequence tag libraries and serial and cap analysis of gene expression (SAGE and CAGE).
Currently, the two main transcriptomics techniques include DNA microarrays and RNA-Seq. Both techniques require RNA isolation through RNA extraction techniques, followed by its separation from other cellular components and enrichment of mRNA.
There are two general methods of inferring transcriptome sequences. One approach maps sequence reads onto a reference genome, either of the organism itself (whose transcriptome is being studied) or of a closely related species. The other approach, de novo transcriptome assembly, uses software to infer transcripts directly from short sequence reads and is used in organisms with genomes that are not sequenced.
DNA microarrays
The first transcriptome studies were based on microarray techniques (also known as DNA chips). Microarrays consist of thin glass layers with spots on which oligonucleotides, known as "probes" are arrayed; each spot contains a known DNA sequence.
When performing microarray analyses, mRNA is collected from a control and an experimental sample, the latter usually representative of a disease. The RNA of interest is converted to cDNA to increase its stability and marked with fluorophores of two colors, usually green and red, for the two groups. The cDNA is spread onto the surface of the microarray where it hybridizes with oligonucleotides on the chip and a laser is used to scan. The fluorescence intensity on each spot of the microarray corresponds to the level of gene expression and based on the color of the fluorophores selected, it can be determined which of the samples exhibits higher levels of the mRNA of interest.
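A minimal sketch, in Python, of how the two-channel spot intensities described above are typically compared; the intensity values, the pseudocount and the two-fold-change cutoff are illustrative assumptions, and real analyses add background correction, normalization and replicate statistics.

import numpy as np

# Hypothetical per-spot fluorescence intensities from one two-colour array:
# red = experimental sample (e.g. disease), green = control sample.
red = np.array([1200.0, 850.0, 15000.0, 400.0])
green = np.array([1100.0, 900.0, 3000.0, 1600.0])

# Log2 ratio per spot: positive values mean higher expression in the
# experimental sample, negative values mean higher expression in the control.
# Adding a pseudocount of 1 avoids division by zero and log of zero.
log_ratio = np.log2((red + 1.0) / (green + 1.0))

# Simplified rule of thumb: flag a spot as differentially expressed when the
# absolute log2 ratio exceeds 1, i.e. at least a two-fold change.
differential = np.abs(log_ratio) > 1.0
print(log_ratio)
print(differential)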
One microarray usually contains enough oligonucleotides to represent all known genes; however, data obtained using microarrays does not provide information about unknown genes. During the 2010s, microarrays were almost completely replaced by next-generation techniques that are based on DNA sequencing.
RNA sequencing
RNA sequencing is a next-generation sequencing technology; as such it requires only a small amount of RNA and no previous knowledge of the genome. It allows for both qualitative and quantitative analysis of RNA transcripts, the former allowing discovery of new transcripts and the latter a measure of relative quantities for transcripts in a sample.
The three main steps of sequencing the transcriptome of any biological sample are RNA purification, the synthesis of an RNA or cDNA library, and sequencing of the library. The RNA purification process is different for short and long RNAs. This step is usually followed by an assessment of RNA quality, with the purpose of avoiding contaminants such as DNA or technical contaminants related to sample processing. RNA quality is measured using UV spectrometry, with an absorbance peak at 260 nm. RNA integrity can also be analyzed quantitatively by comparing the ratio and intensity of 28S RNA to 18S RNA, reported in the RNA Integrity Number (RIN) score. Since mRNA is the species of interest but represents only about 3% of the total RNA content, the RNA sample should be treated to remove rRNA, tRNA, and tissue-specific RNA transcripts.
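A minimal illustration of the simpler of the quality checks just described, the 28S/18S rRNA ratio (the RIN score itself is produced by a proprietary algorithm and is not reproduced here); the peak areas and the commonly quoted threshold of roughly 2 for intact RNA are illustrative assumptions.

# Hypothetical electropherogram peak areas for the two large rRNA species.
area_28s = 1.9e6
area_18s = 1.0e6

# Intact total RNA is commonly said to show a 28S/18S ratio of about 2;
# substantially lower ratios suggest degradation.
ratio = area_28s / area_18s
print(f"28S/18S ratio: {ratio:.2f}", "looks intact" if ratio >= 1.8 else "possibly degraded")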
The step of library preparation, with the aim of producing short cDNA fragments, begins with fragmentation of the RNA into pieces between 50 and 300 base pairs in length. Fragmentation can be enzymatic (RNA endonucleases), chemical (Tris-magnesium salt buffer, chemical hydrolysis) or mechanical (sonication, nebulisation). Reverse transcription is used to convert the RNA templates into cDNA, and three priming methods can be used to achieve it: oligo-dT priming, random primers, or ligation of special adaptor oligos.
Single-cell transcriptomics
Transcription can also be studied at the level of individual cells by single-cell transcriptomics. Single-cell RNA sequencing (scRNA-seq) is a recently developed technique that allows the analysis of the transcriptome of single cells, including bacteria. With single-cell transcriptomics, subpopulations of cell types that constitute the tissue of interest are also taken into consideration. This approach makes it possible to identify whether changes in experimental samples are due to phenotypic cellular changes as opposed to proliferation, by which a specific cell type might be over-represented in the sample. Additionally, when assessing cellular progression through differentiation, average expression profiles are only able to order cells by time rather than by their stage of development and are consequently unable to show trends in gene expression levels specific to certain stages. Single-cell transcriptomic techniques have been used to characterize rare cell populations such as circulating tumor cells, cancer stem cells in solid tumors, and embryonic stem cells (ESCs) in mammalian blastocysts.
Although there are no standardized techniques for single-cell transcriptomics, several steps need to be undertaken. The first step includes cell isolation, which can be performed using low- and high-throughput techniques. This is followed by a qPCR step and then single-cell RNAseq where the RNA of interest is converted into cDNA. Newer developments in single-cell transcriptomics allow for tissue and sub-cellular localization preservation through cryo-sectioning thin slices of tissues and sequencing the transcriptome in each slice. Another technique allows the visualization of single transcripts under a microscope while preserving the spatial information of each individual cell where they are expressed.
Analysis
A number of organism-specific transcriptome databases have been constructed and annotated to aid in the identification of genes that are differentially expressed in distinct cell populations.
By 2013, RNA-seq was emerging as the method of choice for measuring transcriptomes of organisms, though the older technique of DNA microarrays is still used. RNA-seq measures the transcription of a specific gene by converting long RNAs into a library of cDNA fragments. The cDNA fragments are then sequenced using high-throughput sequencing technology and aligned to a reference genome or transcriptome, which is then used to create an expression profile of the genes.
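A minimal sketch of one common way such an expression profile is quantified from the aligned reads, using transcripts-per-million (TPM) normalization; the gene names, read counts and gene lengths below are made-up placeholders.

import numpy as np

# Hypothetical aligned read counts per gene and gene lengths in base pairs.
genes = ["geneA", "geneB", "geneC", "geneD"]
counts = np.array([500.0, 1000.0, 50.0, 0.0])
lengths = np.array([2000.0, 4000.0, 500.0, 1000.0])

# TPM: divide each count by the gene length (reads per base), then rescale so
# the values for the sample sum to one million, making samples comparable.
rate = counts / lengths
tpm = rate / rate.sum() * 1e6

for name, value in zip(genes, tpm):
    print(name, round(float(value), 1))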
Applications
Mammals
The transcriptomes of stem cells and cancer cells are of particular interest to researchers who seek to understand the processes of cellular differentiation and carcinogenesis. A pipeline using RNA-seq or gene array data can be used to track genetic changes occurring in stem and precursor cells and requires at least three independent sets of gene expression data from the former cell type and mature cells.
Analysis of the transcriptomes of human oocytes and embryos is used to understand the molecular mechanisms and signaling pathways controlling early embryonic development, and could theoretically be a powerful tool in making proper embryo selection in in vitro fertilisation. Analyses of the transcriptome content of the placenta in the first trimester of pregnancy in in vitro fertilization and embryo transfer (IVF-ET) revealed differences in genetic expression which are associated with a higher frequency of adverse perinatal outcomes. Such insight can be used to optimize the practice. Transcriptome analyses can also be used to optimize cryopreservation of oocytes, by lowering injuries associated with the process.
Transcriptomics is an emerging and continually growing field in biomarker discovery for use in assessing the safety of drugs or chemical risk assessment.
Transcriptomes may also be used to infer phylogenetic relationships among individuals or to detect evolutionary patterns of transcriptome conservation.
Transcriptome analyses were used to discover the incidence of antisense transcription, their role in gene expression through interaction with surrounding genes and their abundance in different chromosomes. RNA-seq was also used to show how RNA isoforms, transcripts stemming from the same gene but with different structures, can produce complex phenotypes from limited genomes.
Plants
Transcriptome analyses have been used to study the evolution and diversification process of plant species. In 2014, the 1000 Plant Genomes Project was completed, in which the transcriptomes of 1,124 plant species from the clades Viridiplantae, Glaucophyta and Rhodophyta were sequenced. The protein-coding sequences were subsequently compared to infer phylogenetic relationships between plants and to characterize the time of their diversification in the process of evolution. Transcriptome studies have been used to characterize and quantify gene expression in mature pollen. Genes involved in cell wall metabolism and the cytoskeleton were found to be overexpressed. Transcriptome approaches also allowed tracking of changes in gene expression through different developmental stages of pollen, ranging from microspore to mature pollen grains; additionally, such stages could be compared across species of different plants including Arabidopsis, rice and tobacco.
Relation to other ome fields
Similar to other -ome based technologies, analysis of the transcriptome allows for an unbiased approach when validating hypotheses experimentally. This approach also allows for the discovery of novel mediators in signaling pathways. As with other -omics based technologies, the transcriptome can be analyzed within the scope of a multiomics approach. It is complementary to metabolomics but, in contrast to proteomics, a direct association between a transcript and a metabolite cannot be established.
There are several -ome fields that can be seen as subcategories of the transcriptome. The exome differs from the transcriptome in that it includes only those RNA molecules found in a specified cell population, and usually includes the amount or concentration of each RNA molecule in addition to the molecular identities. Additionally, the transcriptome also differs from the translatome, which is the set of RNAs undergoing translation.
The term meiome is used in functional genomics to describe the meiotic transcriptome or the set of RNA transcripts produced during the process of meiosis. Meiosis is a key feature of sexually reproducing eukaryotes, and involves the pairing of homologous chromosomes, synapsis and recombination. Since meiosis in most organisms occurs in a short time period, meiotic transcript profiling is difficult due to the challenge of isolation (or enrichment) of meiotic cells (meiocytes). As with transcriptome analyses, the meiome can be studied at a whole-genome level using large-scale transcriptomic techniques. The meiome has been well-characterized in mammalian and yeast systems and somewhat less extensively characterized in plants.
The thanatotranscriptome consists of all RNA transcripts that continue to be expressed or that start getting re-expressed in internal organs of a dead body 24–48 hours following death. Some genes include those that are inhibited after fetal development. If the thanatotranscriptome is related to the process of programmed cell death (apoptosis), it can be referred to as the apoptotic thanatotranscriptome. Analyses of the thanatotranscriptome are used in forensic medicine.
eQTL mapping can be used to complement genomics with transcriptomics; genetic variants at DNA level and gene expression measures at RNA level.
Relation to proteome
The transcriptome is closely related to the proteome, the entire set of proteins expressed by a genome, since messenger RNAs in the transcriptome serve as the templates from which proteins are synthesized.
However, the analysis of relative mRNA expression levels can be complicated by the fact that relatively small changes in mRNA expression can produce large changes in the total amount of the corresponding protein present in the cell. One analysis method, known as gene set enrichment analysis, identifies coregulated gene networks rather than individual genes that are up- or down-regulated in different cell populations.
Although microarray studies can reveal the relative amounts of different mRNAs in the cell, levels of mRNA are not directly proportional to the expression level of the proteins they code for. The number of protein molecules synthesized using a given mRNA molecule as a template is highly dependent on translation-initiation features of the mRNA sequence; in particular, the nature of the translation initiation sequence is a key determinant in the recruitment of ribosomes for protein translation.
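To make the gene set enrichment idea mentioned above concrete, here is a much-simplified, unweighted running-sum score in the spirit of GSEA: walking down a ranked gene list, the score steps up at genes belonging to the set and down otherwise, and the enrichment score is the largest deviation from zero. The gene names and ranking are invented for illustration, and the published method additionally weights steps by expression correlation and assesses significance by permutation.

```python
# Simplified, unweighted GSEA-style running-sum enrichment score.
def enrichment_score(ranked_genes, gene_set):
    gene_set = set(gene_set)
    n_hits = sum(g in gene_set for g in ranked_genes)
    n_misses = len(ranked_genes) - n_hits
    hit_step, miss_step = 1.0 / n_hits, 1.0 / n_misses
    running, best = 0.0, 0.0
    for gene in ranked_genes:
        running += hit_step if gene in gene_set else -miss_step
        if abs(running) > abs(best):   # keep the largest deviation from zero
            best = running
    return best

ranked = ["TP53", "MYC", "EGFR", "GAPDH", "ACTB", "BRCA1", "KRAS", "VEGFA"]
print(enrichment_score(ranked, {"TP53", "EGFR", "BRCA1"}))   # about 0.47 for this toy ranking
```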
Transcriptome databases
Ensembl
OmicTools
Transcriptome Browser
ArrayExpress
See also
Notes
References
Further reading
Subramanian A, Tamayo P, Mootha VK, Mukherjee S, Ebert BL, Gillette MA, Paulovich A, Pomeroy SL, Golub TR, Lander ES, Mesirov JP. (2005). Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proc Natl Acad Sci USA 102(43):15545-50.
Laule O, Hirsch-Hoffmann M, Hruz T, Gruissem W, and P Zimmermann. (2006) Web-based analysis of the mouse transcriptome using Genevestigator. BMC Bioinformatics 7:311
Gene expression
Omics
RNA
RNA splicing | Transcriptome | Chemistry,Biology | 4,476 |
14,667,955 | https://en.wikipedia.org/wiki/HD%20183263 | HD 183263 is a star with a pair of orbiting exoplanets located in the equatorial constellation of Aquila. It has an apparent visual magnitude of 7.86, which is too faint to be visible to the naked eye. The distance to this system is 178 light years based on parallax measurements, but it is drifting closer with a heliocentric radial velocity of −50 km/s. Judging from its motion through space, this star is predicted to approach to within of the Sun in around 952,000 years. At that distance, it will be faintly visible to the naked eye.
This is an older star with a spectrum matching a stellar classification of G2 IV, indicating it is about to leave the main sequence after exhausting the supply of hydrogen at its core. It will then evolve into a red giant before dying as a white dwarf. This star has an absolute magnitude (apparent magnitude at 10 pc) of 4.16 compared to the Sun's 4.83, which indicates the star is more luminous than the Sun; it is also slightly hotter, by about 100 K. At the age of 8.1 billion years, the magnetic activity in its chromosphere is quiet and it is spinning slowly with a rotation period of 32 days.
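The luminosity comparison follows from the standard magnitude relation (neglecting bolometric corrections); using the absolute magnitudes quoted above:

\[ \frac{L_\star}{L_\odot} = 10^{0.4\,(M_\odot - M_\star)} = 10^{0.4\,(4.83 - 4.16)} \approx 1.9 \]

so the star radiates roughly twice the Sun's output in visual light.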
Planetary system
The star has two known super-jovian exoplanets in orbit around it. Exoplanet b was discovered in 2005 while exoplanet c was discovered in 2008. A 2022 study estimated the true mass of HD 183263 c at about via astrometry, although this estimate is poorly constrained.
See also
List of multiplanetary systems
List of exoplanetary host stars
References
External links
G-type subgiants
Planetary systems with two confirmed planets
Aquila (constellation)
Durchmusterung objects
183263
095740 | HD 183263 | Astronomy | 373 |
176,571 | https://en.wikipedia.org/wiki/Small%20appliance | A small domestic appliance, also known as a small electric appliance or minor appliance or simply a small appliance, small domestic or small electric, is a portable or semi-portable machine, generally used on table-tops, counter-tops or other platforms, to accomplish a household task. Examples include microwave ovens, kettles, toasters, humidifiers, food processors and coffeemakers. They contrast with major appliances (known as "white goods" in the UK), such as the refrigerators and washing machines, which cannot be easily moved and are generally placed on the floor. Small appliances also contrast with consumer electronics (British "brown goods") which are for leisure and entertainment rather than purely practical tasks.
Uses
Some small appliances perform the same or similar function as their larger counterparts. For example, a toaster oven is a small appliance that performs a similar function as an oven. Small appliances often have a home version and a commercial version; waffle irons, food processors, and blenders, for example, are small household appliances. A food processor can perform the tasks of a chopper, slicer, mixer and juicer, so a single unit can replace several single-purpose appliances. The commercial, or industrial, version is designed to be used nearly continuously in a restaurant or other similar setting. Commercial appliances are typically connected to a more powerful electrical outlet, are larger and stronger, have more user-serviceable parts, cost significantly more, and are built to handle heavier, sustained workloads.
Types and examples
Small appliances include those used for:
Beverage-making, such as electric kettles, coffeemakers or iced tea-makers
Cleaning, such as vacuum cleaner
Cooking, such as on a hot plate or with a microwave oven
Lighting, using light fixtures
Thermal comfort, such as an electric heater or fan
Many small appliances perform a combination of the above processes, such as the mixing and heating done by a bread machine
Prices
Small appliances can be very inexpensive, such as an electric can opener, hot pot, toaster, or coffee maker, which may cost only a few U.S. dollars, or very expensive, such as an elaborate espresso maker, which may cost several thousand U.S. dollars. Most homes in developed economies contain several cheaper home appliances, with perhaps a few more expensive appliances, such as a high-end microwave oven or mixer. Single-purpose appliances such as choppers, juicers, grinders and mixers may each cost only a few dollars, and a multi-purpose food processor can be less expensive than buying the separate units, saving money as well as space.
Powering
Many small appliances are powered by electricity. The appliance may use a permanently attached cord that is plugged into a wall outlet or a detachable cord. The appliance may have a cord storage feature. A few hand-held appliances use batteries, which may be disposable or rechargeable. Some appliances consist of an electrical motor upon which is mounted various attachments so as to constitute several individual appliances, such as a blender, a food processor, or a juicer. Many stand mixers, while functioning primarily as a mixer, have attachments that can perform additional functions.
A few gasoline and gas-powered appliances exist for use in situations where electricity is not expected to be available, but these are typically larger and not as portable as most small appliances. Items that perform the same function as small appliances but are hand-powered are generally referred to as tools or gadgets, for example a hand cranked egg beater, a grater, a mandoline, or a hand-powered meat grinder.
Safety
Small appliances which are defective or improperly used or maintained may cause house fires and other property damage, or may harbor bacteria if not properly cleaned. It is important that users read the instructions carefully and that appliances that use a grounded cord be attached to a grounded outlet. Because of the risk of fire, some appliances have a short detachable cord that is connected to the appliance magnetically. If the appliance is moved further than the cord length from the wall, the cord will detach from the appliance.
Designations and regulations
Designations and regulations of "small appliances" vary by country and are not simply determined by physical sizes. For instance, United States Environmental Protection Agency regulations mandate that small appliances must meet two standards:
Completely manufactured, charged, and sealed in a factory
Contains five pounds or less of refrigerant
The designations and regulations of "small appliances" are very important in order to assure the safety of the public. Here are some rules and regulations for small appliances:
Altering the design of certified recovered small appliances in a way that would affect the equipment's ability to meet the certification standards is strictly prohibited
The equipment must meet the minimum requirements for certification
See also
Appliance (disambiguation)
Domestic technology
List of cooking appliances
List of home appliances
Standby power
Yellow goods (retail classification)
References | Small appliance | Physics,Technology | 1,017 |
53,667,030 | https://en.wikipedia.org/wiki/Notch%20tensile%20strength | The notch tensile strength (NTS) of a material is the value given by performing a standard tensile strength test on a notched specimen of the material. The ratio between the NTS and the tensile strength is called the notch strength ratio (NSR).
See also
Charpy impact test
References
Physical quantities
Fracture mechanics
Materials testing
Elasticity (physics) | Notch tensile strength | Physics,Materials_science,Mathematics,Engineering | 75 |
8,358,782 | https://en.wikipedia.org/wiki/CAPS%20%28buffer%29 | CAPS is the common name for 3-(Cyclohexylamino)-1-propanesulfonic acid, a chemical used as buffering agent in biochemistry. The similar substance N-cyclohexyl-2-hydroxyl-3-aminopropanesulfonic acid (CAPSO) is also used as buffering agent in biochemistry. Its useful pH range is 9.7-11.1.
See also
CHES
Good's buffers § List of Good's buffers
References
Buffer solutions
Sulfonic acids | CAPS (buffer) | Chemistry,Biology | 116 |
72,796,265 | https://en.wikipedia.org/wiki/Pleurotus%20geesterani | Pleurotus geesterani, also known as pocket-sized oyster, is an edible species of fungus in the family Pleurotaceae, described as new to science by mycologist Rolf Singer in 1962. It can be cultivated, and it has gained popularity in China (under the name 秀珍菇, xiùzhēn gū) for its umami taste.
See also
List of Pleurotus species
References
External links
Fungi described in 1962
Pleurotaceae
Fungus species | Pleurotus geesterani | Biology | 99 |
44,417,556 | https://en.wikipedia.org/wiki/Behavioral%20observation%20audiometry | Behavioral observation audiometry (BOA) is a type of audiometry (a test of hearing for ability to recognize pitch, volume, etc.) done in children less than six months old.
References
Acoustics
Hearing
Ear procedures | Behavioral observation audiometry | Physics | 47 |
52,328,312 | https://en.wikipedia.org/wiki/G.%20S.%20R.%20Subba%20Rao | Ganugapati Sree Rama Subba Rao (born 21 August 1937) is an Indian natural product chemist and a former chair of the department of sciences at the Indian Institute of Science (IISc). He is known for his researches on dihydroaromatics obtained through Birch reduction of aromatic compounds and is an elected fellow of the Indian National Science Academy, and the Indian Academy of Sciences. The Council of Scientific and Industrial Research, the apex agency of the Government of India for scientific research, awarded him the Shanti Swarup Bhatnagar Prize for Science and Technology, one of the highest Indian science awards, in 1982, for his contributions to chemical sciences.
Biography
G. S. R. Subba Rao, born on 21 August 1937 in Kolavennu, in the south Indian state of Andhra Pradesh to Satyanarayana Ganugapati and Lakshmi, did his college studies at Andhra University from where he graduated in chemistry in 1957 and followed it up with a master's degree in 1959. Subsequently, he enrolled for doctoral studies under the guidance of L. Ramachandra Row and secured the degree of Doctor of Science in 1962, then moved to the University of Manchester to work at the laboratory of Arthur J. Birch, where he obtained a PhD in 1966; he subsequently completed his post-doctoral studies at the Australian National University. On his return to India in 1971, he joined the Indian Institute of Science as a member of faculty at the department of organic chemistry where he set up his research group and served as the dean of the faculty of science, eventually superannuating from academic duties as the chair of the department. He also serves as a director of Novosynth Research Labs and Bal Research Foundation, two entities involved in scientific research.
Rao is married to Lakshmi Sita Valluri and the couple has two sons, Rama and Krishna. The family lives in Bengaluru.
Legacy
Rao's early researches were focused on natural product chemistry and through his studies of the dihydroaromatics, he developed new protocols for synthesising aromatic compounds using Birch reduction. His contributions are reported in the synthesis of steroids and polyketides as well as the studies of the mechanistic aspects of dissolving metal reductions. His work has been documented by way of over 150 articles published in peer-reviewed journals and his writings have been cited by many authors. He has guided 28 doctoral scholars in their studies, has been associated with several journals as a member of their editorial boards and served as a council member of the Indian National Science Academy from 2001 to 2003.
Awards and honours
The Council of Scientific and Industrial Research awarded Subba Rao the Shanti Swarup Bhatnagar Prize, one of the highest Indian science awards, in 1982. He received the Sir C. V. Raman Award of the University Grants Commission of India in 1992 and the T. R. Seshadri 70th Birthday Commemoration Medal of the Indian National Science Academy (INSA) in 1997. INSA honoured him again with the Senior Scientist Award in 2003 and the Golden Jubilee Commemoration Medal in 2004. In between, he received the Alumni Award for Excellence in Research in Science of the Indian Institute of Science in 1998. He is an elected fellow of the Indian National Science Academy and the Indian Academy of Sciences and has delivered a number of award orations including the Professor Venkataraman Memorial Lecture of the National Chemical Laboratory.
Citations
Selected bibliography
See also
Steroids
Polyketides
Notes
References
Recipients of the Shanti Swarup Bhatnagar Award in Chemical Science
1937 births
Scientists from Andhra Pradesh
Living people
Indian organic chemists
Fellows of the Indian Academy of Sciences
Fellows of the Indian National Science Academy
Andhra University alumni
Alumni of the University of Manchester
Australian National University alumni
Academic staff of the Indian Institute of Science
20th-century Indian chemists
Telugu people
People from Krishna district | G. S. R. Subba Rao | Chemistry | 775 |
47,204,911 | https://en.wikipedia.org/wiki/Tide%20dial | A tide dial, also known as a mass dial or a scratch dial, is a sundial marked with the canonical hours rather than or in addition to the standard hours of daylight. Such sundials were particularly common between the 7th and 14th centuries in Europe, at which point they began to be replaced by mechanical clocks. There are more than 3,000 surviving tide dials in England and at least 1,500 in France.
Name
The name tide dial preserves the Old English term tīd, used for hours and canonical hours prior to the Norman Conquest of England, after which the Norman French hour gradually replaced it. The actual Old English name for sundials meant "day-marker".
History
Jews long recited prayers at fixed times of day. Psalm 119 in particular mentions praising God seven times a day, and the apostles Peter and John are mentioned attending afternoon prayers. Christian communities initially followed numerous local traditions with regard to prayer, but Charlemagne compelled his subjects to follow the Roman liturgy, and his son Louis the Pious imposed the Rule of St Benedict upon their religious communities.
The canonical hours adopted by Benedict and imposed by the Frankish kings were the office of Matins in the small hours of the night, Lauds at dawn, Prime at the 1st hour of sunlight, Terce at the 3rd, Sext at the 6th, Nones at the 9th, Vespers at sunset, and Compline before retiring in complete silence. Monks were called to these hours by their abbot or by the ringing of the church bell, with the time between services spent reading the Bible or other religious texts, in manual labour, or in sleep.
The need for these monastic communities and others to organize their times of prayer prompted the establishment of tide dials built into the walls of churches. They began to be used in England in the late 7th century and spread from there across continental Europe through copies of Bede's works and by the Saxon and Hiberno-Scottish missions. Within England, tide dials fell out of favour after the Norman Conquest. By the 13th century, some tide dials – like that at Strasbourg Cathedral – were constructed as independent statues rather than built into the walls of the churches. From the 14th century onwards, the cathedrals and other large churches began to use mechanical clocks and the canonical sundials lost their utility, except in small rural churches, where they remained in use until the 16th century.
There are more than 3,000 surviving tide dials in England and at least 1,500 in France, mainly in Normandy, Touraine, Charente, and at monasteries along the pilgrimage routes to Santiago de Compostela in northwestern Spain.
Design
With Christendom confined to the Northern Hemisphere, the tide dials were often carved vertically onto the south side of the church chancel at eye level near the priest's door. In an abbey or large monastery, dials were carefully carved into the stone walls, while in rural churches they were very often just scratched onto the wall.
Some tide dials have a stone gnomon, but many have a circular hole which is used to hold a more easily replaced or adjusted wooden gnomon. These gnomons were perpendicular to the wall and cast a shadow upon the dial, a semicircle divided into a number of equal sectors. Most dials have supplementary lines marking the other 8 daytime hours, but are characterized by their noting the canonical hours particularly. The lines for the canonical hours may be longer or marked with a dot or cross. The divisions are seldom numbered.
Dials often have holes along the circumference of their semicircle. As additional gnomons were needless and these holes are often quite shallow, T.W. Cole suggests they were used as markers to quickly and easily reconstruct the tide dials following a fresh whitewash of the church walls with chalk or lime.
Examples
Bewcastle Cross
The oldest surviving English tide dial is on the 7th- or 8th-century Bewcastle Cross in the church graveyard of St Cuthbert's in Bewcastle, Cumbria. It is carved on the south face of a Celtic cross at some height from the ground and is divided by five principal lines into four tides. Two of these lines, those for 9am and noon, are crossed at the point. The four spaces are further subdivided so as to give the twelve daylight hours of the Romans. On one side of the dial, there is a vertical line which touches the semicircular border at the second afternoon hour. This may be an accident, but the same kind of line is found on the dial in the crypt of Bamburgh Church, where it marks a later hour of the day. The sundial may have been used for calculating the date of the spring equinox and hence Easter.
Nendrum Sundial
Nendrum Monastery in Northern Ireland, supposedly founded in the 5th century by St Machaoi, now has a reconstructed tide dial. The 9th-century tide dial gives the name of its sculptor and a priest.
Kirkdale Sundial
The 1056–1065 tide dial at St Gregory's Minster, Kirkdale in North Yorkshire has four principal divisions marked by five crossed lines, subdivided by single lines. One marking ¼ of the way between sunrise and noon is an incised cross that would indicate about 9 am at midwinter and 6 am at midsummer. It was dedicated to a "Hawarth".
Gallery
Proper tide dials prominently displaying the canonical hours:
Other ecclesiastical sundials ("Mass dials") used to determine times for prayer and Mass during the same period:
See also
Notes
References
Citations
Bibliography
External links
Exhaustive treatment of Mass dials in the Gironde, France, at Wikicommons
Tide dials in Touraine, France
Tide dials in Tarn, France
Tide dials decoded by P.T.J. Rumley
Tide dials in Sussex, Britain
The Kirkdale Sundial: 1 & 2
The Bishopstone Sundial: 1 & 2
Horology
Timekeeping
Sundials | Tide dial | Physics | 1,237 |
199,077 | https://en.wikipedia.org/wiki/Period%202%20element | A period 2 element is one of the chemical elements in the second row (or period) of the periodic table of the chemical elements. The periodic table is laid out in rows to illustrate recurring (periodic) trends in the chemical behavior of the elements as their atomic number increases; a new row is started when chemical behavior begins to repeat, creating columns of elements with similar properties.
The second period contains the elements lithium, beryllium, boron, carbon, nitrogen, oxygen, fluorine, and neon. In a quantum mechanical description of atomic structure, this period corresponds to the filling of the second (n = 2) shell, more specifically its 2s and 2p subshells. The period 2 elements carbon, nitrogen, oxygen, fluorine and neon obey the octet rule in that they need eight electrons to complete their valence shell, where at most eight electrons can be accommodated: two in the 2s orbital and six in the 2p subshell (lithium and beryllium obey the duet rule, while boron is electron deficient).
Periodic trends
Period 2 is the first period in the periodic table from which periodic trends can be drawn. Period 1, which only contains two elements (hydrogen and helium), is too small to draw any conclusive trends from it, especially because the two elements behave nothing like other s-block elements. Period 2 has much more conclusive trends. For all elements in period 2, as the atomic number increases, the atomic radius of the elements decreases, the electronegativity increases, and the ionization energy increases.
Period 2 only has two metals (lithium and beryllium) of eight elements, less than for any subsequent period both by number and by proportion. It also has the largest number of nonmetals, namely five, among all periods. The elements in period 2 often have the most extreme properties in their respective groups; for example, fluorine is the most reactive halogen, neon is the most inert noble gas, and lithium is the least reactive alkali metal.
All period 2 elements completely obey the Madelung rule; in period 2, lithium and beryllium fill the 2s subshell, and boron, carbon, nitrogen, oxygen, fluorine, and neon fill the 2p subshell. The period shares this trait with periods 1 and 3, none of which contain transition elements or inner transition elements, which often vary from the rule.
Atomic number | Symbol | Element | Block | Electron configuration
3 | Li | Lithium | s-block | [He] 2s1
4 | Be | Beryllium | s-block | [He] 2s2
5 | B | Boron | p-block | [He] 2s2 2p1
6 | C | Carbon | p-block | [He] 2s2 2p2
7 | N | Nitrogen | p-block | [He] 2s2 2p3
8 | O | Oxygen | p-block | [He] 2s2 2p4
9 | F | Fluorine | p-block | [He] 2s2 2p5
10 | Ne | Neon | p-block | [He] 2s2 2p6
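The configurations in the table above follow mechanically from the Madelung filling order; the short sketch below regenerates them, assuming only the subshell capacities 2(2l + 1). The script and its names are illustrative, not drawn from any standard library.

```python
# Fill subshells in Madelung (n + l, then n) order and print the resulting
# ground-state configurations for the period 2 elements (Z = 3..10).
SUBSHELLS = [(1, 0), (2, 0), (2, 1), (3, 0), (3, 1)]   # (n, l), already in Madelung order
LETTER = {0: "s", 1: "p", 2: "d", 3: "f"}
NAMES = {3: "Li", 4: "Be", 5: "B", 6: "C", 7: "N", 8: "O", 9: "F", 10: "Ne"}

def configuration(z):
    parts, remaining = [], z
    for n, l in SUBSHELLS:
        if remaining == 0:
            break
        electrons = min(remaining, 2 * (2 * l + 1))    # subshell capacity is 2(2l + 1)
        parts.append(f"{n}{LETTER[l]}{electrons}")
        remaining -= electrons
    return " ".join(parts)

for z in range(3, 11):
    print(f"{NAMES[z]:>2}: {configuration(z)}")        # e.g. "Li: 1s2 2s1" ... "Ne: 1s2 2s2 2p6"
```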
Lithium
Lithium (Li) is an alkali metal with atomic number 3, occurring naturally in two isotopes: 6Li and 7Li. The two make up all of the naturally occurring lithium on Earth, although further isotopes have been synthesized. In ionic compounds, lithium loses an electron to become positively charged, forming the cation Li+. Lithium is the first alkali metal in the periodic table, and the first metal of any kind in the periodic table. At standard temperature and pressure, lithium is a soft, silver-white, highly reactive metal. With a density of 0.534 g⋅cm−3, lithium is the lightest metal and the least dense solid element.
Lithium is one of the few elements synthesized in the Big Bang.
Lithium is the 31st most abundant element on earth, occurring in concentrations of between 20 and 70 ppm by weight, but due to its high reactivity it is only found naturally in compounds.
Lithium salts are used in the pharmacology industry as mood stabilising drugs. They are used in the treatment of bipolar disorder, where they have a role in treating depression and mania and may reduce the chances of suicide. The most common compounds used are lithium carbonate, Li2CO3, lithium citrate, Li3C6H5O7, lithium sulphate, Li2SO4, and lithium orotate, LiC5H3N2O4·H2O. Lithium is also used in batteries as an anode and its alloys with aluminium, cadmium, copper and manganese are used to make high performance parts for aircraft, most notably the external tank of the Space Shuttle.
Beryllium
Beryllium (Be) is the chemical element with atomic number 4, occurring in the form of 9Be. At standard temperature and pressure, beryllium is a strong, steel-grey, light-weight, brittle, bivalent alkaline earth metal, with a density of 1.85 g⋅cm−3. It also has one of the highest melting points of all the light metals. Beryllium's most common isotope is 9Be, which contains 4 protons and 5 neutrons. It makes up almost 100% of all naturally occurring beryllium and is its only stable isotope; however other isotopes have been synthesised. In ionic compounds, beryllium loses its two valence electrons to form the cation, Be2+.
Small amounts of beryllium were synthesised during the Big Bang, although most of it decayed or reacted further to create larger nuclei, like carbon, nitrogen or oxygen. Beryllium is a component of 100 out of 4000 known minerals, such as bertrandite, Be4Si2O7(OH)2, beryl, Al2Be3Si6O18, chrysoberyl, Al2BeO4, and phenakite, Be2SiO4. Precious forms of beryl are aquamarine, red beryl and emerald. The most common sources of beryllium used commercially are beryl and bertrandite and production of it involves the reduction of beryllium fluoride with magnesium metal or the electrolysis of molten beryllium chloride, containing some sodium chloride as beryllium chloride is a poor conductor of electricity.
Due to its stiffness, light weight, and dimensional stability over a wide temperature range, beryllium metal is used as a structural material in aircraft, missiles and communication satellites. It is used as an alloying agent in beryllium copper, which is used to make electrical components due to its high electrical and heat conductivity. Sheets of beryllium are used in X-ray detectors to filter out visible light and let only X-rays through. It is used as a neutron moderator in nuclear reactors because light nuclei are more effective at slowing down neutrons than heavy nuclei. Beryllium's low weight and high rigidity also make it useful in the construction of tweeters in loudspeakers.
Beryllium and beryllium compounds are classified by the International Agency for Research on Cancer as Group 1 carcinogens; they are carcinogenic to both animals and humans. Chronic berylliosis is a pulmonary and systemic granulomatous disease caused by exposure to beryllium. Between 1% and 15% of people are sensitive to beryllium and may develop an inflammatory reaction in their respiratory system and skin, called chronic beryllium disease or berylliosis. The body's immune system recognises the beryllium as foreign particles and mounts an attack against them, usually in the lungs where they are breathed in. This can cause fever, fatigue, weakness, night sweats and difficulty in breathing.
Boron
Boron (B) is the chemical element with atomic number 5, occurring as 10B and 11B. At standard temperature and pressure, boron is a trivalent metalloid that has several different allotropes. Amorphous boron is a brown powder formed as a product of many chemical reactions. Crystalline boron is a very hard, black material with a high melting point and exists in many polymorphs: two rhombohedral forms, α-boron and β-boron containing 12 and 106.7 atoms in the rhombohedral unit cell respectively, and 50-atom tetragonal boron are the most common. Boron has a density of 2.34 g⋅cm−3. Boron's most common isotope is 11B at 80.22%, which contains 5 protons and 6 neutrons. The other common isotope is 10B at 19.78%, which contains 5 protons and 5 neutrons. These are the only stable isotopes of boron; however other isotopes have been synthesised. Boron forms covalent bonds with other nonmetals and has oxidation states of 1, 2, 3 and 4.
Boron does not occur naturally as a free element, but in compounds such as borates. The most common sources of boron are tourmaline, borax, Na2B4O5(OH)4·8H2O, and kernite, Na2B4O5(OH)4·2H2O. It is difficult to obtain pure boron. It can be made through the magnesium reduction of boron trioxide, B2O3. This oxide is made by melting boric acid, B(OH)3, which in turn is obtained from borax. Small amounts of pure boron can be made by the thermal decomposition of boron bromide, BBr3, in hydrogen gas over hot tantalum wire, which acts as a catalyst. The most commercially important sources of boron are: sodium tetraborate pentahydrate, Na2B4O7 · 5H2O, which is used in large amounts in making insulating fiberglass and sodium perborate bleach; boron carbide, a ceramic material, is used to make armour materials, especially in bulletproof vests for soldiers and police officers; orthoboric acid, H3BO3 or boric acid, used in the production of textile fiberglass and flat panel displays; sodium tetraborate decahydrate, Na2B4O7 · 10H2O or borax, used in the production of adhesives; and the isotope boron-10 is used as a control for nuclear reactors, as a shield for nuclear radiation, and in instruments used for detecting neutrons.
Boron is an essential plant micronutrient, required for cell wall strength and development, cell division, seed and fruit development, sugar transport and hormone development. However, high soil concentrations of over 1.0 ppm can cause necrosis in leaves and poor growth. Levels as low as 0.8 ppm can cause these symptoms to appear in plants that are particularly boron-sensitive. Most plants, even those tolerant of boron in the soil, will show symptoms of boron toxicity when boron levels are higher than 1.8 ppm. In animals, boron is an ultratrace element; in human diets, daily intake ranges from 2.1 to 4.3 mg boron/kg body weight (bw)/day. It is also used as a supplement for the prevention and treatment of osteoporosis and arthritis.
Carbon
Carbon is the chemical element with atomic number 6, occurring as 12C, 13C and 14C. At standard temperature and pressure, carbon is a solid, occurring in many different allotropes, the most common of which are graphite, diamond, the fullerenes and amorphous carbon. Graphite is a soft, hexagonal crystalline, opaque black semimetal with very good conductive and thermodynamically stable properties. Diamond, however, is a highly transparent colourless cubic crystal with poor conductive properties, is the hardest known naturally occurring mineral and has the highest refractive index of all gemstones. In contrast to the crystal lattice structure of diamond and graphite, the fullerenes are molecules, named after Richard Buckminster Fuller, whose architecture the molecules resemble. There are several different fullerenes, the most widely known being the "buckyball" C60. Little is known about the fullerenes and they are a current subject of research. There is also amorphous carbon, which is carbon without any crystalline structure. In mineralogy, the term is used to refer to soot and coal, although these are not truly amorphous as they contain small amounts of graphite or diamond. Carbon's most common isotope at 98.9% is 12C, with six protons and six neutrons. 13C is also stable, with six protons and seven neutrons, at 1.1%. Trace amounts of 14C also occur naturally but this isotope is radioactive and decays with a half life of 5730 years; it is used for radiocarbon dating. Other isotopes of carbon have also been synthesised. Carbon forms covalent bonds with other non-metals with an oxidation state of −4, −2, +2 or +4.
Carbon is the fourth most abundant element in the universe by mass after hydrogen, helium and oxygen and is the second most abundant element in the human body by mass after oxygen, the third most abundant by number of atoms. There are an almost infinite number of compounds that contain carbon due to carbon's ability to form long stable chains of C — C bonds. The simplest carbon-containing molecules are the hydrocarbons, which contain carbon and hydrogen, although they sometimes contain other elements in functional groups. Hydrocarbons are used as fossil fuels and to manufacture plastics and petrochemicals. All organic compounds, those essential for life, contain at least one atom of carbon. When combined with oxygen and hydrogen, carbon can form many groups of important biological compounds including sugars, lignans, chitins, alcohols, fats, and aromatic esters, carotenoids and terpenes. With nitrogen it forms alkaloids, and with the addition of sulfur also it forms antibiotics, amino acids, and rubber products. With the addition of phosphorus to these other elements, it forms DNA and RNA, the chemical-code carriers of life, and adenosine triphosphate (ATP), the most important energy-transfer molecule in all living cells.
Nitrogen
Nitrogen is the chemical element with atomic number 7, the symbol N and atomic mass 14.00674 u. Elemental nitrogen is a colorless, odorless, tasteless and mostly inert diatomic gas at standard conditions, constituting 78.08% by volume of Earth's atmosphere. The element nitrogen was discovered as a separable component of air, by Scottish physician Daniel Rutherford, in 1772. It occurs naturally in form of two isotopes: nitrogen-14 and nitrogen-15.
Many industrially important compounds, such as ammonia, nitric acid, organic nitrates (propellants and explosives), and cyanides, contain nitrogen. The extremely strong bond in elemental nitrogen dominates nitrogen chemistry, causing difficulty for both organisms and industry in breaking the bond to convert the molecule into useful compounds, but at the same time causing release of large amounts of often useful energy when the compounds burn, explode, or decay back into nitrogen gas.
Nitrogen occurs in all living organisms, and the nitrogen cycle describes movement of the element from air into the biosphere and organic compounds, then back into the atmosphere. Synthetically produced nitrates are key ingredients of industrial fertilizers, and also key pollutants in causing the eutrophication of water systems. Nitrogen is a constituent element of amino acids and thus of proteins, and of nucleic acids (DNA and RNA). It resides in the chemical structure of almost all neurotransmitters, and is a defining component of alkaloids, biological molecules produced by many organisms.
Oxygen
Oxygen is the chemical element with atomic number 8, occurring mostly as 16O, but also 17O and 18O.
Oxygen is the third-most common element by mass in the universe (although there are more carbon atoms, each carbon atom is lighter). It is a highly electronegative, non-metallic element that remains a diatomic gas down to very low temperatures. Only fluorine is more reactive among non-metallic elements. It is two electrons short of a full octet and readily takes electrons from other elements. It reacts violently with alkali metals and white phosphorus at room temperature and less violently with alkaline earth metals heavier than magnesium. At higher temperatures it burns most other metals and many non-metals (including hydrogen, carbon, and sulfur). Many oxides are extremely stable substances difficult to decompose—like water, carbon dioxide, alumina, silica, and iron oxides (the latter often appearing as rust). Oxygen is part of substances best described as some salts of metals and oxygen-containing acids (thus nitrates, sulfates, phosphates, silicates, and carbonates).
Oxygen is essential to all life. Plants and phytoplankton photosynthesize carbon dioxide and water, both oxides, in the presence of sunlight to form sugars with the release of oxygen. The sugars are then turned into such substances as cellulose and (with nitrogen and often sulfur) proteins and other essential substances of life. Animals especially, but also fungi and bacteria, ultimately depend upon photosynthesizing plants and phytoplankton for food and oxygen.
Fire uses oxygen to oxidize compounds typically of carbon and hydrogen to water and carbon dioxide (although other elements may be involved) whether in uncontrolled conflagrations that destroy buildings and forests or the controlled fire within engines or that supply electrical energy from turbines, heat for keeping buildings warm, or the motive force that drives vehicles.
Oxygen forms roughly 21% of the Earth's atmosphere; all of this oxygen is the result of photosynthesis. Pure oxygen has use in medical treatment of people who have respiratory difficulties. Excess oxygen is toxic.
Oxygen was originally associated with the formation of acids—until some acids were shown to not have oxygen in them. Oxygen is named for its formation of acids, especially with non-metals. Some oxides of some non-metals are extremely acidic, like sulfur trioxide, which forms sulfuric acid on contact with water. Most oxides with metals are alkaline, some extremely so, like potassium oxide. Some metallic oxides are amphoteric, like aluminum oxide, which means that they can react with both acids and bases.
Although oxygen is normally a diatomic gas, oxygen can form an allotrope known as ozone. Ozone is a triatomic gas even more reactive than oxygen. Unlike regular diatomic oxygen, ozone is a toxic material generally considered a pollutant. In the upper atmosphere, some oxygen forms ozone which has the property of absorbing dangerous ultraviolet rays within the ozone layer. Land life was impossible before the formation of an ozone layer.
Fluorine
Fluorine is the chemical element with atomic number 9. It occurs naturally in its only stable form 19F.
Fluorine is a pale-yellow, diatomic gas under normal conditions and down to very low temperatures. Short one electron of the highly stable octet in each atom, fluorine molecules are unstable enough that they easily snap, with loose fluorine atoms tending to grab single electrons from just about any other element. Fluorine is the most reactive of all elements, and it even attacks many oxides to replace oxygen with fluorine. Fluorine even attacks silica, one of the favored materials for transporting strong acids, and burns asbestos. It attacks common salt, one of the most stable compounds, with the release of chlorine. It never appears uncombined in nature and almost never stays uncombined for long. It burns hydrogen spontaneously if either is liquid or gaseous—even at temperatures close to absolute zero. It is extremely difficult to isolate from any compounds, let alone keep uncombined.
Fluorine gas is extremely dangerous because it attacks almost all organic material, including live flesh. Many of the binary compounds that it forms (called fluorides) are themselves highly toxic, including soluble fluorides and especially hydrogen fluoride. Fluorine forms very strong bonds with many elements. With sulfur it can form the extremely stable and chemically inert sulfur hexafluoride; with carbon it can form the remarkable material Teflon that is a stable and non-combustible solid with a high melting point and a very low coefficient of friction that makes it an excellent liner for cooking pans and raincoats. Fluorine-carbon compounds include some unique plastics.
Fluoride compounds are also used in the making of toothpaste.
Neon
Neon is the chemical element with atomic number 10, occurring as 20Ne, 21Ne and 22Ne.
Neon is a monatomic gas. With a complete octet of outer electrons it is highly resistant to removal of any electron, and it cannot accept an electron from anything. Neon has no tendency to form any normal compounds under normal temperatures and pressures; it is effectively inert. It is one of the so-called "noble gases".
Neon is a trace component of the atmosphere without any biological role.
Notes
References
External links
Periods (periodic table)
Pages containing element color directly | Period 2 element | Chemistry | 4,571 |
2,151,928 | https://en.wikipedia.org/wiki/M-SG%20reducing%20agent | In M-SG an alkali metal is absorbed into silica gel at elevated temperatures. The resulting black powder material is an effective reducing agent and safe to handle as opposed to the pure metal. The material can also be used as a desiccant and as a hydrogen source.
The metal is either sodium or a sodium–potassium alloy (Na2K). The molten metal is mixed with silica gel under constant agitation at room temperature. This phase 0 material must be handled in an inert atmosphere. Heating phase 0 takes it to phase I; when this material is exposed to dry oxygen, its reducing power is not affected. On further heating, the resulting phase II material can be handled safely in an ambient environment.
The metal reacts with the silica gel in an exothermic reaction in which Na4Si4 nanoparticles are formed. The powder reacts with water to form hydrogen.
Compounds such as biphenyl and naphthalene are reduced by the powder and form highly coloured radical anions. The powder can also be introduced in a column chromatography setup and eluted with organic reactants in order to probe the reducing power. The powder is mixed with additional (wet) silica gel which provides additional hydrogen. A Birch reduction of naphthalene takes 5 minutes elution time. The column converts benzyl chloride to bibenzyl in a Wurtz coupling and in a similar fashion dibenzothiophene is reduced to biphenyl.
See also
Potassium graphite
References
Desiccants
Reducing agents | M-SG reducing agent | Physics,Chemistry | 317 |
70,418,834 | https://en.wikipedia.org/wiki/Spat%20%28distance%20unit%29 | The spat (symbol S) is an obsolete unit of distance used in astronomy. It is equal to . A light-year is about .
References
Units of length
Obsolete units of measurement | Spat (distance unit) | Mathematics | 38 |
16,509,656 | https://en.wikipedia.org/wiki/Power%20plant%20efficiency | The efficiency of a plant is the percentage of the total energy content of a power plant's fuel that is converted into electricity. The remaining energy is usually lost to the environment as heat unless it is used for district heating.
Rating efficiency is complicated by the fact that there are two different ways to measure the fuel energy input:
LCV = Lower Calorific Value (same as NCV = Net Calorific Value) neglects thermal energy gained from exhaust H2O condensation
HCV = Higher Calorific Value (same as GCV, Gross Calorific Value) includes exhaust H2O condensed to liquid water
Depending on which convention is used, a difference of about 10% in the apparent efficiency of a gas-fired plant can arise, so it is very important to know which convention, HCV or LCV (GCV or NCV), is being used.
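A rough worked example shows where a difference of that order comes from. The heating values below are approximate figures assumed for methane, the main component of natural gas, and the electrical output is invented for illustration; none of these numbers come from the text above.

```python
# Same plant, same electrical output, two fuel-energy conventions.
HHV = 55.5   # MJ/kg, gross (higher) calorific value of methane, water condensed (assumed)
LHV = 50.0   # MJ/kg, net (lower) calorific value of methane, water as vapour (assumed)

electricity_out = 30.0                              # MJ of electricity per kg of fuel (illustrative)
print(f"LCV basis: {electricity_out / LHV:.1%}")    # 60.0%
print(f"HCV basis: {electricity_out / HHV:.1%}")    # 54.1%, about 10% lower in relative terms
```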
Heat rate
Heat rate is a term commonly used in power stations to indicate the power plant efficiency.
The heat rate is the inverse of the efficiency: a lower heat rate is better.
The term efficiency is a dimensionless measure (sometimes quoted in percent), and strictly heat rate is dimensionless as well, but often written as energy per energy in relevant units. In SI-units it is joule per joule, but often also expressed as joule/kilowatt hour or British thermal units/kWh. This is because kilowatt hour is often used when referring to electrical energy and joule or Btu is commonly used when referring to thermal energy.
Heat rate in the context of power plants can be thought of as the input needed to produce one unit of output. It generally indicates the amount of fuel required to generate one unit of electricity. Performance parameters tracked for any thermal power plant like efficiency, fuel costs, plant load factor, emissions level, etc. are a function of the station heat rate and can be linked directly.
Given that heat rate and efficiency are inversely related to each other, it is easy to convert from one to the other.
A 100% efficiency implies equal input and output: for 1 kWh of output, the input is 1 kWh. This thermal energy input of 1 kWh = 3.6 MJ = 3,412 Btu
Therefore, the heat rate of a 100% efficient plant is simply 1, or 1 kWh/kWh, or 3.6 MJ/kWh, or 3,412 Btu/kWh
To express the efficiency of a generator or power plant as a percentage, invert the value when dimensionless notation or the same units are used. For example:
A heat rate value of 5 gives an efficiency factor of 20%.
A heat rate value of 2 kWh/kWh gives an efficiency factor of 50%.
A heat rate value of 4 MJ/MJ gives an efficiency factor of 25%.
For other units, make sure to use a corresponding conversion factor for the units. For example, if using Btu/kWh, use a conversion factor of 3,412 Btu per kWh to calculate the efficiency factor. For example, if the heat rate is 10,500 Btu/kWh, the efficiency is 32.5% (since 3,412 Btu / 10,500 Btu = 32.5%).
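The same conversion can be wrapped in a couple of lines of code; the function below simply applies the 3,412 Btu/kWh factor discussed above.

```python
BTU_PER_KWH = 3412.0   # thermal energy equivalent of 1 kWh of electricity

def efficiency_from_heat_rate(heat_rate_btu_per_kwh):
    """Return plant efficiency as a fraction, given a heat rate in Btu/kWh."""
    return BTU_PER_KWH / heat_rate_btu_per_kwh

for heat_rate in (3412, 7000, 10500):
    print(f"{heat_rate:>6} Btu/kWh -> {efficiency_from_heat_rate(heat_rate):.1%}")
# 3412 Btu/kWh -> 100.0%, 7000 Btu/kWh -> 48.7%, 10500 Btu/kWh -> 32.5%
```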
The higher the heat rate (i.e. the more energy input that is required to produce one unit of electric output), the lower the efficiency of the power plant.
The U.S. Energy Information Administration gives a general explanation for how to translate a heat rate value into a power plant's efficiency value.
Most power plants have a target or design heat rate. If the actual heat rate does not match the target, the difference between the actual and target heat rate is the heat rate deviation.
See also
Fuel efficiency
Energy conversion efficiency
Thermal efficiency
Electrical efficiency
Mechanical efficiency
Cost of electricity by source
References
Thermodynamic properties
Energy conversion
Energy conservation
Engineering thermodynamics
Energy economics
Power stations | Power plant efficiency | Physics,Chemistry,Mathematics,Engineering,Environmental_science | 818 |
3,754,835 | https://en.wikipedia.org/wiki/Lemniscate%20elliptic%20functions | In mathematics, the lemniscate elliptic functions are elliptic functions related to the arc length of the lemniscate of Bernoulli. They were first studied by Giulio Fagnano in 1718 and later by Leonhard Euler and Carl Friedrich Gauss, among others.
The lemniscate sine and lemniscate cosine functions, usually written with the symbols sl and cl (other notations are sometimes used instead), are analogous to the trigonometric functions sine and cosine. While the trigonometric sine relates the arc length to the chord length in a unit-diameter circle, the lemniscate sine relates the arc length to the chord length of a lemniscate.
The lemniscate functions have periods related to a number called the lemniscate constant, the ratio of a lemniscate's perimeter to its diameter. This number is a quartic analog of the (quadratic) constant π, the ratio of perimeter to diameter of a circle.
As complex functions, and have a square period lattice (a multiple of the Gaussian integers) with fundamental periods and are a special case of two Jacobi elliptic functions on that lattice, .
Similarly, the hyperbolic lemniscate sine and hyperbolic lemniscate cosine have a square period lattice with fundamental periods
The lemniscate functions and the hyperbolic lemniscate functions are related to the Weierstrass elliptic function .
Lemniscate sine and cosine functions
Definitions
The lemniscate functions and can be defined as the solution to the initial value problem:
or equivalently as the inverses of an elliptic integral, the Schwarz–Christoffel map from the complex unit disk to a square with corners
Beyond that square, the functions can be analytically continued to the whole complex plane by a series of reflections.
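On the real interval the defining elliptic integral takes the familiar form arcsl(x) = ∫₀ˣ dt/√(1 − t⁴), and the lemniscate sine can be recovered by inverting it numerically. The sketch below does exactly that; it is a numerical illustration only, valid for arguments between 0 and the quarter period, and uses the conventional symbols sl and ϖ.

```python
# Lemniscate sine on [0, ϖ/2] by numerically inverting arcsl(x) = ∫_0^x dt / sqrt(1 - t^4).
from scipy.integrate import quad
from scipy.optimize import brentq

def arcsl(x):
    # quad copes with the integrable endpoint singularity at t = 1
    value, _ = quad(lambda t: (1.0 - t**4) ** -0.5, 0.0, x)
    return value

HALF_PERIOD = arcsl(1.0)                 # ϖ/2 ≈ 1.31103

def sl(s):
    """Inverse of arcsl for 0 <= s <= ϖ/2."""
    return brentq(lambda x: arcsl(x) - s, 0.0, 1.0)

print(2 * HALF_PERIOD)                   # the lemniscate constant ϖ ≈ 2.62206
print(sl(HALF_PERIOD / 2))               # ≈ 0.6436, i.e. sqrt(sqrt(2) - 1)
```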
By comparison, the circular sine and cosine can be defined as the solution to the initial value problem:
or as inverses of a map from the upper half-plane to a half-infinite strip with real part between and positive imaginary part:
Relation to the lemniscate constant
The lemniscate functions have minimal real period , minimal imaginary period and fundamental complex periods and for a constant called the lemniscate constant,
The lemniscate functions satisfy the basic relation analogous to the relation
The lemniscate constant is a close analog of the circle constant , and many identities involving have analogues involving , as identities involving the trigonometric functions have analogues involving the lemniscate functions. For example, Viète's formula for can be written:
An analogous formula for is:
The Machin formula for is and several similar formulas for can be developed using trigonometric angle sum identities, e.g. Euler's formula . Analogous formulas can be developed for , including the following found by Gauss:
The lemniscate and circle constants were found by Gauss to be related to each other by the arithmetic-geometric mean:
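Numerically, the relation can be checked in a few lines: Gauss's observation amounts to M(1, √2) = π/ϖ, so an AGM iteration started from 1 and √2 recovers the lemniscate constant. The code is a sketch; the comparison value ϖ ≈ 2.62205755 is the standard one.

```python
# Arithmetic-geometric mean iteration and Gauss's relation M(1, sqrt(2)) = pi / ϖ.
from math import pi, sqrt

def agm(a, b, tol=1e-15):
    while abs(a - b) > tol:
        a, b = (a + b) / 2.0, sqrt(a * b)
    return a

m = agm(1.0, sqrt(2.0))
print(m)        # ≈ 1.1981402347
print(pi / m)   # ≈ 2.6220575543, the lemniscate constant ϖ
```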
Argument identities
Zeros, poles and symmetries
The lemniscate functions and are even and odd functions, respectively,
At translations of and are exchanged, and at translations of they are additionally rotated and reciprocated:
Doubling these to translations by a unit-Gaussian-integer multiple of (that is, or ), negates each function, an involution:
As a result, both functions are invariant under translation by an even-Gaussian-integer multiple of . That is, a displacement with for integers , , and .
This makes them elliptic functions (doubly periodic meromorphic functions in the complex plane) with a diagonal square period lattice of fundamental periods and . Elliptic functions with a square period lattice are more symmetrical than arbitrary elliptic functions, following the symmetries of the square.
Reflections and quarter-turn rotations of lemniscate function arguments have simple expressions:
The function has simple zeros at Gaussian integer multiples of , complex numbers of the form for integers and . It has simple poles at Gaussian half-integer multiples of , complex numbers of the form , with residues . The function is reflected and offset from the function, . It has zeros for arguments and poles for arguments with residues
Also
for some and
The last formula is a special case of complex multiplication. Analogous formulas can be given for where is any Gaussian integer – the function has complex multiplication by .
There are also infinite series reflecting the distribution of the zeros and poles of :
Pythagorean-like identity
The lemniscate functions satisfy a Pythagorean-like identity:
As a result, the parametric equation parametrizes the quartic curve
This identity can alternately be rewritten:
Defining a tangent-sum operator as gives:
The functions and satisfy another Pythagorean-like identity:
Derivatives and integrals
The derivatives are as follows:
The second derivatives of lemniscate sine and lemniscate cosine are their negative duplicated cubes:
The lemniscate functions can be integrated using the inverse tangent function:
Argument sum and multiple identities
Like the trigonometric functions, the lemniscate functions satisfy argument sum and difference identities. The original identity used by Fagnano for bisection of the lemniscate was:
The derivative and Pythagorean-like identities can be used to rework the identity used by Fagano in terms of and . Defining a tangent-sum operator and tangent-difference operator the argument sum and difference identities can be expressed as:
These resemble their trigonometric analogs:
In particular, to compute the complex-valued functions in real components,
Gauss discovered that
where such that both sides are well-defined.
Also
where such that both sides are well-defined; this resembles the trigonometric analog
Bisection formulas:
Duplication formulas:
Triplication formulas:
Note the "reverse symmetry" of the coefficients of numerator and denominator of . This phenomenon can be observed in multiplication formulas for where whenever and is odd.
Lemnatomic polynomials
Let be the lattice
Furthermore, let , , , , (where ), be odd, be odd, and . Then
for some coprime polynomials
and some where
and
where is any -torsion generator (i.e. and generates as an -module). Examples of -torsion generators include and . The polynomial is called the -th lemnatomic polynomial. It is monic and is irreducible over . The lemnatomic polynomials are the "lemniscate analogs" of the cyclotomic polynomials,
The -th lemnatomic polynomial is the minimal polynomial of in . For convenience, let and . So for example, the minimal polynomial of (and also of ) in is
and
(an equivalent expression is given in the table below). Another example is
which is the minimal polynomial of (and also of ) in
If is prime and is positive and odd, then
which can be compared to the cyclotomic analog
Specific values
Just as for the trigonometric functions, values of the lemniscate functions can be computed for divisions of the lemniscate into parts of equal length, using only basic arithmetic and square roots, if and only if is of the form where is a non-negative integer and each (if any) is a distinct Fermat prime.
Relation to geometric shapes
Arc length of Bernoulli's lemniscate
, the lemniscate of Bernoulli with unit distance from its center to its furthest point (i.e. with unit "half-width"), is essential in the theory of the lemniscate elliptic functions. It can be characterized in at least three ways:
Angular characterization: Given two points and which are unit distance apart, let be the reflection of about . Then is the closure of the locus of the points such that is a right angle.
Focal characterization: is the locus of points in the plane such that the product of their distances from the two focal points and is the constant .
Explicit coordinate characterization: is a quartic curve satisfying the polar equation or the Cartesian equation
The perimeter of is .
The points on at distance from the origin are the intersections of the circle and the hyperbola . The intersection in the positive quadrant has Cartesian coordinates:
Using this parametrization with for a quarter of , the arc length from the origin to a point is:
Likewise, the arc length from to is:
Or in the inverse direction, the lemniscate sine and cosine functions give the distance from the origin as functions of arc length from the origin and the point , respectively.
Analogously, the circular sine and cosine functions relate the chord length to the arc length for the unit diameter circle with polar equation or Cartesian equation using the same argument above but with the parametrization:
Alternatively, just as the unit circle is parametrized in terms of the arc length from the point by
is parametrized in terms of the arc length from the point by
The notation is used solely for the purposes of this article; in references, notation for general Jacobi elliptic functions is used instead.
The lemniscate integral and lemniscate functions satisfy an argument duplication identity discovered by Fagnano in 1718:
Later mathematicians generalized this result. Analogously to the constructible polygons in the circle, the lemniscate can be divided into sections of equal arc length using only straightedge and compass if and only if is of the form where is a non-negative integer and each (if any) is a distinct Fermat prime. The "if" part of the theorem was proved by Niels Abel in 1827–1828, and the "only if" part was proved by Michael Rosen in 1981. Equivalently, the lemniscate can be divided into sections of equal arc length using only straightedge and compass if and only if is a power of two (where is Euler's totient function). The lemniscate is not assumed to be already drawn, as that would go against the rules of straightedge and compass constructions; instead, it is assumed that we are given only two points by which the lemniscate is defined, such as its center and radial point (one of the two points on the lemniscate such that their distance from the center is maximal) or its two foci.
Let . Then the -division points for are the points
where is the floor function. See below for some specific values of .
Arc length of rectangular elastica
The inverse lemniscate sine also describes the arc length relative to the coordinate of the rectangular elastica. This curve has coordinate and arc length:
The rectangular elastica solves a problem posed by Jacob Bernoulli, in 1691, to describe the shape of an idealized flexible rod fixed in a vertical orientation at the bottom end, and pulled down by a weight from the far end until it has been bent horizontal. Bernoulli's proposed solution established Euler–Bernoulli beam theory, further developed by Euler in the 18th century.
Elliptic characterization
Let be a point on the ellipse in the first quadrant and let be the projection of on the unit circle . The distance between the origin and the point is a function of (the angle where ; equivalently the length of the circular arc ). The parameter is given by
If is the projection of on the x-axis and if is the projection of on the x-axis, then the lemniscate elliptic functions are given by
Series identities
Power series
The power series expansion of the lemniscate sine at the origin is
where the coefficients are determined as follows:
where stands for all three-term compositions of . For example, to evaluate , it can be seen that there are only six compositions of that give a nonzero contribution to the sum: and , so
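For orientation, the first few terms (which can be checked directly from \operatorname{sl}(0)=0, \operatorname{sl}'(0)=1 and \operatorname{sl}'' = -2\operatorname{sl}^3) are
 \operatorname{sl} x = x - \frac{x^5}{10} + \frac{x^9}{120} - \cdots .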
The expansion can be equivalently written as
where
The power series expansion of at the origin is
where if is even and
if is odd.
The expansion can be equivalently written as
where
For the lemniscate cosine,
where
Ramanujan's cos/cosh identity
Ramanujan's famous cos/cosh identity states that if
then
There is a close relation between the lemniscate functions and . Indeed,
and
Continued fractions
For :
Methods of computation
Several methods of computing involve first making the change of variables and then computing
A hyperbolic series method:
Fourier series method:
The lemniscate functions can be computed more rapidly by
where
are the Jacobi theta functions.
Fourier series for the logarithm of the lemniscate sine:
The following series identities were discovered by Ramanujan:
The functions and analogous to and on the unit circle have the following Fourier and hyperbolic series expansions:
The following identities come from product representations of the theta functions:
A similar formula involving the function can be given.
The lemniscate functions as a ratio of entire functions
Since the lemniscate sine is a meromorphic function in the whole complex plane, it can be written as a ratio of entire functions. Gauss showed that has the following product expansion, reflecting the distribution of its zeros and poles:
where
Here, and denote, respectively, the zeros and poles of which are in the quadrant . A proof can be found in the references. Importantly, the infinite products converge to the same value for all possible orders in which their terms can be multiplied, as a consequence of uniform convergence.
Proof by logarithmic differentiation
It can be easily seen (using uniform and absolute convergence arguments to justify interchanging of limiting operations) that
(where are the Hurwitz numbers defined in Lemniscate elliptic functions § Hurwitz numbers) and
Therefore
It is known that
Then from
and
we get
Hence
Therefore
for some constant for but this result holds for all by analytic continuation. Using
gives which completes the proof.
Proof by Liouville's theorem
Let
with patches at removable singularities.
The shifting formulas
imply that is an elliptic function with periods and , just as .
It follows that the function defined by
when patched, is an elliptic function without poles. By Liouville's theorem, it is a constant. By using , and , this constant is , which proves the theorem.
Gauss conjectured that (this later turned out to be true) and commented that this “is most remarkable and a proof of this property promises the most serious increase in analysis”. Gauss expanded the products for and as infinite series (see below). He also discovered several identities involving the functions and , such as
and
Thanks to a certain theorem on splitting limits, we are allowed to multiply out the infinite products and collect like powers of . Doing so gives the following power series expansions that are convergent everywhere in the complex plane:
This can be contrasted with the power series of which has only finite radius of convergence (because it is not entire).
We define and by
Then the lemniscate cosine can be written as
where
Furthermore, the identities
and the Pythagorean-like identities
hold for all .
The quasi-addition formulas
(where ) imply further multiplication formulas for and by recursion.
Gauss' and satisfy the following system of differential equations:
where . Both and satisfy the differential equation
The functions can be also expressed by integrals involving elliptic functions:
where the contours do not cross the poles; while the innermost integrals are path-independent, the outermost ones are path-dependent; however, the path dependence cancels out with the non-injectivity of the complex exponential function.
An alternative way of expressing the lemniscate functions as a ratio of entire functions involves the theta functions (see Lemniscate elliptic functions § Methods of computation); the relation between and is
where .
Relation to other functions
Relation to Weierstrass and Jacobi elliptic functions
The lemniscate functions are closely related to the Weierstrass elliptic function (the "lemniscatic case"), with invariants and . This lattice has fundamental periods and . The associated constants of the Weierstrass function are
The related case of a Weierstrass elliptic function with , may be handled by a scaling transformation. However, this may involve complex numbers. If it is desired to remain within real numbers, there are two cases to consider: and . The period parallelogram is either a square or a rhombus. The Weierstrass elliptic function is called the "pseudolemniscatic case".
The square of the lemniscate sine can be represented as
where the second and third argument of denote the lattice invariants and . The lemniscate sine is a rational function in the Weierstrass elliptic function and its derivative:
The lemniscate functions can also be written in terms of Jacobi elliptic functions. The Jacobi elliptic functions and with positive real elliptic modulus have an "upright" rectangular lattice aligned with real and imaginary axes. Alternately, the functions and with modulus (and and with modulus ) have a square period lattice rotated 1/8 turn.
where the second arguments denote the elliptic modulus .
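As a numerical cross-check of this correspondence, the lemniscate sine can be evaluated from the Jacobi elliptic functions. The sketch below assumes the common normalization \operatorname{sl} x = \operatorname{sd}(\sqrt{2}\,x, 1/\sqrt{2})/\sqrt{2}, i.e. sn/(\sqrt{2}\,dn) with parameter m = k^2 = 1/2, and uses SciPy:
 import numpy as np
 from scipy.special import ellipj
 
 def sl(x):
     # Lemniscate sine via Jacobi elliptic functions with parameter m = k^2 = 1/2,
     # using the assumed relation sl x = sd(sqrt(2)*x, k)/sqrt(2) = sn/(sqrt(2)*dn).
     sn, cn, dn, _ = ellipj(np.sqrt(2) * x, 0.5)
     return sn / (np.sqrt(2) * dn)
 
 print(sl(0.5))           # ~ 0.49689, close to x for small x
 print(sl(2.6220575542))  # ~ 0: sl vanishes at the lemniscate constant
The value at the lemniscate constant and the near-identity behaviour for small arguments agree with the zero structure and power series described above.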
The functions and can also be expressed in terms of Jacobi elliptic functions:
Relation to the modular lambda function
The lemniscate sine can be used for the computation of values of the modular lambda function:
For example:
Inverse functions
The inverse function of the lemniscate sine is the lemniscate arcsine, defined as
It can also be represented by the hypergeometric function:
which can be easily seen by using the binomial series.
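A minimal numerical sketch, assuming the defining integral \operatorname{arcsl} x = \int_0^x (1-t^4)^{-1/2}\,dt and using SciPy; the last line compares twice the complete integral with the closed form \Gamma(1/4)^2/(2\sqrt{2\pi}) for the lemniscate constant:
 import math
 from scipy.integrate import quad
 from scipy.special import gamma
 
 def arcsl(x):
     # Lemniscate arcsine as the assumed arc-length integral from 0 to x;
     # the integrand has an integrable singularity at t = 1.
     value, _ = quad(lambda t: (1.0 - t**4) ** -0.5, 0.0, x)
     return value
 
 print(arcsl(0.5))                                            # ~ 0.5032
 varpi = gamma(0.25) ** 2 / (2.0 * math.sqrt(2.0 * math.pi))
 print(2 * arcsl(1.0), varpi)                                 # both ~ 2.6220575...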
The inverse function of the lemniscate cosine is the lemniscate arccosine. This function is defined by following expression:
For in the interval , and
For the halving of the lemniscate arc length these formulas are valid:
Furthermore, there are the so-called hyperbolic lemniscate area functions:
Expression using elliptic integrals
The lemniscate arcsine and the lemniscate arccosine can also be expressed in Legendre form:
These functions can be expressed directly using the incomplete elliptic integral of the first kind:
The arc lengths of the lemniscate can also be expressed by only using the arc lengths of ellipses (calculated by elliptic integrals of the second kind):
The lemniscate arccosine has this expression:
Use in integration
The lemniscate arcsine can be used to integrate many functions. Here is a list of important integrals (the constants of integration are omitted):
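A representative entry of the kind such a list contains (which entries appeared is an assumption, but the identity itself is immediate from the definition of \operatorname{arcsl}):
 \int \frac{dx}{\sqrt{1-x^4}} = \operatorname{arcsl} x .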
Hyperbolic lemniscate functions
Fundamental information
For convenience, let . is the "squircular" analog of (see below). The decimal expansion of (i.e. ) appears in entry 34e of chapter 11 of Ramanujan's second notebook.
The hyperbolic lemniscate sine () and cosine () can be defined as inverses of elliptic integrals as follows:
where in , is in the square with corners . Beyond that square, the functions can be analytically continued to meromorphic functions in the whole complex plane.
The complete integral has the value:
Therefore, the two defined functions have following relation to each other:
The product of hyperbolic lemniscate sine and hyperbolic lemniscate cosine is equal to one:
The functions and have a square period lattice with fundamental periods .
The hyperbolic lemniscate functions can be expressed in terms of lemniscate sine and lemniscate cosine:
But there is also a relation to the Jacobi elliptic functions with the elliptic modulus one by square root of two:
The hyperbolic lemniscate sine has following imaginary relation to the lemniscate sine:
This is analogous to the relationship between hyperbolic and trigonometric sine:
Relation to quartic Fermat curve
Hyperbolic lemniscate tangent and cotangent
This image shows the standardized superelliptic Fermat squircle curve of the fourth degree:
In a quartic Fermat curve (sometimes called a squircle) the hyperbolic lemniscate sine and cosine are analogous to the tangent and cotangent functions in a unit circle (the quadratic Fermat curve). If the origin and a point on the curve are connected to each other by a line , the hyperbolic lemniscate sine of twice the enclosed area between this line and the x-axis is the y-coordinate of the intersection of with the line . Just as is the area enclosed by the circle , the area enclosed by the squircle is . Moreover,
where is the arithmetic–geometric mean.
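The arithmetic–geometric mean also gives the fastest elementary route to the lemniscate constant, via Gauss's relation M(1, \sqrt{2}) = \pi/\varpi; a short numerical check:
 import math
 
 def agm(a, b, tol=1e-15):
     # Arithmetic-geometric mean by the usual two-term iteration.
     while abs(a - b) > tol:
         a, b = (a + b) / 2.0, math.sqrt(a * b)
     return a
 
 print(math.pi / agm(1.0, math.sqrt(2.0)))   # 2.6220575542..., the lemniscate constant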
The hyperbolic lemniscate sine satisfies the argument addition identity:
When is real, the derivative and the antiderivative of and can be expressed in this way:
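A minimal sketch of one such relation, assuming the usual defining integral \operatorname{arslh} x = \int_0^x (1+t^4)^{-1/2}\,dt for the inverse of the hyperbolic lemniscate sine:
 \frac{d}{dx}\,\operatorname{slh} x = \sqrt{1 + \operatorname{slh}^4 x}.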
There are also the hyperbolic lemniscate tangent and the hyperbolic lemniscate cotangent as further functions:
The functions tlh and ctlh satisfy the identities described by the differential equation mentioned above:
The functional designation sl stands for the lemniscatic sine and the designation cl stands for the lemniscatic cosine.
In addition, those relations to the Jacobi elliptic functions are valid:
When is real, the derivative and quarter period integral of and can be expressed in this way:
Derivation of the hyperbolic lemniscate functions
The horizontal and vertical coordinates of this superellipse are dependent on twice the enclosed area w = 2A, so the following conditions must be met:
The solutions to this system of equations are as follows:
The following therefore applies to the quotient:
The functions x(w) and y(w) are called the hyperbolic lemniscate cotangent and the hyperbolic lemniscate tangent, respectively.
The sketch also shows that the derivative of the hyperbolic lemniscate areasine function is equal to the reciprocal of the square root of one plus the fourth power of the argument.
First proof: comparison with the derivative of the arctangent
In the sketch shown on the right there is a black diagonal. Let s denote the length of the segment that runs perpendicularly from the intersection of this black diagonal with the red vertical axis to the point (1|0). The length of the section of the black diagonal from the origin to its intersection with the cyan curved line of the superellipse has the following value, depending on the slh value:
This connection is described by the Pythagorean theorem.
An analogous construction on the unit circle yields the circular arctangent, with the same assignment of areas as described.
The following derivation applies to this:
To determine the derivative of the hyperbolic lemniscate areasine, the infinitesimally small triangular areas for the same diagonal in the superellipse and in the unit circle are compared below, since the summation of these infinitesimal triangular areas describes the enclosed area. In the case of the superellipse in the picture, half of the area concerned is shown in green. Because the areas of triangles with the same infinitesimally small angle at the origin scale as the squares of their side lengths, the following formula applies:
Second proof: integral formation and area subtraction
In the picture shown, the hyperbolic lemniscate areatangent assigns twice the green area to the height of the intersection of the diagonal and the curved line. The green area itself is obtained as the integral of the superellipse function from zero to the relevant height value, minus the area of the adjacent triangle:
The following transformation applies:
And so, according to the chain rule, the derivative is:
Specific values
This list shows the exact values of the hyperbolic lemniscate sine. Recall that,
whereas so the values below such as are analogous to the trigonometric .
The following table shows the most important values of the hyperbolic lemniscate tangent and cotangent functions:
Combination and halving theorems
Consider the hyperbolic lemniscate tangent () and the hyperbolic lemniscate cotangent (), and recall the hyperbolic lemniscate area functions from the section on inverse functions,
Then the following identities can be established,
hence the 4th power of and for these arguments is equal to one,
so a 4th power version of the Pythagorean theorem. The bisection theorem of the hyperbolic lemniscate sine reads as follows:
This formula can be obtained as a combination of the following two formulas:
In addition, the following formulas are valid for all real values :
These identities follow from the last-mentioned formula:
Hence, their 4th powers again equal one,
The following formulas for the lemniscatic sine and lemniscatic cosine are closely related:
Coordinate transformations
Analogously to the evaluation of the improper integral of the Gaussian bell curve, a cylindrical coordinate transformation can be used to calculate the integral of the function, integrated with respect to x, from 0 to positive infinity. In the following, the proofs of both integrals are displayed in parallel.
This is the cylindrical coordinate transformation in the Gaussian bell curve function:
And this is the analogous coordinate transformation for the lemniscatory case:
In the last line of this elliptically analogous chain of equations, the original Gaussian bell curve appears again, integrated with the square function as the inner substitution according to the chain rule of calculus.
In both cases, the original function is multiplied by the determinant of the Jacobian matrix over the integration domain.
The resulting new functions in the integration area are then integrated according to the new parameters.
Number theory
In algebraic number theory, every finite abelian extension of the Gaussian rationals is a subfield of for some positive integer . This is analogous to the Kronecker–Weber theorem for the rational numbers which is based on division of the circle – in particular, every finite abelian extension of is a subfield of for some positive integer . Both are special cases of Kronecker's Jugendtraum, which became Hilbert's twelfth problem.
The field (for positive odd ) is the extension of generated by the - and -coordinates of the -torsion points on the elliptic curve .
Hurwitz numbers
The Bernoulli numbers can be defined by
and appear in
where is the Riemann zeta function.
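For comparison, the standard statements in the usual normalization are
 \frac{t}{e^t - 1} = \sum_{n=0}^{\infty} \mathrm{B}_n\,\frac{t^n}{n!}, \qquad \zeta(2n) = \frac{(-1)^{n+1}\,\mathrm{B}_{2n}\,(2\pi)^{2n}}{2\,(2n)!} \quad (n \ge 1).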
The Hurwitz numbers named after Adolf Hurwitz, are the "lemniscate analogs" of the Bernoulli numbers. They can be defined by
where is the Weierstrass zeta function with lattice invariants and . They appear in
where are the Gaussian integers and are the Eisenstein series of weight , and in
The Hurwitz numbers can also be determined as follows: ,
and if is not a multiple of . This yields
Also
where such that
just as
where (by the von Staudt–Clausen theorem).
In fact, the von Staudt–Clausen theorem determines the fractional part of the Bernoulli numbers:
where is any prime, and an analogous theorem holds for the Hurwitz numbers: suppose that is odd, is even, is a prime such that , (see Fermat's theorem on sums of two squares) and . Then for any given , is uniquely determined; equivalently where is the number of solutions of the congruence in variables that are non-negative integers. The Hurwitz theorem then determines the fractional part of the Hurwitz numbers:
The sequence of the integers starts with
Let . If is a prime, then . If is not a prime, then .
Some authors instead define the Hurwitz numbers as .
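For comparison, the classical von Staudt–Clausen theorem referred to above can be sketched as
 \mathrm{B}_{2n} + \sum_{(p-1)\,\mid\,2n} \frac{1}{p} \in \mathbb{Z},
with the sum running over the primes p for which p − 1 divides 2n.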
Appearances in Laurent series
The Hurwitz numbers appear in several Laurent series expansions related to the lemniscate functions:
Analogously, in terms of the Bernoulli numbers:
A quartic analog of the Legendre symbol
Let be a prime such that . A quartic residue (mod ) is any number congruent to the fourth power of an integer. Define
to be if is a quartic residue (mod ) and define it to be if is not a quartic residue (mod ).
If and are coprime, then there exist numbers (see the references for these numbers) such that
This theorem is analogous to
where is the Legendre symbol.
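For orientation (contextual background rather than the displayed theorem itself): the classical Legendre symbol satisfies Euler's criterion,
 a^{(p-1)/2} \equiv \left(\tfrac{a}{p}\right) \pmod{p},
and the quartic symbol defined above plays the corresponding role for primes p \equiv 1 \pmod 4.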
World map projections
The Peirce quincuncial projection, designed by Charles Sanders Peirce of the US Coast Survey in the 1870s, is a world map projection based on the inverse lemniscate sine of stereographically projected points (treated as complex numbers).
When lines of constant real or imaginary part are projected onto the complex plane via the hyperbolic lemniscate sine, and thence stereographically projected onto the sphere (see Riemann sphere), the resulting curves are spherical conics, the spherical analog of planar ellipses and hyperbolas. Thus the lemniscate functions (and more generally, the Jacobi elliptic functions) provide a parametrization for spherical conics.
A conformal map projection from the globe onto the 6 square faces of a cube can also be defined using the lemniscate functions. Because many partial differential equations can be effectively solved by conformal mapping, this map from sphere to cube is convenient for atmospheric modeling.
See also
Elliptic function
Abel elliptic functions
Dixon elliptic functions
Jacobi elliptic functions
Weierstrass elliptic function
Elliptic Gauss sum
Lemniscate constant
Peirce quincuncial projection
Schwarz–Christoffel mapping
Notes
External links
Relation shown in the video amounts to
References
E252. (Figures)
E 605.
Supplement No. 1 to The Canadian Cartographer 13.
Modular forms
Elliptic functions | Lemniscate elliptic functions | Mathematics | 6,194 |
11,552,664 | https://en.wikipedia.org/wiki/Phoma%20strasseri | Phoma strasseri is a fungal plant pathogen infecting mint.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Mint diseases
strasseri
Fungi described in 1924
Fungus species | Phoma strasseri | Biology | 48 |
10,497,038 | https://en.wikipedia.org/wiki/Yetter%E2%80%93Drinfeld%20category | In mathematics a Yetter–Drinfeld category is a special type of braided monoidal category. It consists of modules over a Hopf algebra which satisfy some additional axioms.
Definition
Let H be a Hopf algebra over a field k. Let denote the coproduct and S the antipode of H. Let V be a vector space over k. Then V is called a (left left) Yetter–Drinfeld module over H if
is a left H-module, where denotes the left action of H on V,
is a left H-comodule, where denotes the left coaction of H on V,
the maps and satisfy the compatibility condition
for all ,
where, using Sweedler notation, denotes the twofold coproduct of , and .
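A sketch of the standard form of this condition in Sweedler notation (leg-numbering conventions vary between references):
 \delta(h \cdot v) = h_{(1)}\, v_{(-1)}\, S(h_{(3)}) \otimes h_{(2)} \cdot v_{(0)} \qquad \text{for all } h \in H,\ v \in V.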
Examples
Any left H-module over a cocommutative Hopf algebra H is a Yetter–Drinfeld module with the trivial left coaction .
The trivial module with , , is a Yetter–Drinfeld module for all Hopf algebras H.
If H is the group algebra kG of an abelian group G, then Yetter–Drinfeld modules over H are precisely the G-graded G-modules. This means that
,
where each is a G-submodule of V.
More generally, if the group G is not abelian, then Yetter–Drinfeld modules over H=kG are G-modules with a G-gradation
, such that .
Over the base field all finite-dimensional, irreducible/simple Yetter–Drinfeld modules over a (nonabelian) group H=kG are uniquely determined by a conjugacy class together with (the character of) an irreducible group representation of the centralizer of some representative :
As G-module take to be the induced module of :
(this can be proven easily not to depend on the choice of g)
To define the G-graduation (comodule) assign any element to the graduation layer:
It is customary to construct directly as a direct sum of X's and to write down the G-action by choosing a specific set of representatives for the -cosets. From this approach, one often writes
(this notation emphasizes the graduation , rather than the module structure)
Braiding
Let H be a Hopf algebra with invertible antipode S, and let V, W be Yetter–Drinfeld modules over H. Then the map ,
is invertible with inverse
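A sketch of the standard formulas for this braiding and its inverse (conventions differ between references by a flip of the tensor factors):
 c_{V,W}(v \otimes w) = v_{(-1)} \cdot w \otimes v_{(0)}, \qquad c_{V,W}^{-1}(w \otimes v) = v_{(0)} \otimes S^{-1}(v_{(-1)}) \cdot w .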
Further, for any three Yetter–Drinfeld modules U, V, W the map c satisfies the braid relation
A monoidal category consisting of Yetter–Drinfeld modules over a Hopf algebra H with bijective antipode is called a Yetter–Drinfeld category. It is a braided monoidal category with the braiding c above. The category of Yetter–Drinfeld modules over a Hopf algebra H with bijective antipode is denoted by .
References
Hopf algebras
Quantum groups
Monoidal categories | Yetter–Drinfeld category | Mathematics | 630 |
39,374,434 | https://en.wikipedia.org/wiki/2045%20Initiative | The 2045 Initiative is a nonprofit organization that develops a network and community of researchers in the field of life extension, focusing on combining brain emulation and robotics to create forms of cyborgs. It was founded by Russian entrepreneur Dmitry Itskov in February 2011 with the participation of Russian specialists in the field of neural interfaces, robotics, artificial organs and systems. Philippe van Nedervelde serves as the Director of International Development.
The main goal of the 2045 Initiative, as stated on its website, is "to create technologies enabling the transfer of an individual’s personality to a more advanced non-biological carrier, and extending life, including to the point of immortality. We devote particular attention to enabling the fullest possible dialogue between the world’s major spiritual traditions, science and society".
Future prospects
The 2045 Initiative has a roadmap for developing cybernetic immortality. The Initiative's roadmap sets the goal of developing an avatar controlled by a "brain-computer" interface between 2015 and 2020; creating an autonomous life-support system for the human brain linked to a robot between 2020 and 2025; creating a computer model of the brain and human consciousness, with the means to transfer it into an artificial carrier, between 2030 and 2035; and, by 2045, creating a new era for humanity with holographic bodies.
Avatar Project
One of the featured life-extension projects is to design an artificial humanoid body (called an "avatar") and an advanced brain–computer interface system. On the biological side, a life support system will be developed for hosting a human brain inside the avatar and maintaining it alive and functional. A later phase of the project will research into the creation of an artificial brain into which the original individual consciousness may be transferred.
Avatar A
A robotic copy of a human body that is remotely controlled, capable of interpreting commands directly from the mind and of sending information back to the mind in a form that can be interpreted, via a brain–computer interface. It was estimated to be popularized in or before 2020. This, however, has failed to transpire.
Avatar B
An avatar in which a human brain is transplanted at the end of one's life. Avatar B has an autonomous system providing life support for the brain and allowing it interaction with the environment, possibly mounted into an existing Avatar A Chassis. Deadline of this phase is year 2025.
Avatar C
An avatar with an artificial brain to which a human personality is transferred for emulation at the end of one's life. The first successful attempt to upload one's personality to a computer is estimated to happen around 2035.
Avatar D
A hologram- or diagram-like avatar. This is the ultimate goal of the project, but it is optional: assuming either that the upload is involuntary or that all humans choose to upload, biological diseases are already prevented in the previous phase, and this phase lies far beyond current technological achievement and our understanding of physics.
Reception
Professor George M. Church has complained that "there's a lot of dots that aren't connected in (Itskov's) plans. It's not a real road map." Martine A. Rothblatt, the founder of United Therapeutics, has claimed the Avatar Project is "no more wild than in the early '60s, when we saw the advent of liver and kidney transplants."
See also
Brain transplant
Cyborg
Exocortex
Human enhancement
Isolated brain
Mind uploading
Transhumanism
References
External links
Global Future 2045 Congress
Transhumanism
Transhumanist organizations
Biogerontology organizations
Life extension organizations
Social movements
Futures studies organizations
Immortality
Neurotechnology
Non-profit organizations based in Russia
Organizations established in 2011
2045 in science | 2045 Initiative | Technology,Engineering,Biology | 742 |
11,864,935 | https://en.wikipedia.org/wiki/Haar-like%20feature | Haar-like features are digital image features used in object recognition. They owe their name to their intuitive similarity with Haar wavelets and were used in the first real-time face detector.
Working with only image intensities (i.e., the RGB pixel values at each and every pixel of image) made the task of feature calculation computationally expensive. A publication by Papageorgiou et al. discussed working with an alternate feature set based on Haar wavelets instead of the usual image intensities. Paul Viola and Michael Jones adapted the idea of using Haar wavelets and developed the so-called Haar-like features. A Haar-like feature considers adjacent rectangular regions at a specific location in a detection window, sums up the pixel intensities in each region and calculates the difference between these sums. This difference is then used to categorize subsections of an image.
For example, with a human face, it is a common observation that among all faces the region of the eyes is darker than the region of the cheeks. Therefore, a common Haar feature for face detection is a set of two adjacent rectangles that lie above the eye and the cheek region. The position of these rectangles is defined relative to a detection window that acts like a bounding box to the target object (the face in this case).
In the detection phase of the Viola–Jones object detection framework, a window of the target size is moved over the input image, and for each subsection of the image the Haar-like feature is calculated. This difference is then compared to a learned threshold that separates non-objects from objects. Because such a Haar-like feature is only a weak learner or classifier (its detection quality is slightly better than random guessing) a large number of Haar-like features are necessary to describe an object with sufficient accuracy. In the Viola–Jones object detection framework, the Haar-like features are therefore organized in something called a classifier cascade to form a strong learner or classifier.
The key advantage of a Haar-like feature over most other features is its calculation speed. Due to the use of integral images, a Haar-like feature of any size can be calculated in constant time (approximately 60 microprocessor instructions for a 2-rectangle feature).
Rectangular Haar-like features
A simple rectangular Haar-like feature can be defined as the difference of the sum of pixels of areas inside the rectangle, which can be at any position and scale within the original image. This modified feature set is called 2-rectangle feature. Viola and Jones also defined 3-rectangle features and 4-rectangle features. The values indicate certain characteristics of a particular area of the image. Each feature type can indicate the existence (or absence) of certain characteristics in the image, such as edges or changes in texture. For example, a 2-rectangle feature can indicate where the border lies between a dark region and a light region.
Fast computation of Haar-like features
One of the contributions of Viola and Jones was to use summed-area tables, which they called integral images. Integral images can be defined as two-dimensional lookup tables in the form of a matrix with the same size as the original image. Each element of the integral image contains the sum of all pixels located in the up-left region of the original image (in relation to the element's position). This allows the sum of any rectangular area in the image, at any position or scale, to be computed using only four lookups:
where points belong to the integral image , as shown in the figure.
Each Haar-like feature may need more than four lookups, depending on how it was defined. Viola and Jones's 2-rectangle features need six lookups, 3-rectangle features need eight lookups, and 4-rectangle features need nine lookups.
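A minimal Python sketch of the construction (all names are illustrative; a shared-corner implementation would bring the 2-rectangle feature down to the six lookups mentioned above, whereas this naive version spends four lookups per rectangle):
 import numpy as np
 
 def integral_image(img):
     # Summed-area table with a leading row/column of zeros, so that
     # ii[y, x] is the sum of img[0:y, 0:x] and border rectangles need
     # no special-casing.
     ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
     ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
     return ii
 
 def rect_sum(ii, y, x, h, w):
     # Sum of the h-by-w rectangle with top-left corner (y, x): four lookups.
     return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]
 
 def two_rect_feature(ii, y, x, h, w):
     # Vertical 2-rectangle Haar-like feature: left half minus right half.
     half = w // 2
     return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)
 
 img = np.random.randint(0, 256, size=(24, 24))
 ii = integral_image(img)
 print(two_rect_feature(ii, 4, 4, 8, 12))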
Tilted Haar-like features
Lienhart and Maydt introduced the concept of a tilted (45°) Haar-like feature. This was used to increase the dimensionality of the set of features in an attempt to improve the detection of objects in images. This was successful, as some of these features are able to describe the object in a better way. For example, a 2-rectangle tilted Haar-like feature can indicate the existence of an edge at 45°.
Messom and Barczak extended the idea to a generic rotated Haar-like feature. Although the idea is sound mathematically, practical problems prevent the use of Haar-like features at any angle. In order to be fast, detection algorithms use low resolution images introducing rounding errors. For this reason rotated Haar-like features are not commonly used.
References
Further reading
Haar A. Zur Theorie der orthogonalen Funktionensysteme, Mathematische Annalen, 69, pp. 331–371, 1910.
Bioinformatics
Feature detection (computer vision) | Haar-like feature | Engineering,Biology | 1,025 |
34,321,074 | https://en.wikipedia.org/wiki/Association%20for%20Contextual%20Behavioral%20Science | The Association for Contextual Behavioral Science (ACBS) is a worldwide nonprofit professional membership organization associated with acceptance and commitment therapy (ACT), and relational frame theory (RFT) among other topics. The term "contextual behavioral science" refers to the application of functional contextualism to human behavior, including contextual forms of applied behavior analysis, cognitive behavioral therapy, and evolution science. In the applied area Acceptance and Commitment Therapy is perhaps the best known wing of contextual behavioral science, and is an emphasis of ACBS, along with other types of contextual CBT, and efforts in education, organizational behavior, and other areas. ACT is considered an empirically validated treatment by the American Psychological Association, with the status of "Modest Research Support" in depression, Obsessive Compulsive Disorder, mixed anxiety disorders, and psychosis, and "Strong Research Support" in chronic pain. ACT is also listed as evidence-based by the Substance Abuse and Mental Health Services Administration of the United States federal government which has examined randomized trials for ACT in the areas of psychosis, work site stress, and obsessive compulsive disorder, including depression outcomes. In the basic area, Relational Frame Theory is a research program in language and cognition that is considered part of contextual behavioral science, and is a focus of ACBS. Unlike the better known behavioral approach proposed by B.F. Skinner in his book Verbal Behavior, experimental RFT research has emerged in a number of areas traditionally thought to be beyond behavioral perspectives, such as grammar, metaphor, perspective taking, implicit cognition and reasoning.
History
Established in 2005, ACBS has about 9,000 members. Slightly more than one half are outside of the United States. There are 45 ACBS chapters covering many areas of the world including Italy, Japan, Belgium, the Netherlands, Brazil, Australia/New Zealand, France, the United Kingdom, Türkiye, Malaysia, and more. Chapters exist in the United States and Canada as well, including the mid-Atlantic, New England, Washington, Ontario, and several other areas. There are also over 40 Special Interest Groups covering a wide range of basic and applied areas such as children and adolescents, veteran's affairs, ACT for Health, social work, and many other areas.
Activities
ACBS sponsors an annual conference, the ACBS World Conference. The 2023 (21st annual) meeting was held 24–28 July in Nicosia, Cyprus. In July 2024 it will be held in Buenos Aires, Argentina, and in July 2025 in New Orleans, USA.
In 2012 Elsevier began publishing the official journal of ACBS, the Journal of Contextual Behavioral Science. In 2022 the JCBS impact factor was 5.00.
Other activities:
A scholarship program that sponsors participants from the developing world to attend the World Conferences.
Listservs for professionals and the public. Most Special Interest Groups maintain email listservs as well. The largest listserv is on Acceptance and Commitment Therapy and is for professionals who are ACBS members, with the second largest listserv focusing on Relational Frame Theory (the ACT listserv for professionals spawned its own reference books of popular questions/topics called Talking ACT published by New Harbinger Publications and Context Press.). There is also a free listserv for members of the public who are reading ACT self-help books.
A grant program for projects in contextual behavioral science.
The association's website contains resources such as therapist tools, workshops, metaphors, protocols, and assessment materials, and provides information on recent books on acceptance and commitment therapy (ACT), Relational Frame Theory (RFT), and Contextual Behavioral Science (CBS).
See also
Acceptance and commitment therapy
Verbal behavior
Relational frame theory
Clinical behavior analysis
Applied behavior analysis
Cognitive behavior therapy
Behaviorism
Radical behaviorism
References
External links
Association for Contextual Behavioral Science homepage
Behavioural sciences
Psychology organizations based in the United States
Behaviorism
Cognitive behavioral therapy
Health care-related professional associations based in the United States | Association for Contextual Behavioral Science | Biology | 819 |
1,495,134 | https://en.wikipedia.org/wiki/Insular%20cortex | The insular cortex (also insula and insular lobe) is a portion of the cerebral cortex folded deep within the lateral sulcus (the fissure separating the temporal lobe from the parietal and frontal lobes) within each hemisphere of the mammalian brain.
The insulae are believed to be involved in consciousness and play a role in diverse functions usually linked to emotion or the regulation of the body's homeostasis. These functions include compassion, empathy, taste, perception, motor control, self-awareness, cognitive functioning, interpersonal relationships, and awareness of homeostatic emotions such as hunger, pain and fatigue. In relation to these, it is involved in psychopathology.
The insular cortex is divided by the central sulcus of the insula, into two parts: the anterior insula and the posterior insula in which more than a dozen field areas have been identified. The cortical area overlying the insula toward the lateral surface of the brain is the operculum (meaning lid). The opercula are formed from parts of the enclosing frontal, temporal, and parietal lobes.
Structure
The insula is divided into an anterior and a posterior part by the central sulcus of the insula.
Connections
The anterior part of the insula is subdivided by shallow sulci into three or four short gyri.
The anterior insula receives a direct projection from the basal part of the ventral medial nucleus of the thalamus and a particularly large input from the central nucleus of the amygdala. In addition, the anterior insula itself projects to the amygdala.
One study on rhesus monkeys revealed widespread reciprocal connections between the insular cortex and almost all subnuclei of the amygdaloid complex. The posterior insula projects predominantly to the dorsal aspect of the lateral and to the central amygdaloid nuclei. In contrast, the anterior insula projects to the anterior amygdaloid area as well as the medial, the cortical, the accessory basal magnocellular, the medial basal, and the lateral amygdaloid nuclei.
The posterior part of the insula is formed by a long gyrus.
The posterior insula connects reciprocally with the secondary somatosensory cortex and receives input from spinothalamically activated ventral posterior inferior thalamic nuclei. It has also been shown that this region receives inputs from the ventromedial nucleus (posterior part) of the thalamus that are highly specialized to convey homeostatic information such as pain, temperature, itch, local oxygen status, and sensual touch.
A human neuroimaging study using diffusion tensor imaging revealed that the anterior insula is interconnected to regions in the temporal and occipital lobe, opercular and orbitofrontal cortex, triangular and opercular parts of the inferior frontal gyrus. The same study revealed differences in the anatomical connection patterns between the left and right hemisphere.
The circular sulcus of insula (or sulcus of Reil) is a semicircular sulcus or fissure that separates the insula from the neighboring gyri of the operculum in the front, above, and behind.
Cytoarchitecture
The insular cortex has regions of variable cell structure or cytoarchitecture, changing from granular in the posterior portion to agranular in the anterior portion. The insula also receives differential cortical and thalamic input along its length. The anterior insular cortex contains a population of spindle neurons (also called von Economo neurons), identified as characterising a distinctive subregion as the agranular frontal insula.
Development
The insular cortex is considered a separate lobe of the telencephalon by some authorities. Other sources see the insula as a part of the temporal lobe. It is also sometimes grouped with limbic structures deep in the brain into a limbic lobe. As a paralimbic cortex, the insular cortex is considered to be a relatively old structure.
Function
Multimodal sensory processing, sensory binding
Functional imaging studies show activation of the insula during audio-visual integration tasks.
Taste
The anterior insula is part of the primary gustatory cortex. Research in rhesus monkeys has also reported that apart from numerous taste-sensitive neurons, the insular cortex also responds to non-taste properties of oral stimuli related to the texture (viscosity, grittiness) or temperature of food.
Speech
The sensory speech region, Wernicke’s area, and the motor speech region, Broca’s area, are interconnected by a large axonal fiber system known as the arcuate fasciculus which passes directly beneath the insular cortex. On account of this anatomical architecture, ischemic strokes in the insular region can disrupt the arcuate fasciculus. Functional imaging studies on the cerebral correlates of language production also suggest that the anterior insula forms part of the brain network of speech motor control. Moreover, electrical stimulation of the posterior insular can evoke speech disturbances such as speech arrest and reduced voice intensity.
Lesion of the pre-central gyrus of the insula can also cause “pure speech apraxia” (i.e. the inability to speak with no apparent aphasic or orofacial motor impairments). This demonstrates that the insular cortex forms part of a critical circuit for the coordination of complex articulatory movements prior to and during the execution of the motor speech plans. Importantly, this specific cortical circuit is different from those that relate to the cognitive aspects of language production (e.g., Broca’s area on the inferior frontal gyrus). Subvocal, or silent, speech has also been shown to activate right insular cortex, further supporting the theory that the motor control of speech proceeds from the insula.
Interoceptive awareness
There is evidence that, in addition to its base functions, the insula may play a role in certain higher-level functions that operate only in humans and other great apes. The spindle neurons found at a higher density in the right frontal insular cortex are also found in the anterior cingulate cortex, which is another region that has reached a high level of specialization in great apes. It has been speculated that these neurons are involved in cognitive-emotional processes that are specific to primates including great apes, such as empathy and metacognitive emotional feelings. This is supported by functional imaging results showing that the structure and function of the right frontal insula is correlated with the ability to feel one's own heartbeat, or to empathize with the pain of others. It is thought that these functions are not distinct from the lower-level functions of the insula but rather arise as a consequence of the role of the insula in conveying homeostatic information to consciousness. The right anterior insula is engaged in interoceptive awareness of homeostatic emotions such as thirst, pain and fatigue, and the ability to time one's own heartbeat. Moreover, greater right anterior insular gray matter volume correlates with increased accuracy in this subjective sense of the inner body, and with negative emotional experience. It is also involved in the control of blood pressure, in particular during and after exercise, and its activity varies with the amount of effort a person believes he/she is exerting.
The insular cortex also is where the sensation of pain is judged as to its degree. Lesion of the insula is associated with dramatic loss of pain perception and isolated insular infarction can lead to contralateral elimination of pinprick perception. Further, the insula is where a person imagines pain when looking at images of painful events while thinking about their happening to one's own body. Those with irritable bowel syndrome have abnormal processing of visceral pain in the insular cortex related to dysfunctional inhibition of pain within the brain.
Physiological studies in rhesus monkeys have shown that neurons in the insula respond to skin stimulation. PET studies have also revealed that the human insula can also be activated by vibrational stimulation to the skin.
Another perception of the right anterior insula is the degree of nonpainful warmth or nonpainful coldness of a skin sensation. Other internal sensations processed by the insula include stomach or abdominal distension. A full bladder also activates the insular cortex.
One brain imaging study suggests that the unpleasantness of subjectively perceived dyspnea is processed in the right human anterior insula and amygdala.
The cerebral cortex processing vestibular sensations extends into the insula, with small lesions in the anterior insular cortex being able to cause loss of balance and vertigo.
Other noninteroceptive perceptions include passive listening to music, laughter and crying, empathy and compassion, and language.
Motor control
In motor control, it contributes to hand-and-eye motor movement, swallowing, gastric motility, and speech articulation. It has been identified as a "central command” centre that ensures that heart rate and blood pressure increase at the onset of exercise. Research upon conversation links it to the capacity for long and complex spoken sentences. It is also involved in motor learning and has been identified as playing a role in the motor recovery from stroke.
Homeostasis
It plays a role in a variety of homeostatic functions related to basic survival needs, such as taste, visceral sensation, and autonomic control. The insula controls autonomic functions through the regulation of the sympathetic and parasympathetic systems. It has a role in regulating the immune system.
Self
The insula has been identified as playing a role in the experience of bodily self-awareness, sense of agency, and sense of body ownership.
Social emotions
The anterior insula processes a person's sense of disgust both to smells and to the sight of contamination and mutilation — even when just imagining the experience. This associates with a mirror neuron-like link between external and internal experiences.
In social experience, it is involved in the processing of norm violations, emotional processing, empathy, and orgasms.
The insula is active during social decision making. Tiziana Quarto et al. measured emotional intelligence (EI) (the ability to identify, regulate, and process emotions of themselves and of others) of sixty-three healthy subjects. Using fMRI EI was measured in correlation with left insular activity. The subjects were shown various pictures of facial expressions and tasked with deciding to approach or avoid the person in the picture. The results of the social decision task yielded that individuals with high EI scores had left insular activation when processing fearful faces. Individuals with low EI scores had left insular activation when processing angry faces.
Emotions
The insular cortex, in particular its most anterior portion, is considered a limbic-related cortex. The insula has increasingly become the focus of attention for its role in body representation and subjective emotional experience. In particular, Antonio Damasio has proposed that this region plays a role in mapping visceral states that are associated with emotional experience, giving rise to conscious feelings. This is in essence a neurobiological formulation of the ideas of William James, who first proposed that subjective emotional experience (i.e., feelings) arise from our brain's interpretation of bodily states that are elicited by emotional events. This is an example of embodied cognition.
In terms of function, the insula is believed to process convergent information to produce an emotionally relevant context for sensory experience. To be specific, the anterior insula is related more to olfactory, gustatory, viscero-autonomic, and limbic function, whereas the posterior insula is related more to auditory-somesthetic-skeletomotor function. Functional imaging experiments have revealed that the insula has an important role in pain experience and the experience of a number of basic emotions, including anger, fear, disgust, happiness, and sadness.
The anterior insular cortex (AIC) is believed to be responsible for emotional feelings, including maternal and romantic love, anger, fear, sadness, happiness, sexual arousal, disgust, aversion, unfairness, inequity, indignation, uncertainty, disbelief, social exclusion, trust, empathy, sculptural beauty, a ‘state of union with God’, and hallucinogenic states.
Functional imaging studies have also implicated the insula in conscious desires, such as food craving and drug craving. What is common to all of these emotional states is that they each change the body in some way and are associated with highly salient subjective qualities. The insula is well-situated for the integration of information relating to bodily states into higher-order cognitive and emotional processes. The insula receives information from "homeostatic afferent" sensory pathways via the thalamus and sends output to a number of other limbic-related structures, such as the amygdala, the ventral striatum, and the orbitofrontal cortex, as well as to motor cortices.
A study using magnetic resonance imaging found that the right anterior insula is significantly thicker in people that meditate. Other research into brain activity and meditation has shown an increase in grey matter in areas of the brain including the insular cortex.
Another study using voxel-based morphometry and MRI on experienced Vipassana meditators was done to extend the findings of Lazar et al., which found increased grey matter concentrations in this and other areas of the brain in experienced meditators.
The strongest evidence against a causative role for the insula cortex in emotion comes from Damasio et al. (2012) which showed that a patient who suffered bilateral lesions of the insula cortex expressed the full complement of human emotions, and was fully capable of emotional learning.
Salience
Functional neuroimaging research suggests the insula is involved in two types of salience. Interoceptive information processing that links interoception with emotional salience to generate a subjective representation of the body. This involves, first, the anterior insular cortex with the pregenual anterior cingulate cortex (Brodmann area 33) and the anterior and posterior mid-cingulate cortices, and, second, a general salience network concerned with environmental monitoring, response selection, and skeletomotor body orientation that involves all of the insular cortex and the mid-cingulate cortex. A related idea is that the anterior insula, as part of the salience network, interacts with the mid-posterior insula to combine salient stimuli with autonomic information, leading to a high state of physiological awareness of salient stimuli.
An alternative or perhaps complementary proposal is that the right anterior insular regulates the interaction between the salience of the selective attention created to achieve a task (the dorsal attention system) and the salience of arousal created to keep focused upon the relevant part of the environment (ventral attention system). This regulation of salience might be particularly important during challenging tasks where attention might fatigue and so cause careless mistakes but if there is too much arousal it risks creating poor performance by turning into anxiety.
Decision making
Studies have shown that damage or dysfunction in the insular cortex can impair decision-making, emotional regulation, and social behavior. The insula is considered a key brain structure in the neural circuitry underlying complex decision-making processes. It plays a significant role in integrating internal and external cues to facilitate adaptive choices.
Auditory perception
Research indicates that the insular cortex is involved in auditory perception. Responses to sound stimuli were obtained using intracranial EEG recordings acquired from patients with epilepsy. The posterior part of the insula showed auditory responses that resemble those observed in Heschl's gyrus, whereas the anterior part responded to the emotional contents of the auditory stimuli. Clinical data additionally shows that bilateral damage to the insula after ischemic injury or trauma can lead to auditory agnosia. Functional magnetic resonance studies have also demonstrated that the insular cortex participates in many key auditory processes such as tuning into novel auditory stimuli and allocating auditory attention.
Direct recordings from the posterior part of the insula showed responses to unexpected sounds within regular auditory streams, a process known as auditory deviance detection. Researchers observed a mismatch negativity (MMN) potential, a well known event related potential, as well as the high frequency activity signals originating from local neurons.
Simple auditory illusions and hallucinations were elicited by electrical functional mapping.
Clinical significance
Progressive expressive aphasia
Progressive expressive aphasia is the deterioration of normal language function that causes individuals to lose the ability to communicate fluently while still being able to comprehend single words and intact other non-linguistic cognition. It is found in a variety of degenerative neurological conditions including Pick's disease, motor neuron disease, corticobasal degeneration, frontotemporal dementia, and Alzheimer's disease. It is associated with hypometabolism and atrophy of the left anterior insular cortex.
Addiction
A number of functional brain imaging studies have shown that the insular cortex is activated when drug users are exposed to environmental cues that trigger cravings. This has been shown for a variety of drugs, including cocaine, alcohol, opiates, and nicotine. Despite these findings, the insula has been ignored within the drug addiction literature, perhaps because it is not known to be a direct target of the mesocortical dopamine system, which is central to current dopamine reward theories of addiction. Research published in 2007 has shown that cigarette smokers suffering damage to the insular cortex, from a stroke for instance, have their addiction to cigarettes practically eliminated. These individuals were found to be up to 136 times more likely to undergo a disruption of smoking addiction than smokers with damage in other areas. Disruption of addiction was evidenced by self-reported behavior changes such as quitting smoking less than one day after the brain injury, quitting smoking with great ease, not smoking again after quitting, and having no urge to resume smoking since quitting. The study was conducted on average eight years after the strokes, which opens up the possibility that recall bias could have affected the results. More recent prospective studies, which overcome this limitation, have corroborated these findings. This suggests a significant role for the insular cortex in the neurological mechanisms underlying addiction to nicotine and other drugs, and would make this area of the brain a possible target for novel anti-addiction medication. In addition, this finding suggests that functions mediated by the insula, especially conscious feelings, may be particularly important for maintaining drug addiction, although this view is not represented in any modern research or reviews of the subject.
A recent study in rats by Contreras et al. corroborates these findings by showing that reversible inactivation of the insula disrupts amphetamine conditioned place preference, an animal model of cue-induced drug craving. In this study, insula inactivation also disrupted "malaise" responses to lithium chloride injection, suggesting that the representation of negative interoceptive states by the insula plays a role in addiction. However, in this same study, the conditioned place preference took place immediately after the injection of amphetamine, suggesting that it is the immediate, pleasurable interoceptive effects of amphetamine administration, rather than the delayed, aversive effects of amphetamine withdrawal that are represented within the insula.
A model proposed by Naqvi et al. (see above) is that the insula stores a representation of the pleasurable interoceptive effects of drug use (e.g., the airway sensory effects of nicotine, the cardiovascular effects of amphetamine), and that this representation is activated by exposure to cues that have previously been associated with drug use. A number of functional imaging studies have shown the insula to be activated during the administration of addictive psychoactive drugs. Several functional imaging studies have also shown that the insula is activated when drug users are exposed to drug cues, and that this activity is correlated with subjective urges. In the cue-exposure studies, insula activity is elicited when there is no actual change in the level of drug in the body. Therefore, rather than merely representing the interoceptive effects of drug use as it occurs, the insula may play a role in memory for the pleasurable interoceptive effects of past drug use, anticipation of these effects in the future, or both. Such a representation may give rise to conscious urges that feel as if they arise from within the body. This may make addicts feel as if their bodies need to use a drug, and may result in persons with lesions in the insula reporting that their bodies have forgotten the urge to use, according to this study.
Subjective certainty in ecstatic seizures
A common quality in mystical experiences is a strong feeling of certainty which cannot be expressed in words. Fabienne Picard proposes a neurological explanation for this subjective certainty, based on clinical research of epilepsy.
According to Picard, this feeling of certainty may be caused by a dysfunction of the anterior insula, a part of the brain which is involved in interoception, self-reflection, and in avoiding uncertainty about the internal representations of the world by "anticipation of resolution of uncertainty or risk". This avoidance of uncertainty functions through the comparison between predicted states and actual states, that is, "signaling that we do not understand, i.e., that there is ambiguity." Picard notes that "the concept of insight is very close to that of certainty," and refers to Archimedes' "Eureka!" Picard hypothesizes that during ecstatic seizures the comparison between predicted states and actual states no longer functions, and that mismatches between predicted state and actual state are no longer processed, blocking "negative emotions and negative arousal arising from predictive uncertainty," which will be experienced as emotional confidence. Picard concludes that "[t]his could lead to a spiritual interpretation in some individuals."
Other clinical conditions
The insular cortex has been suggested to have a role in anxiety disorders, emotion dysregulation, and anorexia nervosa.
History
The insula was first described by Johann Christian Reil while describing cranial and spinal nerves and plexuses. Henry Gray in Gray's Anatomy is responsible for it being known as the Island of Reil. John Allman and colleagues showed that anterior insular cortex contains spindle neurons.
Additional images
See also
List of regions in the human brain
References
External links
Location and literature citations for the insula
Homeostasis
Human homeostasis
Cerebral cortex | Insular cortex | Biology | 4,669 |
63,944 | https://en.wikipedia.org/wiki/Key-agreement%20protocol | In cryptography, a key-agreement protocol is a protocol whereby two (or more) parties generate a cryptographic key as a function of information provided by each honest party so that no party can predetermine the resulting value.
In particular, all honest participants influence the outcome. A key-agreement protocol is a specialisation of a key-exchange protocol.
At the completion of the protocol, all parties share the same key. A key-agreement protocol precludes undesired third parties from forcing a key choice on the agreeing parties. A secure key agreement can ensure confidentiality and data integrity in communications systems, ranging from simple messaging applications to complex banking transactions.
Secure agreement is defined relative to a security model, for example the Universal Model. More generally, when evaluating protocols, it is important to state security goals and the security model. For example, it may be required for the session key to be authenticated. A protocol can be evaluated for success only in the context of its goals and attack model. An example of an adversarial model is the Dolev–Yao model.
In many key exchange systems, one party generates the key, and sends that key to the other party; the other party has no influence on the key.
Exponential key exchange
The first publicly known public-key agreement protocol that meets the above criteria was the Diffie–Hellman key exchange, in which two parties jointly exponentiate a generator with random numbers, in such a way that an eavesdropper cannot feasibly determine what the resultant shared key is.
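The following minimal Python sketch illustrates the idea (an illustration only: the modulus and generator below are toy values far too weak for real use, and a deployed protocol would additionally authenticate the exchange and derive working keys from the shared value):

# Toy Diffie-Hellman exchange (illustrative only; parameters are not secure).
import secrets

p = 2**127 - 1            # a Mersenne prime, used here only as a toy modulus
g = 3                     # toy generator

a = secrets.randbelow(p - 2) + 1    # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1    # Bob's secret exponent

A = pow(g, a, p)          # Alice sends A to Bob over the public channel
B = pow(g, b, p)          # Bob sends B to Alice over the public channel

# Each side combines the other's public value with its own secret exponent.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob   # both sides now hold the same secret

An eavesdropper sees p, g, A and B, but recovering the shared value from them is the (assumed hard) Diffie–Hellman problem.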
Exponential key agreement in and of itself does not specify any prior agreement or subsequent authentication between the participants. It has thus been described as an anonymous key agreement protocol.
Symmetric key agreement
Symmetric key agreement (SKA) is a method of key agreement that uses solely symmetric cryptography and cryptographic hash functions as cryptographic primitives. It is related to symmetric authenticated key exchange.
SKA may assume the use of initial shared secrets, or of a trusted third party with whom the agreeing parties share a secret. If no third party is present, then achieving SKA can be trivial: two parties that already share an initial secret have, tautologically, achieved SKA.
SKA contrasts with key-agreement protocols that include techniques from asymmetric cryptography, such as key encapsulation mechanisms.
The initial exchange of a shared key must be done in a manner that is private and integrity-assured. Historically, this was achieved by physical means, such as by using a trusted courier.
An example of an SKA protocol is the Needham–Schroeder protocol. It establishes a session key between two parties on the same network, using a server as a trusted third party.
The original Needham–Schroeder protocol is vulnerable to a replay attack; revised versions include timestamps and nonces to prevent this attack. It forms the basis for the Kerberos protocol.
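The general flavour of symmetric constructions can be shown with a minimal Python sketch (an illustration of deriving a fresh session key from a pre-shared secret and exchanged nonces; it is not the Needham–Schroeder protocol itself and omits the authentication messages a real protocol needs):

# Sketch: both parties derive the same fresh session key from a pre-shared
# secret and two exchanged nonces, using only symmetric primitives.
import hmac, hashlib, secrets

pre_shared_key = secrets.token_bytes(32)   # established earlier, out of band

nonce_a = secrets.token_bytes(16)          # chosen by party A, sent to B
nonce_b = secrets.token_bytes(16)          # chosen by party B, sent to A

def session_key(psk: bytes, na: bytes, nb: bytes) -> bytes:
    # Both parties can compute this because each knows psk and both nonces.
    return hmac.new(psk, b"session-key" + na + nb, hashlib.sha256).digest()

key_a = session_key(pre_shared_key, nonce_a, nonce_b)
key_b = session_key(pre_shared_key, nonce_a, nonce_b)
assert key_a == key_b

Because each honest party contributes a fresh nonce, every run yields a new session key, which is the role nonces and timestamps play in the repaired protocol.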
Types of key agreement
Boyd et al. classify two-party key agreement protocols according to two criteria as follows:
whether a pre-shared key already exists or not
the method of generating the session key.
The pre-shared key may be shared between the two parties, or each party may share a key with a trusted third party. If there is no secure channel (as may be established via a pre-shared key), it is impossible to create an authenticated session key.
The session key may be generated via key transport, key agreement, or a hybrid of the two. If there is no trusted third party, then the cases of key transport and hybrid session key generation are indistinguishable. SKA is concerned with protocols in which the session key is established using only symmetric primitives.
Authentication
Anonymous key exchange, like Diffie–Hellman, does not provide authentication of the parties, and is thus vulnerable to man-in-the-middle attacks.
A wide variety of cryptographic authentication schemes and protocols have been developed to provide authenticated key agreement to prevent man-in-the-middle and related attacks. These methods generally mathematically bind the agreed key to other agreed-upon data, such as the following:
public–private key pairs
shared secret keys
passwords
Public keys
A widely used mechanism for defeating such attacks is the use of digitally signed keys that must be integrity-assured: if Bob's key is signed by a trusted third party vouching for his identity, Alice can have considerable confidence that a signed key she receives is not an attempt to intercept by Eve. When Alice and Bob have a public-key infrastructure, they may digitally sign an agreed Diffie–Hellman key, or exchanged Diffie–Hellman public keys. Such signed keys, sometimes signed by a certificate authority, are one of the primary mechanisms used for secure web traffic (including HTTPS, SSL or TLS protocols). Other specific examples are MQV, YAK and the ISAKMP component of the IPsec protocol suite for securing Internet Protocol communications. However, these systems require care in endorsing the match between identity information and public keys by certificate authorities in order to work properly.
Hybrid systems
Hybrid systems use public-key cryptography to exchange secret keys, which are then used in symmetric-key cryptography systems. Most practical applications of cryptography use a combination of cryptographic functions to implement an overall system that provides all four desirable features of secure communications (confidentiality, integrity, authentication, and non-repudiation).
Passwords
Password-authenticated key agreement protocols require the separate establishment of a password (which may be smaller than a key) in a manner that is both private and integrity-assured. These are designed to resist man-in-the-middle and other active attacks on the password and the established keys. For example, DH-EKE, SPEKE, and SRP are password-authenticated variations of Diffie–Hellman.
Other tricks
If one has an integrity-assured way to verify a shared key over a public channel, one may engage in a Diffie–Hellman key exchange to derive a short-term shared key, and then subsequently authenticate that the keys match. One way is to use a voice-authenticated read-out of the key, as in PGPfone. Voice authentication, however, presumes that it is infeasible for a man-in-the-middle to spoof one participant's voice to the other in real-time, which may be an undesirable assumption. Such protocols may be designed to work with even a small public value, such as a password. Variations on this theme have been proposed for Bluetooth pairing protocols.
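A minimal sketch of such a verification step (assuming a shared secret has already been derived, for example from a Diffie–Hellman exchange; the exact derivation below is illustrative rather than taken from any particular protocol):

# Sketch of a short authentication string: both parties hash the derived
# shared secret and compare a short human-readable code out of band.
import hashlib

def short_auth_string(shared_secret: bytes, digits: int = 6) -> str:
    digest = hashlib.sha256(b"sas" + shared_secret).digest()
    number = int.from_bytes(digest[:4], "big") % (10 ** digits)
    return f"{number:0{digits}d}"          # e.g. read aloud over a voice call

# Each party runs this on its own copy of the secret; a man-in-the-middle who
# substituted keys would, with overwhelming probability, cause a mismatch.
print(short_auth_string(b"example shared secret"))

Deployed variations, such as the Bluetooth pairing schemes mentioned above, add commitment steps so that an attacker cannot search for key pairs that produce a matching code.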
In an attempt to avoid using any additional out-of-band authentication factors, Davies and Price proposed the use of the interlock protocol of Ron Rivest and Adi Shamir, which has been subject to both attack and subsequent refinement.
See also
Key (cryptography)
Computer security
Cryptanalysis
Secure channel
Digital signature
Key encapsulation mechanism
Key management
Password-authenticated key agreement
Interlock protocol
Zero-knowledge password proof
Quantum key distribution
References
Cryptography | Key-agreement protocol | Mathematics,Engineering | 1,443 |
910,234 | https://en.wikipedia.org/wiki/Cut-elimination%20theorem | The cut-elimination theorem (or Gentzen's Hauptsatz) is the central result establishing the significance of the sequent calculus. It was originally proved by Gerhard Gentzen in part I of his landmark 1935 paper "Investigations in Logical Deduction" for the systems LJ and LK formalising intuitionistic and classical logic respectively. The cut-elimination theorem states that any judgement that possesses a proof in the sequent calculus making use of the cut rule also possesses a cut-free proof, that is, a proof that does not make use of the cut rule.
The cut rule
A sequent is a logical expression relating multiple formulas, in the form A₁, …, Aₙ ⊢ B₁, …, Bₖ, which is to be read as "If all of A₁, …, Aₙ hold, then at least one of B₁, …, Bₖ must hold", or (as Gentzen glossed): "If (A₁ and A₂ and …) then (B₁ or B₂ or …)." Note that the left-hand side (LHS) is a conjunction (and) and the right-hand side (RHS) is a disjunction (or).
The LHS may have arbitrarily many or few formulae; when the LHS is empty, the RHS is a tautology. In LK, the RHS may also have any number of formulae—if it has none, the LHS is a contradiction, whereas in LJ the RHS may only have one formula or none: here we see that allowing more than one formula in the RHS is equivalent, in the presence of the right contraction rule, to the admissibility of the law of the excluded middle. However, the sequent calculus is a fairly expressive framework, and there have been sequent calculi for intuitionistic logic proposed that allow many formulae in the RHS. From Jean-Yves Girard's logic LC it is easy to obtain a rather natural formalisation of classical logic where the RHS contains at most one formula; it is the interplay of the logical and structural rules that is the key here.
"Cut" is a rule of inference in the normal statement of the sequent calculus, and equivalent to a variety of rules in other proof theories, which, given
and
allows one to infer
That is, it "cuts" the occurrences of the formula out of the inferential relation.
Cut elimination
The cut-elimination theorem states that (for a given system) any sequent provable using the rule Cut can be proved without use of this rule.
For sequent calculi that have only one formula in the RHS, the "Cut" rule reads, given
Γ ⊢ A
and
A, Σ ⊢ B,
allows one to infer
Γ, Σ ⊢ B.
If we think of B as a theorem, then cut-elimination in this case simply says that a lemma A used to prove this theorem can be inlined. Whenever the theorem's proof mentions lemma A, we can substitute the occurrences for the proof of A. Consequently, the cut rule is admissible.
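As a toy illustration (an invented example, not one taken from Gentzen), consider proving the sequent p, q ⊢ p by way of the lemma p ∧ q:

\frac{p, q \vdash p \land q \qquad p \land q \vdash p}{p, q \vdash p}\ (\mathrm{Cut})

Here the left premise follows from the axioms p ⊢ p and q ⊢ q by weakening and right ∧-introduction, and the right premise follows from p ⊢ p by left ∧-introduction. Eliminating the cut removes the detour through the lemma p ∧ q: the endsequent p, q ⊢ p already follows from the axiom p ⊢ p by weakening alone, which is exactly the sense in which the lemma has been inlined.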
Consequences of the theorem
For systems formulated in the sequent calculus, analytic proofs are those proofs that do not use Cut. Typically such a proof will be longer, of course, and not necessarily trivially so. In his essay "Don't Eliminate Cut!" George Boolos demonstrated that there was a derivation that could be completed in a page using cut, but whose analytic proof could not be completed in the lifespan of the universe.
The theorem has many, rich consequences:
A system is inconsistent if it admits a proof of the absurd. If the system has a cut elimination theorem, then if it has a proof of the absurd, or of the empty sequent, it should also have a proof of the absurd (or the empty sequent), without cuts. It is typically very easy to check that there are no such proofs. Thus, once a system is shown to have a cut elimination theorem, it is normally immediate that the system is consistent.
Normally the system also has, at least in first-order logic, the subformula property, an important property in several approaches to proof-theoretic semantics.
Cut elimination is one of the most powerful tools for proving interpolation theorems. The possibility of carrying out proof search based on resolution, the essential insight leading to the Prolog programming language, depends upon the admissibility of Cut in the appropriate system.
For proof systems based on higher-order typed lambda calculus through a Curry–Howard isomorphism, cut elimination algorithms correspond to the strong normalization property (every proof term reduces in a finite number of steps into a normal form).
See also
Deduction theorem
Gentzen's consistency proof for Peano's axioms
Notes
References
External links
Theorems in the foundations of mathematics
Proof theory | Cut-elimination theorem | Mathematics | 959 |
30,730,915 | https://en.wikipedia.org/wiki/Internet%20Shakespeare%20Editions | The Internet Shakespeare Editions is a non-profit organization that produces a website devoted to William Shakespeare and his works. The organization is an associate member of the Shakespeare Theatre Association of America, under the classification of theatre service provider, and is supported by the University of Victoria and the Social Sciences and Humanities Research Council of Canada.
The website includes a variety of Shakespeare-related resources, including fully annotated texts of his plays and poems, multimedia materials and records of his plays in performance, and historical information about Shakespeare's life and the Renaissance.
History
The project began on a floppy disc in 1988, which creator Michael Best called Shakespeare's Life and Times. In 1994 he transferred it to CD-ROM. By 1996, Best decided that the Internet was the ideal medium for his Shakespeare project. He translated his earlier formats into a web format and formally founded the Internet Shakespeare Editions in 1999.
In 2010, the freely accessible site was redesigned to include advertisements.
In addition to the website, the Internet Shakespeare Editions also published a CD-ROM titled A Shakespeare Suite. The CD-ROM, published in 2002 and distributed by Insight Media, is a companion tool for Shakespeare studies designed for teachers and students.
Editing and publishing
The ISE's academic development is overseen by an Editorial Board of 23 members from universities in Canada, the United States, the United Kingdom, and Germany. In addition to featuring searchable texts and facsimiles of the folios and quartos, the website is a venue for Shakespeare scholars to publish their fully edited and annotated texts. Existing peer-reviewed publications include:
As You Like It, edited by David Bevington;
Cymbeline, edited by Jennifer Forsyth;
Henry IV, Part One, edited by Rosemary Gaby;
Julius Caesar, edited by John D. Cox;
Romeo and Juliet, edited by Roger Apfelbaum;
The Tempest, edited by Brent Whitted and Paul Yachnin;
Troilus and Cressida, edited by William Godshalk and Anthony Colianne; and
Venus and Adonis, edited by Hardy M. Cook.
Additional modern texts, complete with annotations and collations, have been published but have not yet been peer reviewed.
The ISE also publishes editions of plays written for the Queen's Men, many of which influenced Shakespeare. These plays are prepared for the project Shakespeare and the Queen's Men, based out of McMaster University and the University of Toronto.
Theater
The Internet Shakespeare Editions' database of Shakespeare in Performance is a searchable archive of information and materials related to performances of Shakespeare's plays. There are thousands of digitized artifacts—images, audio clips, and videos—currently available for public viewing on the site, as well as cast and crew lists for hundreds of films and live performances. The database is overseen by a board of actors, directors, and academics.
In winter 2007, the ISE launched its Performance Chronicle. The Performance Chronicle is a searchable blog-style database of reviews of contemporary Shakespearean productions, written by the general public and scholars. It also includes reviews from scholarly journals, such as Cahiers Élizabéthains, Shakespeare Bulletin, and Early Modern Literary Studies.
References
External links
Shakespeare Theatre Association of America
Digital humanities
Shakespearean scholarship | Internet Shakespeare Editions | Technology | 660 |
22,616,176 | https://en.wikipedia.org/wiki/Analytical%20regularization | In physics and applied mathematics, analytical regularization is a technique used to convert boundary value problems which can be written as Fredholm integral equations of the first kind involving singular operators into equivalent Fredholm integral equations of the second kind. The latter may be easier to solve analytically and can be studied with discretization schemes like the finite element method or the finite difference method because they are pointwise convergent. In computational electromagnetics, it is known as the method of analytical regularization. It was first used in mathematics during the development of operator theory before acquiring a name.
Method
Analytical regularization proceeds as follows. First, the boundary value problem is formulated as an integral equation. Written as an operator equation, this will take the form
G X = Y
with Y representing boundary conditions and inhomogeneities, X representing the field of interest, and G the integral operator describing how Y is given from X based on the physics of the problem.
Next, G is split into G = G₁ + G₂, where G₁ is invertible and contains all the singularities of G and G₂ is regular. After splitting the operator and multiplying by the inverse of G₁, the equation becomes
X + G₁⁻¹ G₂ X = G₁⁻¹ Y
or
(I + G₁⁻¹ G₂) X = G₁⁻¹ Y,
which is now a Fredholm equation of the second type, because by construction G₁⁻¹ G₂ is compact on the Hilbert space of which X is a member.
In general, several choices for the splitting G₁ + G₂ will be possible for each problem.
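The practical payoff of the second-kind form can be sketched numerically. The following illustration is a generic midpoint-rule discretization of an equation of the form (I + K)X = Y with an arbitrary smooth placeholder kernel and right-hand side (not a worked electromagnetic problem); after analytical regularization the resulting linear system can be solved directly:

# Schematic Nystrom-style discretization of a second-kind Fredholm equation
#   x(t) + integral_0^1 k(t, s) x(s) ds = y(t),
# the form obtained after analytical regularization. Kernel and data are
# arbitrary smooth placeholders chosen only for illustration.
import numpy as np

n = 200
t = (np.arange(n) + 0.5) / n                   # midpoint quadrature nodes on [0, 1]
w = np.full(n, 1.0 / n)                        # quadrature weights

k = np.exp(-np.abs(t[:, None] - t[None, :]))   # placeholder compact kernel
y = np.sin(2 * np.pi * t)                      # placeholder right-hand side

A = np.eye(n) + k * w[None, :]                 # discretized (I + K)
x = np.linalg.solve(A, y)                      # direct solve of the regularized system

residual = np.max(np.abs(x + (k * w[None, :]) @ x - y))
print(residual)                                # ~ machine precision

Because the discretized operator is a small, compact perturbation of the identity, the linear system stays well conditioned as the discretization is refined, which is the numerical reason for preferring the second-kind formulation.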
References
External links
E-Polarized Wave Scattering from Infinitely Thin and Finitely Width Strip Systems
Diffraction
Electromagnetism
Applied mathematics
Computational electromagnetics
Fredholm theory | Analytical regularization | Physics,Chemistry,Materials_science,Mathematics | 312 |
25,312,513 | https://en.wikipedia.org/wiki/Twins%20and%20handedness | Left-handedness always occurs at a lower frequency than right-handedness. Generally, left-handedness is found in 10.6% of the overall population.
Some studies have reported that left-handedness is more common in twins than in singletons, while other studies find no such pattern.
Twins and singletons left hand prevalence
Monozygotic twins, also known as identical twins, are siblings that share the same genetic information because of their prenatal development. Monozygotic twins result from the fertilization of one egg and the division of that single embryo, forming two embryos. However, just because a set of twins shares the same genetic information does not mean they will exhibit the same traits and behaviors. There are different versions of a gene, which are called alleles. How a gene is expressed depends on the development of an individual throughout their life. Twins, although they come from the same background, experience different things, so due to environmental factors a set of twins, even monozygotic, express genes differently. Handedness is one of the traits that depend on the environment. For instance, the cerebellum, located in the hindbrain, is responsible for motor movements such as handwriting. It uses sensory information, information from external environments, to control physical movements. Taking this fact into account, it is reasonable to assume that there would not be a correlation between twinning and handedness. However, there is a higher prevalence of left-handedness in twins compared to singletons, although the reason for this has yet to be determined. Referencing the mean proportions of left-handedness, singletons are 8.5%, dizygotic twins are 14%, and monozygotic twins are 14.5%. Using this data, it is theorized that twins have a higher prevalence of left-handedness because of prenatal complications. For example, pathological left-handedness syndrome has been speculated to contribute to why twins have a higher prevalence of left-handedness: the syndrome states that when an injury occurs during early development it affects lateralization and ultimately handedness. Twins are more prone to perinatal injuries and are statistically more likely to have a premature birth compared to singletons.
Dizygotic twins and monozygotic twins prevalence for left handedness
Unlike monozygotic twins, dizygotic twins result from the fertilization of two eggs by two separate sperm within the same pregnancy. This causes the set of twins to have genetic variations, so their genetic information is unique from one another. In studies conducted between 1924 and 1976, there were more left-handed monozygotic twins: specifically, 15% of monozygotic twins were left-handed, while 13% of dizygotic twins were left-handed. In another study, the frequency of dizygotic pairs with one right-handed and one left-handed twin is about 23%, while pairs in which both individuals display left-handedness are less than 4%; the frequency of monozygotic pairs in which only one twin is left-handed is about 21%, and pairs in which both twins are left-handed are less than 4%. In that study, however, there was no difference in handedness frequency between monozygotic and dizygotic twins.
Currently, there is not much evidence to further prove the idea that monozygotic twins have a higher prevalence for left-handedness using the pathological left-handedness syndrome because of the improvements within medicine causing a decrease in birth defects and complications. In a recent analysis, it was even determined that there is no specific developmental complication that contributes to the higher prevalence of left-handedness between monozygotic and dizygotic twins.
There is no conclusive evidence to support the idea that a certain type of twin may have a higher prevalence of left-handedness, because the results from the studies conducted contradict one another. Even among studies analyzing whether gender within monozygotic and dizygotic populations affects the prevalence of left-handedness, some found that males have a higher prevalence, while other studies show that gender does not have an impact on handedness. Further studies addressing the topic need to be performed to come to a conclusive answer on whether the type of twin or gender affects handedness. Although there are many theories, such as cerebral symmetry, the reason has not been conclusively proven.
Chances of handedness
If the parents are both right-handed, in dizygotic and monozygotic twins there is a 21% chance of one being left-handed. If one parent is left-handed, in DZ and MZ twins there is a 57% chance of one being left-handed. If both parents are left-handed, it is almost certain one twin will be left-handed.
Cross-dominance in twins
19% of twins are cross-dominant. This is the same for both dizygotic and monozygotic. Cross-dominance is when a dominant eye and dominant hand are different.
Monozygotic twins: dichorionic and monochorionic and mirror imaging
During the early development of monozygotic twins, the time at which the embryo divides has an impact on placentation. If the split of the embryo occurs within three days of fertilization, two individual placentas are formed, resulting in monozygotic dichorionic twins. If the split of the embryo occurs between 3 and 12 days after fertilization, a placenta will be shared between the offspring, resulting in monozygotic monochorionic twins. Since the split in monozygotic monochorionic twins occurs after the establishment of an axis of bilateral symmetry, it was theorized that opposite handedness within the same pair of twins is more frequent than in monozygotic dichorionic twins because of mirror imaging: after the axis of bilateral symmetry is established, the twins in the embryo face each other and would develop traits opposite of one another because their actions are perceived to be matching. However, when comparing the frequency of discordant pairs of handedness, pairs that exhibit opposite handedness, there was little to no difference in frequency. The frequency of left-handedness in monozygotic dichorionic twins was 22% and the frequency of left-handedness in monozygotic monochorionic twins was 23%. Subsequently, this emphasized that chorion type did not affect left-handedness. Similarly, placentation, or the placement of the placenta, does not affect left-handedness.
Other factors of handedness
Family history of left-handedness
Birth stress – forces on and injuries to the baby's head
Gestational age
Sex
Hair whorl
Social pressure
See also
Footedness
References
Handedness
Handedness | Twins and handedness | Physics,Chemistry,Biology | 1,387 |
997,557 | https://en.wikipedia.org/wiki/Master%20of%20Architecture | The Master of Architecture (M.Arch or MArch) is a professional degree in architecture qualifying the graduate to move through the various stages of professional accreditation (internship, exams) that result in receiving a license.
Overview
The degree is earned through several possible paths of study, depending on both a particular program's construction, and the candidate's previous academic experience and degrees. M.Arch degrees vary in kind, so they are frequently given names such as "M.Arch I" and "M.Arch II" to distinguish them. All M.Arch degrees are professional degrees in architecture. There are, however, other master's degrees offered by architecture schools that are not accredited in any way.
Many schools offer several possible tracks of architectural education. Including study at the bachelor's and master's level, these tracks range up to 7.5 years in duration.
One possible route is what is commonly referred to as the "4+2" course. This path entails completing a four-year, accredited, pre-professional Bachelor of Arts in architecture or a Bachelor of Science in architecture. This degree is not professionally accredited; it is followed by a two-year (or three-year, depending on the nature and quality of your undergraduate study performance, and the evaluation by your master's degree program school of your undergraduate study) Master of Architecture program. This route offers several advantages: your first four years are a bit more loose, allowing the inclusion of some liberal arts study; you can attend two different institutions for your undergraduate and graduate study, which is helpful in that it allows you to have a more varied architectural education, and you can pick the best place for you to complete your thesis (because chances are, you might not pick the program that has the exact focus that you will want when it becomes time for your thesis study); and you will finish the 4+2 course of study with a master's degree that will provide you the career option of teaching architecture at the collegiate level.
The second route to obtaining an accredited master's degree begins in graduate school, with a 3 or 3.5-year master's degree (commonly called an "M.Arch I"). The advantage to this route is that the student can study something else they are interested in during their undergraduate study. Because students come from different undergraduate backgrounds, the breadth of knowledge and experience in the student body of an M.Arch I program is often considered an advantage. One possible disadvantage is that the total time in school is longer (7 or 7.5 years with an undergraduate degree). Another disadvantage is that the student has a very short time to cover the extremely broad scope of subject areas of which architects are expected to have a working knowledge. Nevertheless, major schools of architecture including MIT and Harvard often offer a 3.5-year program to students who already have a strong architectural background, fostering a competitive and productive academic environment.
A third possible route is what schools are calling a "post-professional" master's degree. It is research-based and often a stepping-stone to a Doctor of Philosophy in Architecture. Schools include Cornell, Harvard, Princeton, MIT, and RISD.
Some institutions offer a 5-year professional degree program. Depending on the school and course of study, this could be either a Bachelor of Architecture (B.Arch) or an M.Arch. In the U.S., it is typically a 5-year B.Arch. Either degree qualifies those who complete it to sit for the ARE (the Architect Registration Examination, the architecture equivalent of the bar exam), which leads to an architect's license in the U.S. One disadvantage of the B.Arch degree is that it is rarely considered as sufficient qualification for teaching architecture at the university/college level in the U.S. (though there are many exceptions). Many architects who wish to teach and have only received a B.Arch choose to pursue a 3-semester master's degree (not an M.Arch) to obtain further academic qualification.
Graduate-level architecture programs consist of course work in design, building science, structural engineering, architectural history, theory, professional practice, and elective courses. For those without any prior knowledge of the field, coursework in calculus, physics, computers, statics and strengths of materials, architectural history, studio, and building science is usually required. Some architecture programs allow students to specialize in a specific aspect of architecture, such as architectural technologies or digital media. A thesis or final project is usually required to graduate.
In the United States, The National Architectural Accrediting Board (NAAB) is the sole accrediting body for professional degree programs in architecture. Since most state registration boards in the United States require any applicant for licensure to have graduated from an NAAB-accredited program, obtaining such a degree is an essential aspect of preparing for the professional practice of architecture. First time students matriculating with a 5-year B.Arch degree can also qualify for registration without obtaining a master's degree. Some programs offer a concurrent learning model, allowing students the opportunity to work in the profession while they earn their degree, so that they can test for licensure immediately upon graduation.
In Canada, Master of Architecture degrees may be accredited by the Canadian Architectural Certification Board (CACB), allowing the recipient to qualify for both the ARE and the Examination for Architects in Canada (ExAC).
As of June 2022, there were 120 accredited Master of Architecture programs in the United States, including Puerto Rico.
Master's degree programs
United States
Canada
Colleges and universities in Canada with accredited Master of Architecture degree programs are listed below:
University of British Columbia
University of Calgary
Carleton University
Université Laval
McGill University
University of Manitoba
Université de Montréal
University of Guelph (Only Master of Landscape Architecture)
University of Toronto
Dalhousie University
University of Waterloo, School of Architecture
Ryerson University
Australia and New Zealand
Universities in Australia and New Zealand with accredited Master of Architecture degree programs (per the Architects Accreditation Council of Australia's Recognised Architecture Qualifications) are listed below:
Curtin University
Griffith University
Deakin University
Monash University
Queensland University of Technology (QUT)
RMIT University
University of Adelaide
University of Canberra
University of Melbourne
University of Newcastle
University of New South Wales
University of Queensland
University of South Australia (UniSA)
University of Sydney
University of Tasmania
University of Technology, Sydney (UTS)
University of Western Australia
University of Auckland
Unitec New Zealand
Victoria University of Wellington
Hong Kong
The only two universities offering an HKIA (Hong Kong Institute of Architects), CAA (Commonwealth Association of Architects) and RIBA (Royal Institute of British Architects) accredited Master of Architecture for professional registration as an architect are:
The Chinese University of Hong Kong, School of Architecture, Hong Kong, founded in 1992
The University of Hong Kong, Faculty of Architecture, Department of Architecture, Hong Kong, founded in 1950
China
Tsinghua University, Beijing
Beijing University of Civil Engineering and Architecture, Beijing
Tongji University, Shanghai
Southeast University, Nanjing
Xi'an Jiaotong-Liverpool University, starting fall 2014, language: English
Hunan University, Changsha
Singapore
National University of Singapore
Singapore University of Technology and Design
Mexico
In Mexico, an officially recognized Bachelor of Architecture is sufficient for practice.
Faculty of Architecture at the National Autonomous University of Mexico
Monterrey Institute of Technology and Higher Education
Universidad Autónoma Benito Juárez de Oaxaca
Universidad Autónoma de Guadalajara
Universidad Autónoma de Nuevo León
Universidad Autónoma de San Luis Potosí
Africa
University of Pretoria
University of Cape Town
University of the Witwatersrand
University of Johannesburg
Tshwane University of Technology
University of Nigeria, Enugu Campus
University of Carthage
Uganda Martyrs University
University of the Free State
Nelson Mandela Metropolitan University
University of Nairobi
Caleb University
Bells University of Technology
Ardhi University, Tanzania
Kwame Nkrumah University of Science and Technology, Ghana
Ahmadu Bello University, Zaria
Federal University of Technology, Akure
Federal University of Technology, Minna. Nigeria.
India
In India, the Council of Architecture regulates architectural education and maintains a registry of higher education institutions approved to offer a 2-year-long Master of Architecture degree. While the 5-year-long Bachelor of Architecture degree allows a person to register with the Council of Architecture as an architect and practice architecture in India, a Master of Architecture is often required for teaching architecture at the collegiate level.
Iran
Some universities in Iran with accredited Master of Architecture degree programs are listed below:
Tehran University
Shahid Beheshti University (SBU)
Iran University of Science and Technology
Tarbiat Modares University (TMU)
Tabriz Islamic Art University
Yazd University
University of Shahrood
Islamic Azad University
Sooreh University
Shiraz University
Schools and Universities in Europe
Austria
Academy of Fine Arts Vienna, Institute for Art and Architecture (B.Arch. and M.Arch. language: German and English) (Austria)
Bosnia and Herzegovina
University of Sarajevo Faculty of Architecture (B.Arch. and M.Arch. language: Bosnian and English)
Belgium
WENK Gent Brussels (Sint Lucas Institute of Architecture) Sint Lucas Ghent Brussels in Belgium (language: English)
Denmark
Royal Danish Academy of Fine Arts (M.A. Professional Degree, language: English)(Denmark)
Finland
University of Oulu (M.S. Professional Degree, language: English)(Finland)
University of Tampere (M.S. Professional Degree, language: English)(Finland)
Aalto University (M.S. Professional Degree, language: English)(Finland)
Germany
DIA Dessau (Dessau International Architecture) at the Hochschule Anhalt / Bauhaus Dessau in Germany (language: English)
Hochschule Wismar (language: German and English) in Wismar, Germany
Ireland
Cork Centre for Architectural Education (University College Cork/Munster Technological University)
Technological University Dublin
University College Dublin
Italy
Politecnico di Torino - I Facoltà di Architettura I (Italy)
Politecnico di Torino - II Facoltà di Architettura (Italy)
Liechtenstein
Hochschule Liechtenstein (candidate for accreditation, language: English)
Netherlands
TU Delft Faculty of Architecture (M.S. Professional Degree, language: English)
Academy of Architecture at the Amsterdam School of Art
Artez Academy of Architecture in Arnhem
Academie van Bouwkunst Groningen
Academie van bouwkunst Maastricht
The Rotterdam Academy of Architecture and Urban Design
TU Eindhoven Faculty of Architecture, Building and Planning (M.S. Professional Degree, language: English)
Poland
Warsaw University of Technology Architecture and Urban Planning with specialisation Architecture for Society of Knowledge (M.Arch. language: English) (Poland)
Cracow University of Technology Department of Architecture with specialisation Architecture and Urban Planning (M.Arch. RIBA accredited) (Poland)
Wroclaw University of Science and Technology Faculty of Architecture
(M.Arch. language: English) (Poland)
Serbia
University of Belgrade Architecture and Urban Planning (M.Arch. RIBA accredited)(M.Arch. language: Serbian, English)
University of Novi Sad Architecture (M.Arch. language: Serbian, English)
Slovenia
University of Ljubljana Architecture and Urban Planning (M.Arch. language: English) (M.I.A. Language: Slovenian)
Spain
Universidad de Navarra Department of Architecture (M.D.A. language: Spanish and English) (Spain)
The University of the Basque Country The University of the Basque Country (M.D.A. Language: Basque or Spanish) (Basque Country, Spain)
Switzerland
Jointmaster of Architecture in Berne, Fribourg and Geneva, (languages: English and French) (Switzerland)
Accademia di Architettura di Mendrisio (Switzerland)
Academie van Bouwkunst Tilburg (the Netherlands)
United Kingdom
All M.Arch courses listed below hold RIBA and ARB accreditation, corresponding to RIBA's Part 2 stage, which precedes Part 3 and registration as an architect.
England
University of Bath, Department of Architecture and Civil Engineering, Bath, as MArch
Birmingham City University, Birmingham School of Architecture, Birmingham, as MArch
Arts University Bournemouth, Bournemouth, as MArch
University of Brighton, Brighton, as MArch
University of the West of England (UWE Bristol), Bristol, as MArch
University of Cambridge, Department of Architecture, Cambridge as MPhil
The University of Creative Arts, Canterbury School of Architecture, as MArch
The University of Kent (Canterbury), Kent School of Architecture, as MArch
The University of Huddersfield, School of Art, Design and Architecture. as M.Arch or M.Arch (International)
Leeds Beckett University, School of Arts, as MArch or Level 7 Architecture Apprenticeship.
De Montfort University, The Leicester School of Architecture, Leicester, as MArch or Level 7 Architecture Apprenticeship.
University of Lincoln, The Lincoln School of Architecture, Lincoln, as MArch
University of Liverpool, Liverpool School of Architecture, Liverpool, as MArch
Liverpool John Moores University, Liverpool, as MArch
Architectural Association School of Architecture, London, as Final Examination
The London School of Architecture as MArch
The University College of London, The Bartlett School of Architecture, as MArch
The University of Arts, London, Central Saint Martins College of Art and Design, London, as MArch
The University of East London, School of Architecture, Computing and Engineering, as MArch
The University of Greenwich London, School of Architecture, Design and Construction, London, as MArch
Kingston University London, Kingston School of Art, London, as MArch
London Metropolitan University, School of Art, Architecture and Design, as MArch or Level 7 Architect Apprenticeship
Royal College of Art, School of Architecture, as MA
London South Bank University, Engineering, Science and the Built Environment, as MArch or Level 7 Architect Apprenticeship
The University of Westminster, Department of Architecture, as M.Arch
University of Manchester and Manchester Metropolitan University, The Manchester School of Architecture, as MArch or Level 7 Architect Apprenticeship
The University of Newcastle upon Tyne, School of Architecture, Planning and Landscape, Newcastle, as M.Arch
Northumbria University, Architecture Department, School of the Built Environment, Newcastle upon Tyne, as MArch or Level 7 Architect Apprenticeship
The University of Nottingham, Architecture and Built Environment, Nottingham, as MArch
Nottingham Trent University, School of Architecture, Design and the Built Environment. as M.Arch
Oxford Brookes University, School of Architecture, Oxford, as MArchD
University of Central Lancashire (UCLAN), (Preston) The Grenfell-Baines School of Architecture, Construction and Environment, as MArch
RIBA Studio, as Diploma
The University of Plymouth, Plymouth School of Architecture, Design and Environment, Plymouth, as M.Arch
The University of Portsmouth, Portsmouth School of Architecture, Portsmouth, as MArch
The University of Sheffield, Sheffield School of Architecture, Sheffield, as MArch
Sheffield Hallam University, Department of Architecture and Planning, Sheffield, as M.Arch
Northern Ireland
The Queen's University Belfast as MArch
The University of Ulster as MArch
Scotland
University of Dundee as MArch (with Honours)
University of Edinburgh, The Edinburgh College of Art, MArch
University of Strathclyde (Glasgow) as PgDip or MArch
Glasgow School of Art, Mackintosh School of Architecture, as MArch
Robert Gordon University, The Scott Sutherland School of Architecture & Built Environment, via BSc/MArch (Integrated Degree) or MArch
Duncan of Jordanstone College of Art and Design as MArch
Wales
Cardiff University, Welsh School of Architecture, via BSc/MArch (Integrated Degree) or MArch
Schools and Universities in the Middle East
Technion Department of Architecture (M.Arch. language: English) (Israel)
Bezalel Academy of Art and Design Department of Architecture (B.Arch. language: Hebrew) (Israel)
Ariel University Department of Architecture (B.Arch. language: Hebrew) (Israel)
Middle East Technical University Department of Architecture (M.Arch. language: English) (Turkey)
Mimar Sinan Fine Arts University (B.Arch. and M.Arch. language: Turkish) (Turkey)
King Saud University, college of architecture and planning ( Riyadh, Saudi Arabia)
B.Arch main majors:
1. Science of building
2. Urban design
Master's and PhD degrees are also offered.
Language: English, Arabic
Ranked (according to NAAB, 2012) as the #1 architecture school in the Middle East
See also
Bachelor of Architecture
Doctor of Architecture
National Council of Architectural Registration Boards
References
Architecture schools
Architecture
Architectural education | Master of Architecture | Engineering | 3,300 |
826,723 | https://en.wikipedia.org/wiki/Angular%20diameter | The angular diameter, angular size, apparent diameter, or apparent size is an angular separation (in units of angle) describing how large a sphere or circle appears from a given point of view. In the vision sciences, it is called the visual angle, and in optics, it is the angular aperture (of a lens). The angular diameter can alternatively be thought of as the angular displacement through which an eye or camera must rotate to look from one side of an apparent circle to the opposite side.
A person can resolve with their naked eyes diameters down to about 1 arcminute (approximately 0.017° or 0.0003 radians). This corresponds to 0.3 m at a 1 km distance, or to perceiving Venus as a disk under optimal conditions.
Formulation
The angular diameter of a circle whose plane is perpendicular to the displacement vector between the point of view and the center of said circle can be calculated using the formula
δ = 2 arctan(d / (2D)),
in which δ is the angular diameter (in units of angle, normally radians, sometimes in degrees, depending on the arctangent implementation), d is the linear diameter of the object (in units of length), and D is the distance to the object (also in units of length). When D ≫ d, we have
δ ≈ d / D,
and the result obtained is necessarily in radians.
For a sphere
For a spherical object whose linear diameter equals d and where D is the distance to the center of the sphere, the angular diameter can be found by the following modified formula:
δ = 2 arcsin(d / (2D))
Such a different formulation is due to the fact that the apparent edges of a sphere are its tangent points, which are closer to the observer than the center of the sphere, and have a distance between them which is smaller than the actual diameter. The above formula can be found by understanding that in the case of a spherical object, a right triangle can be constructed such that its three vertices are the observer, the center of the sphere, and one of the sphere's tangent points, with D as the hypotenuse and d/(2D) as the sine of half the angular diameter.
Written in terms of the radius R of the sphere (so that d = 2R) and the distance h from the observer to the near side of the sphere (so that D = R + h), the same formula reads δ = 2 arcsin(R / (R + h)); the tangent points that define this angle are the observer's true horizon, so the angular radius δ/2 is also the angle between the nadir and the horizon (equivalently, the horizon lies at zenith angle 180° − δ/2).
The difference with the case of a perpendicular circle is significant only for spherical objects of large angular diameter, since the small-angle approximations arcsin x ≈ arctan x ≈ x hold for small values of x.
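A short numerical check of the two formulas, using rounded values for the Moon's diameter and mean distance (the figures are illustrative only):

# Angular diameter of the Moon: flat-disc formula vs. sphere formula.
import math

d = 3474.8          # approximate lunar diameter, km
D = 384_400.0       # approximate mean Earth-Moon distance (to the centre), km

delta_disc = 2 * math.atan(d / (2 * D))    # perpendicular-circle formula
delta_sphere = 2 * math.asin(d / (2 * D))  # tangent-point (sphere) formula

print(math.degrees(delta_disc))    # about 0.518 degrees
print(math.degrees(delta_sphere))  # about 0.518 degrees; difference negligible here

Both results agree to well within observational precision because the Moon's angular diameter is small; the distinction between the two formulas only matters for nearby spheres that fill a large part of the sky.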
Estimating angular diameter using the hand
Estimates of angular diameter may be obtained by holding the hand at right angles to a fully extended arm, as shown in the figure.
Use in astronomy
In astronomy, the sizes of celestial objects are often given in terms of their angular diameter as seen from Earth, rather than their actual sizes. Since these angular diameters are typically small, it is common to present them in arcseconds (″). An arcsecond is 1/3600th of one degree (1°) and a radian is 180/π degrees. So one radian equals 3,600 × 180/π arcseconds, which is about 206,265 arcseconds (1 rad ≈ 206,264.806247″). Therefore, the angular diameter of an object with physical diameter d at a distance D, expressed in arcseconds, is given by:
δ ≈ 206,265 (d / D) arcseconds.
These objects have an angular diameter of 1″:
an object of diameter 1 cm at a distance of 2.06 km
an object of diameter 725.27 km at a distance of 1 astronomical unit (AU)
an object of diameter 45 866 916 km at 1 light-year
an object of diameter 1 AU (149 597 871 km) at a distance of 1 parsec (pc)
Thus, the angular diameter of Earth's orbit around the Sun as viewed from a distance of 1 pc is 2″, as 1 AU is the mean radius of Earth's orbit.
The angular diameter of the Sun, from a distance of one light-year, is 0.03″, and that of Earth 0.0003″. The angular diameter 0.03″ of the Sun given above is approximately the same as that of a human body at a distance of the diameter of Earth.
This table shows the angular sizes of noteworthy celestial bodies as seen from Earth:
The angular diameter of the Sun, as seen from Earth, is about 250,000 times that of Sirius. (Sirius has twice the diameter and its distance is 500,000 times as much; the Sun is 10¹⁰ times as bright, corresponding to an angular diameter ratio of 10⁵, so Sirius is roughly 6 times as bright per unit solid angle.)
The angular diameter of the Sun is also about 250,000 times that of Alpha Centauri A (it has about the same diameter and the distance is 250,000 times as much; the Sun is 4×10¹⁰ times as bright, corresponding to an angular diameter ratio of 200,000, so Alpha Centauri A is a little brighter per unit solid angle).
The angular diameter of the Sun is about the same as that of the Moon. (The Sun's diameter is 400 times as large and its distance also; the Sun is 200,000 to 500,000 times as bright as the full Moon (figures vary), corresponding to an angular diameter ratio of 450 to 700, so a celestial body with a diameter of 2.5–4″ and the same brightness per unit solid angle would have the same brightness as the full Moon.)
Even though Pluto is physically larger than Ceres, when viewed from Earth (e.g., through the Hubble Space Telescope) Ceres has a much larger apparent size.
Angular sizes measured in degrees are useful for larger patches of sky. (For example, the three stars of Orion's Belt cover about 4.5° of angular size.) However, much finer units are needed to measure the angular sizes of galaxies, nebulae, or other objects of the night sky.
Degrees, therefore, are subdivided as follows:
360 degrees (°) in a full circle
60 arc-minutes () in one degree
60 arc-seconds () in one arc-minute
To put this in perspective, the full Moon as viewed from Earth is about ½°, or 30′ (or 1800″). The Moon's motion across the sky can be measured in angular size: approximately 15° every hour, or 15″ per second. A one-mile-long line painted on the face of the Moon would appear from Earth to be about 1″ in length.
In astronomy, it is typically difficult to directly measure the distance to an object, yet the object may have a known physical size (perhaps it is similar to a closer object with known distance) and a measurable angular diameter. In that case, the angular diameter formula can be inverted to yield the angular diameter distance to distant objects as
d_A = d / δ, with δ expressed in radians.
In non-Euclidean space, such as our expanding universe, the angular diameter distance is only one of several definitions of distance, so that there can be different "distances" to the same object. See Distance measures (cosmology).
Non-circular objects
Many deep-sky objects such as galaxies and nebulae appear non-circular and are thus typically given two measures of diameter: major axis and minor axis. For example, the Small Magellanic Cloud has a visual apparent diameter of × .
Defect of illumination
Defect of illumination is the maximum angular width of the unilluminated part of a celestial body seen by a given observer. For example, if an object is 40″ of arc across and is 75% illuminated, the defect of illumination is 10″.
See also
Angular diameter distance
Angular resolution
Apparent magnitude
List of stars with resolved images
Moon illusion
Perceived visual angle
Solid angle
Visual acuity
Visual angle
References
External links
Small-Angle Formula (archived 7 October 1997)
Visual Aid to the Apparent Size of the Planets
Elementary geometry
Astrometry
Angle
Equations of astronomy | Angular diameter | Physics,Astronomy,Mathematics | 1,585 |
8,980,050 | https://en.wikipedia.org/wiki/Fouling%20community | Fouling communities are communities of organisms found on artificial surfaces like the sides of docks, marinas, harbors, and boats. Settlement panels made from a variety of substances have been used to monitor settlement patterns and to examine several community processes (e.g., succession, recruitment, predation, competition, and invasion resistance). These communities are characterized by the presence of a variety of sessile organisms including ascidians, bryozoans, mussels, tube building polychaetes, sea anemones, sponges, barnacles, and more. Common predators on and around fouling communities include small crabs, starfish, fish, limpets, chitons, other gastropods, and a variety of worms.
Ecology
Fouling communities follow a distinct succession pattern in a natural environment.
Environmental impact
Impacts on Humans
Fouling communities can have a negative economic impact on humans by damaging the bottoms of boats, docks, and other marine human-made structures. This effect is known as biofouling, and has been combated with anti-fouling paint, which is now known to introduce toxic metals into the marine environment. Fouling communities contain a variety of species, and many of these are filter feeders, meaning that organisms in the fouling community can also improve water clarity.
Invasive Species
Fouling communities do grow on natural structures, however these communities are largely made up of native species, whereas the communities growing on man-made structures have larger populations of invasive species. This difference between the species diversity across human structures and natural substrate is likely dependent on human pollution, which is known to weaken native species and create a community and environment dominated by non-indigenous species. These largely non-indigenous species communities living on docks and boats usually have a higher resistance to anthropogenic disturbances. This effect is sorely felt in untouched native marine communities, as non-indigenous species growing on boat hulls are transported across the world, to wherever the boat anchors.
Research history
Fouling communities were highlighted particularly in the literature of marine ecology as a potential example of alternate stable states through the work of John Sutherland in the 1970s at Duke University, although this was later called into question by Connell and Sousa.
Fouling communities have been used to test the ecological effectiveness of artificial coral reefs.
See also
Biofouling
Ecological succession
Didemnum vexillum
References
External links
http://research.ncl.ac.uk/biofouling/ is the Newcastle University barnacle and biofouling information site.
http://www.imo.org/en/OurWork/Environment/Biofouling/Pages/default.aspx is the International Maritime Organization information about biofouling which includes a comprehensive list of invasive species in the fouling community.
https://darchive.mblwhoilibrary.org/bitstream/handle/1912/191/chapter%203.pdf?sequence=11
https://pdxscholar.library.pdx.edu/cgi/viewcontent.cgi?article=4896&context=open_access_etds
Aquatic ecology
Fouling | Fouling community | Materials_science,Biology | 654 |
78,172,058 | https://en.wikipedia.org/wiki/Pegipanermin | Pegipanermin (; developmental code names and proposed brand names DN-TNF, INB-03, LIVNate, Quellor XENP345, XPro1595) is a tumor necrosis factor α (TNFα) inhibitor which is under development for the treatment of Alzheimer's disease, mild cognitive impairment, major depressive disorder, and other indications. It is described as having potential anti-inflammatory effects. It is administered by subcutaneous injection.
The drug is a protein and PEGylated variant of TNFα that does not bind to the tumor necrosis factor receptors (TNF receptors) but instead binds to and forms heterotrimers with TNFα and prevents TNFα from activating the TNF receptors. However, pegipanermin is said to be selective for blocking TNFα activation of the tumor necrosis factor receptor 1 (TNFR1) but not of the tumor necrosis factor receptor 2 (TNFR2). Whereas the non-selective TNFα inhibitor etanercept suppressed hippocampal neurogenesis, learning, and memory in animals, pegipanermin did not do so, yet still inhibited neuroinflammation. Pegipanermin crosses the blood–brain barrier into the central nervous system.
As of October 2024, pegipanermin is in phase 2 clinical trials for Alzheimer's disease, mild cognitive impairment, and major depressive disorder. It is in the preclinical stage of development for HER2-positive breast cancer. No recent development has been reported for other neurodegenerative disorders, non-alcoholic steatohepatitis, or solid tumors. Development was discontinued for Parkinson's disease and COVID-19 respiratory infections.
See also
Infliximab
References
Anti-inflammatory agents
Experimental antidepressants
Experimental drugs
Experimental drugs for Alzheimer's disease
TNF inhibitors
Peptides | Pegipanermin | Chemistry | 397 |
55,363,309 | https://en.wikipedia.org/wiki/History%20of%20magnetic%20resonance%20imaging | The history of magnetic resonance imaging (MRI) includes the work of many researchers who contributed to the discovery of nuclear magnetic resonance (NMR) and described the underlying physics of magnetic resonance imaging, starting early in the twentieth century. One researcher was American physicist Isidor Isaac Rabi who won the Nobel Prize in Physics in 1944 for his discovery of nuclear magnetic resonance, which is used in magnetic resonance imaging. MR imaging was invented by Paul C. Lauterbur who developed a mechanism to encode spatial information into an NMR signal using magnetic field gradients in September 1971; he published the theory behind it in March 1973.
The factors leading to image contrast (differences in tissue relaxation time values) had been described nearly 20 years earlier by physician and scientist Erik Odeblad and Gunnar Lindström. Among many other researchers in the late 1970s and 1980s, Peter Mansfield further refined the techniques used in MR image acquisition and processing, and in 2003 he and Lauterbur were awarded the Nobel Prize in Physiology or Medicine for their contributions to the development of MRI. The first clinical MRI scanners were installed in the early 1980s and significant development of the technology followed in the decades since, leading to its widespread use in medicine today.
Nuclear magnetic resonance
Isidor Isaac Rabi won the Nobel Prize in Physics in 1944 for his discovery of nuclear magnetic resonance, which is used in magnetic resonance imaging. In 1950, spin echoes and free induction decay were first detected by Erwin Hahn and in 1952, Herman Carr produced a one-dimensional NMR spectrum as reported in his Harvard PhD thesis.
The next step (from spectra to imaging) was proposed by Vladislav Ivanov in Soviet Union, who filed in 1960 a patent application for a Magnetic Resonance Imaging device. Ivanov's main contribution was the idea of using magnetic field gradient, combined with a selective frequency excitation/readout, to encode the spatial coordinates. In modern terms, it was only proton-density (not relaxation times) imaging, which was also slow, since only one gradient direction was used at a time and the imaging had to be done slice-by-slice. Nevertheless, it was a true magnetic resonance imaging procedure. Originally rejected as "improbable", Ivanov's application was finally approved in 1984 (with the original priority date).
Relaxation times and early development of MRI
By 1959, Jay Singer had studied blood flow by NMR relaxation time measurements of blood in living humans. Such measurements were not introduced into common medical practice until the mid-1980s, although a patent for a whole-body NMR machine to measure blood flow in the human body was filed by Alexander Ganssen in early 1967.
In the 1960s, the results of work on relaxation, diffusion, and chemical exchange of water in cells and tissues of various types appeared in the scientific literature. In 1967, Ligon reported the measurement of NMR relaxation of water in the arms of living human subjects. In 1968, Jackson and Langham published the first NMR signals from a living animal, an anesthetized rat.
In the 1970s, it was realized that the relaxation times are key determinants of contrast in MRI and can be used to detect and differentiate a range of pathologies. A number of research groups had shown that early cancer cells tended to exhibit longer relaxation times than their corresponding normal cells, which stimulated initial interest in the idea of detecting cancer with NMR. These early groups included Damadian, Hazlewood and Chang, and several others.
This also initiated a program to catalog the relaxation times of a wide range of biological tissues, which became one of the main motivations for the development of MRI.
In a March 1971 paper in the journal Science, Raymond Damadian, an Armenian-American doctor and professor at the Downstate Medical Center State University of New York (SUNY), reported that tumors and normal tissue can be distinguished in vivo by NMR. Damadian's initial methods were flawed for practical use, relying on a point-by-point scan of the entire body and using relaxation rates, which turned out not to be an effective indicator of cancerous tissue. While researching the analytical properties of magnetic resonance, Damadian created a hypothetical magnetic resonance cancer-detecting machine in 1972. He patented such a machine on February 5, 1974. Lawrence Bennett and Dr. Irwin Weisman also found in 1972 that neoplasms display different relaxation times than corresponding normal tissue. Zenuemon Abe and his colleagues applied for a patent on a targeted NMR scanner in 1973. They published this technique in 1974. Damadian claims to have invented the MRI.
The U.S. National Science Foundation notes "The patent included the idea of using NMR to 'scan' the human body to locate cancerous tissue." However, it did not describe a method for generating pictures from such a scan or precisely how such a scan might be done.
Imaging
Paul Lauterbur at Stony Brook University expanded on Carr's technique and developed a way to generate the first MRI images, in 2D and 3D, using gradients. Lauterbur published the first nuclear magnetic resonance image in 1973 and the first cross-sectional image of a living mouse in January 1974. In the late 1970s, Peter Mansfield, a physicist and professor at the University of Nottingham, England, developed the echo-planar imaging (EPI) technique that would lead to scans taking seconds rather than hours and produce clearer images than Lauterbur had. Damadian, along with Larry Minkoff and Michael Goldsmith, obtained an image of a tumor in the thorax of a mouse in 1976. They also performed the first MRI body scan of a human being on July 3, 1977, studies they published in 1977. In 1979, Richard S. Likes filed a patent on k-space.
Full-body scanning
During the 1970s, a team led by John Mallard built the first full-body MRI scanner at the University of Aberdeen. On 28 August 1980, they used this machine to obtain the first clinically useful image of a patient's internal tissues using MRI, which identified a primary tumour in the patient's chest, an abnormal liver, and secondary cancer in his bones. This machine was later used at St Bartholomew's Hospital, in London, from 1983 to 1993. Mallard and his team are credited for technological advances that led to the widespread introduction of MRI.
In 1975, the University of California, San Francisco Radiology Department founded the Radiologic Imaging Laboratory (RIL). With the support of Pfizer, Diasonics, and later Toshiba America MRI, the lab developed new imaging technology and installed systems in the United States and worldwide. In 1981 RIL researchers, including Leon Kaufman and Lawrence Crooks, published Nuclear Magnetic Resonance Imaging in Medicine. In the 1980s the book was considered the definitive introductory textbook to the subject.
In 1980, Paul Bottomley joined the GE Research Center in Schenectady, New York. His team ordered the highest field-strength magnet then available, a 1.5 T system, and built the first high-field device, overcoming problems of coil design, RF penetration and signal-to-noise ratio to build the first whole-body MRI/MRS scanner. The results translated into the highly successful 1.5 T MRI product-line, delivering over 20,000 systems. In 1982, Bottomley performed the first localized MRS in the human heart and brain. After starting a collaboration on heart applications with Robert Weiss at Johns Hopkins, Bottomley returned to the university in 1994 as Russell Morgan Professor and director of the MR Research Division.
Additional techniques
In 1986, Charles L. Dumoulin and Howard R. Hart at General Electric developed MR angiography, and Denis Le Bihan obtained the first images of, and later patented, diffusion MRI. In 1988, Arno Villringer and colleagues demonstrated that susceptibility contrast agents may be employed in perfusion MRI. In 1990, Seiji Ogawa at AT&T Bell Labs recognized that oxygen-depleted blood with dHb was attracted to a magnetic field, and discovered the technique that underlies functional magnetic resonance imaging (fMRI).
In the early 1990s, Peter Basser and Le Bihan working at NIH, and Aaron Filler, Franklyn Howe and colleagues published the first DTI and tractographic brain images. Joseph Hajnal, Young and Graeme Bydder described the use of FLAIR pulse sequence to demonstrate high signal regions in normal white matter in 1992. In the same year, arterial spin labelling was developed by John Detre and Alan P. Koretsky. In 1997, Jürgen R. Reichenbach, E. Mark Haacke and coworkers at Washington University School of Medicine developed Susceptibility weighted imaging.
Advances in semiconductor technology were crucial to the development of practical MRI, which requires a large amount of computational power.
Although MRI is most commonly performed in the clinic at 1.5 T, higher fields such as 3 T for clinical imaging and more recently 7 T for research purposes are gaining popularity because of their increased sensitivity and resolution. In research laboratories, human studies have been performed at 9.4 T (2006), 10.5 T (2019), and up to 11.7 T (2024) <https://healthcare-in-europe.com/en/news/11-7-tesla-first-images-world-most-powerful-mri-scanner.html>. Non-human animal studies have been performed at up to 21.1 T.
In 2020, the United States Food and Drug Administration (USFDA) granted 510(k) clearance to Hyperfine Research's bedside MRI system. The Hyperfine system claims 1/20th the cost, 1/35th the power consumption, and 1/10th the weight of conventional MRI systems. It uses a standard electrical outlet for power.
2003 Nobel Prize
Reflecting the fundamental importance and applicability of MRI in medicine, Paul Lauterbur of Stony Brook University and Sir Peter Mansfield of the University of Nottingham were awarded the 2003 Nobel Prize in Physiology or Medicine for their "discoveries concerning magnetic resonance imaging". The Nobel citation acknowledged Lauterbur's insight of using magnetic field gradients to determine spatial localization, a discovery that allowed the acquisition of 3D and 2D images. Mansfield was credited with introducing the mathematical formalism and developing techniques for efficient gradient utilization and fast imaging. The research that won the Prize was done almost 30 years earlier while Paul Lauterbur was a professor in the Department of Chemistry at Stony Brook University in New York.
References
External links
Magnetic
Magnetic resonance imaging | History of magnetic resonance imaging | Chemistry | 2,157 |
35,409,799 | https://en.wikipedia.org/wiki/Amp%C3%A8re%20Prize | The Prix Ampère de l’Électricité de France is a scientific prize awarded annually by the French Academy of Sciences.
Founded in 1974 in honor of André-Marie Ampère to celebrate his 200th birthday in 1975, the award is granted to one or more French scientists for outstanding research work in mathematics or physics. The monetary award is 50,000 euro, funded by Électricité de France.
Winners
2024 : Xavier Pennec
2023 : Philippe Grangier
2022 : Yann Brenier
2021 : Geodynamo team of ISTerre
2020 : Guy David
2019 : Jacqueline Bloch
2018 : Frank Merle
2017 :
2016 : Alain Brillet
2015 :
2014 : Gilles Chabrier
2013 : Arnaud Beauville
2012 :
2011 :
2010 :
2009 :
2008 : Gérard Iooss
2007 :
2004, 2005, 2006 : Prize not awarded.
2003 : Gilles Lebeau
2002 :
2001 : Bernard Derrida
2000 : Pierre Suquet
1999 : Yves Colin de Verdière
1998 : and Jean-Michel Raimond
1997 : Michèle Vergne
1996 : and Marc Mézard
1995 : Claude Itzykson
1994 :
1993 : Christophe Soulé
1992 : Pierre-Louis Lions
1991 : Michel Devoret and
1990 : Jean-Michel Bismut
1989 : Adrien Douady
1988 : Jules Horowitz
1987 : Michel Raynaud
1986 :
1985 : Haïm Brezis
1984 : Daniel Kastler
1983 : Claude Bouchiat, Marie-Anne Bouchiat and
1982 : Paul-André Meyer
1981 : Édouard Brézin, Jean Zinn-Justin
1980 : Alain Connes
1979 : Claude Cohen-Tannoudji
1978 : Pierre Cartier
1977 : Pierre-Gilles de Gennes
1976 : Jacques Dixmier
1975 : André Lagarrigue
1974 : Jean Brossel
See also
List of mathematics awards
List of physics awards
List of prizes named after people
References
External links
Physics awards
Mathematics awards
Awards established in 1974 | Ampère Prize | Technology | 392 |
56,200,257 | https://en.wikipedia.org/wiki/Michael%20C.%20Mitchell | Michael C. Mitchell (born January 4, 1946) is an American planner, designer, lecturer and environmentalist. He works on rural development.
Earlier career
At Portland State University, Mitchell became one of the organizers of the First Earth Day in 1970, coordinating universities throughout America's northwest states.[citation needed] After his work on the First Earth Day, he was one of ten university students selected from across the nation by President Richard Nixon's Administration to form a national Youth Advisory Board on environmental matters, S.C.O.P.E. (Student Council on Pollution and the Environment). The board was assigned to the U.S. Department of the Interior, where Mitchell was a reviewer on the creation of the first Environmental Impact Statement (EIS).
Mitchell continued his work with what became the United States Environmental Protection Agency (EPA), writing an environmental education program for students.[citation needed]
MCM Group International
MCM Group is an international planning and design firm headquartered in Los Angeles. Founded in 1984 by Michael C. Mitchell after the close of the Los Angeles Olympic Games, where he served as the head of planning and operations, the firm has sought to expand those planning techniques as a model to address prominent social problems. MCM Group provides feasibility consulting, planning, architecture, landscape design and sustainable engineering services. Mitchell has developed offices in Tokyo, Moscow, Middle East offices in Doha, Qatar, an African base in Nairobi, Kenya and currently four offices in China, with its headquarters in Beijing.
1984 Los Angeles Summer Olympic Games
In the early 1980s, Mitchell was recruited by the Los Angeles Olympic Organizing Committee, where he served as the Group vice-president of Planning and Control (Finance). His duties included overseeing the planning of the Olympic venues and supervising the architectural department's venue planning. During the Olympics he was responsible for the Games Operations Center and oversaw the closeout of the Games after their completion.
He has since served as a senior planning consultant to six other Olympic Games and four World Fairs.
LA84 Foundation
As head of the close-out operations after the completion of the Los Angeles 1984 Summer Olympics, Mitchell oversaw the creation of the LA84 Foundation, which was formed out of the $225 million surplus from the operations of the Games. The Foundation is now a national leader in supporting youth programs, providing recreation and learning opportunities to disadvantaged youth, training youth coaches and convening national conferences on youth sports issues.
Live Aid
In the spring of 1985, Mitchell was contacted by Bob Geldof, an Irish rock musician who had been working on issues of drought and famine in Africa. Geldof asked Mitchell to produce a worldwide televised music show to raise funds to help alleviate the catastrophic consequences of the worst African famine in a century.
Mitchell became the Executive Producer of the worldwide Live Aid broadcast (under a newly formed venture Worldwide Sports and Entertainment) and President of the Live Aid Foundation in America.
The July 13, 1985 broadcast was the world's first large globally interactive show seen by 1.5 billion viewers in 150 countries. Whereas the 1984 Olympics utilized three satellites to beam from one location around the world, Live Aid utilized thirteen satellites sending and receiving concerts from seven locations from around the world and producing one international feed back to the 150 nations. Despite 1985 being at the height of the Cold War, Mitchell established a global broadcast with a live concert from the Soviet Union featuring Autograph, and a delayed Live Aid showing in China.
President Ronald Reagan's Administration supported the Live Aid Foundation by providing wheat from America's reserves and awarded Mitchell a Presidential Citation for the Live Aid Foundation's contributions to humanity.
NFIE
Mitchell continued his contributions to social and education programs by accepting an appointment to the Board of the National Education Association's Foundation for the Improvement of Education (NFIE), serving on the board from 1987 to 1997. Since its beginning in 1969, the Foundation has served as a laboratory of learning, offering funding and other resources to public school educators, their schools, and districts to solve complex teaching and learning challenges.
Fund for Democracy and Development
During the dissolution of the Soviet Union starting in 1990, Russia and Ukraine experienced a severe shortage of medical and food supplies. Working throughout both countries witnessing first-hand the growing crisis, Mitchell and his close friend, Yankel Ginzburg, an American artist and humanitarian, who had family in Tver, Russia, responded to requests by Russia's leadership for assistance, co-founding the "Fund for Democracy and Development" to provide aid to alleviate the crisis.
Mitchell served as the founding board chairman in 1991 and L. Ronald Scheman (co-founder of the Pan American Development Foundation, where his work included providing financial assistance to low-income rural communities), served as the first President. Past President Richard M. Nixon served as the honorary chairman of the Fund.
From 1991 to 1994 the Fund is credited with channeling 240 million dollars worth of staples and food supplies to the former Soviet Union. As gratitude for the contributions of the Fund, the Russian government commissioned a monument park to reflect American goodwill.
Amur Tiger Sanctuary
With offices established in Moscow and St. Petersburg, Mitchell contributed to several rural development and environmental projects across the former Soviet Union. Mitchell's planning of development projects in rural Russia included work in Siberia on sustainable resource and forest management practices.
While undertaking those projects in conjunction with local wildlife scientists Mitchell convinced the Prime Minister of Russia, Viktor Chernomyrdin, to establish the Amur Tiger Sanctuary in 1993, which was initially funded through the Global Survival Network (GSN), an environmental organization he co-founded with Steve Galster now of Freeland Foundation.
The Sanctuary included introducing armed ranger patrols to counter the threat that poachers posed in the region. The initial work that Mitchell and the executive director of GSN, Steve Galster, did to establish the sanctuary was soon funded by the World Wildlife Fund (WWF), now known as the World Wide Fund for Nature, and is currently carried out with the support of the Ministry of Natural Resources and Environment (Russia). As a result of this work, the wild Siberian tiger population has rebounded from its critically endangered level.
Exposing animal and human trafficking
In order to strengthen the Sanctuary efforts to stop poaching, Mitchell worked with Steve Galster conducting undercover video interviews with the poachers. Through these undercover meetings, he and Galster discovered a link between animal poachers and human traffickers. What began as an effort to preserve habitat became an international exposé on trafficking. From 1995 to 1997 they undertook a two-year undercover investigation personally holding meetings with traffickers and trafficked women to expose the international relationship between animal and human trafficking.
Information and undercover video derived from their investigation were used to create a GSN written report, "Crime & Servitude" and a video documentary, "Bought & Sold." The film was released in 1997 and received widespread media coverage in the US and abroad, including specials on ABC Primetime Live, CNN, and BBC.
The documentary also helped to catalyze legislative reform on trafficking as well as new financial resources to address the problem.
Galster took what was learned during that undercover period and continues this work, founding the Freeland Foundation, which is the lead implementing partner of Asia's Regional Response to Endangered Species Trafficking (ARREST), a program sponsored by the U.S. government in partnership with ASEAN and over fifty governmental and non-governmental organizations.
The material that was collected during those two years is housed at the Human Rights Documentation Initiative (HRDI) at the University of Texas at Austin.
United Nations Day of Tolerance
Beginning in 1985, Mitchell began an association with Irving Sarnoff, the executive director of Friends of the United Nations (FOTUN), and his co-founder, Dr. Noel Brown, Director of the United Nations Environmental Program (UNEP), North America. The Friends of the United Nations is an NGO dedicated to advocating support for programs of the United Nations.
As part of their work on international social issues Mitchell was asked to create a celebration for the United Nations International Day for Tolerance in 1999. The International Day for Tolerance is an annual observation declared by UNESCO in 1995 to generate public awareness of the dangers of intolerance.
Mitchell organized the 1999 event honoring Mikhail Gorbachev, former leader of the Soviet Union and Arnold Schwarzenegger, actor, politician and Chairman of the USC Schwarzenegger Institute of State and Global Policy. Keynote speakers included John Kerry, U.S. Senator and U.S. Secretary of State.
Rural development
One of the first projects integrating agricultural development, sustainability, community and social values, and economic growth was in a region of Qingdao, China where his company, MCM Group, brought international blueberry agricultural experts to develop what is considered now one of the world's largest blueberry farms (The Qingdao Cangma Mountain Development). The project included hi-technology organic agriculture, agritourism, educational programs, local culture and residential development to provide the local rural community with a successful economic transition.
Lectures and education
Invited by universities in the U.S., China, South Korea and Japan, he has given lectures and planning studios, sharing his professional experience with students and faculty members.
University of Michigan, ERB Institute for Global Sustainable Enterprise – "Sustainability, Design Thinking, and Business Strategies: Developing 'Optimal Environments' in China" - March, 2011
"Experiential Design," Lecture at Tianjin University of Technology, April 20, 2012
Featured speaker at China's first International Architectural Education Forum held at Tianjin University, along with Karl Otto Ellefsen, the Director of the Oslo School of Architecture and President of the European Association for Architectural Education, and Preston Scott Cohen, Professor of Architecture at Harvard Graduate School of Design, September, 2014.
Keynote Address at Sino-US International Design Exhibition – Los Angeles, September 3, 2016
Huaqiao University, College of Tourism, November 2016
Tianjin Association of City Planning – Master Lecture, May 11, 2017
Keynote Speaker – "Digital Brings Changes to Entertainment Experience," at Tianjin's Design Week themed – "The Future is Now," May 13, 2017
New Urbanism & Agritourism Research Program – July 22, 2017
He led a meeting of the New Urbanism and Agritourism Research Program in Beijing in 2017. The program won full support from enterprises, universities and social groups including China's Association of Mayors, Tsinghua University School of Social Sciences, Digital China, China State Farming Agriculture Group, Agricultural Valley Research Institute, Shandong University of Arts, Tourism College of Huaqiao University, CSA, Beijing Qunxue Urban and Rural Community Social Development Research Institute and the Taiwan Rural Tourism Association.
He also initiated internship programs providing Chinese and African students with opportunities to receive training in MCM offices.
Recognition
1984 : Los Angeles Olympics Organizing Committee – Recognition and Appreciation for Contribution to the Success of the Los Angeles Olympic Games, 1984.
1984 : Plaque, Michael C. Mitchell, Group Vice-President Planning and Control, Los Angeles Olympics Organizing Committee – Games of the XXIIIrd Olympiad, July 28-August 12, 1984.
1985 : Presidential Citation for the Live Aid Foundation's contributions to humanity, President Reagan.
1994 : Honorary Member of the Russian Academy of Sciences
2013: World Hotel Association – Continental Diamond Award for Design
2013 : Gold Award for Design, Society of American Registered Architects (SARA), Project – Youth Olympic Games 2014
2013 : Bronze Award for Design, Society of American Registered Architects (SARA), Project: COFCO ECO Resort and Attraction, Agriculture Ecological Valley Development, Beijing
2014 : Design Award of Merit, Society of American Registered Architects (SARA), Project – Tianjin Stadium Redevelopment
2016: Selected to exhibit Cangma Mountain Development, 2015, at Time Space Existence – 2016, Palazzo Mora, Venice Biennale of Architecture.
Memberships & Affiliations
Urban Land Institute
American Farmland Trust
Association of Children's Museums
International Association of Amusement Parks and Attractions
International Association for China Planning.
References
Recent Publications
Contributing writer on sustainable issues for "Green" a publication of Domus Magazine (2012-2013)
Collection of Creative & Design magazine, Tianjin University Press, about MCM's Qinghe Snoopy Theme Park, February 2015
China Real Estate Business, Architecture Section, "Always With You," about MCM's Qinghe Snoopy Theme Park, July 5, 2015
Originated Magazine, November 11, 2016
China People's Daily, Beijing, "MCM Designing Beautiful Country," National Launch of Luneng's "Beautiful Countryside", January 19, 2017
External links
International Rural Development Center
1946 births
University of Portland alumni
Urban planning
Rural development
Living people
Activists from Portland, Oregon
People from Los Angeles | Michael C. Mitchell | Engineering | 2,571 |
58,742,475 | https://en.wikipedia.org/wiki/MAGESTIC | Multiplexed Accurate Genome Editing with Short, Trackable, Integrated Cellular barcodes (MAGESTIC) is a platform that builds on the CRISPR/Cas technique. It further improves CRISPR/Cas by making the gene-editing process more precise. It also increases cell survival during the editing process up to sevenfold.
This technology was invented at the Stanford Genome Technology Center in collaboration with the Joint Initiative for Metrology in Biology (JIMB) which is a coalition of Stanford University and the National Institute of Standards and Technology.
Overview
Gene editing is used for a variety of tasks, including modifying crops, modifying bacteria, and correcting disease-causing genetic mutations in patients. When only a single edited cell line is required, CRISPR/Cas combined with endogenous DNA repair is sufficient to obtain an edited cell line. However, when trying to introduce many edits in multiplex, a higher efficiency of homology-directed repair is required. The MAGESTIC technology has multiple components. One component, the LexA-Fkh1 fusion protein, is involved in the process of donor recruitment, which increases the efficiency of homology-directed repair. The second component is a library of CRISPR guide RNAs paired with donor DNA that encodes the specified edits to be integrated through homology-directed repair. These in turn are linked to DNA barcodes that allow specific variants to be tracked in pools, similar to how genome-wide CRISPR-Cas9 knockout screens work, but MAGESTIC is more versatile as it allows not only loss-of-function edits but also codon changes, single-nucleotide polymorphisms, indels, and other types of genetic changes to be introduced and tracked. By improving DNA repair efficiency, using array-synthesized guide–donor oligos for plasmid-based high-throughput editing, and integrating a genomic barcode to prevent plasmid barcode loss, MAGESTIC leads to more uniform pools with genome-integrated, stable, single-copy barcodes and enables robust phenotyping.
Donor Recruitment
Because editing multiple sites in pools can be impacted by a number of factors including ineffective CRISPR Guide RNA, DNA synthesis errors, competition with Non-homologous end joining and other challenges that occur when building multiplex libraries, MAGESTIC screens required improved DNA repair. This is where the donor recruitment aspect of MAGESTIC comes in. MAGESTIC achieves greater editing efficiency by localizing donor DNA to the site of DNA breaks introduced by a CRISPR cut.
The CRISPR machinery cuts at desired locations in the genome, and MAGESTIC then directs the donor DNA to the site of the cut so that cells introduce the designed edits at the DNA cut sites. This technology is called donor recruitment and relies on a fusion protein that contains one domain recruited to DNA breaks and another domain that binds to the donor DNA. This allows for the production of high-quality precision edit pools in yeast, where each cell contains a single edit and a DNA barcode. The donor recruitment aspect of the technology also holds the potential to improve editing efficiency in additional cell types, such as mammalian cells. This may one day prove beneficial to gene therapies or other therapeutic editing.
References
Gene therapy
Medical genetics
Gene delivery
Applied genetics
Biological engineering
Biotechnology
2018 in science | MAGESTIC | Chemistry,Engineering,Biology | 663 |
29,992,116 | https://en.wikipedia.org/wiki/Lead%20star | A lead star is a low-metallicity star with an overabundance of lead and bismuth as compared to other products of the S-process.
See also
Barium star
References
Nucleosynthesis
Star types
Lead
Bismuth | Lead star | Physics,Chemistry,Astronomy | 52 |
7,818,178 | https://en.wikipedia.org/wiki/Maximal%20pair | In computer science, a maximal pair within a string is a pair of matching substrings that are maximal, where "maximal" means that it is not possible to make a longer matching pair by extending the range of both substrings to the left or right.
Example
For example, in the 13-character string xabcyabcwabcy, the substrings at indices 2 to 4 and at indices 6 to 8 are a maximal pair: they contain identical characters (abc), they have different characters to the left (x at index 1 and y at index 5), and they have different characters to the right (y at index 5 and w at index 9). Similarly, the substrings at indices 6 to 8 and at indices 10 to 12 are a maximal pair.
However, the substrings at indices 2 to 4 and at indices 10 to 12 are not a maximal pair, as the character y (at indices 5 and 13) follows both substrings, and so they can be extended to the right to make a longer pair.
Formal definition
Formally, a maximal pair of substrings with starting positions p1 and p2 respectively, and both of length n, is specified by a triple (p1, p2, n), such that, given a string S of length N, S[p1..p1 + n − 1] = S[p2..p2 + n − 1] (meaning that the substrings have identical contents), but S[p1 − 1] ≠ S[p2 − 1] (they have different characters to their left) and S[p1 + n] ≠ S[p2 + n] (they also have different characters to their right; together, these two inequalities are the condition for being maximal, with a position outside the string treated as matching no character). Thus, in the example above, the maximal pairs are (2, 6, 3) and (6, 10, 3), and (2, 10, 3) is not a maximal pair.
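To make the definition concrete, the following is a minimal Python sketch (not from the article; the function name, the 1-based indexing convention, and the example string xabcyabcwabcy reconstructed from the indices above are illustrative assumptions) that tests whether a triple (p1, p2, n) satisfies the maximal-pair condition, treating positions outside the string as matching no character:

```python
def is_maximal_pair(s: str, p1: int, p2: int, n: int) -> bool:
    """Check whether (p1, p2, n) is a maximal pair in s, using 1-based positions."""
    a = s[p1 - 1:p1 - 1 + n]
    b = s[p2 - 1:p2 - 1 + n]
    if p1 == p2 or a != b:
        return False  # must be two distinct occurrences with identical contents
    # Characters immediately to the left/right of each occurrence (None = string boundary).
    left1 = s[p1 - 2] if p1 > 1 else None
    left2 = s[p2 - 2] if p2 > 1 else None
    right1 = s[p1 - 1 + n] if p1 - 1 + n < len(s) else None
    right2 = s[p2 - 1 + n] if p2 - 1 + n < len(s) else None
    # Maximal: the pair cannot be extended to the left or to the right.
    return left1 != left2 and right1 != right2

s = "xabcyabcwabcy"                   # the example string described above
print(is_maximal_pair(s, 2, 6, 3))    # True
print(is_maximal_pair(s, 6, 10, 3))   # True
print(is_maximal_pair(s, 2, 10, 3))   # False: 'y' follows both occurrences
```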
Related concepts and time complexity
A maximal repeat is the string represented by a maximal pair. A supermaximal repeat is a maximal repeat never occurring as a proper substring of another maximal repeat. In the above example, abc and abcy are both maximal repeats, but only abcy is a supermaximal repeat.
Maximal pairs, maximal repeats and supermaximal repeats can each be found in Θ(n + z) time using a suffix tree, if there are z such structures, where n is the length of the string.
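For illustration of the definitions only (a brute-force sketch, not the Θ(n + z) suffix-tree algorithm mentioned above; the function names and the example string are assumptions), the maximal repeats of a short string can be enumerated directly from the maximal-pair condition, and the supermaximal ones obtained by discarding any repeat that occurs inside a longer maximal repeat:

```python
def maximal_repeats(s: str) -> set[str]:
    """Brute-force collection of every substring that occurs in some maximal pair."""
    n = len(s)
    repeats = set()
    for length in range(1, n):
        for p1 in range(n - length + 1):
            for p2 in range(p1 + 1, n - length + 1):
                if s[p1:p1 + length] != s[p2:p2 + length]:
                    continue
                left1 = s[p1 - 1] if p1 > 0 else None
                left2 = s[p2 - 1] if p2 > 0 else None
                right1 = s[p1 + length] if p1 + length < n else None
                right2 = s[p2 + length] if p2 + length < n else None
                if left1 != left2 and right1 != right2:
                    repeats.add(s[p1:p1 + length])
    return repeats

def supermaximal_repeats(repeats: set[str]) -> set[str]:
    """Keep only the repeats that never occur inside another maximal repeat."""
    return {r for r in repeats
            if not any(r != other and r in other for other in repeats)}

rep = maximal_repeats("xabcyabcwabcy")
print(sorted(rep))                        # ['abc', 'abcy']
print(sorted(supermaximal_repeats(rep)))  # ['abcy']
```

Practical implementations instead use suffix trees or suffix arrays, since this brute-force version is roughly cubic in the string length.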
References
External links
Project for the computation of all maximal repeats in one or more strings in Python, using a suffix array.
String (computer science)
Formal languages | Maximal pair | Mathematics,Technology | 478 |
18,617,142 | https://en.wikipedia.org/wiki/Mercury%20%28element%29 | Mercury is a chemical element; it has symbol Hg and atomic number 80. It is also known as quicksilver and was formerly named hydrargyrum ( ) from the Greek words and , from which its chemical symbol is derived. A heavy, silvery d-block element, mercury is the only metallic element that is known to be liquid at standard temperature and pressure; the only other element that is liquid under these conditions is the halogen bromine, though metals such as caesium, gallium, and rubidium melt just above room temperature.
Mercury occurs in deposits throughout the world mostly as cinnabar (mercuric sulfide). The red pigment vermilion is obtained by grinding natural cinnabar or synthetic mercuric sulfide. Exposure to mercury and mercury-containing organic compounds is toxic to the nervous system, immune system and kidneys of humans and other animals; mercury poisoning can result from exposure to water-soluble forms of mercury (such as mercuric chloride or methylmercury) either directly or through mechanisms of biomagnification.
Mercury is used in thermometers, barometers, manometers, sphygmomanometers, float valves, mercury switches, mercury relays, fluorescent lamps and other devices, although concerns about the element's toxicity have led to the phasing out of such mercury-containing instruments. It remains in use in scientific research applications and in amalgam for dental restoration in some locales. It is also used in fluorescent lighting. Electricity passed through mercury vapor in a fluorescent lamp produces short-wave ultraviolet light, which then causes the phosphor in the tube to fluoresce, making visible light.
Properties
Physical properties
Mercury is a heavy, silvery-white metal that is liquid at room temperature. Compared to other metals, it is a poor conductor of heat, but a fair conductor of electricity.
It has a melting point of −38.83 °C and a boiling point of 356.73 °C, both the lowest of any stable metal, although preliminary experiments on copernicium and flerovium have indicated that they have even lower boiling points. This effect is due to lanthanide contraction and relativistic contraction reducing the orbit radius of the outermost electrons, and thus weakening the metallic bonding in mercury. Upon freezing, the volume of mercury decreases by 3.59% and its density changes from 13.69 g/cm3 when liquid to 14.184 g/cm3 when solid. The coefficient of volume expansion is 181.59 × 10−6 at 0 °C, 181.71 × 10−6 at 20 °C and 182.50 × 10−6 at 100 °C (per °C). Solid mercury is malleable and ductile, and can be cut with a knife.
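As a rough worked example of what the expansion coefficients above imply (treating the coefficient as constant over the interval; the figures are rounded and purely illustrative), the fractional volume change of liquid mercury heated from 0 °C to 100 °C is approximately:

```latex
\[
  \frac{\Delta V}{V_0} \approx \beta \,\Delta T
  \approx 182 \times 10^{-6}\ {}^{\circ}\mathrm{C}^{-1} \times 100\ {}^{\circ}\mathrm{C}
  \approx 1.8\%
\]
```

Because the coefficient varies only slightly between 0 °C and 100 °C, the expansion, and hence the reading of a mercury-in-glass thermometer, is nearly linear in temperature.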
Table of thermal and physical properties of liquid mercury:
Chemical properties
Mercury does not react with most acids, such as dilute sulfuric acid, although oxidizing acids such as concentrated sulfuric acid and nitric acid or aqua regia dissolve it to give sulfate, nitrate, and chloride. Like silver, mercury reacts with atmospheric hydrogen sulfide. Mercury reacts with solid sulfur flakes, which are used in mercury spill kits to absorb mercury (spill kits also use activated carbon and powdered zinc).
Amalgams
Mercury dissolves many metals such as gold and silver to form amalgams. Iron is an exception, and iron flasks have traditionally been used to transport the material. Several other first row transition metals with the exception of manganese, copper and zinc are also resistant in forming amalgams. Other elements that do not readily form amalgams with mercury include platinum. Sodium amalgam is a common reducing agent in organic synthesis, and is also used in high-pressure sodium lamps.
Mercury readily combines with aluminium to form a mercury-aluminium amalgam when the two pure metals come into contact. Since the amalgam destroys the aluminium oxide layer which protects metallic aluminium from oxidizing in-depth (as in iron rusting), even small amounts of mercury can seriously corrode aluminium. For this reason, mercury is not allowed aboard an aircraft under most circumstances because of the risk of it forming an amalgam with exposed aluminium parts in the aircraft.
Mercury embrittlement is the most common type of liquid metal embrittlement, as mercury is a natural component of some hydrocarbon reservoirs and will come into contact with petroleum processing equipment under normal conditions.
Isotopes
There are seven stable isotopes of mercury, with ²⁰²Hg being the most abundant (29.86%). The longest-lived radioisotopes are ¹⁹⁴Hg with a half-life of 444 years, and ²⁰³Hg with a half-life of 46.612 days. Most of the remaining radioisotopes have half-lives that are less than a day. ²⁰⁶Hg occurs naturally in tiny traces as an intermediate decay product of ²¹⁰Pb. ¹⁹⁹Hg and ²⁰¹Hg are the most often studied NMR-active nuclei, having spins of 1/2 and 3/2 respectively.
Etymology
Hg is the modern chemical symbol for mercury. It is an abbreviation of hydrargyrum, a romanized form of the ancient Greek name for mercury, ὑδράργυρος (hydrargyros). It is a Greek compound word meaning "water-silver", from ὑδρ- (hydr-), the root of ὕδωρ (hydōr) "water", and ἄργυρος (argyros) "silver". Like the English name quicksilver, this name was due to mercury's liquid and shiny properties.
The modern English name mercury comes from the planet Mercury. In medieval alchemy, the seven known metals—quicksilver, gold, silver, copper, iron, lead, and tin—were associated with the seven planets. Quicksilver was associated with the fastest planet, which had been named after the Roman god Mercury, who was associated with speed and mobility. The astrological symbol for the planet became one of the alchemical symbols for the metal, and Mercury became an alternative name for the metal. Mercury is the only metal for which the alchemical planetary name survives, as it was decided it was preferable to quicksilver as a chemical name.
History
Mercury was found in Egyptian tombs that date from 1500 BC; cinnabar, the most common natural source of mercury, has been in use since the Neolithic Age.
In China and Tibet, mercury use was thought to prolong life, heal fractures, and maintain generally good health, although it is now known that exposure to mercury vapor leads to serious adverse health effects. The first emperor of a unified China, Qín Shǐ Huáng Dì—allegedly buried in a tomb that contained rivers of flowing mercury on a model of the land he ruled, representative of the rivers of China—was reportedly killed by drinking a mercury and powdered jade mixture formulated by Qin alchemists intended as an elixir of immortality. Khumarawayh ibn Ahmad ibn Tulun, the second Tulunid ruler of Egypt (r. 884–896), known for his extravagance and profligacy, reportedly built a basin filled with mercury, on which he would lie on top of air-filled cushions and be rocked to sleep.
In November 2014 "large quantities" of mercury were discovered in a chamber 60 feet below the 1800-year-old pyramid known as the Temple of the Feathered Serpent, the third-largest pyramid of Teotihuacan, Mexico, along with "jade statues, jaguar remains, a box filled with carved shells and rubber balls". In Lamanai, once a major city of the Maya civilization, a pool of mercury was found under a marker in a Mesoamerican ballcourt.
Aristotle recounts that Daedalus made a wooden statue of Aphrodite move by pouring quicksilver in its interior. In Greek mythology Daedalus gave the appearance of voice in his statues using quicksilver. The ancient Greeks used cinnabar (mercury sulfide) in ointments; the ancient Egyptians and the Romans used it in cosmetics. By 500 BC mercury was used to make amalgams (Medieval Latin amalgama, "alloy of mercury") with other metals.
Alchemists thought of mercury as the First Matter from which all metals were formed. They believed that different metals could be produced by varying the quality and quantity of sulfur contained within the mercury. The purest of these was gold, and mercury was called for in attempts at the transmutation of base (or impure) metals into gold, which was the goal of many alchemists.
The mines in Almadén (Spain), Monte Amiata (Italy), and Idrija (now Slovenia) dominated mercury production from the opening of the mine in Almadén 2500 years ago, until new deposits were found at the end of the 19th century.
Occurrence
Mercury is an extremely rare element in Earth's crust; it has an average crustal abundance by mass of only 0.08 parts per million (ppm) and is the 66th most abundant element in the Earth's crust. Because it does not blend geochemically with those elements that constitute the majority of the crustal mass, mercury ores can be extraordinarily concentrated considering the element's abundance in ordinary rock. The richest mercury ores contain up to 2.5% mercury by mass, and even the leanest concentrated deposits are at least 0.1% mercury (12,000 times average crustal abundance). It is found either as a native metal (rare) or in cinnabar, metacinnabar, sphalerite, corderoite, livingstonite and other minerals, with cinnabar (HgS) being the most common ore. Mercury ores often occur in hot springs or other volcanic regions.
Beginning in 1558, with the invention of the patio process to extract silver from ore using mercury, mercury became an essential resource in the economy of Spain and its American colonies. Mercury was used to extract silver from the lucrative mines in New Spain and Peru. Initially, the Spanish Crown's mines in Almadén in Southern Spain supplied all the mercury for the colonies. Mercury deposits were discovered in the New World, and more than 100,000 tons of mercury were mined from the region of Huancavelica, Peru, over the course of three centuries following the discovery of deposits there in 1563. The patio process and later pan amalgamation process continued to create great demand for mercury to treat silver ores until the late 19th century.
Former mines in Italy, the United States and Mexico, which once produced a large proportion of the world supply, have now been completely mined out or, in the case of Slovenia (Idrija) and Spain (Almadén), shut down due to the fall of the price of mercury. Nevada's McDermitt Mine, the last mercury mine in the United States, closed in 1992. The price of mercury has been highly volatile over the years and in 2006 was $650 per 76-pound (34.46 kg) flask.
Mercury is extracted by heating cinnabar in a current of air and condensing the vapor. The equation for this extraction is:
HgS + O2 → Hg + SO2
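As a rough worked example based on this equation (using approximate molar masses of about 200.6 g/mol for Hg and 232.7 g/mol for HgS; rounded figures for illustration only), the maximum mercury yield from pure cinnabar follows from the mass fraction of mercury in HgS:

```latex
\[
  w_{\mathrm{Hg}} = \frac{M_{\mathrm{Hg}}}{M_{\mathrm{HgS}}}
                  = \frac{200.6}{232.7} \approx 0.86
\]
```

Roasting one tonne of pure HgS would therefore yield at most roughly 860 kg of mercury, the sulfur leaving as SO2; real ores are far leaner, containing at most a few percent mercury as noted above.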
In 2020, China was the top producer of mercury, providing 88% of the world output (2200 out of 2500 tonnes), followed by Tajikistan (178 t), Russia (50 t) and Mexico (32 t).
Because of the high toxicity of mercury, both the mining of cinnabar and refining for mercury are hazardous and historic causes of mercury poisoning. In China, prison labor was used by a private mining company as recently as the 1950s to develop new cinnabar mines. Thousands of prisoners were used by the Luo Xi mining company to establish new tunnels. Worker health in functioning mines is at high risk.
A newspaper claimed that an unidentified European Union directive calling for energy-efficient lightbulbs to be made mandatory by 2012 encouraged China to re-open cinnabar mines to obtain the mercury required for CFL bulb manufacture. Environmental dangers have been a concern, particularly in the southern cities of Foshan and Guangzhou, and in Guizhou province in the southwest.
Abandoned mercury mine processing sites often contain very hazardous waste piles of roasted cinnabar calcines. Water run-off from such sites is a recognized source of ecological damage. Former mercury mines may be suited for constructive re-use; for example, in 1976 Santa Clara County, California purchased the historic Almaden Quicksilver Mine and created a county park on the site, after conducting extensive safety and environmental analysis of the property.
Chemistry
All known mercury compounds exhibit one of two positive oxidation states: I and II. Experiments have failed to unequivocally demonstrate any higher oxidation states: both the claimed 1976 electrosynthesis of an unstable Hg(III) species and 2007 cryogenic isolation of HgF4 have disputed interpretations and remain difficult (if not impossible) to reproduce.
Compounds of mercury(I)
Unlike its lighter neighbors, cadmium and zinc, mercury usually forms simple stable compounds with metal-metal bonds. Most mercury(I) compounds are diamagnetic and feature the dimeric cation, Hg₂²⁺. Stable derivatives include the chloride and nitrate. In aqueous solution of a mercury(I) salt, slight disproportionation of Hg₂²⁺ into Hg⁰ and Hg²⁺ results in >0.5% of dissolved mercury existing as Hg²⁺. In these solutions, complexation of the Hg²⁺ with the addition of ligands such as cyanide causes disproportionation to go to completion, with the mercury(I) converting entirely to elemental mercury, which precipitates, and mercury(II) compounds (e.g. mercury(II) cyanide if cyanide is used as the ligand). Mercury(I) chloride, a colorless solid also known as calomel, is really the compound with the formula Hg2Cl2, with the connectivity Cl-Hg-Hg-Cl. It reacts with chlorine to give mercury(II) chloride, which resists further oxidation. Mercury(I) hydride, a colorless gas, has the formula HgH, containing no Hg-Hg bond; however, the gas has only ever been observed as isolated molecules.
Indicative of its tendency to bond to itself, mercury forms mercury polycations, which consist of linear chains of mercury centers, capped with a positive charge. One example is containing the cation.
Compounds of mercury(II)
Mercury(II) is the most common oxidation state and is the main one in nature as well. All four mercuric halides are known and have been demonstrated to form linear coordination geometry, despite mercury's tendency to form tetrahedral molecular geometry with other ligands. This behavior is similar to the Ag+ ion. The best known mercury halide is mercury(II) chloride, an easily sublimating white solid.
Mercury(II) oxide, the main oxide of mercury, arises when the metal is exposed to air for long periods at elevated temperatures. It reverts to the elements upon heating near 400 °C, as was demonstrated by Joseph Priestley in an early synthesis of pure oxygen. Hydroxides of mercury are poorly characterized, as attempted isolation studies of mercury(II) hydroxide have yielded mercury oxide instead.
Being a soft metal, mercury forms very stable derivatives with the heavier chalcogens. Preeminent is mercury(II) sulfide, HgS, which occurs in nature as the ore cinnabar and is the brilliant pigment vermilion. Like ZnS, HgS crystallizes in two forms, the reddish cubic form and the black zinc blende form. The latter sometimes occurs naturally as metacinnabar. Mercury(II) selenide (HgSe) and mercury(II) telluride (HgTe) are known, these as well as various derivatives, e.g. mercury cadmium telluride and mercury zinc telluride being semiconductors useful as infrared detector materials.
Mercury(II) salts form a variety of complex derivatives with ammonia. These include Millon's base (Hg2N+), related one-dimensional polymeric salts, and "fusible white precipitate" or [Hg(NH3)2]Cl2. Known as Nessler's reagent, potassium tetraiodomercurate(II) (K2[HgI4]) is still occasionally used to test for ammonia owing to its tendency to form the deeply colored iodide salt of Millon's base.
Mercury fulminate is a primary explosive widely used in detonators.
Organomercury compounds
Organic mercury compounds are historically important but are of little industrial value in the western world. Mercury(II) salts are a rare example of simple metal complexes that react directly with aromatic rings. Organomercury compounds are always divalent and usually two-coordinate with linear geometry. Unlike organocadmium and organozinc compounds, organomercury compounds do not react with water. They usually have the formula HgR2, which are often volatile, or HgRX, which are often solids, where R is aryl or alkyl and X is usually halide or acetate. Methylmercury, a generic term for compounds with the formula CH3HgX, is a dangerous family of compounds that are often found in polluted water. They arise by a process known as biomethylation.
Applications
Mercury is used primarily for the manufacture of industrial chemicals or for electrical and electronic applications. It is used in some liquid-in-glass thermometers, especially those used to measure high temperatures. A still increasing amount is used as gaseous mercury in fluorescent lamps, while most of the other applications are slowly being phased out due to health and safety regulations. In some applications, mercury is replaced with less toxic but considerably more expensive Galinstan alloy.
Medicine
Historical and folk
Mercury and its compounds have been used in medicine, although they are much less common today than they once were, now that the toxic effects of mercury and its compounds are more widely understood. An example of the early therapeutic application of mercury was published in 1787 by James Lind.
The first edition of The Merck Manuals (1899) featured many then-medically relevant mercuric compounds, such as mercury-ammonium chloride, yellow mercury proto-iodide, calomel, and mercuric chloride, among others.
Mercury in the form of one of its common ores, cinnabar, is used in various traditional medicines, especially in traditional Chinese medicine. Review of its safety has found that cinnabar can lead to significant mercury intoxication when heated, consumed in overdose, or taken long term, and can have adverse effects at therapeutic doses, though effects from therapeutic doses are typically reversible. Although this form of mercury appears to be less toxic than other forms, its use in traditional Chinese medicine has not yet been justified, as the therapeutic basis for the use of cinnabar is not clear.
Mercury(I) chloride (also known as calomel or mercurous chloride) has been used in traditional medicine as a diuretic, topical disinfectant, and laxative. Mercury(II) chloride (also known as mercuric chloride or corrosive sublimate) was once used to treat syphilis (along with other mercury compounds), although it is so toxic that sometimes the symptoms of its toxicity were confused with those of the syphilis it was believed to treat. It is also used as a disinfectant. Blue mass, a pill or syrup in which mercury is the main ingredient, was prescribed throughout the 19th century for numerous conditions including constipation, depression, child-bearing and toothaches. In the early 20th century, mercury was administered to children yearly as a laxative and dewormer, and it was used in teething powders for infants. The mercury-containing organohalide merbromin (sometimes sold as Mercurochrome) is still widely used but has been banned in some countries, such as the U.S.
Contemporary
Mercury is an ingredient in dental amalgams.
Thiomersal (called Thimerosal in the United States) is an organic compound used as a preservative in vaccines, although this use is in decline. Although it was widely speculated that this mercury-based preservative could cause or trigger autism in children, no evidence supports any such link. Nevertheless, thiomersal has been removed from, or reduced to trace amounts in, all U.S. vaccines recommended for children 6 years of age and under, with the exception of the inactivated influenza vaccine. Merbromin (Mercurochrome), another mercury compound, is a topical antiseptic used for minor cuts and scrapes in some countries. Today, the use of mercury in medicine has greatly declined in all respects, especially in developed countries.
Mercury is still used in some diuretics, although substitutes such as thiazides now exist for most therapeutic uses. In 2003, mercury compounds were found in some over-the-counter drugs, including topical antiseptics, stimulant laxatives, diaper-rash ointment, eye drops, and nasal sprays. The FDA has "inadequate data to establish general recognition of the safety and effectiveness" of the mercury ingredients in these products.
Production of chlorine and caustic soda
Chlorine is produced from sodium chloride (common salt, NaCl) using electrolysis to separate metallic sodium from chlorine gas. Usually salt is dissolved in water to produce a brine. By-products of any such chloralkali process are hydrogen (H2) and sodium hydroxide (NaOH), which is commonly called caustic soda or lye. By far the largest use of mercury in the late 20th century was in the mercury cell process (also called the Castner-Kellner process) where metallic sodium is formed as an amalgam at a cathode made from mercury; this sodium is then reacted with water to produce sodium hydroxide. Many of the industrial mercury releases of the 20th century came from this process, although modern plants claim to be safe in this regard. From the 1960s onward, the majority of industrial plants moved away from mercury cell processes towards diaphragm cell technologies to produce chlorine, though 11% of the chlorine made in the United States was still produced with the mercury cell method as of 2005.
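A sketch of the cell chemistry described above, written as the standard textbook reactions for a mercury-cathode chloralkali cell (generic equations, not taken from this article):

```latex
% Anode: chloride oxidized to chlorine gas
\[ 2\,\mathrm{Cl^-} \longrightarrow \mathrm{Cl_2} + 2\,e^- \]
% Mercury cathode: sodium reduced into the flowing mercury as an amalgam
\[ \mathrm{Na^+} + e^- \longrightarrow \mathrm{Na(Hg)} \]
% Decomposer: the amalgam reacts with water, regenerating mercury and giving caustic soda
\[ 2\,\mathrm{Na(Hg)} + 2\,\mathrm{H_2O} \longrightarrow 2\,\mathrm{NaOH} + \mathrm{H_2} + 2\,\mathrm{Hg} \]
```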
Laboratory uses
Thermometers
Thermometers containing mercury were invented in the early 18th century by Daniel Gabriel Fahrenheit, though earlier attempts at making temperature-measuring instruments filled with quicksilver had been described in the 1650s. Fahrenheit's mercury thermometer was based on an earlier design that used alcohol rather than mercury; the mercury thermometer was significantly more accurate than those using alcohol. From the early 21st century onwards, the use of mercury thermometers has been declining, and mercury-containing instruments have been banned in many jurisdictions following the 1998 Protocol on Heavy Metals. Modern alternatives to mercury thermometers include resistance thermometers, thermocouples, and thermistor sensors that output to a digital display.
Mirrors
Some transit telescopes use a basin of mercury to form a flat and absolutely horizontal mirror, useful in determining an absolute vertical or perpendicular reference. Concave horizontal parabolic mirrors may be formed by rotating liquid mercury on a disk, the parabolic form of the liquid thus formed reflecting and focusing incident light. Such liquid-mirror telescopes are cheaper than conventional large mirror telescopes by up to a factor of 100, but the mirror cannot be tilted and always points straight up.
Electrochemistry
Liquid mercury is part of a popular secondary reference electrode (called the calomel electrode) in electrochemistry as an alternative to the standard hydrogen electrode. The calomel electrode is used to work out the electrode potential of half cells. The triple point of mercury, −38.8344 °C, is a fixed point used as a temperature standard for the International Temperature Scale (ITS-90).
Polarography and crystallography
In polarography, both the dropping mercury electrode and the hanging mercury drop electrode use elemental mercury. This use allows a new uncontaminated electrode to be available for each measurement or each new experiment.
Mercury-containing compounds are also of use in the field of structural biology. Mercuric compounds such as mercury(II) chloride or potassium tetraiodomercurate(II) can be added to protein crystals in an effort to create heavy atom derivatives that can be used to solve the phase problem in X-ray crystallography via isomorphous replacement or anomalous scattering methods.
Niche uses
Gaseous mercury is used in mercury-vapor lamps and some "neon sign" type advertising signs and fluorescent lamps. Those low-pressure lamps emit very spectrally narrow lines, which are traditionally used in optical spectroscopy for calibration of spectral position. Commercial calibration lamps are sold for this purpose; reflecting a fluorescent ceiling light into a spectrometer is a common calibration practice. Gaseous mercury is also found in some electron tubes, including ignitrons, thyratrons, and mercury arc rectifiers. It is also used in specialist medical care lamps for skin tanning and disinfection. Gaseous mercury is added to cold cathode argon-filled lamps to increase the ionization and electrical conductivity. An argon-filled lamp without mercury will have dull spots and will fail to light correctly. Lighting containing mercury can be bombarded/oven pumped only once. When added to neon filled tubes, inconsistent red and blue spots are produced in the light emissions until the initial burning-in process is completed; eventually it will light a consistent dull off-blue color.
The Deep Space Atomic Clock (DSAC) under development by the Jet Propulsion Laboratory utilises mercury in a linear ion-trap-based clock. The novel use of mercury permits the creation of compact atomic clocks with low energy requirements ideal for space probes and Mars missions.
Skin whitening
Mercury is effective as an active ingredient in skin whitening compounds used to depigment skin. The Minamata Convention on Mercury limits the concentration of mercury in such whiteners to 1 part per million. However, as of 2022, many commercially sold whitener products continue to exceed that limit, and are considered toxic.
Firearms
Mercury(II) fulminate is a primary explosive, which has mainly been used as a primer of a cartridge in firearms throughout the 19th and 20th centuries.
Mining
Mercury is used in illegal gold mining to help separate gold particles from a mixture of sand or gravel and water. Small gold particles may form mercury-gold amalgam and therefore increase the gold recovery rates. The use of mercury causes a severe pollution problem in places such as Ghana.
Historic uses
Many historic applications made use of the peculiar physical properties of mercury, especially as a dense liquid and a liquid metal:
Quantities of liquid mercury have been recovered from elite Maya tombs (100–700 AD) or ritual caches at six sites. This mercury may have been used in bowls as mirrors for divinatory purposes. Five of these sites date to the Classic Period of Maya civilization (c. 250–900), but one example predated this.
In Islamic Spain, it was used for filling decorative pools. Later, the American artist Alexander Calder built a mercury fountain for the Spanish Pavilion at the 1937 World Exhibition in Paris. The fountain is now on display at the Fundació Joan Miró in Barcelona.
The Fresnel lenses of old lighthouses used to float and rotate in a bath of mercury which acted like a bearing.
Mercury sphygmomanometers, barometers, diffusion pumps, coulometers, and many other laboratory instruments took advantage of mercury's properties as a very dense, opaque liquid with a nearly linear thermal expansion.
As an electrically conductive liquid, it was used in mercury switches (including home mercury light switches installed prior to 1970), tilt switches used in old fire detectors and in some home thermostats.
Owing to its acoustic properties, mercury was used as the propagation medium in delay-line memory devices used in early digital computers of the mid-20th century, such as the SEAC computer.
In 1911, Heike Kamerlingh Onnes discovered superconductivity through the cooling of mercury below 4 kelvin shortly after the discovery and production of liquid helium. Its superconductive properties were later determined to be unusual compared to other later-discovered superconductors, such as the more popular niobium alloys.
Experimental mercury vapor turbines were installed to increase the efficiency of fossil-fuel electrical power plants. The South Meadow power plant in Hartford, CT employed mercury as its working fluid, in a binary configuration with a secondary water circuit, for a number of years starting in the late 1920s in a drive to improve plant efficiency. Several other plants were built, including the Schiller Station in Portsmouth, NH, which went online in 1950. The idea did not catch on industry-wide due to the weight and toxicity of mercury, as well as the advent of supercritical steam plants in later years.
Similarly, liquid mercury was used as a coolant for some nuclear reactors; however, sodium is proposed for reactors cooled with liquid metal, because the high density of mercury requires much more energy to circulate as coolant.
Mercury was a propellant for early ion engines in electric space propulsion systems. Advantages were mercury's high molecular weight, low ionization energy, low dual-ionization energy, high liquid density and liquid storability at room temperature. Disadvantages were concerns regarding environmental impact associated with ground testing and concerns about eventual cooling and condensation of some of the propellant on the spacecraft in long-duration operations. The first spaceflight to use electric propulsion was a mercury-fueled ion thruster developed at NASA Glenn Research Center and flown on the Space Electric Rocket Test "SERT-1" spacecraft launched by NASA at its Wallops Flight Facility in 1964. The SERT-1 flight was followed up by the SERT-2 flight in 1970. Mercury and caesium were preferred propellants for ion engines until Hughes Research Laboratory performed studies finding xenon gas to be a suitable replacement. Xenon is now the preferred propellant for ion engines, as it has a high molecular weight, little or no reactivity due to its noble gas nature, and high liquid density under mild cryogenic storage.
Other applications made use of the chemical properties of mercury:
The mercury battery is a non-rechargeable electrochemical battery, a primary cell, that was common in the middle of the 20th century. It was used in a wide variety of applications and was available in various sizes, particularly button sizes. Its constant voltage output and long shelf life gave it a niche use for camera light meters and hearing aids. The mercury cell was effectively banned in most countries in the 1990s due to concerns about the mercury contaminating landfills.
Mercury was used for preserving wood, developing daguerreotypes, silvering mirrors, anti-fouling paints, herbicides, interior latex paint, handheld maze games, cleaning, and road leveling devices in cars. Mercury compounds have been used in antiseptics, laxatives, antidepressants, and in antisyphilitics. Mercury has been replaced with safer compounds in most, if not all, of these applications.
It was allegedly used by allied spies to sabotage Luftwaffe planes: a mercury paste was applied to bare aluminium, causing the metal to rapidly corrode; this would cause structural failures.
Mercury was once used as a gun barrel bore cleaner.
From the mid-18th to the mid-19th centuries, a process called "carroting" was used in the making of felt hats. Animal skins were rinsed in an orange solution (the term "carroting" arose from this color) of the mercury compound mercuric nitrate, Hg(NO3)2. This process separated the fur from the pelt and matted it together. This solution and the vapors it produced were highly toxic. The United States Public Health Service banned the use of mercury in the felt industry in December 1941. The psychological symptoms associated with mercury poisoning inspired the phrase "mad as a hatter". Lewis Carroll's "Mad Hatter" in his book Alice's Adventures in Wonderland was a play on words based on the older phrase, but the character himself does not exhibit symptoms of mercury poisoning.
Historically, mercury was used extensively in hydraulic gold mining (see the Mining section above). Large-scale use of mercury stopped in the 1960s. However, mercury is still used in small-scale, often clandestine, gold prospecting. It is estimated that 45,000 metric tons of mercury used in California for placer mining have not been recovered. Mercury was also used in silver mining to extract the metal from ore through the patio process.
Toxicity and safety
Due to its physical properties and relative chemical inertness, liquid mercury is absorbed very poorly through intact skin and the gastrointestinal tract. Mercury vapor is the primary hazard of elemental mercury. As a result, containers of mercury are securely sealed to avoid spills and evaporation. Heating of mercury, or of compounds of mercury that may decompose when heated, should be carried out with adequate ventilation in order to minimize exposure to mercury vapor. The most toxic forms of mercury are its organic compounds, such as dimethylmercury and methylmercury. Mercury can cause both chronic and acute poisoning.
Releases in the environment
Preindustrial deposition rates of mercury from the atmosphere may be about 4 ng per 1 L of ice deposited. Volcanic eruptions and related natural sources are responsible for approximately half of atmospheric mercury emissions.
Atmospheric mercury contamination in outdoor urban air at the start of the 21st century was measured at 0.01–0.02 μg/m3. A 2001 study measured mercury levels in 12 indoor sites chosen to represent a cross-section of building types, locations and ages in the New York area. This study found mercury concentrations significantly elevated over outdoor concentrations, at a range of 0.0065–0.523 μg/m3. The average was 0.069 μg/m3.
Half of mercury emissions are attributed to mankind. The sources can be divided into the following estimated percentages:
65% from stationary combustion, of which coal-fired power plants are the largest aggregate source (40% of U.S. mercury emissions in 1999). This includes power plants fueled with gas where the mercury has not been removed. Emissions from coal combustion are between one and two orders of magnitude higher than emissions from oil combustion, depending on the country.
11% from gold production. The three largest point sources for mercury emissions in the U.S. are the three largest gold mines. Hydrogeochemical release of mercury from gold-mine tailings has been accounted as a significant source of atmospheric mercury in eastern Canada.
6.8% from non-ferrous metal production, typically smelters.
6.4% from cement production.
3.0% from waste disposal, including municipal and hazardous waste, crematoria, and sewage sludge incineration.
3.0% from caustic soda production.
1.4% from pig iron and steel production.
1.1% from mercury production, mainly for batteries.
2.0% from other sources.
The above percentages are estimates of the global human-caused mercury emissions in 2000, excluding biomass burning, an important source in some regions.
A serious industrial disaster was the dumping of waste mercury compounds into Minamata Bay, Japan, between 1932 and 1968. It is estimated that over 3,000 people suffered various deformities, severe mercury poisoning symptoms or death from what became known as Minamata disease.
China is estimated to produce 50% of mercury emissions, most of which result from production of vinyl chloride.
Mercury also enters into the environment through the improper disposal of mercury-containing products. Due to health concerns, toxics use reduction efforts are cutting back or eliminating mercury in such products. For example, the amount of mercury sold in thermostats in the United States decreased from 14.5 tons in 2004 to 3.9 tons in 2007.
The tobacco plant readily absorbs and accumulates heavy metals such as mercury from the surrounding soil into its leaves. These are subsequently inhaled during tobacco smoking. While mercury is a constituent of tobacco smoke, studies have largely failed to discover a significant correlation between smoking and mercury uptake by humans compared to sources such as occupational exposure, fish consumption, and amalgam tooth fillings.
A less well-known source of mercury is the burning of joss paper, which is a common tradition practiced in Asia, including China, Vietnam, Hong Kong, Thailand, Taiwan and Malaysia.
Spill cleanup
Mercury spills pose an immediate threat to people handling the material, in addition to being an environmental hazard if the material is not contained properly. This is of particular concern for visible mercury, or mercury in liquid state, as its unusual appearance and behavior for a metal makes it an attractive nuisance to the uninformed. Procedures have been developed to contain mercury spills, as well as recommendations on appropriate responses based on the conditions of a spill. Tracking liquid mercury away from the site of a spill is a major concern in liquid mercury spills; regulations emphasize containment of the visible mercury as the first course of action, followed by monitoring of mercury vapors and vapor cleanup. Several products are sold as mercury spill adsorbents, ranging from metal salts to polymers and zeolites.
Sediment contamination
Sediments within large urban-industrial estuaries act as an important sink for point source and diffuse mercury pollution within catchments. A 2015 study of foreshore sediments from the Thames estuary measured total mercury at 0.01 to 12.07 mg/kg with mean of 2.10 mg/kg and median of 0.85 mg/kg (n = 351). The highest mercury concentrations were shown to occur in and around the city of London in association with fine grain muds and high total organic carbon content. The strong affinity of mercury for carbon rich sediments has also been observed in salt marsh sediments of the River Mersey, with a mean concentration of 2 mg/kg, up to 5 mg/kg. These concentrations are far higher than those in the salt marsh river creek sediments of New Jersey and mangroves of Southern China, which exhibit low mercury concentrations of about 0.2 mg/kg.
Occupational exposure
Due to the health effects of mercury exposure, industrial and commercial uses are regulated in many countries. The World Health Organization, OSHA, and NIOSH all treat mercury as an occupational hazard; both OSHA and NIOSH, among other regulatory agencies, have established specific occupational exposure limits on the element and its derivative compounds in liquid and vapor form. Environmental releases and disposal of mercury are regulated in the U.S. primarily by the United States Environmental Protection Agency.
Fish
Fish and shellfish have a natural tendency to concentrate mercury in their bodies, often in the form of methylmercury, a highly toxic organic compound of mercury. Species of fish that are high on the food chain, such as shark, swordfish, king mackerel, bluefin tuna, albacore tuna, and tilefish, contain higher concentrations of mercury than others. Because mercury and methylmercury are fat soluble, they primarily accumulate in the viscera, although they are also found throughout the muscle tissue. Mercury presence in fish muscles can be studied using non-lethal muscle biopsies. Mercury present in prey fish accumulates in the predator that consumes them. Since fish are less efficient at depurating than accumulating methylmercury, methylmercury concentrations in the fish tissue increase over time. Thus species that are high on the food chain amass body burdens of mercury that can be ten times higher than the species they consume. This process is called biomagnification. Mercury poisoning occurred in this way in Minamata, Japan; the resulting condition is now known as Minamata disease.
Cosmetics
Some facial creams contain dangerous levels of mercury. Most contain comparatively non-toxic inorganic mercury, but products containing highly toxic organic mercury have been encountered. New York City residents have been found to be exposed to significant levels of inorganic mercury compounds through the use of skin care products.
Effects and symptoms of mercury poisoning
Toxic effects include damage to the brain, kidneys and lungs. Mercury poisoning can result in several diseases, including acrodynia (pink disease), Hunter-Russell syndrome, and Minamata disease. Symptoms typically include sensory impairment (vision, hearing, speech), disturbed sensation and a lack of coordination. The type and degree of symptoms exhibited depend upon the individual toxin, the dose, and the method and duration of exposure. Case–control studies have shown effects such as tremors, impaired cognitive skills, and sleep disturbance in workers with chronic exposure to mercury vapor even at low concentrations in the range 0.7–42 μg/m3.
A study has shown that acute exposure (4–8 hours) to calculated elemental mercury levels of 1.1 to 44 mg/m3 resulted in chest pain, dyspnea, cough, hemoptysis, impairment of pulmonary function, and evidence of interstitial pneumonitis. Acute exposure to mercury vapor has been shown to result in profound central nervous system effects, including psychotic reactions characterized by delirium, hallucinations, and suicidal tendency. Occupational exposure has resulted in broad-ranging functional disturbance, including erethism, irritability, excitability, excessive shyness, and insomnia. With continuing exposure, a fine tremor develops and may escalate to violent muscular spasms. Tremor initially involves the hands and later spreads to the eyelids, lips, and tongue. Long-term, low-level exposure has been associated with more subtle symptoms of erethism, including fatigue, irritability, loss of memory, vivid dreams and depression.
Treatment
Research on the treatment of mercury poisoning is limited. Currently available drugs for acute mercurial poisoning include chelators N-acetyl-D,L-penicillamine (NAP), British Anti-Lewisite (BAL), 2,3-dimercapto-1-propanesulfonic acid (DMPS), and dimercaptosuccinic acid (DMSA). In one small study including 11 construction workers exposed to elemental mercury, patients were treated with DMSA and NAP. Chelation therapy with both drugs resulted in the mobilization of a small fraction of the total estimated body mercury. DMSA was able to increase the excretion of mercury to a greater extent than NAP.
Regulations
International
140 countries agreed in the Minamata Convention on Mercury by the United Nations Environment Programme (UNEP) to prevent mercury vapor emissions. The convention was signed on 10 October 2013.
United States
In the United States, the Environmental Protection Agency is charged with regulating and managing mercury contamination. Several laws give the EPA this authority, including the Clean Air Act, the Clean Water Act, the Resource Conservation and Recovery Act, and the Safe Drinking Water Act. Additionally, the Mercury-Containing and Rechargeable Battery Management Act, passed in 1996, phases out the use of mercury in batteries, and provides for the efficient and cost-effective disposal of many types of used batteries. North America contributed approximately 11% of the total global anthropogenic mercury emissions in 1995.
The United States Clean Air Act, passed in 1990, put mercury on a list of toxic pollutants that need to be controlled to the greatest possible extent. Thus, industries that release high concentrations of mercury into the environment agreed to install maximum achievable control technologies (MACT). In March 2005, the EPA promulgated a regulation that added power plants to the list of sources that should be controlled and instituted a national cap and trade system. States were given until November 2006 to impose stricter controls, but after a legal challenge from several states, the regulations were struck down by a federal appeals court on 8 February 2008. The rule was deemed not sufficient to protect the health of persons living near coal-fired power plants, given the negative effects documented in the EPA Study Report to Congress of 1998. However newer data published in 2015 showed that after introduction of the stricter controls mercury declined sharply, indicating that the Clean Air Act had its intended impact.
The EPA announced new rules for coal-fired power plants on 22 December 2011. Cement kilns that burn hazardous waste are held to a looser standard than are standard hazardous waste incinerators in the United States, and as a result are a disproportionate source of mercury pollution.
European Union
In the European Union, the directive on the Restriction of the Use of Certain Hazardous Substances in Electrical and Electronic Equipment (see RoHS) bans mercury from certain electrical and electronic products, and limits the amount of mercury in other products to less than 1000 ppm. There are restrictions for mercury concentration in packaging (the limit is 100 ppm for sum of mercury, lead, hexavalent chromium and cadmium) and batteries (the limit is 5 ppm). In July 2007, the European Union also banned mercury in non-electrical measuring devices, such as thermometers and barometers. The ban applies to new devices only, and contains exemptions for the health care sector and a two-year grace period for manufacturers of barometers.
Scandinavia
Norway enacted a total ban on the use of mercury in the manufacturing and import/export of mercury products, effective 1 January 2008. In 2002, several lakes in Norway were found to be badly affected by mercury pollution, with more than 1 μg/g of mercury in their sediment. In 2008, Norway's Minister of the Environment and Development Erik Solheim said: "Mercury is among the most dangerous environmental toxins. Satisfactory alternatives to Hg in products are available, and it is therefore fitting to induce a ban." Products containing mercury were banned in Sweden in 2009, while elemental mercury has been banned from manufacture and use in all but a few applications (such as certain energy-saving light sources and amalgam dental fillings) in Denmark since 2008.
See also
COLEX process (isotopic separation)
Mercury pollution in the ocean
Red mercury
Notes
References
Further reading
External links
Chemistry in its element podcast (MP3) from the Royal Society of Chemistry's Chemistry World: Mercury
Mercury at The Periodic Table of Videos (University of Nottingham)
Centers for Disease Control and Prevention – Mercury Topic
EPA fish consumption guidelines
Hg 80 Mercury
Material Safety Data Sheet – Mercury ICSC 0056
Stopping Pollution: Mercury – Oceana
Natural Resources Defense Council (NRDC): Mercury Contamination in Fish guide – NRDC
NLM Hazardous Substances Databank – Mercury
BBC – Earth News – Mercury "turns" wetland birds such as ibises homosexual
Changing Patterns in the Use, Recycling, and Material Substitution of Mercury in the United States United States Geological Survey
Thermodynamical data on liquid mercury.
Chemical elements
Coolants
Endocrine disruptors
Native element minerals
Neurotoxins
Nuclear reactor coolants
Occupational safety and health
Transition metals
Chemical elements with rhombohedral structure
Mercury compounds
Mercury minerals
Mercury poisoning | Mercury (element) | Physics,Chemistry | 9,665 |
34,309,119 | https://en.wikipedia.org/wiki/Import%20and%20export%20of%20data | The import and export of data is the automated or semi-automated input and output of data sets between different software applications. It involves "translating" from the format used in one application into that used by another, where such translation is accomplished automatically via machine processes, such as transcoding, data transformation, and others. True exports of data often contain data in raw formats otherwise unreadable to end-users without the user interface that was designed to render it.
Import and export of data shares semantic analogy with copying and pasting, in that sets of data are copied from one application and pasted into another. In fact, the software development behind operating system clipboards (and clipboard extender apps) greatly concerns the many details and challenges of data transformation and transcoding, in order to present the end user with the illusion of effortless copy and paste between any two apps, no matter how internally different. The "Save As" command in many applications requires much of the same engineering, when files are saved as another file format.
The ability to import and export data (or the lack of it) has large economic implications. Entering data in non-automated ways, such as manual rekeying, is resource-intensive, and a lack of interoperability and data portability between systems that cannot exchange data causes stovepiping, forgoes efficiencies such as those seen in mash-ups, and limits the ability to search the information with tools such as grep.
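As a concrete illustration of the kind of automated "translation" described above, the following minimal sketch exports records from a CSV file and re-imports them as a JSON document using Python's standard library. The file names and field handling are illustrative assumptions, not taken from any particular application.

```python
# Minimal sketch of a format "translation" step: export rows from a CSV
# file and import them into a JSON document. File names are illustrative.
import csv
import json

def csv_to_json(csv_path, json_path):
    # Read the source format: each row becomes a dict keyed by the header.
    with open(csv_path, newline="", encoding="utf-8") as src:
        rows = list(csv.DictReader(src))
    # Write the target format: the same records, re-encoded as JSON.
    with open(json_path, "w", encoding="utf-8") as dst:
        json.dump(rows, dst, indent=2)
    return len(rows)

if __name__ == "__main__":
    count = csv_to_json("contacts.csv", "contacts.json")
    print(f"exported {count} records")
```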
See also
Data dump as export from databases
Data portability
Solid (web decentralization project): allows users to control and export their own data
References
External links
Input/output | Import and export of data | Technology | 360 |
22,059,700 | https://en.wikipedia.org/wiki/December%202038%20lunar%20eclipse | A penumbral lunar eclipse will occur at the Moon’s ascending node of orbit on Saturday, December 11, 2038, with an umbral magnitude of −0.2876. A lunar eclipse occurs when the Moon moves into the Earth's shadow, causing the Moon to be darkened. A penumbral lunar eclipse occurs when part or all of the Moon's near side passes into the Earth's penumbra. Unlike a solar eclipse, which can only be viewed from a relatively small area of the world, a lunar eclipse may be viewed from anywhere on the night side of Earth. Because the eclipse occurs about 3.3 days after apogee (reached on December 8, 2038, at 8:35 UTC), the Moon's apparent diameter will be smaller than average.
This eclipse will be the last of four penumbral lunar eclipses in 2038, with the others occurring on January 21, June 17, and July 16.
Visibility
The eclipse will be completely visible over northeast Africa, Europe, Asia, and Australia, seen rising over west and central Africa and setting over the central Pacific Ocean.
Eclipse details
Shown below is a table displaying details about this particular lunar eclipse. It describes various parameters pertaining to this eclipse.
Eclipse season
This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight.
Related eclipses
Eclipses in 2038
An annular solar eclipse on January 5.
A penumbral lunar eclipse on January 21.
A penumbral lunar eclipse on June 17.
An annular solar eclipse on July 2.
A penumbral lunar eclipse on July 16.
A penumbral lunar eclipse on December 11.
A total solar eclipse on December 26.
Metonic
Preceded by: Lunar eclipse of February 22, 2035
Followed by: Lunar eclipse of September 29, 2042
Tzolkinex
Preceded by: Lunar eclipse of October 30, 2031
Followed by: Lunar eclipse of January 22, 2046
Half-Saros
Preceded by: Solar eclipse of December 5, 2029
Followed by: Solar eclipse of December 16, 2047
Tritos
Preceded by: Lunar eclipse of January 12, 2028
Followed by: Lunar eclipse of November 9, 2049
Lunar Saros 116
Preceded by: Lunar eclipse of November 30, 2020
Followed by: Lunar eclipse of December 22, 2056
Inex
Preceded by: Lunar eclipse of December 31, 2009
Followed by: Lunar eclipse of November 21, 2067
Triad
Preceded by: Lunar eclipse of February 11, 1952
Followed by: Lunar eclipse of October 12, 2125
Lunar eclipses of 2038–2042
Saros 116
Half-Saros cycle
A lunar eclipse will be preceded and followed by solar eclipses by 9 years and 5.5 days (a half saros). This lunar eclipse is related to two total solar eclipses of Solar Saros 123.
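As a rough check of the half-saros relationship described above, the following sketch computes the day counts between this eclipse and the two solar eclipses listed under Half-Saros. The saros length of about 6585.32 days is an assumed standard value rather than a figure stated in the article.

```python
# Sketch: day counts between this lunar eclipse and the solar eclipses listed
# above. One saros is roughly 6585.32 days (assumed), so a half saros is
# about 3292.66 days, i.e. 9 years and 5.5 days.
from datetime import date

half_saros_days = 6585.32 / 2  # about 3292.66 days

lunar = date(2038, 12, 11)
preceding_solar = date(2029, 12, 5)
following_solar = date(2047, 12, 16)

print((lunar - preceding_solar).days)   # 3293 days, close to the half saros
print((following_solar - lunar).days)   # 3292 days, close to the half saros
```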
See also
List of lunar eclipses and List of 21st-century lunar eclipses
Notes
External links
2038-12
2038-12
2038 in science | December 2038 lunar eclipse | Astronomy | 679 |
46,149 | https://en.wikipedia.org/wiki/GLONASS | GLONASS is a Russian satellite navigation system operating as part of a radionavigation-satellite service. It provides an alternative to Global Positioning System (GPS) and is the second navigational system in operation with global coverage and of comparable precision.
Satellite navigation devices supporting both GPS and GLONASS have more satellites available, meaning positions can be fixed more quickly and accurately, especially in built-up areas where buildings may obscure the view to some satellites. Owing to its higher orbital inclination, GLONASS supplementation of GPS systems also improves positioning in high latitudes (near the poles).
Development of GLONASS began in the Soviet Union in 1976. Beginning on 12 October 1982, numerous rocket launches added satellites to the system until the completion of the constellation in 1995. In 2001, after a decline in capacity during the late 1990s, the restoration of the system was made a government priority, and funding increased substantially. GLONASS is the most expensive program of the Roscosmos, consuming a third of its budget in 2010.
By 2010, GLONASS had achieved full coverage of Russia's territory. In October 2011, the full orbital constellation of 24 satellites was restored, enabling full global coverage. The GLONASS satellites' designs have undergone several upgrades, with the latest version, GLONASS-K2, launched in 2023.
System description
GLONASS is a global navigation satellite system, providing real time position and velocity determination for military and civilian users. The satellites are located in middle circular orbit at altitude with a 64.8° inclination and an orbital period of 11 hours and 16 minutes (every 17 revolutions, done in 8 sidereal days, a satellite passes over the same location). GLONASS's orbit makes it especially suited for usage in high latitudes (north or south), where getting a GPS signal can be problematic.
The constellation operates in three orbital planes, with eight evenly spaced satellites on each. A fully operational constellation with global coverage consists of 24 satellites, while 18 satellites are necessary for covering the territory of Russia. To get a position fix the receiver must be in the range of at least four satellites.
Signal
FDMA
GLONASS satellites transmit two types of signals: open standard-precision signal L1OF/L2OF, and obfuscated high-precision signal L1SF/L2SF.
The signals use similar DSSS encoding and binary phase-shift keying (BPSK) modulation as in GPS signals. All GLONASS satellites transmit the same code as their standard-precision signal; however each transmits on a different frequency using a 15-channel frequency-division multiple access (FDMA) technique spanning either side from 1602.0 MHz, known as the L1 band. The center frequency is 1602 MHz + n × 0.5625 MHz, where n is a satellite's frequency channel number (n=−6,...,0,...,6, previously n=0,...,13). Signals are transmitted in a 38° cone, using right-hand circular polarization, at an EIRP between 25 and 27 dBW (316 to 500 watts). Note that the 24-satellite constellation is accommodated with only 15 channels by using identical frequency channels to support antipodal (opposite side of planet in orbit) satellite pairs, as these satellites are never both in view of an Earth-based user at the same time.
The L2 band signals use the same FDMA as the L1 band signals, but transmit straddling 1246 MHz with the center frequency 1246 MHz + n × 0.4375 MHz, where n spans the same range as for L1. In the original GLONASS design, only obfuscated high-precision signal was broadcast in the L2 band, but starting with GLONASS-M, an additional civil reference signal L2OF is broadcast with an identical standard-precision code to the L1OF signal.
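The channel-to-frequency mapping above is simple enough to compute directly. The following sketch evaluates the quoted L1 and L2 formulas for the documented channel numbers n = −6 … 6; function and variable names are illustrative.

```python
# Sketch: GLONASS FDMA carrier center frequencies from the formulas quoted
# above (L1: 1602 MHz + n * 0.5625 MHz, L2: 1246 MHz + n * 0.4375 MHz).
def glonass_fdma_centers(n):
    if not -6 <= n <= 6:
        raise ValueError("channel number outside the documented range")
    l1 = 1602.0 + n * 0.5625  # MHz
    l2 = 1246.0 + n * 0.4375  # MHz
    return l1, l2

for n in range(-6, 7):
    l1, l2 = glonass_fdma_centers(n)
    print(f"n={n:+d}  L1={l1:.4f} MHz  L2={l2:.4f} MHz")
```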
The open standard-precision signal is generated with modulo-2 addition (XOR) of a 511 kbit/s pseudo-random ranging code, a 50 bit/s navigation message, and an auxiliary 100 Hz meander sequence (Manchester code), all generated using a single time/frequency oscillator. The pseudo-random code is generated with a 9-stage shift register operating with a period of 1 millisecond.
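A maximal-length sequence of the kind described can be produced with a small shift-register simulation. In the sketch below the feedback taps at stages 5 and 9 are an assumption based on commonly published descriptions of the GLONASS ranging code, not something stated in the text; the balance property printed at the end (256 ones out of 511 chips) holds for any maximal-length sequence of this register size.

```python
# Sketch of a 9-stage maximal-length shift-register sequence (511 chips).
# The feedback taps at stages 5 and 9 are an assumption, not from the text.
def m_sequence(taps=(5, 9), stages=9, length=511):
    reg = [1] * stages              # start from the all-ones state
    out = []
    for _ in range(length):
        out.append(reg[-1])         # take each chip from the last stage
        feedback = reg[taps[0] - 1] ^ reg[taps[1] - 1]
        reg = [feedback] + reg[:-1]  # shift by one stage
    return out

code = m_sequence()
print(len(code), sum(code))         # 511 chips, of which 256 are ones
```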
The navigational message is modulated at 50 bits per second. The superframe of the open signal is 7500 bits long and consists of 5 frames of 30 seconds, taking 150 seconds (2.5 minutes) to transmit the continuous message. Each frame is 1500 bits long and consists of 15 strings of 100 bits (2 seconds for each string), with 85 bits (1.7 seconds) for data and check-sum bits, and 15 bits (0.3 seconds) for the time mark. Strings 1–4 provide immediate data for the transmitting satellite, and are repeated every frame; the data include ephemeris, clock and frequency offsets, and satellite status. Strings 5–15 provide non-immediate data (i.e. almanac) for each satellite in the constellation, with frames I–IV each describing five satellites, and frame V describing the remaining four satellites.
The ephemerides are updated every 30 minutes using data from the Ground Control segment; they use Earth Centred Earth Fixed (ECEF) Cartesian coordinates in position and velocity, and include lunisolar acceleration parameters. The almanac uses modified orbital elements (Keplerian elements) and is updated daily.
The more accurate high-precision signal is available for authorized users, such as the Russian military, yet unlike the United States P(Y) code, which is modulated by an encrypting W code, the GLONASS restricted-use codes are broadcast in the clear using only security through obscurity. The details of the high-precision signal have not been disclosed. The modulation (and therefore the tracking strategy) of the data bits on the L2SF code has recently changed from unmodulated to 250 bit/s burst at random intervals. The L1SF code is modulated by the navigation data at 50 bit/s without a Manchester meander code.
The high-precision signal is broadcast in phase quadrature with the standard-precision signal, effectively sharing the same carrier wave, but with a ten-times-higher bandwidth than the open signal. The message format of the high-precision signal remains unpublished, although attempts at reverse-engineering indicate that the superframe is composed of 72 frames, each containing 5 strings of 100 bits and taking 10 seconds to transmit, with total length of 36 000 bits or 720 seconds (12 minutes) for the whole navigational message. The additional data are seemingly allocated to critical Lunisolar acceleration parameters and clock correction terms.
Accuracy
At peak efficiency, the standard-precision signal offers horizontal positioning accuracy within 5–10 metres, vertical positioning within , a velocity vector measuring within , and timing within 200 nanoseconds, all based on measurements from four first-generation satellites simultaneously; newer satellites such as GLONASS-M improve on this.
GLONASS uses a coordinate datum named "PZ-90" (Earth Parameters 1990 – Parametry Zemli 1990), in which the precise location of the North Pole is given as an average of its position from 1990 to 1995. This is in contrast to the GPS's coordinate datum, WGS 84, which uses the location of the North Pole in 1984. As of 17 September 2007, the PZ-90 datum has been updated to version PZ-90.02, which differs from WGS 84 by less than in any given direction. Since 31 December 2013, version PZ-90.11 is being broadcast, which is aligned to the International Terrestrial Reference System and Frame 2008 at epoch 2011.0 at the centimetre level, but ideally a conversion to ITRF2008 should be done.
CDMA
Since 2008, new CDMA signals are being researched for use with GLONASS.
The interface control documents for GLONASS CDMA signals was published in August 2016.
According to GLONASS developers, there will be three open and two restricted CDMA signals. The open signal L3OC is centered at 1202.025 MHz and uses BPSK(10) modulation for both data and pilot channels; the ranging code transmits at 10.23 million chips per second, modulated onto the carrier frequency using QPSK with in-phase data and quadrature pilot. The data is error-coded with 5-bit Barker code and the pilot with 10-bit Neuman-Hoffman code.
Open L1OC and restricted L1SC signals are centered at 1600.995 MHz, and open L2OC and restricted L2SC signals are centered at 1248.06 MHz, overlapping with GLONASS FDMA signals. Open signals L1OC and L2OC use time-division multiplexing to transmit pilot and data signals, with BPSK(1) modulation for data and BOC(1,1) modulation for pilot; wide-band restricted signals L1SC and L2SC use BOC (5, 2.5) modulation for both data and pilot, transmitted in quadrature phase to the open signals; this places peak signal strength away from the center frequency of narrow-band open signals.
Binary phase-shift keying (BPSK) is used by standard GPS and GLONASS signals. Binary offset carrier (BOC) is the modulation used by Galileo, modernized GPS, and BeiDou-2.
The navigational message of CDMA signals is transmitted as a sequence of text strings. The message has variable size - each pseudo-frame usually includes six strings and contains ephemerides for the current satellite (string types 10, 11, and 12 in a sequence) and part of the almanac for three satellites (three strings of type 20). To transmit the full almanac for all current 24 satellites, a superframe of 8 pseudo-frames is required. In the future, the superframe will be expanded to 10 pseudo-frames of data to cover full 30 satellites.
The message can also contain Earth's rotation parameters, ionosphere models, long-term orbit parameters for GLONASS satellites, and COSPAS-SARSAT messages. The system time marker is transmitted with each string; UTC leap second correction is achieved by shortening or lengthening (zero-padding) the final string of the day by one second, with abnormal strings being discarded by the receiver.
The strings have a version tag to facilitate forward compatibility: future upgrades to the message format will not break older equipment, which will continue to work by ignoring new data (as long as the constellation still transmits old string types), but up-to-date equipment will be able to use additional information from newer satellites.
The navigational message of the L3OC signal is transmitted at 100 bit/s, with each string of symbols taking 3 seconds (300 bits). A pseudo-frame of 6 strings takes 18 seconds (1800 bits) to transmit. A superframe of 8 pseudo-frames is 14,400 bits long and takes 144 seconds (2 minutes 24 seconds) to transmit the full almanac.
The navigational message of the L1OC signal is transmitted at 100 bit/s. The string is 250 bits long and takes 2.5 seconds to transmit. A pseudo-frame is 1500 bits (15 seconds) long, and a superframe is 12,000 bits or 120 seconds (2 minutes).
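The string, pseudo-frame, and superframe durations quoted for the CDMA signals follow directly from the string lengths and the 100 bit/s rate. A small sketch of that arithmetic, with illustrative names, is shown below.

```python
# Sketch: deriving the quoted CDMA navigation-message timings from the string
# sizes and the 100 bit/s rate (names are illustrative).
BIT_RATE = 100  # bit/s for the L3OC and L1OC messages

def timings(string_bits, strings_per_pseudoframe=6, pseudoframes_per_superframe=8):
    pseudoframe_bits = string_bits * strings_per_pseudoframe
    superframe_bits = pseudoframe_bits * pseudoframes_per_superframe
    return (string_bits / BIT_RATE,
            pseudoframe_bits / BIT_RATE,
            superframe_bits / BIT_RATE)

print(timings(300))  # L3OC: (3.0, 18.0, 144.0) s per string/pseudo-frame/superframe
print(timings(250))  # L1OC: (2.5, 15.0, 120.0) s
```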
The L2OC signal does not transmit any navigational message, only the pseudo-range codes.
Glonass-K1 test satellite launched in 2011 introduced L3OC signal. Glonass-M satellites produced since 2014 (s/n 755+) will also transmit L3OC signal for testing purposes.
Enhanced Glonass-K1 and Glonass-K2 satellites, to be launched from 2023, will feature a full suite of modernized CDMA signals in the existing L1 and L2 bands, which includes L1SC, L1OC, L2SC, and L2OC, as well as the L3OC signal. Glonass-K2 series should gradually replace existing satellites starting from 2023, when Glonass-M launches will cease.
Glonass-KM satellites will be launched by 2025. Additional open signals are being studied for these satellites, based on frequencies and formats used by existing GPS, Galileo, and Beidou/COMPASS signals:
open signal L1OCM using BOC(1,1) modulation centered at 1575.42 MHz, similar to modernized GPS signal L1C, Galileo signal E1, and Beidou/COMPASS signal B1C;
open signal L5OCM using BPSK(10) modulation centered at 1176.45 MHz, similar to the GPS "Safety of Life" (L5), Galileo signal E5a, and Beidou/COMPASS signal B2a;
open signal L3OCM using BPSK(10) modulation centered at 1207.14 MHz, similar to Galileo signal E5b and Beidou/COMPASS signal B2b.
Such an arrangement will allow easier and cheaper implementation of multi-standard GNSS receivers.
With the introduction of CDMA signals, the constellation will be expanded to 30 active satellites by 2025; this may require eventual deprecation of FDMA signals. The new satellites will be deployed into three additional planes, bringing the total to six planes from the current three—aided by System for Differential Correction and Monitoring (SDCM), which is a GNSS augmentation system based on a network of ground-based control stations and communication satellites Luch 5A and Luch 5B.
Six additional Glonass-V satellites, using Tundra orbit in three orbital planes, will be launched starting in 2025; this regional high-orbit segment will offer increased regional availability and 25% improvement in precision over Eastern Hemisphere, similar to Japanese QZSS system and Beidou-1. The new satellites will form two ground traces with inclination of 64.8°, eccentricity of 0.072, period of 23.9 hours, and ascending node longitude of 60° and 120°. Glonass-V vehicles are based on Glonass-K platform and will broadcast new CDMA signals only. Previously Molniya orbit, geosynchronous orbit, or inclined orbit were also under consideration for the regional segment.
Navigational message
L1OC
L3OC
Common properties of open CDMA signals
Satellites
The main contractor of the GLONASS program is Joint Stock Company Information Satellite Systems Reshetnev (ISS Reshetnev, formerly called NPO-PM). The company, located in Zheleznogorsk, is the designer of all GLONASS satellites, in cooperation with the Institute for Space Device Engineering (:ru:РНИИ КП) and the Russian Institute of Radio Navigation and Time. Serial production of the satellites is accomplished by the company Production Corporation Polyot in Omsk.
Over the three decades of development, the satellite designs have gone through numerous improvements, and can be divided into three generations: the original GLONASS (since 1982), GLONASS-M (since 2003) and GLONASS-K (since 2011). Each GLONASS satellite has a GRAU designation 11F654, and each of them also has the military "Cosmos-NNNN" designation.
First generation
The true first generation of GLONASS (also called Uragan) satellites were all three-axis stabilized vehicles, generally weighing and were equipped with a modest propulsion system to permit relocation within the constellation. Over time they were upgraded to Block IIa, IIb, and IIv vehicles, with each block containing evolutionary improvements.
Six Block IIa satellites were launched in 1985–1986 with improved time and frequency standards over the prototypes, and increased frequency stability. These spacecraft also demonstrated a 16-month average operational lifetime. Block IIb spacecraft, with a two-year design lifetime, appeared in 1987, of which a total of 12 were launched, but half were lost in launch vehicle accidents. The six spacecraft that made it to orbit worked well, operating for an average of nearly 22 months.
Block IIv was the most prolific of the first generation. Used exclusively from 1988 to 2000, and still included in launches through 2005, this block accounted for a total of 56 satellites launched. The design life was three years; however, numerous spacecraft exceeded it, with one late model lasting 68 months, nearly double the design life.
Block II satellites were typically launched three at a time from the Baikonur Cosmodrome using Proton-K Blok-DM2 or Proton-K Briz-M boosters. The only exception was when, on two launches, an Etalon geodetic reflector satellite was substituted for a GLONASS satellite.
Second generation
The second generation of satellites, known as Glonass-M, were developed beginning in 1990 and first launched in 2003. These satellites possess a substantially increased lifetime of seven years and weigh slightly more at . They are approximately in diameter and high, with a solar array span of for an electrical power generation capability of 1600 watts at launch. The aft payload structure houses 12 primary antennas for L-band transmissions. Laser corner-cube reflectors are also carried to aid in precise orbit determination and geodetic research. On-board cesium clocks provide the local clock source. 52 Glonass-M have been produced and launched.
A total of 41 second generation satellites were launched through the end of 2013. As with the previous generation, the second generation spacecraft were launched three at a time using Proton-K Blok-DM2 or Proton-K Briz-M boosters. Some were launched alone with Soyuz-2-1b/Fregat.
In July 2015, ISS Reshetnev announced that it had completed the last GLONASS-M (No. 61) spacecraft and it was putting it in storage waiting for launch, along with eight previously built satellites.
On 22 September 2017, the GLONASS-M No. 52 satellite went into operation and the orbital grouping again increased to 24 space vehicles.
Third generation
GLONASS-K is a substantial improvement of the previous generation: it is the first unpressurised GLONASS satellite with a much reduced mass of versus the of GLONASS-M. It has an operational lifetime of 10 years, compared to the 7-year lifetime of the second generation GLONASS-M. It will transmit more navigation signals to improve the system's accuracy — including new CDMA signals in the L3 and L5 bands, which will use modulation similar to modernized GPS, Galileo, and BeiDou. The Glonass-K series consists of 26 satellites, with satellite indices 65–98, and is widely used by the Russian military.
The new satellite's advanced equipment—made solely from Russian components — will allow the doubling of GLONASS' accuracy. As with the previous satellites, these are 3-axis stabilized, nadir pointing with dual solar arrays. The first GLONASS-K satellite was successfully launched on 26 February 2011.
Due to their weight reduction, GLONASS-K spacecraft can be launched in pairs from the Plesetsk Cosmodrome launch site using the substantially lower cost Soyuz-2.1b boosters, or six at a time from the Baikonur Cosmodrome using Proton-K Briz-M launch vehicles.
Ground control
The ground control segment of GLONASS is almost entirely located within former Soviet Union territory, except for several in Brazil and one in Nicaragua.
The GLONASS ground segment consists of:
a system control centre;
five Telemetry, Tracking and Command centers;
two Laser Ranging Stations; and
ten Monitoring and Measuring Stations.
Receivers
Companies producing GNSS receivers making use of GLONASS:
Furuno
JAVAD GNSS, Inc
Septentrio
Topcon
C-Nav
Magellan Navigation
Novatel
ComNav technology Ltd.
Leica Geosystems
Hemisphere GNSS
Trimble Inc
u-blox
NPO Progress describes a receiver called GALS-A1, which combines GPS and GLONASS reception.
SkyWave Mobile Communications manufactures an Inmarsat-based satellite communications terminal that uses both GLONASS and GPS.
Some of the latest receivers in the Garmin eTrex line also support GLONASS (along with GPS). Garmin also produces a standalone Bluetooth receiver, the GLO for Aviation, which combines GPS, WAAS and GLONASS.
Various smartphones from 2011 onwards have integrated GLONASS capability in addition to their pre-existing GPS receivers, with the intention of reducing signal acquisition periods by allowing the device to pick up more satellites than with a single-network receiver, including devices from:
Xiaomi
Sony Ericsson
ZTE
Huawei
Samsung
Apple (since iPhone 4S, concurrently with GPS)
HTC
LG
Motorola
Nokia
Status
Availability
The GLONASS constellation status is:
The system requires 18 satellites for continuous navigation services covering all of Russia, and 24 satellites to provide services worldwide. The GLONASS system covers 100% of worldwide territory.
On 2 April 2014, the system experienced a technical failure that resulted in practical unavailability of the navigation signal for around 12 hours.
On 14–15 April 2014, nine GLONASS satellites experienced a technical failure due to software problems.
On 19 February 2016, three GLONASS satellites experienced a technical failure: the batteries of GLONASS-738 exploded, the batteries of GLONASS-737 were depleted, and GLONASS-736 experienced a stationkeeping failure due to human error during maneuvering. GLONASS-737 and GLONASS-736 were expected to be operational again after maintenance, and one new satellite (GLONASS-751) to replace GLONASS-738 was expected to complete commissioning in early March 2016. The full capacity of the satellite group was expected to be restored in the middle of March 2016.
After the launching of two new satellites and maintenance of two others, the full capacity of the satellite group was restored.
Accuracy
According to data from the Russian System of Differential Correction and Monitoring, the precision of GLONASS navigation definitions (for p=0.95) for latitude and longitude were with a mean number of navigation space vehicles (NSV) of 7–8 (depending on station). In comparison, the precision of GPS navigation definitions at the same time were with a mean number of NSV of 6–11 (depending on station).
Some modern receivers are able to use both GLONASS and GPS satellites together, providing greatly improved coverage in urban canyons and giving a very fast time to fix due to over 50 satellites being available. In indoor, urban canyon or mountainous areas, accuracy can be greatly improved over using GPS alone. When using both navigation systems simultaneously, the precision of GLONASS/GPS navigation definitions were with a mean number of NSV of 14–19 (depending on station).
In May 2009, Anatoly Perminov, then director of the Roscosmos, stated that actions were undertaken to expand GLONASS's constellation and to improve the ground segment to increase the navigation definition of GLONASS to an accuracy of by 2011. In particular, the latest satellite design, GLONASS-K has the ability to double the system's accuracy once introduced. The system's ground segment is also to undergo improvements. As of early 2012, sixteen positioning ground stations are under construction in Russia and in the Antarctic at the Bellingshausen and Novolazarevskaya bases. New stations will be built around the southern hemisphere from Brazil to Indonesia. Together, these improvements are expected to bring GLONASS' accuracy to 0.6 m or better by 2020. The setup of a GLONASS receiving station in the Philippines is also now under negotiation.
History
See also
Aviaconversiya – a Russian satellite navigation firm
BeiDou – Chinese counterpart
ERA-GLONASS – GLONASS-based system of emergency response
Galileo – European Union's counterpart
Global Positioning System – American counterpart
List of GLONASS satellites
Multilateration – the mathematical technique used for positioning
NAVIC – Indian counterpart
Tsikada – a Russian satellite navigation system
Notes
References
Standards
Bibliography
GLONASS Interface Control Document, Edition 5.1, 2008 (backup)
GLONASS Interface Control Document, Version 4.0, 1998
External links
Official GLONASS web page
Navigation satellite constellations
Space program of Russia
Space program of the Soviet Union
Soviet inventions
Wireless locating
Earth observation satellites of the Soviet Union
Military equipment introduced in the 1980s | GLONASS | Technology | 5,071 |
30,110,338 | https://en.wikipedia.org/wiki/Tricholoma%20columbetta | Tricholoma columbetta, commonly known as dove-coloured tricholoma, is an edible mushroom of the large genus Tricholoma. It is found in Europe, where it is eaten in France.
Genus
Elias Magnus Fries described the species in 1821 as Agaricus columbetta. Paul Kummer placed it in the genus Tricholoma in 1871, within which it is classified in the Section Albata.
Description
The fruit body (mushroom) is white or ivory-coloured, sometimes with a pale ochre tinge in the centre of the cap or pinkish, violet-blue or greenish spots. The cap is conical in young specimens, expanding to convex or flattish with a wavy margin, and is 4–10 cm in diameter. It can be a little sticky when wet. The centre of the cap may have a small boss or be depressed. The gills are adnate and widely spaced. The cylindrical stalk is 6–14 cm tall and 0.8–2 cm thick, and has no ring. The mushroom has a mealy smell, which is stronger when it is cut. The spore print is white. The spores are 5–7.5 × 3.5–5.5 μm. Tricholoma columbetta is edible, with a pleasant taste.
Tricholoma albidum is similar but stains yellow when cut or bruised. T. columbetta could be confused with paler specimens of the poisonous Entoloma lividum, though the latter has a more grey-white cap and yellow or pink gills.
Distribution
Widespread across Europe, Tricholoma columbetta forms mycorrhizal relationships with oak (Quercus) and is found in woodlands, parks, and rarely sand dunes on sandy mildly acidic soils. Mushrooms appear from August to November. In southern Finland, mushrooms appear in August and September.
In 2010, Roger Phillips reported what "seems to be a first record of this species in North America."
See also
List of Tricholoma species
References
Edible fungi
columbetta
Fungi described in 1871
Fungi of Europe
Taxa named by Elias Magnus Fries
Fungus species | Tricholoma columbetta | Biology | 437 |
676,328 | https://en.wikipedia.org/wiki/Graph%20homomorphism | In the mathematical field of graph theory, a graph homomorphism is a mapping between two graphs that respects their structure. More concretely, it is a function between the vertex sets of two graphs that maps adjacent vertices to adjacent vertices.
Homomorphisms generalize various notions of graph colorings and allow the expression of an important class of constraint satisfaction problems, such as certain scheduling or frequency assignment problems.
The fact that homomorphisms can be composed leads to rich algebraic structures: a preorder on graphs, a distributive lattice, and a category (one for undirected graphs and one for directed graphs).
The computational complexity of finding a homomorphism between given graphs is prohibitive in general, but a lot is known about special cases that are solvable in polynomial time. Boundaries between tractable and intractable cases have been an active area of research.
Definitions
In this article, unless stated otherwise, graphs are finite, undirected graphs with loops allowed, but multiple edges (parallel edges) disallowed.
A graph homomorphism f from a graph G to a graph H, written f : G → H,
is a function from the vertex set of G to the vertex set of H that preserves edges. Formally, {f(u), f(v)} is an edge of H whenever {u, v} is an edge of G, for all pairs of vertices u, v of G.
If there exists any homomorphism from G to H, then G is said to be homomorphic to H or H-colorable. This is often denoted as just G → H.
The above definition is extended to directed graphs. Then, for a homomorphism f : G → H, (f(u),f(v)) is an arc (directed edge) of H whenever (u,v) is an arc of G.
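A minimal sketch of the definition, not part of the original article: graphs are represented as a vertex list plus a set of edges, is_homomorphism checks that every edge is mapped onto an edge, and find_homomorphism brute-forces over all vertex maps (practical only for very small graphs). The example wraps the 6-cycle around the 3-cycle.

```python
# Illustrative sketch. A graph is (vertices, edges); each undirected edge is
# stored as a frozenset of its endpoints (a loop is a one-element frozenset).
from itertools import product

def is_homomorphism(f, G, H):
    """True if f maps every edge of G onto an edge of H."""
    _, G_edges = G
    _, H_edges = H
    return all(frozenset(f[u] for u in edge) in H_edges for edge in G_edges)

def find_homomorphism(G, H):
    """Brute force over all |V(H)|**|V(G)| vertex maps; tiny graphs only."""
    G_vertices, _ = G
    H_vertices, _ = H
    for choice in product(H_vertices, repeat=len(G_vertices)):
        f = dict(zip(G_vertices, choice))
        if is_homomorphism(f, G, H):
            return f
    return None

# Wrapping the 6-cycle around the 3-cycle: i -> i mod 3 is a homomorphism.
C6 = (list(range(6)), {frozenset({i, (i + 1) % 6}) for i in range(6)})
C3 = (list(range(3)), {frozenset({i, (i + 1) % 3}) for i in range(3)})
print(is_homomorphism({i: i % 3 for i in range(6)}, C6, C3))  # True
print(find_homomorphism(C6, C3) is not None)                  # True
```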
There is an injective homomorphism from G to H (i.e., one that maps distinct vertices in G to distinct vertices in H) if and only if G is isomorphic to a subgraph of H.
If a homomorphism f : G → H is a bijection, and its inverse function is also a graph homomorphism, then f is a graph isomorphism.
Covering maps are a special kind of homomorphisms that mirror the definition and many properties of covering maps in topology.
They are defined as surjective homomorphisms (i.e., something maps to each vertex) that are also locally bijective, that is, a bijection on the neighbourhood of each vertex.
An example is the bipartite double cover, formed from a graph by splitting each vertex v into v0 and v1 and replacing each edge u,v with edges u0,v1 and v0,u1. The function mapping v0 and v1 in the cover to v in the original graph is a homomorphism and a covering map.
Graph homeomorphism is a different notion, not related directly to homomorphisms. Roughly speaking, it requires injectivity, but allows mapping edges to paths (not just to edges). Graph minors are a still more relaxed notion.
Cores and retracts
Two graphs G and H are homomorphically equivalent if
G → H and H → G. The maps are not necessarily surjective nor injective. For instance, the complete bipartite graphs K2,2 and K3,3 are homomorphically equivalent: each map can be defined as taking the left (resp. right) half of the domain graph and mapping to just one vertex in the left (resp. right) half of the image graph.
A retraction is a homomorphism r from a graph G to a subgraph H of G such that r(v) = v for each vertex v of H.
In this case the subgraph H is called a retract of G.
A core is a graph with no homomorphism to any proper subgraph. Equivalently, a core can be defined as a graph that does not retract to any proper subgraph.
Every graph G is homomorphically equivalent to a unique core (up to isomorphism), called the core of G. Notably, this is not true in general for infinite graphs.
However, the same definitions apply to directed graphs and a directed graph is also equivalent to a unique core.
Every graph and every directed graph contains its core as a retract and as an induced subgraph.
For example, all complete graphs Kn and all odd cycles (cycle graphs of odd length) are cores.
Every 3-colorable graph G that contains a triangle (that is, has the complete graph K3 as a subgraph) is homomorphically equivalent to K3. This is because, on one hand, a 3-coloring of G is the same as a homomorphism G → K3, as explained below. On the other hand, every subgraph of G trivially admits a homomorphism into G, implying K3 → G. This also means that K3 is the core of any such graph G. Similarly, every bipartite graph that has at least one edge is equivalent to K2.
Connection to colorings
A k-coloring, for some integer k, is an assignment of one of k colors to each vertex of a graph G such that the endpoints of each edge get different colors. The k-colorings of G correspond exactly to homomorphisms from G to the complete graph Kk. Indeed, the vertices of Kk correspond to the k colors, and two colors are adjacent as vertices of Kk if and only if they are different. Hence a function defines a homomorphism to Kk if and only if it maps adjacent vertices of G to different colors (i.e., it is a k-coloring). In particular, G is k-colorable if and only if it is Kk-colorable.
If there are two homomorphisms G → H and H → Kk, then their composition G → Kk is also a homomorphism. In other words, if a graph H can be colored with k colors, and there is a homomorphism from G to H, then G can also be k-colored. Therefore, G → H implies χ(G) ≤ χ(H), where χ denotes the chromatic number of a graph (the least k for which it is k-colorable).
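The equivalence between k-colorings and homomorphisms into Kk can be checked mechanically for small graphs. The following self-contained sketch (illustrative, loop-free graphs only) finds the least k for which such a homomorphism exists, which is exactly the chromatic number.

```python
# Sketch (loop-free graphs only): the chromatic number of G is the least k
# such that G has a homomorphism into the complete graph K_k, i.e. a proper
# k-coloring.
from itertools import product

def has_homomorphism_to_Kk(vertices, edges, k):
    for colors in product(range(k), repeat=len(vertices)):
        f = dict(zip(vertices, colors))
        if all(f[u] != f[v] for (u, v) in edges):
            return True
    return False

def chromatic_number(vertices, edges):
    k = 1
    while not has_homomorphism_to_Kk(vertices, edges, k):
        k += 1
    return k

C5_edges = [(i, (i + 1) % 5) for i in range(5)]
print(chromatic_number(list(range(5)), C5_edges))  # 3: an odd cycle is not 2-colorable
```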
Variants
General homomorphisms can also be thought of as a kind of coloring: if the vertices of a fixed graph H are the available colors and edges of H describe which colors are compatible, then an H-coloring of G is an assignment of colors to vertices of G such that adjacent vertices get compatible colors.
Many notions of graph coloring fit into this pattern and can be expressed as graph homomorphisms into different families of graphs.
Circular colorings can be defined using homomorphisms into circular complete graphs, refining the usual notion of colorings.
Fractional and b-fold coloring can be defined using homomorphisms into Kneser graphs.
T-colorings correspond to homomorphisms into certain infinite graphs.
An oriented coloring of a directed graph is a homomorphism into any oriented graph.
An L(2,1)-coloring is a homomorphism into the complement of the path graph that is locally injective, meaning it is required to be injective on the neighbourhood of every vertex.
Orientations without long paths
Another interesting connection concerns orientations of graphs.
An orientation of an undirected graph G is any directed graph obtained by choosing one of the two possible orientations for each edge.
An example of an orientation of the complete graph Kk is the transitive tournament Tk with vertices 1,2,…,k and arcs from i to j whenever i < j.
A homomorphism between orientations of graphs G and H yields a homomorphism between the undirected graphs G and H, simply by disregarding the orientations.
On the other hand, given a homomorphism G → H between undirected graphs, any orientation H′ of H can be pulled back to an orientation G′ of G so that G′ has a homomorphism to H′.
Therefore, a graph G is k-colorable (has a homomorphism to Kk) if and only if some orientation of G has a homomorphism to Tk.
A folklore theorem states that for all k, a directed graph G has a homomorphism to Tk if and only if it admits no homomorphism from the directed path Pk+1.
Here Pn denotes the directed graph with vertices 1, 2, …, n and edges from i to i + 1, for i = 1, 2, …, n − 1.
Therefore, a graph is k-colorable if and only if it has an orientation that admits no homomorphism from Pk+1.
This statement can be strengthened slightly to say that a graph is k-colorable if and only if some orientation contains no directed path of length k (no Pk+1 as a subgraph).
This is the Gallai–Hasse–Roy–Vitaver theorem.
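On very small graphs the theorem can be checked by brute force. The sketch below (illustrative helper names; exponential in the number of edges and vertices, so suitable only as a demonstration) compares, for the 5-cycle, the chromatic number with the minimum over all orientations of the number of vertices on a longest directed path.

    from itertools import product

    def longest_dipath_vertices(n, arcs):
        # number of vertices on a longest directed path, by DFS over simple paths
        out = {v: [] for v in range(n)}
        for u, v in arcs:
            out[u].append(v)
        def dfs(v, seen):
            best = 1
            for w in out[v]:
                if w not in seen:
                    best = max(best, 1 + dfs(w, seen | {w}))
            return best
        return max(dfs(v, {v}) for v in range(n))

    def chromatic_number(n, edges):
        # smallest k admitting a proper k-coloring (brute force, tiny graphs only)
        for k in range(1, n + 1):
            if any(all(col[u] != col[v] for u, v in edges)
                   for col in product(range(k), repeat=n)):
                return k

    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # the 5-cycle
    best = min(
        longest_dipath_vertices(5, [(u, v) if d == 0 else (v, u)
                                    for (u, v), d in zip(edges, ds)])
        for ds in product([0, 1], repeat=len(edges)))
    print(best, chromatic_number(5, edges))   # both are 3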
Connection to constraint satisfaction problems
Examples
Some scheduling problems can be modeled as a question about finding graph homomorphisms. As an example, one might want to assign workshop courses to time slots in a calendar so that two courses attended by the same student are not too close to each other in time. The courses form a graph G, with an edge between any two courses that are attended by some common student. The time slots form a graph H, with an edge between any two slots that are distant enough in time. For instance, if one wants a cyclical, weekly schedule, such that each student gets their workshop courses on non-consecutive days, then H would be the complement graph of C7. A graph homomorphism from G to H is then a schedule assigning courses to time slots, as specified. To add a requirement saying that, e.g., no single student has courses on both Friday and Monday, it suffices to remove the corresponding edge from H.
A simple frequency allocation problem can be specified as follows: a number of transmitters in a wireless network must choose a frequency channel on which they will transmit data. To avoid interference, transmitters that are geographically close should use channels with frequencies that are far apart. If this condition is approximated with a single threshold to define 'geographically close' and 'far apart', then a valid channel choice again corresponds to a graph homomorphism. It should go from the graph of transmitters G, with edges between pairs that are geographically close, to the graph of channels H, with edges between channels that are far apart. While this model is rather simplified, it does admit some flexibility: transmitter pairs that are not close but could interfere because of geographical features can be added to the edges of G. Those that do not communicate at the same time can be removed from it. Similarly, channel pairs that are far apart but exhibit harmonic interference can be removed from the edge set of H.
In each case, these simplified models display many of the issues that have to be handled in practice. Constraint satisfaction problems, which generalize graph homomorphism problems, can express various additional types of conditions (such as individual preferences, or bounds on the number of coinciding assignments). This allows the models to be made more realistic and practical.
Formal view
Graphs and directed graphs can be viewed as a special case of the far more general notion called relational structures (defined as a set with a tuple of relations on it). Directed graphs are structures with a single binary relation (adjacency) on the domain (the vertex set). Under this view, homomorphisms of such structures are exactly graph homomorphisms.
In general, the question of finding a homomorphism from one relational structure to another is a constraint satisfaction problem (CSP).
The case of graphs gives a concrete first step that helps to understand more complicated CSPs.
Many algorithmic methods for finding graph homomorphisms, like backtracking, constraint propagation and local search, apply to all CSPs.
For graphs G and H, the question of whether G has a homomorphism to H corresponds to a CSP instance with only one kind of constraint, as follows. The variables are the vertices of G and the domain for each variable is the vertex set of H. An evaluation is a function that assigns to each variable an element of the domain, so a function f from V(G) to V(H). Each edge or arc (u,v) of G then corresponds to the constraint ((u,v), E(H)). This is a constraint expressing that the evaluation should map the arc (u,v) to a pair (f(u),f(v)) that is in the relation E(H), that is, to an arc of H. A solution to the CSP is an evaluation that respects all constraints, so it is exactly a homomorphism from G to H.
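As an illustration of this view, a generic backtracking search for a homomorphism (equivalently, a solution of the corresponding CSP) can be sketched as follows in plain Python, with no CSP library and with purely illustrative names. It is applied here to the workshop-scheduling instance described earlier, with five mutually conflicting courses forming a 5-cycle and the complement of C7 as the graph of compatible week days.

    def find_homomorphism(vertices_g, edges_g, vertices_h, edges_h):
        # backtracking search for a homomorphism G -> H (undirected, loopless H)
        eh = {frozenset(e) for e in edges_h}
        adj = {v: [] for v in vertices_g}
        for u, v in edges_g:
            adj[u].append(v)
            adj[v].append(u)
        order = list(vertices_g)
        f = {}
        def extend(i):
            if i == len(order):
                return True
            u = order[i]
            for image in vertices_h:
                # edge constraints: already-assigned neighbours must map to H-edges
                if all(frozenset((image, f[w])) in eh for w in adj[u] if w in f):
                    f[u] = image
                    if extend(i + 1):
                        return True
                    del f[u]
            return False
        return dict(f) if extend(0) else None

    # Courses 0..4 with an edge whenever two courses share a student (a 5-cycle),
    # and days 0..6 with an edge whenever two days are non-consecutive in the week.
    conflicts = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
    days = list(range(7))
    compatible = [(i, j) for i in days for j in days
                  if i < j and (j - i) % 7 not in (1, 6)]
    print(find_homomorphism(range(5), conflicts, days, compatible))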
Structure of homomorphisms
Compositions of homomorphisms are homomorphisms.
In particular, the relation → on graphs is transitive (and reflexive, trivially), so it is a preorder on graphs.
Let the equivalence class of a graph G under homomorphic equivalence be [G].
The equivalence class can also be represented by the unique core in [G].
The relation → is a partial order on those equivalence classes; it defines a poset.
Let G < H denote that there is a homomorphism from G to H, but no homomorphism from H to G.
The relation → is a dense order, meaning that for all (undirected) graphs G, H such that G < H, there is a graph K such that G < K < H (this holds except for the trivial cases G = K0 or K1).
For example, between any two complete graphs (except K0, K1, K2) there are infinitely many circular complete graphs, corresponding to rational numbers between natural numbers.
The poset of equivalence classes of graphs under homomorphisms is a distributive lattice, with the join of [G] and [H] defined as (the equivalence class of) the disjoint union [G ∪ H], and the meet of [G] and [H] defined as the tensor product [G × H] (the choice of graphs G and H representing the equivalence classes [G] and [H] does not matter).
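A sketch of the two operations on concrete representatives (the vertex-labelling scheme below is an arbitrary illustrative choice): the join is represented by a disjoint union and the meet by a tensor product.

    def disjoint_union(vg, eg, vh, eh):
        # join representative: tag the vertices so the two vertex sets stay disjoint
        v = [("G", x) for x in vg] + [("H", x) for x in vh]
        e = ([(("G", a), ("G", b)) for a, b in eg] +
             [(("H", a), ("H", b)) for a, b in eh])
        return v, e

    def tensor_product(vg, eg, vh, eh):
        # meet representative: (u1, v1) ~ (u2, v2) iff u1u2 in E(G) and v1v2 in E(H)
        v = [(x, y) for x in vg for y in vh]
        e = []
        for a, b in eg:
            for c, d in eh:
                e.append(((a, c), (b, d)))
                e.append(((a, d), (b, c)))
        return v, e

    # K_2 x K_3 is the 6-cycle, while K_2 x K_2 is a pair of disjoint edges.
    k2 = ([0, 1], [(0, 1)])
    k3 = ([0, 1, 2], [(0, 1), (0, 2), (1, 2)])
    print(len(tensor_product(*k2, *k3)[1]), len(tensor_product(*k2, *k2)[1]))  # 6 2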
The join-irreducible elements of this lattice are exactly the connected graphs. This can be shown using the fact that a homomorphism maps a connected graph into one connected component of the target graph.
The meet-irreducible elements of this lattice are exactly the multiplicative graphs. These are the graphs K such that a product G × H has a homomorphism to K only when one of G or H also does. Identifying multiplicative graphs lies at the heart of Hedetniemi's conjecture.
Graph homomorphisms also form a category, with graphs as objects and homomorphisms as arrows.
The initial object is the empty graph, while the terminal object is the graph with one vertex and one loop at that vertex.
The tensor product of graphs is the category-theoretic product and
the exponential graph is the exponential object for this category.
Since these two operations are always defined, the category of graphs is a cartesian closed category.
For the same reason, the lattice of equivalence classes of graphs under homomorphisms is in fact a Heyting algebra.
For directed graphs the same definitions apply. In particular → is a partial order on equivalence classes of directed graphs. It is distinct from the order → on equivalence classes of undirected graphs, but contains it as a suborder. This is because every undirected graph can be thought of as a directed graph where every arc (u,v) appears together with its inverse arc (v,u), and this does not change the definition of homomorphism. The order → for directed graphs is again a distributive lattice and a Heyting algebra, with join and meet operations defined as before. However, it is not dense. There is also a category with directed graphs as objects and homomorphisms as arrows, which is again a cartesian closed category.
Incomparable graphs
There are many incomparable graphs with respect to the homomorphism preorder, that is, pairs of graphs such that neither admits a homomorphism into the other.
One way to construct them is to consider the odd girth of a graph G, the length of its shortest odd-length cycle.
The odd girth is, equivalently, the smallest odd number g for which there exists a homomorphism from the cycle graph on g vertices to G. For this reason, if G → H, then the odd girth of G is greater than or equal to the odd girth of H.
On the other hand, if G → H, then the chromatic number of G is less than or equal to the chromatic number of H.
Therefore, if G has strictly larger odd girth than H and strictly larger chromatic number than H, then G and H are incomparable.
For example, the Grötzsch graph is 4-chromatic and triangle-free (it has girth 4 and odd girth 5), so it is incomparable to the triangle graph K3.
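This incomparability can be verified computationally from the two invariants. The brute-force sketch below (illustrative only) builds the Grötzsch graph as the Mycielskian of C5, a standard construction restated here for self-containment, and checks that it is triangle-free (so K3 admits no homomorphism into it) yet not 3-colorable (so it admits no homomorphism to K3).

    from itertools import combinations

    def mycielskian(n, edges):
        # vertices 0..n-1 form the original graph, n..2n-1 their copies, 2n the apex
        new_edges = list(edges)
        for u, v in edges:
            new_edges += [(u, v + n), (v, u + n)]   # each copy sees the neighbours of its original
        new_edges += [(u + n, 2 * n) for u in range(n)]
        return 2 * n + 1, new_edges

    def has_triangle(n, edges):
        adj = {frozenset(e) for e in edges}
        return any(frozenset((a, b)) in adj and frozenset((b, c)) in adj and
                   frozenset((a, c)) in adj for a, b, c in combinations(range(n), 3))

    def colorable(n, edges, k):
        # simple backtracking test for a proper k-coloring
        adj = [[] for _ in range(n)]
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        col = [None] * n
        def go(i):
            if i == n:
                return True
            for c in range(k):
                if all(col[w] != c for w in adj[i]):
                    col[i] = c
                    if go(i + 1):
                        return True
            col[i] = None
            return False
        return go(0)

    n, e = mycielskian(5, [(i, (i + 1) % 5) for i in range(5)])   # Grötzsch graph
    print(has_triangle(n, e), colorable(n, e, 3), colorable(n, e, 4))  # False False True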
Examples of graphs with arbitrarily large values of odd girth and chromatic number are Kneser graphs and generalized Mycielskians.
A sequence of such graphs, with simultaneously increasing values of both parameters, gives infinitely many incomparable graphs (an antichain in the homomorphism preorder).
Other properties, such as density of the homomorphism preorder, can be proved using such families.
Constructions of graphs with large values of chromatic number and girth, not just odd girth, are also possible, but more complicated (see Girth and graph coloring).
Among directed graphs, it is much easier to find incomparable pairs. For example, consider the directed cycle graphs C⃗n, with vertices 1, 2, …, n and edges from i to i + 1 (for i = 1, 2, …, n − 1) and from n to 1.
There is a homomorphism from C⃗n to C⃗k (n, k ≥ 3) if and only if n is a multiple of k.
In particular, directed cycle graphs C⃗n with n prime are all incomparable.
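The divisibility criterion can be confirmed by brute force for small directed cycles (illustrative only; the search over all maps is exponential, so the range is kept tiny):

    from itertools import product

    def directed_cycle(n):
        return [(i, (i + 1) % n) for i in range(n)]

    def has_directed_homomorphism(n, arcs_g, k, arcs_h):
        # try every map from the n vertices into the k vertices and test every arc
        ah = set(arcs_h)
        return any(all((f[u], f[v]) in ah for u, v in arcs_g)
                   for f in product(range(k), repeat=n))

    for n in range(3, 7):
        for k in range(3, 7):
            assert has_directed_homomorphism(n, directed_cycle(n),
                                             k, directed_cycle(k)) == (n % k == 0)
    print("criterion confirmed for 3 <= n, k <= 6")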
Computational complexity
In the graph homomorphism problem, an instance is a pair of graphs (G,H) and a solution is a homomorphism from G to H. The general decision problem, asking whether there is any solution, is NP-complete. However, limiting allowed instances gives rise to a variety of different problems, some of which are much easier to solve. Methods that apply when restricting the left side G are very different from those for the right side H, but in each case a dichotomy (a sharp boundary between easy and hard cases) is known or conjectured.
Homomorphisms to a fixed graph
The homomorphism problem with a fixed graph H on the right side of each instance is also called the H-coloring problem. When H is the complete graph Kk, this is the graph k-coloring problem, which is solvable in polynomial time for k = 0, 1, 2, and NP-complete otherwise.
In particular, K2-colorability of a graph G is equivalent to G being bipartite, which can be tested in linear time.
More generally, whenever H is a bipartite graph, H-colorability is equivalent to K2-colorability (or K0 / K1-colorability when H is empty/edgeless), hence equally easy to decide.
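The linear-time test mentioned above is a standard breadth-first 2-coloring; a sketch (function name and edge-list encoding are illustrative) is:

    from collections import deque

    def is_bipartite(n, edges):
        adj = [[] for _ in range(n)]
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        side = [None] * n
        for s in range(n):              # outer loop covers disconnected graphs
            if side[s] is not None:
                continue
            side[s] = 0
            queue = deque([s])
            while queue:
                u = queue.popleft()
                for w in adj[u]:
                    if side[w] is None:
                        side[w] = 1 - side[u]
                        queue.append(w)
                    elif side[w] == side[u]:
                        return False    # an odd cycle was found
        return True

    print(is_bipartite(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True: C_4 is K_2-colorable
    print(is_bipartite(3, [(0, 1), (1, 2), (2, 0)]))          # False: K_3 is not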
Pavol Hell and Jaroslav Nešetřil proved that, for undirected graphs, no other case is tractable:
Hell–Nešetřil theorem (1990): The H-coloring problem is in P when H is bipartite and NP-complete otherwise.
This is also known as the dichotomy theorem for (undirected) graph homomorphisms, since it divides H-coloring problems into NP-complete or P problems, with no intermediate cases.
For directed graphs, the situation is more complicated and in fact equivalent to the much more general question of characterizing the complexity of constraint satisfaction problems.
It turns out that H-coloring problems for directed graphs are just as general and as diverse as CSPs with any other kinds of constraints. Formally, a (finite) constraint language (or template) Γ is a finite domain and a finite set of relations over this domain. CSP(Γ) is the constraint satisfaction problem where instances are only allowed to use constraints in Γ.
Theorem (Feder, Vardi 1998): For every constraint language Γ, the problem CSP(Γ) is equivalent under polynomial-time reductions to some H-coloring problem, for some directed graph H.
Intuitively, this means that every algorithmic technique or complexity result that applies to H-coloring problems for directed graphs H applies just as well to general CSPs. In particular, one can ask whether the Hell–Nešetřil theorem can be extended to directed graphs. By the above theorem, this is equivalent to the Feder–Vardi conjecture (aka CSP conjecture, dichotomy conjecture) on CSP dichotomy, which states that for every constraint language Γ, CSP(Γ) is NP-complete or in P. This conjecture was proved in 2017 independently by Dmitry Zhuk and Andrei Bulatov, leading to the following corollary:
Corollary (Bulatov 2017; Zhuk 2017): The H-coloring problem on directed graphs, for a fixed H, is either in P or NP-complete.
Homomorphisms from a fixed family of graphs
The homomorphism problem with a single fixed graph G on the left side of input instances can be solved by brute force in time |V(H)|^O(|V(G)|), so polynomial in the size of the input graph H. In other words, the problem is trivially in P for graphs G of bounded size. The interesting question is then what other properties of G, besides size, make polynomial algorithms possible.
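The brute-force bound can be made concrete in a few lines of Python (illustrative names; enumerating all |V(H)|^|V(G)| maps is of course only feasible when G is very small):

    from itertools import product

    def homomorphism_exists(vertices_g, edges_g, vertices_h, edges_h):
        # try all |V(H)|^|V(G)| maps and test every edge constraint
        eh = {frozenset(e) for e in edges_h}
        for images in product(vertices_h, repeat=len(vertices_g)):
            f = dict(zip(vertices_g, images))
            if all(frozenset((f[u], f[v])) in eh for u, v in edges_g):
                return True
        return False

    c5_v, c5_e = list(range(5)), [(i, (i + 1) % 5) for i in range(5)]
    print(homomorphism_exists(c5_v, c5_e, [0, 1], [(0, 1)]))   # False: C_5 is not bipartite
    print(homomorphism_exists(c5_v, c5_e, list(range(3)),
                              [(0, 1), (0, 2), (1, 2)]))       # True: C_5 is 3-colorable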
The crucial property turns out to be treewidth, a measure of how tree-like the graph is. For a graph G of treewidth at most k and a graph H, the homomorphism problem can be solved in time |V(H)|^O(k) with a standard dynamic programming approach. In fact, it is enough to assume that the core of G has treewidth at most k. This holds even if the core is not known.
The exponent in the |V(H)|^O(k)-time algorithm cannot be lowered significantly: no algorithm with running time |V(H)|^o(tw(G)/log tw(G)) exists, assuming the exponential time hypothesis (ETH), even if the inputs are restricted to any class of graphs of unbounded treewidth.
The ETH is an unproven assumption similar to P ≠ NP, but stronger.
Under the same assumption, there are also essentially no other properties that can be used to get polynomial time algorithms. This is formalized as follows:
Theorem (Grohe): For a computable class of graphs 𝒢, the homomorphism problem for instances (G, H) with G ∈ 𝒢 is in P if and only if graphs in 𝒢 have cores of bounded treewidth (assuming ETH).
One can ask whether the problem is at least solvable in a time arbitrarily highly dependent on G, but with a fixed polynomial dependency on the size of H.
The answer is again positive if we limit G to a class of graphs with cores of bounded treewidth, and negative for every other class.
In the language of parameterized complexity, this formally states that the homomorphism problem for instances with G drawn from a fixed class 𝒢, parameterized by the size (number of edges) of G, exhibits a dichotomy. It is fixed-parameter tractable if graphs in 𝒢 have cores of bounded treewidth, and W[1]-complete otherwise.
The same statements hold more generally for constraint satisfaction problems (or for relational structures, in other words). The only assumption needed is that constraints can involve only a bounded number of variables (all relations are of some bounded arity, 2 in the case of graphs). The relevant parameter is then the treewidth of the primal constraint graph.
See also
Glossary of graph theory terms
Homomorphism, for the same notion on different algebraic structures
Graph rewriting
Median graphs, definable as the retracts of hypercubes
Sidorenko's conjecture
Notes
References
General books and expositions
In constraint satisfaction and universal algebra
In lattice theory and category theory
(AMSI Vacation Research Scholarships , student research report supervised by Brian Davey and Jane Pitkethly, La Trobe University).
Graph theory
Morphisms
NP-complete problems | Graph homomorphism | Mathematics | 5,099 |
70,615,290 | https://en.wikipedia.org/wiki/Vivo%20X%20Fold | Vivo X Fold is an Android-based foldable smartphone developed and manufactured by Vivo. The phone was announced on 11 April 2022. Vivo subsequently announced the Vivo X Fold+, an improved version of the X Fold with a more powerful GPU, a bigger battery, faster charging, and a new red color option.
References
Foldable smartphones
Android (operating system) devices
Mobile phones introduced in 2022
Mobile phones with 8K video recording | Vivo X Fold | Technology | 86 |
2,924,177 | https://en.wikipedia.org/wiki/Ridge%20%28meteorology%29 | In meteorology, a ridge or barometric ridge is an elongated area of relatively high atmospheric pressure compared to the surrounding environment, without being a closed circulation. It is associated with an area of maximum anticyclonic curvature of wind flow. The ridge originates in the center of an anticyclone and is sandwiched between two low-pressure areas, and the locus of the maximum curvature is called the ridge line. This phenomenon is the opposite of a trough.
Description
Ridges can be represented in two ways:
On surface weather maps, the pressure isobars form contours where the maximum pressure is found along the axis of the ridge.
In upper-air maps, geopotential height isohypses form similar contours where the maximum defines the ridge.
Related weather
Given the direction of the winds around an anticyclonic circulation and the fact that weather systems move from west to east:
ahead of an upper ridge, the airflow comes from the polar regions and brings cold air;
behind the upper ridge line, the flow comes from the direction of the equator and brings mild air.
Surface ridges, just like highs, generate fair weather because they develop under wind convergence in the negative vorticity advection zone ahead of the upper-level ridge. The vertical downward air motion then gives a divergence of the winds near the surface. The subsidence of the air causes a warming in the column compared to the previous environment and therefore a drying of it because its relative humidity decreases, which has the effect of clearing the sky.
Subtropical ridge
An important atmospheric ridge is the subtropical ridge. It is a series of ridges near the horse latitudes characterized by mostly calm winds, which act to reduce air quality under its axis by causing fog overnight and haze during daylight hours as a result of the stable atmosphere found near its location. The air descending from the upper troposphere flows out from its center at surface level toward the upper and lower latitudes of each hemisphere, creating both the trade winds and the westerlies. It helps steer tropical cyclones and the monsoon.
Ridge blocking
Blocks in meteorology are large-scale patterns in the atmospheric pressure field that are nearly stationary, effectively "blocking" or redirecting migratory cyclones. These blocks can remain in place for several days or even weeks, causing the areas affected by them to have the same kind of weather for an extended period of time (e.g. precipitation for some areas, clear skies for others). Upper ridges are often associated with such blocks, particularly in Omega blocks.
References
Atmospheric dynamics
Atmospheric circulation
Synoptic meteorology and weather | Ridge (meteorology) | Chemistry | 524 |
1,695,867 | https://en.wikipedia.org/wiki/Dot%20crawl | Dot crawl (also known as chroma crawl or cross-luma) is a visual defect of color analog video standards when signals are transmitted as composite video, as in terrestrial broadcast television. It consists of moving checkerboard patterns which appear along horizontal color transitions (vertical edges). It results from intermodulation or crosstalk between chrominance and luminance components of the signal, which are imperfectly multiplexed in the frequency domain.
The term is more associated with the NTSC analog color TV system, but is also present in PAL (see Chroma dots). Although the interference patterns are slightly different depending on the system used, they have the same cause and the same general principles apply. A related effect, color bleed or rainbow artifacts, is discussed below.
Description
Intermodulation or crosstalk problems take two forms:
chrominance interference in luminance (chrominance being interpreted as luminance),
luminance interference in chrominance.
Dot crawl is most visible when the chrominance is transmitted with a high bandwidth, so that its spectrum reaches well into the band of frequencies used by the luminance signal in the composite video signal. This causes high-frequency chrominance detail at color transitions to be interpreted as luminance detail.
Some (mostly older) video-game consoles and home computers use nonstandard colorburst phases, thereby producing dot crawl that appears quite different from that seen in broadcast NTSC or PAL. The effect is more noticeable on these cases due to the saturated colors and small pixel scale details normally present on computer graphics.
The opposite problem, luminance interference in chroma, is the appearance of a colored noise in image areas with high levels of detail. This results from high-frequency luminance detail crossing into the frequencies used by the chrominance channel and producing false coloration, known as color bleed or rainbow artifacts. Bleed can also make narrowly spaced text difficult to read. Some computers, such as the Apple II, or IBM PC compatibles with CGA graphics utilized this to generate color (see Composite artifact colors).
History
Dot crawl has long been recognized as a problem by professionals since the creation of composite video. When the NTSC standard was adopted in the 1950s, TV engineers realized that it should theoretically be possible to design a filter to properly separate the luminance and chroma signals. However, the vacuum tube-based electronics of the time did not permit any cost-effective method of implementing a comb filter. Thus, the early color TVs used only notch filters, which cut the luminance off at 3.5 MHz. This effectively reduced the luminance bandwidth (normally 4 MHz) to that of the chroma, causing considerable color bleed.
Comb filters
By the 1970s, TVs had begun using solid-state electronics and the first comb filters appeared. This coincides with the advent of LaserDiscs and other high quality media that make the problem noticeable for the public. However, comb filters were expensive and only high-end TVs used them, while most color sets continued to use notch filters.
By the 1990s, a further development took place with the advent of three-line digital comb filters. This type of filter uses a computer to analyze the NTSC signal three scan lines at a time and determine the best place to put the chroma and luminance. During this period, digital filters became standard in high-end TVs while the older analog filter began appearing in cheaper models (although notch filters were still widely used). Modern HDTVs and capture devices do a much better job at eliminating dot crawl than traditional CRT TVs and earlier LCD TVs.
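The principle underlying all of these filters can be illustrated with a deliberately simplified, idealized sketch (Python with NumPy; the sample rate, function names, and flat-luminance signal are illustrative assumptions, not a model of a real receiver). Because the NTSC color subcarrier completes 227.5 cycles per scan line, its phase inverts on successive lines, so summing two adjacent lines cancels the chroma while differencing them cancels a slowly varying luma. Real comb filters must additionally cope with vertical detail, which is exactly what the adaptive three-line designs address.

    import numpy as np

    samples_per_line = 910            # illustrative sampling of one scan line
    cycles_per_line = 227.5           # NTSC subcarrier cycles per line
    t = np.arange(samples_per_line) / samples_per_line

    def composite_line(line_number, luma, chroma_amplitude):
        # idealized composite signal: flat luma plus a pure colour subcarrier
        phase = 2 * np.pi * cycles_per_line * (t + line_number)
        return luma + chroma_amplitude * np.sin(phase)

    a = composite_line(0, luma=0.5, chroma_amplitude=0.2)
    b = composite_line(1, luma=0.5, chroma_amplitude=0.2)
    luma_estimate = (a + b) / 2       # subcarrier cancels: opposite phase on adjacent lines
    chroma_estimate = (a - b) / 2     # flat luma cancels, leaving the subcarrier
    print(abs(luma_estimate - 0.5).max() < 1e-6, abs(chroma_estimate).max() > 0.19)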
However, no comb filter can totally eliminate NTSC artifacts and the only complete solutions to dot crawl are not to use NTSC or PAL composite video, maintaining the signals separately by using S-Video or component video connections instead, or encoding the chrominance signal differently as in SECAM or any modern digital video standard as long as the source video has never been processed using any video system vulnerable to dot crawl.
Other solutions
Some consoles like the PlayStation 3 have a built-in filter that reduces dot crawl and "rainbow effect" almost completely. So it is technically possible, without the use of a built-in TV filter, to remove this negative effect in the composite video signal in both NTSC and PAL signals.
Likewise, Colour Plus, a technique that is part of the PALplus standard introduced in 1993, gives a cleaner luminance/chrominance separation in the PALplus receiver. It is used with signals containing high horizontal luminance frequencies (3 MHz) that share the spectrum with the chrominance signals. Colour pictures on both standard and PALplus receivers are enhanced.
Color recovery
Monochrome film recordings of color television programs may exhibit dot crawl, and starting in 2008 this has been used to recover the original color information in a process called color recovery.
References
See also
Composite artifact colors
Chroma dots
Hanover bars
Television technology
Television terminology | Dot crawl | Technology | 1,034 |
47,923,207 | https://en.wikipedia.org/wiki/Penicillium%20variabile | Penicillium variabile is an anamorph species of fungus in the genus Penicillium which has been isolated from permafrost deposits. Penicillium variabile produces rugulovasine A and rugulovasine B. This species occurs on wheat, flour, maize, rice, and barley, and it is also very common in indoor environments.
A study conducted at the University of Newcastle and published in the Journal of Parkinson's Disease found that Penicillium variabile P16 is a marker of the progression of Parkinson's disease (along with loss of telomere length and P21).
References
Further reading
variabile
Fungi described in 1912
Fungus species | Penicillium variabile | Biology | 148 |
43,388,949 | https://en.wikipedia.org/wiki/Tawkers | Tawkers is a SaaS application that allows publishers to create and distribute branded text message conversations live or after the fact. The application was created for various devices, but was originally launched as an iPhone app and has since become a set of content tools for brand marketers and publishers. Tawkers is owned and managed by Tawkers Inc., which was founded in 2011.
The company launched a beta of Tawkers as a web application in 2013, before releasing the official version in March 2014. Gizmodo Brazil ranked Tawkers in their list of top iPhone Apps for that particular month.
In 2016, Tawkers began work on a SaaS product, creating an enterprise solution for brands, agencies and publishers to create and manage public messaging content between influencers. The content is then embedded across the client's media. This new offering followed a partnership with NBCUniversal and 360i in which the companies utilized the technology to create content for their owned, earned and paid media channels, as well as within mobile applications. The first campaign was with the Bravo TV show Odd Mom Out.
Background
Blake Ian is the current CEO and co-founded the company in 2011 in New York City after having the idea of sharing text conversations while he was chatting with a friend about a film over instant messaging. In late 2011, after developing the idea, the company received $360,000 in seed funding.
Following the development of a beta in 2013, Ian stated to TechCrunch that he believed that celebrities would use the Tawkers platform as a way to engage with fans and also engage conversation on given subjects.
The app did just that in the early stages of the beta, with Lee Camp, Ron "Bumblefoot" Thal, Colin Quinn, singer Eleanor Goldfield, Deepak Chopra, and also Howard Rheingold using the app during the early parts of its existence.
In March 2014, Gizmodo Brazil ranked Tawkers in their list of top Apps.
Mechanics
According to their website, the application aims to bridge the gap between private text messaging and public expression, providing marketers and publishers a solution for creating mass amounts of high engaging content for no production cost and minor effort from influential brand ambassadors.
The SaaS platform works by allowing marketing professionals and content creators to create publicly accessible chats. These chats are then embedded across the internet within owned, earned and paid media channels as well as mobile applications. In 2014, Ian commented on the desire for people to share their own text conversations and read conversations by others, particularly celebrities. The example he gave was when Macklemore texted Kendrick Lamar, after Macklemore's Grammy success and then tweeted an image of the conversation which went viral.
References
Online companies of the United States
Social networking services
IOS software
Instant messaging clients | Tawkers | Technology | 569 |
6,324,034 | https://en.wikipedia.org/wiki/Uglies | Uglies is a 2005 dystopian novel by Scott Westerfeld. It is set in a futuristic post-scarcity world in which everyone is considered an "Ugly" until they are then turned "Pretty" by extreme cosmetic surgery when they reach the age of 16. It tells the story of a teenager, Tally Youngblood, who rebels against society's enforced conformity after her friends Shay and David show her the downsides to becoming a "Pretty".
Written for young adults, Uglies deals with themes of change, both emotional and physical. The book is the first installment in what was originally a trilogy, the Uglies series, which also includes the books Pretties, Specials, and Extras. In 2018, four new installments were announced, collectively titled the Impostors Series.
Plot
Three hundred years in the future, the government provides everything, including plastic surgery operations. Everyone on their 16th birthday receives the "Pretty" operation, transforming them into society's standard of beauty. After the operation, new Pretties cross the river that divides the city and lead a new life with no responsibilities or obligations. There are two other operations available: one to transform new Pretties into "Middle-Pretties" (adults with a job) and another to transform Middle-Pretties into "Crumblies" (late Pretties who live in the suburbs).
Former cities have decayed after bacteria infected the world's petroleum and made it unstable. The old society, so dependent on oil, fell apart when cars and oil fields exploded, and food could no longer be transported. People who lived before the catastrophe are called the "Rusties."
Tally Youngblood is almost sixteen. Like every other Ugly, she awaits the operation with great anticipation. Her best friend, Peris, has already had the operation, and motivated by her desire to see him, Tally sneaks across the river to New Pretty Town. There, she meets Shay, another Ugly. They become friends, and Shay teaches Tally how to ride a hoverboard. Shay also mentions rebelling against the operation. At first, Tally ignores the idea but is forced to deal with it when Shay runs away a few days before their shared sixteenth birthday, leaving behind cryptic directions to her destination, a "renegade settlement" called the Smoke, where city runaways go to escape the operation.
On the day of Tally's operation, she is taken to Special Circumstances, a division that is likened to "gremlins" and "[blamed] when anything weird happens." Dr. Cable, a woman who is described as "a cruel pretty," is the head of Special Circumstances. She gives Tally an ultimatum to help locate Shay and the Smoke or never become a pretty. Tally cooperates, and Dr. Cable gives her a hoverboard and all the needed supplies to survive in the wild, along with a heart locket that contains a tracking device. Once activated with Tally's eye, it will show the location of the Smoke to Special Circumstances. By following Shay's clues, Tally sets off to find her friend.
When Tally arrives at the Smoke, she finds Shay, her friend David, and an entire community of runaway Uglies. She is reluctant to activate the pendant, and it eventually becomes clear that David is in love with her. David takes her to meet his parents, Maddy and Az, who are the original runaways from the city. They explain how the operation does more than "cosmetic nipping and tucking." It also causes lesions in the brain that make the people placid, or "pretty-minded." Horrified, Tally decides to keep the Smoke secret and throws the locket into a fire to destroy it. However, the flames' heat causes the tracker to activate and give away the Smoke's location.
The following morning, Special Circumstances arrive at the camp, and Tally tries to escape. She fails and is caught and taken to a rabbit pen in which other caught Smokies are kept, tied up. Eye scans are taken of all the captured Smokies and identify from which city they fled. Tally is then taken to Dr. Cable, who explains how they found the Smoke. Because of how long it took to activate the pendant, Dr. Cable suspects that Tally betrayed her but activated it by accident. Dr. Cable tests Tally by ordering her to retrieve the locket, which should be intact. Tally escapes on a hoverboard. After a long and stressful chase, she hides in a cave in which they cannot track her heat signature. There, she finds David also hiding, and together, they begin to plan a rescue.
Tally and David return to his house, where they find evidence that Special Circumstances took Maddy and Az. David leads Tally to a secret stash of survival equipment, where they find everything they need and load them onto the four hoverboards stashed there. As Tally and David travel back to the city to free their friends, they fall in love. Arriving at the Special Circumstances complex, they discover that Shay has already been "turned" and is now a Pretty. After meeting Dr. Cable, David knocks her out and takes her work tablet, which contains all the necessary information to reverse the brain lesions created by the Pretty operation. Tally and David then free all the Smokies held in the complex. As they escape the complex, Maddy tells David that his father, Az, is dead.
Once everyone is safe at the Rusty Ruins, Maddy begins working on a cure by using Dr. Cable's tablet. She then offers it to Shay, who refuses and does not want to become a "vegetable." Since Tally feels responsible for her betrayal, she decides to become a Pretty and take the cure as a "willing subject." To convince David to let her go back to the city, she tells him about her involvement with Special Circumstances and searching for the Smoke to betray them. While David is absorbing what Tally admitted, Maddy advises Tally to go back with Shay before she changes her mind. Once there, Tally announces to a Middle Pretty, "I'm Tally Youngblood. Make me pretty," the final phrase of the novel.
Characters
Uglies
Tally Youngblood is the main character of the story. She is clever and loves tricks. Her Ugly nickname is Squint. As the story progresses, she begins to stray from the rules of her city and her assignment. She falls in love with David at the Smoke. Together, Tally and David rescue the Smokies after the Smoke is captured by Special Circumstances. In the end, she gives herself up to be Pretty and helps Shay.
Shay is Tally's new friend in Uglyville. She is an Ugly nicknamed Skinny. They meet while Tally is fleeing from New Pretty City. Shay teaches Tally how to hoverboard and more about the outside world and the Rusties. Shay prefers to refer to Tally by her name, but they occasionally call each other by their Ugly nicknames to provoke each other. After a fight, Shay decides to "grow up" and go to the Smoke. Shay grows jealous of Tally's presence in the Smoke, especially of her relationship with David. Shay realizes Tally's ultimate betrayal but no longer cares once she is made Pretty.
Smokies
David, the son of the founders of the Smoke, was not born in any city. Shay has an interest in David, but he does not reciprocate her feelings. After Tally arrives at the Smoke, his interest in her makes Shay jealous. David helps change Tally's feeling towards the cities. He wears hand-sewn clothing, which is made from animal skins, and has survival skills that he passes on to Tally.
Maddy is David's mother. She tells Tally about the brain lesions caused by the Pretty surgery, and later develops a cure for the lesions. She is a founder of the Smoke.
Az is David's father and dies in an operation. As part of the Pretty Special Committee, he discovered the brain lesions that the Pretty surgery causes and suspects the lesions are intentional. He is a founder of the Smoke.
Croy is another Smokie. He is originally from Uglyville and knows Shay. He is originally suspicious of Tally but grows to trust her.
The Boss is called the "Boss" but is not in charge of the leaderless Smoke; however, he is in charge of the library. He dies when the Specials arrived.
Specials
Dr. Cable is the head of Special Circumstances and denies Tally the operation until she finds Shay. She sends Specials to destroy the Smoke. She is described as having an aquiline nose, non-reflective gray eyes, a razor-like voice, and sharp teeth.
Pretties
Peris, Tally's best friend, is three months older than Tally and thus has become a Pretty. He helps Tally decide to betray Shay. His Ugly nickname was Nose.
Ellie Youngblood is Tally's mother who helps Tally decide to turn in Shay. She is a Middle Pretty.
Sol Youngblood is Tally's father who helps Tally decide to turn in Shay. He is a Middle Pretty.
Major motifs
Identity
According to critics, Uglies contains themes of identity, particularly regarding teenagers. Phillip Gough said the government of Tally's city, which controls what happens within the operation, "removes responsibility for identity," creating sameness and uniformity. By placing heavy emphasis on the role of individualism, the novel shows the importance of teen's self-concept. Because identity is formed by "displacement," and all citizens are carefully sheltered, there is no chance for them to branch out into independence. "Physical identity is determined by committees," notes Gough in his essay discussing Westerfeld's novel. The lack of choice causes all "markers of physical identity" to be destroyed by their government.
Beauty
Kristi N. Scott and M. Heather Dragoo note that another recurring theme in the "image-obsessed society" is beauty, and its recurring relationship with individuality. Gough agreed and commented that "when everyone is equal, beauty loses its meaning." Beauty went hand in hand with identity: Uglies were taught to think of their bodies and faces as "temporary," which would be replaced later with cosmetic surgery. A strong line is drawn to connect features with personality, and one critic stated that the characters develop "ugly" and "pretty" personalities with each stage of their operations.
Dystopian society
A "utopia resting on ruthless suppression of individual freedom" was Amanda Craig of The Times's description of Tally's city. Many critics identified the trend of a controlling government in the novel. People in the protagonist's world are "programmed and designed by the Pretty committee, " with no say-so in their operation, and identity is placed firmly "in the hands of the state". Dragoo and Scott point out how segregated the city is, with Pretties, Uglies, Middlies, and Crumblies neatly divided into different sections. Many reviewers have commented on the way in which the city manipulates its inhabitants, including the supposedly rebellious uglies, who are nothing more than "docile bodies". Bedies, the dystopian society depicted by Westerfeld includes a particularly common trope in the genre: the duality of spaces, the metropolis representing the totalitarian civilization and nature as a field for freedom.
Humanity
Various critics also found a theme of humanity within Uglies. Phillip Gough noted that Pretties and Specials (those who worked for Special Circumstances) are "posthuman" because of their operations. Others, including Scott and Dragoo, argued against this by claiming "the human body provides an artistic and political canvas for intentional manipulation" and that the physical transformation can be an "outlet for humanity." The novel Uglies seems to take no definite stance on it, but clearer points are shown in Pretties and Specials, the following books in the trilogy.
Background
When asked how he came up with the idea of Uglies, Westerfeld said that the inspiration came from an email sent by a friend who had recently moved from New York to Los Angeles who experienced culture shock after an appointment with a local dentist who "asked him to consider getting cosmetic surgery." He is the son of a computer programmer for UNIVAC, which meant that he grew up familiar with the cutting edge of 1960s technology. Amanda Craig said that "it is his prescient perception of how such inventions will lead to absolute loss of privacy which has elicited as much fan-mail as the issue of how looks dominate our lives."
The book shares many themes with the 1964 The Twilight Zone episode "Number 12 Looks Just Like You". The author of the books noted that he saw the episode in his childhood but had forgotten the details.
In the dedication page for Uglies, Westerfeld says: "This novel was shaped by a series of e-mail exchanges between me and Ted Chiang about his story "Liking What You See: A Documentary". His input on the manuscript was also invaluable." In another interview, Westerfeld says that the short story is about a new technology that enables individuals to turn off their ability to see beauty so they can focus on the deeper and more important parts of another individual. In a 2012 interview on Bookyurt, Westerfeld explained that his point in writing the book was not to make a big commentary on the issues with beauty but to make people aware of the culture of retouching that is developing in the world and to be aware of our own ideas about beauty and our need to think for ourselves.
Reception
The novel has received mostly positive reviews. The Baroque Body praised the novel as having "creative slang, unique technical gadgets, and defining characteristics of personhood." Cory Doctorow complimented its "perfect parables of adolescent life" and stated that it is "fine science fiction for youth." Jennifer Mattson claimed it to be "ingenious." Reed Business Information praised the "convincing plot" and noted that it is "highly readable."
However, Publishers Weekly commented that Tally was a "rather passive protagonist," and the Times complained that "Tally herself is a bit too vague as a character." The critic Jennifer Mattson noted that the brisk pace of the novel as being "bad for convincing relationships."
The novel sparked discussion over the use of plastic surgery to improve one's looks. The author said he has "received many letters from girls who have decided against having surgery since reading Uglies," and others, sparked by Uglies, have started to ponder the ethics of changing your body's appearance. Westerfeld has theorized that "having extreme cosmetic surgery will be like buying a $1,000 Gucci bag, an indication that you are a member of the privileged class." Critics echo his opinion. However, Westerfeld has also stated he "wouldn't hesitate [to use plastic surgery] if he had a kid with port-wine stain. We have all been altering our appearances ever since clothing was invented." Other critics have stated that while altering one's appearance with plastic surgery can be ethically debatable, the benefits of it to people who have need for it are tremendous.
There is also some moderate debate sparked by Uglies over the issue of monitoring people. The state has started to track teens through their cell phones and on occasion through dental implants. Westerfeld feels this will "result in a total loss of privacy," however, others feel that this technology is necessary to "properly supervise" people.
Adaptations
Film
20th Century Fox and producer John Davis (Eragon) bought the film rights to the novel in 2006. The movie was reportedly supposed to be released in 2011, but was delayed multiple times and ultimately entered development hell.
In September 2020, it was announced that Netflix had acquired the rights for a film based on the novel. Joey King, who had previously worked with Netflix on The Kissing Booth, served as the movie's executive producer in addition to the lead actress, where she portrayed the protagonist, Tally Youngblood. American filmmaker McG served as its director.
Upon its release on September 13, 2024, the film was panned by critics; review aggregator Rotten Tomatoes listed 15% of 46 available reviews as positive, while Metacritic, which uses a weighted average, assigned it a score of 34 out of 100, indicating “generally unfavorable” reception.
Graphic novel
Steven Cummings illustrated Uglies in a manga-style graphic novel written by Westerfeld and Devin K. Grayson called Shay's Story, which tells the story from Shay's perspective. It was published in a black-and-white 5¾ x 8¼ inches format by Del Rey Manga in 2012.
Publication history
The novel Uglies was first published in 2005, with a cover photograph by Carissa "Car" Pelleteri. It was later re-released in 2011 with a new cover. It is the first part of a trilogy, with the sequels Pretties and Specials and the follow-up book Extras. The trilogy featured on the New York Times bestseller list for a significant amount of time. An audio recording of the book was also published in 2006 and made available on both CD and cassette.
Bibliography
References
External links
Book excerpt
Official Uglies series downloadables site
Author page at Pulse Blogfest
Uglies on User Based Casting
2005 science fiction novels
2005 American novels
American post-apocalyptic novels
Novels set in the 24th century
American science fiction novels
American young adult novels
Children's science fiction novels
Dystopian novels
Novels by Scott Westerfeld
Fiction about nanotechnology
Books about petroleum
Novels about mass surveillance
Body image in popular culture | Uglies | Chemistry,Materials_science | 3,635 |
45,092,410 | https://en.wikipedia.org/wiki/Phyllis%20Chinn | Phyllis Zweig Chinn (née Zweig; born September 26, 1941) is an American mathematician who holds a professorship in mathematics, women's studies, and teaching preparation at Cal Poly Humboldt in California. Her publications concern graph theory, mathematics education, and the history of women in mathematics.
Education and career
Chinn was born in Rochester, New York and graduated in 1962 from Brandeis University. She earned her Ph.D. in 1969 from the University of California, Santa Barbara with a dissertation on graph isomorphism supervised by Paul Kelly.
She taught at Towson State College, a training school for teachers in Maryland, from 1969 to 1975, and earned tenure there in 1974, before moving to Cal Poly Humboldt. At the time she joined the Cal Poly Humboldt faculty, she was the first female mathematics professor there; the only other female professor in the sciences was a biologist.
In 1997 she became chair of the mathematics department at Cal Poly Humboldt.
Contributions
Chinn has written highly cited work on graph bandwidth and on dominating sets.
Chinn is also an avid juggler, and founded a juggling club at Cal Poly Humboldt in the 1980s.
Recognition
Cal Poly Humboldt named Chinn as Outstanding Professor for 1988–1989.
She was the 2010 winner of the Louise Hay Award for Contributions to Mathematics Education, given by the Association for Women in Mathematics, for her work in improving mathematics education at the middle and high school levels and encouraging young women to become mathematicians.
References
External links
Home page
1941 births
Living people
20th-century American mathematicians
21st-century American mathematicians
21st-century American women mathematicians
Graph theorists
American historians of mathematics
American mathematics educators
Brandeis University alumni
Towson University faculty
California State Polytechnic University, Humboldt faculty
20th-century American women mathematicians | Phyllis Chinn | Mathematics | 351 |
28,295,974 | https://en.wikipedia.org/wiki/Integrity%20Management%20Plan | Integrity Management Plan (part of an asset integrity management system) is a documented and systematic approach to ensure the long-term integrity of an asset or assets.
Integrity management planning is a process for assessing and mitigating risks in an effort to reduce both the likelihood and consequences of incidents.
Asset integrity plans are maintained and reviewed regularly so that they:
Optimize operational and capital expenditures
Ensure adoption of best-in-class practices
Assist management of risk
Increase shareholder, senior management, regulator and public confidence
Identify risks, conducting assessments, taking preventive actions and implementing mitigate measures
An effective IMP should contain:
A process outline – details the process envelope (temperature pressure and velocity) in which the system operates and any critical parameters that must be followed. Also states process chemistry and contaminants that are encountered
A threat identification list- Information about the asset segments are evaluated to identify the threats of concerns to the asset and to assess risk
A Risk Analysis – a systematic process, in which potential hazards from facility operation are identified, and the likelihood and consequences of potential adverse events are estimated
Detailed Regulatory Requirements – details the applicable regulatory standards to be met and the methods used to comply with those standards
A Responsibility assignment matrix – details the persons responsible, accountable for any tasks to comply with the requirements of the plan. It also states those who should be communicated to and Informed
Corrosion circuits – showing the system broken down into corrosion circuits shows damage mechanisms, materials of construction and process chemistry
Damage frequency statistics and rates of decay
Critical operating parameters for process safety
Mitigation techniques, frequency, condition monitoring, inspections and process monitoring requirements
Historic review of asset condition to-date
Knowledge gap analysis – missing information, manufacturer's data report (MDR), review of drawings, etc.
Previous failure analysis review
Design drawing review, materials of construction (initial)
Management of change review, alterations and repairs carried out
Remediation – repairs techniques considered
Also:
Record keeping
Performance plan
Communication plan
Regular Interaction
Management of change
Prevention and Mitigation measures
How to conduct assessments / frequency / audit of system
The Integrity Management Plan (IMP) forms a part of the overall asset integrity management system and is audited as per the Integrity Engineering Audit (IEA).
Particularly since the Deepwater Horizon oil spill, the role of asset integrity has never been more critical to the global oil and gas business. Not only can a well-managed asset integrity plan help operators identify and reduce safety risks before they escalate, but focusing on asset integrity can also play a major role in both achieving operational excellence and extending the life of ageing assets.
See also
Asset integrity management systems
References
DEPARTMENT OF TRANSPORTATION, Pipeline and Hazardous Materials Safety Administration, 49 CFR Parts 192 and 195 [Docket No. RSPA–04–16855; Amdt. 192–101 and 195–85], Federal Register, Vol. 70, No. 205, October 25, 2005, Rules and Regulations. https://web.archive.org/web/20120228214259/http://www.wutc.wa.gov/webdocs.nsf/b8da29aede8fdd67882571430005a9c1/b9bc43effc07ff6388256fe10062a0a2/$FILE/70%20FR%2061571.pdf
External links
Oil and Gas Fundamentals The Fundamentals of Asset Integrity
Maintenance | Integrity Management Plan | Engineering | 702 |
35,845,616 | https://en.wikipedia.org/wiki/Bicyclo%282.2.1%29heptane-2-carbonitrile | Bicyclo[2.2.1]heptane-2-carbonitrile is a chemical which is classified as an extremely hazardous substance in the United States. It is defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities.
See also
BIDN
Cloflubicyne
References
Nitriles
Pesticides
Norbornanes | Bicyclo(2.2.1)heptane-2-carbonitrile | Chemistry,Biology,Environmental_science | 113 |
74,872,418 | https://en.wikipedia.org/wiki/Ying%20Sun%20%28environmental%20scientist%29 | Ying Sun is a Chinese-American agricultural scientist and environmental scientist whose research combines space-based sensing and land surface modeling to study the interactions between climate and agricultural ecosystems. She is an associate professor in the School of Integrative Plant Science Soil and Crop Sciences at Cornell University.
Sun is originally from Yangquan. She is a 2008 graduate of Beijing Normal University, and completed a doctorate at the University of Texas at Austin in 2013. Her doctoral dissertation, Role of Mesophyll CO2 Diffusion and Large-Scale Disturbances in the Interactions between Climate and Carbon Cycles, was supervised by Robert E. Dickinson. She was a postdoctoral researcher, jointly between the University of Texas and with Christian Frankenberg at the Jet Propulsion Laboratory, before taking her present faculty position at Cornell in 2016.
In 2024, Sun's research group developed a remote sensing method to assess and predict crop yield by measuring the solar-induced chlorophyll fluorescence (SIF). This approach using satellite data is cost-effective and has the potential to inform policy making, crop insurance, and poverty forecasting.
References
External links
Sun Lab at Cornell
Year of birth missing (living people)
Living people
People from Yangquan
Chinese climatologists
Chinese women scientists
American climatologists
American women scientists
Women climatologists
Environmental scientists
Beijing Normal University alumni
University of Texas at Austin alumni
Cornell University faculty | Ying Sun (environmental scientist) | Environmental_science | 280 |