| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
14,791,449 | https://en.wikipedia.org/wiki/Marine%20Corps%20Air%20Station%20Tustin | Marine Corps Air Station Tustin (IATA: NTK, ICAO: KNTK, FAA LID: NTK) is a former United States Navy and United States Marine Corps air station, located in Tustin, California.
History
The Air Station was established in 1942 by the United States Navy as a lighter-than-air base, officially known as Naval Air Station Santa Ana. The base was designed for blimp operations in support of the Navy's coastal patrol efforts during World War II. It was commissioned on 1 October 1942 by its commandant, Capt. Howard N. Coulter. As of July 1947, the facility, under command of Capt. Benjamin May, had personnel consisting of 100 officers, 500 enlisted men and 180 civilian employees. NAS Santa Ana was decommissioned in 1949. In 1951, the facility was reactivated as Marine Corps Air Facility Santa Ana to support the Korean War. It was the country's first air facility developed solely for helicopter operations. It was named "Marine Corps Air Station, Santa Ana" in 1966 and renamed Marine Corps Air Station Tustin in 1979.
During the Vietnam War, the base was a center for on-going testing of radar installations (including the Sperry TPS-34) which were erected, tested, disassembled and shipped to South Vietnam. It also was a training facility for helicopter pilots.
By the early 1990s, MCAS Tustin was a major center for Marine Corps helicopter aviation and radar on the Pacific Coast. Its primary purpose was to provide support services and materiel for the 3rd Marine Aircraft Wing and for other units utilizing the base. About 4,500 residents once lived on the base, and the base employed nearly 5,000 military personnel and civilians. In addition to providing military support, MCAS Tustin leased land to farmers for commercial crop development. For many years, agricultural lands surrounded the facility. However, beginning in the 1980s, residential and light industrial/manufacturing areas developed adjacent to the station.
In 1991 and again in 1993, under the authority of the Base Realignment and Closure Act of 1990, it was announced that MCAS Tustin would be closed. Operational closure of the base occurred in July 1999. A portion of the base's land (now known collectively as "Tustin Legacy") has been conveyed to the City of Tustin, private developers and public institutions for a combination of residential, commercial, educational, and public recreational and open-space uses. The remaining land will be conveyed to other federal agencies, the City of Tustin, and public institutions for the same uses once environmental clean-up operations have been concluded. The site of the base is now the home of the academy of the Orange County Sheriff's Department. Much of the former base has become residential housing.
The base was featured in Visiting... with Huell Howser Episode 1509.
Blimp hangars
In 1993, the blimp hangars were designated a Historic Civil Engineering Landmark by the American Society of Civil Engineers (ASCE). Worldwide Aeros Corp utilized the north hangar to build a prototype cargo airship under contract from the Pentagon and NASA. In October 2013, part of the roof collapsed, damaging the airship prototype. There was interest in making one of the hangars a military museum.
The blimp hangars have been used for location shooting for numerous movies and TV programs, including JAG and The X-Files.
On 7 November 2023 at approximately 12:53 a.m., a three-alarm fire broke out on the roof of the North Hangar. Orange County Fire Authority units responded, adopting a defensive strategy and letting it burn due to the risk of roof collapse. An investigation has been opened into the cause of the fire. Several schools in the Tustin Unified School District were temporarily closed when asbestos was detected near the fire on 9 November. Tustin authorities planned to demolish the remainder of the hangar, and demolition commenced in December. The hangar site is to be completely remediated by the U.S. Navy.
Proposals
Plans are in the works to convert part of the former base into a regional park, originally scheduled to open in 2016. In the summer of 2013, OC Parks was gathering input from the community in order to determine the features and layout of the forthcoming facilities.
Although the preservation of the hangars is one of the greatest concerns raised in surveys taken by OC Parks, the fate of the south hangar is uncertain.
The City of Tustin has met with officials from the Los Angeles Angels of Anaheim, proposing the former air base as a potential site for a new stadium for the team, whose lease with the City of Anaheim's Angel Stadium allowed the team to opt out between 2016 and 2019.
In 2016, Orange County and the South Orange County Community College District arranged for a land swap of ten acres to be used to replace the aging Orange County Animal Shelter in nearby Orange. In July 2016, a ground-breaking ceremony was held.
See also
List of United States Marine Corps installations
Hangar One (Mountain View, California)
Tillamook Air Museum
United States Navy blimp bases
References
Further reading
External links
Former Marine Corps Air Station Tustin from Base Realignment and Closure Project Management Office US Navy
Tustin Hangars | City of Tustin, California
Info from Paul Freeman's Abandoned & Little-Known Airfields
Buildings and structures in Orange County, California
Transportation buildings and structures in Orange County, California
Tustin, California
Tustin
Formerly Used Defense Sites in California
Defunct airports in California
World War II airfields in the United States
History of Orange County, California
Military facilities in Greater Los Angeles
National Register of Historic Places in Orange County, California
World War II on the National Register of Historic Places in California
Historic Civil Engineering Landmarks
1942 establishments in California
1999 disestablishments in California
Military airbases established in 1942
Military installations closed in 1999
Closed installations of the United States Navy | Marine Corps Air Station Tustin | [
"Engineering"
] | 1,197 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
14,793,522 | https://en.wikipedia.org/wiki/Superstatistics | Superstatistics is a branch of statistical mechanics or statistical physics devoted to the study of non-linear and non-equilibrium systems. It is characterized by using the superposition of multiple differing statistical models to achieve the desired non-linearity. In terms of ordinary statistical ideas, this is equivalent to compounding the distributions of random variables and it may be considered a simple case of a doubly stochastic model.
Consider an extended thermodynamical system which is locally in equilibrium and has a Boltzmann distribution, that is, the probability of finding the system in a state with energy $E$ is proportional to $e^{-\beta E}$. Here $\beta$ is the local inverse temperature. A non-equilibrium thermodynamical system is modeled by considering macroscopic fluctuations of the local inverse temperature. These fluctuations happen on time scales which are much larger than the microscopic relaxation times to the Boltzmann distribution. If the fluctuations of $\beta$ are characterized by a distribution $f(\beta)$, the superstatistical Boltzmann factor of the system is given by

$$B(E) = \int_0^\infty f(\beta)\, e^{-\beta E}\, \mathrm{d}\beta.$$
This defines the superstatistical partition function

$$Z = \sum_i B(E_i)$$

for a system that can assume discrete energy states $E_i$. The probability of finding the system in state $i$ is then given by

$$p_i = \frac{B(E_i)}{Z}.$$
Modeling the fluctuations of $\beta$ leads to a description in terms of a "statistics of Boltzmann statistics", or "superstatistics". For example, if $\beta$ follows a Gamma distribution, the resulting superstatistics corresponds to Tsallis statistics. Superstatistics can also lead to other statistics such as power-law distributions or stretched exponentials. Note that the word "super" here is short for "superposition of statistics".
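The Gamma-fluctuation case can be checked numerically. The sketch below (standard-library Python; the function names are illustrative, not from any superstatistics package) estimates the superstatistical Boltzmann factor by Monte Carlo over Gamma-distributed inverse temperatures and compares it with the known closed form (1 + scale·E)^(−shape), the q-exponential of Tsallis statistics:

```python
import math
import random

random.seed(0)

def boltzmann_factor(E, shape, scale, n=200_000):
    """Monte Carlo estimate of B(E) = integral of f(beta) * exp(-beta*E) d(beta)
    when beta is drawn from a Gamma(shape, scale) distribution f."""
    total = 0.0
    for _ in range(n):
        beta = random.gammavariate(shape, scale)
        total += math.exp(-beta * E)
    return total / n

def q_exponential(E, shape, scale):
    """Closed form of the same integral for Gamma-distributed beta:
    (1 + scale*E)^(-shape), a q-exponential (Tsallis statistics)."""
    return (1.0 + scale * E) ** (-shape)
```

For shape 2 and scale 1, the closed form gives B(1) = 0.25 and the Monte Carlo estimate agrees to within sampling error; the heavier-than-exponential tail of the resulting distribution is the superstatistical power law.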
This branch is closely related to the exponential family and to mixture distributions. These concepts are used in many approximation approaches, such as particle filtering (where the distribution is approximated by delta functions).
See also
Maxwell–Boltzmann statistics
E.G.D. Cohen
References
Statistical mechanics
Nonlinear systems | Superstatistics | [
"Physics",
"Mathematics"
] | 381 | [
"Statistical mechanics stubs",
"Nonlinear systems",
"Statistical mechanics",
"Dynamical systems"
] |
14,794,097 | https://en.wikipedia.org/wiki/CLNS1A | Methylosome subunit pICln is a protein that in humans is encoded by the CLNS1A gene.
Interactions
CLNS1A has been shown to interact with:
ITGA2B,
PRMT5,
SNRPD1, and
SNRPD3.
See also
Chloride channel
References
Further reading
External links
Ion channels | CLNS1A | [
"Chemistry"
] | 67 | [
"Neurochemistry",
"Ion channels"
] |
14,794,105 | https://en.wikipedia.org/wiki/Cyclic%20nucleotide-gated%20channel%20alpha%203 | Cyclic nucleotide-gated cation channel alpha-3 is a protein that in humans is encoded by the CNGA3 gene.
Function
This gene encodes a member of the cyclic nucleotide-gated cation channel protein family, which is required for normal vision and olfactory signal transduction. CNGA3 is expressed in cone photoreceptors and is necessary for color vision. Missense mutations in this gene are associated with rod monochromacy and segregate in an autosomal recessive pattern. Two alternatively-spliced transcripts encoding different isoforms have been described.
Clinical relevance
Variants in this gene have been shown to cause achromatopsia and colour blindness.
See also
Cyclic nucleotide-gated ion channel
References
Further reading
External links
GeneReviews/NIH/NCBI/UW entry on Achromatopsia
OMIM entries on Achromatopsia
Ion channels | Cyclic nucleotide-gated channel alpha 3 | [
"Chemistry"
] | 194 | [
"Neurochemistry",
"Ion channels"
] |
14,794,194 | https://en.wikipedia.org/wiki/DLX3 | Homeobox protein DLX-3 is a protein that in humans is encoded by the DLX3 gene.
Function
Dlx3 is a crucial regulator of hair follicle differentiation and cycling. Dlx3 transcription is mediated through Wnt, and colocalization of Dlx3 with phospho-SMAD1/5/8 is involved in the regulation of transcription by BMP signaling. Dlx3 transcription is also induced by BMP-2 through transactivation with SMAD1 and SMAD4.
Many vertebrate homeobox-containing genes have been identified on the basis of their sequence similarity with Drosophila developmental genes. Members of the Dlx gene family contain a homeobox that is related to that of Distal-less (Dll), a gene expressed in the head and limbs of the developing fruit fly. The Distal-less (Dlx) family of genes comprises at least 6 different members, DLX1-DLX6. This gene is located in a tail-to-tail configuration with another member of the gene family on the long arm of chromosome 17.
Clinical significance
Mutations in this gene have been associated with the autosomal dominant conditions trichodentoosseous syndrome (TDO) and amelogenesis imperfecta with taurodontism.
References
Further reading
External links
Transcription factors | DLX3 | [
"Chemistry",
"Biology"
] | 276 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,794,239 | https://en.wikipedia.org/wiki/E2F5 | Transcription factor E2F5 is a protein that in humans is encoded by the E2F5 gene.
Function
The protein encoded by this gene is a member of the E2F family of transcription factors. The E2F family plays a crucial role in the control of cell cycle and action of tumor suppressor proteins and is also a target of the transforming proteins of small DNA tumor viruses. The E2F proteins contain several evolutionarily conserved domains that are present in most members of the family. These domains include a DNA binding domain, a dimerization domain which determines interaction with the differentiation regulated transcription factor proteins (DP), a transactivation domain enriched in acidic amino acids, and a tumor suppressor protein association domain which is embedded within the transactivation domain. This protein is differentially phosphorylated and is expressed in a wide variety of human tissues. It has higher identity to E2F4 than to other family members. Both this protein and E2F4 interact with tumor suppressor proteins p130 and p107, but not with pRB. Alternative splicing results in multiple variants encoding different isoforms.
Interactions
E2F5 has been shown to interact with TFDP1.
See also
E2F
References
Further reading
External links
Transcription factors | E2F5 | [
"Chemistry",
"Biology"
] | 264 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,794,411 | https://en.wikipedia.org/wiki/FUSIP1 | FUS-interacting serine-arginine-rich protein 1 is a protein that in humans is encoded by the SFRS13A gene.
Function
This gene product is a member of the serine-arginine (SR) family of proteins, which is involved in constitutive and regulated RNA splicing. Members of this family are characterized by N-terminal RNP1 and RNP2 motifs, which are required for binding to RNA, and multiple C-terminal SR/RS repeats, which are important in mediating association with other cellular proteins. This protein can influence splice site selection of adenovirus E1A pre-mRNA. It interacts with the oncoprotein TLS and abrogates the influence of TLS on E1A pre-mRNA splicing. Alternative splicing of this gene results in at least two transcript variants encoding different isoforms. In addition, transcript variants utilizing alternative polyA sites exist.
Interactions
FUSIP1 has been shown to interact with FUS.
References
Further reading | FUSIP1 | [
"Chemistry"
] | 216 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,794,869 | https://en.wikipedia.org/wiki/HEY2 | Hairy/enhancer-of-split related with YRPW motif protein 2 (HEY2) also known as cardiovascular helix-loop-helix factor 1 (CHF1) is a protein that in humans is encoded by the HEY2 gene.
This protein is a type of transcription factor that belongs to the hairy and enhancer of split-related (HESR) family of basic helix-loop-helix (bHLH)-type transcription factors. It forms homo- or hetero-dimers that localize to the nucleus and interact with a histone deacetylase complex to repress transcription. During embryonic development, this mechanism is used to control the number of cells that develop into cardiac progenitor cells and myocardial cells. The relationship is inversely related, so as the number of cells that express the Hey2 gene increases, the more CHF1 is present to repress transcription and the number of cells that take on a myocardial fate decreases.
Expression
The expression of the Hey2 gene is induced by the Notch signaling pathway. In this mechanism, adjacent cells bind via transmembrane notch receptors. Two similar and redundant genes in mouse are required for embryonic cardiovascular development, and are also implicated in neurogenesis and somitogenesis. Alternatively spliced transcript variants have been found, but their biological validity has not been determined.
Knockout studies
The Hey2 gene is involved with the formation of the cardiovascular system and especially the heart itself. Although studies have not been conducted about the effects of a malfunction in Hey2 expression in humans, experiments done with mice suggest this gene could be responsible for a number of heart defects. Using a gene knockout technique, scientists inactivated both the Hey1 and Hey2 genes of mice. The loss of these two genes resulted in death of the embryo 9.5 days after conception. It was found that the developing hearts of these embryos lacked most structural formations which resulted in massive hemorrhage. When only the Hey1 gene was knocked out, no apparent phenotypic changes occurred, suggesting that these two genes carry similar and redundant information for the development of the heart.
Clinical significance
Common variants of SCN5A, SCN10A, and HEY2 (this gene) are associated with Brugada syndrome.
Interactions
HEY2 has been shown to interact with Sirtuin 1 and Nuclear receptor co-repressor 1.
References
Further reading
External links
Transcription factors | HEY2 | [
"Chemistry",
"Biology"
] | 497 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,795,013 | https://en.wikipedia.org/wiki/PHOX2A | Paired mesoderm homeobox protein 2A is a protein that in humans is encoded by the PHOX2A gene.
Function
The protein encoded by this gene contains a paired-like homeodomain most similar to that of the Drosophila aristaless gene product. This protein is expressed specifically in noradrenergic cell types. It regulates the expression of tyrosine hydroxylase and dopamine beta-hydroxylase, two catecholaminergic biosynthetic enzymes essential for the differentiation and maintenance of noradrenergic phenotype. Mutations in this gene have been associated with autosomal recessive congenital fibrosis of the extraocular muscles (CFEOM2).
Interactions
PHOX2A has been shown to interact with HAND2.
References
Further reading
External links
Engle Laboratory CFEOM page
GeneReviews/NCBI/NIH/UW entry on Congenital Fibrosis of the Extraocular Muscles
OMIM entries on Congenital Fibrosis of the Extraocular Muscles
Transcription factors | PHOX2A | [
"Chemistry",
"Biology"
] | 217 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,795,083 | https://en.wikipedia.org/wiki/HMGN2 | Non-histone chromosomal protein HMG-17 is a protein that in humans is encoded by the HMGN2 gene.
See also
High mobility group protein HMG14 and HMG17
HMGN1 (HMG-14)
References
Further reading | HMGN2 | [
"Chemistry"
] | 54 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,795,115 | https://en.wikipedia.org/wiki/Interleukin%2011%20receptor%20alpha%20subunit | Interleukin 11 receptor, alpha subunit is a subunit of the interleukin 11 receptor. IL11RA is its human gene.
Interleukin 11 is a stromal cell-derived cytokine that belongs to a family of pleiotropic and redundant cytokines that use the gp130 transducing subunit in their high affinity receptors. This gene encodes the IL-11 receptor, which is a member of the hematopoietic cytokine receptor family. This particular receptor is very similar to ciliary neurotrophic factor, since both contain an extracellular region with a 2-domain structure composed of an immunoglobulin-like domain and a cytokine receptor-like domain. Alternative splicing has been observed at this locus and two variants, each encoding a distinct isoform, have been identified.
References
Further reading | Interleukin 11 receptor alpha subunit | [
"Chemistry"
] | 185 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,795,126 | https://en.wikipedia.org/wiki/KCNK2 | Potassium channel subfamily K member 2, also known as TREK-1, is a protein that in humans is encoded by the KCNK2 gene.
This gene encodes K2P2.1, a lipid-gated ion channel belonging to the two-pore-domain background potassium channel protein family. This type of potassium channel is formed by two homodimers that create a channel that releases potassium out of the cell to control resting membrane potential. The channel is opened by anionic lipid, certain anesthetics, membrane stretching, intracellular acidosis, and heat. Three transcript variants encoding different isoforms have been found for this gene.
Function in neurons
TREK-1 is part of the subfamily of mechano-gated potassium channels present in mammalian neurons. These channels can be gated, and opened, by both chemical and physical stimuli. TREK-1 channels are found in a variety of tissues, but are particularly abundant in the brain and heart and are seen in various types of neurons. The C-terminal of TREK-1 channels plays a role in the mechanosensitivity of the channels.
In the neurons of the central nervous system, TREK-1 channels are important in physiological, pathophysiological, and pharmacological processes, including having a role in electrogenesis, ischemia, and anesthesia. TREK-1 has an important role in neuroprotection against epilepsy and brain and spinal cord ischemia and is being evaluated as a potential target for new developments of therapeutic agents for neurology and anesthesiology.
In the absence of a properly functioning cytoskeleton, TREK-1 channels can still open via mechanical gating. The cell membrane functions independently of the cytoskeleton and the thickness and curvature of the membrane is able to modulate the activity of the TREK-1 channels. The change in thickness is thought to be sensed by an amphipathic helix that extends from the inner leaflet of the membrane.
The insertion of certain compounds into the membrane, including inhaled anesthetics and propofol, activate TREK-1 through the enzyme phospholipase D2 (PLD2). Prior to the addition of anesthetic, PLD2 associates with GM-1 lipid rafts. After anesthetic, the enzyme or a complex of the enzyme and the channel traffic to PIP2 domains where the enzyme makes phosphatidic acid that opens the channel.
See also
Tandem pore domain potassium channel
References
Further reading
External links
Ion channels | KCNK2 | [
"Chemistry"
] | 535 | [
"Neurochemistry",
"Ion channels"
] |
14,795,246 | https://en.wikipedia.org/wiki/Tate%27s%20algorithm | In the theory of elliptic curves, Tate's algorithm takes as input an integral model of an elliptic curve E over , or more generally an algebraic number field, and a prime or prime ideal p. It returns the exponent fp of p in the conductor of E, the type of reduction at p, the local index
where is the group of -points
whose reduction mod p is a non-singular point. Also, the algorithm determines whether or not the given integral model is minimal at p, and, if not, returns an integral model with integral coefficients for which the valuation at p of the discriminant is minimal.
Tate's algorithm also gives the structure of the singular fibers given by the Kodaira symbol or Néron symbol (for which, see elliptic surfaces); this in turn determines the exponent fp of the conductor of E.
Tate's algorithm can be greatly simplified if the characteristic of the residue class field is not 2 or 3; in this case the type and c and f can be read off from the valuations of j and Δ (defined below).
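The simplification for residue characteristic not 2 or 3 can be sketched in code. The short program below is illustrative only, not Tate's full algorithm: it handles steps 1–2 for a short Weierstrass curve y² = x³ + ax + b over the rationals at a prime p ≥ 5, and merely flags additive cases that would require the remaining steps (the function names are this sketch's own):

```python
def vp(n, p):
    """Exponent of the prime p in the nonzero integer n."""
    n, v = abs(n), 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def reduction_type(a, b, p):
    """Steps 1-2 of Tate's algorithm for y^2 = x^3 + a*x + b at a prime p >= 5.
    Returns (Kodaira type, conductor exponent f)."""
    disc = -16 * (4 * a**3 + 27 * b**2)   # discriminant
    c4 = -48 * a                          # invariant c4
    if disc % p != 0:
        return ("I0", 0)                  # step 1: good reduction, f = 0
    if c4 % p != 0:
        return ("I%d" % vp(disc, p), 1)   # step 2: multiplicative, type I_v, f = 1
    return ("additive", 2)                # p divides both: steps 3-10 decide the type
```

For example, y² = x³ − x + 1 has discriminant −368 = −16·23, so it has multiplicative reduction of type I1 at p = 23 and good reduction at p = 5. The additive branch returns f = 2, which is correct for p ≥ 5, but pinning down the Kodaira type there requires the later steps.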
Tate's algorithm was introduced by John Tate as an improvement of the description of the Néron model of an elliptic curve given by André Néron.
Notation
Assume that all the coefficients of the equation of the curve lie in a complete discrete valuation ring R with perfect residue field K and maximal ideal generated by a prime π.
The elliptic curve is given by the equation

$$y^2 + a_1 x y + a_3 y = x^3 + a_2 x^2 + a_4 x + a_6.$$

Define:

$v(x)$ = the $\pi$-adic valuation of $x$ in $R$, that is, the exponent of $\pi$ in the prime factorization of $x$, or infinity if $x = 0$;

$a_{i,m} = a_i / \pi^m$ (defined whenever $\pi^m$ divides $a_i$).
The algorithm
Step 1: If π does not divide Δ then the type is I0, c=1 and f=0.
Step 2: If π divides Δ but not c4 then the type is Iv with v = v(Δ), c=v, and f=1.
Step 3. Otherwise, change coordinates so that π divides a3,a4,a6. If π2 does not divide a6 then the type is II, c=1, and f=v(Δ);
Step 4. Otherwise, if π3 does not divide b8 then the type is III, c=2, and f=v(Δ)−1;
Step 5. Otherwise, let Q1 be the polynomial

$$Q_1(Y) = Y^2 + a_{3,1}\,Y - a_{6,2}.$$

If π3 does not divide b6 then the type is IV, c=3 if Q1 has two roots in K and c=1 if it has two roots outside of K, and f=v(Δ)−2.
Step 6. Otherwise, change coordinates so that π divides a1 and a2, π2 divides a3 and a4, and π3 divides a6. Let P be the polynomial

$$P(T) = T^3 + a_{2,1}\,T^2 + a_{4,2}\,T + a_{6,3}.$$

If P has 3 distinct roots modulo π then the type is I0*, f=v(Δ)−4, and c is 1+(number of roots of P in K).
Step 7. If P has one single and one double root, then the type is Iν* for some ν>0, f=v(Δ)−4−ν, c=2 or 4: there is a "sub-algorithm" for dealing with this case.
Step 8. If P has a triple root, change variables so the triple root is 0, so that π2 divides a2 and π3 divides a4, and π4 divides a6. Let Q2 be the polynomial

$$Q_2(Y) = Y^2 + a_{3,2}\,Y - a_{6,4}.$$

If Q2 has two distinct roots modulo π then the type is IV*, f=v(Δ)−6, and c is 3 if the roots are in K, 1 otherwise.
Step 9. If Q2 has a double root, change variables so the double root is 0. Then π3 divides a3 and π5 divides a6. If π4 does not divide a4 then the type is III* and f=v(Δ)−7 and c = 2.
Step 10. Otherwise if π6 does not divide a6 then the type is II* and f=v(Δ)−8 and c = 1.
Step 11. Otherwise the equation is not minimal. Divide each an by πn and go back to step 1.
Implementations
The algorithm is implemented for algebraic number fields in the PARI/GP computer algebra system, available through the function elllocalred.
References
Elliptic curves
Number theory | Tate's algorithm | [
"Mathematics"
] | 872 | [
"Discrete mathematics",
"Number theory"
] |
14,795,437 | https://en.wikipedia.org/wiki/Discoidin%20domain-containing%20receptor%202 | Discoidin domain-containing receptor 2, also known as CD167b (cluster of differentiation 167b), is a protein that in humans is encoded by the DDR2 gene. Discoidin domain-containing receptor 2 is a receptor tyrosine kinase (RTK).
Function
RTKs play a key role in the communication of cells with their microenvironment. These molecules are involved in the regulation of cell growth, differentiation, and metabolism. In several cases the biochemical mechanism by which RTKs transduce signals across the membrane has been shown to be ligand induced receptor oligomerization and subsequent intracellular phosphorylation. In the case of DDR2, the ligand is collagen which binds to its extracellular discoidin domain. This autophosphorylation leads to phosphorylation of cytosolic targets as well as association with other molecules, which are involved in pleiotropic effects of signal transduction. DDR2 has been associated with a number of diseases including fibrosis and cancer.
Structure
RTKs have a tripartite structure with extracellular, transmembrane, and cytoplasmic regions. This gene encodes a member of a novel subclass of RTKs and contains a distinct extracellular region encompassing a factor VIII-like domain.
Gene
Alternative splicing in the 5' UTR of the DDR2 gene results in multiple transcript variants encoding the same protein.
Interactions
DDR2 has been shown to interact with SHC1 and to phosphorylate Shp2. DDR2 also interacts with the integrins α1β1 and α2β1, promoting their adhesion to collagen.
References
Further reading
Clusters of differentiation
Tyrosine kinase receptors | Discoidin domain-containing receptor 2 | [
"Chemistry"
] | 369 | [
"Tyrosine kinase receptors",
"Signal transduction"
] |
14,795,559 | https://en.wikipedia.org/wiki/Condominial%20sewerage | Condominial sewerage is the application of simplified sewerage coupled with consultations and ongoing interactions between users and agencies during planning and implementation. The term is used primarily in Latin America, particularly in Brazil, and is derived from the term condominio, which means housing block.
From a pure engineering perspective there is no difference between designing a regular sewage system and a condominial one. However, bureaucratically a condominial system includes the participation of the individuals and owners who will be served and can often result in lower costs due to shorter runs of piping. This is achieved by local concentration of sewage from a single "housing block". Thus a number of dwellings are grouped into a "block" known as a condominium. The condominium may share no other aspects of ownership or relation except geographic proximity. In addition, individuals and owners may share a role in the maintenance of the sewers at the block level.
References
Sewerage
Environmental engineering | Condominial sewerage | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 194 | [
"Chemical engineering",
"Water pollution",
"Sewerage",
"Civil engineering",
"Environmental engineering"
] |
14,795,640 | https://en.wikipedia.org/wiki/UGT1A4 | UDP-glucuronosyltransferase 1-4 is an enzyme that in humans is encoded by the UGT1A4 gene.
This gene encodes a UDP-glucuronosyltransferase, an enzyme of the glucuronidation pathway that transforms small lipophilic molecules, such as steroids, bilirubin, hormones, and drugs, into water-soluble, excretable metabolites. This gene is part of a complex locus that encodes several UDP-glucuronosyltransferases. The locus includes thirteen unique alternate first exons followed by four common exons. Four of the alternate first exons are considered pseudogenes. Each of the remaining nine 5′ exons may be spliced to the four common exons, resulting in nine proteins with different N-termini and identical C-termini. Each first exon encodes the substrate binding site, and is regulated by its own promoter. This enzyme has some glucuronidase activity towards bilirubin, although it is more active on amines, steroids, and sapogenins.
It is the main enzyme responsible for glucuronidation of the anticonvulsant lamotrigine.
References
Further reading | UGT1A4 | [
"Chemistry"
] | 276 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,795,865 | https://en.wikipedia.org/wiki/SOX3 | Transcription factor SOX-3 is a protein that in humans is encoded by the SOX3 gene. This gene encodes a member of the SOX (SRY-related HMG-box) family of transcription factors involved in the regulation of embryonic brain development and in determination of cell fate. The encoded protein acts as a transcriptional activator.
Mutations in this gene have been associated with X-linked hypopituitarism (XH) and X-linked mental retardation. Patients with XH are male, have short stature, exhibit a mild form of mental retardation and present with pan-hypopituitarism. A duplication of the SOX3 gene has also been discovered to cause XX male sex reversal.
SRY-box transcription factor 3, SOX3, is a transcription factor that is encoded by the SOX3 gene. This gene is responsible for ensuring proper embryonic development and determining the fate of different cells. Regarding its developmental facet, SOX3, alongside other SOX transcription factors, ensures the proper formation of the hypothalamo-pituitary axis. The proper development of the hypothalamo-pituitary axis is necessary as it serves to ensure proper systemic hormonal function. When SOX3 expression is affected, the development of different structures can be affected as well. Specifically, both the hypothalamus and the pituitary gland can suffer in accomplishing proper growth. Due to this, conditions such as hypopituitarism and mental retardation are found in cases with a lack of SOX3. Also, craniofacial abnormalities can be seen as a result of a lack of the SOX3 gene. To aid in the further understanding of the SOX3 gene, mice have been used as knockout models to study the effects of the gene’s absence.
Function
SOX3 belongs to the family of SRY-related HMG-box containing genes which behave as transcription factors. SOX3 has been found to be involved in the regulation of embryonic brain development, the determination of cell fate and in XX male sex reversal.
SOX3 contains a single exon and is found in a highly conserved region of the X chromosome. The SOX3 gene shares some conservation with the SRY gene, and encodes a protein that is similar, sharing 67% amino acid identity across the DNA-binding HMG domain. This has led to the hypothesis that the SRY gene arose from SOX3 through a gain of function mutation within the proto-Y chromosome. Evidence to support this hypothesis arose from the discovery of a rare human case of XX sex reversal, that is thought to have occurred through a de novo duplication of the SOX3 gene. Such a duplication is thought to result in a gain of function expression of SOX3 in the genital ridge of the developing embryo leading to XX male sex reversal.
See also
SOX gene family
References
Further reading
External links
Transcription factors | SOX3 | [
"Chemistry",
"Biology"
] | 601 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,796,011 | https://en.wikipedia.org/wiki/DAZ2 | Deleted in azoospermia protein 2 is a protein that in humans is encoded by the DAZ2 gene.
This gene is a member of the DAZ gene family and is a candidate for the human Y-chromosomal azoospermia factor (AZF). Its expression is restricted to premeiotic germ cells, particularly in spermatogonia. It encodes an RNA-binding protein that is important for spermatogenesis. Four copies of this gene are found on chromosome Y within palindromic duplications; one pair of genes is part of the P2 palindrome and the second pair is part of the P1 palindrome. Each gene contains a 2.4 kb repeat including a 72-bp exon, called the DAZ repeat; the number of DAZ repeats is variable and there are several variations in the sequence of the DAZ repeat. Each copy of the gene also contains a 10.8 kb region that may be amplified; this region includes five exons that encode an RNA recognition motif (RRM) domain. This gene contains one copy of the 10.8 kb repeat. Alternative splicing results in multiple transcript variants encoding different isoforms.
References
Further reading | DAZ2 | [
"Chemistry"
] | 253 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,796,095 | https://en.wikipedia.org/wiki/UBE2D3 | Ubiquitin-conjugating enzyme E2 D3 is a protein that in humans is encoded by the UBE2D3 gene.
Function
The modification of proteins with ubiquitin is an important cellular mechanism for targeting abnormal or short-lived proteins for degradation. Ubiquitination involves at least three classes of enzymes: ubiquitin-activating enzymes, or E1s, ubiquitin-conjugating enzymes, or E2s, and ubiquitin-protein ligases, or E3s. This gene encodes a member of the E2 ubiquitin-conjugating enzyme family. This enzyme functions in the ubiquitination of the tumor-suppressor protein p53, which is induced by an E3 ubiquitin-protein ligase. Multiple spliced transcript variants have been found for this gene, but the full-length nature of some variants has not been determined.
Interactions
UBE2D3 has been shown to interact with NEDD4.
References
Further reading | UBE2D3 | [
"Chemistry"
] | 218 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,796,489 | https://en.wikipedia.org/wiki/DLX4 | Homeobox protein DLX-4 is a protein that in humans is encoded by the DLX4 gene.
Function
Many vertebrate homeobox-containing genes have been identified on the basis of their sequence similarity with Drosophila developmental genes. Members of the Dlx gene family contain a homeobox that is related to that of Distal-less (Dll), a gene expressed in the head and limbs of the developing fruit fly. The Distal-less (Dlx) family of genes comprises at least six members, DLX1-DLX6. The DLX proteins are postulated to play a role in forebrain and craniofacial development. Three transcript variants have been described for this gene; however, the full-length nature of one variant has not been determined. Studies of the two splice variants revealed that one encoded isoform (BP1) functions as a repressor of the beta-globin gene while the other isoform lacks that function.
References
Further reading
External links
Transcription factors | DLX4 | [
"Chemistry",
"Biology"
] | 213 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,796,540 | https://en.wikipedia.org/wiki/DBF4 | Protein DBF4 homolog A is a protein that is encoded by the DBF4 gene in humans.
Interactions
DBF4 has been shown to interact with:
Cell division cycle 7-related protein kinase,
MCM3,
MCM7,
ORC2L, and
ORC6L.
References
Further reading
Zinc finger proteins | DBF4 | [
"Chemistry"
] | 69 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,796,583 | https://en.wikipedia.org/wiki/TREX2 | Three prime repair exonuclease 2 is an enzyme that in humans is encoded by the TREX2 gene.
This gene encodes a protein with 3' exonuclease activity. Enzymes with this activity are involved in DNA replication, repair, and recombination. Similarity to an E. coli protein suggests that this enzyme may be a subunit of DNA polymerase III, which does not have intrinsic exonuclease activity.
Newer research has determined that TREX2 is also involved in flap endonuclease activity, as detected in the context of inhibiting gene-editing nickases that generate an extension flap such as prime editors that do not usually create a double-stranded break. This function was first demonstrated in a thesis by Lung in 2021, and replicated by Koeppel et al. in 2023. Subsequently, TREX2 has become incorporated into fusion enzymes for genetic engineering by multiple research groups for the purposes of reducing off-target edits which include chromosomal translocations and mismatched insertions.
Mutations in this gene may lead to Aicardi-Goutieres syndrome.
References
Further reading
External links | TREX2 | [
"Chemistry"
] | 239 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,796,659 | https://en.wikipedia.org/wiki/EPH%20receptor%20A1 | EPH receptor A1 (ephrin type-A receptor 1) is a protein that in humans is encoded by the EPHA1 gene.
This gene belongs to the ephrin receptor subfamily of the protein-tyrosine kinase family. EPH and EPH-related receptors have been implicated in mediating developmental events, particularly in the nervous system. Receptors in the EPH subfamily typically have a single kinase domain and an extracellular region containing a Cys-rich domain and 2 fibronectin type III repeats. The ephrin receptors are divided into 2 groups based on the similarity of their extracellular domain sequences and their affinities for binding ephrin-A and ephrin-B ligands. This gene is expressed in some human cancer cell lines and has been implicated in carcinogenesis.
References
Further reading
Tyrosine kinase receptors | EPH receptor A1 | [
"Chemistry"
] | 176 | [
"Tyrosine kinase receptors",
"Signal transduction"
] |
14,796,671 | https://en.wikipedia.org/wiki/EPHB3 | Ephrin type-B receptor 3 is a protein that in humans is encoded by the EPHB3 gene.
Function
Ephrin receptors and their ligands, the ephrins, mediate numerous developmental processes, particularly in the nervous system. Based on their structures and sequence relationships, ephrins are divided into the ephrin-A (EFNA) class, which are anchored to the membrane by a glycosylphosphatidylinositol linkage, and the ephrin-B (EFNB) class, which are transmembrane proteins. The Eph family of receptors are divided into 2 groups based on the similarity of their extracellular domain sequences and their affinities for binding ephrin-A and ephrin-B ligands. Ephrin receptors make up the largest subgroup of the receptor tyrosine kinase (RTK) family. The protein encoded by this gene is a receptor for ephrin-B family members.
Interactions
EPHB3 has been shown to interact with MLLT4 and RAS p21 protein activator 1.
References
Further reading
External links
Tyrosine kinase receptors | EPHB3 | [
"Chemistry"
] | 241 | [
"Tyrosine kinase receptors",
"Signal transduction"
] |
14,796,744 | https://en.wikipedia.org/wiki/GABRR1 | Gamma-aminobutyric acid receptor subunit rho-1 is a protein that in humans is encoded by the GABRR1 gene.
GABA is the major inhibitory neurotransmitter in the mammalian brain where it acts at GABA receptors, which are ligand-gated chloride channels. GABRR1 is a member of the rho subunit family.
See also
GABAA-ρ receptor
References
Further reading
External links
Ion channels | GABRR1 | [
"Chemistry"
] | 94 | [
"Neurochemistry",
"Ion channels"
] |
14,796,770 | https://en.wikipedia.org/wiki/SMG1 | Serine/threonine-protein kinase SMG1 is an enzyme that in humans is encoded by the SMG1 gene. SMG1 belongs to the phosphatidylinositol 3-kinase-related kinase protein family.
Function
This gene encodes a protein involved in nonsense-mediated mRNA decay (NMD) as part of the mRNA surveillance complex. The protein has kinase activity and is thought to function in NMD by phosphorylating the regulator of nonsense transcripts 1 protein. Alternatively spliced transcript variants have been described, but their full-length natures have not been determined.
Interactions
SMG1 (gene) has been shown to interact with PRKCI and UPF1.
References
Further reading
EC 2.7.11
Genes on human chromosome 16 | SMG1 | [
"Chemistry"
] | 164 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,796,782 | https://en.wikipedia.org/wiki/SIN3B | Paired amphipathic helix protein Sin3b is a protein that in humans is encoded by the SIN3B gene.
Interactions
SIN3B has been shown to interact with HDAC1, Zinc finger and BTB domain-containing protein 16, SUDS3 and IKZF1.
See also
Transcription coregulator
References
Further reading
External links
Gene expression
Transcription coregulators | SIN3B | [
"Chemistry",
"Biology"
] | 77 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
14,796,829 | https://en.wikipedia.org/wiki/PRPF6 | Pre-mRNA-processing factor 6 is a protein that in humans is encoded by the PRPF6 gene.
The protein encoded by this gene appears to be involved in pre-mRNA splicing, possibly acting as a bridging factor between U5 and U4/U6 snRNPs in formation of the spliceosome. The encoded protein also can bind androgen receptor, providing a link between transcriptional activation and splicing.
Interactions
PRPF6 has been shown to interact with TXNL4B, ARAF and Androgen receptor.
References
Further reading
External links
GeneReviews/NCBI/NIH/UW entry on Retinitis Pigmentosa Overview
Spliceosome | PRPF6 | [
"Chemistry"
] | 146 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,796,974 | https://en.wikipedia.org/wiki/HLA-DOB | HLA class II histocompatibility antigen, DO beta chain is a protein that in humans is encoded by the HLA-DOB gene.
HLA-DOB belongs to the HLA class II beta chain paralogues. This class II molecule is a heterodimer consisting of an alpha (DOA) and a beta chain (DOB), both anchored in the membrane. It is located in intracellular vesicles. DO suppresses peptide loading of MHC class II molecules by inhibiting HLA-DM. Class II molecules are expressed in antigen-presenting cells (APC: B lymphocytes, dendritic cells, macrophages). The beta chain is approximately 26–28 kDa and its gene contains 6 exons. Exon 1 encodes the leader peptide, exons 2 and 3 encode the two extracellular domains, exon 4 encodes the transmembrane domain and exon 5 encodes the cytoplasmic tail.
References
Further reading
MHC class II | HLA-DOB | [
"Chemistry"
] | 214 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,796,992 | https://en.wikipedia.org/wiki/HOXB3 | Homeobox protein Hox-B3 is a protein that in humans is encoded by the HOXB3 gene.
This gene is a member of the Antp homeobox family and encodes a nuclear protein with a homeobox DNA-binding domain. It is included in a cluster of homeobox B genes located on chromosome 17. The encoded protein functions as a sequence-specific transcription factor that is involved in development. Increased expression of this gene is associated with a distinct biologic subset of acute myeloid leukemia (AML).
See also
Homeobox
References
Further reading
External links
Transcription factors | HOXB3 | [
"Chemistry",
"Biology"
] | 126 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,796,999 | https://en.wikipedia.org/wiki/HOXC4 | Homeobox protein Hox-C4 is a protein that in humans is encoded by the HOXC4 gene.
Function
This gene belongs to the homeobox family of genes. The homeobox genes encode a highly conserved family of transcription factors that play an important role in morphogenesis in all multicellular organisms. Mammals possess four similar homeobox gene clusters, HOXA, HOXB, HOXC and HOXD, which are located on different chromosomes and consist of 9 to 11 genes arranged in tandem. This gene, HOXC4, is one of several homeobox HOXC genes located in a cluster on chromosome 12. Three genes, HOXC5, HOXC4 and HOXC6, share a 5' non-coding exon. Transcripts may include the shared exon spliced to the gene-specific exons, or they may include only the gene-specific exons. Two alternatively spliced variants that encode the same protein have been described for HOXC4. Transcript variant one includes the shared exon, and transcript variant two includes only gene-specific exons.
See also
Homeobox
Interactions
HOXC4 has been shown to interact with Ku70.
References
Further reading
External links
Transcription factors | HOXC4 | [
"Chemistry",
"Biology"
] | 266 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,797,009 | https://en.wikipedia.org/wiki/HOXD9 | Homeobox protein Hox-D9 is a protein that in humans is encoded by the HOXD9 gene.
Function
This gene belongs to the homeobox family of genes. The homeobox genes encode a highly conserved family of transcription factors that play an important role in morphogenesis in all multicellular organisms. Mammals possess four similar homeobox gene clusters, HOXA, HOXB, HOXC and HOXD, located on different chromosomes, each consisting of 9 to 11 genes arranged in tandem. This gene is one of several homeobox HOXD genes located in the chromosomal region 2q31–2q37. Deletions that remove the entire HOXD gene cluster or the 5' end of this cluster have been associated with severe limb and genital abnormalities. The exact role of this gene has not been determined.
See also
Homeobox
References
Further reading
External links
Transcription factors | HOXD9 | [
"Chemistry",
"Biology"
] | 188 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,797,072 | https://en.wikipedia.org/wiki/KCNJ3 | G protein-activated inward rectifier potassium channel 1 (GIRK-1) is encoded in the human by the gene KCNJ3.
Potassium channels are present in most mammalian cells, where they participate in a wide range of physiologic responses. The protein encoded by this gene is an integral membrane protein and inward-rectifier type potassium channel. The encoded protein, which has a greater tendency to allow potassium to flow into a cell rather than out of a cell, is controlled by G-proteins and plays an important role in regulating heartbeat. It associates with three other G-protein-activated potassium channels to form a hetero-tetrameric pore-forming complex.
Interactions
KCNJ3 has been shown to interact with KCNJ5.
See also
G protein-coupled inwardly-rectifying potassium channel
Inward-rectifier potassium ion channel
References
Further reading
External links
Ion channels | KCNJ3 | [
"Chemistry"
] | 190 | [
"Neurochemistry",
"Ion channels"
] |
14,797,080 | https://en.wikipedia.org/wiki/KCNK3 | Potassium channel subfamily K member 3 is a protein that in humans is encoded by the KCNK3 gene.
This gene encodes K2P3.1, one of the members of the superfamily of potassium channel proteins containing two pore-forming P domains. K2P3.1 is an outwardly rectifying channel that is sensitive to changes in extracellular pH and is inhibited by extracellular acidification. Also referred to as an acid-sensitive potassium channel, it is activated by the anesthetics halothane and isoflurane. Although three transcripts are detected in northern blots, there is currently no sequence available to confirm transcript variants for this gene.
Interactions
KCNK3 has been shown to interact with YWHAB and S100A10.
See also
Tandem pore domain potassium channel
References
Further reading
External links
Ion channels | KCNK3 | [
"Chemistry"
] | 179 | [
"Neurochemistry",
"Ion channels"
] |
14,797,088 | https://en.wikipedia.org/wiki/COPG2 | Coatomer subunit gamma-2 is a protein that in humans is encoded by the COPG2 gene.
Interactions
COPG2 has been shown to interact with Dopamine receptor D1 and COPB1.
References
External links
Further reading | COPG2 | [
"Chemistry"
] | 50 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,797,185 | https://en.wikipedia.org/wiki/MAFG | Transcription factor MafG is a bZip Maf transcription factor protein that in humans is encoded by the MAFG gene.
MafG is one of the small Maf proteins, which are basic region and leucine zipper (bZIP)-type transcription factors. The HUGO Gene Nomenclature Committee-approved gene name of MAFG is “v-maf avian musculoaponeurotic fibrosarcoma oncogene homolog G”.
Discovery
MafG was first cloned and identified in chicken in 1995 as a new member of the small Maf (sMaf) genes. MAFG has been identified in many vertebrates, including humans. There are three functionally redundant sMaf proteins in vertebrates, MafF, MafG, and MafK.
Structure
MafG has a bZIP structure that consists of a basic region for DNA binding and a leucine zipper structure for dimer formation. Similar to other sMafs, MafG lacks any canonical transcriptional activation domains.
Expression
MAFG is broadly but differentially expressed in various tissues. MAFG expression was detected in all 16 tissues examined by the human BodyMap Project, but is relatively abundant in lung, lymph node, skeletal muscle and thyroid tissues. MafG gene expression is induced by oxidative stresses, such as hydrogen peroxide and electrophilic compounds. The mouse Mafg gene is induced by Nrf2-sMaf heterodimers through an antioxidant response element (ARE) in the promoter-proximal region. In response to bile acids, the mouse Mafg gene is induced by the nuclear receptor FXR (farnesoid X receptor).
Function
Because of their sequence similarity, no functional differences have been observed among the sMafs with respect to their bZIP domains. sMafs form homodimers by themselves and heterodimers with other specific bZIP transcription factors, such as CNC (cap 'n' collar) proteins [p45 NF-E2 (NFE2), Nrf1 (NFE2L1), Nrf2 (NFE2L2), and Nrf3 (NFE2L3)] and Bach proteins (BACH1 and BACH2).
sMaf homodimers bind to a palindromic DNA sequence called the Maf recognition element (MARE: TGCTGACTCAGCA) and its related sequences. Structural analyses have demonstrated that the basic region of a Maf factor recognizes the flanking GC sequences. By contrast, CNC-sMaf or Bach-sMaf heterodimers preferentially bind to DNA sequences (RTGA(C/G)NNNGC: R=A or G) that are slightly different from MARE. The latter DNA sequences have been recognized as antioxidant/electrophile response elements or NF-E2-binding motifs to which Nrf2-sMaf heterodimers and p45 NF-E2-sMaf heterodimer bind, respectively. It has been proposed that the latter sequences should be classified as CNC-sMaf-binding elements (CsMBEs).
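"Palindromic" here has the DNA sense: the two half-sites of the motif are reverse complements of each other, flanking a central base. The 13-bp MARE core quoted above (TGCTGACTCAGCA) illustrates this, as the following sketch checks (a minimal illustration using only the sequence given in the text):

```python
# Split the MARE core into two 6-bp half-sites around a central base and
# verify that the right half-site is the reverse complement of the left.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Complement each base, then reverse the strand."""
    return seq.translate(COMPLEMENT)[::-1]

mare = "TGCTGACTCAGCA"
left, center, right = mare[:6], mare[6], mare[7:]
print(left, center, right)                    # TGCTGA C TCAGCA
print(reverse_complement(right) == left)      # True: half-sites are palindromic
```

This half-site symmetry matches the binding of a Maf homodimer, in which each basic region recognizes one flanking GC-containing half-site.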
It has also been reported that sMafs form heterodimers with other bZIP transcription factors, such as c-Jun and c-Fos.
Target genes
sMafs regulate different target genes depending on their partners. For instance, the p45 NF-E2-sMaf heterodimer regulates genes responsible for platelet production. The Nrf2-sMaf heterodimer regulates a battery of cytoprotective genes, such as antioxidant/xenobiotic-metabolizing enzyme genes. The Bach1-sMaf heterodimer regulates the heme oxygenase-1 gene. In particular, it has been reported that Bach1-MafG heterodimers participate in the hypermethylation of genes with CpG island promoters in certain types of cancers. The contribution of individual sMafs to the transcriptional regulation of their target genes has not yet been well examined.
Disease linkage
Loss of sMafs results in disease-like phenotypes, as summarized in the table below. Mice lacking MafG exhibit a mild neuronal phenotype and mild thrombocytopenia. However, mice lacking MafG and one allele of Mafk (Mafg−/−::Mafk+/−) exhibit more severe neuronal phenotypes, severe thrombocytopenia and cataracts. Mice lacking both MafG and MafK (Mafg−/−::Mafk−/−) die in the perinatal stage. Finally, mice lacking MafF, MafG and MafK are embryonic lethal. Embryonic fibroblasts derived from Maff−/−::Mafg−/−::Mafk−/− mice fail to activate Nrf2-dependent cytoprotective genes in response to stress.
In addition, accumulating evidence suggests that as partners of CNC and Bach proteins, sMafs are involved in the onset and progression of various human diseases, including neurodegeneration, arteriosclerosis and cancer.
Notes
References
Further reading
External links
Transcription factors | MAFG | [
"Chemistry",
"Biology"
] | 1,112 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,797,236 | https://en.wikipedia.org/wiki/MYCL | L-myc-1 proto-oncogene protein is a protein that in humans is encoded by the MYCL1 gene.
MYCL1 is a bHLH (basic helix-loop-helix) transcription factor implicated in lung cancer.
Interactions
MYCL1 has been shown to interact with MAX.
References
Further reading
External links
Transcription factors | MYCL | [
"Chemistry",
"Biology"
] | 72 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,797,274 | https://en.wikipedia.org/wiki/PITX1 | Paired-like homeodomain 1 is a protein that in humans is encoded by the PITX1 gene.
Function
This gene encodes a member of the RIEG/PITX homeobox family, which is in the bicoid class of homeodomain proteins. Members of this family are involved in organ development and left-right asymmetry. This protein acts as a transcriptional regulator involved in basal and hormone-regulated activity of prolactin.
Clinical relevance
Mutations in this gene have been associated with autism, club foot and polydactyly in humans.
Genetic basis of pathologies
Genomic rearrangements at the PITX1 locus are associated with Liebenberg syndrome; the causal translocations or deletions are thought to insert promoter elements into the PITX1 locus. A missense mutation within the PITX1 locus is associated with the development of autosomal dominant clubfoot.
Interactions
PITX1 has been shown to interact with pituitary-specific positive transcription factor 1.
References
Further reading
External links
http://omim.org/entry/602149 at OMIM holds the most up-to-date information on PITX1.
https://ghr.nlm.nih.gov/gene/PITX1 at the NIH, has a summary on the effects of PITX1 mutations.
Transcription factors | PITX1 | [
"Chemistry",
"Biology"
] | 294 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,797,324 | https://en.wikipedia.org/wiki/UBR5 | E3 ubiquitin-protein ligase UBR5 is an enzyme that in humans is encoded by the UBR5 gene.
Function
This gene encodes a progestin-induced protein, which belongs to the HECT (homology to E6-AP carboxyl terminus) family. The HECT family proteins function as E3 ubiquitin-protein ligases, targeting specific proteins for ubiquitin-mediated proteolysis. This gene is localized to chromosome 8q22 which is disrupted in a variety of cancers. This gene potentially has a role in regulation of cell proliferation or differentiation.
Interactions
UBR5 has been shown to interact with:
CIB1,
Karyopherin alpha 1,
MAPK1, and
TOPBP1.
References
Further reading
External links
Ubr5 : Protein Overview : UCSD-Nature Molecule Pages | UBR5 | [
"Chemistry"
] | 180 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,797,548 | https://en.wikipedia.org/wiki/SELS%20%28gene%29 | Selenoprotein S, also known as SELS, is a human gene.
This gene encodes a selenoprotein, which contains a selenocysteine (Sec) residue at its active site. The selenocysteine is encoded by the UGA codon, which normally signals translation termination. The 3' UTRs of selenoprotein genes have a common stem-loop structure, the Sec insertion sequence (SECIS), that is necessary for the recognition of UGA as a Sec codon rather than as a stop signal. Studies suggest that this protein may regulate cytokine production, and thus play a key role in the control of the inflammatory response. Two alternatively spliced transcript variants encoding the same protein have been found for this gene.
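The UGA recoding described above can be sketched as a toy translator: UGA terminates translation unless a SECIS element licenses reading it as Sec (one-letter code U). The codon table is truncated to the few codons used, and the mRNA is a hypothetical fragment for illustration only; real SECIS recognition involves the 3' UTR structure and dedicated factors, not a simple flag.

```python
# Toy illustration of context-dependent stop-codon recoding.
CODONS = {"AUG": "M", "GGU": "G", "UGA": "*", "UAA": "*"}  # truncated table

def translate(mrna: str, has_secis: bool = False) -> str:
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        codon = mrna[i:i + 3]
        if CODONS[codon] == "*":
            if codon == "UGA" and has_secis:
                protein.append("U")  # recoded as selenocysteine
                continue
            break  # genuine termination signal
        protein.append(CODONS[codon])
    return "".join(protein)

mrna = "AUGGGUUGAGGUUAA"
print(translate(mrna))                  # "MG"   - UGA read as stop
print(translate(mrna, has_secis=True))  # "MGUG" - UGA read through as Sec
```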
Interactions
SELS (gene) has been shown to interact with Valosin-containing protein.
References
Further reading
Selenoproteins | SELS (gene) | [
"Chemistry"
] | 192 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,797,589 | https://en.wikipedia.org/wiki/IL36G | Interleukin-36 gamma previously known as interleukin-1 family member 9 (IL1F9) is a protein that in humans is encoded by the IL36G gene.
Expression
IL36G is well-expressed in the epithelium of the skin, gut, and lung. In the skin IL36G is predominantly expressed in epidermal granular layer keratinocytes with little to no expression in basal layer keratinocytes.
Function
The protein encoded by this gene is a member of the interleukin-1 cytokine family. This gene and eight other interleukin-1 family genes form a cytokine gene cluster on chromosome 2. The activity of this cytokine is mediated via interleukin-1 receptor-like 2 (IL1RL2/IL1R-rp2/IL-36 receptor), and is specifically inhibited by the interleukin-36 receptor antagonist (IL-36RA/IL1F5/IL-1 delta). Interferon-gamma, tumor necrosis factor-alpha and interleukin-1β (IL-1β) are reported to stimulate the expression of this cytokine in keratinocytes. The expression of this cytokine in keratinocytes can also be induced by multiple pathogen-associated molecular patterns (PAMPs). Both IL-36γ mRNA and protein have been linked to psoriasis lesions, and IL-36γ has been used as a biomarker for differentiating between eczema and psoriasis. As with many other interleukin-1 family cytokines, IL-36γ requires proteolytic cleavage of its N-terminus for full biological activity. However, unlike IL-1β, the activation of IL-36γ is inflammasome-independent. IL-36γ is specifically cleaved by the endogenous protease cathepsin S as well as exogenous proteases derived from fungal and bacterial pathogens.
References
Biomarkers
Further reading | IL36G | [
"Biology"
] | 428 | [
"Biomarkers"
] |
14,797,599 | https://en.wikipedia.org/wiki/SALL4 | Sal-like protein 4 (SALL4) is a transcription factor encoded by a member of the Spalt-like (SALL) gene family, SALL4. The SALL genes were identified based on their sequence homology to Spalt, which is a homeotic gene originally cloned in Drosophila melanogaster that is important for terminal trunk structure formation in embryogenesis and imaginal disc development in the larval stages. There are four human SALL proteins (SALL1, 2, 3, and 4) with structural homology and playing diverse roles in embryonic development, kidney function, and cancer. The SALL4 gene encodes at least three isoforms, termed A, B, and C, through alternative splicing, with the A and B forms being the most studied. SALL4 can alter gene expression changes through its interaction with many co-factors and epigenetic complexes. It is also known as a key embryonic stem cell (ESC) factor.
Structure, interaction partners, and DNA binding activity
SALL4 contains one zinc finger in its amino (N-) terminus and three clusters of zinc fingers, each of which coordinates zinc with two cysteines and two histidines (Cys2His2-type) and potentially confers nucleic acid binding activity. SALL4B lacks two of the zinc finger clusters found in the A isoform. It remains unclear which zinc finger cluster is responsible for SALL4's DNA-binding activity.
Different SALL family members can form hetero- or homodimers via their conserved glutamine (Q)-rich region. SALL4 has at least one canonical nuclear localization signal (NLS) with the K-K/R-X-K/R motif in the N-terminal portion of the protein shared among both A and B isoforms (residues 64–67). One report has suggested that with a mutated NLS sequence, SALL4 cannot localize to the nucleus. Through a 12-amino acid sequence in its N-terminus (N-12a.a.), SALL4 binds to retinoblastoma binding protein 4 (RBBP4), a subunit of the nucleosome remodeling and histone deacetylation (NuRD) complex, which also contains chromodomain-helicase-DNA binding proteins (CHD3/4 or Mi-2a/b), metastasis-associated proteins (MTA), methyl-CpG-binding domain proteins (MBD2 or MBD3), and histone deacetylases (HDAC1 and HDAC2). This association allows SALL4 to act as a transcriptional repressor. Accordingly, SALL4 has been shown to localize to heterochromatin regions in cells, for which its last zinc finger cluster (shared between SALL4A and B) is necessary. Besides the NuRD complex, SALL4 is reportedly able to bind to other epigenetic modifiers such as histone lysine-specific demethylase 1 (LSD1), which is frequently associated with the NuRD complex and, consequently, with gene repression. In addition, SALL4 can also activate gene expression via the recruitment of the mixed lineage leukemia (MLL) protein, which is a homolog of Drosophila Trithorax and yeast Set1 proteins and has histone 3 lysine 4 (H3K4) trimethylation activity. This interaction is best characterized in the co-regulation of the HOXA9 gene by SALL4 and MLL in leukemic cells.
In mouse ESCs, Sall4 was found to bind the essential stem cell factor, octamer-binding transcription factor 4 (Oct4), in two separate unbiased mass spectrometry screens. Sall4 can also bind other important pluripotency proteins such as Nanog and sex determining region Y (SRY)-box 2 protein (Sox2). Together these proteins can affect each other's expression patterns as well as their own, thus forming a mESC-specific transcriptional regulatory circuit. SALL4 has also been reported to bind T-box 5 protein (Tbx5) in cardiac tissues as well as to interact genetically with Tbx5 in mouse limb development. Other binding partners of SALL4 include promyelocytic leukemia zinc finger protein (PLZF) in sperm precursor cells, Rad50 during DNA damage repair, and b-catenin downstream of the Wnt signaling pathway. Since most of these interactions were identified by mass spectrometry or co-immunoprecipitation, whether they are direct is unknown. Through chromatin immunoprecipitation (ChIP) followed by next-generation sequencing or microarray, some SALL4 targets have been identified. A key verified target gene encodes the enzyme phosphatidylinositol-3,4,5-trisphosphate 3-phosphatase (PTEN). PTEN is a tumor suppressor that keeps uncontrolled cell growth in check by inducing programmed cell death, or apoptosis. SALL4 binds the PTEN promoter and recruits the NuRD complex to mediate its repression, thus leading to cell proliferation.
Expression and role in stem cells and development
In mouse embryos, SALL4 expression is detectable as early as the two-cell stage. Its expression persists through the 8- and 16-cell stages to the blastocyst, where it is found in some cells of the trophectoderm and inner cell mass (ICM), from which mouse ESCs are derived. SALL4 is an important factor for maintaining the “stemness” of ESCs of both mouse and human origin, since loss of Sall4 leads to differentiation of these pluripotent cells down the trophectoderm lineage. This is possibly due to down-regulation of Pou5f1 (encoding Oct4) expression and up-regulation of caudal-type homeobox 2 (Cdx2) gene expression. Sall4 is part of the transcriptional regulatory network that includes other pluripotency factors such as Oct4, Nanog, and Sox2. Because of its important role in early development, genetically mutated mice without functioning SALL4 die early, at the peri-implantation stage, while heterozygous mice have neural, kidney, and heart defects and limb abnormalities.
Clinical significance
The various SALL4-null mouse models mimic human mutations in the SALL4 gene, which were shown to cause developmental problems in patients with Okihiro/Duane-Radial-ray syndrome. These individuals frequently have family history of hand malformation and eye movement disorders.
SALL4 expression is low to undetectable in most adult tissues with the exception of germ cells and human blood progenitor cells. However, SALL4 is re-activated and mis-regulated in various cancers such as acute myeloid leukemia (AML), B-cell acute lymphocytic leukemia (B-ALL), germ cell tumors, gastric cancer, breast cancer, hepatocellular carcinoma (HCC), lung cancer, and glioma. In many of these cancers, SALL4 expression was compared in tumor cells to the normal tissue counterpart, e.g. it is expressed in nearly half of primary human endometrial cancer samples, but not in normal or hyperplastic endometrial tissue samples. Often, SALL4 expression is correlated with worse survival and poor prognosis such as in HCC, or with metastasis such as in endometrial cancer, colorectal carcinoma, and esophageal squamous cell carcinoma. It is unclear how SALL4 expression is de-regulated in malignant cells, but DNA hypomethylation in its intron 1 region has been observed in B-ALL.
In breast cancer, signal transducer and activator of transcription 3 (STAT3) has been reported to directly activate SALL4 expression. Furthermore, canonical Wnt signaling has been proposed to activate SALL4 gene expression both in development and in cancer. In leukemia, the mechanism of SALL4 function is better characterized; mice over-expressing human SALL4 develop myelodysplastic syndrome (MDS)-like symptoms and eventually AML. This is consistent with high levels of SALL4 expression correlating with high-risk MDS patients. Further elucidating its tumorigenic function, knocking down SALL4 expression with short hairpin RNA in leukemic cells, or treating these cells with a peptide that mimics the N-terminal 12 amino acids of SALL4 to inhibit its interaction with the NuRD complex, both result in cell death. These findings suggest that the primary cancer-maintaining property of SALL4 is mediated through its transcriptional repressing function. These observations have led to growing interest in SALL4 as both a diagnostic tool and a therapeutic target in cancer. For example, in solid tumors such as germ cell tumors, SALL4 protein expression has become a standard diagnostic biomarker.
Notes
References
Further reading
External links
GeneReviews/NCBI/NIH/UW entry on SALL4-Related Disorders
Transcription factors | SALL4 | [
"Chemistry",
"Biology"
] | 1,961 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,797,652 | https://en.wikipedia.org/wiki/CACNB4 | Voltage-dependent L-type calcium channel subunit beta-4 is a protein that in humans is encoded by the CACNB4 gene.
Function
This gene encodes a member of the beta subunit family, a protein in the voltage-dependent calcium channel complex. Calcium channels mediate the influx of calcium ions into the cell upon membrane depolarization and consist of a complex of alpha-1, alpha-2/delta, beta, and gamma subunits in a 1:1:1:1 ratio. Various versions of each of these subunits exist, either expressed from similar genes or the result of alternative splicing. The protein encoded by this gene plays an important role in calcium channel function by modulating G protein inhibition, increasing peak calcium current, controlling alpha-1 subunit membrane targeting, and shifting the voltage dependence of activation and inactivation. Alternate transcriptional splice variants of this gene, encoding different isoforms, have been characterized.
Clinical significance
Certain mutations in this gene have been associated with idiopathic generalized epilepsy (IGE) and juvenile myoclonic epilepsy (JME).
Interactions
CACNB4 has been shown to interact with Cav2.1.
See also
Voltage-dependent calcium channel
References
Further reading
External links
Ion channels | CACNB4 | [
"Chemistry"
] | 259 | [
"Neurochemistry",
"Ion channels"
] |
14,797,658 | https://en.wikipedia.org/wiki/40S%20ribosomal%20protein%20S13 | 40S ribosomal protein S13 is a protein that in humans is encoded by the RPS13 gene.
Function
Ribosomes, the organelles that catalyze protein synthesis, consist of a small 40S subunit and a large 60S subunit. Together these subunits are composed of 4 RNA species and approximately 80 structurally distinct proteins. This gene encodes a ribosomal protein that is a component of the 40S subunit. The protein belongs to the S15P family of ribosomal proteins. It is located in the cytoplasm. The protein has been shown to bind to the 5.8S rRNA in rat. The gene product of the E. coli ortholog (ribosomal protein S15) functions at early steps in ribosome assembly. This gene is co-transcribed with two U14 small nucleolar RNA genes, which are located in its third and fifth introns. As is typical for genes encoding ribosomal proteins, there are multiple processed pseudogenes of this gene dispersed through the genome.
Interactions
RPS13 has been shown to interact with PDCD4.
References
Further reading
External links
Ribosomal proteins | 40S ribosomal protein S13 | [
"Chemistry"
] | 233 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,797,661 | https://en.wikipedia.org/wiki/40S%20ribosomal%20protein%20S16 | 40S ribosomal protein S16 is a protein that in humans is encoded by the RPS16 gene.
Ribosomes, the organelles that catalyze protein synthesis, consist of a small 40S subunit and a large 60S subunit. Together these subunits are composed of 4 RNA species and approximately 80 structurally distinct proteins. This gene encodes a ribosomal protein that is a component of the 40S subunit. The protein belongs to the S9P family of ribosomal proteins. It is located in the cytoplasm. As is typical for genes encoding ribosomal proteins, there are multiple processed pseudogenes of this gene dispersed through the genome.
Interactions
Ribosomal protein S16 is one of the proteins from the small ribosomal subunit. It belongs to a ribosomal protein family that is divided into three groups based on sequence similarity:
* Eubacterial S16.
* Algal and plant chloroplast S16.
* Cyanelle S16.
* Neurospora crassa mitochondrial S24 (cyt-21).
S16 proteins have about 100 amino-acid residues. There are two paralogues in Arabidopsis thaliana: RPS16-1 (chloroplastic) and RPS16-2 (targeted to both the chloroplast and the mitochondrion).
RPS16 has been shown to interact with CDC5L.
References
Further reading
Ribosomal proteins | 40S ribosomal protein S16 | [
"Chemistry"
] | 304 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,797,738 | https://en.wikipedia.org/wiki/SOX5 | Transcription factor SOX-5 is a protein that in humans is encoded by the SOX5 gene.
Function
This gene encodes a member of the SOX (SRY-related HMG-box) family of transcription factors involved in the regulation of embryonic development and in the determination of cell fate. The encoded protein may act as a transcriptional regulator after forming a protein complex with other proteins. The encoded protein may play a role in chondrogenesis. A pseudogene of this gene is located on chromosome 8. Multiple transcript variants encoding distinct isoforms have been identified for this gene.
Mutations in the SOX5 gene can cause Lamb-Shaffer syndrome.
See also
SOX genes
References
Further reading
Transcription factors | SOX5 | [
"Chemistry",
"Biology"
] | 144 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,797,899 | https://en.wikipedia.org/wiki/Mainichi%20Design%20Prize | The , originally the New Japan Design Competition, is an annual award given to outstanding Japanese designers. The award, founded in 1952, is sponsored by Japanese newspaper Mainichi Shimbun. It is considered Japan's most prestigious award for design.
Recipients
Daito Manabe, 2016
Naoki Takizawa, 1998
References
External links
Mainichi Design Prize Homepage
Design awards
Japanese awards
Awards established in 1952
1952 establishments in Japan
Japanese design | Mainichi Design Prize | [
"Engineering"
] | 88 | [
"Design stubs",
"Design",
"Design awards"
] |
14,797,935 | https://en.wikipedia.org/wiki/HIST1H2AK | Histone H2A type 1 is a protein that in humans is encoded by the HIST1H2AK gene.
Histones are basic nuclear proteins that are responsible for the nucleosome structure of the chromosomal fiber in eukaryotes. Two molecules of each of the four core histones (H2A, H2B, H3, and H4) form an octamer, around which approximately 146 bp of DNA is wrapped in repeating units, called nucleosomes. The linker histone, H1, interacts with linker DNA between nucleosomes and functions in the compaction of chromatin into higher order structures. This gene is intronless and encodes a member of the histone H2A family. Transcripts from this gene lack polyA tails but instead contain a palindromic termination element. This gene is found in the small histone gene cluster on chromosome 6p22-p21.3.
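The packaging numbers above (an octamer of 2× each core histone, ~146 bp of wrapped DNA per nucleosome) lend themselves to a back-of-the-envelope estimate. A minimal sketch, assuming an illustrative ~200 bp nucleosome repeat (core plus linker; the linker length is an assumption here and varies between tissues):

```python
# Toy estimate of nucleosome packaging from the figures in the text:
# ~146 bp of DNA wraps each histone octamer (2x H2A, H2B, H3, H4),
# separated by linker DNA bound by histone H1.
CORE_BP = 146    # bp wrapped around one octamer (from the text)
LINKER_BP = 54   # assumed linker length -> ~200 bp repeat (illustrative)


def nucleosome_count(dna_bp):
    """Approximate number of nucleosomes packaging `dna_bp` of DNA."""
    repeat = CORE_BP + LINKER_BP
    return dna_bp // repeat


def core_histone_molecules(dna_bp):
    """Each octamer contains two copies of each of the four core histones."""
    return nucleosome_count(dna_bp) * 8


if __name__ == "__main__":
    region_bp = 100_000  # a hypothetical 100 kb region
    print(nucleosome_count(region_bp))        # -> 500
    print(core_histone_molecules(region_bp))  # -> 4000
```

The integer division reflects the simplification that nucleosomes are evenly spaced; real chromatin is far less regular.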
References
Further reading | HIST1H2AK | [
"Chemistry"
] | 206 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,798,104 | https://en.wikipedia.org/wiki/HOPX | Homeodomain-only protein is a protein that in humans is encoded by the HOPX gene. It is an important regulator of cardiac development and a marker of hippocampal neural stem cells.
Function
The protein encoded by this gene is a homeodomain protein that lacks certain conserved residues required for DNA binding. It was reported that choriocarcinoma cell lines and tissues failed to express this gene, which suggested the possible involvement of this gene in malignant conversion of placental trophoblasts. Studies in mice suggested that this protein may interact with serum response factor (SRF) and modulate SRF-dependent cardiac-specific gene expression and cardiac development. Multiple alternatively spliced transcript variants encoding the same protein have been observed, although only some have been fully characterized.
References
Further reading
External links
Transcription factors | HOPX | [
"Chemistry",
"Biology"
] | 175 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,798,147 | https://en.wikipedia.org/wiki/GD%2066 | GD 66 or V361 Aurigae is a 0.64 solar mass () pulsating white dwarf star located 170 light years from Earth in the Auriga constellation. The estimated cooling age of the white dwarf is 500 million years. Models of the relationship between the initial mass of a star and its final mass as a white dwarf star suggest that when the star was on the main sequence it had a mass of approximately 2.5 , which implies its lifetime was around 830 million years. The total age of the star is thus estimated to be in the range 1.2 to 1.7 billion years.
In 1983, Noël Dolez et al. discovered that GD 66 is a variable star, from photometric data obtained at Haute-Provence Observatory. It was given its variable star designation, V361 Aurigae, in 1985. The star is a pulsating white dwarf of type DAV, with an extremely stable period. Small variations in the phase of pulsation led to the suggestion that the star was being orbited by a giant planet which caused the pulsations to be delayed due to the varying distance to the star caused by the reflex motion about the system's centre-of-mass. Observations with the Spitzer Space Telescope failed to directly detect the planet, which put an upper limit on the mass of 5–6 Jupiter masses. Investigation of a separate pulsation mode revealed timing variations in antiphase with the variations in the originally-analysed pulsation mode. This would not be the case if the variations were caused by an orbiting planet, and thus the timing variations must have a different cause. This illustrates the potential dangers of attempting to detect planets by white dwarf pulsation timing.
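The total-age estimate above is simple bookkeeping: cooling age plus the progenitor's main-sequence lifetime. A quick sketch using the article's figures; the t ∝ M^-2.5 lifetime scaling used as a cross-check is a standard textbook approximation, not taken from the article:

```python
# Rough age bookkeeping for GD 66, using figures quoted in the text.
M_INITIAL = 2.5         # progenitor mass in solar masses (initial-final mass relation)
COOLING_AGE_GYR = 0.5   # estimated white-dwarf cooling age
MS_LIFETIME_GYR = 0.83  # quoted main-sequence lifetime of the progenitor

total_age = COOLING_AGE_GYR + MS_LIFETIME_GYR
print(f"total age ~ {total_age:.2f} Gyr")  # ~1.33 Gyr, inside the quoted 1.2-1.7 Gyr range

# Cross-check with a textbook scaling t_MS ~ 10 Gyr * (M/Msun)^-2.5
# (the exponent depends on the stellar models used, so only the order
# of magnitude is meaningful):
t_scaling = 10.0 * M_INITIAL ** -2.5
print(f"scaling estimate ~ {t_scaling:.2f} Gyr")
```

The scaling estimate (~1 Gyr) agrees with the quoted ~830 Myr only to within the roughness of the approximation, which is why the article's range spans 1.2 to 1.7 billion years.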
References
External links
V361 Aurigae Catalog
WD 0517+307 Catalog
Image GD 66
Pulsating white dwarfs
Aurigae, V361
Auriga
Hypothetical planetary systems | GD 66 | [
"Astronomy"
] | 400 | [
"Auriga",
"Constellations"
] |
14,798,154 | https://en.wikipedia.org/wiki/UBE1C | NEDD8-activating enzyme E1 catalytic subunit is a protein that in humans is encoded by the UBA3 gene.
The modification of proteins with ubiquitin is an important cellular mechanism for targeting abnormal or short-lived proteins for degradation. Ubiquitination involves at least three classes of enzymes: ubiquitin-activating enzymes, or E1s, ubiquitin-conjugating enzymes, or E2s, and ubiquitin-protein ligases, or E3s. This gene encodes a member of the E1 ubiquitin-activating enzyme family. The encoded enzyme associates with AppBp1, an amyloid beta precursor protein binding protein, to form a heterodimer, and then the enzyme complex activates NEDD8, a ubiquitin-like protein, which regulates cell division, signaling and embryogenesis. Multiple alternatively spliced transcript variants encoding distinct isoforms have been found for this gene.
This enzyme contains an E2 binding domain, which resembles ubiquitin, and recruits the catalytic core of the E2 enzyme UBE2M (Ubc12) in a similar manner to that in which ubiquitin interacts with ubiquitin binding domains.
Interactions
UBE1C has been shown to interact with NEDD8, APPBP1 and UBE2M.
References
Further reading
External links
PDBe-KB provides an overview of all the structure information available in the PDB for Human NEDD8-activating enzyme E1 catalytic subunit (UBE1C)
Protein domains | UBE1C | [
"Biology"
] | 330 | [
"Protein domains",
"Protein classification"
] |
14,798,185 | https://en.wikipedia.org/wiki/USP8 | Ubiquitin carboxyl-terminal hydrolase 8 is an enzyme that in humans is encoded by the USP8 gene.
Interactions
USP8 has been shown to interact with RNF41 and STAM2.
Diseases
In a few cases of Cushing's disease, mutations of USP8 have been found.
References
Further reading | USP8 | [
"Chemistry"
] | 74 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,798,224 | https://en.wikipedia.org/wiki/BUB3 | Mitotic checkpoint protein BUB3 is a protein that in humans is encoded by the BUB3 gene.
Bub3 is a protein involved in the regulation of the spindle assembly checkpoint (SAC); though BUB3 is non-essential in yeast, it is essential in higher eukaryotes. As one of the checkpoint proteins, Bub3 delays the irreversible onset of anaphase by directing kinetochore localization during prometaphase to achieve biorientation. By directing the kinetochore-microtubule interaction, it ensures the proper (and consequently bioriented) attachment of the chromosomes prior to anaphase. Bub3 and the related proteins that form the SAC inhibit the action of the anaphase-promoting complex (APC), preventing early anaphase entry and mitotic exit; this serves as a mechanism for ensuring the fidelity of chromosome segregation.
Function
Bub3 is a crucial component of the mitotic spindle assembly checkpoint, within which it forms a complex with other checkpoint proteins. For correct segregation, all mitotic spindle microtubules must attach correctly to the kinetochore of each chromosome. This is monitored by the spindle checkpoint complex, which operates as a feedback mechanism: if a defect in attachment is signaled, mitosis is halted to ensure that all chromosomes achieve amphitelic attachment to the spindle. After the error is corrected, the cell proceeds to anaphase. The proteins that regulate this arrest are BUB1, BUB2, BUB3 (this protein), Mad1, Mad2, Mad3, and MPS1.
Role in the spindle assembly checkpoint
At unattached kinetochores, a complex consisting of BubR1, Bub3, and Cdc20 interacts with the Mad2-Cdc20 complex to inhibit the APC, thus preventing the formation of active APC-Cdc20. Bub3 binds constitutively to BubR1; in this arrangement, Bub3 acts as a key component of the SAC in the formation of an inhibitory complex. Securin and cyclin B are also stabilized by the unattached kinetochores before the anaphase transition. The stabilization of cyclin and securin prevents the degradation that would otherwise lead to rapid, irreversible separation of the sister chromatids.
The formation of these “inhibitory complexes” and steps feed into a ‘wait’ signal before activation of separase; at the stage prior to anaphase, securin inhibits the activity of separase and maintains the cohesion complex.
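The 'wait' logic described above amounts to a gate: the APC (and hence separase) stays off while any kinetochore remains unattached. A minimal boolean caricature of that logic, with no molecular detail and with function names chosen here for illustration:

```python
# Toy model of the spindle assembly checkpoint 'wait' signal:
# unattached kinetochores -> inhibitory complexes -> APC/C off -> separase off.

def apc_active(kinetochores):
    """APC/C-Cdc20 becomes active only once every kinetochore is attached.

    `kinetochores` is a list of booleans: True = attached, False = unattached.
    """
    return all(kinetochores)


def separase_active(kinetochores):
    """Separase is released (securin degraded) only after APC/C turns on."""
    return apc_active(kinetochores)


# A single unattached kinetochore is enough to hold anaphase:
assert not separase_active([True, True, False])
# Once the last attachment is made, the checkpoint is satisfied:
assert separase_active([True, True, True])
```

The all-or-nothing `all()` mirrors the biology: one unattached kinetochore suffices to keep the inhibitory complexes forming and anaphase delayed.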
Structure
The crystal structure of Bub3 indicates a protein with a seven-bladed beta-propeller fold containing WD40 repeats, each blade formed by four anti-parallel beta-strands organized around a tapered channel. Mutation data suggest several important surfaces of interaction for the formation of the SAC, particularly the conserved tryptophans (in blades 1 and 3) and the conserved VAVE sequences in blade 5.
Rae1 (an mRNA export factor), another member of the WD40 protein family, shows high sequence conservation with that of Bub3. Both bind to Gle2p-binding-sequence (GLEBS) motifs; while Bub3 specifically binds Mad3 and Bub1, Rae1 has more promiscuous binding as it binds both the nuclear pore complex and Bub1. This indicates a similarity in interaction of Bub3 and Rae1 with Bub1.
Interactions
BUB3 has been shown to interact with BUB1B, HDAC1 and Histone deacetylase 2.
Bub3 has been shown to form complexes with Mad1-Bub1 and with Cdc20 (the interaction of which does not require intact kinetochores). Additionally, it has been shown to bind Mad2 and Mad3.
Bub3 directs the localization of Bub1 at the kinetochore in order to activate the SAC. In both Saccharomyces cerevisiae and metazoans, Bub3 has been shown to bind BubR1 and Bub1.
The components that are essential for the spindle assembly checkpoint in yeast have been determined to be Bub1, Bub3, Mad1, Mad2, Mad3, and the increasingly important Mps1 (a protein kinase).
Regulation
When the SAC is activated, formation of the Bub3-Cdc20 complex is promoted. After kinetochore attachment is complete, the concentration of the spindle checkpoint complexes (including BubR1-Bub3) decreases.
Bub3 also acts as a regulator in that it affects binding of Mad3 to Mad2.
Structural and sequence analysis indicated the existence of three conserved regions that are referred to as WD40 repeats. Mutation of one of these motifs has indicated an impaired ability of Bub3 to interact with Mad2, Mad3, and Cdc20. The structural data suggested that Bub3 acts as a platform that mediates the interaction of SAC protein complexes.
Clinical significance
BUB3 forms a complex with BUB1 (the BUB1/BUB3 complex) that inhibits the anaphase-promoting complex or cyclosome (APC/C) as soon as the spindle-assembly checkpoint is activated. Within this complex, the BUB1 kinase phosphorylates:
CDC20 (the APC/C activator), thereby inhibiting the ubiquitin ligase activity of APC/C.
MAD1L1, which interacts with BUB1 and BUBR1; in turn, the BUB1/BUB3 complex interacts with MAD1L1.
Another function of BUB3 is to promote correct kinetochore-microtubule (K-MT) attachments when the spindle-assembly checkpoint is active. It plays a role in the localization of kinetochore of BUB1.
In oocyte meiosis, BUB3 serves as a regulator of chromosome segregation.
Defects in BUB3 in the cell cycle can contribute to the following diseases:
hepatocellular carcinoma
gastric cancer
breast cancer
cervical cancer
adenomatous polyposis
osteosarcoma
familial breast cancer
glioblastoma
cervicitis
lung carcinoma
polyposis coli
References
Further reading
External links
Proteins | BUB3 | [
"Chemistry"
] | 1,351 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
14,798,249 | https://en.wikipedia.org/wiki/MATR3 | Matrin-3 is a protein that in humans is encoded by the MATR3 gene.
Function
The protein encoded by this gene is localized in the nuclear matrix. It may play a role in transcription or may interact with other nuclear matrix proteins to form the internal fibrogranular network. Two transcript variants encoding the same protein have been identified for this gene.
Pathology
Mutations in the Matrin 3 gene are associated with familial amyotrophic lateral sclerosis.
References
Further reading | MATR3 | [
"Chemistry"
] | 102 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,798,281 | https://en.wikipedia.org/wiki/MAFB%20%28gene%29 | Transcription factor MafB also known as V-maf musculoaponeurotic fibrosarcoma oncogene homolog B is a protein that in humans is encoded by the MAFB gene. This gene maps to chromosome 20q11.2-q13.1, consists of a single exon and spans around 3 kb.
Function
MafB is a basic leucine zipper (bZIP) transcription factor that plays an important role in the regulation of lineage-specific hematopoiesis. The encoded nuclear protein represses ETS1-mediated transcription of erythroid-specific genes in myeloid cells.
Clinical significance
Mutations in the murine Mafb gene are responsible for the kreisler (kr) mutant mouse, which presents abnormal segmentation of the hindbrain and exhibits hyperactive behavior, including head tossing and running in circles. These mice die at birth due to renal failure, whereas Mafb-/- mice die of central apnea.
Recently, single-nucleotide polymorphisms (SNPs) near MAFB have been found associated with nonsyndromic cleft lip and palate. The GENEVA Cleft Consortium study, a genomewide association study involving 1,908 case-parent trios from Europe, the United States, China, Taiwan, Singapore, Korea, and the Philippines, first identified MAFB as being associated with cleft lip and/or palate with stronger genome-wide significance in Asian than European populations. The difference in populations could reflect variable coverage by available markers or true allelic heterogeneity. In mouse models, Mafb mRNA and protein were detected in both craniofacial ectoderm and neural crest-derived mesoderm between embryonic days 13.5 and 14.5; expression was strong in the epithelium around the palatal shelves and in the medial edge epithelium during palatal fusion. After fusion, Mafb expression was stronger in oral epithelium compared to mesenchymal tissue. In addition, sequencing analysis detected a new missense mutation in the Filipino population, H131Q, that was significantly more frequent in cases than in matched controls. The gene-poor regions either side of the MAFB gene include numerous binding sites for transcription factors that are known to have a role in palate development.
References
Further reading
External links
Transcription factors | MAFB (gene) | [
"Chemistry",
"Biology"
] | 497 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,798,403 | https://en.wikipedia.org/wiki/HOXB13 | Homeobox protein Hox-B13 is a protein that in humans is encoded by the HOXB13 gene.
Function
This gene encodes a transcription factor that belongs to the homeobox gene family. Genes of this family are highly conserved among vertebrates and essential for vertebrate embryonic development. This gene has been implicated in fetal skin development and cutaneous regeneration. In mice, a similar gene was shown to exhibit temporal and spatial colinearity in the main body axis of the embryo, but was not expressed in the secondary axes, which suggests functions in body patterning along the axis. This gene and other HOXB genes form a gene cluster on chromosome 17 in the 17q21.32 region.
Men who inherit a rare (<0.1% in a selected group of patients without clinical signs of prostate cancer) genetic variant in HOXB13 (G84E or rs138213197) have a 10-20-fold increased risk of prostate cancer.
See also
Homeobox
References
Further reading
External links
Transcription factors | HOXB13 | [
"Chemistry",
"Biology"
] | 219 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,798,410 | https://en.wikipedia.org/wiki/CREB3 | Cyclic AMP-responsive element-binding protein 3 is a protein that in humans is encoded by the CREB3 gene.
This gene encodes a transcription factor that is a member of the leucine zipper family of DNA binding proteins. This protein binds to the cAMP-responsive element, an octameric palindrome. The protein interacts with host cell factor C1, which also associates with the herpes simplex virus (HSV) protein VP16 that induces transcription of HSV immediate-early genes. This protein and VP16 both bind to the same site on host cell factor C1. It is thought that the interaction between this protein and host cell factor C1 plays a role in the establishment of latency during HSV infection. An additional transcript variant has been identified, but its biological validity has not been determined.
Interactions
CREB3 has been shown to interact with Host cell factor C1.
See also
CREB
References
Further reading
External links
Transcription factors | CREB3 | [
"Chemistry",
"Biology"
] | 195 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,798,425 | https://en.wikipedia.org/wiki/SPTLC1 | Serine palmitoyltransferase, long chain base subunit 1, also known as SPTLC1, is a protein which in humans is encoded by the SPTLC1 gene.
Serine palmitoyltransferase, which consists of two different subunits, is the initial enzyme in sphingolipid biosynthesis. It converts L-serine and palmitoyl CoA to 3-oxosphinganine with pyridoxal 5'-phosphate as a cofactor. The product of this gene is the long chain base subunit 1 of serine palmitoyltransferase. Mutations in this gene were identified in patients with hereditary sensory neuropathy type 1, macular disease, and juvenile amyotrophic lateral sclerosis. Alternatively spliced variants encoding different isoforms have been identified.
References
Further reading
External links
GeneReviews/NIH/NCBI/UW entry on Hereditary Sensory Neuropathy Type I | SPTLC1 | [
"Chemistry"
] | 204 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,798,494 | https://en.wikipedia.org/wiki/Electrochemiluminescence | Electrochemiluminescence or electrogenerated chemiluminescence (ECL) is a kind of luminescence produced during electrochemical reactions in solutions. In electrogenerated chemiluminescence, electrochemically generated intermediates undergo a highly exergonic reaction to produce an electronically excited state that then emits light upon relaxation to a lower-level state. The wavelength of the emitted photon corresponds to the energy gap between these two states. ECL excitation can be caused by energetic electron-transfer (redox) reactions of electrogenerated species. Such luminescence excitation is a form of chemiluminescence in which one or all of the reactants are produced electrochemically at the electrodes.
ECL is usually observed during application of a potential (several volts) to the electrodes of an electrochemical cell containing a solution of luminescent species (polycyclic aromatic hydrocarbons, metal complexes, quantum dots, or nanoparticles) in an aprotic organic solvent (the ECL composition).
In organic solvents, both oxidized and reduced forms of the luminescent species can be produced at different electrodes simultaneously, or at a single electrode by sweeping its potential between oxidation and reduction. The excitation energy is obtained from the recombination of the oxidized and reduced species.
In aqueous media, which are mostly used for analytical applications, simultaneous oxidation and reduction of luminescent species is difficult to achieve because of the electrochemical splitting of water itself, so ECL reactions with coreactants are used instead. In the latter case, the luminescent species is oxidized at the electrode together with the coreactant, which yields a strong reducing agent after some chemical transformations (the oxidative-reduction mechanism).
Applications
ECL proved to be very useful in analytical applications as a highly sensitive and selective method. It combines analytical advantages of chemiluminescent analysis (absence of background optical signal) with ease of reaction control by applying electrode potential. As an analytical technique it presents outstanding advantages over other common analytical methods due to its versatility, simplified optical setup compared with photoluminescence (PL), and good temporal and spatial control compared with chemiluminescence (CL). Enhanced selectivity of ECL analysis is reached by variation of electrode potential thus controlling species that are oxidized/reduced at the electrode and take part in ECL reaction (see electrochemical analysis).
ECL generally uses ruthenium complexes, especially [Ru(bpy)3]2+ (bpy = 2,2'-bipyridine), which emits a photon at ~620 nm while being regenerated with TPrA (tripropylamine) in the liquid phase or at a liquid–solid interface. The complex can be used as a monolayer immobilized on an electrode surface (made, e.g., of Nafion, or of thin films prepared by the Langmuir–Blodgett or self-assembly techniques), as a coreactant, or, most commonly, as a tag: in HPLC, Ru-tagged antibody-based immunoassays, Ru-tagged DNA probes for PCR, biosensors based on NADH or H2O2 generation, oxalate and organic amine detection, and many other applications. It can be detected at picomolar sensitivity with a dynamic range of more than six orders of magnitude. Photon detection is performed with photomultiplier tubes (PMTs), silicon photodiodes, or gold-coated fiber-optic sensors. The importance of ECL detection for bio-related applications is well established, and ECL is used commercially for many clinical laboratory applications.
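The ~620 nm emission quoted for [Ru(bpy)3]2+ maps directly to the excited-state energy gap via E = hc/λ. A quick check, using the standard hc ≈ 1239.84 eV·nm conversion constant:

```python
# Photon energy corresponding to the ~620 nm emission of [Ru(bpy)3]2+.
# hc ~ 1239.84 eV*nm is the standard wavelength-to-energy conversion constant.
HC_EV_NM = 1239.84


def photon_energy_ev(wavelength_nm):
    """Energy in eV of a photon with the given wavelength in nm: E = hc / lambda."""
    return HC_EV_NM / wavelength_nm


e_gap = photon_energy_ev(620.0)
print(f"{e_gap:.2f} eV")  # ~2.00 eV gap between the excited and ground states
```

So the red-orange ECL of the ruthenium tag corresponds to an energy gap of about 2 eV, consistent with the statement above that the emitted wavelength reflects the gap between the excited and relaxed states.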
See also
Biosensors
Chemiluminescence
Electrochemistry
Luminescence
References
Luminescence
Photoelectrochemistry | Electrochemiluminescence | [
"Chemistry"
] | 778 | [
"Photoelectrochemistry",
"Luminescence",
"Molecular physics",
"Electrochemistry"
] |
14,798,555 | https://en.wikipedia.org/wiki/CRYGS | Gamma-crystallin S is a protein that in humans is encoded by the CRYGS gene.
Crystallins are separated into two classes: taxon-specific, or enzyme, and ubiquitous. The latter class constitutes the major proteins of vertebrate eye lens and maintains the transparency and refractive index of the lens. Since lens central fiber cells lose their nuclei during development, these crystallins are made and then retained throughout life, making them extremely stable proteins.
Mammalian lens crystallins are divided into alpha, beta, and gamma families; beta and gamma crystallins are also considered a superfamily. The alpha and beta families are further divided into acidic and basic groups. Seven protein regions exist in crystallins: four homologous motifs, a connecting peptide, and N- and C-terminal extensions. Gamma-crystallins are a homogeneous group of highly symmetrical, monomeric proteins typically lacking connecting peptides and terminal extensions. They are differentially regulated after early development. This gene encodes a protein initially considered to be a beta-crystallin, but the encoded protein is monomeric and has greater sequence similarity to other gamma-crystallins. This gene encodes the most significant gamma-crystallin in adult eye lens tissue.
Whether due to aging or to mutations in specific genes, gamma-crystallins have been implicated in cataract formation.
References
External links
Further reading | CRYGS | [
"Chemistry"
] | 271 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,798,711 | https://en.wikipedia.org/wiki/GABPB2 | GA-binding protein subunit beta-1 is a protein that in humans is encoded by the GABPB1 gene.
This gene encodes the GA-binding protein transcription factor, beta subunit. This protein forms a tetrameric complex with the alpha subunit, and stimulates transcription of target genes. The encoded protein may be involved in activation of cytochrome oxidase expression and nuclear control of mitochondrial function. The crystal structure of a similar protein in mouse has been resolved as a ternary protein complex. Multiple transcript variants encoding distinct isoforms have been identified for this gene.
References
Further reading
External links
Transcription factors | GABPB2 | [
"Chemistry",
"Biology"
] | 126 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,798,782 | https://en.wikipedia.org/wiki/BRD4 | Bromodomain-containing protein 4 is a protein that in humans is encoded by the BRD4 gene.
BRD4 is a member of the BET (bromodomain and extra terminal domain) family, which also includes BRD2, BRD3, and BRDT. BRD4, similar to other BET family members, contains two bromodomains that recognize acetylated lysine residues. BRD4 also has an extended C-terminal domain with little sequence homology to other BET family members.
Structure
The two bromodomains in BRD4, termed BD1 and BD2, consist of 4 alpha-helices linked by 2 loops. The ET domain structure is made up of 3 alpha-helices and a loop. The C-terminal domain of BRD4 has been implicated in promoting gene transcription through interaction with the transcription elongation factor P-TEFb and RNA polymerase II.
Function
The protein encoded by this gene is homologous to the murine protein MCAP, which associates with chromosomes during mitosis, and to the human BRD2 (RING3) protein, a serine/threonine kinase. Each of these proteins contains two bromodomains, a conserved sequence motif which may be involved in chromatin targeting. This gene has been implicated as the chromosome 19 target of translocation t(15;19)(q13;p13.1), which defines the NUT midline carcinoma. Two alternatively spliced transcript variants have been described.
Role in cancer
Most cases of NUT midline carcinoma involve translocation of the BRD4 gene with NUT genes. BRD4 is often required for expression of Myc and other "tumor driving" oncogenes in hematologic cancers including multiple myeloma, acute myelogenous leukemia and acute lymphoblastic leukemia.
BRD4 is a major target of BET inhibitors, a class of pharmaceutical drugs currently being evaluated in clinical trials.
Interactions
Notably, BRD4 interacts with P-TEFb via its P-TEFb interaction domain (PID), thereby stimulating its kinase activity and its phosphorylation of the carboxy-terminal domain (CTD) of RNA polymerase II.
BRD4 has been shown to interact with GATA1, JMJD6, RFC2, RFC3, RFC1, RFC4 and RFC5.
BRD4 has also been implicated in binding with the diacetylated Twist protein, and the disruption of this interaction has been shown to suppress tumorigenesis in basal-like breast cancer.
BRD4 has also been shown to interact with a variety of inhibitors, such as MS417; inhibition of BRD4 with MS417 has been shown to down-regulate NF-κB activity seen in HIV-associated kidney disease. BRD4 also interacts with apabetalone (RVX-208), which is being evaluated for treatment of atherosclerosis and cardiovascular disease.
References
Further reading
External links
Proteins | BRD4 | [
"Chemistry"
] | 643 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
14,798,852 | https://en.wikipedia.org/wiki/GFI1 | Zinc finger protein Gfi-1 is a transcriptional repressor that in humans is encoded by the GFI1 gene. It is important for normal hematopoiesis. Gfi1 (growth factor independence 1) plays a critical role in hematopoiesis and in protecting hematopoietic cells against stress-induced apoptosis. Recent research has shown that Gfi1 upregulates the expression of the nuclear protein Hemgn, which contributes to its anti-apoptotic activity. This upregulation is mediated through a specific 16-bp promoter region and is dependent on Gfi1’s interaction with the histone demethylase LSD1.
Gfi1 represses PU.1, and this repression precedes and correlates with the upregulation of Hemgn. The upregulation of Hemgn, in turn, contributes to the anti-apoptotic function of Gfi1, acting in a p53-independent manner.
These findings suggest that Gfi1 promotes cell survival by upregulating Hemgn through the repression of PU.1, offering a new understanding of its role in apoptosis regulation.
Interactions
GFI1 has been shown to interact with PIAS3 and RUNX1T1.
References
Further reading
External links
Transcription factors | GFI1 | [
"Chemistry",
"Biology"
] | 282 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,798,896 | https://en.wikipedia.org/wiki/GTF3C2 | General transcription factor 3C polypeptide 2 is a protein that in humans is encoded by the GTF3C2 gene.
Interactions
GTF3C2 has been shown to interact with GTF3C4 and GTF3C5.
References
Further reading
External links
Transcription factors | GTF3C2 | [
"Chemistry",
"Biology"
] | 60 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,798,904 | https://en.wikipedia.org/wiki/Saeu-jeot | Saeu-jeot () is a variety of jeotgal, salted and fermented food made with small shrimp in Korean cuisine. It is the most consumed jeotgal along with myeolchi-jeot (, salted anchovy jeot) in South Korea. The name consists of the two Korean words saeu (, shrimp) and jeot. Saeu-jeot is widely used throughout Korean cuisine but is mostly used as an ingredient in kimchi and dipping pastes. The shrimp used for making saeu-jeot are called jeot-saeu () and are smaller and have thinner shells than ordinary shrimp.
The quality of saeu-jeot largely depends on the freshness of the shrimp. In warm weather, fishermen may immediately add salt for preliminary preservation.
Types
The types of saeu-jeot depend on the kind of shrimp used and when they are harvested.
In spring
Putjeot () is made with shrimp harvested from the end of January in the Korean (lunar) calendar through April. It is called deddeugi jeot () or dotddegi jeot () on the west coast of South Korea. Ojeot () is made with shrimp harvested in May.
In summer
Yukjeot (육젓, 六젓, six [month] jeot) is made with shrimp harvested in June and is regarded as the highest quality jeot. It is the saeu-jeot most preferred for making kimchi because of its richer flavor and bigger shrimp than other saeu-jeot. The shrimp in Yukjeot have red heads and tails. Chajeot () is made with shrimp harvested in July.
In fall
Gonjaeng-ijeot () or jahajeot () is made with very small shrimp-like Neomysis awatschensis, one of the opossum shrimp family, which is called gonjaeng-i or jaha () in Korean. The shrimp used for it is the smallest among all saeu-jeot. They are harvested in August and September in small amounts where freshwater mixes with the deep seawater of the Yellow Sea. As it ferments, the jeot changes from transparent to light violet or brown in color and becomes soft in texture. Gonjaeng-ijeot is called gogaemijeot () in Jeolla Province. It is a local specialty of Seosan-gun, Chungcheong Province.
Chujeot () is made with small shrimp harvested in autumn which are smaller and cleaner than the shrimp in yukjeot.
In winter
Dongjeot () is made with shrimp harvested in November. Dongbaekha (동백하젓 冬白蝦) is made with shrimp harvested in February whose bodies are white and clean.
Other saeu-jeot
Tohajeot () is made with toha (), small shrimp caught only in clean freshwater of valleys. It is a local specialty of South Jeolla Province. It is also called saengijeot ().
Jajeot () is commonly called japjeot (잡젓, literally mixed jeot) which is made with several types of small shrimp without special selection. Daetdaegijeot () is made with shrimp that have thick, stiff, yellowish shells. It is considered to be the lowest quality saeu-jeot.
Saeualjŏt () is made with the eggs of medium-sized red shrimp harvested in April. It was presented to the royal court as a local product during the late period of the Joseon dynasty and currently is only produced in Okgu-gun, North Jeolla Province.
See also
References
External links
General information about saeu-jeot
Fermented foods
Jeotgal
Shrimp dishes | Saeu-jeot | [
"Biology"
] | 784 | [
"Fermented foods",
"Biotechnology products"
] |
14,798,926 | https://en.wikipedia.org/wiki/HOXA4 | Homeobox A4, also known as HOXA4, is a protein which in humans is encoded by the HOXA4 gene.
Function
In vertebrates, the genes encoding the class of transcription factors called homeobox genes are found in clusters named A, B, C, and D on four separate chromosomes. Expression of these proteins is spatially and temporally regulated during embryonic development. This gene is part of the A cluster on chromosome 7 and encodes a DNA-binding transcription factor which may regulate gene expression, morphogenesis, and differentiation.
See also
Homeobox
References
Further reading
External links
Transcription factors | HOXA4 | [
"Chemistry",
"Biology"
] | 129 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,798,943 | https://en.wikipedia.org/wiki/IGH%40 | Immunoglobulin heavy locus, also known as IGH, is a region on human chromosome 14 that contains a gene for the heavy chains of human antibodies (or immunoglobulins).
Immunoglobulins recognize foreign antigens and initiate immune responses such as phagocytosis and the complement system. Each immunoglobulin molecule consists of two identical heavy chains and two identical light chains. This region represents the germline organization of the heavy chain locus. The locus includes V (variable), D (diversity), J (joining), and C (constant) segments. During B cell development, a recombination event at the DNA level joins a single D segment with a J segment; the fused D-J exon of this partially rearranged D-J region is then joined to a V segment. The rearranged V-D-J region containing a fused V-D-J exon is then transcribed and fused at the RNA level to the IGHM constant region; this transcript encodes a mu heavy chain. Later in development B cells generate V-D-J-Cmu-Cdelta pre-messenger RNA, which is alternatively spliced to encode either a mu or a delta heavy chain. Mature B cells in the lymph nodes undergo switch recombination, so that the fused V-D-J gene segment is brought in proximity to one of the IGHG, IGHA, or IGHE gene segments and each cell expresses either the gamma, alpha, or epsilon heavy chain. Potential recombination of many different V segments with several J segments provides a wide range of antigen recognition. Additional diversity is attained by junctional diversity, resulting from the random addition of nucleotides by terminal deoxynucleotidyl transferase, and by somatic hypermutation, which occurs during B cell maturation in the spleen and lymph nodes. Several V, D, J, and C segments are known to be incapable of encoding a protein and are considered pseudogenous gene segments (often simply referred to as pseudogenes).
Nomenclature
Symbols for variable (V) immunoglobulin gene segments start with IGHV and include two or three numbers separated by dashes. Examples:
IGHV1-2, IGHV1-3, …, IGHV1-69-2, IGHV2-5, …, IGHV7-4-1
Symbols for diversity (D) immunoglobulin gene segments start with IGHD and include two numbers separated by dashes. Examples:
IGHD1-1, IGHD1-7, …, IGHD7-27
Symbols for joining (J) immunoglobulin gene segments:
IGHJ1, IGHJ2, IGHJ3, IGHJ4, IGHJ5, IGHJ6
Symbols for constant region (C) immunoglobulin genes:
Heavy chain alpha (IgA): IGHA1, IGHA2
Heavy chain gamma (IgG): IGHG1, IGHG2, IGHG3, IGHG4
Heavy chain delta (IgD): IGHD
Heavy chain epsilon (IgE): IGHE
Heavy chain mu (IgM): IGHM
See also
IGHV@
References
Further reading
Antibodies | IGH@ | [
"Chemistry"
] | 720 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,799,091 | https://en.wikipedia.org/wiki/ATBF1 | Zinc finger homeobox protein 3 is a protein that in humans is encoded by the ZFHX3 gene.
References
Further reading
External links
Transcription factors | ATBF1 | [
"Chemistry",
"Biology"
] | 33 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,799,103 | https://en.wikipedia.org/wiki/IGHD | Ig delta chain C region is a protein that in humans is encoded by the IGHD gene.
References
Further reading | IGHD | [
"Chemistry"
] | 26 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,799,130 | https://en.wikipedia.org/wiki/KCNJ10 | ATP-sensitive inward rectifier potassium channel 10 is a protein that in humans is encoded by the KCNJ10 gene.
Function
This gene encodes a member of the inward rectifier-type potassium channel family, Kir4.1, characterized by having a greater tendency to allow potassium to flow into, rather than out of, a cell. Kir4.1 may form a heterodimer with another potassium channel protein and may be responsible for the potassium buffering action of glial cells in the brain. Mutations in this gene have been associated with seizure susceptibility of common idiopathic generalized epilepsy syndromes.
EAST syndrome
Humans with mutations in the KCNJ10 gene that cause loss of function in related K+ channels can display Epilepsy, Ataxia, Sensorineural deafness and Tubulopathy (the EAST syndrome, a Gitelman syndrome phenotype), reflecting roles for KCNJ10 gene products in the brain, inner ear and kidney. The Kir4.1 channel is expressed in the stria vascularis and is essential for formation of the endolymph, the fluid that surrounds the mechanosensitive stereocilia of the sensory hair cells that make hearing possible.
Rett Syndrome
Rett syndrome is a neurological disorder characterized by a mutation in the MeCP2 gene. This mutation results in less MeCP2. KCNJ10 expression is upregulated by the transcription factor MeCP2. MeCP2 deficiency leads to less Kir4.1 channels present on astrocytes in the brain. Since there are fewer channels allowing potassium into the cells, extracellular potassium levels are higher. Higher extracellular potassium leaves neurons more easily excitable which could contribute to the epilepsy observed in many Rett Syndrome patients.
Interactions
KCNJ10 has been shown to interact with Interleukin 16.
See also
Inward-rectifier potassium ion channel
References
Further reading
External links
GeneReviews/NCBI/NIH/UW entry on Pendred Syndrome/DFNB4
Ion channels | KCNJ10 | [
"Chemistry"
] | 432 | [
"Neurochemistry",
"Ion channels"
] |
14,799,237 | https://en.wikipedia.org/wiki/MTF1 | Metal regulatory transcription factor 1 is a protein that in humans is encoded by the MTF1 gene.
Function
This gene encodes a transcription factor that induces expression of metallothioneins and other genes involved in metal homeostasis in response to heavy metals such as cadmium, zinc, copper, and silver. The protein is a nucleocytoplasmic shuttling protein that accumulates in the nucleus upon heavy metal exposure and binds to promoters containing a metal-responsive element (MRE).
References
Further reading
External links
Transcription factors | MTF1 | [
"Chemistry",
"Biology"
] | 112 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,799,275 | https://en.wikipedia.org/wiki/NFE2L1 | Nuclear factor erythroid 2-related factor 1 (Nrf1) also known as nuclear factor erythroid-2-like 1 (NFE2L1) is a protein that in humans is encoded by the NFE2L1 gene. Since NFE2L1 is also referred to as Nrf1, it is often confused with nuclear respiratory factor 1.
NFE2L1 is a cap ‘n’ collar, basic-leucine zipper (bZIP) transcription factor. Several isoforms of NFE2L1 have been described for both human and mouse genes. NFE2L1 was first cloned in yeast using a genetic screening method. NFE2L1 is ubiquitously expressed, and high levels of transcript are detected in the heart, kidney, skeletal muscle, fat, and brain. Four separate regions (an asparagine/serine/threonine-rich domain, two acidic domains near the N-terminus, and a serine-rich domain located near the CNC motif) are required for full transactivation function of NFE2L1. NFE2L1 is a key regulator of cellular functions including oxidative stress response, differentiation, inflammatory response, metabolism, cholesterol handling and maintaining proteostasis.
Interactions
NFE2L1 binds DNA as heterodimers with one of the small Maf proteins (MAFF, MAFG, MAFK). NFE2L1 has been shown to interact with C-jun.
Cellular homeostasis
NFE2L1 regulates a wide variety of cellular responses, several of which are related to important aspects of protection from stress stimuli. NFE2L1 is involved in providing cellular protection against oxidative stress through the induction of antioxidant genes. The glutathione synthesis pathway is catalyzed by glutamate-cysteine ligase, which contains the catalytic subunit GCLC and the regulatory subunit GCLM, and glutathione synthetase (GSS). NFE2L1 was found to regulate Gclm and Gss expression in mouse fibroblasts. Gclm was found to be a direct target of NFE2L1. NFE2L1 also regulates Gclc expression through an indirect mechanism. NFE2L1 knockout mice also exhibit down-regulation of Gpx1 and Hmox1, and NFE2L1-deficient hepatocytes from liver-specific NFE2L1 knockout mice showed decreased expression of various Gst genes. Metallothionein-1 and Metallothionein-2 genes, which protect cells against cytotoxicity induced by toxic metals, are also direct targets of NFE2L1.
NFE2L1 is also involved in maintaining proteostasis. Brains of mice with conditional knockout of NFE2L1 in neuronal cells showed decreased proteasome activity and accumulation of ubiquitin-conjugated proteins, and down regulation of genes encoding the 20S core and 19S regulatory sub-complexes of the 26S proteasome. A similar effect on proteasome gene expression and function was observed in livers of mice with NFE2L1 conditional knockout in hepatocytes. Induction of proteasome genes was also lost in brains and livers of NFE2L1 conditional knockout mice. Re-establishment of NFE2L1 function in NFE2L1 null cells rescued proteasome expression and function, indicating NFE2L1 was necessary for induction of proteasome genes (bounce-back response) in response to proteasome inhibition. This compensatory up-regulation of proteasome genes in response to proteasome inhibition has also been demonstrated to be NFE2L1-dependent in various other cell types. NFE2L1 was shown to directly bind and activate expression of the PsmB6 gene, which encodes a catalytic subunit of the 20S core. NFE2L1 was also shown to regulate expression of Herpud1 and Vcp/p97, which are components of the ER-associated degradation pathway.
NFE2L1 also plays a role in metabolic processes. Loss of hepatic NFE2L1 has been shown to result in lipid accumulation, hepatocellular damage, cysteine accumulation, and altered fatty acid composition. Glucose homeostasis and insulin secretion have also been found to be under the control of NFE2L1. Insulin-regulated glycolytic genes (Gck, Aldob, Pgk1, and Pklr), the hepatic glucose transporter gene (SLC2A2), and gluconeogenic genes (Fbp1 and Pck1) were repressed in livers of NFE2L1 transgenic mice. NFE2L1 may also play a role in maintaining chromosomal stability and genomic integrity by inducing expression of genes encoding components of the spindle assembly and kinetochore. NFE2L1 has also been shown to sense and respond to excess cholesterol in the ER.
Regulation
NFE2L1 is an ER membrane protein. Its N-terminal domain (NTD) anchors the protein to the membrane. Specifically, amino acid residues 7 to 24 are known to be a hydrophobic domain that serves as a transmembrane region. The concerted mechanism of HRD1, a member of the E3-ubiquitin ligase family, and p97/VCP was found to play an important role in the degradation of NFE2L1 through the ER Associated Degradation (ERAD) pathway and the release of NFE2L1 from the ER membrane. NFE2L1 is also regulated by other ubiquitin ligases and kinases. FBXW7, a member of the SCF ubiquitin ligase family, targets NFE2L1 for proteolytic degradation by the proteasome. FBXW7 requires the Cdc4 phosphodegron domain within NFE2L1 to be phosphorylated via Glycogen Synthase Kinase 3. Casein Kinase 2 was shown to phosphorylate Ser497 of NFE2L1, which attenuates the activity of NFE2L1 on proteasome gene expression. NFE2L1 also interacts with another member of the SCF ubiquitin ligase family known as β-TrCP. β-TrCP also binds to the DSGLC motif, a highly conserved region of CNC-bZIP proteins, in order to polyubiquitinate NFE2L1 prior to its proteolytic degradation. Phosphorylation of Ser599 by protein kinase A enables NFE2L1 and C/EBP-β to dimerize to repress DSPP expression during odontoblast differentiation. NFE2L1 expression and activation is also controlled by cellular stresses. Oxidative stress induced by arsenic and t-butyl hydroquinone leads to accumulation of the NFE2L1 protein inside the nucleus as well as higher activation on antioxidant genes. Treatment with an ER stress inducer, tunicamycin, was shown to induce accumulation of NFE2L1 inside the nucleus; however, it was not associated with increased activity, suggesting further investigation is needed to explain the role of ER stress on NFE2L1. Hypoxia was also shown to increase the expression of NFE2L1 while attenuating expression of the p65 isoform of NFE2L1. Growth factors affect expression of NFE2L1 through an mTORC and SREBP-1 mediated pathway.
Growth factors induce higher activity of mTORC, which then promotes activity of its downstream protein SREBP-1, a transcription factor for NFE2L1.
Animal studies
Loss and gain of function studies in mice showed that dysregulation of NFE2L1 leads to pathological states that could have relevance in human diseases. NFE2L1 is crucial for embryonic development and survival of hepatocytes during development. Loss of NFE2L1 in mouse hepatocytes leads to steatosis, inflammation, and tumorigenesis. NFE2L1 is also necessary for neuronal homeostasis. Loss of NFE2L1 function is also associated with insulin resistance. Mice with conditional deletion of NFE2L1 in pancreatic β-cells exhibited severe fasting hyperinsulinemia and glucose intolerance, suggesting that NFE2L1 may play a role in the development of type-2 diabetes. Future studies may provide therapeutic efforts involving NFE2L1 for cancer, neurodegeneration, and metabolic diseases.
Notes
References
Further reading
External links
Transcription factors | NFE2L1 | [
"Chemistry",
"Biology"
] | 1,860 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,799,306 | https://en.wikipedia.org/wiki/Orthodenticle%20homeobox%202 | Homeobox protein OTX2 is a protein that in humans is encoded by the OTX2 gene.
Function
This gene encodes a member of the bicoid sub-family of homeodomain-containing transcription factors. The encoded protein acts as a transcription factor and plays a role in brain and sensory organ development. A similar protein in mice is required for proper forebrain development. Two transcript variants encoding distinct isoforms have been identified for this gene. Other alternative splice variants may exist, but their full length sequences have not been determined.
Otx2 is a homeobox gene that is typically described as a head organizer in the primitive streak stage of embryonic development. The encoded protein acts as a transcription factor and has been shown to be involved in the regional patterning of the midbrain and forebrain. Later in development, Otx2 influences the formation of the sensory organs, pituitary gland, pineal gland, inner ear, eye and optic nerve. Otx2 not only has a prominent role in developing this region but also aids in ensuring that the retina and brain stay intact. The gene has a major role in development, and incorrect expression can have detrimental effects on the fetus. Otx2 mutations have also been associated with seizures, developmental delays, short stature, structural abnormalities of the pituitary gland, and an early onset of degeneration of the retina. A "knockout" model of Otx2 has been studied to see what effects its loss would have on the adult retina. Without Otx2 expression there was slow degeneration of photoreceptor cells in this area, underscoring that Otx2 is essential both during development, where it is required to form a viable embryo, and for maintaining the adult retina.
Otx2 is necessary for retina development, retina maturation, and fate determination of photoreceptors. In the mouse, studies have shown development of the retina is regulated in a cell type- and stage-specific manner by seven Otx2 cis-regulatory modules. Three of these cis-regulatory modules, O5, O7 and O9 indicate three distinct cellular expressions of Otx2. A “knockin” mouse line was generated where Crx (Otx family homeoprotein) was replaced by Otx2 and vice versa to examine the functional substitutability. It was found that Crx and Otx2 cannot be substituted in photoreceptor development. High Otx2 levels induce photoreceptor cell fate but not bipolar cell fate. Low levels of Otx2 impair bipolar cell maturation and survival. Studies in the chicken confirmed a functional role for Otx2 in the determination of photoreceptors. Otx2 also represses specific retinal fates (such as subtypes of retinal ganglion and horizontal cells) of sister cells to promote the specification of photoreceptors.
Clinical significance
Otx2 is expressed in the brain, ear, nose and eye, and in the case of mutations; it can lead to significant developmental abnormalities and disorders. Mutations in OTX2 can cause eye disorders including anophthalmia and microphthalmia. Apart from anophthalmia and microphthalmia, other abnormalities such as aplasia of the optic nerve, hypoplasia of the optic chiasm and dysplastic optic globes have also been observed. Other defects that occur due to a mutation of the Otx2 gene include pituitary abnormalities and mental retardation. Abnormal pituitary structure and/or function seem to be the most common feature associated with Otx2 mutations.
Otx2 also regulates two other genes, Lhx1 and Dkk1 that also play a role in head morphogenesis. Otx2 is required during early formation of the embryo to initiate the movement of cells towards the anterior region and establish the anterior visceral endoderm. In the absence of Otx2, this movement can be impeded, which can be overcome by the expression of Dkk1, but it does not prevent the embryo from developing head truncation defects. The absence of Otx2 and the enhanced expression of Lhx1 can also lead to severe head truncation.
It has been shown that if Otx2 is overexpressed, it can lead to childhood malignant brain tumors called medulloblastomas.
Duplication of OTX2 is involved in the pathogenesis of Hemifacial Microsomia.
In the mouse, the lack of Otx2 inhibits the development of the head. These 'knockout' mice that fail to form the head have gastrulation defects and die at midgestation with severe brain anomalies.
Role of Otx2 in Visual Plasticity
Recent research has identified the homeoprotein Otx2 as a possible molecular ‘messenger’ that is necessary for experience-driven visual plasticity during the critical period. Initially involved in embryonic head formation, Otx2 is re-expressed during the critical period of rats (>P23) and regulates the maturation of parvalbumin-expressing GABAergic interneurons (PV-cells), which control the onset of critical period plasticity. Dark-rearing from birth and binocular enucleation of rats resulted in decreased expression of PV-cells and Otx2, which suggests that these proteins are visually experience-driven. Otx2 loss-of-function experiments delayed ocular dominance plasticity by impairing the development of PV-cells. Research into Otx2 and visual plasticity during the critical period is of particular interest to the study of developmental abnormalities such as amblyopia. More research must be conducted to determine if Otx2 could be utilized for therapeutic recovery of visual plasticity to aid some amblyopic patients.
Role in Embryonic Stem Cells Biology
Otx2 is a key regulator of the earliest stages of ES cell differentiation. The ectopic expression of Otx2 drives ES cells into differentiation, even in the presence of the LIF cytokine. At the molecular level, Otx2 induction partially compensates the gene expression changes induced by Nanog overexpression in the absence of LIF.
Role in Ethanol Consumption in the Adult and Fetus
The Otx2 gene encodes an important transcription factor in the formation of dopaminergic neurons in the ventral tegmental area (VTA), an area located in the midbrain. The VTA is involved in drug reinforcement, reward processing, and addiction in the adult. Ethanol consumption during embryogenesis leads to a reduction in Otx2 mRNA in the central nervous system, altering gene expression; this alteration in utero may contribute to addiction behaviors in adulthood. To test whether the ethanol-induced reduction of Otx2 increases binge-drinking behavior in adults, a lentiviral short hairpin (sh)RNA was used to knock down Otx2 expression in the VTA of mice, which were then administered ethanol. It was found that Otx2 may contribute to binge-like drinking through transcriptional changes in the VTA (Coles & Lasek, 2021).
References
Further reading
Coles, C., & Lasek, A. W. (2021). Binge-like ethanol drinking increases otx2, wnt1, and mdk gene expression in the ventral tegmental area of adult mice. Neuroscience Insights, 16, 263310552110098. https://doi.org/10.1177/26331055211009850
External links
Transcription factors | Orthodenticle homeobox 2 | [
"Chemistry",
"Biology"
] | 1,633 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,799,325 | https://en.wikipedia.org/wiki/PBX2 | Pre-B-cell leukemia transcription factor 2 is a protein that in humans is encoded by the PBX2 gene.
Function
This gene encodes a ubiquitously expressed member of the TALE/PBX homeobox family. It was identified by its similarity to a homeobox gene which is involved in t(1;19) translocation in acute pre-B-cell leukemias. This protein is a transcriptional activator which binds to the TLX1 promoter. The gene is located within the major histocompatibility complex (MHC) on chromosome 6.
Interactions
PBX2 has been shown to interact with HOXA9.
References
Further reading
External links
Transcription factors | PBX2 | [
"Chemistry",
"Biology"
] | 146 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
181,334 | https://en.wikipedia.org/wiki/Discrete%20logarithm | In mathematics, for given real numbers a and b, the logarithm logb a is a number x such that bx = a. Analogously, in any group G, powers bk can be defined for all integers k, and the discrete logarithm logb a is an integer k such that bk = a. In number theory, the more commonly used term is index: we can write x = indr a (mod m) (read "the index of a to the base r modulo m") for rx ≡ a (mod m) if r is a primitive root of m and gcd(a,m) = 1.
Discrete logarithms are quickly computable in a few special cases. However, no efficient method is known for computing them in general. In cryptography, the computational complexity of the discrete logarithm problem, along with its application, was first proposed in the Diffie–Hellman problem. Several important algorithms in public-key cryptography, such as ElGamal, base their security on the hardness assumption that the discrete logarithm problem (DLP) over carefully chosen groups has no efficient solution.
Definition
Let G be any group. Denote its group operation by multiplication and its identity element by 1. Let b be any element of G. For any positive integer k, the expression bk denotes the product of b with itself k times: bk = b · b ⋯ b (with k factors).
Similarly, let b−k denote the product of b−1 with itself k times. For k = 0, the kth power is the identity: b0 = 1.
Let a also be an element of G. An integer k that solves the equation bk = a is termed a discrete logarithm (or simply logarithm, in this context) of a to the base b. One writes k = logb a.
Examples
Powers of 10
The powers of 10 are
…, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000, …
For any number a in this list, one can compute log10 a. For example, log10 10000 = 4, and log10 0.001 = −3. These are instances of the discrete logarithm problem.
Other base-10 logarithms in the real numbers are not instances of the discrete logarithm problem, because they involve non-integer exponents. For example, the equation log10 53 = 1.724276… means that 101.724276… = 53. While integer exponents can be defined in any group using products and inverses, arbitrary real exponents, such as this 1.724276…, require other concepts such as the exponential function.
In group-theoretic terms, the powers of 10 form a cyclic group G under multiplication, and 10 is a generator for this group. The discrete logarithm log10 a is defined for any a in G.
Powers of a fixed real number
A similar example holds for any non-zero real number b. The powers form a multiplicative subgroup G = {…, b−3, b−2, b−1, 1, b1, b2, b3, …} of the non-zero real numbers. For any element a of G, one can compute logb a.
Modular arithmetic
One of the simplest settings for discrete logarithms is the group Zp×. This is the group of multiplication modulo the prime p. Its elements are non-zero congruence classes modulo p, and the group product of two elements may be obtained by ordinary integer multiplication of the elements followed by reduction modulo p.
The kth power of one of the numbers in this group may be computed by finding its kth power as an integer and then finding the remainder after division by p. When the numbers involved are large, it is more efficient to reduce modulo p multiple times during the computation. Regardless of the specific algorithm used, this operation is called modular exponentiation. For example, consider Z17×. To compute 34 in this group, compute 34 = 81, and then divide 81 by 17, obtaining a remainder of 13. Thus 34 = 13 in the group Z17×.
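The computation above can be sketched in Python. The built-in `pow` with a third argument performs modular exponentiation, reducing modulo p during the computation; the loop below is a minimal sketch of the standard square-and-multiply idea behind such implementations.

```python
# Modular exponentiation in the group Z17x: compute 3^4 mod 17.
p = 17

# Naive route: full integer power, then one reduction.
assert 3**4 % p == 13

# Built-in modular exponentiation reduces mod p as it goes, which
# stays efficient even for very large exponents.
assert pow(3, 4, p) == 13

# Square-and-multiply: process the exponent bit by bit.
def mod_exp(b, k, p):
    result = 1
    b %= p
    while k > 0:
        if k & 1:             # low bit set: multiply in current base
            result = result * b % p
        b = b * b % p         # square the base
        k >>= 1               # drop the processed bit
    return result

assert mod_exp(3, 4, 17) == 13
```

Reducing modulo p after every multiplication keeps intermediate values below p², which is what makes the operation practical for the large moduli used in cryptography.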
The discrete logarithm is just the inverse operation. For example, consider the equation 3^k ≡ 13 (mod 17). From the example above, one solution is k = 4, but it is not the only solution. Since 3^16 ≡ 1 (mod 17)—as follows from Fermat's little theorem—it also follows that if n is an integer then 3^(4+16n) ≡ 3^4 × (3^16)^n ≡ 13 × 1^n ≡ 13 (mod 17). Hence the equation has infinitely many solutions of the form 4 + 16n. Moreover, because 16 is the smallest positive integer m satisfying 3^m ≡ 1 (mod 17), these are the only solutions. Equivalently, the set of all possible solutions can be expressed by the constraint that k ≡ 4 (mod 16).
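The structure of the solution set can be confirmed by exhaustive search; a brute-force sketch, practical only for tiny groups such as Z17×:

```python
def discrete_logs(base: int, target: int, modulus: int, k_max: int) -> list:
    """All k in [0, k_max) with base**k ≡ target (mod modulus), by brute force."""
    value, logs = 1, []
    for k in range(k_max):
        if value == target % modulus:
            logs.append(k)
        value = value * base % modulus
    return logs

print(discrete_logs(3, 13, 17, 64))  # [4, 20, 36, 52] — i.e. k ≡ 4 (mod 16)
```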
Powers of the identity
In the special case where b is the identity element 1 of the group G, the discrete logarithm logb a is undefined for a other than 1, and every integer k is a discrete logarithm for a = 1.
Properties
Powers obey the usual algebraic identity b^(k+l) = b^k b^l. In other words, the function

f : Z → G, defined by f(k) = b^k,

is a group homomorphism from the integers Z under addition onto the subgroup H of G generated by b. For all a in H, logb a exists; conversely, logb a does not exist for a that are not in H.
If H is infinite, then logb a is also unique, and the discrete logarithm amounts to a group isomorphism

logb : H → Z.

On the other hand, if H is finite of order n, then logb a is unique only up to congruence modulo n, and the discrete logarithm amounts to a group isomorphism

logb : H → Zn,

where Zn denotes the additive group of integers modulo n.
The familiar base-change formula for ordinary logarithms remains valid: if c is another generator of H, then

logc a = logc b · logb a,

with the equation read modulo the order n of H when H is finite.
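For a finite cyclic H of order n, the base-change relation takes the form logc a ≡ logc b · logb a (mod n). It can be checked numerically in Z17× (order n = 16) with generators b = 3 and c = 5 — a brute-force sketch, practical only for tiny groups:

```python
def dlog(base: int, target: int, modulus: int):
    """Brute-force discrete logarithm in the multiplicative group mod modulus."""
    e = 1
    for k in range(modulus):
        if e == target % modulus:
            return k
        e = e * base % modulus
    return None

n = 16  # order of the group Z_17^x
lhs = dlog(5, 13, 17)                                   # log_5 13
rhs = dlog(3, 13, 17) * pow(dlog(3, 5, 17), -1, n) % n  # log_3 13 · (log_3 5)^-1 mod n
print(lhs, rhs)  # 4 4 — both sides agree, as the base-change formula predicts
```

The modular inverse pow(x, -1, n) requires Python 3.8 or later.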
Algorithms
The discrete logarithm problem is considered to be computationally intractable. That is, no efficient classical algorithm is known for computing discrete logarithms in general.
A general algorithm for computing logb a in finite groups G is to raise b to larger and larger powers k until the desired a is found. This algorithm is sometimes called trial multiplication. It requires running time linear in the size of the group G and thus exponential in the number of digits in the size of the group. Therefore, it is an exponential-time algorithm, practical only for small groups G.
More sophisticated algorithms exist, usually inspired by similar algorithms for integer factorization. These algorithms run faster than the naïve algorithm, some of them proportional to the square root of the size of the group, and thus exponential in half the number of digits in the size of the group. However, none of them runs in polynomial time (in the number of digits in the size of the group).
Baby-step giant-step
Function field sieve
Index calculus algorithm
Number field sieve
Pohlig–Hellman algorithm
Pollard's rho algorithm for logarithms
Pollard's kangaroo algorithm (aka Pollard's lambda algorithm)
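Of these, baby-step giant-step is the simplest to illustrate. The sketch below (assuming Python 3.8+ for the modular inverse pow(b, -1, m)) trades memory for time, meeting in the middle after about √n steps instead of n:

```python
from math import isqrt

def bsgs(base: int, target: int, modulus: int):
    """Baby-step giant-step: find k with base**k ≡ target (mod modulus)
    in O(sqrt(modulus)) time and space, or return None if no solution."""
    m = isqrt(modulus) + 1
    # Baby steps: store base**j -> j for j = 0 .. m-1.
    table = {}
    e = 1
    for j in range(m):
        table.setdefault(e, j)
        e = e * base % modulus
    # Giant steps: test target * (base**-m)**i against the table.
    step = pow(base, -m, modulus)   # modular inverse of base**m
    gamma = target % modulus
    for i in range(m):
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * step % modulus
    return None

print(bsgs(3, 13, 17))  # 4, the discrete log from the modular-arithmetic example
```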
There is an efficient quantum algorithm due to Peter Shor.
Efficient classical algorithms also exist in certain special cases. For example, in the group of the integers modulo p under addition, the "power" b^k becomes the product b·k, and equality means congruence modulo p in the integers. The extended Euclidean algorithm finds k quickly.
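In this additive group the problem reduces to solving b·k ≡ a (mod p), i.e. k = a·b⁻¹ mod p. A minimal sketch (Python's pow(b, -1, p), available from 3.8, computes the inverse via the extended Euclidean algorithm):

```python
def additive_dlog(b: int, a: int, p: int) -> int:
    """Solve b*k ≡ a (mod p) for k: k = a * b^{-1} mod p.
    pow(b, -1, p) uses the extended Euclidean algorithm internally."""
    return a * pow(b, -1, p) % p

k = additive_dlog(7, 3, 17)
print(k, (7 * k) % 17)  # 15 3 — i.e. 7*15 ≡ 3 (mod 17)
```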
With Diffie–Hellman, a cyclic group modulo a prime p is used, allowing an efficient computation of the discrete logarithm with Pohlig–Hellman if the order of the group (being p−1) is sufficiently smooth, i.e. has no large prime factors.
Comparison with integer factorization
While computing discrete logarithms and integer factorization are distinct problems, they share some properties:
both are special cases of the hidden subgroup problem for finite abelian groups,
both problems seem to be difficult (no efficient algorithms are known for non-quantum computers),
for both problems efficient algorithms on quantum computers are known,
algorithms from one problem are often adapted to the other, and
the difficulty of both problems has been used to construct various cryptographic systems.
Cryptography
There exist groups for which computing discrete logarithms is apparently difficult. In some cases (e.g. large prime order subgroups of groups Zp×) there is not only no efficient algorithm known for the worst case, but the average-case complexity can be shown to be about as hard as the worst case using random self-reducibility.
At the same time, the inverse problem of discrete exponentiation is not difficult (it can be computed efficiently using exponentiation by squaring, for example). This asymmetry is analogous to the one between integer factorization and integer multiplication. Both asymmetries (and other possibly one-way functions) have been exploited in the construction of cryptographic systems.
Popular choices for the group G in discrete logarithm cryptography (DLC) are the cyclic groups Zp× (e.g. ElGamal encryption, Diffie–Hellman key exchange, and the Digital Signature Algorithm) and cyclic subgroups of elliptic curves over finite fields (see Elliptic curve cryptography).
While there is no publicly known algorithm for solving the discrete logarithm problem in general, the first three steps of the number field sieve algorithm only depend on the group G, not on the specific elements of G whose finite log is desired. By precomputing these three steps for a specific group, one need only carry out the last step, which is much less computationally expensive than the first three, to obtain a specific logarithm in that group.
It turns out that much internet traffic uses one of a handful of groups whose order is 1024 bits or less, e.g. cyclic groups with the order of the Oakley primes specified in RFC 2409. The Logjam attack used this vulnerability to compromise a variety of internet services that allowed the use of groups whose order was a 512-bit prime number, so-called export grade.
The authors of the Logjam attack estimate that the much more difficult precomputation needed to solve the discrete log problem for a 1024-bit prime would be within the budget of a large national intelligence agency such as the U.S. National Security Agency (NSA). The Logjam authors speculate that precomputation against widely reused 1024-bit DH primes is behind claims in leaked NSA documents that the NSA is able to break much of current cryptography.
See also
A. W. Faber Model 366
Percy Ludgate and Irish logarithm
References
Further reading
Richard Crandall and Carl Pomerance, Prime Numbers: A Computational Perspective, 2nd ed., Springer, chapter 5.
Modular arithmetic
Group theory
Cryptography
Logarithms
Finite fields
Computational hardness assumptions
Unsolved problems in computer science
Interjection

An interjection is a word or expression that occurs as an utterance on its own and expresses a spontaneous feeling or reaction. It is a diverse category, encompassing many different parts of speech, such as exclamations (ouch!, wow!), curses (damn!), greetings (hey, bye), response particles (okay, oh!, m-hm, huh?), hesitation markers (uh, er, um), and other words (stop, cool). Due to its diverse nature, the category of interjections partly overlaps with a few other categories like profanities, discourse markers, and fillers. The use and linguistic discussion of interjections can be traced historically through the Greek and Latin Modistae over many centuries.
Historical classification
Greek and Latin intellectuals as well as the Modistae have contributed to the different perspectives of interjections in language throughout history. The Greeks held that interjections fell into the grammatical category of adverbs. They thought interjections modified the verb much in the same way as adverbs do, thus interjections were closely connected to verbs.
Unlike their Greek counterparts, many Latin scholars took the position that interjections did not rely on verbs and were used to communicate emotions and abstract ideas. They considered interjections to be their own independent part of speech. Further, the Latin grammarians classified any small non-word utterances as interjections.
Several hundred years later, the 13th- and 14th-century Modistae took inconsistent approaches to interjections. Some, such as Thomas of Erfurt, agreed with the former Greeks that the interjection was closely tied to the verb while others like Siger of Courtrai held that the interjection was its own part of speech syntactically, much like the Latin scholars.
Meaning and use
In contrast to typical words and sentences, the function of most interjections is related to an expression of feeling, rather than representing some idea or concept. Generally, interjections can be classified into three types of meaning: volitive, emotive, or cognitive.
Volitive interjections function as imperative or directive expressions; requesting or demanding something from the addressee (e.g., Shh! = "Be quiet!"; Boo! as in "Boo!" she cried, jumping to frighten him).
Emotive interjections are used to express emotions, such as disgust and fear (e.g., Yuck! expressing disgust; Boo! signalling contempt as in Boo! Shame on you or by audience members or spectators after a performance).
Cognitive interjections express thoughts which are more related to cognition, or information known to the speaker of the utterance (e.g., Um! indicating confusion or thinking).
While there exists some apparent overlap between emotive and cognitive interjections, as both express a feeling, cognitive interjections can be seen as more related to knowledge of something (i.e., information previously known to the speaker, or recently learned).
Distinctions and modern classification
Primary and secondary interjections
Interjections may be subdivided and classified in several ways. A common distinction is based on relations to other word categories: primary interjections are interjections first and foremost (examples: Oops!, Ouch!, Huh?), while secondary interjections are words from other categories that come to be used as interjections by virtue of their meaning (examples: Damn!, Hell!). Primary interjections are generally considered to be single words (Oh!, Wow!). Secondary interjections can consist of multi-word phrases, or interjectional phrases (examples: sup! from What's up?, Excuse me!, Oh dear!, Thank God!), but can also include single-word alarm words (Help!), swear and taboo words (Heavens!), and other words used to show emotion (Drats!). Although secondary interjections tend to interact more with the words around them, a characteristic of all interjections—whether primary or secondary—is that they can stand alone. For example, it is possible to utter an interjection like ouch! or bloody hell! on its own, whereas a different part of speech that may seem similar in function and length, such as the article the, cannot be uttered alone (you cannot just say the! independently in English).
Further distinctions can be made based on function. Exclamations and curses are primarily about giving expression to private feelings or emotions, while response particles and hesitation markers are primarily directed at managing the flow of social interaction.
Interjections and other word classes
Interjections are sometimes classified as particles, a catch-all category that includes adverbs and onomatopoeia. The main thing these word types share is that they can occur on their own and do not easily undergo inflection, but they are otherwise divergent in several ways. A key difference between interjections and onomatopoeia is that interjections are typically responses to events, while onomatopoeia can be seen as imitations of events.
Interjections can also be confused with adverbs when they appear following a form of the verb “go” (as in "he went 'ouch!'"), which may seem to describe a manner of going (compare: 'he went rapidly'). However, this is only a superficial similarity, as the verb go in the first example does not describe the action of going somewhere. One way to differentiate between an interjection and adverb in this position is to find the speaker of the item in question. If it is understood that the subject of the utterance also utters the item (as in "ouch!" in the first example), then it cannot be an adverb.
Routines are considered a form of speech act that relies on an understood social communicative pattern between the addresser and the addressed. This differs from an interjection, which is more of a strategic utterance within a speech act that brings attention to the utterance but may or may not have an intended addressee (directed at an individual or group). In addition, routines generally are multi-word expressions, whereas interjections tend to be single utterances.
Under a different use of the term "particle", particles and interjections can be distinguished in that particles cannot be independent utterances and are fully a part of the syntax of the utterance. Interjections, on the other hand, can stand alone and are also always preceded by a pause, setting them apart from the grammar and syntax of the surrounding utterances.
Interjections as deictics
Interjections are bound by context, meaning that their interpretation is largely dependent on the time and place at which they are uttered. In linguistics, interjections can also be considered a form of deixis. Although their meaning is fixed (e.g., "Wow!" = surprised), there is also a referencing element which is tied to the situation. For example, the use of the interjection "Wow!" necessarily references some relation between the speaker and something that has just caused surprise to the speaker at the moment of the utterance. Without context, the listener would not know the referent of the expression (viz., the source of the surprise). Similarly, the interjection "Ouch!" generally expresses pain, but also requires contextual information for the listener to determine the referent of the expression (viz., the cause of the pain).
While we can often see deictic or indexical elements in expressive interjections, examples of reference are perhaps more clearly illustrated in the use of imperative examples. Volitive interjections such as "Ahem", "Psst!", and "Shh!" could be considered imperative, as the speaker is requesting or demanding something from the listener. Similar to the deictic pronoun "you", the referent of these expressions changes, dependent on the context of the utterance.
Interjections across languages
Interjections can take very different forms and meanings across cultures. For instance, the English interjections gee and wow have no direct equivalent in Polish, and the closest equivalent for Polish 'fu' (an interjection of disgust) is the different sounding 'Yuck!'. Curses likewise are famously language-specific and colourful. On the other hand, interjections that manage social interaction may be more similar across languages. For instance, the word 'Huh?', used when one has not caught what someone just said, is remarkably similar in 31 spoken languages around the world, prompting claims that it may be a universal word. Similar observations have been made for the interjections 'Oh!' (meaning, roughly, "now I see") and 'Mm/m-hm' (with the meaning "keep talking, I'm with you").
Across languages, interjections often use special sounds and syllable types that are not commonly used in other parts of the vocabulary. For instance, interjections like 'brr' and 'shh!' are made entirely of consonants, where in virtually all languages, words have to feature at least one vowel-like element. Some, like 'tut-tut' and 'ahem', are written like normal words, but their actual production involves clicks or throat-clearing. The phonetic atypicality of some interjections is one reason they have traditionally been considered as lying outside the realm of language.
Examples from English
Several English interjections contain sounds, or are sounds as opposed to words, that do not (or very rarely) exist in regular English phonological inventory. For example:
Ahem , ("Attention!") may contain a glottal stop or a in any dialect of English; the glottal stop is common in American English, some British dialects, and in other languages, such as German.
Gah , ("Gah, there's nothing to do!") ends with , which does not occur with regular English words.
Psst ("Listen closely!") is an entirely consonantal syllable, and its consonant cluster does not occur initially in regular English words.
Shh ("Quiet!") is another entirely consonantal syllable word.
Tut-tut ("Shame on you"), also spelled tsk-tsk, is made up entirely of clicks, which are an active part of regular speech in several African languages. This particular click is dental. (This also has the spelling pronunciation .)
Ugh ("Disgusting!") ends with a velar fricative consonant, which is otherwise restricted to just a few regional dialects of English, though it is common in languages like Spanish, German, Gaelic, and Russian.
Whew or phew ("What a relief!"), also spelled shew, may start with a bilabial fricative, a sound pronounced with a strong puff of air through the lips. This sound is a common phoneme in such languages as Suki (a language of New Guinea) and Ewe and Logba (both spoken in Ghana and Togo).
Uh-oh ("Oh, no!") contains a glottal stop.
Yeah ("Yes") ends with the vowel , or in some dialects the short vowel or tensed , none of which are found at the end of any regular English words.
See also
Aizuchi
Apostrophe (figure of speech)
Discourse marker
Filler (linguistics)
List of interjections by language at Wiktionary
English interjections at Wiktionary
Category: Interjections
Vocable
References
Parts of speech
Negative-feedback amplifier

A negative-feedback amplifier (or feedback amplifier) is an electronic amplifier that subtracts a fraction of its output from its input, so that the negative feedback opposes the original signal. The applied negative feedback can improve performance (gain stability, linearity, frequency response, step response) and reduce sensitivity to parameter variations due to manufacturing or environment. Because of these advantages, many amplifiers and control systems use negative feedback.
An idealized negative-feedback amplifier is a system of three elements (see Figure 1):
an amplifier with gain AOL,
a feedback network β, which senses the output signal and possibly transforms it in some way (for example by attenuating or filtering it),
a summing circuit that acts as a subtractor (the circle in the figure), which combines the input and the transformed output.
Overview
Fundamentally, all electronic devices that provide power gain (e.g., vacuum tubes, bipolar transistors, MOS transistors) are nonlinear. Negative feedback trades gain for higher linearity (reducing distortion) and can provide other benefits. If not designed correctly, amplifiers with negative feedback can under some circumstances become unstable due to the feedback becoming positive, resulting in unwanted behavior such as oscillation. The Nyquist stability criterion developed by Harry Nyquist of Bell Laboratories is used to study the stability of feedback amplifiers.
Feedback amplifiers share these properties:
Pros:
Can increase or decrease input impedance (depending on type of feedback).
Can increase or decrease output impedance (depending on type of feedback).
Reduces total distortion if sufficiently applied (increases linearity).
Increases the bandwidth.
Desensitizes gain to component variations.
Can control step response of amplifier.
Cons:
May lead to instability if not designed carefully.
Amplifier gain decreases.
Input and output impedances of a negative-feedback amplifier (closed-loop amplifier) become sensitive to the gain of an amplifier without feedback (open-loop amplifier)—that exposes these impedances to variations in the open-loop gain, for example, due to parameter variations or nonlinearity of the open-loop gain.
Changes the composition of the distortion (increasing audibility) if insufficiently applied.
History
Paul Voigt patented a negative-feedback amplifier in January 1924, though his theory lacked detail. Harold Stephen Black independently invented the negative-feedback amplifier while he was a passenger on the Lackawanna Ferry (from Hoboken Terminal to Manhattan) on his way to work at Bell Laboratories (then located in Manhattan rather than New Jersey) on August 2, 1927 (US Patent 2,102,671, issued in 1937). Black was working on reducing distortion in repeater amplifiers used for telephone transmission. On a blank space in his copy of The New York Times, he recorded the diagram found in Figure 1 and the equations derived below.
On August 8, 1928, Black submitted his invention to the U. S. Patent Office, which took more than 9 years to issue the patent. Black later wrote: "One reason for the delay was that the concept was so contrary to established beliefs that the Patent Office initially did not believe it would work."
Classical feedback
Using the model of two unilateral blocks, several consequences of feedback are simply derived.
Gain reduction
Below, the voltage gain of the amplifier with feedback, the closed-loop gain AFB, is derived in terms of the gain of the amplifier without feedback, the open-loop gain AOL and the feedback factor β, which governs how much of the output signal is applied to the input (see Figure 1). The open-loop gain AOL in general may be a function of both frequency and voltage; the feedback parameter β is determined by the feedback network that is connected around the amplifier. For an operational amplifier, two resistors forming a voltage divider may be used for the feedback network to set β between 0 and 1. This network may be modified using reactive elements like capacitors or inductors to (a) give frequency-dependent closed-loop gain as in equalization/tone-control circuits or (b) construct oscillators. The gain of the amplifier with feedback is derived below in the case of a voltage amplifier with voltage feedback.
Without feedback, the input voltage V′in is applied directly to the amplifier input. The corresponding output voltage is

Vout = AOL V′in.

Suppose now that an attenuating feedback loop applies a fraction β Vout of the output to one of the subtractor inputs so that it subtracts from the circuit input voltage Vin applied to the other subtractor input. The result of the subtraction applied to the amplifier input is

V′in = Vin − β Vout.

Substituting for V′in in the first expression,

Vout = AOL (Vin − β Vout).

Rearranging:

Vout (1 + β AOL) = AOL Vin.

Then the gain of the amplifier with feedback, called the closed-loop gain, AFB, is given by

AFB = Vout / Vin = AOL / (1 + β AOL).
If AOL ≫ 1, then AFB ≈ 1 / β, and the effective amplification (or closed-loop gain) AFB is set by the feedback constant β, and hence set by the feedback network, usually a simple reproducible network, thus making linearizing and stabilizing the amplification characteristics straightforward. If there are conditions where β AOL = −1, the amplifier has infinite amplification – it has become an oscillator, and the system is unstable. The stability characteristics of the gain feedback product β AOL are often displayed and investigated on a Nyquist plot (a polar plot of the gain/phase shift as a parametric function of frequency). A simpler, but less general technique, uses Bode plots.
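A quick numeric illustration of AFB = AOL / (1 + β AOL) and the 1/β limit (values chosen purely for illustration):

```python
def closed_loop_gain(a_ol: float, beta: float) -> float:
    """A_FB = A_OL / (1 + beta * A_OL) for the ideal feedback amplifier."""
    return a_ol / (1 + beta * a_ol)

beta = 0.01                      # feedback network feeds back 1% of the output
for a_ol in (1e3, 1e5, 1e7):     # open-loop gain varied over four decades
    print(a_ol, closed_loop_gain(a_ol, beta))
# The closed-loop gain approaches 1/beta = 100 as beta*A_OL >> 1:
# roughly 90.9, 99.9, 100.0 — nearly independent of A_OL.
```

This is the desensitization property: a 10,000-fold change in the open-loop gain moves the closed-loop gain by only about 10%.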
The combination L = −β AOL appears commonly in feedback analysis and is called the loop gain. The combination (1 + β AOL) also appears commonly and is variously named as the desensitivity factor, return difference, or improvement factor.
Summary of terms
Open-loop gain = AOL
Closed-loop gain = AOL / (1 + β AOL)
Feedback factor = β
Noise gain = 1 / β
Loop gain = −β AOL
Desensitivity factor = 1 + β AOL
Bandwidth extension
Feedback can be used to extend the bandwidth of an amplifier at the cost of lowering the amplifier gain. Figure 2 shows such a comparison. The figure is understood as follows. Without feedback the so-called open-loop gain in this example has a single-time-constant frequency response given by

AOL(f) = A0 / (1 + j f / fC),

where fC is the cutoff or corner frequency of the amplifier: in this example fC = 10^4 Hz, and the gain at zero frequency A0 = 10^5 V/V. The figure shows that the gain is flat out to the corner frequency and then drops. When feedback is present, the so-called closed-loop gain, as shown in the formula of the previous section, becomes

AFB(f) = AOL(f) / (1 + β AOL(f)) = [A0 / (1 + β A0)] / (1 + j f / (fC (1 + β A0))).
The last expression shows that the feedback amplifier still has a single-time-constant behavior, but the corner frequency is now increased by the improvement factor (1 + β A0), and the gain at zero frequency has dropped by exactly the same factor. This behavior is called the gain–bandwidth tradeoff. In Figure 2, (1 + β A0) = 10^3, so AFB(0) = 10^5 / 10^3 = 100 V/V, and fC increases to 10^4 × 10^3 = 10^7 Hz.
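The tradeoff can be checked numerically with the single-pole model and the example values A0 = 10^5 V/V, fC = 10^4 Hz, β = 10^−2 (a sketch; the imaginary unit j is written 1j in Python):

```python
A0, fC, beta = 1e5, 1e4, 1e-2   # example values from the text

def open_loop(f: float) -> complex:
    """Single-time-constant open-loop gain A_OL(f) = A0 / (1 + j f/fC)."""
    return A0 / (1 + 1j * f / fC)

def closed_loop(f: float) -> complex:
    """Closed-loop gain A_FB(f) = A_OL(f) / (1 + beta * A_OL(f))."""
    return open_loop(f) / (1 + beta * open_loop(f))

f_new = fC * (1 + beta * A0)      # predicted new corner frequency: ~10^7 Hz
print(abs(closed_loop(0)))        # DC gain has dropped to ~100 V/V
print(abs(closed_loop(f_new)))    # ~100/sqrt(2): the -3 dB corner has moved
```

The DC gain falls by (1 + β A0) while the −3 dB point rises by the same factor, so the gain–bandwidth product is unchanged.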
Multiple poles
When the closed-loop gain has several poles, rather than the single pole of the above example, feedback can result in complex poles (real and imaginary parts). In a two-pole case, the result is peaking in the frequency response of the feedback amplifier near its corner frequency and ringing and overshoot in its step response. In the case of more than two poles, the feedback amplifier can become unstable and oscillate. See the discussion of gain margin and phase margin. For a complete discussion, see Sansen.
Signal-flow analysis
A principal idealization behind the formulation of the Introduction is the network's division into two autonomous blocks (that is, with their own individually determined transfer functions), a simple example of what often is called "circuit partitioning", which refers in this instance to the division into a forward amplification block and a feedback block. In practical amplifiers, the information flow is not unidirectional as shown here. Frequently these blocks are taken to be two-port networks to allow inclusion of bilateral information transfer. Casting an amplifier into this form is a non-trivial task, however, especially when the feedback involved is not global (that is directly from the output to the input) but local (that is, feedback within the network, involving nodes that do not coincide with input and/or output terminals).
In these more general cases, the amplifier is analyzed more directly without the partitioning into blocks like those in the diagram, using instead some analysis based upon signal-flow analysis, such as the return-ratio method or the asymptotic gain model. Commenting upon the signal-flow approach, Choma says:
"In contrast to block diagram and two-port approaches to the feedback network analysis problem, signal flow methods mandate no a priori assumptions as to the unilateral or bilateral properties of the open loop and feedback subcircuits. Moreover, they are not predicated on mutually independent open loop and feedback subcircuit transfer functions, and they do not require that feedback be implemented only globally. Indeed signal flow techniques do not even require explicit identification of the open loop and feedback subcircuits. Signal flow thus removes the detriments pervasive of conventional feedback network analyses but additionally, it proves to be computationally efficient as well."
Following up on this suggestion, a signal-flow graph for a negative-feedback amplifier is shown in the figure, which is patterned after one by D'Amico et al. Following these authors, the notation is as follows:
"Variables xS, xO represent the input and output signals, moreover, two other generic variables, xi, xj linked together through the control (or critical) parameter P are explicitly shown. Parameters aij are the weight branches. Variables xi, xj and the control parameter, P, model a controlled generator, or the relation between voltage and current across two nodes of the circuit.
The term a11 is the transfer function between the input and the output [after] setting the control parameter, P, to zero; term a12 is the transfer function between the output and the controlled variable xj [after] setting the input source, xS, to zero; term a21 represents the transfer function between the source variable and the inner variable, xi when the controlled variable xj is set to zero (i.e., when the control parameter, P is set to zero); term a22 gives the relation between the independent and the controlled inner variables setting control parameter, P and input variable, xS, to zero."
Using this graph, these authors derive the generalized gain expression in terms of the control parameter P that defines the controlled-source relationship xj = P xi:

xO = a11 xS + a12 xj,
xi = a21 xS + a22 xj.
Combining these results, the gain is given by

xO / xS = a11 + (a12 a21 P) / (1 − a22 P).
To employ this formula, one has to identify a critical controlled source for the particular amplifier circuit in hand. For example, P could be the control parameter of one of the controlled sources in a two-port network, as shown for a particular case in D'Amico et al. As a different example, if we take a12 = a21 = 1, P = A, a22 = –β (negative feedback) and a11 = 0 (no feedforward), we regain the simple result with two unidirectional blocks.
Two-port analysis of feedback
Although, as mentioned in the section Signal-flow analysis, some form of signal-flow analysis is the most general way to treat the negative-feedback amplifier, representation as two two-ports is the approach most often presented in textbooks and is presented here. It retains a two-block circuit partition of the amplifier, but allows the blocks to be bilateral. Some drawbacks of this method are described at the end.
Electronic amplifiers use current or voltage as input and output, so four types of amplifier are possible (any of two possible inputs with any of two possible outputs). See classification of amplifiers. The objective for the feedback amplifier may be any one of the four types of amplifier and is not necessarily the same type as the open-loop amplifier, which itself may be any one of these types. So, for example, an op amp (voltage amplifier) can be arranged to make a current amplifier instead.
Negative-feedback amplifiers of any type can be implemented using combinations of two-port networks. There are four types of two-port network, and the type of amplifier desired dictates the choice of two-ports and the selection of one of the four different connection topologies shown in the diagram. These connections are usually referred to as series or shunt (parallel) connections. In the diagram, the left column shows shunt inputs; the right column shows series inputs. The top row shows series outputs; the bottom row shows shunt outputs. The various combinations of connections and two-ports are listed in the table below.
For example, for a current-feedback amplifier, current from the output is sampled for feedback and combined with current at the input. Therefore, the feedback ideally is performed using an (output) current-controlled current source (CCCS), and its imperfect realization using a two-port network also must incorporate a CCCS, that is, the appropriate choice for feedback network is a g-parameter two-port. Here the two-port method used in most textbooks is presented, using the circuit treated in the article on asymptotic gain model.
Figure 3 shows a two-transistor amplifier with a feedback resistor Rf. The aim is to analyze this circuit to find three items: the gain, the output impedance looking into the amplifier from the load, and the input impedance looking into the amplifier from the source.
Replacement of the feedback network with a two-port
The first step is replacement of the feedback network by a two-port. Just what components go into the two-port?
On the input side of the two-port we have Rf. If the voltage at the right side of Rf changes, it changes the current in Rf that is subtracted from the current entering the base of the input transistor. That is, the input side of the two-port is a dependent current source controlled by the voltage at the top of resistor R2.
One might say the second stage of the amplifier is just a voltage follower, transmitting the voltage at the collector of the input transistor to the top of R2. That is, the monitored output signal is really the voltage at the collector of the input transistor. That view is legitimate, but then the voltage follower stage becomes part of the feedback network. That makes analysis of feedback more complicated.
An alternative view is that the voltage at the top of R2 is set by the emitter current of the output transistor. That view leads to an entirely passive feedback network made up of R2 and Rf. The variable controlling the feedback is the emitter current, so the feedback is a current-controlled current source (CCCS). We search through the four available two-port networks and find the only one with a CCCS is the g-parameter two-port, shown in Figure 4. The next task is to select the g-parameters so that the two-port of Figure 4 is electrically equivalent to the L-section made up of R2 and Rf. That selection is an algebraic procedure made most simply by looking at two individual cases: the case with V1 = 0, which makes the VCVS on the right side of the two-port a short-circuit; and the case with I2 = 0, which makes the CCCS on the left side an open circuit. The algebra in these two cases is simple, much easier than solving for all variables at once. The choices of g-parameters that make the two-port and the L-section behave the same way are shown in the table below.
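The two-case procedure can be checked numerically. The sketch below (illustrative component values, not taken from the article) drives the R2–Rf L-section first with I2 = 0 and then with V1 = 0, and reads off the g-parameters; the closed forms in the comments follow from the node equations.

```python
# Hypothetical values for the feedback resistors (assumed for illustration).
R2, Rf = 1_000.0, 9_000.0  # ohms

def l_section(V1, I2):
    """L-section: Rf from port 1 to port 2, R2 from port 2 to ground.
    Given the port-1 voltage V1 and the current I2 injected into port 2,
    solve KCL at the port-2 node and return (I1, V2)."""
    V2 = (I2 + V1 / Rf) / (1.0 / Rf + 1.0 / R2)  # (V2 - V1)/Rf + V2/R2 = I2
    I1 = (V1 - V2) / Rf                          # current into port 1
    return I1, V2

# Case I2 = 0: the CCCS on the left of the g-parameter two-port is an open
# circuit, so driving port 1 with 1 V reads off g11 and g21 directly.
I1, V2 = l_section(V1=1.0, I2=0.0)
g11, g21 = I1, V2

# Case V1 = 0: the VCVS on the right is a short circuit, so injecting 1 A
# into port 2 reads off g12 and g22.
I1, V2 = l_section(V1=0.0, I2=1.0)
g12, g22 = I1, V2

# Closed forms: g11 = 1/(R2+Rf), g12 = -R2/(R2+Rf),
#               g21 = R2/(R2+Rf), g22 = R2*Rf/(R2+Rf)  (= R2 || Rf)
print(g11, g12, g21, g22)
```

Note that g12 = −g21 here, as expected for a reciprocal (purely resistive) feedback network.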
Small-signal circuit
The next step is to draw the small-signal schematic for the amplifier with the two-port in place using the hybrid-pi model for the transistors. Figure 5 shows the schematic with notation R3 = RC2 || RL and R11 = 1 / g11, R22 = g22.
Loaded open-loop gain
Figure 3 indicates the output node, but not the choice of output variable. A useful choice is the short-circuit current output of the amplifier (leading to the short-circuit current gain). Because this variable leads simply to any of the other choices (for example, load voltage or load current), the short-circuit current gain is found below.
First the loaded open-loop gain is found. The feedback is turned off by setting g12 = g21 = 0. The idea is to find how much the amplifier gain is changed because of the resistors in the feedback network by themselves, with the feedback turned off. This calculation is straightforward because R11, RB, and rπ1 all are in parallel and v1 = vπ. Let R1 = R11 || RB || rπ1. In addition, i2 = −(β+1) iB. The result for the open-loop current gain AOL is:
Gain with feedback
In the classical approach to feedback, the feedforward represented by the VCVS (that is, g21 v1) is neglected. That makes the circuit of Figure 5 resemble the block diagram of Figure 1, and the gain with feedback is then:

AFB = AOL / ( 1 + βFB AOL ),
where the feedback factor βFB = −g12. Notation βFB is introduced for the feedback factor to distinguish it from the transistor β.
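The closed-loop formula AFB = AOL / (1 + βFB AOL) can be checked numerically; the numbers below are assumed for illustration, not taken from the article's circuit. The classic benefit appears directly: doubling the open-loop gain barely moves the closed-loop gain, which approaches 1/βFB.

```python
def gain_with_feedback(A_OL, beta_FB):
    """Classical feedback formula: AFB = AOL / (1 + beta_FB * AOL)."""
    return A_OL / (1.0 + beta_FB * A_OL)

beta = 0.1  # assumed feedback factor
low, high = gain_with_feedback(1e3, beta), gain_with_feedback(2e3, beta)
print(low, high)   # open-loop gain doubled, closed-loop gain moved < 1 %
print(1.0 / beta)  # asymptotic value the closed-loop gain approaches
```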
Input and output resistances
Feedback is used to better match signal sources to their loads. For example, a direct connection of a voltage source to a resistive load may result in signal loss due to voltage division, but interjecting a negative feedback amplifier can increase the apparent load seen by the source, and reduce the apparent driver impedance seen by the load, avoiding signal attenuation by voltage division. This advantage is not restricted to voltage amplifiers, but analogous improvements in matching can be arranged for current amplifiers, transconductance amplifiers and transresistance amplifiers.
To explain these effects of feedback upon impedances, first a digression on how two-port theory approaches resistance determination, and then its application to the amplifier at hand.
Background on resistance determination
Figure 6 shows an equivalent circuit for finding the input resistance of a feedback voltage amplifier (left) and for a feedback current amplifier (right). These arrangements are typical Miller theorem applications.
In the case of the voltage amplifier, the output voltage βVout of the feedback network is applied in series and with an opposite polarity to the input voltage Vx travelling over the loop (but in respect to ground, the polarities are the same). As a result, the effective voltage across and the current through the amplifier input resistance Rin decrease so that the circuit input resistance increases (one might say that Rin apparently increases). Its new value can be calculated by applying Miller theorem (for voltages) or the basic circuit laws. Thus Kirchhoff's voltage law provides:

Vx = Ix Rin + β vout,
where vout = Av vin = Av Ix Rin. Substituting this result in the above equation and solving for the input resistance of the feedback amplifier, the result is:

Rin(fb) = Vx / Ix = Rin ( 1 + β Av ).
The general conclusion from this example and a similar example for the output resistance case is:
A series feedback connection at the input (output) increases the input (output) resistance by a factor ( 1 + β AOL ), where AOL = open loop gain.
On the other hand, for the current amplifier, the output current βIout of the feedback network is applied in parallel and with an opposite direction to the input current Ix. As a result, the total current flowing through the circuit input (not only through the input resistance Rin) increases and the voltage across it decreases so that the circuit input resistance decreases (Rin apparently decreases). Its new value can be calculated by applying the dual Miller theorem (for currents) or the basic Kirchhoff's laws:

Ix = Vx / Rin + β iout,
where iout = Ai iin = Ai Vx / Rin. Substituting this result in the above equation and solving for the input resistance of the feedback amplifier, the result is:

Rin(fb) = Vx / Ix = Rin / ( 1 + β Ai ).
The general conclusion from this example and a similar example for the output resistance case is:
A parallel feedback connection at the input (output) decreases the input (output) resistance by a factor ( 1 + β AOL ), where AOL = open loop gain.
These conclusions can be generalized to treat cases with arbitrary Norton or Thévenin drives, arbitrary loads, and general two-port feedback networks. However, the results do depend upon the main amplifier having a representation as a two-port – that is, the results depend on the same current entering and leaving the input terminals, and likewise, the same current that leaves one output terminal must enter the other output terminal.
A broader conclusion, independent of the quantitative details, is that feedback can be used to increase or to decrease the input and output impedance.
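The two rules can be wrapped in a small helper, with illustrative numbers assumed below: a series connection multiplies the open-loop resistance by the improvement factor, a shunt (parallel) connection divides by it.

```python
def resistance_with_feedback(R_open_loop, beta, A_OL, connection):
    """Apply the improvement factor (1 + beta*A_OL) to an open-loop resistance.
    connection: 'series' multiplies, 'shunt' divides."""
    factor = 1.0 + beta * A_OL
    if connection == "series":
        return R_open_loop * factor
    if connection == "shunt":
        return R_open_loop / factor
    raise ValueError("connection must be 'series' or 'shunt'")

# Illustrative numbers (assumed): a 10 kOhm port, beta = 0.1, A_OL = 1000.
print(resistance_with_feedback(10e3, 0.1, 1000, "series"))  # 1.01e6 ohms
print(resistance_with_feedback(10e3, 0.1, 1000, "shunt"))   # ~99 ohms
```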
Application to the example amplifier
These resistance results now are applied to the amplifier of Figure 3 and Figure 5. The improvement factor that reduces the gain, namely ( 1 + βFB AOL), directly decides the effect of feedback upon the input and output resistances of the amplifier. In the case of a shunt connection, the input impedance is reduced by this factor; and in the case of series connection, the impedance is multiplied by this factor. However, the impedance that is modified by feedback is the impedance of the amplifier in Figure 5 with the feedback turned off, and does include the modifications to impedance caused by the resistors of the feedback network.
Therefore, the input impedance seen by the source with feedback turned off is Rin = R1 = R11 || RB || rπ1, and with the feedback turned on (but no feedforward)

Rin(FB) = R1 / ( 1 + βFB AOL ),
where division is used because the input connection is shunt: the feedback two-port is in parallel with the signal source at the input side of the amplifier. A reminder: AOL is the loaded open loop gain found above, as modified by the resistors of the feedback network.
The impedance seen by the load needs further discussion. The load in Figure 5 is connected to the collector of the output transistor, and therefore is separated from the body of the amplifier by the infinite impedance of the output current source. Therefore, feedback has no effect on the output impedance, which remains simply RC2 as seen by the load resistor RL in Figure 3.
If instead we wanted to find the impedance presented at the emitter of the output transistor (instead of its collector), which is series connected to the feedback network, feedback would increase this resistance by the improvement factor ( 1 + βFB AOL).
Load voltage and load current
The gain derived above is the current gain at the collector of the output transistor. To relate this gain to the gain when voltage is the output of the amplifier, notice that the output voltage at the load RL is related to the collector current by Ohm's law as vL = iC (RC2 || RL). Consequently, the transresistance gain vL / iS is found by multiplying the current gain by RC2 || RL:

vL / iS = AFB ( RC2 || RL ),

where AFB denotes the current gain with feedback found above.
Similarly, if the output of the amplifier is taken to be the current in the load resistor RL, current division determines the load current, and the gain is then:

iL / iS = AFB RC2 / ( RC2 + RL ),

with AFB again the current gain with feedback.
Is the main amplifier block a two-port?
Some drawbacks of the two-port approach follow, intended for the attentive reader.
Figure 7 shows the small-signal schematic with the main amplifier and the feedback two-port in shaded boxes. The feedback two-port satisfies the port conditions: at the input port, Iin enters and leaves the port, and likewise at the output, Iout enters and leaves.
Is the main amplifier block also a two-port? The main amplifier is shown in the upper shaded box. The ground connections are labeled. Figure 7 shows the interesting fact that the main amplifier does not satisfy the port conditions at its input and output unless the ground connections are chosen to make that happen. For example, on the input side, the current entering the main amplifier is IS. This current is divided three ways: to the feedback network, to the bias resistor RB and to the base resistance of the input transistor rπ. To satisfy the port condition for the main amplifier, all three components must be returned to the input side of the main amplifier, which means all the ground leads labeled G1 must be connected, as well as emitter lead GE1. Likewise, on the output side, all ground connections G2 must be connected and also ground connection GE2. Then, at the bottom of the schematic, underneath the feedback two-port and outside the amplifier blocks, G1 is connected to G2. That forces the ground currents to divide between the input and output sides as planned. Notice that this connection arrangement splits the emitter of the input transistor into a base-side and a collector-side – a physically impossible thing to do, but electrically the circuit sees all the ground connections as one node, so this fiction is permitted.
Of course, the way the ground leads are connected makes no difference to the amplifier (they are all one node), but it makes a difference to the port conditions. This artificiality is a weakness of this approach: the port conditions are needed to justify the method, but the circuit really is unaffected by how currents are traded among ground connections.
However, if no possible arrangement of ground conditions leads to the port conditions, the circuit might not behave the same way. The improvement factors (1 + βFB AOL) for determining input and output impedance might not work. This situation is awkward, because a failure to make a two-port may reflect a real problem (it just is not possible), or reflect a lack of imagination (for example, just did not think of splitting the emitter node in two). As a consequence, when the port conditions are in doubt, at least two approaches are possible to establish whether improvement factors are accurate: either simulate an example using Spice and compare results with use of an improvement factor, or calculate the impedance using a test source and compare results.
A more practical choice is to drop the two-port approach altogether, and use various alternatives based on signal flow graph theory, including the Rosenstark method, the Choma method, and use of Blackman's theorem. That choice may be advisable if small-signal device models are complex, or are not available (for example, the devices are known only numerically, perhaps from measurement or from SPICE simulations).
Feedback amplifier formulas
Summarizing the two-port analysis of feedback, one can get this table of formulas.
The variables and their meanings are
A - gain, i - current, v - voltage, β - feedback gain and R - resistance.
The subscripts and their meanings are
fb - feedback amplifier, V - voltage, G - transconductance, R - transresistance, out - output and i - current for gains and feedback, and in - input for resistances.
For example, Afb,V means the voltage feedback amplifier gain.
Distortion
Simple amplifiers like the common emitter configuration have primarily low-order distortion, such as the 2nd and 3rd harmonics. In audio systems, these can be only minimally audible because musical signals are typically already a harmonic series, and the low-order distortion products are hidden by the masking effect of the human auditory system.
After applying moderate amounts of negative feedback (10–15 dB), the low-order harmonics are reduced, but higher-order harmonics are introduced. Since these are not masked as well, the distortion becomes audibly worse, even though the overall THD may go down. This has led to a persistent myth that negative feedback is detrimental in audio amplifiers, leading audiophile manufacturers to market their amplifiers as "zero feedback" (even when they use local feedback to linearize each stage).
However, as the amount of negative feedback is increased further, all harmonics are reduced, first returning the distortion to inaudibility and then pushing it below the original zero-feedback level (provided the system is strictly stable). So the problem is not negative feedback, but insufficient amounts of it.
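The trend can be reproduced with a toy model. The sketch below is not the article's circuit: it assumes a memoryless stage with weak cubic distortion, closes an ideal feedback loop around it by solving y = stage(x − βy) for each sample with Newton's method, and rescales the drive so the output level stays comparable. With more loop gain, the relative third-harmonic content drops.

```python
import math

G = 100.0  # assumed open-loop gain of the toy stage

def stage(v):
    """Memoryless open-loop stage with weak cubic distortion (toy model)."""
    return G * (v - 0.1 * v ** 3)

def with_feedback(x, beta):
    """Closed-loop output: solve y = stage(x - beta*y) by Newton's method."""
    y = G * x / (1.0 + beta * G)  # linearized starting guess
    for _ in range(40):
        v = x - beta * y
        y -= (y - stage(v)) / (1.0 + beta * G * (1.0 - 0.3 * v ** 2))
    return y

def harmonic(sig, k):
    """Amplitude of the k-th harmonic over one sampled period (plain DFT bin)."""
    n = len(sig)
    re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(sig))
    im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(sig))
    return 2.0 * math.hypot(re, im) / n

N = 512
ratios = {}
for beta in (0.0, 0.05, 0.5):          # none, moderate, heavy feedback
    drive = 0.5 * (1.0 + beta * G)     # keep the output level comparable
    y = [with_feedback(drive * math.sin(2 * math.pi * i / N), beta)
         for i in range(N)]
    ratios[beta] = harmonic(y, 3) / harmonic(y, 1)
    print(f"beta={beta}: H3/H1 = {ratios[beta]:.6f}")
```

In this simplified model each increase of loop gain shrinks H3/H1 by roughly the improvement factor (1 + βG); a real multi-stage amplifier would additionally generate the new higher-order products discussed above.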
See also
Asymptotic gain model
Blackman's theorem
Bode plot
Buffer amplifier considers the basic op-amp amplifying stage with negative feedback
Common collector (emitter follower) is dedicated to the basic transistor amplifying stage with negative feedback
Extra element theorem
Frequency compensation
Miller theorem is a powerful tool for determining the input/output impedances of negative feedback circuits
Operational amplifier presents the basic op-amp non-inverting amplifier and inverting amplifier
Operational amplifier applications shows the most typical op-amp circuits with negative feedback
Phase margin
Pole splitting
Return ratio
Step response
References and notes
Electronic amplifiers
"Technology"
] | 6,043 | [
"Electronic amplifiers",
"Amplifiers"
] |
181,407 | https://en.wikipedia.org/wiki/Groundcover | Groundcover or ground cover is any plant that grows low over an area of ground, which protects the topsoil from erosion and drought. In a terrestrial ecosystem, the ground cover forms the layer of vegetation below the shrub layer known as the herbaceous layer, and provides habitats and concealments for (especially fossorial) terrestrial fauna. The most widespread ground covers are grasses of various types.
In ecology, groundcover is a difficult subject to address because it is known by several different names and is classified in several different ways. The term "groundcover" could also be referring to "the herbaceous layer", "regenerative layer", "ground flora" or even "step over".
In agriculture, ground cover refers to anything that lies on top of the soil and protects it from erosion and inhibits weeds. It can be anything from a low layer of grasses to a plastic material. The term ground cover can also specifically refer to landscaping fabric, a breathable tarp that allows water and gas exchange.
In gardening jargon, however, the term groundcover refers to plants that are used in place of weeds and that improve appearance by concealing bare earth.
Contributions to the environment
The herbaceous layer is often overlooked in most ecological analyses because it is so common and contributes the smallest amount of the environment's overall biomass. However, groundcover is crucial to the survival of many environments. The groundcover layer of a forest can contribute up to 90% of the ecosystem's plant diversity. Additionally, in many ecosystems the herbaceous layer's contribution to plant productivity is disproportionate to its biomass: it can constitute up to 4% of the overall net primary productivity (NPP) of an ecosystem, four times its share of the biomass.
Reproduction
Groundcover typically reproduces one of five ways:
Lateral growth:
Side growth: branches on the side of the plant extend outwards upon contact with the soil.
Base growth: new plants are produced from the base of the origin plant.
Underground growth: produced from rhizomes.
Above-ground growth: produced from stolons.
Roots
Like most foliage, groundcover reacts to both natural and anthropogenic disturbances. These responses can be classified as legacy or active responses. Legacy responses occur during long-term changes to an environment, such as the conversion of a forest to agricultural land and back into forest. Active responses occur with sudden disturbances to the environment, such as tornadoes and forest fires.
Groundcover has also been known to influence the placement and growth of tree seedlings. All tree seedlings must first fall from their origin trees and then permeate the layer created by groundcover in order to reach the soil and germinate. The groundcover filters out a large number of seeds, but lets a smaller portion of seeds pass through and grow. This filtration provides an ample amount of space between the seeds for future growth. In some areas, the groundcover can become so dense that no seeds can permeate the surface, and the forest is instead converted to shrubbery. Groundcover also limits the amount of light that reaches the floor of an ecosystem. An experiment conducted with the Rhododendron maximum canopy in the southern Appalachian region concluded that 4–8% of total sunlight makes it to the herbaceous layer, whereas only about 1–2% reaches the ground.
Variation
Two common variations of groundcover are residency and transient species. Residency species typically reach a maximum of in height, and are therefore permanently classified as herbaceous. Transient species are capable of growing past this height, and are therefore only temporarily considered herbaceous. These height differences make ideal environments for a variety of animals, such as the reed warbler, the harvest mouse and the wren.
Groundcover can also be classified in terms of its foliage. Groundcover that keeps its foliage for the entire year is known as evergreen, whereas groundcover that loses its foliage in the winter months is known as deciduous.
In gardening
Five general types of plants are commonly used as groundcovers in gardening:
Vines, which are woody plants with slender, spreading stems
Herbaceous plants, or non-woody plants
Shrubs of low-growing, spreading species
Moss of larger, coarser species
Ornamental grasses, especially low-growing varieties
Of these types, some of the most common groundcovers include:
Alfalfa (Medicago sativa)
Clover (Trifolium)
Dichondra
Bacopa (Bacopa)
Carpobrotus
Delairea
Ivy (Hedera)
Gazania (Gazania rigens)
Ground-elder (Aegopodium podagraria)
Ice plant
Japanese honeysuckle (Lonicera japonica)
Juniperus horizontalis
Creeping lantana
Lilyturf (Liriope muscari and Liriope spicata)
Mint (Mentha)
Mesembryanthemum cordifolium
Nasturtium (Tropaeolum majus)
Pachysandra
Pearlwort (Sagina subulata)
Sphagneticola trilobata
Periwinkle (Vinca)
Shasta daisy (Leucanthemum)
Soleirolia (Soleirolia soleirolii)
Spider plant (Chlorophytum comosum)
In roof gardens
Groundcover is a popular solution for difficult gardening issues because it is low maintenance, aesthetically pleasing and fast growing, minimizing the spread of weeds. For this reason, ground cover is also a common choice for roof gardens. Roofs take on the brunt of incoming weather, meaning any plants on a roof must be resistant to long-term exposure to sun, overwatering from rain and harsh winds. Groundcover plants are able to sustain themselves in such conditions while also providing lush vegetation to what would otherwise be unused space.
See also
Cover crop
Robel pole
Living mulch
Tapestry lawn
References
Plant morphology
Garden plants
Horticulture | Groundcover | [
"Biology"
] | 1,211 | [
"Plant morphology",
"Plants"
] |
181,417 | https://en.wikipedia.org/wiki/Knaster%E2%80%93Tarski%20theorem | In the mathematical areas of order and lattice theory, the Knaster–Tarski theorem, named after Bronisław Knaster and Alfred Tarski, states the following:
Let (L, ≤) be a complete lattice and let f : L → L be an order-preserving (monotonic) function w.r.t. ≤ . Then the set of fixed points of f in L forms a complete lattice under ≤ .
It was Tarski who stated the result in its most general form, and so the theorem is often known as Tarski's fixed-point theorem. Some time earlier, Knaster and Tarski established the result for the special case where L is the lattice of subsets of a set, the power set lattice.
The theorem has important applications in formal semantics of programming languages and abstract interpretation, as well as in game theory.
A kind of converse of this theorem was proved by Anne C. Davis: If every order-preserving function f : L → L on a lattice L has a fixed point, then L is a complete lattice.
Consequences: least and greatest fixed points
Since complete lattices cannot be empty (they must contain a supremum and infimum of the empty set), the theorem in particular guarantees the existence of at least one fixed point of f, and even the existence of a least fixed point (or greatest fixed point). In many practical cases, this is the most important implication of the theorem.
The least fixpoint of f is the least element x such that f(x) = x, or, equivalently, such that f(x) ≤ x; the dual holds for the greatest fixpoint, the greatest element x such that f(x) = x.
If f(lim xn) = lim f(xn) for all ascending sequences xn, then the least fixpoint of f is lim f^n(0), where 0 is the least element of L, thus giving a more "constructive" version of the theorem. (See: Kleene fixed-point theorem.) More generally, if f is monotonic, then the least fixpoint of f is the stationary limit of f^α(0), taking α over the ordinals, where f^α is defined by transfinite induction: f^(α+1) = f(f^α), and f^γ for a limit ordinal γ is the least upper bound of the f^β for all ordinals β less than γ. The dual theorem holds for the greatest fixpoint.
For example, in theoretical computer science, least fixed points of monotonic functions are used to define program semantics (as in denotational semantics). Often a more specialized version of the theorem is used, where L is assumed to be the lattice of all subsets of a certain set ordered by subset inclusion. This reflects the fact that in many applications only such lattices are considered. One then usually is looking for the smallest set that has the property of being a fixed point of the function f. Abstract interpretation makes ample use of the Knaster–Tarski theorem and of the formulas giving the least and greatest fixpoints.
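For the powerset-lattice case, the "iterate from the least element" recipe is short to code. The sketch below (example graph assumed) computes the set of nodes reachable from node 1 as the least fixpoint of a monotone map on subsets ordered by inclusion:

```python
def least_fixpoint(f, bottom):
    """Kleene iteration: bottom, f(bottom), f(f(bottom)), ... until stationary.
    Terminates whenever the ascending chain cannot grow forever (e.g. the
    lattice is finite); f must be monotone for the result to be the least
    fixpoint."""
    x = bottom
    while (y := f(x)) != x:
        x = y
    return x

# Example: subsets of graph nodes form a complete lattice under inclusion.
edges = {1: [2], 2: [3], 3: [1], 4: [5], 5: []}   # assumed sample graph

def step(s):
    """Monotone map whose least fixpoint is the set reachable from node 1."""
    return frozenset({1} | {w for v in s for w in edges[v]})

print(least_fixpoint(step, frozenset()))  # the reachable set {1, 2, 3}
```

The iteration runs through the ascending chain ∅, {1}, {1, 2}, {1, 2, 3} and stops at the first fixpoint, which by monotonicity is the least one.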
The Knaster–Tarski theorem can be used to give a simple proof of the Cantor–Bernstein–Schroeder theorem and it is also used in establishing the Banach–Tarski paradox.
Weaker versions of the theorem
Weaker versions of the Knaster–Tarski theorem can be formulated for ordered sets, but involve more complicated assumptions. For example:
Let L be a partially ordered set with a least element (bottom) and let f : L → L be a monotonic function. Further, suppose there exists u in L such that f(u) ≤ u and that any chain in the subset { x ∈ L | x ≤ f(x), x ≤ u } has a supremum. Then f admits a least fixed point.
This can be applied to obtain various theorems on invariant sets, e.g. the Ok's theorem:
For the monotone map F : P(X ) → P(X ) on the family of (closed) nonempty subsets of X, the following are equivalent: (o) F admits A in P(X ) such that A ⊆ F(A), (i) F admits an invariant set A in P(X ), i.e. F(A) = A, (ii) F admits a maximal invariant set A, (iii) F admits the greatest invariant set A.
In particular, using the Knaster-Tarski principle one can develop the theory of global attractors for noncontractive discontinuous (multivalued) iterated function systems. For weakly contractive iterated function systems the Kantorovich theorem (known also as Tarski-Kantorovich fixpoint principle) suffices.
Other applications of fixed-point principles for ordered sets come from the theory of differential, integral and operator equations.
Proof
Let us restate the theorem.
For a complete lattice (L, ≤) and a monotone function f : L → L on L, the set of all fixpoints of f is also a complete lattice (P, ≤), with:
⋁P = ⋁{ x ∈ L | x ≤ f(x) } as the greatest fixpoint of f
⋀P = ⋀{ x ∈ L | f(x) ≤ x } as the least fixpoint of f.
Proof. We begin by showing that P has both a least element and a greatest element. Let D = { x ∈ L | x ≤ f(x) } and x ∈ D (we know that at least 0L belongs to D). Then because f is monotone we have f(x) ≤ f(f(x)), that is f(x) ∈ D.
Now let u = ⋁D (u exists because D ⊆ L and L is a complete lattice). Then for all x ∈ D it is true that x ≤ u and f(x) ≤ f(u), so x ≤ f(x) ≤ f(u). Therefore, f(u) is an upper bound of D, but u is the least upper bound, so u ≤ f(u), i.e. u ∈ D. Then f(u) ∈ D (because f(u) ≤ f(f(u))) and so f(u) ≤ u, from which follows f(u) = u. Because every fixpoint is in D we have that u is the greatest fixpoint of f.
The function f is monotone on the dual (complete) lattice (L, ≥). As we have just proved, its greatest fixpoint there exists. It is the least fixpoint of L, so P has least and greatest elements; more generally, every monotone function on a complete lattice has a least fixpoint and a greatest fixpoint.
For a, b in L we write [a, b] for the closed interval with bounds a and b: [a, b] = { x ∈ L | a ≤ x ≤ b }. If a ≤ b, then [a, b] is a complete lattice.
It remains to be proven that P is a complete lattice. Let 1L = ⋁L, W ⊆ P and w = ⋁W. We show that f([w, 1L]) ⊆ [w, 1L]. Indeed, for every x ∈ W we have x = f(x) and since w is the least upper bound of W, x ≤ w. In particular x = f(x) ≤ f(w). Then from x ≤ f(w) for every x ∈ W follows that w ≤ f(w), giving f(w) ∈ [w, 1L]; and for any y with w ≤ y ≤ 1L we likewise get w ≤ f(w) ≤ f(y). This allows us to look at f as a function on the complete lattice [w, 1L]. Then it has a least fixpoint there, giving us the least upper bound of W. We have shown that an arbitrary subset of P has a supremum, that is, P is a complete lattice.
Computing a Tarski fixed-point
Chang, Lyuu and Ti present an algorithm for finding a Tarski fixed-point in a totally-ordered lattice, when the order-preserving function is given by a value oracle. Their algorithm requires O(log L) queries, where L is the number of elements in the lattice. In contrast, for a general lattice (given as an oracle), they prove a lower bound of Ω(L) queries.
Deng, Qi and Ye present several algorithms for finding a Tarski fixed-point. They consider two kinds of lattices: componentwise ordering and lexicographic ordering. They consider two kinds of input for the function f: a value oracle, or a polynomial function. The runtime of their algorithms depends on the number of dimensions d and the number of elements Ni in each dimension i.
The algorithms are based on binary search. On the other hand, determining whether a given fixed point is unique is computationally hard:
For d = 2, for the componentwise lattice and a value oracle, a complexity of O(log^2 N) is optimal. But for d > 2, there are faster algorithms:
Fearnley, Palvolgyi and Savani presented an algorithm using only O(log^(2⌈d/3⌉) N) queries. In particular, for d = 3, only O(log^2 N) queries are needed.
Chen and Li presented an algorithm using only O(log^(⌈(d+1)/2⌉) N) queries.
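For the totally ordered (one-dimensional) case, the binary-search idea is easy to sketch. The map below is an assumed example, not one from the cited papers; the invariant f(lo) ≥ lo and f(hi) ≤ hi is exactly what monotonicity preserves at each halving, giving O(log n) oracle queries.

```python
def tarski_fixed_point(f, n):
    """Fixed point of a monotone f : {0,...,n-1} -> {0,...,n-1} via binary search."""
    lo, hi = 0, n - 1  # invariant: f(lo) >= lo and f(hi) <= hi
    while lo < hi:
        mid = (lo + hi) // 2
        v = f(mid)
        if v == mid:
            return mid
        if v > mid:
            lo = mid + 1  # f(mid) >= mid+1, so monotonicity keeps the invariant
        else:
            hi = mid - 1  # symmetric case: f(mid) <= mid-1
    return lo             # invariant forces f(lo) == lo here

# Assumed monotone example map on {0, ..., 99}:
f = lambda x: min(99, x // 2 + 20)
x = tarski_fixed_point(f, 100)
print(x, f(x))  # a fixed point: f(x) == x
```

Knaster–Tarski guarantees the invariant can be established at the outset: the bottom and top of the chain always satisfy f(0) ≥ 0 and f(n−1) ≤ n−1.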
Application in game theory
Tarski's fixed-point theorem has applications to supermodular games. A supermodular game (also called a game of strategic complements) is a game in which the utility function of each player has increasing differences, so the best response of a player is a weakly-increasing function of other players' strategies. For example, consider a game of competition between two firms. Each firm has to decide how much money to spend on research. In general, if one firm spends more on research, the other firm's best response is to spend more on research too. Some common games can be modeled as supermodular games, for example Cournot competition, Bertrand competition and Investment Games.
Because the best-response functions are monotone, Tarski's fixed-point theorem can be used to prove the existence of a pure-strategy Nash equilibrium (PNE) in a supermodular game. Moreover, Topkis showed that the set of PNE of a supermodular game is a complete lattice, so the game has a "smallest" PNE and a "largest" PNE.
Echenique presents an algorithm for finding all PNE in a supermodular game. His algorithm first uses best-response sequences to find the smallest and largest PNE; then, he removes some strategies and repeats, until all PNE are found. His algorithm is exponential in the worst case, but runs fast in practice. Deng, Qi and Ye show that a PNE can be computed efficiently by finding a Tarski fixed-point of an order-preserving mapping associated with the game.
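A best-response iteration of the kind Topkis and Echenique exploit can be sketched on a toy research-investment game; the payoff function and strategy grid below are assumed for illustration, not taken from any cited source. Because best responses are weakly increasing, iterating from the bottom of the strategy lattice climbs monotonically to the smallest pure-strategy Nash equilibrium.

```python
levels = range(11)        # strategy grid 0..10 (assumed)
a, b, c = 4.0, 0.5, 1.0   # b > 0 gives increasing differences (complements)

def payoff(own, other):
    """Assumed quadratic payoff with strategic complements."""
    return own * (a + b * other) - c * own ** 2

def best_response(other):
    # max() keeps the first maximiser, i.e. the lowest best response.
    return max(levels, key=lambda s: payoff(s, other))

x = y = 0                 # start at the bottom of the strategy lattice
while True:
    nx, ny = best_response(y), best_response(x)
    if (nx, ny) == (x, y):
        break
    x, y = nx, ny
print(x, y)               # smallest pure-strategy Nash equilibrium: (2, 2)
```

In this toy game (3, 3) is also an equilibrium, so the equilibrium set really is a nontrivial lattice; the upward iteration stops at its least element, and the mirror-image iteration from the top of the grid would find the greatest.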
See also
Modal μ-calculus
Notes
References
Further reading
External links
J. B. Nation, Notes on lattice theory.
An application to an elementary combinatorics problem: Given a book with 100 pages and 100 lemmas, prove that there is some lemma written on the same page as its index
Order theory
Fixed points (mathematics)
Fixed-point theorems
Theorems in the foundations of mathematics
Articles containing proofs | Knaster–Tarski theorem | [
"Mathematics"
] | 2,092 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical theorems",
"Foundations of mathematics",
"Fixed points (mathematics)",
"Mathematical logic",
"Fixed-point theorems",
"Theorems in topology",
"Topology",
"Mathematical problems",
"Articles containing proofs",
"Order the... |
181,503 | https://en.wikipedia.org/wiki/Culmination | In observational astronomy, culmination is the passage of a celestial object (such as the Sun, the Moon, a planet, a star, constellation or a deep-sky object) across the observer's local meridian. These events are also known as meridian transits, used in timekeeping and navigation, and measured precisely using a transit telescope.
During each day, as a result of the Earth's rotation, every celestial object appears to move along a circular path on the celestial sphere, crossing the observer's meridian at two moments. Except at the geographic poles, any celestial object passing through the meridian has an upper culmination, when it reaches its highest point (the moment when it is nearest to the zenith), and nearly twelve hours later, is followed by a lower culmination, when it reaches its lowest point (nearest to the nadir). The time of culmination (when the object culminates) is often used to mean upper culmination.
An object's altitude (A) in degrees at its upper culmination is equal to 90 minus the observer's latitude (L) plus the object's declination (δ):
A = 90° − L + δ.
Cases
Three cases are dependent on the observer's latitude (L) and the declination (δ) of the celestial object:
The object is above the horizon even at its lower culmination, i.e. it is circumpolar: in absolute value the declination is more than the colatitude, in the corresponding hemisphere
The object is below the horizon even at its upper culmination, i.e. it never rises: in absolute value the declination is more than the colatitude, in the opposite hemisphere
The upper culmination is above and the lower below the horizon, so the body is observed to rise and set daily, in the remaining cases: in absolute value the declination is less than the colatitude
The third case applies for objects in a part of the full sky equal to the cosine of the latitude (at the equator it applies for all objects, because the sky turns around the horizontal north–south line; at the poles it applies for none, because the sky turns around the vertical line). The first and second case each apply for half of the remaining sky.
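With the usual sign conventions (northern latitudes and declinations positive, southern negative, both assumed here), the three cases reduce to comparing the absolute declination with the colatitude:

```python
def meridian_behaviour(latitude_deg, declination_deg):
    """Classify an object per the three cases (refraction ignored;
    latitude and declination signed, north positive)."""
    colatitude = 90.0 - abs(latitude_deg)
    same_hemisphere = (latitude_deg >= 0) == (declination_deg >= 0)
    if abs(declination_deg) > colatitude:
        return "circumpolar" if same_hemisphere else "never rises"
    return "rises and sets"

print(meridian_behaviour(52, 89))    # circumpolar (Polaris-like, from 52 N)
print(meridian_behaviour(52, -60))   # never rises
print(meridian_behaviour(52, 20))    # rises and sets
```

At the equator the colatitude is 90°, so the comparison never triggers and every object rises and sets, matching the remark above.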
Period of time
The period between a culmination and the next is a sidereal day, which is exactly 24 sidereal hours, about 4 minutes shorter than 24 common solar hours, while the period between an upper culmination and a lower one is 12 sidereal hours. The period between successive day-to-day (rotational) culminations is affected mainly by Earth's orbital motion, which produces the difference in length between the solar day (the interval between culminations of the Sun) and the sidereal day (the interval between culminations of any reference star), or the slightly more precise, precession-unaffected stellar day. Culminations therefore occur at a slightly different time each solar day; because a year contains about 366.3 sidereal days, one more than its roughly 365.3 solar days, a culmination recurs at the same time of the solar day only once a year, while it recurs every sidereal day. The remaining small changes in the culmination time from sidereal year to sidereal year are caused mainly by nutation (with an 18.6-year cycle) and, on a longer time scale, by the axial precession of Earth (with a 26,000-year cycle), while apsidal precession and other mechanisms have a much smaller impact on sidereal observation, although they affect Earth's climate significantly more, through the Milankovitch cycles. At such timescales the stars themselves also change position, particularly those which, as viewed from the Solar System, have a high proper motion.
Stellar parallax produces a similar apparent motion, but from one sidereal day to the next its effect is only slight: the apparent position completes a cycle once per orbit and returns close to where it started, with a small lasting change due to the precessions. This phenomenon results from Earth's changing position along its orbital path.
The Sun
From the tropics and middle latitudes, the Sun is visible in the sky at its upper culmination (at solar noon) and invisible (below the horizon) at its lower culmination (at solar midnight). When viewed from the region within either polar circle around the winter solstice of that hemisphere (the December solstice in the Arctic and the June solstice in the Antarctic), the Sun is below the horizon at both of its culminations.
Earth's subsolar point occurs at the point where the upper culmination of the Sun reaches the point's zenith. At this point, which moves around the tropics throughout the year, the Sun is perceived to be directly overhead.
We apply the altitude equation h = 90° − |φ − δ| at upper culmination and h = |φ + δ| − 90° at lower culmination, where φ is the observer's latitude and δ the body's declination, in the following examples.
Supposing that the declination of the Sun is +20° when it crosses the local meridian, then the complementary angle of 70° (from the Sun to the pole) is added to and subtracted from the observer's latitude to find the solar altitudes at upper and lower culminations, respectively.
From 52° north, the upper culmination is at 58° above the horizon due south, while the lower is at 18° below the horizon due north. This is calculated as 52° + 70° = 122° (the supplementary angle being 58°) for the upper, and 52° − 70° = −18° for the lower.
From 80° north, the upper culmination is at 30° above the horizon due south, while the lower is at 10° above the horizon (midnight sun) due north.
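The two worked examples can be reproduced with a short function. This is an illustrative sketch (the function name is mine; latitude and declination are measured in degrees, positive northward):

```python
def culmination_altitudes(latitude, declination):
    """Return (upper, lower) culmination altitudes in degrees.

    Upper culmination: 90° minus the angular distance between the
    body and the observer's zenith.  Lower culmination: the body is
    on the far side of the celestial pole, below it by the same
    angular distance.
    """
    upper = 90 - abs(latitude - declination)
    lower = abs(latitude + declination) - 90
    return upper, lower

print(culmination_altitudes(52, 20))  # (58, -18): rises and sets
print(culmination_altitudes(80, 20))  # (30, 10): midnight sun
```

Negative altitudes mean the culmination occurs below the horizon, matching the −18° figure from 52° north.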
Circumpolar stars
From most of the Northern Hemisphere, Polaris (the North Star) and the other stars of the constellation Ursa Minor circle counterclockwise around the north celestial pole and remain visible at both culminations (as long as the sky is clear and dark enough). In the Southern Hemisphere there is no bright pole star, but the constellation Octans circles clockwise around the south celestial pole and remains visible at both culminations.
Any astronomical objects that always remain above the local horizon, as viewed from the observer's latitude, are described as circumpolar.
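The three cases described earlier — circumpolar, never visible, and rising-and-setting — depend only on whether the absolute declination reaches the observer's colatitude, and on which hemisphere the object favors. A sketch in Python (names are mine; degrees, positive north; grazing boundary cases are lumped with the adjacent category):

```python
def sky_visibility(latitude, declination):
    """Classify a body's daily motion for an observer at the given latitude."""
    colatitude = 90 - abs(latitude)
    if abs(declination) >= colatitude:
        # Both culminations lie on the same side of the horizon.
        if declination * latitude > 0:
            return "circumpolar"      # always above the horizon
        return "never rises"          # always below the horizon
    return "rises and sets"

print(sky_visibility(52, 89))   # circumpolar (e.g. Polaris from 52° N)
print(sky_visibility(52, -60))  # never rises
print(sky_visibility(52, 20))   # rises and sets
```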
See also
Celestial sphere
Meridian (astronomy)
Nadir
Satellite pass
Zenith
References
Celestial mechanics
Spherical astronomy
Period 4 element
A period 4 element is one of the chemical elements in the fourth row (or period) of the periodic table of the chemical elements. The periodic table is laid out in rows to illustrate recurring (periodic) trends in the chemical behaviour of the elements as their atomic number increases: a new row is begun when chemical behaviour begins to repeat, meaning that elements with similar behaviour fall into the same vertical columns. The fourth period contains 18 elements beginning with potassium and ending with krypton – one element for each of the eighteen groups. It sees the first appearance of the d-block (which includes transition metals) in the table.
Properties
All 4th-period elements are stable, and many are extremely common in the Earth's crust and/or core; it is the last period with no unstable elements. Many transition metals in the period are very strong, and therefore common in industry, especially iron. Some are toxic, with all known vanadium compounds toxic, arsenic one of the most well-known poisons, and bromine a toxic liquid. Conversely, many elements are essential to human survival, such as calcium, the main component in bones.
Atomic structure
As atomic number increases, the Aufbau principle has the elements of the period fill the 4s, 3d, and 4p subshells, in that order. However, there are exceptions, such as chromium. The first twelve elements—K, Ca, and the transition metals—have from 1 to 12 valence electrons respectively, which are placed in 4s and 3d.
Twelve electrons beyond the electron configuration of argon give the configuration of zinc, namely 3d10 4s2. After this element, the filled 3d subshell effectively withdraws from chemistry, and the subsequent trend looks much like the trends in periods 2 and 3. The p-block elements of period 4 have their valence shell composed of the 4s and 4p subshells of the fourth (n = 4) shell and obey the octet rule.
For quantum chemistry, this period marks the transition from the simplified electron-shell paradigm to the study of many differently shaped subshells, whose relative energy levels are governed by the interplay of various physical effects. The period's s-block metals place their differentiating electrons in 4s despite having vacancies among nominally lower states – a phenomenon unseen in lighter elements. Conversely, the six elements from gallium to krypton are the heaviest in which all electron shells below the valence shell are completely filled. This is no longer possible in later periods because of the f-subshells, beginning with 4f.
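The filling order described above follows from the Madelung (n + l) rule: subshells fill in order of increasing n + l, with ties broken by lower n. A small illustrative sketch in Python (the function is mine, and it deliberately ignores the chromium and copper exceptions; note that the output lists subshells in filling order, so 4s appears before 3d):

```python
def madelung_config(z):
    """Electron configuration of a neutral atom with z electrons,
    filled strictly by the n + l (Madelung) rule."""
    subshells = [(n, l) for n in range(1, 6) for l in range(n)]
    subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))  # n + l, then n
    spdf = "spdf"
    config, remaining = [], z
    for n, l in subshells:
        if remaining <= 0:
            break
        electrons = min(4 * l + 2, remaining)  # subshell capacity 2(2l + 1)
        config.append(f"{n}{spdf[l]}{electrons}")
        remaining -= electrons
    return " ".join(config)

print(madelung_config(26))  # 1s2 2s2 2p6 3s2 3p6 4s2 3d6 (iron)
print(madelung_config(19))  # 1s2 2s2 2p6 3s2 3p6 4s1 (potassium)
```

For chromium (z = 24) and copper (z = 29) the rule predicts 3d4 4s2 and 3d9 4s2, whereas the observed ground states are 3d5 4s1 and 3d10 4s1, as marked in the table below of elements that already exists in this section.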
List of elements
{| class="wikitable sortable"
! colspan="3" | Chemical element
! rowspan="2" | Block
! rowspan="2" | Electron configuration
|-
! Atomic number
! Symbol
! Name
|-
|| 19 || K || Potassium || s-block || [Ar] 4s1
|-
|| 20 || Ca || Calcium || s-block || [Ar] 4s2
|-
|| 21 || Sc || Scandium || d-block || [Ar] 3d1 4s2
|-
|| 22 || Ti || Titanium || d-block || [Ar] 3d2 4s2
|-
|| 23 || V || Vanadium || d-block || [Ar] 3d3 4s2
|-
|| 24 || Cr || Chromium || d-block || [Ar] 3d5 4s1 (*)
|-
|| 25 || Mn || Manganese || d-block || [Ar] 3d5 4s2
|-
|| 26 || Fe || Iron || d-block || [Ar] 3d6 4s2
|-
|| 27 || Co || Cobalt || d-block || [Ar] 3d7 4s2
|-
|| 28 || Ni || Nickel || d-block || [Ar] 3d8 4s2
|-
|| 29 || Cu || Copper || d-block || [Ar] 3d10 4s1 (*)
|-
|| 30 || Zn || Zinc || d-block || [Ar] 3d10 4s2
|-
|| 31 || Ga || Gallium || p-block || [Ar] 3d10 4s2 4p1
|-
|| 32 || Ge || Germanium || p-block || [Ar] 3d10 4s2 4p2
|-
|| 33 || As || Arsenic || p-block || [Ar] 3d10 4s2 4p3
|-
|| 34 || Se || Selenium || p-block || [Ar] 3d10 4s2 4p4
|-
|| 35 || Br || Bromine || p-block || [Ar] 3d10 4s2 4p5
|-
|| 36 || Kr || Krypton || p-block || [Ar] 3d10 4s2 4p6
|}
(*) Exception to the Madelung rule
s-block elements
Potassium
Potassium (K) is an alkali metal, below sodium and above rubidium, and the first element of period 4. One of the most reactive chemical elements, it is usually found only in compounds. It is a silvery metal that tarnishes rapidly when exposed to the oxygen in air, which oxidizes it. It is soft enough to be cut with a knife and is the second least dense metal. Potassium has a relatively low melting point; it will melt under a small open flame. It is also less dense than water and can, in principle, float (although it will react with any water it is exposed to).
Calcium
Calcium (Ca) is the second element in the period. An alkaline earth metal, native calcium is almost never found in nature because it reacts with water. It has one of the most widely known biological roles in all animals and some plants, making up structural elements such as bones and teeth. It also has roles within cells, such as signalling for cellular processes. It is regarded as the most abundant mineral in the human body.
d-block elements
Scandium
Scandium (Sc) is the third element in the period, and is the first transition metal in the periodic table. Scandium is quite common in nature, but difficult to isolate because its chemistry closely mirrors that of the other rare-earth elements. Scandium has very few commercial applications, the major exception being aluminium alloys.
Titanium
Titanium (Ti) is an element in group 4. Titanium is both one of the least dense metals and one of the strongest and most corrosion-resistant. As such, it has many applications, especially in alloys with other elements, such as iron. It is commonly used in airplanes, golf clubs, and other objects that must be strong, but lightweight.
Vanadium
Vanadium (V) is an element in group 5. Vanadium is never found in pure form in nature, but is commonly found in compounds. Vanadium is similar to titanium in many ways, such as being very corrosion-resistant; however, unlike titanium, it oxidizes in air even at room temperature. All vanadium compounds have at least some level of toxicity, with some of them being extremely toxic.
Chromium
Chromium (Cr) is an element in group 6. Chromium is, like titanium and vanadium before it, extremely resistant to corrosion, and is indeed one of the main components of stainless steel. Chromium also has many colorful compounds, and as such is very commonly used in pigments, such as chrome green.
Manganese
Manganese (Mn) is an element in group 7. Manganese is often found in combination with iron. Manganese, like chromium before it, is an important component in stainless steel, preventing the iron from rusting. Manganese is also often used in pigments, again like chromium. Manganese is also poisonous; if enough is inhaled, it can cause irreversible neurological damage.
Iron
Iron (Fe) is an element in group 8. Iron is the most abundant on Earth among the elements of the period, and probably the most well known of them. It is the principal component of steel. Iron-56 has the lowest mass per nucleon of any isotope of any element, making it the heaviest element that can be produced by fusion in supergiant stars. Iron also has roles in the human body; hemoglobin is partly iron.
Cobalt
Cobalt (Co) is an element in group 9. Cobalt is commonly used in pigments, as many compounds of cobalt are blue in color. Cobalt is also a core component of many magnetic and high-strength alloys. The only stable isotope, cobalt-59, is an important component of vitamin B-12, while cobalt-60 is a component of nuclear fallout and can be dangerous in large enough quantities due to its radioactivity.
Nickel
Nickel (Ni) is an element in group 10. Nickel is rare in the Earth's crust, mainly because it reacts with the oxygen in the air; most of the nickel on Earth comes from nickel–iron meteorites. However, nickel is very abundant in the Earth's core, where along with iron it is one of the two main components. Nickel is an important component of stainless steel and of many superalloys.
Copper
Copper (Cu) is an element in group 11. Copper is one of the few metals that is not white or gray in color, the only others being gold, osmium, and caesium. Copper has been used by humans for thousands of years to provide a reddish tint to many objects, and is an essential nutrient to humans, although too much is poisonous. Copper is also commonly used as a wood preservative and in fungicides.
Zinc
Zinc (Zn) is an element in group 12. Zinc is one of the main components of brass, being used since the 10th century BCE. Zinc is also incredibly important to humans; almost 2 billion people in the world suffer from zinc deficiency. However, too much zinc can cause copper deficiency. Zinc is often used in batteries, aptly named carbon-zinc batteries, and is important in many platings, as zinc is very corrosion resistant.
p-block elements
Gallium
Gallium (Ga) is an element in group 13, under aluminium. Gallium is noteworthy because it has a melting point of about 303 kelvins, right around room temperature: it is solid on a typical spring day, but liquid on a hot summer day. Gallium is an important component of the alloy galinstan, along with indium and tin. Gallium can also be found in semiconductors.
Germanium
Germanium (Ge) is an element in group 14. Germanium, like silicon above it, is an important semiconductor and is commonly used in diodes and transistors, often in combination with arsenic. Germanium is fairly rare on Earth, leading to its comparatively late discovery. Germanium, in compounds, can sometimes irritate the eyes, skin, or lungs.
Arsenic
Arsenic (As) is an element in group 15, the pnictogens. Arsenic, as mentioned above, is often used in semiconductors in alloys with germanium. Arsenic, in pure form and some alloys, is incredibly poisonous to all multicellular life, and as such is a common component in pesticides. Arsenic was also used in some pigments before its toxicity was discovered.
Selenium
Selenium (Se) is an element in group 16, the chalcogens. Selenium is the first nonmetal in period 4, with properties similar to sulfur. It is quite rare in pure form in nature, mostly being found in minerals such as pyrite, and even then only in small amounts. Selenium is necessary for humans in trace amounts, but is toxic in larger quantities. Selenium is red in its amorphous form but metallic gray in its crystalline form.
Bromine
Bromine (Br) is an element in group 17, the halogens. It does not exist in elemental form in nature. Bromine is only barely liquid at room temperature, boiling at about 330 kelvins. Bromine is quite toxic and corrosive, but bromide ions, which are relatively inert, can be found in halite, or table salt. Bromine is often used as a fire retardant because many of its compounds can be made to release free bromine atoms.
Krypton
Krypton (Kr) is a noble gas, placed under argon and over xenon. Being a noble gas, krypton rarely reacts with itself or other elements; although compounds have been detected, they are all unstable and decay rapidly. Because of this inertness, and because of its many spectral lines, krypton is often used in fluorescent lights and other lighting.
Biological role
Many period 4 elements find roles in controlling protein function as secondary messengers, structural components, or enzyme cofactors. A gradient of potassium is used by cells to maintain a membrane potential which enables neurotransmitter firing and facilitated diffusion among other processes. Calcium is a common signaling molecule for proteins such as calmodulin and plays a critical role in triggering skeletal muscle contraction in vertebrates. Selenium is a component of the noncanonical amino acid selenocysteine; proteins which contain selenocysteine are known as selenoproteins. Manganese enzymes are utilized by both eukaryotes and prokaryotes, and may play a role in the virulence of some pathogenic bacteria. Vanabins, also known as vanadium-associated proteins, are found in the blood cells of some species of sea squirts. The role of these proteins is disputed, although there is some speculation that they function as oxygen carriers. Zinc ions are used to stabilize the zinc finger motif of many DNA-binding proteins.
Period 4 elements can also be found complexed with organic small molecules to form cofactors. The most famous example of this is heme: an iron-containing porphyrin compound responsible for the oxygen-carrying function of myoglobin and hemoglobin as well as the catalytic activity of cytochrome enzymes. Hemocyanin replaces hemoglobin as the oxygen carrier of choice in the blood of certain invertebrates, including horseshoe crabs, tarantulas, and octopuses. Vitamin B12 represents one of the few biochemical applications for cobalt.
References
Periods (periodic table)
Period 6 element
A period 6 element is one of the chemical elements in the sixth row (or period) of the periodic table of the chemical elements, including the lanthanides. The periodic table is laid out in rows to illustrate recurring (periodic) trends in the chemical behaviour of the elements as their atomic number increases: a new row is begun when chemical behaviour begins to repeat, meaning that elements with similar behaviour fall into the same vertical columns. The sixth period contains 32 elements, tied for the most with period 7, beginning with caesium and ending with radon. Lead is currently the last stable element; all subsequent elements are radioactive. For bismuth, however, its only primordial isotope, 209Bi, has a half-life of more than 10^19 years, over a billion times longer than the current age of the universe. As a rule, period 6 elements fill their 6s shells first, then their 4f, 5d, and 6p shells, in that order; however, there are exceptions, such as gold.
Properties
This period contains the lanthanides, also known as the rare earths. Many lanthanides are known for their magnetic properties, such as neodymium. Many period 6 transition metals are very valuable, such as gold; however, many of the period's other metals, such as thallium, are incredibly toxic. Period 6 contains the last stable element, lead; all subsequent elements in the periodic table are radioactive. After bismuth, which has a half-life of more than 10^19 years, polonium, astatine, and radon are some of the shortest-lived and rarest elements known; less than a gram of astatine is estimated to exist on Earth at any given time.
Atomic characteristics
{| class="wikitable sortable"
! colspan="3" | Chemical element
! Block
! Electron configuration
|-
|| 55 || Cs || Caesium || s-block || [Xe] 6s1
|-
|| 56 || Ba || Barium || s-block || [Xe] 6s2
|-
|| 57 || La || Lanthanum || f-block || [Xe] 5d1 6s2
|-
|| 58 || Ce || Cerium || f-block || [Xe] 4f1 5d1 6s2
|-
|| 59 || Pr || Praseodymium || f-block || [Xe] 4f3 6s2
|-
|| 60 || Nd || Neodymium || f-block || [Xe] 4f4 6s2
|-
|| 61 || Pm || Promethium || f-block || [Xe] 4f5 6s2
|-
|| 62 || Sm || Samarium || f-block || [Xe] 4f6 6s2
|-
|| 63 || Eu || Europium || f-block || [Xe] 4f7 6s2
|-
|| 64 || Gd || Gadolinium || f-block || [Xe] 4f7 5d1 6s2
|-
|| 65 || Tb || Terbium || f-block || [Xe] 4f9 6s2
|-
|| 66 || Dy || Dysprosium || f-block || [Xe] 4f10 6s2
|-
|| 67 || Ho || Holmium || f-block || [Xe] 4f11 6s2
|-
|| 68 || Er || Erbium || f-block || [Xe] 4f12 6s2
|-
|| 69 || Tm || Thulium || f-block || [Xe] 4f13 6s2
|-
|| 70 || Yb || Ytterbium || f-block || [Xe] 4f14 6s2
|-
|| 71 || Lu || Lutetium || d-block || [Xe] 4f14 5d1 6s2
|-
|| 72 || Hf || Hafnium || d-block || [Xe] 4f14 5d2 6s2
|-
|| 73 || Ta || Tantalum || d-block || [Xe] 4f14 5d3 6s2
|-
|| 74 || W || Tungsten || d-block || [Xe] 4f14 5d4 6s2
|-
|| 75 || Re || Rhenium || d-block || [Xe] 4f14 5d5 6s2
|-
|| 76 || Os || Osmium || d-block || [Xe] 4f14 5d6 6s2
|-
|| 77 || Ir || Iridium || d-block || [Xe] 4f14 5d7 6s2
|-
|| 78 || Pt || Platinum || d-block || [Xe] 4f14 5d9 6s1 (*)
|-
|| 79 || Au || Gold || d-block || [Xe] 4f14 5d10 6s1 (*)
|-
|| 80 || Hg || Mercury || d-block || [Xe] 4f14 5d10 6s2
|-
|| 81 || Tl || Thallium || p-block || [Xe] 4f14 5d10 6s2 6p1
|-
|| 82 || Pb || Lead || p-block || [Xe] 4f14 5d10 6s2 6p2
|-
|| 83 || Bi || Bismuth || p-block || [Xe] 4f14 5d10 6s2 6p3
|-
|| 84 || Po || Polonium || p-block || [Xe] 4f14 5d10 6s2 6p4
|-
|| 85 || At || Astatine || p-block || [Xe] 4f14 5d10 6s2 6p5
|-
|| 86 || Rn || Radon || p-block || [Xe] 4f14 5d10 6s2 6p6
|}
In many periodic tables, the f-block is erroneously shifted one element to the right, so that lanthanum and actinium become d-block elements, and Ce–Lu and Th–Lr form the f-block, tearing the d-block into two very uneven portions. This is a holdover from early erroneous measurements of electron configurations. Lev Landau and Evgeny Lifshitz pointed out in 1948 that lutetium is not an f-block element, and since then physical, chemical, and electronic evidence has overwhelmingly supported that the f-block contains the elements La–Yb and Ac–No, as shown here and as supported by International Union of Pure and Applied Chemistry reports dating from 1988 and 2021.
(*) An exception to the Madelung rule.
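The 6s → 4f → 5d → 6p filling order stated in the introduction falls out of the same n + l (Madelung) rule, with ties broken by lower n. A quick illustrative check in Python:

```python
# Sort period 6 valence subshells by (n + l, n): 6s has n + l = 6,
# while 4f, 5d, and 6p all have n + l = 7 and order by increasing n.
subshells = {"6s": (6, 0), "4f": (4, 3), "5d": (5, 2), "6p": (6, 1)}
order = sorted(subshells, key=lambda s: (sum(subshells[s]), subshells[s][0]))
print(order)  # ['6s', '4f', '5d', '6p']
```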
s-block elements
Caesium
Caesium or cesium is the chemical element with the symbol Cs and atomic number 55. It is a soft, silvery-gold alkali metal with a melting point of 28 °C (82 °F), which makes it one of only five elemental metals that are liquid at (or near) room temperature. Caesium is an alkali metal and has physical and chemical properties similar to those of rubidium and potassium. The metal is extremely reactive and pyrophoric, reacting with water even at −116 °C (−177 °F). It is the least electronegative element having a stable isotope, caesium-133. Caesium is mined mostly from pollucite, while the radioisotopes, especially caesium-137, a fission product, are extracted from waste produced by nuclear reactors.
Two German chemists, Robert Bunsen and Gustav Kirchhoff, discovered caesium in 1860 by the newly developed method of flame spectroscopy. The first small-scale applications for caesium were as a "getter" in vacuum tubes and in photoelectric cells. In 1967, a specific frequency from the emission spectrum of caesium-133 was chosen to be used in the definition of the second by the International System of Units. Since then, caesium has been widely used in atomic clocks.
Since the 1990s, the largest application of the element has been as caesium formate for drilling fluids. It has a range of applications in the production of electricity, in electronics, and in chemistry. The radioactive isotope caesium-137 has a half-life of about 30 years and is used in medical applications, industrial gauges, and hydrology. Although the element is only mildly toxic, it is a hazardous material as a metal and its radioisotopes present a high health risk in case of radioactivity releases.
Barium
Barium is a chemical element with the symbol Ba and atomic number 56. It is the fifth element in Group 2, a soft silvery metallic alkaline earth metal. Barium is never found in nature in its pure form due to its reactivity with air. Its oxide is historically known as baryta, but it reacts with water and carbon dioxide and is not found as a mineral. The most common naturally occurring minerals are the very insoluble barium sulfate, BaSO4 (barite), and barium carbonate, BaCO3 (witherite). Barium's name originates from Greek barys (βαρύς), meaning "heavy", describing the high density of some common barium-containing ores.
Barium has few industrial applications, but the metal has been historically used to scavenge air in vacuum tubes. Barium compounds impart a green color to flames and have been used in fireworks. Barium sulfate is used for its density, insolubility, and X-ray opacity. It is used as an insoluble heavy additive to oil well drilling mud, and in purer form, as an X-ray radiocontrast agent for imaging the human gastrointestinal tract. Soluble barium compounds are poisonous due to release of the soluble barium ion, and have been used as rodenticides. New uses for barium continue to be sought. It is a component of some "high temperature" YBCO superconductors and electroceramics.
f-block elements (lanthanides)
The lanthanide or lanthanoid (IUPAC nomenclature) series comprises the fifteen metallic chemical elements with atomic numbers 57 through 71, from lanthanum through lutetium. These fifteen elements, along with the chemically similar elements scandium and yttrium, are often collectively known as the rare-earth elements.
The informal chemical symbol Ln is used in general discussions of lanthanide chemistry. All but one of the lanthanides are f-block elements, corresponding to the filling of the 4f electron shell; lutetium, a d-block element, is also generally considered to be a lanthanide due to its chemical similarities with the other fourteen. All lanthanide elements form trivalent cations, Ln3+, whose chemistry is largely determined by the ionic radius, which decreases steadily from lanthanum to lutetium.
The lanthanide elements are the group of elements with atomic number increasing from 57 (lanthanum) to 71 (lutetium). They are termed lanthanide because the lighter elements in the series are chemically similar to lanthanum. Strictly speaking, both lanthanum and lutetium have been labeled as group 3 elements, because they both have a single valence electron in the d shell. However, both elements are often included in any general discussion of the chemistry of the lanthanide elements.
In presentations of the periodic table, the lanthanides and the actinides are customarily shown as two additional rows below the main body of the table, with placeholders or else a selected single element of each series (either lanthanum or lutetium, and either actinium or lawrencium, respectively) shown in a single cell of the main table, between barium and hafnium, and radium and rutherfordium, respectively. This convention is entirely a matter of aesthetics and formatting practicality; a rarely used wide-formatted periodic table inserts the lanthanide and actinide series in their proper places, as parts of the table's sixth and seventh rows (periods).
d-block elements
Lutetium
Lutetium is a chemical element with the symbol Lu and atomic number 71. It is the last element in the lanthanide series, which, along with the lanthanide contraction, explains several important properties of lutetium, such as its having the highest hardness and density among the lanthanides. Unlike the other lanthanides, which lie in the f-block of the periodic table, this element lies in the d-block; however, lanthanum is sometimes placed in the d-block lanthanide position instead. Chemically, lutetium is a typical lanthanide: its only common oxidation state is +3, seen in its oxide, halides, and other compounds. In aqueous solution, like compounds of other late lanthanides, soluble lutetium compounds form a complex with nine water molecules.
Lutetium was independently discovered in 1907 by French scientist Georges Urbain, Austrian mineralogist Baron Carl Auer von Welsbach, and American chemist Charles James. All of these men found lutetium as an impurity in the mineral ytterbia, which was previously thought to consist entirely of ytterbium. The dispute on the priority of the discovery occurred shortly after, with Urbain and von Welsbach accusing each other of publishing results influenced by the published research of the other; the naming honor went to Urbain as he published his results earlier. He chose the name lutecium for the new element but in 1949 the spelling of element 71 was changed to lutetium. In 1909, the priority was finally granted to Urbain and his names were adopted as official ones; however, the name cassiopeium (or later cassiopium) for element 71 proposed by von Welsbach was used by many German scientists until the 1950s. Like other lanthanides, lutetium is one of the elements that traditionally were included in the classification "rare earths."
Lutetium is rare and expensive; consequently, it has few specific uses. For example, the radioactive isotope lutetium-176 is used in nuclear technology to determine the age of meteorites. Lutetium usually occurs in association with the element yttrium and is sometimes used in metal alloys and as a catalyst in various chemical reactions. 177Lu-DOTA-TATE is used for radionuclide therapy (see Nuclear medicine) on neuroendocrine tumours.
Hafnium
Hafnium is a chemical element with the symbol Hf and atomic number 72. A lustrous, silvery gray, tetravalent transition metal, hafnium chemically resembles zirconium and is found in zirconium minerals. Its existence was predicted by Dmitri Mendeleev in 1869. Hafnium was the penultimate element with stable isotopes to be discovered (rhenium was identified two years later). Hafnium is named after Hafnia, the Latin name for Copenhagen, where it was discovered.
Hafnium is used in filaments and electrodes. Some semiconductor fabrication processes use its oxide for integrated circuits at 45 nm and smaller feature lengths. Some superalloys used for special applications contain hafnium in combination with niobium, titanium, or tungsten.
Hafnium's large neutron capture cross-section makes it a good material for neutron absorption in control rods in nuclear power plants, but at the same time requires that it be removed from the neutron-transparent corrosion-resistant zirconium alloys used in nuclear reactors.
Tantalum
Tantalum is a chemical element with the symbol Ta and atomic number 73. Previously known as tantalium, the name comes from Tantalus, a character from Greek mythology. Tantalum is a rare, hard, blue-gray, lustrous transition metal that is highly corrosion-resistant. It is part of the refractory metals group, which are widely used as minor components in alloys. The chemical inertness of tantalum makes it a valuable substance for laboratory equipment and a substitute for platinum, but its main use today is in tantalum capacitors in electronic equipment such as mobile phones, DVD players, video game systems, and computers.
Tantalum, always together with the chemically similar niobium, occurs in the minerals tantalite, columbite and coltan (a mix of columbite and tantalite).
Tungsten
Tungsten, also known as wolfram, is a chemical element with the chemical symbol W and atomic number 74. The name tungsten comes from the Swedish tung sten, directly translatable as "heavy stone"; in Swedish the element itself is called volfram, to distinguish it from scheelite, which is alternatively named tungsten in Swedish.
A hard, rare metal under standard conditions when uncombined, tungsten is found naturally on Earth only in chemical compounds. It was identified as a new element in 1781, and first isolated as a metal in 1783. Its important ores include wolframite and scheelite. The free element is remarkable for its robustness, especially the fact that it has the highest melting point of all the non-alloyed metals and the second highest of all the elements after carbon. Also remarkable is its high density of 19.3 times that of water, comparable to that of uranium and gold, and much higher (about 1.7 times) than that of lead. Tungsten with minor amounts of impurities is often brittle and hard, making it difficult to work. However, very pure tungsten, though still hard, is more ductile, and can be cut with a hard-steel hacksaw.
The unalloyed elemental form is used mainly in electrical applications. Tungsten's many alloys have numerous applications, most notably in incandescent light bulb filaments, X-ray tubes (as both the filament and target), electrodes in TIG welding, and superalloys. Tungsten's hardness and high density give it military applications in penetrating projectiles. Tungsten compounds are most often used industrially as catalysts.
Tungsten is the only metal from the third transition series that is known to occur in biomolecules, where it is used in a few species of bacteria. It is the heaviest element known to be used by any living organism. Tungsten interferes with molybdenum and copper metabolism, and is somewhat toxic to animal life.
Rhenium
Rhenium is a chemical element with the symbol Re and atomic number 75. It is a silvery-white, heavy, third-row transition metal in group 7 of the periodic table. With an estimated average concentration of 1 part per billion (ppb), rhenium is one of the rarest elements in the Earth's crust. The free element has the third-highest melting point and the highest boiling point of any element. Rhenium resembles manganese chemically and is obtained as a by-product of molybdenum and copper ore extraction and refinement. Rhenium shows in its compounds a wide variety of oxidation states ranging from −1 to +7.
Discovered in 1925, rhenium was the last stable element to be discovered. It was named after the river Rhine in Europe.
Nickel-based superalloys containing rhenium are used in the combustion chambers, turbine blades, and exhaust nozzles of jet engines; these alloys contain up to 6% rhenium, making jet engine construction the largest single use for the element, with the chemical industry's catalytic uses next in importance. Because of its low availability relative to demand, rhenium is among the most expensive of metals, with an average price of approximately US$4,575 per kilogram (US$142.30 per troy ounce) as of August 2011; it is also of critical strategic military importance, for its use in high-performance military jet and rocket engines.
Osmium
Osmium is a chemical element with the symbol Os and atomic number 76. It is a hard, brittle, blue-gray or blue-black transition metal in the platinum family and is the densest naturally occurring element, with a density of 22.59 g/cm3 (slightly greater than that of iridium and twice that of lead). It is found in nature as an alloy, mostly in platinum ores; its alloys with platinum, iridium, and other platinum group metals are employed in fountain pen tips, electrical contacts, and other applications where extreme durability and hardness are needed.
Iridium
Iridium is the chemical element with atomic number 77, and is represented by the symbol Ir. A very hard, brittle, silvery-white transition metal of the platinum family, iridium is the second-densest element (after osmium) and is the most corrosion-resistant metal, even at temperatures as high as 2000 °C. Although only certain molten salts and halogens are corrosive to solid iridium, finely divided iridium dust is much more reactive and can be flammable.
Iridium was discovered in 1803 among insoluble impurities in natural platinum. Smithson Tennant, the primary discoverer, named iridium for the goddess Iris, personification of the rainbow, because of the striking and diverse colors of its salts. Iridium is one of the rarest elements in the Earth's crust, with annual production and consumption of only three tonnes. Iridium-191 and iridium-193 are the only two naturally occurring isotopes of iridium, as well as the only stable isotopes; the latter is the more abundant of the two.
The most important iridium compounds in use are the salts and acids it forms with chlorine, though iridium also forms a number of organometallic compounds used in industrial catalysis, and in research. Iridium metal is employed when high corrosion resistance at high temperatures is needed, as in high-end spark plugs, crucibles for recrystallization of semiconductors at high temperatures, and electrodes for the production of chlorine in the chloralkali process. Iridium radioisotopes are used in some radioisotope thermoelectric generators.
Iridium is found in meteorites with an abundance much higher than its average abundance in the Earth's crust. For this reason the unusually high abundance of iridium in the clay layer at the Cretaceous–Paleogene boundary gave rise to the Alvarez hypothesis that the impact of a massive extraterrestrial object caused the extinction of dinosaurs and many other species 66 million years ago. It is thought that the total amount of iridium in the planet Earth is much higher than that observed in crustal rocks, but as with other platinum group metals, the high density and tendency of iridium to bond with iron caused most iridium to descend below the crust when the planet was young and still molten.
Platinum
Platinum is a chemical element with the chemical symbol Pt and an atomic number of 78.
Its name is derived from the Spanish term platina, literally "little silver". It is a dense, malleable, ductile, precious, gray-white transition metal.
Platinum has six naturally occurring isotopes. It is one of the rarest elements in the Earth's crust and has an average abundance of approximately 5 μg/kg. It is the least reactive metal. It occurs in some nickel and copper ores along with some native deposits, mostly in South Africa, which accounts for 80% of the world production.
As a member of the platinum group of elements, as well as of group 10 of the periodic table of elements, platinum is generally non-reactive. It exhibits a remarkable resistance to corrosion, even at high temperatures, and as such is considered a noble metal. As a result, platinum is often found chemically uncombined as native platinum. Because it occurs naturally in the alluvial sands of various rivers, it was first used by pre-Columbian South American natives to produce artifacts. It was referenced in European writings as early as the 16th century, but it was not until Antonio de Ulloa published a report on a new metal of Colombian origin in 1748 that scientists began to investigate it.
Platinum is used in catalytic converters, laboratory equipment, electrical contacts and electrodes, platinum-resistance thermometers, dentistry equipment, and jewelry. Because only a few hundred tonnes are produced annually, it is a scarce material, and is highly valuable. Being a heavy metal, it leads to health issues upon exposure to its salts, but due to its corrosion resistance, it is not as toxic as some metals. Its compounds, most notably cisplatin, are applied in chemotherapy against certain types of cancer.
Gold
Gold is a dense, soft, shiny, malleable and ductile metal. It is a chemical element with the symbol Au and atomic number 79.
Pure gold has a bright yellow color and luster traditionally considered attractive, which it maintains without oxidizing in air or water. Chemically, gold is a transition metal and a group 11 element. It is one of the least reactive chemical elements and is solid under standard conditions. The metal therefore occurs often in free elemental (native) form, as nuggets or grains in rocks, in veins and in alluvial deposits. Less commonly, it occurs in minerals as gold compounds, usually with tellurium.
Gold resists attacks by individual acids, but it can be dissolved by the aqua regia (nitro-hydrochloric acid), so named because it dissolves gold. Gold also dissolves in alkaline solutions of cyanide, which have been used in mining. Gold dissolves in mercury, forming amalgam alloys. Gold is insoluble in nitric acid, which dissolves silver and base metals, a property that has long been used to confirm the presence of gold in items, giving rise to the term the acid test.
Gold has been a valuable and highly sought-after precious metal for coinage, jewelry, and other arts since long before the beginning of recorded history. Gold standards have been a common basis for monetary policies throughout human history, later being supplanted by fiat currency starting in the 1930s. The last gold certificate and gold coin currencies were issued in the U.S. in 1932. In Europe, most countries left the gold standard with the start of World War I in 1914 and, with huge war debts, failed to return to gold as a medium of exchange.
A total of 165,000 tonnes of gold have been mined in human history, as of 2009. This is roughly equivalent to 5.3 billion troy ounces or, in terms of volume, about 8500 m3, or a cube 20.4 m on a side. The world consumption of new gold produced is about 50% in jewelry, 40% in investments, and 10% in industry.
Besides its widespread monetary and symbolic functions, gold has many practical uses in dentistry, electronics, and other fields. Its high malleability, ductility, resistance to corrosion and most other chemical reactions, and conductivity of electricity led to many uses of gold, including electric wiring, colored-glass production and even gold leaf eating.
It has been claimed that most of the Earth's gold lies at its core, the metal's high density having made it sink there in the planet's youth. Virtually all of the gold that mankind has discovered is considered to have been deposited later by meteorites which contained the element. This supposedly explains why, in prehistory, gold appeared as nuggets on the earth's surface.
Mercury
Mercury is a chemical element with the symbol Hg and atomic number 80. It is also known as quicksilver or hydrargyrum (from Greek hydr- "water" and argyros "silver"). A heavy, silvery d-block element, mercury is the only metal that is liquid at standard conditions for temperature and pressure; the only other element that is liquid under these conditions is bromine, though metals such as caesium, francium, gallium, and rubidium melt just above room temperature. With a freezing point of −38.83 °C and a boiling point of 356.73 °C, mercury has one of the narrowest liquid ranges of any metal.
Mercury occurs in deposits throughout the world mostly as cinnabar (mercuric sulfide). The red pigment vermilion is mostly obtained by reduction from cinnabar. Cinnabar is highly toxic by ingestion or inhalation of the dust. Mercury poisoning can also result from exposure to water-soluble forms of mercury (such as mercuric chloride or methylmercury), inhalation of mercury vapor, or eating seafood contaminated with mercury.
Mercury is used in thermometers, barometers, manometers, sphygmomanometers, float valves, mercury switches, and other devices though concerns about the element's toxicity have led to mercury thermometers and sphygmomanometers being largely phased out in clinical environments in favor of alcohol-filled, galinstan-filled, digital, or thermistor-based instruments. It remains in use in scientific research applications and in amalgam material for dental restoration. It is used in lighting: electricity passed through mercury vapor in a phosphor tube produces short-wave ultraviolet light which then causes the phosphor to fluoresce, making visible light.
p-block elements
Thallium
Thallium is a chemical element with the symbol Tl and atomic number 81. This soft gray other metal resembles tin but discolors when exposed to air. The two chemists William Crookes and Claude-Auguste Lamy discovered thallium independently in 1861 by the newly developed method of flame spectroscopy. Both discovered the new element in residues of sulfuric acid production.
Approximately 60–70% of thallium production is used in the electronics industry, and the remainder is used in the pharmaceutical industry and in glass manufacturing. It is also used in infrared detectors. Thallium is highly toxic and was used in rat poisons and insecticides. Its use has been reduced or eliminated in many countries because of its nonselective toxicity. Because of its use for murder, thallium has gained the nicknames "The Poisoner's Poison" and "Inheritance Powder" (alongside arsenic).
Lead
Lead is a main-group element in the carbon group with the symbol Pb (from Latin plumbum) and atomic number 82. Lead is a soft, malleable other metal. It is also counted as one of the heavy metals. Metallic lead has a bluish-white color after being freshly cut, but it soon tarnishes to a dull grayish color when exposed to air. Molten lead has a shiny chrome-silver luster.
Lead is used in building construction, lead-acid batteries, bullets and shots, weights, as part of solders, pewters, fusible alloys and as a radiation shield. Lead has the highest atomic number of all of the stable elements, although the next higher element, bismuth, has a half-life that is so long (much longer than the age of the universe) that it can be considered stable. Its four stable isotopes have 82 protons, a magic number in the nuclear shell model of atomic nuclei.
Lead, at certain exposure levels, is a poisonous substance to animals as well as for human beings. It damages the nervous system and causes brain disorders. Excessive lead also causes blood disorders in mammals. Like the element mercury, another heavy metal, lead is a neurotoxin that accumulates both in soft tissues and the bones. Lead poisoning has been documented from ancient Rome, ancient Greece, and ancient China.
Bismuth
Bismuth is a chemical element with symbol Bi and atomic number 83. Bismuth, a trivalent other metal, chemically resembles arsenic and antimony. Elemental bismuth may occur naturally uncombined, although its sulfide and oxide form important commercial ores. The free element is 86% as dense as lead. It is a brittle metal with a silvery white color when freshly produced, but it is often seen in air with a pink tinge owing to surface oxidation. Bismuth metal has been known since ancient times, although until the 18th century it was often confused with lead and tin, which share some of its bulk physical properties. The etymology is uncertain: the name possibly comes from an Arabic phrase meaning "having the properties of antimony" or from German words meaning "white mass".
Bismuth is the most naturally diamagnetic of all metals, and only mercury has a lower thermal conductivity.
Bismuth has classically been considered to be the heaviest naturally occurring stable element, in terms of atomic mass. Recently, however, it has been found to be very slightly radioactive: its only primordial isotope bismuth-209 decays via alpha decay into thallium-205 with a half-life of more than a billion times the estimated age of the universe.
Bismuth compounds (accounting for about half the production of bismuth) are used in cosmetics, pigments, and a few pharmaceuticals. Bismuth has unusually low toxicity for a heavy metal. As the toxicity of lead has become more apparent in recent years, alloy uses for bismuth metal (presently about a third of bismuth production), as a replacement for lead, have become an increasing part of bismuth's commercial importance.
Polonium
Polonium is a chemical element with the symbol Po and atomic number 84, discovered in 1898 by Marie Skłodowska-Curie and Pierre Curie. A rare and highly radioactive element, polonium is chemically similar to bismuth and tellurium, and it occurs in uranium ores. Polonium has been studied for possible use in heating spacecraft. As it is unstable, all isotopes of polonium are radioactive. There is disagreement as to whether polonium is a post-transition metal or metalloid.
Astatine
Astatine is a radioactive chemical element with the symbol At and atomic number 85. It occurs on Earth only as the result of the decay of heavier elements, and decays away rapidly, so much less is known about this element than about its lighter neighbors in the periodic table. Earlier studies have shown that the element follows periodic trends, being the heaviest known halogen, with melting and boiling points higher than those of the lighter halogens.
Until recently, most of the chemical characteristics of astatine were inferred from comparison with other elements, though important studies have now been done. The main difference between astatine and iodine is that the HAt molecule is chemically a hydride rather than a halide; however, like the lighter halogens, astatine is known to form ionic astatides with metals. Bonds to nonmetals result in positive oxidation states, with +1 best portrayed by the monohalides and their derivatives, while the higher states are characterized by bonds to oxygen and carbon. Attempts to synthesize astatine fluoride have met with failure. Astatine-211, the second longest-lived isotope, is the only one with a commercial use, as an alpha emitter in medicine; only extremely small quantities are used, and in larger amounts it is very hazardous because it is intensely radioactive.
Astatine was first produced by Dale R. Corson, Kenneth Ross MacKenzie, and Emilio Segrè at the University of California, Berkeley in 1940. Three years later, it was found in nature; however, with an estimated amount of less than 28 grams (1 oz) present at any given time, astatine is the least abundant element in the Earth's crust among the non-transuranium elements. Among astatine isotopes, four (with mass numbers 215, 217, 218 and 219) are present in nature as the result of the decay of heavier elements; however, the most stable isotope, astatine-210, and the industrially used astatine-211 are not.
Radon
Radon is a chemical element with symbol Rn and atomic number 86. It is a radioactive, colorless, odorless, tasteless noble gas, occurring naturally as the decay product of uranium or thorium. Its most stable isotope, 222Rn, has a half-life of 3.8 days. Radon is one of the densest substances that remains a gas under normal conditions. It is also the only gas that is radioactive under normal conditions, and is considered a health hazard due to its radioactivity. Intense radioactivity also hindered chemical studies of radon and only a few compounds are known.
Radon is formed as part of the normal radioactive decay chains of uranium and thorium. Uranium and thorium have existed since the Earth was formed, and their most common isotopes have very long half-lives (14.05 billion years for thorium-232). Uranium, thorium, radium, and thus radon, will continue to occur for millions of years at about the same concentrations as they do now. As radon decays, it produces new radioactive elements called radon daughters or decay products. Radon daughters are solids and stick to surfaces such as dust particles in the air. If contaminated dust is inhaled, these particles can stick to the airways of the lung and increase the risk of developing lung cancer.
Radon is responsible for the majority of the public exposure to ionizing radiation. It is often the single largest contributor to an individual's background radiation dose, and is the most variable from location to location. Radon gas from natural sources can accumulate in buildings, especially in confined areas such as attics and basements. It can also be found in some spring waters and hot springs.
Epidemiological studies have shown a clear link between breathing high concentrations of radon and incidence of lung cancer. Thus, radon is considered a significant contaminant that affects indoor air quality worldwide. According to the United States Environmental Protection Agency, radon is the second most frequent cause of lung cancer, after cigarette smoking, causing 21,000 lung cancer deaths per year in the United States. About 2,900 of these deaths occur among people who have never smoked. While radon is the second most frequent cause of lung cancer, it is the number one cause among non-smokers, according to EPA estimates.
Biological role
Of the period 6 elements, only tungsten and the early lanthanides are known to have any biological role in organisms, and even then only in lower organisms (not mammals). However, gold, platinum, mercury, and some lanthanides such as gadolinium have applications as drugs.
Toxicity
Most of the period 6 elements are toxic (for instance lead) and produce heavy-element poisoning. Promethium, polonium, astatine and radon are radioactive, and therefore present radioactive hazards.
NMEA 0183 is a combined electrical and data specification for communication between marine electronics such as echo sounders, sonars, anemometers, gyrocompasses, autopilots, GPS receivers and many other types of instruments. It has been defined and is controlled by the National Marine Electronics Association (NMEA). It replaces the earlier NMEA 0180 and NMEA 0182 standards. In leisure marine applications, it is slowly being phased out in favor of the newer NMEA 2000 standard, though NMEA 0183 remains the norm in commercial shipping.
Details
The electrical standard used is EIA-422, also known as RS-422, although most hardware with NMEA-0183 outputs is also able to drive a single EIA-232 port. The standard calls for optically isolated inputs; there is no isolation requirement for the outputs.
The NMEA 0183 standard uses a simple ASCII, serial communications protocol that defines how data are transmitted in a "sentence" from one "talker" to multiple "listeners" at a time. Through the use of intermediate expanders, a talker can have a unidirectional conversation with a nearly unlimited number of listeners, and using multiplexers, multiple sensors can talk to a single computer port.
At the application layer, the standard also defines the contents of each sentence (message) type, so that all listeners can parse messages accurately.
While NMEA 0183 only defines an RS-422 transport, there also exists a de facto standard in which the sentences from NMEA 0183 are placed in UDP datagrams (one sentence per packet) and sent over an IP network.
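This de facto UDP transport can be sketched in a few lines; the function name below and the port 10110 (a convention used by some chart-plotting software, not part of any standard) are assumptions for illustration.

```python
import socket

def send_nmea_udp(sentence: str, host: str = "127.0.0.1", port: int = 10110) -> bytes:
    """Send one NMEA 0183 sentence per UDP datagram, the de facto practice."""
    # A sentence on the wire always ends with <CR><LF>.
    payload = sentence.rstrip("\r\n").encode("ascii") + b"\r\n"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, (host, port))
    return payload

datagram = send_nmea_udp("$GPAAM,A,A,0.10,N,WPTNME*32")
```

Because each datagram carries exactly one sentence, a listener can parse every packet independently and tolerate occasional loss.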
The NMEA standard is proprietary and sells for at least US$2000 (except for members of the NMEA) as of September 2020. However, much of it has been reverse-engineered from public sources.
UART settings
The standard specifies a baud rate of 4,800 with 8 data bits, no parity, and one stop bit (8N1). A variation of the standard called NMEA-0183HS specifies a baud rate of 38,400; this is in general use by AIS devices.
Message structure
All transmitted data are printable ASCII characters between 0x20 (space) and 0x7e (~)
Data characters are all the above characters except the reserved characters (see next line)
Reserved characters are used by NMEA 0183 for special purposes: $ and ! (start of sentence), * (checksum delimiter), , (field delimiter), <CR> and <LF> (end of sentence), and \, ^ and ~ (reserved for possible future use)
Messages have a maximum length of 82 characters, including the $ or ! starting character and the ending <CR><LF>
The start character for each message can be either a $ (For conventional field delimited messages) or ! (for messages that have special encapsulation in them)
The next five characters identify the talker (two characters) and the type of message (three characters).
All data fields that follow are comma-delimited.
Where data is unavailable, the corresponding field remains blank (it contains no character before the next delimiter – see Sample file section below).
The first character that immediately follows the last data field character is an asterisk, but it is only included if a checksum is supplied.
The asterisk is immediately followed by a checksum represented as a two-digit hexadecimal number. The checksum is the bitwise exclusive OR of the ASCII codes of all characters between the $ (or !) and the *, not inclusive. According to the official specification, the checksum is optional for most data sentences, but is compulsory for RMA, RMB, and RMC (among others).
<CR><LF> ends the message.
As an example, a waypoint arrival alarm has the form:
$GPAAM,A,A,0.10,N,WPTNME*32
Another example for AIS messages is:
!AIVDM,1,1,,A,14eG;o@034o8sd<L9i:a;WF>062D,0*7D
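The checksum rule above can be verified with a short sketch (the helper name is illustrative, not from the standard):

```python
from functools import reduce

def nmea_checksum(sentence: str) -> str:
    """Bitwise XOR of the ASCII codes of every character between $/! and *, exclusive."""
    body = sentence[1:].split("*", 1)[0]      # drop the start character and any checksum
    value = reduce(lambda acc, ch: acc ^ ord(ch), body, 0)
    return f"{value:02X}"

print(nmea_checksum("$GPAAM,A,A,0.10,N,WPTNME*32"))  # prints 32, matching the transmitted value
```

The same computation applies to ! sentences such as the AIS example, since the start character is excluded either way.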
NMEA sentence format
The main talker IDs include:
BD or GB - Beidou
GA - Galileo
GP - GPS
GL - GLONASS
The sentence type is identified by the three characters that follow the talker ID.
For example, the sentence carrying Global Positioning System fix data from a GPS receiver is "$GPGGA".
Vendor extensions
Most GPS manufacturers include special messages in addition to the standard NMEA set in their products for maintenance and diagnostics purposes. Extended messages begin with "$P". These extended messages are not standardized.
Software compatibility
NMEA 0183 is supported by various navigation and mapping software. Notable applications include:
Infrakit SURVEY
DeLorme Street Atlas
ESRI
Google Earth
Google Maps Mobile Edition
gpsd - Unix GPS Daemon
JOSM - OpenStreetMap Map Editor
MapKing
Microsoft MapPoint
Microsoft Streets & Trips
NetStumbler
OpenCPN - Open source navigation software
OpenBSD's hw.sensors framework with the nmea(4) pseudo-device driver
OpenNTPD through sysctl API
Rand McNally StreetFinder
ObserVIEW
QGIS
Sample file
A sample file produced by a Tripmate 850 GPS logger. This file was produced in Leixlip, County Kildare, Ireland. The record lasts two seconds.
$GPGGA,092750.000,5321.6802,N,00630.3372,W,1,8,1.03,61.7,M,55.2,M,,*4E
$GPGSA,A,3,10,07,05,02,29,04,08,13,,,,,1.72,1.03,1.38*0A
$GPGSV,3,1,11,10,63,137,17,07,61,098,15,05,59,290,20,08,54,157,30*70
$GPGSV,3,2,11,02,39,223,19,13,28,070,17,26,23,252,,04,14,186,14*79
$GPGSV,3,3,11,29,09,301,24,16,09,020,,36,,,*76
$GPRMC,092750.000,A,5321.6802,N,00630.3372,W,0.02,31.66,280511,,,A*43
$GPGGA,092751.000,5321.6802,N,00630.3371,W,1,8,1.03,61.7,M,55.3,M,,*75
$GPGSA,A,3,10,07,05,02,29,04,08,13,,,,,1.72,1.03,1.38*0A
$GPGSV,3,1,11,10,63,137,17,07,61,098,15,05,59,290,20,08,54,157,30*70
$GPGSV,3,2,11,02,39,223,16,13,28,070,17,26,23,252,,04,14,186,15*77
$GPGSV,3,3,11,29,09,301,24,16,09,020,,36,,,*76
$GPRMC,092751.000,A,5321.6802,N,00630.3371,W,0.06,31.66,280511,,,A*45
Note some blank fields, for example:
GSV records, which describe the satellites in view, lack the SNR (signal-to-noise ratio) field for satellite 16 and all data for satellite 36.
GSA record, which lists the satellites used for determining a fix (position) and gives a DOP for the fix, contains 12 fields for satellite numbers, but only 8 satellites were used, so 4 fields remain blank.
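The sample records above can be pulled apart with a minimal parser; the helper names and the ddmm.mmmm-to-degrees conversion below are illustrative assumptions, not text from the standard.

```python
def parse_nmea(line: str):
    """Split a sentence into (talker, sentence type, data fields); blank fields stay ''."""
    body = line[1:].split("*", 1)[0]          # strip '$' and the checksum suffix
    fields = body.split(",")
    return fields[0][:2], fields[0][2:], fields[1:]

def ddmm_to_degrees(value: str, hemisphere: str) -> float:
    """Convert NMEA's (d)ddmm.mmmm latitude/longitude format to signed decimal degrees."""
    head, minutes = value.split(".")
    degrees = int(head[:-2]) + float(f"{head[-2:]}.{minutes}") / 60.0
    return -degrees if hemisphere in ("S", "W") else degrees

talker, kind, data = parse_nmea(
    "$GPGGA,092750.000,5321.6802,N,00630.3372,W,1,8,1.03,61.7,M,55.2,M,,*4E")
lat = ddmm_to_degrees(data[1], data[2])       # 5321.6802,N -> about 53.3613 degrees north
lon = ddmm_to_degrees(data[3], data[4])       # 00630.3372,W -> about -6.5056 degrees (west)
```

Note that the two trailing blank fields of the GGA record come back as empty strings rather than being dropped, preserving the field positions.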
Status
NMEA 0183 continued to be maintained separately: V4.10 was published in early May 2012, and an erratum was noted on 12 May 2012.
On November 27, 2018, an update to version 4.11 was issued, adding support for Global Navigation Satellite Systems other than GPS.
See also
GPS Exchange Format
TransducerML
IEEE 1451
IEC 61162
NMEA 2000
NMEA OneNet
RTCM SC-104
RINEX
References
External links
National Marine Electronics Association
NMEA's website about NMEA 0183
NMEA Specifications at APRS Info
A service is an act or use for which a consumer, company, or government is willing to pay. Examples include work done by barbers, doctors, lawyers, mechanics, banks, insurance companies, and so on. Public services are those that society (nation state, fiscal union or region) as a whole pays for. Using resources, skill, ingenuity, and experience, service providers benefit service consumers. Services may be defined as intangible acts or performances whereby the service provider provides value to the customer.
Key characteristics
Services have three key characteristics:
Intangibility
Services are by definition intangible. They are not manufactured, transported or stocked.
One cannot store services for future use. They are produced and consumed simultaneously.
Perishability
Services are perishable in two regards:
Service-relevant resources, processes, and systems are assigned for service delivery during a specific period in time. If the service consumer does not request and consume the service during this period, the related resources may go unused. From the perspective of the service provider, this is a lost business opportunity if no other use for those resources is available. Examples: A hairdresser serves another client. An empty seat on an airplane cannot be filled after departure.
When the service has been completely rendered to the consumer, this particular service irreversibly vanishes. Example: a passenger has been transported to the destination.
The service provider must deliver the service at the exact time of service consumption. The service is not manifested in a physical object that is independent of the provider. The service consumer is also inseparable from service delivery. Examples: The service consumer must sit in the hairdresser's chair, or in the airplane seat. Correspondingly, the hairdresser or the pilot must be in the shop or plane, respectively, to deliver the service.
Variability
Each service is unique. It can never be exactly repeated as the time, location, circumstances, conditions, current configurations or assigned resources are different for the next delivery, even if the same service is requested by the consumer. Many services are regarded as heterogeneous and are typically modified for each service-consumer or for each service-context. Example: The taxi service which transports the service consumer from home to work is different from the taxi service which transports the same service consumer from work to home – another point in time, the other direction, possibly another route, probably another taxi-driver and cab. Another and more common term for this is heterogeneity.
Service quality
Mass generation and delivery of services must be mastered for a service provider to expand. This can be seen as a problem of service quality. Both the inputs and the outputs of the processes involved in providing services are highly variable, as are the relationships between these processes, making it difficult to maintain consistent service quality. Many services involve variable human activity rather than a precisely determined process; exceptions include utilities. The human factor is often the key success factor in service provision. Demand can vary by season, time of day, business cycle, etc. Consistency is necessary to create enduring business relationships.
Specification
Any service can be clearly and completely, consistently and concisely specified by means of standard attributes that conform to the MECE principle (Mutually Exclusive, Collectively Exhaustive).
Service consumer benefits – (set of) benefits that are triggerable, consumable and effectively utilizable for any authorized service consumer and that are rendered upon request. These benefits must be described in terms that are meaningful to consumers.
Service-specific functional parameters – parameters that are essential to the respective service and that describe the important dimension(s) of the servicescape, the service output or the service outcome, e.g. whether the passenger sits in an aisle or window seat.
Service delivery point – the physical location or logical interface where the benefits of the service are rendered to the consumer. At this point the service delivery preparation can be assessed and delivery can be monitored and controlled.
Service consumer count – the number of consumers that are enabled to consume a service.
Service delivery readiness time – the moments when the service is available and all the specified service elements are available at the delivery point
Service consumer support times – the moments when the support team ("service desk") is available. The service desk is the Single Point of Contact (SPoC) for service inquiries. At those times, the service desk can be reached via commonly available communication methods (phone, web, etc.)
Service consumer support language – the language(s) spoken by the service desk.
Service fulfillment target – the provider's promise to deliver the service, expressed as the ratio of the count of successful service deliveries to the count of service requests by a single consumer or consumer group over some time period.
Service impairment duration – the maximum allowable interval between the first occurrence of a service impairment and the full resumption and completion of the service delivery.
Service delivery duration – the maximum allowable period for effectively rendering all service benefits to the consumer.
Service delivery unit – the scope/number of action(s) that constitute a delivered service. Serves as the reference object for the Service Delivering Price, for all service costs as well as for charging and billing.
Service delivery price – the amount of money the customer pays to receive a service. Typically, the price includes a service access price that qualifies the consumer to request the service and a service consumption price for each delivered service.
Delivery
The delivery of a service typically involves six factors:
Service provider (workers and managers)
Equipment used to provide the service (e.g. vehicles, cash registers, technical systems, computer systems)
Physical facilities (e.g. buildings, parking, waiting rooms)
Service consumer
Other customers at the service delivery location
Customer contact
The service encounter is defined as all activities involved in the service delivery process. Some service managers use the term "moment of truth" to indicate that point in a service encounter where interactions are most intense.
Many business theorists view service provision as a performance or act (an analogy drawn from dramaturgy). The location of the service delivery is referred to as the stage, and the objects that facilitate the service process are called props. A script is a sequence of behaviors followed by those involved, including the client(s). Some service dramas are tightly scripted; others are more ad lib. Role congruence occurs when each actor follows a script that harmonizes with the roles played by the other actors.
In some service industries, especially health care, dispute resolution and social services, a popular concept is the idea of the caseload, which refers to the total number of patients, clients, litigants, or claimants for which a given employee is responsible. Employees must balance the needs of each individual case against the needs of all other current cases as well as their own needs.
Under English law, if a service provider is induced to deliver services to a dishonest client by a deception, this is an offence under the Theft Act 1978.
Lovelock used the number of delivery sites (single or multiple) and the method of delivery to classify services in a 2 × 3 matrix. The implication is that the convenience of receiving the service is lowest when the customer has to come to the service and must use a single or specific outlet, and that convenience increases (to a point) as the number of service points increases.
Service-commodity goods continuum
The distinction between a good and a service remains disputed. The perspective in the late-eighteenth and early-nineteenth centuries focused on creation and possession of wealth. Classical economists contended that goods were objects of value over which ownership rights could be established and exchanged. Ownership implied tangible possession of an object that had been acquired through purchase, barter or gift from the producer or previous owner and was legally identifiable as the property of the current owner.
Adam Smith's famous book, The Wealth of Nations, published in 1776, distinguished between the outputs of what he termed "productive" and "unproductive" labor. The former, he stated, produced goods that could be stored after production and subsequently exchanged for money or other items of value. The latter, however useful or necessary, created services that perished at the time of production and therefore did not contribute to wealth. Building on this theme, French economist Jean-Baptiste Say argued that production and consumption were inseparable in services, coining the term "immaterial products" to describe them.
In the modern day, Gustafsson and Johnson describe a continuum with pure service at one terminal point and pure commodity good at the other. Most products fall between these two extremes. For example, a restaurant provides a physical good (the food), but also provides services in the form of ambience, the setting and clearing of the table, etc. And although some utilities actually deliver physical goods (water utilities, for instance, deliver water), utilities are usually treated as services.
Service types
The following is a list of service industries, grouped into sectors. Parenthetical notations indicate how specific occupations and organizations can be regarded as service industries to the extent they provide an intangible service, as opposed to a tangible good.
Business functions (that apply to all organizations in general)
Consulting
Customer service
Human resources administrators (providing services like ensuring that employees are paid accurately)
Cleaning, patronage, repair and maintenance services
Gardeners
Janitors (who provide cleaning services)
Mechanics
Construction
Carpentry
Electricians (offering the service of making wiring work properly)
Plumbing
Death care
Coroners (who provide the service of identifying cadavers and determining time and cause of death)
Funeral homes (who prepare corpses for public display, cremation or burial)
Dispute resolution and prevention services
Arbitration
Courts of law (who perform the service of dispute resolution backed by the power of the state)
Diplomacy
Incarceration (provides the service of keeping criminals out of society)
Law enforcement (provides the service of identifying and apprehending criminals)
Lawyers (who perform the services of advocacy and decisionmaking in many dispute resolution and prevention processes)
Mediation
Military (performs the service of protecting states in disputes with other states)
Negotiation (not really a service unless someone is negotiating on behalf of another)
Education (institutions offering the services of teaching and access to information)
Library
Museum
School
Entertainment (when provided live or within a highly specialized facility)
Gambling
Movie theatres (providing the service of showing a movie on a big screen)
Performing arts productions
Sport
Television
Fabric care
Dry cleaning
Laundry
Financial services
Accountancy
Banks and building societies (offering lending services and safekeeping of money and valuables)
Real estate
Stock brokerages
Tax services
Valuation
Foodservice industry
Health care (all health care professions provide services)
Hospitality industry
Information services
Database services
Data processing
Interpreting
Translation
Logistics
Transport
Warehousing
Stock management
Packaging
Personal grooming
Body hair removal
Dental hygienist
Hairdressing
Manicurist / pedicurist
Public utility
Electric power
Natural gas
Telecommunications
Waste management
Water industry
Risk management
Insurance
Security
Social services
Social work
Childcare
Elderly care
List of countries by tertiary output
See also
As a service
Deliverable
Good (economics)
Intangible good
List of economics topics
Product (economics)
Services marketing
Universal basic services
References
Further reading
Valarie Zeithaml, A. Parasuraman, Leonard Berry (1990): SERVQUAL
Sharon Dobson: Product and Services Strategy
John Swearingen: Operations Management – Characteristics of Services
James A. Fitzsimmons, Mona J. Fitzsimmons: Service Management: Operations, Strategy, Information Technology
Russell Wolak, Stavros Kalafatis, Patricia Harris: An Investigation Into Four Characteristics of Services
Sheelagh Matear, Brendan Gray, Tony Garrett, Ken Deans: Moderating Effects of Service Characteristics on the Sources of Competitive Advantage – Positional Advantage Relationship
Alan Pilkington, Kah Hin Chai, "Research Themes, Concepts and Relationships: A study of International Journal of Service Industry Management (1990 to 2005)", International Journal of Service Industry Management, (2008) Vol. 19, No. 1, pp. 83–110.
External links
Competition

Competition is a rivalry where two or more parties strive for a common goal which cannot be shared: where one's gain is the other's loss (an example of which is a zero-sum game). Competition can arise between entities such as organisms, individuals, economic and social groups, etc. The rivalry can be over attainment of any exclusive goal, including recognition.
Competition occurs in nature, between living organisms which co-exist in the same environment. Animals compete over water supplies, food, mates, and other biological resources. Humans usually compete for food and mates, though when these needs are met deep rivalries often arise over the pursuit of wealth, power, prestige, and fame when in a static, repetitive, or unchanging environment. Competition is a major tenet of market economies and business, often associated with business competition as companies are in competition with at least one other firm over the same group of customers. Competition inside a company is usually stimulated with the larger purpose of meeting and reaching higher quality of services or improved products that the company may produce or develop.
Competition is often considered to be the opposite of cooperation; however, in the real world, mixtures of cooperation and competition are the norm. In economies, as the philosopher R. G. Collingwood argued "the presence of these two opposites together is essential to an economic system. The parties to an economic action co-operate in competing, like two chess players". Optimal strategies to achieve goals are studied in the branch of mathematics known as game theory.
Competition has been studied in several fields, including psychology, sociology and anthropology. Social psychologists, for instance, study the nature of competition. They investigate the natural urge of competition and its circumstances. They also study group dynamics, to detect how competition emerges and what its effects are. Sociologists, meanwhile, study the effects of competition on society as a whole. Additionally, anthropologists study the history and prehistory of competition in various cultures. They also investigate how competition manifested itself in various cultural settings in the past, and how competition has developed over time.
Biology and ecology
Competition within, between, and among species is one of the most important forces in biology, especially in the field of ecology.
Competition between members of a species ("intraspecific") for resources such as food, water, territory, and sunlight may result in an increase in the frequency of a variant of the species best suited for survival and reproduction until its fixation within a population. However, competition for resources also has a strong tendency to drive diversification between members of the same species, resulting in coexistence of competitive and non-competitive strategies or cycles between low and high competitiveness. Third parties within a species often favour highly competitive strategies, leading to species extinction when environmental conditions are harsh (evolutionary suicide).
Competition is also present between species ("interspecific"). When resources are limited, several species may depend on them, and each species competes with the others to gain access. As a result, species less suited to compete for the resources may die out unless they adapt, for instance by character displacement. According to evolutionary theory, this competition within and between species for resources plays a significant role in natural selection. At shorter time scales, competition is also one of the most important factors controlling diversity in ecological communities, but at larger scales the expansion and contraction of ecological space is a much larger factor than competition. This is illustrated by living plant communities, where asymmetric competition and competitive dominance frequently occur. Multiple examples of symmetric and asymmetric competition also exist for animals.
Consumer competitions – games of luck or skill
In Australia, New Zealand and the United Kingdom, competitions or lotto are the equivalent of what are commonly known as sweepstakes in the United States. The correct technical name for Australian consumer competitions is a trade promotion lottery or lotto.
Competition or trade promotion lottery entrants enter to win a prize or prizes, hence many entrants are all in competition, or competing for a limited number of prizes.
A trade promotion lottery or competition is a free-entry lottery run to promote goods or services supplied by a business. An example is where a customer purchases goods or services and is then given the chance to enter the lottery and possibly win a prize. A trade promotion lottery can be called a lotto, competition, contest, sweepstake, or giveaway.
Such competitions can be games of luck (randomly drawn) or skill (judged on an entry question or submission), or possibly a combination of both.
People that enjoy entering competitions are known as compers.
Competitiveness
Many philosophers and psychologists have identified a trait in most living organisms which can drive the particular organism to compete. This trait, called competitiveness, is viewed as having a high adaptive value, which coexists along with the urge for survival. Competitiveness, or the inclination to compete, though, has become synonymous with aggressiveness and ambition in the English language. More advanced civilizations integrate aggressiveness and competitiveness into their interactions, as a way to distribute resources and adapt. Many plants compete with neighboring ones for sunlight.
The term also applies to econometrics. Here, it is a comparative measure of the ability and performance of a firm or sub-sector to sell and produce/supply goods and/or services in a given market. The two academic bodies of thought on the assessment of competitiveness are the Structure Conduct Performance Paradigm and the more contemporary New Empirical Industrial Organisation model. Predicting changes in the competitiveness of business sectors is becoming an integral and explicit step in public policymaking. Within capitalist economic systems, the drive of enterprises is to maintain and improve their own competitiveness.
One-upmanship
One-upmanship, also called "one-upsmanship", is the art or practice of successively outdoing a competitor. The term was first used in the title of a book by Stephen Potter, published in 1952 as a follow-up to The Theory and Practice of Gamesmanship (or the Art of Winning Games without Actually Cheating) (1947). Other Lifemanship titles in his series of tongue-in-cheek self-help books, as well as film and television derivatives, teach various ploys to achieve this. This comic satire of self-help style guides manipulates traditional British conventions for the gamester. The underlying principle is that all of life is a game, and the gamester understands that if you're not one-up, you're one-down. Potter's unprincipled principles apply to almost any possession, experience or situation, deriving maximum undeserved rewards and discomfiting the opposition. The 1960 film School for Scoundrels and its 2006 remake were satiric portrayals of how to use Potter's ideas.
In that context, the term refers to a satiric course in the gambits required for the systematic and conscious practice of "creative intimidation", making one's associates feel inferior and thereby gaining the status of being "one-up" on them. Viewed seriously, it is a phenomenon of group dynamics that can have significant effects in the management field: for instance, manifesting in office politics.
Education
Competition is a major factor in education. On a global scale, national education systems, intending to bring out the best in the next generation, encourage competitiveness among students through scholarships. Countries such as England and Singapore have special education programmes which cater for specialist students, prompting charges of academic elitism. Upon receipt of their academic results, students tend to compare their grades to see who is better. In severe cases, the pressure to perform in some countries is so high that it can result in stigmatization of intellectually deficient students, or even suicide as a consequence of failing the exams. Critics of competition as a motivating factor in education systems, such as Alfie Kohn, assert that competition actually has a net negative influence on the achievement levels of students, and that it "turns all of us into losers". Economist Richard Layard has commented on the harmful effects, stating "people feel that they are under a great deal of pressure. They feel that their main objective in life is to do better than other people. That is certainly what young people are being taught in school every day. And it's not a good basis for a society."
However, other studies such as the Torrance Tests of Creative Thinking show that the effect of competition on students depends on each individual's level of agency. Students with a high level of agency thrive on competition, are self-motivated, and are willing to risk failure. Compared to their counterparts who are low in agency, these students are more likely to be flexible, adaptable and creative as adults.
Economics
Merriam-Webster gives as one definition of competition (relating to business) as "[...] rivalry: such as [...] the effort of two or more parties acting independently to secure the business of a third party by offering the most favorable terms". Adam Smith in his 1776 book The Wealth of Nations and later economists described competition in general as allocating productive resources to their most highly valued uses and encouraging efficiency. Later microeconomic theory distinguished between perfect competition and imperfect competition, concluding that no system of resource allocation is more efficient than perfect competition. Competition, according to the theory, causes commercial firms to develop new products, services and technologies, which would give consumers greater selection and better products. The greater selection typically causes lower prices for the products, compared to what the price would be if there was no competition (monopoly) or little competition (oligopoly).
However, competition may also lead to wasted (duplicated) effort and to increased costs (and prices) in some circumstances. For example, the intense competition for the small number of top jobs in music and movie-acting leads many aspiring musicians and actors to make substantial investments in training which are not recouped, because only a fraction become successful. Critics have also argued that competition can be destabilizing, particularly competition between certain financial institutions.
Experts have also questioned the constructiveness of competition in profitability. It has been argued that competition-oriented objectives are counterproductive to raising revenues and profitability because they limit the options of strategies for firms as well as their ability to offer innovative responses to changes in the market. In addition, the strong desire to defeat rival firms with competitive prices has the strong possibility of causing price wars.
Another distinction appearing in economics is that between competition as an end-state – as in the case of both perfect and imperfect competition – and competition as a process. It is a process of rivalry between firms (or consumers) intensifying selective pressures for improvements. One can restate this as a process of discovery.
Three levels of end-state economic competition have been classified:
The most narrow form is direct competition (also called "category competition" or "brand competition"), where products which perform the same function compete against each other. For example, one brand of pick-up trucks competes with several other brands of pick-up trucks. Sometimes, when one of two rival companies adds new products to its line, the other distributes similar products, and in this manner they compete.
The next form is substitute or indirect competition, where products which are close substitutes for one another compete. For example, butter competes with margarine, with mayonnaise and with other various sauces and spreads.
The broadest form of competition is typically called budget competition. Included in this category is anything on which the consumer might want to spend their available money. For example, a family which has $20,000 available may choose to spend it on many different items, which can all be seen as competing with each other for the family's expenditure. This form of competition is also sometimes described as a competition of "share of wallet".
In addition, companies compete for financing on the capital markets (equity or debt) in order to generate the necessary cash for their operations. Investors typically consider alternative investment opportunities given their risk profile, rather than looking only at companies competing on product (direct competitors). Enlarging the investment universe to include indirect competitors leads to a broader peer universe of comparable, indirectly competing companies.
Competition does not necessarily have to be between companies. For example, business writers sometimes refer to internal competition. This is competition within companies. The idea was first introduced by Alfred Sloan at General Motors in the 1920s. Sloan deliberately created areas of overlap between divisions of the company so that each division would compete with the other divisions. For example, the Chevrolet division would compete with the Pontiac division for some market segments. The competing brands by the same company allowed parts to be designed by one division and shared by several divisions, for example parts designed by Chevrolet would also be used by Pontiac. In 1931 Procter & Gamble initiated a deliberate system of internal brand-versus-brand rivalry. The company was organized around different brands, with each brand allocated resources, including a dedicated group of employees willing to champion the brand. Each brand manager was given responsibility for the success or failure of the brand, and compensated accordingly.
Most businesses also encourage competition between individual employees. An example of this is a contest between sales representatives. The sales representative with the highest sales (or the best improvement in sales) over a period of time would gain benefits from the employer. This is also known as intra-brand competition.
Shalev and Asbjornsen found that the success (i.e., the savings that resulted) of reverse auctions correlated most closely with competition. The literature widely supports the importance of competition as the primary driver of reverse auction success. Their findings appear to support that argument, as competition correlated strongly with reverse auction success, as well as with the number of bidders.
Business and economic competition in most countries is often limited or restricted. Competition often is subject to legal restrictions. For example, competition may be legally prohibited, as in the cases of a government monopoly or of a government-granted monopoly. Governments may institute tariffs, subsidies or other protectionist measures in order to prevent or reduce competition. Depending on the respective economic policy, pure competition is to a greater or lesser extent regulated by competition policy and competition law. Another component of these activities is the discovery process, with instances of higher government regulations typically leading to less competitive businesses being launched.
Nicholas Gruen has referred to "The Competition Delusion", in which competition is taken to be unambiguously good, even where that competition leaks into the rules of the game. He claims this drives financialisation (the approximate doubling of the proportion of economic resources dedicated to finance and to 'rule making and administering' professions such as law, accountancy and auditing).
Interstate
Competition between countries is quite subtle to detect, but is quite evident in the world economy. Countries compete to provide the best possible business environment for multinational corporations. Such competition is evident by the policies undertaken by these countries to educate the future workforce. For example, East Asian economies such as Singapore, Japan and South Korea tend to compete by allocating a large portion of the budget to the education sector, including by implementing programmes such as gifted education.
Law
Competition law, known in the United States as antitrust law, has three main functions:
First, it prohibits agreements aimed to restrict free trading between business entities and their customers. For example, a cartel of sports shops who together fix football-jersey prices higher than normal is illegal.
Second, competition law can ban the existence or abusive behaviour of a firm dominating the market. One case in point could be a software company who through its monopoly on computer platforms makes consumers use its media player.
Third, to preserve competitive markets, the law supervises the mergers and acquisitions of very large corporations. Competition authorities could for instance require that a large packaging company give plastic bottle licenses to competitors before taking over a major PET producer.
In all three cases, competition law aims to protect the welfare of consumers by ensuring that each business must compete for its share of the market economy.
In recent decades, competition law has also been sold as good medicine to provide better public services, traditionally funded by tax-payers and administered by democratically accountable governments. Hence competition law is closely connected with the law on deregulation of access to markets, providing state aids and subsidies, the privatisation of state-owned assets and the use of independent sector regulators, such as the United Kingdom telecommunications watchdog Ofcom. Behind the practice lies the theory, which over the last fifty years has been dominated by neo-classical economics. Markets are seen as the most efficient method of allocating resources, although sometimes they fail, and regulation becomes necessary to protect the ideal market model. Behind the theory lies the history, reaching back further than the Roman Empire. The business practices of market traders, guilds and governments have always been subject to scrutiny and sometimes to severe sanctions. Since the twentieth century, competition law has become global. The two largest, most organised and influential systems of competition regulation are United States antitrust law and European Community competition law. The respective national and international authorities, the U.S. Department of Justice (DOJ) and the Federal Trade Commission (FTC) in the United States and the European Commission's Competition Directorate General (DGCOMP), have formed international support and enforcement networks. Competition law is growing in importance every day, which warrants its careful study.
Game theory
Game theory is "the study of mathematical models of conflict and cooperation between intelligent rational decision-makers." Game theory is mainly used in economics, political science, and psychology, as well as logic, computer science, biology and poker. Originally, it mainly addressed zero-sum games, in which one person's gains result in losses for the other participants.
Game theory is a major method used in mathematical economics and business for modeling competing behaviors of interacting agents. Applications include a wide array of economic phenomena and approaches, such as auctions, bargaining, mergers & acquisitions pricing, fair division, duopolies, oligopolies, social network formation, agent-based computational economics, general equilibrium, mechanism design, and voting systems; and across such broad areas as experimental economics, behavioral economics, information economics, industrial organization, and political economy.
This research usually focuses on particular sets of strategies known as "solution concepts" or "equilibria". A common assumption is that players act rationally. In non-cooperative games, the most famous of these is the Nash equilibrium. A set of strategies is a Nash equilibrium if each represents a best response to the other strategies. If all the players are playing the strategies in a Nash equilibrium, they have no unilateral incentive to deviate, since their strategy is the best they can do given what others are doing.
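The best-response condition described above can be illustrated with the standard prisoner's dilemma. The sketch below checks every pure-strategy pair for the Nash property; the payoff numbers are the usual textbook assumption, not taken from the source.

```python
from itertools import product

# Payoffs (row player, column player) for Cooperate ("C") / Defect ("D").
payoffs = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
strategies = ["C", "D"]

def is_nash(r, c):
    # A pair is a Nash equilibrium if neither player can gain
    # by deviating unilaterally while the other stays put.
    row_ok = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in strategies)
    col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in strategies)
    return row_ok and col_ok

equilibria = [(r, c) for r, c in product(strategies, strategies) if is_nash(r, c)]
print(equilibria)  # [('D', 'D')] -- mutual defection is the unique Nash equilibrium
```

Note that mutual cooperation (3, 3) gives both players more than the equilibrium (1, 1), which is why the prisoner's dilemma is the canonical example of competition producing a worse outcome than cooperation.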
Literature
Literary competitions, such as contests sponsored by literary journals, publishing houses and theaters, have increasingly become a means for aspiring writers to gain recognition. Awards for fiction include those sponsored by the Missouri Review, Boston Review, Indiana Review, North American Review and Southwest Review. The Albee Award, sponsored by the Yale Drama Series, is among the most prestigious playwriting awards.
Philosophy
Margaret Heffernan's study, A Bigger Prize, examines the perils and disadvantages of competition in (for example) biology, families, sport, education, commerce and the Soviet Union.
Marx
Karl Marx insisted that "the capitalist system fosters competition and egoism in all its members and thoroughly undermines all genuine forms of community".
It promotes a "climate of competitive egoism and individualism", with competition for jobs and competition between employees; Marx said competition between workers exceeds that demonstrated by company owners. He also points out that competition separates individuals from one another, and while the concentration of workers and the development of better communication alleviate this, they are not a solution.
Freud
Sigmund Freud explained competition as a primal dilemma in which all infants find themselves. The infant competes with other family members for the attention and affection of the parent of the opposite sex or the primary caregiving parent. During this time, a boy develops a deep fear that the father (the son's prime rival) will punish him for these feelings of desire for the mother by castrating him. Girls develop penis envy towards all males. The girl's envy is rooted in the biological fact that, without a penis, she cannot sexually possess the mother, as the infantile id demands; as a result, the girl redirects her desire for sexual union upon the father in competitive rivalry with her mother. This constellation of feelings is known as the Oedipus complex (after the figure in Greek mythology who unknowingly killed his father and married his mother). This is associated with the phallic stage of childhood development, where intense primal emotions of competitive rivalry with (usually) the parent of the same sex are rampant and create a crisis that must be negotiated successfully for healthy psychological development to proceed. Unresolved Oedipus complex competitiveness issues can lead to lifelong neuroses manifesting in various ways related to an overdetermined relationship to competition.
Mahatma Gandhi
Gandhi speaks of egoistic competition. For him, such qualities glorified and/or left unbridled, can lead to violence, conflict, discord and destructiveness. For Gandhi, competition comes from the ego, and therefore society must be based on mutual love, cooperation and sacrifice for the well-being of humanity. In the society desired by Gandhi, each individual will cooperate and serve for the welfare of others and people will share each other's joys, sorrows and achievements as a norm of a social life. For him, in a non-violent society, competition does not have a place and this should become realized with more people making the personal choice to have fewer tendencies toward egoism and selfishness.
Politics
Competition is also found in politics. In democracies, a free and fair election is an electoral competition for an elected office. In other words, two or more candidates strive and compete against one another to attain a position of power. The winner gains the seat of the elected office for a predefined period of time, towards the end of which another election is usually held to determine the next holder of the office.
In addition, there is inevitable competition inside a government. Because several offices are appointed, potential candidates compete against the others in order to gain the particular office. Departments may also compete for a limited amount of resources, such as for funding. Finally, where there are party systems, elected leaders of different parties will ultimately compete against the other parties for laws, funding and power.
Finally, competition also exists between governments. Each country or nationality struggles for world dominance, power, or military strength. For example, the United States competed against the Soviet Union in the Cold War for world power, and the two also struggled over the different types of government (in these cases representative democracy and communism). The result of this type of competition often leads to worldwide tensions, and may sometimes erupt into warfare.
Sports
While some sports and games (such as fishing or hiking) have been viewed as primarily recreational, most sports are considered competitive. The majority involve competition between two or more persons (sometimes using horses or cars). For example, in a game of basketball, two teams compete against one another to determine who can score the most points. When there is no set reward for the winning team, many players gain a sense of pride. In addition, extrinsic rewards may also be given. Athletes, besides competing against other humans, also compete against nature in sports such as whitewater kayaking or mountaineering, where the goal is to reach a destination, with only natural barriers impeding the process. A regularly scheduled (for instance annual) competition meant to determine the "best" competitor of that cycle is called a championship.
Competitive sports are governed by codified rules agreed upon by the participants. Violating these rules is considered to be unfair competition. Thus, sports provide artificial (not natural) competition; for example, competing for control of a ball, or defending territory on a playing field is not an innate biological factor in humans. Athletes in sports such as gymnastics and competitive diving compete against each other in order to come closest to a conceptual ideal of a perfect performance, which incorporates measurable criteria and standards which are translated into numerical ratings and scores by appointed judges.
Sports competition is generally broken down into three categories: individual sports, such as archery; dual sports, such as doubles tennis; and team sports, such as cricket or football. While most sports competitions are recreational, there exist several major and minor professional sports leagues throughout the world. The Olympic Games, held every four years, are usually regarded as the international pinnacle of sports competition.
Trade
Competition is also found in trade. For nations, as well as firms, it is important to understand trade dynamics in order to market their goods and services effectively in international markets. The balance of trade can be considered a crude but widely used proxy for international competitiveness across levels: country, industry or even firm. Research suggests that exporting firms have a higher survival rate and achieve greater employment growth than non-exporters.
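Since the balance of trade is described above as a proxy for competitiveness, it can be written down concretely. Note that these are the generic textbook definitions, not the TCI formula, which the text does not give:

```latex
BT = X - M, \qquad \text{normalized form: } \frac{X - M}{X + M} \in [-1, 1]
```

where $X$ denotes a country's (or industry's, or firm's) exports and $M$ its imports; the normalized form allows comparison across economies of very different sizes, with values near $+1$ indicating a strong net exporter.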
A simple, uniform measure of how high firms can climb may help improve the execution of strategies. International competitiveness can be measured on several criteria, but few are as flexible and versatile for application across levels as the Trade Competitiveness Index (TCI).
Hypercompetitiveness
The tendency toward extreme, unhealthy competition has been termed hypercompetitiveness. This concept originated in Karen Horney's theories on neurosis; specifically, the highly aggressive personality type which is characterized as "moving against people". In her view, some people have a need to compete and win at all costs as a means of maintaining their self-worth. These individuals are likely to turn any activity into a competition, and they will feel threatened if they find themselves losing. Researchers have found that men and women who score high on the trait of hypercompetitiveness are more narcissistic and less psychologically healthy than those who score low on the trait. Hypercompetitive individuals generally believe that winning is the only thing that matters.
Consequences
Competition can have both beneficial and detrimental effects. Many evolutionary biologists view inter-species and intra-species competition as the driving force of adaptation, and ultimately of evolution. However, some biologists disagree, citing competition as a driving force only on a small scale, and citing the larger scale drivers of evolution to be abiotic factors (termed 'Room to Roam'). Richard Dawkins prefers to think of evolution in terms of competition between single genes, which have the welfare of the organism 'in mind' only insofar as that welfare furthers their own selfish drives for replication (termed the 'selfish gene').
Some social Darwinists claim that competition also serves as a mechanism for determining the best-suited group; politically, economically and ecologically. Positively, competition may serve as a form of recreation or a challenge provided that it is non-hostile. On the negative side, competition can cause injury and loss to the organisms involved, and drain valuable resources and energy. In the human species competition can be expensive on many levels, not only in lives lost to war, physical injuries, and damaged psychological well-being, but also in the health effects from everyday civilian life caused by work stress, long work hours, abusive working relationships, and poor working conditions that detract from the enjoyment of life, even as such competition results in financial gain for the owners.
See also
Academic achievement
Arms race
Asymmetric competition
Biological interaction
Brinkmanship
Competition regulator
Competition
Competitor analysis
Conflict of interest
Conflict theories
Cooperation
Dozens (game)
Ecological model of competition
Economic mobility
Free market
Gaming the system
Identity performance
Monopolistic competition
Non-zero-sum game
Opportunism
Planned economy
Prisoner's dilemma
Security dilemma
Sharing
Social mobility
Student competitions
Win-win game
Winning streak
Zero-profit condition
Zero-sum
References
External links
Human behavior
Human activities

Mimer SQL is a proprietary SQL-based relational database management system produced by the Swedish company Mimer Information Technology AB (Mimer AB), formerly known as Upright Database Technology AB. It was originally developed as a research project at Uppsala University, Uppsala, Sweden in the 1970s before being developed into a commercial product.
The database has been deployed in a wide range of application situations, including the National Health Service Pulse blood transfusion service in the UK, Volvo Cars production line in Sweden and automotive dealers in Australia. It has sometimes been one of the limited options available in realtime critical applications and resource restricted situations such as mobile devices.
History
Mimer SQL originated in a project at the ITC service center supporting Uppsala University and some other institutions, intended to leverage the relational database capabilities proposed by Codd and others. The initial release in about 1975 was designated RAPID and was written in IBM assembler language. The name was changed to Mimer in 1977 to avoid a trademark issue. Other universities were interested in the project on a number of machine architectures, and Mimer was rewritten in Fortran to achieve portability. Further models were developed for Mimer, with Mimer/QL implementing the QUEL query language.
The emergence of SQL in the 1980s as the standard query language led Mimer's developers to adopt it, and the product became Mimer SQL.
In 1984 Mimer was transferred to the newly established company Mimer Information Systems.
Versions
The Mimer SQL database server is currently supported on the main platforms of Windows, macOS, Linux, and OpenVMS (Itanium and x86-64). Previous versions of the database engine were supported on other operating systems, including Solaris, AIX, HP-UX, Tru64, SCO and DNIX. Versions of Mimer SQL are available for download and are free for development.
The Enterprise product is a standards-based SQL database server built upon the Mimer SQL Experience database server. This product is highly configurable: components can be added, removed or replaced in the foundation product to derive a product suitable for embedded, real-time or small-footprint applications.
The Mimer SQL Realtime database server is a replacement database engine specifically designed for applications where real-time aspects are paramount. This is sometimes marketed as the Automotive approach. For resource limited environments the Mimer SQL Mobile database server is a replacement runtime environment without a SQL compiler. This is used for portable and certain custom devices and is termed the Mobile Approach.
Custom embedded approaches can be applied to multiple hardware and operating system combinations.
These options enable Mimer SQL to be deployed to a wide variety of additional target platforms, such as Android, and real-time operating systems including VxWorks.
The database is available in real-time, embedded and automotive specialist versions requiring no maintenance, intended to make the product suitable for mission-critical automotive, process automation and telecommunication systems.
Features
Mimer SQL provides support for multiple database application programming interfaces (APIs): ODBC, JDBC, ADO.NET, Embedded SQL (C/C++, Cobol and Fortran), Module SQL (C/C++, Cobol, Fortran and Pascal), and the native API's Mimer SQL C API, Mimer SQL Real-Time API, and Mimer SQL Micro C API.
MimerPy is an adapter for Mimer SQL in Python.
The Mimer Provider Manager is an ADO.NET provider dispatcher that uses different plugins to access different underlying ADO.NET providers. The Mimer Provider Manager makes it possible to write database independent ADO.NET applications.
Mimer SQL mainly uses optimistic concurrency control (OCC) to manage concurrent transactions.
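The general idea behind optimistic concurrency control can be sketched in a few lines. The following is a minimal illustration of the classic read-version/validate-on-commit pattern that OCC denotes in general; it is not Mimer SQL's actual implementation, and all names in it are invented for the example:

```python
# Minimal sketch of optimistic concurrency control (OCC).
# Transactions read freely without locking; conflicts are detected
# at commit time by comparing version stamps.

class ConflictError(Exception):
    pass

class Record:
    def __init__(self, value):
        self.value = value
        self.version = 0  # bumped on every successful commit

def read(record):
    # A transaction reads the value together with its version stamp.
    return record.value, record.version

def commit(record, new_value, version_seen):
    # Validation phase: the write succeeds only if nobody else
    # committed since this transaction read; otherwise it must retry.
    if record.version != version_seen:
        raise ConflictError("record changed since read; retry")
    record.value = new_value
    record.version += 1

r = Record(100)
value, stamp = read(r)        # transaction A reads (100, version 0)
commit(r, value + 50, stamp)  # A commits: value 150, version 1

conflict_detected = False
try:
    commit(r, 999, 0)         # transaction B still holds the stale stamp 0
except ConflictError:
    conflict_detected = True  # B is told to retry instead of blocking
```

The appeal of the optimistic approach is that readers never block writers and vice versa, at the cost of occasional aborted-and-retried transactions under contention.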
Mimer SQL is assigned port 1360 in the Internet Assigned Numbers Authority (IANA) registry.
Etymology
The name "Mimer" is taken from Norse mythology, where Mimer was the giant guarding the well of wisdom, also known as "Mímisbrunnr". Metaphorically, this is what a database system does in managing data.
See also
Werner Schneider the professor who started the development section for the relational database that became Mimer SQL (Swedish article)
References
External links
Mimer SQL
Official developer website
Proprietary database management systems
Relational database management systems
Real-time databases
Embedded databases
OpenVMS software

Menarche is the first menstrual cycle, or first menstrual bleeding, in female humans. From both social and medical perspectives, it is often considered the central event of female puberty, as it signals the possibility of fertility. Girls experience menarche at different ages. Having menarche occur between the ages of 9–14 in the West is considered normal.
The timing of menarche is influenced by female biology, as well as genetic, environmental factors, and nutritional factors. The mean age of menarche has declined over the last century, but the magnitude of the decline and the factors responsible remain subjects of contention. The worldwide average age of menarche is very difficult to estimate accurately, and it varies significantly by geographical region, race, ethnicity and other characteristics, and occurs mostly during a span of ages from 8 to 16, with a small percentage of girls having menarche by age 10, and the vast majority having it by the time they were 14.
There is a later age of onset in Asian populations compared to the West, but it too is changing with time. For example, a Korean study in 2011 showed an overall average age of 12.7, with around 20% before age 12, and more than 90% by age 14. A Chinese study from 2014 published in Acta Paediatrica showed similar results (an overall average of age 12.8 in 2005, down to age 12.3 in 2014) and a similar trend over time, but also similar findings about ethnic, cultural, and environmental effects. The average age of menarche was about 12.7 years in Canada in 2001, and 12.9 in the United Kingdom. A study of girls in Istanbul, Turkey, in 2011 found the median age at menarche to be 12.7 years. In the United States, an analysis of 10,590 women aged 15–44 taken from the 2013–2017 round of the CDC's National Survey of Family Growth found a median age of 11.9 years (down from 12.1 in 1995), with a mean of 12.5 years (down from 12.6).
Physiology
Puberty
Menarche is the culmination of a series of physiological and anatomic processes of puberty:
Attainment of a sufficient body fat percentage (typically around 17% of total body mass).
Disinhibition of the GnRH pulse generator in the arcuate nucleus of the hypothalamus.
Secretion of estrogen by the ovaries in response to pituitary hormones.
Over an interval of about 2 to 3 years, estrogen stimulates growth of the uterus (as well as height growth, breast growth, widening of the pelvis, and increased regional adipose tissue).
Estrogen stimulates growth and vascularity of the endometrium, the lining of the uterus.
Fluctuations of hormone levels can result in changes of adequacy of blood supply to parts of the endometrium.
Death of some of the endometrial tissue from these hormone or blood supply fluctuations leads to deciduation, a sloughing of part of the lining with some blood flow from the vagina.
No specific hormonal signal for menarche is known; menarche as a discrete event is thought to be the relatively chance result of the gradual thickening of the endometrium induced by rising but fluctuating pubertal estrogen.
The menstruum, or flow, consists of a combination of fresh and clotted blood with endometrial tissue. The initial flow of menarche is usually brighter than mature menstrual flow. It is often scanty in amount and may be very brief, even a single instance of "spotting". Like other menses, menarche may be accompanied by abdominal cramping.
Relation to fertility
In most girls, menarche does not mean that ovulation has occurred. In post-menarchal girls, about 80% of the cycles were anovulatory in the first year after menarche, 50% in the third and 10% in the sixth year. Regular ovulation is usually indicated by predictable and consistent intervals between menses, and predictable and consistent patterns of flow (e.g., heaviness or cramping). Continuing ovulation typically requires a body fat percentage of at least 22%. An anthropological term for this state of potential fertility is nubility.
On the other hand, not every girl follows the typical pattern, and some girls ovulate before the first menstruation. Although unlikely, it is possible for a girl who has engaged in sexual intercourse shortly before her menarche to conceive and become pregnant, which would delay her menarche until after the end of the pregnancy. This goes against the widely held assumption that a woman cannot become pregnant until after menarche. A young age at menarche is not correlated with a young age at first sexual intercourse.
Onset
When menarche occurs, it confirms that the girl has had a gradual estrogen-induced growth of the uterus, especially the endometrium, and that the "outflow tract" from the uterus, through the cervix to the vagina, is open.
When a woman experiences menarche, the blood flow can vary from a slow and spotty discharge to a consistent flow lasting 3–7 days. The color of the blood can range from brown to bright red; this is normal. Some women have light periods while others have heavy ones; no two women will have an identical experience.
In very rare instances, menarche may occur at an unusually early age, preceding thelarche and other signs of puberty. This is termed isolated premature menarche, but other causes of vaginal bleeding must be investigated and excluded. Growth is usually normal. Isolated premature menarche is rarely the first manifestation of precocious puberty.
When menarche has failed to occur for more than three years after thelarche, or beyond 16 years of age, the delay is referred to as primary amenorrhea.
Timing
Chronic illness
Certain systemic or chronic illnesses can delay menarche, such as undiagnosed and untreated celiac disease (which often occurs without gastrointestinal symptoms), asthma, diabetes mellitus type 1, cystic fibrosis and inflammatory diseases, among others. In some cases, because biochemical tests are not always discriminatory, underlying pathologies are not identified and the girl is classified as constitutional growth delay. Short stature, delayed growth in height and weight, and/or delayed menarche may be the only clinical manifestations of celiac disease, in the absence of any other symptoms. According to a review article, there may also be an association between early age at menarche and breast cancer risk.
Conditions and disease states
Studies have been conducted to observe the association of the timing of menarche with various conditions and diseases. Some studies have shown that there may be an association between early or late-age menarche and cardiovascular disease, although the mechanism of the association is not well understood. A systematic review has concluded that early age at menarche is also a risk factor for the insulin resistance condition. There is conflicting evidence regarding the association between obesity and timing of menarche; a meta-analysis and systematic review has determined that more studies must be conducted to make any definitive conclusions about this association.
Effects of stress and social environment
Aspects of family structure and function reported to be independently associated with earlier menarche (antenatal and early childhood) include:
Being non-white (in the UK)
Having experienced pre-eclampsia in the womb
Being a singleton, i.e. not a twin, triplet, etc.
Low birthweight
Not having been breast-fed
Previous exposure to smoking
High-conflict family relationships
Increased incidence of childhood obesity
Lack of exercise in childhood
Other research has focused on the effect of childhood stress on timing of puberty, especially female. Stress is a vague term and studies have examined conditions ranging from family tensions or conflict to wartime refugee status with threat to physical survival. The more dire social conditions have been found to be associated with delay of maturation, an effect that may be compounded by dietary inadequacy. There is more uncertainty and mixed evidence as to whether milder degrees of stress or early-life under-nutrition can accelerate puberty in girls as would be predicted by life history theory and demonstrated in many other mammals.
The understanding of these environmental effects is incomplete and the following observations and cautions are relevant:
Mechanisms of these social effects are unknown, though a variety of physiological processes, including pheromones, have been suggested based on animal research.
Most of these "effects" are statistical associations revealed by epidemiologic surveys. Statistical associations are not necessarily causal, and a variety of secondary variables and alternative explanations may be intervening. Effects of such small size can never be confirmed or refuted for any individual child.
Despite the small magnitude of effect, interpretations of the data are politically controversial because of the ease with which this type of research can be used for political advocacy. Accusations of bias based on political agenda sometimes accompany scientific criticism.
Correlation does not imply causation. While correlation can be objectively measured, causation is statistically inferred. Some suggest that childhood stress is caused by precocious puberty recognized later, rather than being the cause of it.
Changes in time of average age
There were few systematic studies of timing of menarche before the latter half of the 20th century. Most older estimates of the average timing of menarche were based on observation of a small homogeneous population not necessarily representative of the larger population, or on recall by adult women, which is also susceptible to various forms of error. Most sources agree that the average age of menarche in girls in modern societies has declined, though the reasons and the degree remain subjects of controversy. From the sixth to the fifteenth centuries in Europe, most women reached menarche on average at about 14, between the ages of 12 and 15. The average age of menarche dropped from 14-15 years in the nineteenth century to 12-13 years in the present, but it seems that girls in the nineteenth century had a later age at menarche than girls in earlier centuries. A large North American survey reported only a 2–3 month decline from the mid-1970s to the mid-1990s. A 2011 study found that each 1 kg/m2 increase in childhood body-mass index (BMI) can be expected to result in a 6.5% higher absolute risk of early menarche (before age 12 years).
This long-term decline in the age of menarche is called the secular trend.
Fewer than 10% of U.S. girls start to menstruate before 11 years of age, and 90% of all U.S. girls are menstruating by 13.8 years of age, with a median age of 12.4 years. This age at menarche is not much different (0.3 years earlier) than that reported for U.S. girls in 1973. Age at menarche for non-Hispanic black girls was significantly earlier than that of white girls at 10%, 25%, and 50% of those who had attained menarche, whereas non-white Mexican American girls were only slightly earlier than the white girls at 25%.
Culture
Menstruation is a cultural as well as scientific phenomenon as many societies have specific rituals and cultural norms associated with it. These rituals typically begin at menarche and some are enacted during each menstruation cycle. The rituals are important in determining a status change for girls. Upon menarche and completion of the ritual, they have become a woman as defined by their culture.
For young women in many cultures, the first menstruation is a marker that signifies a change in status. Post-menarche, the young woman enters a stage called maidenhood, the stage between menarche and marriage. There are cultures that have in past centuries, and in present, practiced rites of passage for a girl experiencing menarche. Canadian psychological researcher Niva Piran claims that menarche or the perceived average age of puberty is used in many cultures to separate girls from activity with boys, and to begin transition into womanhood.
Celebratory ceremonies
In some cultures, a party, or celebration is thrown to show the girl's transition to womanhood. This party is similar to the quinceañera in Latin America, except that a specific age marks the transition rather than menarche. In Morocco, the girl is thrown a celebration. All of her family members are invited and the girl is showered with money and gifts.
When a Japanese girl had her first period, the family sometimes celebrated by eating red-colored rice and beans (sekihan). The red of sekihan is not related to the color of blood: in ancient Japan, all rice was red. Because rice was precious in ancient Japan (millet was the usual staple), it was eaten only during celebrations, and sekihan continues that ancient custom. The celebration was kept a secret from the extended family until the rice was served.
In some Indian communities, young women are given a special menarche ceremony called Ruthu Sadangu.
The Mescalero Apaches place high importance on their menarche ceremony and it is regarded as the most important ritual in their tribe. Each year, there is an eight-day event celebrating all of the girls who have menstruated in the past year. The days are split between feasting and private ceremonies reflecting on their new womanly status.
Rituals of learning
In Australia, the Aboriginals treat a girl to "love magic". She is taught the ways of womanhood by the other women in her tribe. Her mother builds her a menstruation hut to which she confines herself for the remainder of her menses. The hut is burned and she is bathed in the river at the end of menstruation. When she returns to the village, she is paired with a man who will be her husband.
In the United States, some public schools have a sex education program that teaches girls about menstruation and what to expect at the onset of menarche (often this takes place during the fourth grade). Historically menstruation has been a social taboo and girls were taught about menarche and menstruation by their mothers or a female role model. Then, and to an extent now, menstruation was a private matter and a girl's menarche was not a community phenomenon.
Rituals of cleansing or purification
The Ulithi tribe of Micronesia call a girl's menarche kufar. She goes to a menstrual house, where the women bathe her and recite spells. She will have to return to the menstruation hut every time she menstruates. Her parents build her a private hut that she will live in until she is married.
In Sri Lanka, an astrologer is contacted to study the alignment of stars when the girl experiences menarche because it is believed that her future can be predicted. The women of the family then gather in her home and scrub her in a ritual bathing ceremony. Her family then throws a familial party at which the girl wears white and may receive gifts.
In Ethiopia, Beta Jewish women were separated from male society and sent to menstruation huts during menarche and every menstruation following as the blood associated with menstruation in the Beta Jewish culture was believed to be impure. The Beta Jews built their villages surrounding and near bodies of water specifically for their women to have a place to clean themselves. The menstruation huts were built close to these bodies of water.
Rituals of transformation and scarification
In Nigeria, the Tiv ethnic group cut four lines into the abdomen of their girls during menarche. The lines are supposed to represent fertility.
Rituals of strength
The Navajo have a celebration called kinaalda (kinn-all-duh). Girls are expected to demonstrate their strength through footraces. The girls make a cornmeal pudding for the tribe to taste. The girls who experience menarche wear special clothes and style their hair like the Navajo goddess "Changing Woman".
The Nuu-chah-nulth (also known as the Nootka) believe that physical endurance is the most important quality in young women. At menarche the girl is taken out to sea and left there to swim back.
Movies and TV
In the horror movie Carrie (1976), an adaptation of the Stephen King novel of the same name, protagonist Carrie White experiences her first period as she showers after the school gym class. Unaware of what is happening to her, she panics and pleads for help, but the other girls respond by bullying her. Carrie's first period unleashes her violent powers and is central to her dangerous and out of control transformation. This theme is common to horror movies, another notable example being the Canadian horror movie Ginger Snaps (2000), where the protagonist's first period is central to her gradual transformation into a werewolf. The theme of transformation around the menarche is similarly present in Turning Red (2022), although the film also explores other aspects of puberty as a whole and the protagonist does not actually start her first period. Girls experiencing their first period is part of many movies, particularly ones that include coming-of-age plot lines, such as The Blue Lagoon (1980), The Company of Wolves (1984), An Angel at My Table (1990), My Girl (1991), Return to the Blue Lagoon (1991), Eve’s Bayou (1997), and A Walk on the Moon (1999). Menarche is also discussed in an episode of the animated series Baymax! (2022) in which the eponymous healthcare robot helps a girl deal with her first period.
See also
Puberty
Adrenarche
Gonadarche
Thelarche
Pubarche
Spermarche
Menopause, the equivalent opposite change at the end of the child-bearing years
Andropause
Delayed puberty
Lina Medina, who had her menarche at age 8 months and is the youngest mother in history
References
Further reading
External links
For mothers supporting their daughters as they come of age
Discusses some of the social influences
Developmental biology
Developmental stages
Menstrual cycle
Pediatrics
Puberty
Sexuality and age
Human female endocrine system
Joseph Jean-Pierre Marc Garneau (born February 23, 1949) is a retired Canadian Member of Parliament, retired Royal Canadian Navy officer and former astronaut who served as a Cabinet minister from 2015 to 2021. A member of the Liberal Party, Garneau was the minister of foreign affairs from January to October 2021 and minister of transport from November 2015 to January 2021. He was an MP in Westmount, Montreal for 15 years.
Prior to entering politics, Garneau served as a naval officer and was selected as an astronaut as part of the 1983 NRC Group. On October 5, 1984, he became the first Canadian in space as part of STS-41-G and served on two subsequent Space Shuttle missions: STS-77 and STS-97.
Early life
Joseph Jean-Pierre Marc Garneau was born on February 23, 1949, in Quebec City, Quebec, Canada. He attended primary and secondary schools in Quebec City and Saint-Jean-sur-Richelieu. He has a brother, Philippe Garneau.
Education and military career
Garneau graduated from the Royal Military College of Canada in 1970 with a bachelor of science in engineering physics and began his career in the Canadian Forces Maritime Command.
In 1973 he received a PhD in electrical engineering from the Imperial College of Science and Technology in London, England. His thesis, entitled "The Perception of Facial Images", used the Photofit analogue computer to discriminate facial features.
In 1974, Garneau served as a naval combat systems engineer aboard .
From 1982 to 1983, he attended the Canadian Forces Command and Staff College in Toronto. While there, he was promoted to the rank of commander and was transferred to Ottawa in 1983. In January 1986, he was promoted to captain(N). Garneau retired from the Canadian Forces in 1989.
Space career
Garneau was one of the first six Canadian astronauts, seconded in 1984 to the new Canadian Astronaut Program (CAP); he was chosen from over 4,000 applicants and was the only military officer among the six. On October 5, 1984, he became the first Canadian in outer space.
Garneau flew on the Space Shuttle Challenger, STS-41-G, from October 5 to 13, 1984, as a payload specialist. After leaving the Canadian Forces in 1989, he became deputy director of the CAP. In 1992–93, he underwent further training to become a mission specialist. He worked as CAPCOM for a number of shuttle flights and flew two further missions himself: STS-77 (May 19 to 29, 1996) and STS-97 (to the ISS, November 30 to December 11, 2000). He has logged over 677 hours in space.
On February 1, 2001, Garneau was appointed executive vice-president of the Canadian Space Agency (CSA). On September 28, 2001, the government announced his appointment as president of the CSA, replacing Mac Evans in that position on November 22, 2001.
Political career
Garneau served as the member of Parliament (MP) for the Montreal riding of Notre-Dame-de-Grâce—Westmount, and its predecessor Westmount—Ville-Marie, beginning with the 2008 federal election, which he won by over 9,000 votes. He was re-elected to the House of Commons in the 2011 federal election by 642 votes, and in the 2015 federal election with a majority of over 18,000. He had previously stood unsuccessfully in the riding of Vaudreuil—Soulanges at the 2006 federal election.
On November 28, 2012, Garneau announced his candidacy for the leadership of the Liberal Party to be decided in April 2013. On March 13, 2013, Garneau formally withdrew his bid for the party leadership. On November 4, 2015, Garneau was appointed as Minister of Transport in the 29th Canadian Ministry. He became Minister of Foreign Affairs on January 12, 2021 after a cabinet reshuffle.
Initial steps (2006–2008)
Garneau resigned as the president of the Canadian Space Agency to run for the Liberal Party of Canada in the 2006 federal election in the riding of Vaudreuil—Soulanges, which was then held by Meili Faille of the Bloc Québécois. The Liberal Party's support dropped off considerably in Quebec after the Sponsorship scandal and though considered a star candidate, Garneau lost to Faille by over nine thousand votes.
In the 2006 Liberal Party leadership election Garneau announced his support for perceived front-runner Michael Ignatieff, who lost to Stéphane Dion on the final ballot. With the resignation of Liberal MP Jean Lapierre in 2007, Garneau expressed interest in being the party's candidate in Lapierre's former riding of Outremont. Dion instead appointed Jocelyn Coulon as the party's candidate, who went on to be defeated by the New Democratic Party's Thomas Mulcair in the by-election.
In May 2007, Garneau filed nomination papers to be the party's candidate in Westmount—Ville-Marie, after former Liberal Party deputy leader Lucienne Robillard announced she would not be seeking re-election. However, a week after filing his nomination papers Dion announced that he had hand-picked a candidate for the riding. Garneau later withdrew his nomination papers and announced he no longer had an interest in politics. In October 2007, Garneau and Dion held a joint news conference where they announced that Garneau would be the Liberal Party candidate in Westmount—Ville-Marie. Robillard announced her resignation as Member of Parliament in January and a by-election was later scheduled for September 8, 2008. However, the by-election was cancelled during the campaign when Prime Minister Stephen Harper called a general election for October 14, 2008. Though some pundits predicted a close race between Garneau and NDP candidate Anne Lagacé-Dowson, Garneau went on to win the riding by over 9,000 votes.
Member of 40th Parliament
Garneau was a member of the Industry, Science and Technology committee of the 40th Parliament. He also served on the Canada-Japan interparliamentary group.
41st Parliament and leadership campaign
Garneau was narrowly re-elected in the 2011 election where he beat New Democratic Party candidate Joanne Corbeil. He was Liberal House leader and served from 2013 as Liberal foreign affairs critic. He was a candidate for interim leadership of the Liberal Party, but was ultimately defeated by Bob Rae. Garneau announced later that year that he was considering a bid for the permanent leadership of the party. In the summer of 2012, he announced that he was looking for a "dream team" to run his leadership bid and that he would only run if he could find the right people.
On November 21, 2012, Garneau was named his party's natural resources critic after David McGuinty resigned the post.
On November 28, 2012, Garneau announced his bid for the leadership of the Liberal Party, placing a heavy focus on the economy. While fellow leadership candidate Justin Trudeau was widely seen as the front-runner in the race, Garneau was thought to be his main challenger among the candidates. With his entrance into the leadership race he resigned his post as Liberal House leader, while remaining the party's critic for natural resources.
At the press conference announcing his candidacy Garneau ruled out any form of co-operation with the Green Party or New Democratic Party to help defeat the Conservative Party in the next election, which was proposed by leadership candidate Joyce Murray.
On January 30, 2013, Garneau was replaced as natural resources critic by Ted Hsu. Garneau had been serving in the position on an interim basis. On March 13, 2013 Garneau announced his withdrawal from the race, and threw his support to front-runner Justin Trudeau. On September 18, 2013, Garneau was named co-chair of the Liberal International Affairs Council of Advisors, providing advice on foreign and defence issues to Liberal Party of Canada leader Justin Trudeau.
Minister of Transport in the 42nd Parliament
In the 2015 elections held on October 19, 2015, Garneau was re-elected as MP in the newly created riding of Notre-Dame-de-Grâce—Westmount. Two weeks later, on November 4, 2015, Garneau was appointed the minister of transport by Prime Minister Justin Trudeau.
In May 2017, Garneau introduced an airline passenger bill of rights to standardize how passengers may be treated by airlines operating flights into and out of Canada. The legislation would create minimum compensation rates for overbooking, lost or damaged luggage, and bumping passengers off flights. It would also prohibit airlines from removing people who have purchased a ticket from a flight, and set standards for tarmac delays and for the treatment of passengers when flights are delayed or cancelled due to events within the airline's control or because of weather conditions.
In March 2019, after days of initially refusing to act following the crash of Ethiopian Airlines Flight 302, Garneau, who on 11 March had said he would board a 737 MAX 8 "without hesitation" in an apparent show of support for the Boeing Company, agreed on 13 March to ground all Boeing 737 MAX aircraft and prohibit them from flying in Canadian airspace. The Trump administration followed suit later that day. The decision reversed the ministry's earlier insistence that the plane was safe to fly, which had left Canada as one of only two nations still flying a substantial number of Boeing 737 MAX planes at the time.
Minister of Foreign Affairs in the 43rd Parliament
Garneau continued to serve as Minister of Transport after the election to the 43rd Parliament in October 2019. He remained at Transport through the early months of the COVID-19 pandemic, sharing responsibility for enforcing the Quarantine Act with the Minister of Health, Patty Hajdu; during this time he made many decisions affecting travellers in co-ordination with Hajdu.
Garneau then served as Minister of Foreign Affairs from January 12, 2021 until October 26, 2021. On January 12, 2021, following the resignation of Navdeep Bains as minister of innovation, science and industry, Prime Minister Justin Trudeau shuffled the Cabinet, with Garneau becoming Minister of Foreign Affairs and Omar Alghabra taking his place at Transport. Garneau was described as one of the most qualified and capable members of Cabinet.
Member of 44th Parliament and retirement
Following the cabinet reshuffle after the election in October 2021, Garneau was dropped from Cabinet on October 26, despite being re-elected to his seat in the House. Some speculated that Garneau was left out because of his age, that he was sacrificed in the name of gender parity, or that he had reportedly refused to be subservient to the Prime Minister's Office.
On March 8, 2023, Garneau announced that he would resign his seat and retire from politics, giving his farewell speech in the House of Commons the same day. The by-election to replace him took place on June 19, 2023; Liberal Anna Gainey succeeded him with a majority almost as large as Garneau's previous margins.
Awards and honours
Garneau was appointed an Officer of the Order of Canada in 1984 in recognition of his role as the first Canadian astronaut. He was promoted to the rank of Companion within the order in 2003 for his extensive work with Canada's space program.
He was awarded the Canadian Forces' Decoration for 12 years of honourable service with the Canadian Forces.
Two schools are named in his honour: Marc Garneau Collegiate Institute in Toronto and É.S.P. Marc-Garneau in Trenton, Ontario.
Garneau is the Honorary Captain of the Royal Canadian Sea Cadets. In addition, No. 599 Royal Canadian Air Cadets squadron is named in his honour.
Garneau was awarded the Key to the City of Ottawa by Marion Dewar, the Mayor of Ottawa, on December 10, 1984.
He was inducted into the International Space Hall of Fame in 1992.
Honorary degrees
Electoral record
See also
Astronaut-politician
List of Canadian university leaders
References
External links
Official website
Bio and mandate from the Prime Minister
Canadian Space Agency biography
NASA biography
Spacefacts biography of Marc Garneau
CBC Digital Archives – Marc Garneau: Canadian Space Pioneer
Marc Garneau
1949 births
Alumni of Imperial College London
Astronaut-politicians
Canadian astronauts
Canadian Roman Catholics
Ministers of transport of Canada
Chancellors of Carleton University
Companions of the Order of Canada
Liberal Party of Canada MPs
Living people
Members of the 29th Canadian Ministry
Members of the House of Commons of Canada from Quebec
Members of the King's Privy Council for Canada
People from Westmount, Quebec
Politicians from Quebec City
Presidents of the Canadian Space Agency
Royal Canadian Navy officers
Royal Military College of Canada alumni
Royal Military College Saint-Jean people
Systems engineers
Space Shuttle program astronauts
Ministers of foreign affairs of Canada
21st-century members of the House of Commons of Canada
"Engineering"
] | 2,673 | [
"Systems engineers",
"Systems engineering"
] |
Turbojet

The turbojet is an airbreathing jet engine which is typically used in aircraft. It consists of a gas turbine with a propelling nozzle. The gas turbine has an air inlet which includes inlet guide vanes, a compressor, a combustion chamber, and a turbine (that drives the compressor). The compressed air from the compressor is heated by burning fuel in the combustion chamber and then allowed to expand through the turbine. The turbine exhaust is then expanded in the propelling nozzle where it is accelerated to high speed to provide thrust. Two engineers, Frank Whittle in the United Kingdom and Hans von Ohain in Germany, developed the concept independently into practical engines during the late 1930s.
Turbojets have poor efficiency at low vehicle speeds, which limits their usefulness in vehicles other than aircraft. Turbojet engines have been used in isolated cases to power vehicles other than aircraft, typically for attempts on land speed records. Where vehicles are "turbine-powered", this is more commonly by use of a turboshaft engine, a development of the gas turbine engine where an additional turbine is used to drive a rotating output shaft. These are common in helicopters and hovercraft.
Turbojets were widely used for early supersonic fighters, up to and including many third generation fighters, with the MiG-25 being the latest turbojet-powered fighter developed. As most fighters spend little time traveling supersonically, fourth-generation fighters (as well as some late third-generation fighters like the F-111 and Hawker Siddeley Harrier) and subsequent designs are powered by the more efficient low-bypass turbofans and use afterburners to raise exhaust speed for bursts of supersonic travel. Turbojets were used on Concorde and the longer-range versions of the Tu-144 which were required to spend a long period travelling supersonically. Turbojets are still common in medium range cruise missiles, due to their high exhaust speed, small frontal area, and relative simplicity.
History
The first patent for using a gas turbine to power an aircraft was filed in 1921 by Frenchman Maxime Guillaume. His engine was to be an axial-flow turbojet, but was never constructed, as it would have required considerable advances over the state of the art in compressors.
In 1928, British RAF College Cranwell cadet Frank Whittle formally submitted his ideas for a turbojet to his superiors. In October 1929 he developed his ideas further. On 16 January 1930 in England, Whittle submitted his first patent (granted in 1932). The patent showed a two-stage axial compressor feeding a single-sided centrifugal compressor. Practical axial compressors were made possible by ideas from A.A. Griffith in a seminal paper in 1926 ("An Aerodynamic Theory of Turbine Design"). Whittle later concentrated on the simpler centrifugal compressor only, for a variety of practical reasons. The first turbojet to run was a Whittle engine, the liquid-fuelled Power Jets WU, on 12 April 1937. Whittle's team experienced near-panic during the first start attempts when the engine accelerated out of control to a relatively high speed despite the fuel supply being cut off. It was subsequently found that fuel had leaked into the combustion chamber during pre-start motoring checks and accumulated in pools, so the engine would not stop accelerating until all the leaked fuel had burned off. Whittle was unable to interest the government in his invention, and development continued at a slow pace.
In Germany, Hans von Ohain patented a similar engine in 1935. His design, an axial-flow engine, as opposed to Whittle's centrifugal flow engine, was eventually adopted by most manufacturers by the 1950s.
On 27 August 1939 the Heinkel He 178, powered by von Ohain's design, became the world's first aircraft to fly using the thrust from a turbojet engine. It was flown by test pilot Erich Warsitz. The Gloster E.28/39, (also referred to as the "Gloster Whittle", "Gloster Pioneer", or "Gloster G.40") made the first British jet-engined flight in 1941. It was designed to test the Whittle jet engine in flight, and led to the development of the Gloster Meteor.
The first two operational turbojet aircraft, the Messerschmitt Me 262 and then the Gloster Meteor, entered service in 1944, towards the end of World War II, the Me 262 in April and the Gloster Meteor in July. Only about 15 Meteors saw World War II action, but up to 1,400 Me 262s were produced, with 300 entering combat, delivering the first ground attacks and air combat victories by jet aircraft.
Air is drawn into the rotating compressor via the intake and is compressed to a higher pressure before entering the combustion chamber. Fuel is mixed with the compressed air and burns in the combustor. The combustion products leave the combustor and expand through the turbine where power is extracted to drive the compressor. The turbine exit gases still contain considerable energy that is converted in the propelling nozzle to a high speed jet.
The first turbojets used either a centrifugal compressor (as in the Heinkel HeS 3) or an axial compressor (as in the Junkers Jumo 004), which gave a smaller-diameter, although longer, engine. By replacing the propeller used on piston engines with a high-speed jet of exhaust, higher aircraft speeds were attainable.
One of the last applications for a turbojet engine was Concorde, which used the Olympus 593 engine. However, joint studies by Rolls-Royce and Snecma for a second-generation SST engine using the 593 core were done more than three years before Concorde entered service. They evaluated bypass engines with bypass ratios between 0.1 and 1.0 to give improved take-off and cruising performance. Nevertheless, the 593 met all the requirements of the Concorde programme. Estimates made in 1964 for the Concorde design at Mach 2.2 showed that the penalty in range for the supersonic airliner, in terms of miles per gallon, compared to subsonic airliners at Mach 0.85 (Boeing 707, DC-8) was relatively small. This is because the large increase in drag is largely compensated by an increase in powerplant efficiency: the engine efficiency is increased by the ram pressure rise, which adds to the compressor pressure rise, and propulsive efficiency rises as the aircraft speed approaches the exhaust jet speed.
Turbojet engines had a significant impact on commercial aviation. Aside from giving faster flight speeds turbojets had greater reliability than piston engines, with some models demonstrating dispatch reliability rating in excess of 99.9%. Pre-jet commercial aircraft were designed with as many as four engines in part because of concerns over in-flight failures. Overseas flight paths were plotted to keep planes within an hour of a landing field, lengthening flights. The increase in reliability that came with the turbojet enabled three- and two-engine designs, and more direct long-distance flights.
High-temperature alloys were a reverse salient: a key technology that held back progress on jet engines. Non-UK jet engines built in the 1930s and 1940s had to be overhauled every 10 or 20 hours due to creep failure and other types of damage to blades. British engines, however, utilised Nimonic alloys which allowed extended use without overhaul, with engines such as the Rolls-Royce Welland, the Rolls-Royce Derwent, and by 1949 the de Havilland Goblin being type-tested for 500 hours without maintenance. It was not until the 1950s that superalloy technology allowed other countries to produce economically practical engines.
Early designs
Early German turbojets had severe limitations on the amount of running they could do due to the lack of suitable high temperature materials for the turbines. British engines such as the Rolls-Royce Welland used better materials giving improved durability. The Welland was type-certified for 80 hours initially, later extended to 150 hours between overhauls, as a result of an extended 500-hour run being achieved in tests.
General Electric in the United States was in a good position to enter the jet engine business due to its experience with the high-temperature materials used in their turbosuperchargers during World War II.
Water injection was a common method used to increase thrust, usually during takeoff, in early turbojets that were thrust-limited by their allowable turbine entry temperature. The water increased thrust at the temperature limit, but prevented complete combustion, often leaving a very visible smoke trail.
Allowable turbine entry temperatures have increased steadily over time both with the introduction of superior alloys and coatings, and with the introduction and progressive effectiveness of blade cooling designs. On early engines, the turbine temperature limit had to be monitored, and avoided, by the pilot, typically during starting and at maximum thrust settings. Automatic temperature limiting was introduced to reduce pilot workload and reduce the likelihood of turbine damage due to over-temperature.
Components
Nose bullet
A nose bullet is a fairing at the front of a turbojet that guides air smoothly into the intake; it sits ahead of the accessory drive and houses the starter motor.
Air intake
An intake, or tube, is needed in front of the compressor to help direct the incoming air smoothly into the rotating compressor blades. Older engines had stationary vanes in front of the moving blades. These vanes also helped to direct the air onto the blades. The air flowing into a turbojet engine is always subsonic, regardless of the speed of the aircraft itself.
The intake has to supply air to the engine with an acceptably small variation in pressure (known as distortion) and having lost as little energy as possible on the way (known as pressure recovery). The ram pressure rise in the intake is the inlet's contribution to the propulsion system's overall pressure ratio and thermal efficiency.
The intake gains prominence at high speeds when it generates more compression than the compressor stage. Well-known examples are the Concorde and Lockheed SR-71 Blackbird propulsion systems where the intake and engine contributions to the total compression were 63%/8% at Mach 2 and 54%/17% at Mach 3+.
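As a rough illustration of how ram compression grows with flight speed, the ideal isentropic relation p0/p = (1 + (γ−1)/2·M²)^(γ/(γ−1)) can be evaluated at a few Mach numbers. This is a sketch only; real intakes lose some of this pressure rise to shocks and duct friction, as the Concorde and SR-71 figures above reflect.

```python
# Ideal (isentropic) ram pressure ratio at the intake as a function of
# flight Mach number: p0/p = (1 + (gamma-1)/2 * M^2)^(gamma/(gamma-1)).
GAMMA = 1.4  # ratio of specific heats for air

def ram_pressure_ratio(mach: float, gamma: float = GAMMA) -> float:
    return (1.0 + 0.5 * (gamma - 1.0) * mach ** 2) ** (gamma / (gamma - 1.0))

for m in (0.85, 2.0, 3.0):
    print(f"Mach {m}: ideal ram pressure ratio = {ram_pressure_ratio(m):.2f}")
```

At Mach 0.85 the ideal ram rise is only about 1.6:1, but by Mach 3 it approaches 37:1, which is why the intake dominates the total compression at high supersonic speeds.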
Intakes have ranged from "zero-length" on the Pratt & Whitney TF33 turbofan installation in the Lockheed C-141 Starlifter, to the twin long, intakes on the North American XB-70 Valkyrie, each feeding three engines with an intake airflow of about .
Compressor
The turbine rotates the compressor at high speed, adding energy to the airflow while squeezing (compressing) it into a smaller space. Compressing the air increases its pressure and temperature. The smaller the compressor, the faster it turns. The (large) GE90-115B fan rotates at about 2,500 RPM, while a small helicopter engine compressor rotates around 50,000 RPM.
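The temperature rise that accompanies compression can be sketched with the isentropic relation T2 = T1·PR^((γ−1)/γ), corrected by an isentropic efficiency. The efficiency value and pressure ratios below are illustrative assumptions, not data for any particular engine.

```python
GAMMA = 1.4  # ratio of specific heats for air

def compressor_exit_temp(t1_k: float, pressure_ratio: float,
                         isentropic_eff: float = 1.0) -> float:
    """Compressor exit temperature (K): isentropic relation corrected
    by an isentropic efficiency below 1.0 (assumed value)."""
    t2_ideal = t1_k * pressure_ratio ** ((GAMMA - 1.0) / GAMMA)
    return t1_k + (t2_ideal - t1_k) / isentropic_eff

# Early turbojet (about 5:1) vs a later design (about 15:1), 288 K intake air:
print(round(compressor_exit_temp(288.0, 5.0, 0.85)))   # ~486 K
print(round(compressor_exit_temp(288.0, 15.0, 0.85)))  # ~684 K
```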
Turbojets supply bleed air from the compressor to the aircraft for the operation of various sub-systems. Examples include the environmental control system, anti-icing, and fuel tank pressurization. The engine itself needs air at various pressures and flow rates to keep it running. This air comes from the compressor, and without it, the turbines would overheat, the lubricating oil would leak from the bearing cavities, the rotor thrust bearings would skid or be overloaded, and ice would form on the nose cone. The air from the compressor, called secondary air, is used for turbine cooling, bearing cavity sealing, anti-icing, and ensuring that the rotor axial load on its thrust bearing will not wear it out prematurely. Supplying bleed air to the aircraft decreases the efficiency of the engine because it has been compressed, but then does not contribute to producing thrust.
Compressor types used in turbojets were typically axial or centrifugal. Early turbojet compressors had low pressure ratios up to about 5:1. Aerodynamic improvements including splitting the compressor into two separately rotating parts, incorporating variable blade angles for entry guide vanes and stators, and bleeding air from the compressor enabled later turbojets to have overall pressure ratios of 15:1 or more. After leaving the compressor, the air enters the combustion chamber.
Combustion chamber
The burning process in the combustor is significantly different from that in a piston engine. In a piston engine, the burning gases are confined to a small volume, and as the fuel burns, the pressure increases. In a turbojet, the air and fuel mixture burn in the combustor and pass through to the turbine in a continuous flowing process with no pressure build-up. Instead, a small pressure loss occurs in the combustor.
The fuel-air mixture can only burn in slow-moving air, so an area of reverse flow is maintained by the fuel nozzles for the approximately stoichiometric burning in the primary zone. Further compressed air is introduced which completes the combustion process and reduces the temperature of the combustion products to a level which the turbine can accept. Less than 25% of the air is typically used for combustion, as an overall lean mixture is required to keep within the turbine temperature limits.
Turbine
Hot gases leaving the combustor expand through the turbine. Typical materials for turbines include inconel and Nimonic. The hottest turbine vanes and blades in an engine have internal cooling passages. Air from the compressor is passed through these to keep the metal temperature within limits. The remaining stages do not need cooling.
In the first stage, the turbine is largely an impulse turbine (similar to a pelton wheel) and rotates because of the impact of the hot gas stream. Later stages are convergent ducts that accelerate the gas. Energy is transferred into the shaft through momentum exchange in the opposite way to energy transfer in the compressor. The power developed by the turbine drives the compressor and accessories, like fuel, oil, and hydraulic pumps that are driven by the accessory gearbox.
Nozzle
After the turbine, the gases expand through the exhaust nozzle producing a high velocity jet. In a convergent nozzle, the ducting narrows progressively to a throat. The nozzle pressure ratio on a turbojet is high enough at higher thrust settings to cause the nozzle to choke.
If, however, a convergent-divergent de Laval nozzle is fitted, the divergent (increasing flow area) section allows the gases to reach supersonic velocity within the divergent section. Additional thrust is generated by the higher resulting exhaust velocity.
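The choking condition described above can be made concrete: a convergent nozzle chokes once the nozzle pressure ratio exceeds the critical value ((γ+1)/2)^(γ/(γ−1)). The gamma values below are standard textbook figures for cold air and hot exhaust gas, a sketch rather than engine-specific data.

```python
def critical_npr(gamma: float) -> float:
    """Critical nozzle pressure ratio above which a convergent nozzle chokes."""
    return ((gamma + 1.0) / 2.0) ** (gamma / (gamma - 1.0))

print(round(critical_npr(1.4), 3))   # cold air: ~1.893
print(round(critical_npr(1.33), 3))  # hot exhaust gas: ~1.851
```

Since turbojet nozzle pressure ratios at higher thrust settings comfortably exceed about 1.9, the nozzle runs choked over much of the operating envelope.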
Thrust augmentation
Thrust was most commonly increased in turbojets with water/methanol injection or afterburning. Some engines used both methods.
Liquid injection was tested on the Power Jets W.1 in 1941 initially using ammonia before changing to water and then water-methanol. A system to trial the technique in the Gloster E.28/39 was devised but never fitted.
Afterburner
An afterburner or "reheat jetpipe" is a combustion chamber added to reheat the turbine exhaust gases. The fuel consumption is very high, typically four times that of the main engine. Afterburners are used almost exclusively on supersonic aircraft, most being military aircraft. Two supersonic airliners, Concorde and the Tu-144, also used afterburners as does Scaled Composites White Knight, a carrier aircraft for the experimental SpaceShipOne suborbital spacecraft.
Reheat was flight-trialled in 1944 on the W.2/700 engines in a Gloster Meteor I.
Net thrust
The net thrust of a turbojet is given by:

F_N = (ṁ_air + ṁ_fuel) V_j − ṁ_air V

where:

ṁ_air = rate of flow of air through the engine
ṁ_fuel = rate of flow of fuel entering the engine
V_j = speed of the jet (the exhaust plume)
V = true airspeed of the aircraft
If the speed of the jet is equal to sonic velocity the nozzle is said to be "choked". If the nozzle is choked, the pressure at the nozzle exit plane is greater than atmospheric pressure, and extra terms must be added to the above equation to account for the pressure thrust.
The rate of flow of fuel entering the engine is very small compared with the rate of flow of air. If the contribution of fuel to the nozzle gross thrust is ignored, the net thrust is:

F_N = ṁ_air (V_j − V)
The speed of the jet must exceed the true airspeed of the aircraft if there is to be a net forward thrust on the airframe. The speed can be calculated thermodynamically based on adiabatic expansion.
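Combining the simplified thrust equation with the Froude propulsive efficiency, 2V/(V + V_j), shows the trade-off at work: for a fixed jet speed, net thrust falls as flight speed rises while propulsive efficiency improves. All numbers below are illustrative assumptions, not data for a real engine.

```python
def net_thrust(m_dot_air: float, v_jet: float, v_flight: float) -> float:
    """Net thrust (N), ignoring the fuel-flow contribution: F = m_dot*(Vj - V)."""
    return m_dot_air * (v_jet - v_flight)

def propulsive_efficiency(v_jet: float, v_flight: float) -> float:
    """Froude propulsive efficiency, 2*V / (V + Vj)."""
    return 2.0 * v_flight / (v_flight + v_jet)

M_DOT = 50.0   # kg/s of air (assumed)
V_JET = 600.0  # m/s jet speed (assumed)
for v in (100.0, 250.0):  # flight speeds, m/s
    print(v, net_thrust(M_DOT, V_JET, v),
          round(propulsive_efficiency(V_JET, v), 2))
```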
Cycle improvements
The operation of a turbojet is modelled approximately by the Brayton cycle.
The efficiency of a gas turbine is increased by raising the overall pressure ratio, requiring higher-temperature compressor materials, and raising the turbine entry temperature, requiring better turbine materials and/or improved vane/blade cooling. It is also increased by reducing the losses as the flow progresses from the intake to the propelling nozzle. These losses are quantified by compressor and turbine efficiencies and ducting pressure losses.
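The benefit of raising the overall pressure ratio (OPR) can be sketched with the ideal Brayton-cycle thermal efficiency, η = 1 − OPR^(−(γ−1)/γ). Component losses mean real engines fall well short of these ideal figures; the OPR values chosen are illustrative.

```python
GAMMA = 1.4  # ratio of specific heats for air

def brayton_efficiency(opr: float) -> float:
    """Ideal Brayton-cycle thermal efficiency for a given overall pressure ratio."""
    return 1.0 - opr ** (-(GAMMA - 1.0) / GAMMA)

for opr in (5.0, 15.0, 30.0):
    print(f"OPR {opr:>4}: ideal thermal efficiency = {brayton_efficiency(opr):.0%}")
```

Moving from an early 5:1 compressor to a 15:1 design raises the ideal efficiency from roughly 37% to 54%, which is the motivation behind the compressor improvements described above.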
When used in a turbojet application, where the output from the gas turbine is used in a propelling nozzle, raising the turbine temperature increases the jet velocity. At normal subsonic speeds this reduces the propulsive efficiency, giving an overall loss, as reflected by the higher fuel consumption, or SFC. However, for supersonic aircraft this can be beneficial, and is part of the reason why the Concorde employed turbojets.
Turbojets are complex systems, so newer models are being developed with advanced control systems that apply current knowledge from the field of automation in order to increase their safety and effectiveness.
See also
Air-start system
Exoskeletal engine
Jet car
Turbine engine failure
Turbojet development at the RAE
Variable cycle engine
References
Further reading
External links
Erich Warsitz, the world's first jet pilot: includes rare videos (Heinkel He 178) and audio commentaries
NASA reciprocating Engine Description: includes a software model
Possibilities of Jet Propulsion: 1941 survey with discussion of experimental designs of the 1920s and 1930s.
Whittle Power Jet Papers – Correspondence from the archives of Peterhouse, Cambridge College relating to the development of Whittle's reciprocating engine in Cambridge Digital Library
English inventions
Jet engines
Gas turbines
Research and development in Nazi Germany
1930s in science
"Technology"
] | 3,730 | [
"Jet engines",
"Engines",
"Gas turbines"
] |
Infrared cut-off filter

Infrared cut-off filters, sometimes called IR filters or heat-absorbing filters, are designed to reflect or block near-infrared wavelengths while passing visible light. They are often used in devices with bright incandescent light bulbs (such as slide and overhead projectors) to prevent unwanted heating. There are also filters which are used in solid state (CCD or CMOS) video cameras to block IR due to the high sensitivity of many camera sensors to near-infrared light. These filters typically have a blue hue to them as they also sometimes block some of the light from the longer red wavelengths.
IR transmitting/passing filters in photography
In contrast to the naming convention of optical filters where the name of the filter denotes the wavelengths that are blocked, and in line with the convention for air filters and oil filters, photographic filters are named for the color of light they pass. Thus a blue filter makes the picture look blue. A blue filter marginally allows more light in the blue wavelength to pass resulting in a slight shift of the color temperature of the photo to a cooler color. Because of this, the term "IR filters" is commonly used to refer to filters that pass infrared light while completely blocking other wavelengths. However, in some applications the term "IR filter" still can be used as a synonym of infrared cut-off filter.
Unlike the eye, sensors based on silicon (including CCDs and CMOS sensors) have sensitivities extending into the near-infrared. Such sensors may extend to 1000 nm. Digital cameras are usually equipped with IR-blocking filters to prevent unnatural-looking images. IR-transmitting (passing) filters, or removal of factory IR-blocking filters, are commonly used in infrared photography to pass infrared light and block visible and ultraviolet light. Such filters appear black to the eye, but are transparent when viewed with an IR sensitive device.
Since the dyes in processed film block various parts of visible light but are all fairly transparent to infrared, dark black sections of any processed film (where all visible colors are blocked) pass only infrared light and are commonly used, layering one piece over another if necessary for better blocking of visible light, as a cheap alternative to expensive glass-backed filters. Such filters can be used both over color camera lenses and to filter visible light from IR illumination sources. This filter stock is most easily obtained by having a commercial color negative film developed after being fully exposed to light. The leaders of 35 mm film are ideal for this, without wasting an entire roll of film (some communication with the lab may be necessary to ensure that all of the "black" negative film thus produced is returned, and that there is no need to print the color-negative results on photographic paper). In the same way, visually opaque "black" color-positive film emulsions mounted in cardboard, as for routine slide projection, provide inexpensive cardboard-mounted infrared filters. Film sizes larger than 35 mm may be handled in the same way for larger filter production.
For astrophotography, many photogenic targets (such as emission nebulae) are bright in the far red and near infrared. Removal of factory filters increases sensitivity to such targets, and may also increase sharpness, as such filters may also include anti-aliasing filters.
See also
UV filter
Cold mirror and Hot mirror
Interference filter
Optical filters
"Chemistry"
] | 677 | [
"Optical filters",
"Filters"
] |
Fermi gas

A Fermi gas is an idealized model, an ensemble of many non-interacting fermions. Fermions are particles that obey Fermi–Dirac statistics, like electrons, protons, and neutrons, and, in general, particles with half-integer spin. These statistics determine the energy distribution of fermions in a Fermi gas in thermal equilibrium, which is characterized by their number density, temperature, and the set of available energy states. The model is named after the Italian physicist Enrico Fermi.
This physical model is useful for certain systems with many fermions. Some key examples are the behaviour of charge carriers in a metal, nucleons in an atomic nucleus, neutrons in a neutron star, and electrons in a white dwarf.
Description
An ideal Fermi gas or free Fermi gas is a physical model assuming a collection of non-interacting fermions in a constant potential well. Fermions are elementary or composite particles with half-integer spin, thus follow Fermi–Dirac statistics. The equivalent model for integer spin particles is called the Bose gas (an ensemble of non-interacting bosons). At low enough particle number density and high temperature, both the Fermi gas and the Bose gas behave like a classical ideal gas.
By the Pauli exclusion principle, no quantum state can be occupied by more than one fermion with an identical set of quantum numbers. Thus a non-interacting Fermi gas, unlike a Bose gas, is limited to a small number of particles per energy level, and is prohibited from condensing into a Bose–Einstein condensate, although weakly interacting Fermi gases may form Cooper pairs and condense (a regime also known as the BCS–BEC crossover). The total energy of the Fermi gas at absolute zero is larger than the sum of the single-particle ground states because the Pauli principle implies a sort of interaction or pressure that keeps fermions separated and moving. For this reason, the pressure of a Fermi gas is non-zero even at zero temperature, in contrast to that of a classical ideal gas. For example, this so-called degeneracy pressure stabilizes a neutron star (a Fermi gas of neutrons) or a white dwarf star (a Fermi gas of electrons) against the inward pull of gravity, which would ostensibly collapse the star into a black hole. Only when a star is sufficiently massive to overcome the degeneracy pressure can it collapse into a singularity.
It is possible to define a Fermi temperature below which the gas can be considered degenerate (its pressure derives almost exclusively from the Pauli principle). This temperature depends on the mass of the fermions and the density of energy states.
The main assumption of the free electron model to describe the delocalized electrons in a metal can be derived from the Fermi gas. Since interactions are neglected due to screening effect, the problem of treating the equilibrium properties and dynamics of an ideal Fermi gas reduces to the study of the behaviour of single independent particles. In these systems the Fermi temperature is generally many thousands of kelvins, so in human applications the electron gas can be considered degenerate. The maximum energy of the fermions at zero temperature is called the Fermi energy. The Fermi energy surface in reciprocal space is known as the Fermi surface.
The nearly free electron model adapts the Fermi gas model to consider the crystal structure of metals and semiconductors, where electrons in a crystal lattice are substituted by Bloch electrons with a corresponding crystal momentum. As such, periodic systems are still relatively tractable and the model forms the starting point for more advanced theories that deal with interactions, e.g. using the perturbation theory.
1D uniform gas
The one-dimensional infinite square well of length L is a model for a one-dimensional box with the potential energy:
V(x) = 0 for 0 ≤ x ≤ L, and V(x) = +∞ otherwise.
It is a standard model-system in quantum mechanics for which the solution for a single particle is well known. Since the potential inside the box is uniform, this model is referred to as 1D uniform gas, even though the actual number density profile of the gas can have nodes and anti-nodes when the total number of particles is small.
The levels are labelled by a single quantum number n and the energies are given by:
E_n = E_0 + (ħ²π²)/(2mL²) n²,  n = 1, 2, 3, …
where E_0 is the zero-point energy (which can be chosen arbitrarily as a form of gauge fixing), m the mass of a single fermion, and ħ the reduced Planck constant.
For N fermions with spin-½ in the box, no more than two particles can have the same energy, i.e., two particles can have the energy E_1, two other particles can have energy E_2 and so forth. The two particles of the same energy have spin +½ (spin up) or −½ (spin down), leading to two states for each energy level. In the configuration for which the total energy is lowest (the ground state), all the energy levels up to n = N/2 are occupied and all the higher levels are empty.
Defining the reference for the Fermi energy to be E_0, the Fermi energy is therefore given by
E_F = (ħ²π²)/(2mL²) ⌊N/2⌋²,
where ⌊N/2⌋ is the floor function evaluated at N/2.
Thermodynamic limit
In the thermodynamic limit, the total number of particles N is so large that the quantum number n may be treated as a continuous variable. In this case, the overall number density profile in the box is indeed uniform.
The number of quantum states in the range n to n + Δn is:
ΔN = g Δn.
Without loss of generality, the zero-point energy is chosen to be zero, with the following result:
E_n = (ħ²π²)/(2mL²) n², so that n = (L/πħ) √(2mE).
Therefore, in the range:
E to E + ΔE,
the number of quantum states is:
ΔN = g (dn/dE) ΔE = (gL/2πħ) √(2m/E) ΔE.
Here, the degree of degeneracy is:
g = 2,
accounting for the two spin states of each level. And the density of states is:
g(E) = (1/L) (dN/dE) = (1/πħ) √(2m/E).
In modern literature, the above dN/dE is sometimes also called the "density of states". However, g(E) differs from dN/dE by a factor of the system's volume (which is L in this 1D case).
Based on the following formula:
N = ∫₀^{E_F} (dN/dE) dE,
the Fermi energy in the thermodynamic limit can be calculated to be:
E_F = (ħ²π²)/(2m) (N/2L)².
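The discrete and thermodynamic-limit expressions for the 1D Fermi energy can be checked against each other numerically. The sketch below is an illustration, not from the source; the wire length is an assumed value. For even N the two formulas coincide exactly, and for large odd N the difference is negligible:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg

def fermi_energy_1d_exact(n_particles, length, mass=M_E):
    """Discrete result: E_F = (hbar*pi)^2 / (2 m L^2) * floor(N/2)^2 (relative to E_0)."""
    n_f = n_particles // 2  # floor(N/2)
    return (HBAR * math.pi) ** 2 / (2 * mass * length ** 2) * n_f ** 2

def fermi_energy_1d_continuum(n_particles, length, mass=M_E):
    """Thermodynamic-limit result: E_F = (hbar*pi)^2 / (2 m) * (N / 2L)^2."""
    return (HBAR * math.pi) ** 2 / (2 * mass) * (n_particles / (2 * length)) ** 2

# For even N the two expressions agree exactly; for odd N they differ by O(1/N).
L = 1e-6  # a 1-micron "wire" (assumed value for illustration)
assert math.isclose(fermi_energy_1d_exact(10_000, L), fermi_energy_1d_continuum(10_000, L))
```

For odd N the relative difference is of order 1/N, which is the sense in which the floor function disappears in the thermodynamic limit.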
3D uniform gas
The three-dimensional isotropic and non-relativistic uniform Fermi gas case is known as the Fermi sphere.
A three-dimensional infinite square well (i.e. a cubical box with side length L) has the potential energy
V(x, y, z) = 0 for 0 ≤ x, y, z ≤ L, and V = +∞ otherwise.
The states are now labelled by three quantum numbers nx, ny, and nz. The single-particle energies are
E_{nx,ny,nz} = E_0 + (ħ²π²)/(2mL²) (nx² + ny² + nz²),
where nx, ny, nz are positive integers. In this case, multiple states have the same energy (known as degenerate energy levels); for example, E_{211} = E_{121} = E_{112}.
Thermodynamic limit
When the box contains N non-interacting fermions of spin-½, it is interesting to calculate the energy in the thermodynamic limit, where N is so large that the quantum numbers nx, ny, nz can be treated as continuous variables.
With the vector n = (nx, ny, nz), each quantum state corresponds to a point in 'n-space' with energy
E_n = E_0 + (ħ²π²)/(2mL²) |n|²,
with |n|² denoting the square of the usual Euclidean length, |n|² = nx² + ny² + nz².
The number of states with energy less than E_F + E_0 is equal to the number of states that lie within a sphere of radius n_F in the region of n-space where nx, ny, nz are positive. In the ground state this number equals the number of fermions in the system:
N = 2 × (1/8) × (4/3)π n_F³ = (π/3) n_F³.
The factor of two expresses the two spin states, and the factor of 1/8 expresses the fraction of the sphere that lies in the region where all n are positive.
The Fermi energy is given by
E_F = (ħ²π²)/(2mL²) n_F² = (ħ²π²)/(2mL²) (3N/π)^{2/3},
which results in a relationship between the Fermi energy and the number of particles per volume (when L² is replaced with V^{2/3}):
E_F = (ħ²/2m) (3π² N/V)^{2/3}.
This is also the energy of the highest-energy particle (the Nth particle), above the zero-point energy E_0. The Nth particle has an energy of
E_N = E_0 + E_F.
The total energy of a Fermi sphere of N fermions (which occupy all energy states within the Fermi sphere) is given by:
E_T = N E_0 + (3/5) N E_F.
Therefore, the average energy per particle is given by:
E_avg = E_0 + (3/5) E_F.
Density of states
For the 3D uniform Fermi gas, with fermions of spin-½, the number of particles as a function of the energy N(E) is obtained by substituting the Fermi energy by a variable energy (E − E_0):
N(E) = (V/3π²) (2m(E − E_0)/ħ²)^{3/2},
from which the density of states (number of energy states per energy per volume) g(E) can be obtained. It can be calculated by differentiating the number of particles with respect to the energy:
g(E) = (1/V) (dN/dE) = (1/2π²) (2m/ħ²)^{3/2} √(E − E_0).
This result provides an alternative way to calculate the total energy of a Fermi sphere of N fermions (which occupy all energy states within the Fermi sphere):
E_T = N E_0 + ∫₀^{E_F} E V g(E) dE = N E_0 + (3/5) N E_F,
taking E_0 = 0 inside the integral.
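As a numerical sanity check on the density of states (a sketch, not from the source; E_0 = 0 and the volume and Fermi energy are assumed illustrative values), integrating g(E) should reproduce both the particle count and the (3/5)E_F average energy per particle:

```python
import math

HBAR = 1.054571817e-34  # J*s
M_E = 9.1093837015e-31  # kg
EV = 1.602176634e-19    # J

def dos_3d(E, volume, mass=M_E):
    """V*g(E) = V/(2 pi^2) * (2m/hbar^2)^(3/2) * sqrt(E), with E_0 = 0."""
    return volume / (2 * math.pi ** 2) * (2 * mass / HBAR ** 2) ** 1.5 * math.sqrt(E)

V = 1e-6        # m^3, arbitrary illustrative volume
E_F = 7.0 * EV  # an assumed, typical metallic Fermi energy

# midpoint-rule integrals of V*g(E) and E*V*g(E) over [0, E_F]
steps = 100_000
dE = E_F / steps
N = sum(dos_3d((i + 0.5) * dE, V) * dE for i in range(steps))
U = sum((i + 0.5) * dE * dos_3d((i + 0.5) * dE, V) * dE for i in range(steps))

# average energy per particle is (3/5) E_F
assert math.isclose(U / N, 0.6 * E_F, rel_tol=1e-5)
```

The midpoint rule is crude but more than sufficient here, because the ratio U/N cancels most of the discretization error.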
Thermodynamic quantities
Degeneracy pressure
By using the first law of thermodynamics, this internal energy can be expressed as a pressure, that is
P = −(∂E_T/∂V)_N = (2/3) (E_T − N E_0)/V = (2/5) (N/V) E_F,
where this expression remains valid for temperatures much smaller than the Fermi temperature. This pressure is known as the degeneracy pressure. In this sense, systems composed of fermions are also referred to as degenerate matter.
Standard stars avoid collapse by balancing thermal pressure (plasma and radiation) against gravitational forces. At the end of a star's lifetime, when thermal processes are weaker, some stars may become white dwarfs, which are sustained against gravity only by electron degeneracy pressure. Using the Fermi gas as a model, it is possible to calculate the Chandrasekhar limit, i.e. the maximum mass any star may acquire (without significant thermally generated pressure) before collapsing into a black hole or a neutron star. The latter is a star mainly composed of neutrons, where collapse is avoided by neutron degeneracy pressure.
For the case of metals, the electron degeneracy pressure contributes to the compressibility or bulk modulus of the material.
Chemical potential
Assuming that the concentration of fermions does not change with temperature, the total chemical potential μ (Fermi level) of the three-dimensional ideal Fermi gas is related to the zero-temperature Fermi energy E_F by a Sommerfeld expansion (assuming k_B T ≪ E_F):
μ(T) = E_0 + E_F [1 − (π²/12)(k_B T/E_F)² − (π⁴/80)(k_B T/E_F)⁴ + ⋯],
where T is the temperature.
Hence, the internal chemical potential, μ − E_0, is approximately equal to the Fermi energy at temperatures that are much lower than the characteristic Fermi temperature TF. This characteristic temperature is of the order of 10⁵ K for a metal, hence at room temperature (300 K), the Fermi energy and internal chemical potential are essentially equivalent.
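The leading term of the expansion makes this claim easy to quantify. A minimal sketch (not from the source; the 7 eV Fermi energy is an assumed metallic value) shows that at room temperature the correction to μ is of order one part in 10⁵:

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
EV = 1.602176634e-19  # J per eV

def chemical_potential(e_fermi, temperature):
    """Leading-order Sommerfeld result (with E_0 = 0):
    mu = E_F * [1 - (pi^2/12) (k_B T / E_F)^2], valid only for T << T_F."""
    x = K_B * temperature / e_fermi
    return e_fermi * (1.0 - (math.pi ** 2 / 12.0) * x ** 2)

E_F = 7.0 * EV  # assumed, typical metallic Fermi energy
mu = chemical_potential(E_F, 300.0)
# at room temperature mu and E_F differ by roughly one part in 10^5
assert 0 < (E_F - mu) / E_F < 1e-4
```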
Typical values
Metals
Under the free electron model, the electrons in a metal can be considered to form a uniform Fermi gas. The number density of conduction electrons in metals ranges between approximately 10²⁸ and 10²⁹ electrons per m³, which is also the typical density of atoms in ordinary solid matter. This number density produces a Fermi energy of the order of:
E_F = (ħ²/2m_e) (3π² n)^{2/3} ≈ 2 to 10 eV,
where m_e is the electron rest mass. This Fermi energy corresponds to a Fermi temperature of the order of 10⁵ kelvins, much higher than the temperature of the Sun's surface. Any metal will boil before reaching this temperature under atmospheric pressure. Thus for any practical purpose, a metal can be considered as a Fermi gas at zero temperature as a first approximation (normal temperatures are small compared to TF).
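A quick order-of-magnitude check of these figures (a sketch; the copper-like conduction-electron density of 8.5 × 10²⁸ m⁻³ is an assumed value, not taken from this text):

```python
import math

HBAR = 1.054571817e-34  # J*s
M_E = 9.1093837015e-31  # kg
K_B = 1.380649e-23      # J/K
EV = 1.602176634e-19    # J

def fermi_energy_3d(density, mass=M_E):
    """E_F = hbar^2/(2m) * (3 pi^2 n)^(2/3) for number density n in m^-3."""
    return HBAR ** 2 / (2 * mass) * (3 * math.pi ** 2 * density) ** (2 / 3)

n = 8.5e28  # m^-3, copper-like conduction-electron density (assumed)
E_F = fermi_energy_3d(n)
T_F = E_F / K_B
print(f"E_F = {E_F / EV:.2f} eV, T_F = {T_F:.2e} K")  # a few eV, ~10^5 K
```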
White dwarfs
Stars known as white dwarfs have mass comparable to the Sun, but have about a hundredth of its radius. The high densities mean that the electrons are no longer bound to single nuclei and instead form a degenerate electron gas. The number density of electrons in a white dwarf is of the order of 10³⁶ electrons/m³. This means their Fermi energy is:
E_F = (ħ²/2m_e) (3π² × 10³⁶ m⁻³)^{2/3} ≈ 0.3 MeV.
Nucleus
Another typical example is that of the particles in a nucleus of an atom. The radius of the nucleus is roughly:
R = (1.25 × 10⁻¹⁵ m) × A^{1/3},
where A is the number of nucleons.
The number density of nucleons in a nucleus is therefore:
n = A / ((4/3)πR³) ≈ 1.2 × 10⁴⁴ m⁻³.
This density must be divided by two, because the Fermi energy only applies to fermions of the same type. The presence of neutrons does not affect the Fermi energy of the protons in the nucleus, and vice versa.
The Fermi energy of a nucleus is approximately:
E_F = (ħ²/2m_p) (3π² n/2)^{2/3} ≈ 30 MeV,
where m_p is the proton mass.
The radius of the nucleus admits deviations around the value mentioned above, so a typical value for the Fermi energy is usually given as 38 MeV.
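These numbers follow directly from the radius formula; because R scales as A^{1/3}, the nucleon density is independent of A. A sketch (the 1.25 fm radius parameter is the one quoted above):

```python
import math

HBAR = 1.054571817e-34   # J*s
M_P = 1.67262192369e-27  # proton mass, kg
MEV = 1.602176634e-13    # J per MeV

R0 = 1.25e-15  # m; nuclear radius R = R0 * A^(1/3), as above

# The A^(1/3) scaling makes the nucleon density independent of A:
# n = A / ((4/3) pi R^3) = 3 / (4 pi R0^3)
n_nucleons = 3.0 / (4.0 * math.pi * R0 ** 3)
n_protons = n_nucleons / 2.0  # only fermions of the same type share a Fermi sea

E_F = HBAR ** 2 / (2 * M_P) * (3 * math.pi ** 2 * n_protons) ** (2 / 3)
print(f"n = {n_nucleons:.2e} nucleons/m^3, E_F = {E_F / MEV:.0f} MeV")  # ~30 MeV
```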
Arbitrary-dimensional uniform gas
Density of states
Using a volume integral on d dimensions, the density of states (per unit volume) is:
g(E) = g_s (m/2πħ²)^{d/2} E^{d/2−1} / Γ(d/2).
The Fermi energy is obtained by looking for the number density of particles:
N/V_d = ∫₀^{E_F} g(E) dE,
to get:
E_F = (2πħ²/m) (N Γ(d/2 + 1) / (g_s V_d))^{2/d},
where V_d is the corresponding d-dimensional volume and g_s is the dimension of the internal Hilbert space. For the case of spin-½, every energy is twice-degenerate, so in this case g_s = 2.
A particular result is obtained for d = 2, where the density of states becomes a constant (does not depend on the energy):
g(E) = g_s m / (2πħ²).
Fermi gas in harmonic trap
The harmonic trap potential:
V(x, y, z) = ½ m ω² (x² + y² + z²)
is a model system with many applications in modern physics. The density of states (or more accurately, the degree of degeneracy) for a given spin species is:
g(E) = E² / (2(ħω)³),
where ω is the harmonic oscillation frequency.
The Fermi energy for a given spin species is:
E_F = (6N)^{1/3} ħω.
Related Fermi quantities
Related to the Fermi energy, a few useful quantities also occur often in modern literature.
The Fermi temperature is defined as T_F = E_F/k_B, where k_B is the Boltzmann constant. The Fermi temperature can be thought of as the temperature at which thermal effects are comparable to quantum effects associated with Fermi statistics. The Fermi temperature for a metal is a couple of orders of magnitude above room temperature. Other quantities defined in this context are the Fermi momentum p_F = √(2m E_F) and the Fermi velocity v_F = p_F/m, which are the momentum and group velocity, respectively, of a fermion at the Fermi surface. The Fermi momentum can also be described as p_F = ħ k_F, where k_F, the radius of the Fermi sphere, is called the Fermi wave vector.
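These definitions chain together directly from the number density. A minimal sketch (the copper-like density is an assumed illustration, not from the source):

```python
import math

HBAR = 1.054571817e-34  # J*s
M_E = 9.1093837015e-31  # kg
K_B = 1.380649e-23      # J/K

def fermi_quantities(density, mass=M_E):
    """Return (k_F, p_F, v_F, E_F, T_F) for a 3D non-relativistic Fermi gas."""
    k_F = (3 * math.pi ** 2 * density) ** (1 / 3)  # Fermi wave vector
    p_F = HBAR * k_F                               # Fermi momentum
    v_F = p_F / mass                               # Fermi (group) velocity
    E_F = p_F ** 2 / (2 * mass)                    # Fermi energy
    return k_F, p_F, v_F, E_F, E_F / K_B           # last entry is T_F

k_F, p_F, v_F, E_F, T_F = fermi_quantities(8.5e28)  # copper-like density (assumed)
assert math.isclose(p_F, math.sqrt(2 * M_E * E_F))  # p_F = sqrt(2 m E_F)
```

For this density the Fermi velocity comes out near 10⁶ m/s, fast but still safely non-relativistic.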
Note that these quantities are not well-defined in cases where the Fermi surface is non-spherical.
Treatment at finite temperature
Grand canonical ensemble
Most of the calculations above are exact at zero temperature, yet remain good approximations for temperatures lower than the Fermi temperature. For other thermodynamic variables it is necessary to write a thermodynamic potential. For an ensemble of identical fermions, the best way to derive a potential is from the grand canonical ensemble with fixed temperature, volume and chemical potential μ. The reason is the Pauli exclusion principle, as the occupation number of each quantum state is given by either 1 or 0 (either there is an electron occupying the state or not), so the (grand) partition function can be written as
𝒵(T, V, μ) = ∏_q (1 + e^{β(μ − ε_q)}),
where β = 1/(k_B T), q indexes the single-particle states, ε_q is the energy of the state q (it counts twice if the energy of the state is degenerate) and n_q = 0, 1 is its occupancy. Thus the grand potential is written as
Ω(T, V, μ) = −k_B T Σ_q ln(1 + e^{β(μ − ε_q)}).
The same result can be obtained in the canonical and microcanonical ensembles, as the result of every ensemble must give the same value in the thermodynamic limit. The grand canonical ensemble is recommended here as it avoids the use of combinatorics and factorials.
As explored in previous sections, in the macroscopic limit we may use a continuous approximation (Thomas–Fermi approximation) to convert this sum to an integral:
Ω = −k_B T ∫₀^∞ V g(E) ln(1 + e^{β(μ − E)}) dE,
where V g(E) is the total density of states.
Relation to Fermi–Dirac distribution
The grand potential is related to the number of particles at finite temperature in the following way:
N = −(∂Ω/∂μ)_{T,V} = ∫₀^∞ V g(E) f(E) dE,
where the derivative is taken at fixed temperature and volume, and there appears the occupation function
f(E) = 1/(e^{β(E − μ)} + 1),
also known as the Fermi–Dirac distribution.
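The distribution is easy to evaluate directly, and exhibits the expected properties: f(μ) = ½, particle–hole symmetry about μ, and a near-step shape when k_B T ≪ μ. A minimal sketch (the values of μ and T are assumed for illustration):

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
EV = 1.602176634e-19  # J per eV

def fermi_dirac(energy, mu, temperature):
    """Mean occupancy of a single-particle state: 1 / (exp((E - mu)/(k_B T)) + 1)."""
    return 1.0 / (math.exp((energy - mu) / (K_B * temperature)) + 1.0)

mu, T = 5.0 * EV, 300.0  # assumed illustrative values
assert math.isclose(fermi_dirac(mu, mu, T), 0.5)  # half-filled at E = mu

# particle-hole symmetry: f(mu + d) + f(mu - d) = 1
d = 0.1 * EV
assert math.isclose(fermi_dirac(mu + d, mu, T) + fermi_dirac(mu - d, mu, T), 1.0)
```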
Similarly, the total internal energy is
U = ∫₀^∞ E V g(E) f(E) dE (taking E_0 = 0).
Exact solution for power-law density-of-states
Many systems of interest have a total density of states with the power-law form:
V g(E) = (g₀ / Γ(α)) E^{α−1}  (measuring E from the zero-point energy, taken as zero)
for some prefactor g₀ and exponent α. The results of preceding sections generalize to d dimensions, giving a power law with:
α = d/2 for non-relativistic particles in a d-dimensional box,
α = d for non-relativistic particles in a d-dimensional harmonic potential well,
α = d for hyper-relativistic particles in a d-dimensional box.
For such a power-law density of states, the grand potential integral evaluates exactly to:
Ω(T, V, μ) = −g₀ (k_B T)^{α+1} F_α(μ/k_B T),
where F_α is the complete Fermi–Dirac integral (related to the polylogarithm). From this grand potential and its derivatives, all thermodynamic quantities of interest can be recovered.
Extensions to the model
Relativistic Fermi gas
The article has only treated the case in which particles have a parabolic relation between energy and momentum, as is the case in non-relativistic mechanics. For particles with energies close to their respective rest mass, the equations of special relativity are applicable, where the single-particle energy is given by:
E = √((pc)² + (mc²)²).
For this system, the Fermi energy is given by:
E_F = √((p_F c)² + (mc²)²) − mc² ≈ p_F c,
where the approximate equality is only valid in the ultrarelativistic limit, and
p_F = ħ (3π² N/V)^{1/3}.
The relativistic Fermi gas model is also used for the description of massive white dwarfs which are close to the Chandrasekhar limit. For the ultrarelativistic case, the degeneracy pressure is proportional to (N/V)^{4/3}.
Fermi liquid
In 1956, Lev Landau developed the Fermi liquid theory, where he treated the case of a Fermi liquid, i.e., a system with repulsive, not necessarily small, interactions between fermions. The theory shows that the thermodynamic properties of an ideal Fermi gas and a Fermi liquid do not differ that much. It can be shown that the Fermi liquid is equivalent to a Fermi gas composed of collective excitations or quasiparticles, each with a different effective mass and magnetic moment.
See also
Bose gas
Fermionic condensate
Gas in a box
Jellium
Two-dimensional electron gas
References
Further reading
Neil W. Ashcroft and N. David Mermin, Solid State Physics (Harcourt: Orlando, 1976)
Charles Kittel, Introduction to Solid State Physics, 1st ed. 1953 – 8th ed. 2005,
Quantum models
Fermi–Dirac statistics
Ideal gas
Phases of matter | Fermi gas | [
"Physics",
"Chemistry"
] | 3,687 | [
"Thermodynamic systems",
"Phases of matter",
"Quantum mechanics",
"Physical systems",
"Quantum models",
"Ideal gas",
"Matter"
] |
182,028 | https://en.wikipedia.org/wiki/Cave%20painting | In archaeology, cave paintings are a type of parietal art (a category that also includes petroglyphs, or engravings), found on the walls or ceilings of caves. The term usually implies prehistoric origin. These paintings were created most often by Homo sapiens, but also by Denisovans and Neanderthals, other species of the genus Homo. Discussion of prehistoric art is important for understanding the history of Homo sapiens and how the species came to develop unique abstract thought. Some point to these prehistoric paintings as possible evidence of creativity, spirituality, and sentimental thinking in prehistoric humans.
The oldest known cave paintings, more than 40,000 years old (art of the Upper Paleolithic), are found in the caves of the Maros district (Sulawesi, Indonesia). The earliest examples often consist of hand stencils and simple geometric shapes.
More recently, in 2021, cave art of a pig found in Sulawesi, Indonesia, and dated to over 45,500 years ago, has been reported.
A 2018 study claimed an age of 64,000 years for the oldest examples of non-figurative cave art in the Iberian Peninsula. Represented by three red non-figurative symbols found in the caves of Maltravieso, Ardales and La Pasiega, Spain, these predate the appearance of modern humans in Europe by at least 20,000 years and thus must have been made by Neanderthals rather than modern humans.
In November 2018, scientists reported the discovery of the then-oldest known figurative art painting, over 40,000 (perhaps as old as 52,000) years old, of an unknown animal, in the cave of Lubang Jeriji Saléh on the Indonesian island of Borneo. In December 2019, cave paintings portraying pig hunting within the Maros-Pangkep karst region in Sulawesi were discovered to be even older, with an estimated age of at least 51,200 years. This finding was recognized as "the oldest known depiction of storytelling and the earliest instance of figurative art in human history." On July 3, 2024, the journal Nature published research findings indicating that the cave paintings which depict anthropomorphic figures interacting with a pig and measure in Leang Karampuang are approximately 51,200 years old, establishing them as the oldest known figurative art paintings in the world.
Dating
Nearly 350 caves have now been discovered in France and Spain that contain art from prehistoric times. Initially, the age of the paintings had been a contentious issue, since methods like radiocarbon dating can produce misleading results if contaminated by other samples, and caves and rocky overhangs (where parietal art is found) are typically littered with debris from many time periods. But subsequent technology has made it possible to date the paintings by sampling the pigment itself, torch marks on the walls, or the formation of carbonate deposits on top of the paintings. The subject matter can also indicate chronology: for instance, the reindeer depicted in the Spanish cave of Cueva de las Monedas places the drawings in the last Ice Age.
The oldest known cave painting is a red hand stencil in Maltravieso cave, Cáceres, Spain. It has been dated using the uranium-thorium method to older than 64,000 years and was made by a Neanderthal. The oldest date given to an animal cave painting is now a depiction of several human figures hunting pigs in the caves in the Maros-Pangkep karst of South Sulawesi, Indonesia, dated to be over 43,900 years old. Before this, the oldest known figurative cave paintings were a depiction of a bull dated to 40,000 years at Lubang Jeriji Saléh cave, East Kalimantan, Indonesian Borneo, and a depiction of a pig with a minimum age of 35,400 years at Timpuseng cave in Sulawesi.
The earliest known European figurative cave paintings are those of Chauvet Cave in France, dating to earlier than 30,000 BC in the Upper Paleolithic according to radiocarbon dating. Some researchers believe the drawings are too advanced for this era and question this age. More than 80 radiocarbon dates had been obtained by 2011, with samples taken from torch marks and from the paintings themselves, as well as from animal bones and charcoal found on the cave floor. The radiocarbon dates from these samples show that there were two periods of creation in Chauvet: 35,000 years ago and 30,000 years ago. One of the surprises was that many of the paintings were modified repeatedly over thousands of years, possibly explaining the confusion about finer paintings that seemed to date earlier than cruder ones.
In 2009, cavers discovered drawings in Coliboaia Cave in Romania, stylistically comparable to those at Chauvet. An initial dating puts the age of an image in the same range as Chauvet: about 32,000 years old.
In Australia, cave paintings have been found on the Arnhem Land plateau showing megafauna which are thought to have been extinct for over 40,000 years, making this site another candidate for oldest known painting; however, the proposed age is dependent on the estimate of the extinction of the species seemingly depicted. Another Australian site, Nawarla Gabarnmang, has charcoal drawings that have been radiocarbon-dated to 28,000 years, making it the oldest site in Australia and among the oldest in the world for which reliable date evidence has been obtained.
Other examples may date as late as the Early Bronze Age, but the well-known Magdalenian style seen at Lascaux in France (c. 15,000 BC) and Altamira in Spain died out about 10,000 BC, coinciding with the advent of the Neolithic period. Some caves probably continued to be painted over a period of several thousands of years.
The next phase of surviving European prehistoric painting, the rock art of the Iberian Mediterranean Basin, was very different, concentrating on large assemblies of smaller and much less detailed figures, with at least as many humans as animals. This was created roughly between 10,000 and 5,500 years ago, and painted in rock shelters under cliffs or shallow caves, in contrast to the recesses of deep caves used in the earlier (and much colder) period. Although individual figures are less naturalistic, they are grouped in coherent compositions to a much greater degree. Over a long period of time, cave art became less naturalistic, graduating from fine, naturalistic animal drawings to simpler figures and then to abstract shapes.
Subjects, themes, and patterns in cave painting
Cave artists use a variety of techniques such as finger tracing, modeling in clay, engravings, bas-relief sculpture, hand stencils, and paintings done in two or three colors. Scholars classify cave art as "Signs" or abstract marks.
The most common subjects in cave paintings are large wild animals, such as bison, horses, aurochs, and deer, and tracings of human hands as well as abstract patterns, called finger flutings. The species found most often were suitable for hunting by humans, but were not necessarily the actual typical prey found in associated deposits of bones; for example, the painters of Lascaux have mainly left reindeer bones, but this species does not appear at all in the cave paintings, where equine species are the most common. Drawings of humans were rare and are usually schematic as opposed to the more detailed and naturalistic images of animal subjects. Kieran D. O'Hara, geologist, suggests in his book Cave Art and Climate Change that climate controlled the themes depicted.
Pigments used include red and yellow ochre, hematite, manganese oxide and charcoal. Sometimes the silhouette of the animal was incised in the rock first, and in some caves all or many of the images are only engraved in this fashion, taking them somewhat out of a strict definition of "cave painting".
Similarly, large animals are also the most common subjects in the many small carved and engraved bone or ivory (less often stone) pieces dating from the same periods. But these include the group of Venus figurines, which with a few incomplete exceptions have no real equivalent in Paleolithic cave paintings. One counterexample is a feminine figure in the Chauvet Cave, as described in an interview with Dominique Baffier in Cave of Forgotten Dreams.
Hand stencils, formed by placing a hand against the wall and covering the surrounding area in pigment result in the characteristic image of a roughly round area of solid pigment with the negative shape of the hand in the centre, these may then be decorated with dots, dashes, and patterns. Often, these are found in the same caves as other paintings, or may be the only form of painting in a location. Some walls contain many hand stencils. Similar hands are also painted in the usual fashion. A number of hands show a finger wholly or partly missing, for which a number of explanations have been given. Hand images are found in similar forms in Europe, Eastern Asia, Australia, and South America. One site in Baja California features handprints as a prominent motif in its rock art. Archaeological study of this site revealed that, based on the size of the handprints, they most likely belonged to the women of the community. In addition to this, they were likely used during initiation rituals in Chinigchinich religious practices, which were commonly practiced in the Luiseño territory where this site is located.
Theories and interpretations
In the early 20th century, following the work of Walter Baldwin Spencer and Francis James Gillen, scholars such as Salomon Reinach and Henri Breuil interpreted the paintings as 'utilitarian' hunting magic to increase the abundance of prey. Jacob Bronowski states, "I think that the power that we see expressed here for the first time is the power of anticipation: the forward-looking imagination. In these paintings the hunter was made familiar with dangers which he knew he had to face but to which he had not yet come."
Another theory, developed by David Lewis-Williams and broadly based on ethnographic studies of contemporary hunter-gatherer societies, is that the paintings were made by paleolithic shamans. The shaman would retreat into the darkness of the caves, enter into a trance state, then paint images of their visions, perhaps with some notion of drawing out power from the cave walls themselves.
R. Dale Guthrie, who has studied both highly artistic and lower quality art and figurines, identifies a wide range of skill and age among the artists. He hypothesizes that the main themes in the paintings and other artifacts (powerful beasts, risky hunting scenes and the representation of women in the Venus figurines) are the work of adolescent males, who constituted a large part of the human population at the time. However, in analyzing hand prints and stencils in French and Spanish caves, Dean Snow of Pennsylvania State University has proposed that a proportion of them, including those around the spotted horses in Pech Merle, were of female hands.
Analysis in 2022, led by Bennet Bacon, an amateur archaeologist, along with a team of professional archeologists and psychologists at the University of Durham, including Paul Pettitt and Robert William Kentridge, suggested that lines and dots (and a commonly seen, if curious, "Y" symbol, which was proposed to mean "to give birth") on upper palaeolithic cave paintings correlated with the mating cycle of animals in a lunar calendar, potentially making them the earliest known evidence of a proto-writing system and explaining one object of many cave paintings.
Paleolithic cave art by region
Europe
Well-known cave paintings include those of:
Cave of El Castillo, Spain (~40,000 y.o.)
Kapova Cave, Bashkortostan, Russia (~36,000 y.o.)
Chauvet Cave, near Vallon-Pont-d'Arc, France (~35,000 y.o.)
Cave of La Pasiega, Cuevas de El Castillo, Cantabria, Spain (~30,000 y.o.?)
Caves of Arcy-sur-Cure, France (~28,200 y.o.)
Cosquer Cave, with an entrance below sea level near Marseille, France (~27,000 y.o.)
Caves of Gargas, France (~27,000 y.o.)
Grotte de Cussac, France (~25,000 y.o.)
Pech Merle, near Cabrerets, France (25,000 y.o.)
Lascaux, France (~17,000 y.o.)
Cave of Niaux, France (~17,000 y.o.)
Font-de-Gaume, in the Dordogne Valley, France (~17,000 y.o.)
Badanj Cave, Stolac, Bosnia and Herzegovina (~16,000 y.o.)
Cave of Altamira, near Santillana del Mar, Cantabria, Spain (~15,500 y.o.)
La Marche, in Lussac-les-Châteaux, France (~15,000 y.o.)
Les Combarelles, in Les Eyzies de Tayac, Dordogne, France (~13,600 y.o.)
Cave of the Trois-Frères, in Ariège, France (~13,000 y.o.)
Magura Cave, Bulgaria (~10,000 y.o.)
Solsem cave, Norway (~3,000 y.o.)
Other sites include Creswell Crags, Nottinghamshire, England (~14,500 y.o. cave etchings and bas-reliefs, discovered in 2003), and Peștera Coliboaia in Romania (~29,000 y.o. art?).
Rock painting was also performed on cliff faces; but fewer of those have survived because of erosion. One example is the rock paintings of Astuvansalmi (3,000–2,500 BC) in the Saimaa area of Finland.
When Marcelino Sanz de Sautuola first encountered the Magdalenian paintings of the Cave of Altamira in Cantabria, Spain in 1879, the academics of the time considered them hoaxes. Recent reappraisals and numerous additional discoveries have since demonstrated their authenticity, while at the same time stimulating interest in the artistry and symbolism of Upper Palaeolithic peoples.
East and Southeast Asia
In Indonesia the caves in the district of Maros in Sulawesi are famous for their hand prints. About 1,500 negative handprints have also been found in 30 painted caves in the Sangkulirang area of Kalimantan; preliminary dating analysis as of 2005 put their age in the range of 10,000 years old. A 2014 study based on uranium–thorium dating dated a Maros hand stencil to a minimum age of 39,900 years. A painting of a babirusa was dated to at least 35.4 ka, placing it among the oldest known figurative depictions worldwide.
In November 2018, scientists reported the discovery of the oldest known figurative art painting, over 40,000 (perhaps as old as 52,000) years old, of an unknown animal, in the cave of Lubang Jeriji Saléh on the Indonesian island of Borneo.
And more recently, in 2021, archaeologists announced the discovery of cave art at least 45,500 years old in Leang Tedongnge cave, Indonesia. According to the journal Science Advances, the cave painting of a warty pig is the earliest evidence of human settlement of the region. It has been reported that it is rapidly deteriorating as a result of climate change in the region.
Originating in the Paleolithic period, the rock art found in Khoit Tsenkher Cave, Mongolia, includes symbols and animal forms painted from the walls up to the ceiling. Stags, buffalo, oxen, ibex, lions, Argali sheep, antelopes, camels, elephants, ostriches, and other animal pictorials are present, often forming a palimpsest of overlapping images. The paintings appear brown or red in color, and are stylistically similar to other Paleolithic rock art from around the world but are unlike any other examples in Mongolia.
The Padah-Lin Caves of Burma contain 11,000-year-old paintings and many rock tools.
India
The Ambadevi rock shelters have the oldest cave paintings in India, dating back to 25,000 years. The Bhimbetka rock shelters are dated to about 8,000 BC. Similar paintings are found in other parts of India as well. In Tamil Nadu, ancient Paleolithic Cave paintings are found in Kombaikadu, Kilvalai, Settavarai and Nehanurpatti. In Odisha they are found in Yogimatha and Gudahandi. In Karnataka, these paintings are found in Hiregudda near Badami. The most recent painting, consisting of geometric figures, date to the medieval period.
Executed mainly in red and white with the occasional use of green and yellow, the paintings depict the lives and times of the people who lived in the caves, including scenes of childbirth, communal dancing and drinking, religious rites and burials, as well as indigenous animals.
Southern Africa
Cave paintings found at the Apollo 11 Cave in Namibia are estimated to date from approximately 25,500–27,500 years ago.
In 2011, archaeologists found a small rock fragment at Blombos Cave, east of Cape Town on the southern Cape coastline in South Africa, among spear points and other excavated material. After seven years of extensive testing, it was revealed that the lines drawn on the rock were handmade with an ochre crayon dating back 73,000 years. This makes it the oldest known rock painting.
Australia
Significant early cave paintings, executed in ochre, have been found in the Kimberley and Kakadu regions of Australia. Ochre is not an organic material, so carbon dating of these pictures is often impossible. The oldest so far, dated at 17,300 years, is an ochre painting of a kangaroo in the Kimberley region; its age was established by radiocarbon dating wasp-nest material underlying and overlying the painting. Sometimes an approximate date, or at least an epoch, can be surmised from the painting's content, contextual artifacts, or organic material intentionally or inadvertently mixed with the inorganic ochre paint, including torch soot.
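The radiocarbon method used to bracket dates such as the Kimberley kangaroo painting can be illustrated with the conventional age formula (a simplified sketch of the general calculation, not a description of the laboratory procedure used in that study):

```python
import math

# Conventional radiocarbon age (Libby convention): t = -tau * ln(F),
# where F is the measured fraction of "modern" carbon-14 remaining and
# tau is the Libby mean life (half-life of 5568 years divided by ln 2).
LIBBY_MEAN_LIFE = 5568 / math.log(2)  # about 8033 years

def radiocarbon_age(fraction_modern: float) -> float:
    """Return the conventional radiocarbon age in years BP."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

# A sample retaining half its original carbon-14 is one half-life old.
print(round(radiocarbon_age(0.5)))  # → 5568
```

In practice, laboratories calibrate such conventional ages against tree-ring and other records to obtain calendar dates, which is why published ages for rock art are usually given as calibrated ranges rather than single figures.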
A red ochre painting, discovered at the centre of the Arnhem Land Plateau, depicts two emu-like birds with their necks outstretched. They have been identified by a palaeontologist as depicting the megafauna species Genyornis, giant birds thought to have become extinct more than 40,000 years ago; however, this evidence is inconclusive for dating. It may suggest that Genyornis became extinct at a later date than previously determined.
Hook Island in the Whitsunday Islands is also home to a number of cave paintings created by the seafaring Ngaro people.
Holocene cave art
Asia
In the Philippines at Tabon Caves the oldest artwork may be a relief of a shark above the cave entrance. It was partially disfigured by a later jar burial scene.
The Edakkal Caves of Kerala, India, contain drawings that range over periods from the Neolithic as early as 5,000 BC to 1,000 BC.
Horn of Africa
Rock art near Qohaito appears to indicate habitation in the area since the fifth millennium BC, while the town is known to have survived to the sixth century AD. Mount Emba Soira, Eritrea's highest mountain, lies near the site, as does a small successor village. Many of the rock art sites are found together with evidence of prehistoric stone tools, suggesting that the art could predate the widely presumed pastoralist and domestication events that occurred 5,000–4,000 years ago.
In 2002, a French archaeological team discovered the Laas Geel cave paintings on the outskirts of Hargeisa in Somaliland. Dating back around 5,000 years, the paintings depict both wild animals and decorated cows. They also feature herders, who are believed to be the creators of the rock art. In 2008, Somali archaeologists announced the discovery of other cave paintings in Dhambalin region, which the researchers suggest includes one of the earliest known depictions of a hunter on horseback. The rock art is dated to 1000 to 3000 BC.
Additionally, between the towns of Las Khorey and El Ayo in Karinhegane is a site of numerous cave paintings of real and mythical animals. Each painting has an inscription below it, which collectively have been estimated to be around 2,500 years old. Karinhegane's rock art is in the same distinctive style as the Laas Geel and Dhambalin cave paintings. Around 25 miles from Las Khorey is found Gelweita, another key rock art site.
In Djibouti, rock art of what appear to be antelopes and a giraffe are also found at Dorra and Balho.
North Africa
Many cave paintings are found in the Tassili n'Ajjer mountains in southeast Algeria. A UNESCO World Heritage Site, the rock art was first discovered in 1933 and has since yielded 15,000 engravings and drawings that keep a record of the various animal migrations, climatic shifts, and changes in human inhabitation patterns in this part of the Sahara from 6000 BC to the late classical period. Other cave paintings are also found at the Akakus, Mesak Settafet and Tadrart in Libya and in other Saharan regions, including the Aïr Mountains in Niger and Tibesti in Chad.
The Cave of Swimmers and the Cave of Beasts lie in southwest Egypt, near the border with Libya, in the mountainous Gilf Kebir region of the Sahara Desert. The Cave of Swimmers was discovered in October 1933 by the Hungarian explorer László Almásy. The site contains rock painting images of people swimming, which are estimated to have been created 10,000 years ago during the time of the most recent Ice Age.
In 2020, a limestone cave decorated with scenes of animals such as donkeys, camels, deer, mules and mountain goats was uncovered in the Wadi Al-Zulma area by an archaeological mission from the Tourism and Antiquities Ministry. The cave is 15 meters deep and 20 meters high.
Southern Africa
The paintings at uKhahlamba/Drakensberg Park, South Africa, now thought to be some 3,000 years old, were made by the San people, who settled in the area some 8,000 years ago; they depict animals and humans and are thought to represent religious beliefs. Human figures are much more common in the rock art of Africa than in Europe.
North America
Distinctive monochrome and polychrome cave paintings and murals exist in the mid-peninsula regions of southern Baja California and northern Baja California Sur, consisting of Pre-Columbian paintings of humans, land animals, sea creatures, and abstract designs. These paintings are mostly confined to the sierras of this region, but can also be found in outlying mesas and rock shelters. Recent radiocarbon studies of materials recovered from archaeological deposits in the rock shelters, and of materials in the paintings themselves, suggest that the Great Murals may have a time range extending as far back as 7,500 years ago.
California
Native artists in the Chumash tribes created cave paintings that are located in present-day Santa Barbara, Ventura, and San Luis Obispo Counties in Southern California in the United States. They include examples at Burro Flats Painted Cave and Chumash Painted Cave State Historic Park.
There are also Native American pictogram examples in caves of the Southwestern United States. Cave art that is 6,000 years old was found in the Cumberland Plateau region of Tennessee.
Native American tribes have contributed to the making of Californian cave art in both Northern and Baja California. The Chumash people of Southern and Baja California made paintings in Swordfish Cave, named after the swordfish painted on its walls and a sacred site for the religious and cultural practices of the Chumash tribe. When the site was threatened with destruction, a conservation effort began in cooperation between Vandenberg Air Force Base and the Tribal Elders Council of the Santa Ynez Band of Chumash, and the two parties were able to stabilize and conserve the cave and its art. Earlier studies drew many conclusions about how the paintings were made but few about the symbolic value of the rock art and its meaning to the Chumash tribe. Excavation of the cave's interior gave archaeologists and anthropologists (specifically Clayton Lebow, Douglas Harrow, and Rebecca McKim) the opportunity to investigate the art's symbolic meaning. Some of the tools used to make the pictographs were found at the site and were connected to two early occupations in the area, pushing back the understood antiquity of rock art on California's Central Coast by more than 2,000 years.
Northern and Baja California
The National Institute of Anthropology and History (INAH), established in Mexico, has recorded over 1,500 rock art related archaeological monuments in Baja California. A little under 300 of the sites are connected to Native American tribes. Of these roughly 300 sites, 65% have paintings, 24% have petroglyphs, 10% have both paintings and petroglyphs, and 1% have geoglyphs. Five of these sites, spread across the area, show hand designs or paintings: Milagro de Guadalupe (23 imprints), Corral de Queno (6 imprints), Rancho Viejo (1 drawing), Piedras Gordas (5 imprints), and Valle Seco (3 imprints).
South America
Serra da Capivara National Park is a national park in the north east of Brazil with many prehistoric paintings; the park was created to protect the prehistoric artifacts and paintings found there. It became a World Heritage Site in 1991. Its best known archaeological site is Pedra Furada.
It is located in the northeastern state of Piauí, between latitudes 8° 26' 50" and 8° 54' 23" south and longitudes 42° 19' 47" and 42° 45' 51" west. It falls within the municipal areas of São Raimundo Nonato, São João do Piauí, Coronel José Dias and Canto do Buriti. It has an area of 1291.4 square kilometres (319,000 acres). The area has the largest concentration of prehistoric small farms on the American continents. Scientific studies confirm that the Capivara mountain range was densely populated in prehistoric periods.
Cueva de las Manos (Spanish for "Cave of the Hands") is a cave located in the province of Santa Cruz, Argentina, 163 km (101 mi) south of the town of Perito Moreno, within the borders of the Francisco P. Moreno National Park, which includes many sites of archaeological and paleontological importance.
The hand images are often negative (stencilled). Besides these there are also depictions of human beings, guanacos, rheas, felines and other animals, as well as geometric shapes, zigzag patterns, representations of the sun, and hunting scenes. Similar paintings, though in smaller numbers, can be found in nearby caves. There are also red dots on the ceilings, probably made by submerging hunting bolas in ink and then throwing them up. The colours of the paintings vary from red (made from hematite) to white, black or yellow. The negative hand impressions date to around 550 BC, the positive impressions to around 180 BC, while the hunting drawings are estimated to be more than 10,000 years old. Most of the hands are "left hands" (that is, with thumb on the right, even though this pattern can be obtained as easily with both right and left hands, depending on whether the back or front is used), which has been used as an argument to suggest that painters held the spraying pipe with their right hand.
Southeast Asia
There are rock paintings in caves in Thailand, Malaysia, Indonesia, and Burma. In Thailand, caves and scarps along the Thai-Burmese border, in the Petchabun Range of Central Thailand, and overlooking the Mekong River in Nakorn Sawan Province, all contain galleries of rock paintings. In Malaysia, the Tambun rock art is dated at 2000 years, and those in the Painted Cave at Niah Caves National Park are 1200 years old. The anthropologist Ivor Hugh Norman Evans visited Malaysia in the early 1920s and found that some of the tribes (especially Negritos) were still producing cave paintings and had added depictions of modern objects including what are believed to be automobiles. (See prehistoric Malaysia.)
In Indonesia, rock paintings can be found in Sumatra, Kalimantan, Sulawesi, Flores, Timor, Maluku and Papua.
See also
List of Stone Age art
15,000 BC in art
Notes
References
Further reading
External links
Bradshaw Foundation The recording of cave paintings around the world
EuroPreArt database of European Prehistoric Art
American Rock Art Research Association
Tour of Afghan cave paintings from BBC News.
Le Kalimanthrope Rock art of Borneo (Kalimantan, Indonesia)
Journey through Art History, an outline of prehistoric art with emphasis on cave paintings from around the world.
Human Timeline (Interactive) – Smithsonian, National Museum of Natural History (August 2016).
Art of the Upper Paleolithic
Indigenous art
Mass media technology
Murals
Pre-Columbian art
A gamer is someone who plays interactive games, either video games, tabletop role-playing games, skill-based card games, or any combination thereof, and who often plays for extended periods of time. Originally a hobby, gaming has evolved into a profession for some, with some gamers routinely competing in games for money, prizes, or awards. In some countries, such as the US, UK, and Australia, the term "gaming" can refer to legalized gambling, which can take both traditional and digital forms, such as through online gambling. There are many different gamer communities around the world. Since the advent of the Internet, many communities take the form of Internet forums or YouTube or Twitch virtual communities, as well as in-person social clubs. In 2021, there were an estimated 3.24 billion gamers across the globe.
Etymology
The term gamer originally meant gambler, and has been in use since at least 1422, when the town laws of Walsall, England, referred to "any dice-player, carder, tennis player, or other unlawful gamer". That sense was not adopted in the United States, where the word became associated with other pastimes. There, gaming first took the form of wargames, which were originally created as military and strategy tools. When Dungeons & Dragons was released, it was initially marketed as a wargame but was later described by its creators as a role-playing game. They called their players gamers, and this is where the word's meaning shifted from someone who gambles to someone who plays board games and/or video games.
Categories
In the United States as of 2018, 28% of gamers are under 18, 29% are 18–35, 20% are 36–49 and 23% are over 50. In the UK as of 2014, 29% are under 18, 32% are 18–35 and 39% are over 36. According to Pew Research Center, 49% of adults have played a video game at some point in their life and those who have are more likely to let their children or future children play. Those who play video games regularly are split roughly equally between male and female, but men are more likely to call themselves a gamer. As of 2019, the average gamer is 33 years old.
Female gamer/gamer girl
A female gamer, or gamer girl or girl gamer, is any female who regularly engages in playing video games. According to a study conducted by the Entertainment Software Association in 2009, 40% of the game playing population is female, and women 18 or older comprise 34% of all gamers. Also, the percentage of women playing online had risen to 43%, up 4% from 2004. The same study shows that 48% of game purchasers are female.
According to a 2015 Pew survey, 6% of women in the United States identify as gamers, compared to 15% of men, and 48% of women and 50% of men play video games. Usage of the term "girl gamer" is controversial. Some critics have advocated use of the label as a reappropriated term, while others see it as non-descriptive or perpetuating the minority position of female gamers. Some critics of the term believe there is no singular definition of a female gamer and that they are as diverse as any other group. However, it is generally understood that the term "girl gamer" implies a girl who plays video games.
Psychology
Shigeru Miyamoto says that "I think that first a game needs a sense of accomplishment. And you have to have a sense that you have done something, so that you get that sense of satisfaction of completing something."
In April 2020, researchers found that top gamers shared the same mental toughness as Olympian athletes.
Escapism is a major factor in why individuals enjoy gaming. The idea of being in another world while gaming has become very common among gamers: video games create a new world where players feel they fit in and can control what is going on. On gaming as a form of escapism, Hideo Kojima states that "If the player isn't tricked into believing that the world is real, then there's no point in making the game."
Types and demographics
Sexes
Two highly controversial issues in the gaming world today are gender roles and LGBTQ+ involvement in the gaming industry. It is important first to understand the difference between men and women in the world of gaming. Although roughly the same number of men and women play games, the stereotype of a gamer is predominantly male. A justification sometimes given for this is that while many women occasionally play games, they should not be considered "true" gamers because they tend to play games that are more casual and require less skill than the games men play. This stereotype is perpetuated by the fact that at the professional level most competing teams are composed of men, while female gamers of moderate skill are rendered invisible. The average gamer is seen as a male player who is usually Caucasian. One study has shown that 48% of game purchases are made by female consumers, yet in 2015 only 6% of women in the U.S. identified as gamers. The label "girl gamer" tends to spark contentious reactions, and its use has been defended as a reappropriated term.
Gaymer
Besides the distinction of a "girl gamer" from a "male gamer", there is also the common stereotype of a "gaymer": a gay gamer, someone who identifies as part of the LGBT (gay, bisexual, lesbian, or transgender) community while participating in video games. Gaymers were the subject of two surveys, in 2006 and 2009. The 2006 survey took note of the levels of detriment that gaymers may have experienced, and the 2009 survey detailed the content that gaymers would find normalized in video games. On the relationship between gaming and the LGBTQ community, it has been noted that video games are starting to include more characters and depictions of members of this community. Topics of these LGBTQ-friendly video games include coming-out stories and queer relationships. These games also provide character creation with different forms of gender expression along with more LGBTQ romance options. One example in the LGBTQ+ realm of dating is Dream Daddy: A Dad Dating Simulator, released in 2017. The game sparked debate among many queer individuals, but its overall representation was applauded by many LGBTQ+ people for its accurate presentation and the comfort it provided to people of many sexualities. Having more of these gender- and sexuality-friendly games provides LGBTQ+ members with a safe space to feel welcome and explore their queerness more confidently.
Dedication spectrum
It is common for games media, games industry analysts, and academics to divide gamers into broad behavioral categories. These categories are sometimes separated by level of dedication to gaming, sometimes by primary type of game played, and sometimes by a combination of those and other factors. There is no general consensus on the definitions or names of these categories, though many attempts have been made to formalize them. An overview of these attempts and their common elements follows.
Newbie: (commonly shortened to "noob", "n00b", or "newb") A slang term for a novice or newcomer to a certain game, or to gaming in general.
Casual gamer: The term often used for gamers who primarily play casual games, but can also refer to gamers who play less frequently than other gamers. Casual gamers may play games designed for ease of gameplay, or play more involved games in short sessions, or at a slower pace than hardcore gamers. The types of game that casual gamers play vary, and they are less likely to own a dedicated video game console. Notable examples of casual games include The Sims and Nintendogs. Casual gamer demographics vary greatly from those of other video gamers, as the typical casual gamer is older and more predominantly female. Fitness gamers, who play motion-based exercise games, are also seen as casual gamers.
Core gamer: (also mid-core) A player with a wider range of interests than a casual gamer and is more likely to enthusiastically play different types of games, but without the amount of time spent and sense of competition of a hardcore gamer. The mid-core gamer enjoys games but may not finish every game they buy and is a target consumer. Former Nintendo president Satoru Iwata stated that they designed the Wii U to cater to core gamers who are in between the casual and hardcore categories. A number of theories have been presented regarding the rise in popularity of mid-core games. James Hursthouse, the founder of Roadhouse Interactive, credits the evolution of devices towards tablets and touch-screen interfaces, whereas Jon Radoff of Disruptor Beam compares the emergence of mid-core games to similar increases in media sophistication that have occurred in media such as television.
Hardcore gamer: Ernest Adams and Scott Kim have proposed classification metrics to distinguish "hardcore gamers" from casual gamers, emphasizing action, competition, complexity, gaming communities, and staying abreast of developments in hardware and software. Others have attempted to draw the distinction based primarily on which platforms a gamer prefers, or to decry the entire concept of delineating casual from hardcore as divisive and vague.
Professional gamer
Professional gamers generally play video games for prize money or salaries. Usually, such individuals deeply study the game in order to master it, and usually to play in competitions like esports. A pro gamer may also be another type of gamer, such as a hardcore gamer, if he or she meets the additional criteria for that gamer type. In countries of Asia, particularly South Korea and China, professional gamers and teams are sponsored by large companies and can earn a substantial yearly income. In 2006, Major League Gaming contracted several Halo 2 players, including Tom "Tsquared" Taylor and members of Team Final Boss, with $250,000 yearly deals. Many professional gamers find that competitions are able to provide a substantial amount of money to support themselves. However, these popular gamers can often locate even more lucrative options. One such option is online live streaming of their games. Gamers who stream make money from their streams, usually through sponsorships with large companies looking for a new audience or donations from fans supporting their favorite streamer. Live streaming often occurs through popular websites such as Twitch and YouTube. Professional gamers with particularly large followings can often bring their fan bases to watch them play on live streams. An example of this is retired professional League of Legends player Wei "CaoMei" Han-Dong, who decided to retire from esports due to his ability to acquire substantially higher pay through live streaming: his yearly salary through the Battle Flag TV live streaming service increased his pay to roughly $800,000. Live streaming can be seen by many as a truly lucrative way for professional gamers to make money in a way that can also lessen the pressure of the competitive scene. There is a rapid increase in young video game players wanting to be professional gamers instead of professional athletes.
The career path of becoming a professional gamer is open to anyone of any race, gender, or background. The gaming community has developed at a much faster rate and is now considered part of esports. These more serious players are professional gamers: individuals who take everyday gaming much more seriously and profit from how they perform.
Although LGBTQ+ gamers are starting to make more of a mark in the gaming world, there are still many disadvantages to this process. Homophobia in the gaming world takes a toll on the goal of an equally shared gaming experience; this is an issue both within the games industry and in many areas of games culture. This underscores the importance of increasing LGBTQ representation in games, especially through such events as GaymerX. A study called the online roulette survey shows that queer gamers are at a financial disadvantage: the highest-earning professional gamers in the LGBTQ+ community bring in less money than popular heterosexual professional gamers. This highlights that there is not only a huge divide between male and female counterparts in the gaming industry but also a great divide when it comes to sexual preference, especially in the professional gaming scene. Often, tech companies privilege men's point of view over women's participation in tech and their consumption, and the same could be said of heterosexual over homosexual identities. The two topics will continue to carry significant weight in the gaming industry.
Retrogamer
A retro gamer is a gamer who prefers to play, and often collects, retro games: older video games and arcade games. They may also be called classic gamers or old-school gamers, terms that are more prevalent in the United States. The games can be played on the original hardware, on modern hardware via emulation, or on modern hardware via ports or compilations (though those 'in the hobby' tend toward original hardware and emulation).
Classification in taxonomies
A number of taxonomies have been proposed which classify gamer types and the aspects they value in games.
The Bartle taxonomy of player types classifies gamers according to their preferred activities within the game:
Achievers, who like to gain points and overall succeed within the game parameters, collecting all rewards and game badges.
Explorers, who like to discover all areas within the game, including hidden areas and glitches, and expose all game mechanics.
Socializers, who prefer to play games for the social aspect, rather than the actual game itself.
Beaters, who thrive on competition with other players.
Completionists, who are combinations of the Achiever and Explorer types. They complete every aspect of the game (main story, side quests, achievements) while finding every secret within it.
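Bartle's original formulation arranges the first four types on a two-by-two grid: whether players prefer acting or interacting, and whether their focus is other players or the game world. A minimal sketch of that grid, using the type names as listed above (the "Beaters" here correspond to "Killers" in Bartle's original paper):

```python
from enum import Enum

class PlayerType(Enum):
    ACHIEVER = "acts on the game world (points, rewards, badges)"
    EXPLORER = "interacts with the game world (hidden areas, mechanics)"
    SOCIALIZER = "interacts with other players"
    BEATER = "acts on other players (competition)"

def classify(prefers_acting: bool, focused_on_players: bool) -> PlayerType:
    """Map a player's two axis preferences onto Bartle's 2x2 grid."""
    if focused_on_players:
        return PlayerType.BEATER if prefers_acting else PlayerType.SOCIALIZER
    return PlayerType.ACHIEVER if prefers_acting else PlayerType.EXPLORER

# A player who prefers acting on the game world rather than on other players:
print(classify(prefers_acting=True, focused_on_players=False).name)  # → ACHIEVER
```

The Completionist type described above falls outside this pure grid, since it combines the Achiever and Explorer quadrants.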
The MDA framework describes various aspects of a game regarding the basic rules and actions (Mechanics), how they build up during play to develop the gameplay (Dynamics), and what emotional response they convey to the player (Aesthetics). The described aesthetics are further classified as Sensation, Fantasy, Narrative, Challenge, Fellowship, Discovery, Expression and Submission. Jesse Schell extends this classification with Anticipation, Schadenfreude, Gift giving, Humour, Possibility, Pride, Purification, Surprise, Thrill, Perseverance and Wonder, and proposes a number of generalizations of differences between how males and females play.
Avatar
Creating an avatar can be one of the first interactions a potential player makes to identify themselves in the gaming community. An avatar, username, game name, alias, gamer tag, screen name, or handle is a name (usually a pseudonym) adopted by a video gamer, often used as the main preferred identification within the gaming community. Usernames are most prevalent in games with online multiplayer support, or at electronic sports conventions. While some well-known gamers go only by their online handle, a number have adopted the practice of using the handle within their real name, typically presented as a middle name, such as Tyler "Ninja" Blevins or Jay "sinatraa" Won.
Similarly, a clan tag is a prefix or suffix added to a name to identify that the gamer is in a clan. Clans are generally a group of gamers who play together as a team against other clans. They are most commonly found in online multi-player games in which one team can face off against another. Clans can also be formed to create loosely based affiliations perhaps by all being fans of the same game or merely gamers who have close personal ties to each other. A team tag is a prefix or suffix added to a name to identify that the gamer is in a team. Teams are generally sub-divisions within the same clan and are regarded within gaming circuits as being a purely competitive affiliation. These gamers are usually in an online league such as the Cyberathlete Amateur League (C.A.L.) and their parent company the Cyberathlete Professional League (C.P.L.) where all grouped players were labeled as teams and not clans.
Clans and guilds
A clan, squad or guild is a group of players that form, usually under an informal 'leader' or administrator. Clans are often formed by gamers with similar interests; many clans or guilds form to connect an 'offline' community that might otherwise be isolated due to geographic, cultural or physical barriers. Some clans are composed of professional gamers, who enter competitive tournaments for cash or other prizes; most, however, are simply groups of like-minded players that band together for a mutual purpose (for example, a gaming-related interest or social group).
Identity
The identity of being a gamer is partly self-determination and partly performativity of characteristics society expects a gamer to embody. These expectations include not only a high level of dedication to playing games, but also preferences for certain types of games, as well as an interest in game-related paraphernalia like clothing and comic books. According to Graeme Kirkpatrick, the "true gamer" is concerned first and foremost with gameplay. The Escapist founder Alexander Macris says a gamer is an enthusiast with greater dedication to games than just playing them, similar in connotation to "cinemaphile". People who play may not identify as gamers because they feel they do not play "enough" to qualify. Social stigma against games has influenced some women and minorities to distance themselves from the term "gamer", even though they may play regularly.
Demographics
Games are stereotypically associated with young males, but the diversity of the audience has been steadily increasing over time. This stereotype exists even among a majority of women who play video games regularly. Among players using the same category of device (e.g., console or phone), patterns of play are largely the same between men and women. Diversity is driven in part by new hardware platforms. Expansion of the audience was catalyzed by Nintendo's efforts to reach new demographics. Market penetration of smartphones with gaming capabilities further expanded the audience, since in contrast to consoles or high-end PCs, mobile phone gaming requires only devices that non-gamers are likely to already own.
While 48% of women in the United States report having played a video game, only 6% identify as gamers, compared to 15% of men who identify as gamers. This rises to 9% among women aged 18–29, compared to 33% of men in that age group. Half of female PC gamers in the U.S. consider themselves to be core or hardcore gamers. Connotations of "gamer" with sexism on the fringe of gaming culture has caused women to be less willing to adopt the label.
Racial minorities responding to Pew Research were more likely to describe themselves as gamers, with 19% of Hispanics identifying as gamers, compared to 11% of African-Americans and 7% of whites. The competitive fighting game scene is noted as particularly racially diverse and tolerant. This is attributed to its origin in arcades, where competitors met face to face and the barrier to entry was merely a quarter. Only 4% of those aged 50 and over identified as gamers.
Casualization
Casualization is a trend in video games towards simpler games appealing to larger audiences, especially women or the elderly. Some developers, hoping to attract a broader audience, simplify or remove aspects of gameplay in established genres and franchises. Compared to seminal titles like DOOM, more recent mass-market action games like the Call of Duty series are less sensitive to player choice or skill, approaching the status of interactive movies.
The trend towards casual games is decried by some self-identified gamers who emphasize gameplay, meaning the activities that one undertakes in a game. According to Brendan Keogh, these are inherently masculine activities such as fighting and exerting dominance. He further says that games women prefer are more passive experiences, and male gamers deride the lack of interactivity in these games because of this association with femininity. Belying these trends, games including The Sims or Minecraft have some of the largest audiences in the industry while also being very complex. According to Joost van Dreunen of SuperData Research, girls who play Minecraft are "just as 'hardcore' as the next guy over who plays Counter-Strike". Dreunen says being in control of a game's environment appeals equally to boys and girls. Leigh Alexander argued that appealing to women does not necessarily entail reduced difficulty or complexity.
See also
Entertainment Consumers Association
Esports
Gamers Outreach Foundation
Going Cardboard (documentary)
List of gaming topics
Player (game)
Video game addiction
References
External links
Gaming
Nerd culture
Stereotypes
Video game culture
Video gaming
Video game terminology
Hydrogen carrier

A hydrogen carrier is an organic molecule that transports hydrogen atoms from one place to another inside a cell, or from cell to cell, for use in various metabolic processes. Examples include NADPH, NADH, and FADH2. The main role of these carriers is to deliver hydrogen atoms to the electron transport chain, which converts ADP to ATP by adding a phosphate group during metabolic processes such as photosynthesis and respiration. A hydrogen carrier participates in an oxidation-reduction reaction by being reduced when it accepts hydrogen. In glycolysis, a dehydrogenase enzyme attaches the hydrogen to one of the hydrogen carriers.
See also
Electron carrier
Light reactions
Photosynthesis
Cellular respiration
References
External links
http://www.biology-online.org/1/3_respiration.htm
https://web.archive.org/web/20100727214925/http://student.ccbcmd.edu/~gkaiser/biotutorials/energy/oxphos.html
Hydrogen biology
George Santayana

George Santayana (born Jorge Agustín Nicolás Ruiz de Santayana y Borrás, December 16, 1863 – September 26, 1952) was a Spanish-American philosopher, essayist, poet, and novelist. Born in Spain, Santayana was raised and educated in the United States from the age of eight and identified as an American, yet always retained a valid Spanish passport. At the age of 48, he left his academic position at Harvard University and permanently returned to Europe; his last will was to be buried in the Spanish Pantheon in the Campo di Verano, Rome.
As a philosopher, Santayana is known for aphorisms, such as "Those who cannot remember the past are condemned to repeat it", and "Only the dead have seen the end of war", and his definition of beauty as "pleasure objectified". Although an atheist, Santayana valued the Spanish Catholic values, practices, and worldview in which he was raised. As an intellectual, George Santayana was a broad-range cultural critic spanning several academic disciplines.
Early life
George Santayana was born on December 16, 1863, in Calle de San Bernardo of Madrid and spent his early childhood in Ávila, Spain. His mother Josefina Borrás was the daughter of a Spanish official in the Philippines and he was the only child of her second marriage. Josefina Borrás' first husband was George Sturgis, a Boston merchant with the Manila firm Russell & Sturgis. She had five children with him; two of them died in infancy. She lived in Boston for a few years following her husband's death in 1857; in 1861, she moved with her three surviving children to Madrid. There she encountered Agustín Ruiz de Santayana, an old friend from her years in the Philippines. They married in 1862. A colonial civil servant, Ruiz de Santayana was a painter and minor intellectual. The family lived in Madrid and Ávila, and Jorge was born in Spain in 1863.
In 1869, Josefina Borrás de Santayana returned to Boston with her three Sturgis children, because she had promised her first husband to raise the children in the US. She left the six-year-old Jorge with his father in Spain. Jorge and his father followed her to Boston in 1872. His father, finding neither Boston nor his wife's attitude to his liking, soon returned alone to Ávila, and remained there the rest of his life. Jorge did not see him again until he entered Harvard College and began to take his summer vacations in Spain. Sometime during this period, Jorge's first name was anglicized to its English equivalent: George.
Education
Santayana attended Boston Latin School and Harvard College, where he studied under the philosophers William James and Josiah Royce and was involved in eleven clubs. He was founder and president of the Philosophical Club, a member of the literary society known as the O.K., an editor and cartoonist for The Harvard Lampoon, a member of one of Harvard's "final clubs", the Delphic Club, and co-founder of the literary journal The Harvard Monthly. In December 1885, he played the role of Lady Elfrida in the Hasty Pudding theatrical Robin Hood, followed by the production Papillonetta in the spring of his senior year. He received his A.B. summa cum laude in 1886 and was elected to Phi Beta Kappa. From 1886, Santayana studied for two years in Berlin. He then returned to Harvard to write his dissertation on Hermann Lotze (1889). He was a professor at Harvard from 1889 to 1912, becoming part of the Golden Age of the Harvard University Department of Philosophy. Some of his Harvard students became famous in their own right, including Conrad Aiken, W. E. B. Du Bois, T. S. Eliot, Robert Frost, Horace Kallen, Walter Lippmann and Gertrude Stein. Wallace Stevens was not among his students but became a friend. From 1896 to 1897, Santayana studied at King's College, Cambridge.
Later life
Santayana never married. His romantic life, if any, is not well understood. Some evidence, including a comment Santayana made late in life comparing himself to A. E. Housman, and his friendships with people who were openly homosexual and bisexual, has led scholars to speculate that Santayana was perhaps homosexual or bisexual, but it remains unclear whether he had any actual heterosexual or homosexual relationships.
In 1912, Santayana resigned his position at Harvard to spend the rest of his life in Europe. He had saved money and been aided by a legacy from his mother. After some years in Ávila, Paris and Oxford, after 1920, he began to winter in Rome, eventually living there year-round until his death. During his 40 years in Europe, he wrote 19 books and declined several prestigious academic positions. Many of his visitors and correspondents were Americans, including his assistant and eventual literary executor, Daniel Cory. In later life, Santayana was financially comfortable, in part because his 1935 novel, The Last Puritan, had become an unexpected best-seller. In turn, he financially assisted a number of writers, including Bertrand Russell, with whom he was in fundamental disagreement, philosophically and politically.
Santayana's one novel, The Last Puritan, is a Bildungsroman, centering on the personal growth of its protagonist, Oliver Alden. His Persons and Places is an autobiography. These works also contain many of his sharper opinions and bons mots. He wrote books and essays on a wide range of subjects, including philosophy of a less technical sort, literary criticism, the history of ideas, politics, human nature, morals, the influence of religion on culture and social psychology, all with considerable wit and humour.
While his writings on technical philosophy can be difficult, his other writings are more accessible and pithy. He wrote poems and a few plays, and left ample correspondence, much of it published only since 2000. Like Alexis de Tocqueville, Santayana observed American culture and character from a foreigner's point of view. Like William James, his friend and mentor, he wrote philosophy in a literary way. Ezra Pound includes Santayana among his many cultural references in The Cantos, notably in "Canto LXXXI" and "Canto XCV". Santayana is usually considered an American writer, although he declined to become an American citizen, resided in Fascist Italy for decades, and said that he was most comfortable, intellectually and aesthetically, at Oxford University. Although an atheist, Santayana considered himself an "aesthetic Catholic" and spent the last decade of his life in Rome under the care of Catholic nuns. In 1941, he entered a hospital and convent run by the Little Company of Mary (also known as the Blue Nuns) on the Caelian Hill at 6 Via Santo Stefano Rotondo in Rome, where he was cared for by the sisters until his death in September 1952. Upon his death, he did not want to be buried in consecrated land, which made his burial problematic in Italy. Finally, the Spanish consulate in Rome agreed that he be buried in the Pantheon of the Obra Pía Española, in the Campo Verano cemetery in Rome.
Philosophical work and publications
Santayana's main philosophical work consists of The Sense of Beauty (1896), his first book-length monograph and perhaps the first major work on aesthetics written in the United States; The Life of Reason (5 vols., 1905–06), the high point of his Harvard career; Scepticism and Animal Faith (1923); and The Realms of Being (4 vols., 1927–1940). Although Santayana was not a pragmatist in the mold of William James, Charles Sanders Peirce, Josiah Royce, or John Dewey, The Life of Reason arguably is the first extended treatment of pragmatism written.
Like many of the classical pragmatists, Santayana was committed to metaphysical naturalism. He believed that human cognition, cultural practices, and social institutions have evolved so as to harmonize with the conditions present in their environment. Their value may then be adjudged by the extent to which they facilitate human happiness. The alternate title to The Life of Reason, "the Phases of Human Progress", is indicative of this metaphysical stance.
Santayana was an early adherent of epiphenomenalism, but also admired the classical materialism of Democritus and Lucretius. (Of the three authors on whom he wrote in Three Philosophical Poets, Santayana speaks most favorably of Lucretius). He held Spinoza's writings in high regard, calling him his "master and model".
Although an atheist, he held a fairly benign view of religion and described himself as an "aesthetic Catholic". Santayana's views on religion are outlined in his books Reason in Religion, The Idea of Christ in the Gospels, and Interpretations of Poetry and Religion.
He held racial superiority and eugenic views. He believed superior races should be discouraged from "intermarriage with inferior stock".
Legacy
Santayana is remembered in large part for his aphorisms, many of which have been so frequently used as to have become clichéd. His philosophy has not fared quite as well. He is regarded by most as an excellent prose stylist, and John Lachs (who is sympathetic with much of Santayana's philosophy) writes, in On Santayana, that his eloquence may ironically be the very cause of this neglect.
Santayana influenced those around him, including Bertrand Russell, whom Santayana single-handedly steered away from the ethics of G. E. Moore. He also influenced many prominent people such as Harvard students T. S. Eliot, Robert Frost, Gertrude Stein, Horace Kallen, Walter Lippmann, W. E. B. Du Bois, Conrad Aiken, Van Wyck Brooks, Felix Frankfurter, Max Eastman, and Wallace Stevens. Stevens was especially influenced by Santayana's aesthetics and became a friend even though Stevens did not take courses taught by Santayana.
Santayana is quoted by the Canadian-American sociologist Erving Goffman as a central influence in the thesis of his famous book The Presentation of Self in Everyday Life (1959). Religious historian Jerome A. Stone credits Santayana with contributing to the early thinking in the development of religious naturalism. English mathematician and philosopher Alfred North Whitehead quotes Santayana extensively in his magnum opus Process and Reality (1929).
Chuck Jones used Santayana's description of fanaticism as "redoubling your effort after you've forgotten your aim" to describe his cartoons starring Wile E. Coyote and Road Runner.
In popular culture
Santayana's passing is referenced in the lyrics to singer-songwriter Billy Joel's 1989 single "We Didn't Start the Fire".
The quote "Only the dead have seen the end of war" is frequently attributed or misattributed to Plato; an early example of this misattribution (if it is indeed misattributed) is found in General Douglas MacArthur's Farewell Speech given to the Corps of Cadets at West Point in 1962.
Awards
Royal Society of Literature Benson Medal, 1925.
Columbia University Butler Gold Medal, 1945.
Honorary degree from the University of Wisconsin, 1911.
Bibliography
1894. Sonnets and Other Verses.
1896. The Sense of Beauty: Being the Outline of Aesthetic Theory.
1899. Lucifer: A Theological Tragedy.
1900. Interpretations of Poetry and Religion.
1901. A Hermit of Carmel and Other Poems.
1905–1906. The Life of Reason: The Phases of Human Progress, 5 vols.
1910. Three Philosophical Poets: Lucretius, Dante, and Goethe.
1913. Winds of Doctrine: Studies in Contemporary Opinion.
1915. Egotism in German Philosophy.
1920. Character and Opinion in the United States: With Reminiscences of William James and Josiah Royce and Academic Life in America.
1920. Little Essays, Drawn From the Writings of George Santayana, by Logan Pearsall Smith, with the Collaboration of the Author.
1922. Soliloquies in England and Later Soliloquies.
1922. Poems.
1923. Scepticism and Animal Faith: Introduction to a System of Philosophy.
1926. Dialogues in Limbo
1927. Platonism and the Spiritual Life.
1927–40. The Realms of Being, 4 vols.
1931. The Genteel Tradition at Bay.
1933. Some Turns of Thought in Modern Philosophy: Five Essays
1935. The Last Puritan: A Memoir in the Form of a Novel.
1936. Obiter Scripta: Lectures, Essays and Reviews. Justus Buchler and Benjamin Schwartz, eds.
1944. Persons and Places.
1945. The Middle Span.
1946. The Idea of Christ in the Gospels; or, God in Man: A Critical Essay.
1948. Dialogues in Limbo, With Three New Dialogues.
1951. Dominations and Powers: Reflections on Liberty, Society, and Government.
1953. My Host The World
Posthumous edited/selected works
1955. The Letters of George Santayana. Daniel Cory, ed. Charles Scribner's Sons. New York. (296 letters)
1956. Essays in Literary Criticism of George Santayana. Irving Singer, ed.
1957. The Idler and His Works, and Other Essays. Daniel Cory, ed.
1967. The Genteel Tradition: Nine Essays by George Santayana. Douglas L. Wilson, ed.
1967. George Santayana's America: Essays on Literature and Culture. James Ballowe, ed.
1967. Animal Faith and Spiritual Life: Previously Unpublished and Uncollected Writings by George Santayana With Critical Essays on His Thought. John Lachs, ed.
1968. Santayana on America: Essays, Notes, and Letters on American Life, Literature, and Philosophy. Richard Colton Lyon, ed.
1968. Selected Critical Writings of George Santayana, 2 vols. Norman Henfrey, ed.
1969. Physical Order and Moral Liberty: Previously Unpublished Essays of George Santayana. John and Shirley Lachs, eds.
1979. The Complete Poems of George Santayana: A Critical Edition. Edited, with an introduction, by W. G. Holzberger. Bucknell University Press.
1995. The Birth of Reason and Other Essays. Daniel Cory, ed., with an Introduction by Herman J. Saatkamp, Jr. Columbia Univ. Press.
2009. The Essential Santayana. Selected Writings Edited by the Santayana Edition, Compiled and with an introduction by Martin A. Coleman. Bloomington: Indiana University Press.
2009. The Genteel Tradition in American Philosophy and Character and Opinion in the United States (Rethinking the Western Tradition), Edited and with an introduction by James Seaton and contributions by Wilfred M. McClay, John Lachs, Roger Kimball and James Seaton Yale University Press.
2021. Recently Discovered Letters of George Santayana / Cartas recién descubiertas de George Santayana, Edited and with an introduction by Daniel Pinkas translated by Daniel Moreno, and a Prologue by José Beltrán.
The Works of George Santayana
Unmodernized, critical editions of George Santayana's published and unpublished writing. The Works is edited by the Santayana Edition and published by The MIT Press.
1986. Persons and Places. Santayana's autobiography, incorporating Persons and Places, 1944; The Middle Span, 1945; and My Host the World, 1953.
1988 (1896). The Sense of Beauty: Being the Outline of Aesthetic Theory.
1990 (1900). Interpretations of Poetry and Religion.
1994 (1935). The Last Puritan: A Memoir in the Form of a Novel.
The Letters of George Santayana. Containing over 3,000 of his letters, many discovered posthumously, to more than 350 recipients.
2001. Book One, 1868–1909.
2001. Book Two, 1910–1920.
2002. Book Three, 1921–1927.
2003. Book Four, 1928–1932.
2003. Book Five, 1933–1936.
2004. Book Six, 1937–1940.
2006. Book Seven, 1941–1947.
2008. Book Eight, 1948–1952.
2011. George Santayana's Marginalia: A Critical Selection, Books 1 and 2. Compiled by John O. McCormick and edited by Kristine W. Frost.
The Life of Reason in five books.
2011 (1905). Reason in Common Sense.
2013 (1905). Reason in Society.
2014 (1905). Reason in Religion.
2015 (1905). Reason in Art.
2016 (1906). Reason in Science.
2019 (1910). Three Philosophical Poets: Lucretius, Dante, and Goethe, Critical Edition, Edited by Kellie Dawson and David E. Spiech, with an introduction by James Seaton
2023 (1913). Winds of Doctrine, Critical Edition, Edited by David E Spiech, Martin A. Coleman and Faedra Lazar Weiss, with an introduction by Paul Forster
See also
American philosophy
List of American philosophers
Scientistic materialism
References
Further reading
W. Arnett, 1955. Santayana and the Sense of Beauty, Bloomington, Indiana University Press.
H. T. Kirby-Smith, 1997. A Philosophical Novelist: George Santayana and the Last Puritan. Southern Illinois University Press.
Jeffers, Thomas L., 2005. Apprenticeships: The Bildungsroman from Goethe to Santayana. New York: Palgrave: 159–84.
Lamont, Corliss (ed., with the assistance of Mary Redmer), 1959. Dialogue on George Santayana. New York: Horizon Press.
McCormick, John, 1987. George Santayana: A Biography. Alfred A. Knopf. The biography.
Padrón, Charles and Skowroński, Krzysztof Piotr, eds. 2018. The Life of Reason in an Age of Terrorism, Leiden-Boston: Brill.
Saatkamp, Herman 2021, A Life of Scholarship with Santayana, edited by Charles Padrón and Krzysztof Piotr Skowroński, Leiden-Boston: Brill.
Singer, Irving, 2000. George Santayana, Literary Philosopher. Yale University Press.
Skowroński, Krzysztof Piotr, 2007. Santayana and America: Values, Liberties, Responsibility, Newcastle: Cambridge Scholars Publishing.
Flamm, Matthew Caleb and Skowroński, Krzysztof Piotr (eds), 2007. Under Any Sky: Contemporary Readings of George Santayana. Newcastle: Cambridge Scholars Publishing.
Miguel Alfonso, Ricardo (ed.), 2010, La estética de George Santayana, Madrid: Verbum.
Patella, Giuseppe, Belleza, arte y vida. La estética mediterranea de George Santayana, Valencia, PUV, 2010, pp. 212. .
Pérez Firmat, Gustavo. Tongue Ties: Logo-Eroticism in Anglo-Hispanic Literature. New York: Palgrave Macmillan, 2003.
Moreno, Daniel. Santayana the Philosopher: Philosophy as a Form of Life. Lewisburg: Bucknell University Press, 2015. Translated by Charles Padron.
Kremplewska, Katarzyna. George Santayana's Political Hermeneutics. Brill, 2022.
External links
Critical Edition of the Works of George Santayana
Includes a complete bibliography of the primary literature, and a fair selection of the secondary literature
Internet Encyclopedia of Philosophy: "George Santayana" by Matthew C. Flamm
The Santayana Edition
Overheard in Seville : Bulletin of the Santayana Society
On George Santayana : Spanish-English Blog about Santayana.
LIMBO. BOLETÍN INTERNACIONAL SOBRE SANTAYANA Spanish-English Bulletin about Santayana
"George Santayana: Catholic Atheist" by Richard Butler in Spirituality Today, Vol. 38 (Winter 1986), p. 319
George Santayana, "Many Nations in One Empire" (1934)
1863 births
1952 deaths
19th-century atheists
19th-century American essayists
19th-century American male writers
19th-century American non-fiction writers
19th-century American novelists
19th-century American philosophers
19th-century American poets
19th-century Spanish male writers
19th-century Spanish novelists
19th-century Spanish poets
20th-century atheists
20th-century American essayists
20th-century American male writers
20th-century American novelists
20th-century American philosophers
20th-century American poets
20th-century male writers
20th-century Spanish male writers
20th-century Spanish novelists
20th-century Spanish philosophers
20th-century Spanish poets
Alumni of King's College, Cambridge
American atheists
American autobiographers
American ethicists
American logicians
American male essayists
American male non-fiction writers
American male novelists
American male poets
American memoirists
American people of Catalan descent
American skeptics
Aphorists
Atheist philosophers
Boston Latin School alumni
Deaths from cancer in Lazio
Deaths from stomach cancer in Italy
Epistemologists
Former Roman Catholics
Harvard College alumni
The Harvard Lampoon alumni
Harvard University Department of Philosophy faculty
Materialists
Metaphilosophers
Metaphysicians
Metaphysics writers
Novelists from Massachusetts
Ontologists
Writers from Rome
Phenomenologists
Philosophers of art
Philosophers of culture
American philosophers of education
Philosophers of history
Philosophers of literature
Philosophers of logic
Philosophers of mind
Philosophers of religion
Philosophers of sexuality
Political philosophers
Pragmatists
Rationalists
Social philosophers
Spanish atheists
Spanish autobiographers
Spanish emigrants to the United States
Spanish essayists
Spanish ethicists
Spanish male non-fiction writers
Spanish male novelists
Spanish male poets
Spanish memoirists
Spanish novelists
Spanish people of Catalan descent
Spanish poets
Writers from Boston
Writers from Madrid
Burials at Campo Verano
Orbital mechanics

Orbital mechanics or astrodynamics is the application of ballistics and celestial mechanics to the practical problems concerning the motion of rockets, satellites, and other spacecraft. The motion of these objects is usually calculated from Newton's laws of motion and the law of universal gravitation. Orbital mechanics is a core discipline within space-mission design and control.
Celestial mechanics treats more broadly the orbital dynamics of systems under the influence of gravity, including both spacecraft and natural astronomical bodies such as star systems, planets, moons, and comets. Orbital mechanics focuses on spacecraft trajectories, including orbital maneuvers, orbital plane changes, and interplanetary transfers, and is used by mission planners to predict the results of propulsive maneuvers.
General relativity is a more exact theory than Newton's laws for calculating orbits, and it is sometimes necessary to use it for greater accuracy or in high-gravity situations (e.g. orbits near the Sun).
History
Until the rise of space travel in the twentieth century, there was little distinction between orbital and celestial mechanics. At the time of Sputnik, the field was termed 'space dynamics'. The fundamental techniques, such as those used to solve the Keplerian problem (determining position as a function of time), are therefore the same in both fields. Furthermore, the history of the fields is almost entirely shared.
Johannes Kepler was the first to successfully model planetary orbits to a high degree of accuracy, formulating his first two laws by 1605 and publishing them in 1609. Isaac Newton published more general laws of celestial motion in the first edition of Philosophiæ Naturalis Principia Mathematica (1687), which gave a method for finding the orbit of a body following a parabolic path from three observations. This was used by Edmond Halley to establish the orbits of various comets, including that which bears his name. Newton's method of successive approximation was formalised into an analytic method by Leonhard Euler in 1744, whose work was in turn generalised to elliptical and hyperbolic orbits by Johann Lambert in 1761–1777.
Another milestone in orbit determination was Carl Friedrich Gauss's assistance in the "recovery" of the dwarf planet Ceres in 1801. Gauss's method was able to use just three observations (in the form of pairs of right ascension and declination), to find the six orbital elements that completely describe an orbit. The theory of orbit determination has subsequently been developed to the point where today it is applied in GPS receivers as well as the tracking and cataloguing of newly observed minor planets. Modern orbit determination and prediction are used to operate all types of satellites and space probes, as it is necessary to know their future positions to a high degree of accuracy.
Astrodynamics was developed by astronomer Samuel Herrick beginning in the 1930s. He consulted the rocket scientist Robert Goddard and was encouraged to continue his work on space navigation techniques, as Goddard believed they would be needed in the future. Numerical techniques of astrodynamics were coupled with new powerful computers in the 1960s, and humans were ready to travel to the Moon and return.
Practical techniques
Rules of thumb
The following rules of thumb are useful for situations approximated by classical mechanics under the standard assumptions of astrodynamics outlined below. The specific example discussed is of a satellite orbiting a planet, but the rules of thumb could also apply to other situations, such as orbits of small bodies around a star such as the Sun.
Kepler's laws of planetary motion:
Orbits are elliptical, with the heavier body at one focus of the ellipse. A special case of this is a circular orbit (a circle is a special case of ellipse) with the planet at the center.
A line drawn from the planet to the satellite sweeps out equal areas in equal times no matter which portion of the orbit is measured.
The square of a satellite's orbital period is proportional to the cube of its average distance from the planet.
Without applying force (such as firing a rocket engine), the period and shape of the satellite's orbit will not change.
A satellite in a low orbit (or a low part of an elliptical orbit) moves more quickly with respect to the surface of the planet than a satellite in a higher orbit (or a high part of an elliptical orbit), due to the stronger gravitational attraction closer to the planet.
If thrust is applied at only one point in the satellite's orbit, it will return to that same point on each subsequent orbit, though the rest of its path will change. Thus one cannot move from one circular orbit to another with only one brief application of thrust.
From a circular orbit, thrust applied in a direction opposite to the satellite's motion changes the orbit to an elliptical one; the satellite will descend and reach the lowest orbital point (the periapse) at 180 degrees away from the firing point; then it will ascend back. The period of the resultant orbit will be less than that of the original circular orbit. Thrust applied in the direction of the satellite's motion creates an elliptical orbit with its highest point (apoapse) 180 degrees away from the firing point. The period of the resultant orbit will be longer than that of the original circular orbit.
The consequences of the rules of orbital mechanics are sometimes counter-intuitive. For example, if two spacecraft are in the same circular orbit and wish to dock, unless they are very close, the trailing craft cannot simply fire its engines to go faster. This will change the shape of its orbit, causing it to gain altitude and actually slow down relative to the leading craft, missing the target. The space rendezvous before docking normally takes multiple precisely calculated engine firings over multiple orbital periods, requiring hours or even days to complete.
To the extent that the standard assumptions of astrodynamics do not hold, actual trajectories will vary from those calculated. For example, simple atmospheric drag is another complicating factor for objects in low Earth orbit.
These rules of thumb are decidedly inaccurate when describing two or more bodies of similar mass, such as a binary star system (see n-body problem). Celestial mechanics uses more general rules applicable to a wider variety of situations. Kepler's laws of planetary motion, which can be mathematically derived from Newton's laws, hold strictly only in describing the motion of two gravitating bodies in the absence of non-gravitational forces; they also describe parabolic and hyperbolic trajectories. In the close proximity of large objects like stars the differences between classical mechanics and general relativity also become important.
Laws of astrodynamics
The fundamental laws of astrodynamics are Newton's law of universal gravitation and Newton's laws of motion, while the fundamental mathematical tool is differential calculus.
In a Newtonian framework, the laws governing orbits and trajectories are in principle time-symmetric.
Standard assumptions in astrodynamics include non-interference from outside bodies, negligible mass for one of the bodies, and negligible other forces (such as from the solar wind, atmospheric drag, etc.). More accurate calculations can be made without these simplifying assumptions, but they are more complicated. The increased accuracy often does not make enough of a difference in the calculation to be worthwhile.
Kepler's laws of planetary motion may be derived from Newton's laws, when it is assumed that the orbiting body is subject only to the gravitational force of the central attractor. When an engine thrust or propulsive force is present, Newton's laws still apply, but Kepler's laws are invalidated. When the thrust stops, the resulting orbit will be different but will once again be described by Kepler's laws which have been set out above. The three laws are:
The orbit of every planet is an ellipse with the Sun at one of the foci.
A line joining a planet and the Sun sweeps out equal areas during equal intervals of time.
The squares of the orbital periods of planets are directly proportional to the cubes of the semi-major axis of the orbits.
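As an illustration (not part of the source article), the third law can be checked numerically in the form T = 2π√(a³/μ), where μ = GM is the gravitational parameter of the central body. The constants below are assumed standard textbook values, not figures taken from this text:

```python
import math

# Standard gravitational parameter of the Sun, m^3/s^2 (assumed value).
MU_SUN = 1.32712440018e20

def orbital_period(a_m, mu=MU_SUN):
    """Kepler's third law: T = 2*pi*sqrt(a^3/mu), a = semi-major axis in metres."""
    return 2 * math.pi * math.sqrt(a_m**3 / mu)

# Earth's semi-major axis, roughly 1 AU in metres (assumed value).
a_earth = 1.495978707e11
T_days = orbital_period(a_earth) / 86400
print(T_days)  # ~365.26 days, as expected for Earth
```

Doubling the semi-major axis multiplies the period by 2^1.5 ≈ 2.83, which is exactly the period-squared-proportional-to-distance-cubed relation stated above.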
Escape velocity
The formula for an escape velocity is derived as follows. The specific energy (energy per unit mass) of any space vehicle is composed of two components, the specific potential energy and the specific kinetic energy. The specific potential energy associated with a planet of mass M is given by
ε_p = −GM/r
where G is the gravitational constant and r is the distance between the two bodies;
while the specific kinetic energy of an object is given by
ε_k = v²/2
where v is its velocity;
and so the total specific orbital energy is
ε = ε_k + ε_p = v²/2 − GM/r
Since energy is conserved, ε cannot depend on the distance, r, from the center of the central body to the space vehicle in question, i.e. v must vary with r to keep the specific orbital energy constant. Therefore, the object can reach infinite r only if this quantity is nonnegative, which implies
v ≥ √(2GM/r)
The escape velocity from the Earth's surface is about 11 km/s, but that is insufficient to send the body an infinite distance because of the gravitational pull of the Sun. To escape the Solar System from a location at a distance from the Sun equal to the distance Sun–Earth, but not close to the Earth, requires around 42 km/s velocity, but there will be "partial credit" for the Earth's orbital velocity for spacecraft launched from Earth, if their further acceleration (due to the propulsion system) carries them in the same direction as Earth travels in its orbit.
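The derivation above can be sketched directly in code; the mass and radius used here are assumed approximate values for Earth:

```python
import math

G = 6.6743e-11        # gravitational constant, m^3/(kg*s^2)

def escape_velocity(M, r):
    """Smallest speed at distance r from mass M for which the total
    specific orbital energy v^2/2 - G*M/r is nonnegative."""
    return math.sqrt(2 * G * M / r)

# Assumed approximate Earth mass (kg) and mean radius (m):
v_esc = escape_velocity(5.972e24, 6.371e6)
```

This returns roughly 11.2 km/s, matching the figure quoted for the Earth's surface.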
Formulae for free orbits
Orbits are conic sections, so the formula for the distance of a body for a given angle corresponds to the formula for that curve in polar coordinates, which is:
r = p / (1 + e·cos θ)
μ = G(m₁ + m₂) is called the gravitational parameter. m₁ and m₂ are the masses of objects 1 and 2, and h is the specific angular momentum of object 2 with respect to object 1. The parameter θ is known as the true anomaly, p = h²/μ is the semi-latus rectum, while e is the orbital eccentricity, all obtainable from the various forms of the six independent orbital elements.
Circular orbits
All bounded orbits where the gravity of a central body dominates are elliptical in nature. A special case of this is the circular orbit, which is an ellipse of zero eccentricity. The formula for the velocity of a body in a circular orbit at distance r from the center of gravity of mass M can be derived as follows:
Centrifugal acceleration matches the acceleration due to gravity: v²/r = GM/r².
So, v² = GM/r.
Therefore, v = √(GM/r),
where G is the gravitational constant, equal to
6.6743 × 10⁻¹¹ m³/(kg·s²)
To properly use this formula, the units must be consistent; for example, M must be in kilograms, and r must be in meters. The answer will be in meters per second.
The quantity GM is often termed the standard gravitational parameter, which has a different value for every planet or moon in the Solar System.
Once the circular orbital velocity is known, the escape velocity is easily found by multiplying by √2:
v_esc = √2·√(GM/r) = √(2GM/r)
To escape from gravity, the kinetic energy must at least match the negative potential energy. Therefore, v_esc²/2 = GM/r, giving the same result.
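The circular-orbit speed and its √2 relation to escape speed can be sketched as follows; the Earth mass and the ~400 km altitude are assumed illustrative values:

```python
import math

G = 6.6743e-11        # gravitational constant, m^3/(kg*s^2)

def circular_velocity(M, r):
    """Speed of a circular orbit of radius r around a central mass M,
    from equating centripetal and gravitational acceleration."""
    return math.sqrt(G * M / r)

M_earth = 5.972e24                # assumed approximate Earth mass, kg
r = 6.371e6 + 400e3               # ~400 km altitude orbit, m
v_circ = circular_velocity(M_earth, r)
v_esc = math.sqrt(2) * v_circ     # escape velocity from the same radius
```

At this altitude the circular speed comes out near 7.7 km/s.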
Elliptical orbits
If 0 < e < 1, then the denominator of the equation of free orbits varies with the true anomaly θ, but remains positive, never becoming zero. Therefore, the relative position vector remains bounded, having its smallest magnitude at periapsis r_p, which is given by:
r_p = p / (1 + e)
The maximum value is reached when θ = 180°. This point is called the apoapsis, and its radial coordinate, denoted r_a, is
r_a = p / (1 − e)
Let 2a be the distance measured along the apse line from periapsis P to apoapsis A, as illustrated in the equation below:
2a = r_p + r_a
Substituting the equations above, we get:
a = p / (1 − e²)
a is the semimajor axis of the ellipse. Solving for p, and substituting the result in the conic section curve formula above, we get:
r = a(1 − e²) / (1 + e·cos θ)
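The ellipse relations above translate directly into code; the numeric values below are purely illustrative:

```python
def periapsis_apoapsis(a, e):
    """Periapsis and apoapsis radii of an ellipse with semi-major axis a
    and eccentricity e: r_p = a(1 - e), r_a = a(1 + e)."""
    return a * (1 - e), a * (1 + e)

def semi_latus_rectum(a, e):
    """p = a(1 - e^2), the parameter of the conic-section orbit equation."""
    return a * (1 - e**2)

r_p, r_a = periapsis_apoapsis(a=10000.0, e=0.2)   # illustrative numbers, km
```

Note that r_p + r_a = 2a recovers the apse-line relation used in the derivation.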
Orbital period
Under standard assumptions the orbital period (T) of a body traveling along an elliptic orbit can be computed as:
T = 2π·√(a³/μ)
where:
μ is the standard gravitational parameter,
a is the length of the semi-major axis.
Conclusions:
The orbital period is equal to that for a circular orbit with the orbit radius equal to the semi-major axis (r = a),
For a given semi-major axis the orbital period does not depend on the eccentricity (See also: Kepler's third law).
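The period formula can be sketched and sanity-checked against the geostationary orbit, whose radius (~42,164 km) and period (~one sidereal day) are well known:

```python
import math

MU_EARTH = 3.986004418e14   # Earth's standard gravitational parameter, m^3/s^2

def orbital_period(a, mu=MU_EARTH):
    """Period T = 2*pi*sqrt(a^3 / mu) of an elliptic orbit with
    semi-major axis a (meters)."""
    return 2 * math.pi * math.sqrt(a**3 / mu)

T_geo = orbital_period(4.2164e7)   # geostationary radius, m
```

The result lands within a few seconds of the sidereal day (86,164 s), and by the second conclusion it would be unchanged for any eccentric orbit with the same semi-major axis.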
Velocity
Under standard assumptions the orbital speed (v) of a body traveling along an elliptic orbit can be computed from the vis-viva equation as:
v = √(μ·(2/r − 1/a))
where:
μ is the standard gravitational parameter,
r is the distance between the orbiting bodies,
a is the length of the semi-major axis.
The velocity equation for a hyperbolic trajectory is v = √(μ·(2/r + 1/|a|)), the same vis-viva relation with a negative semi-major axis.
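The vis-viva equation is a one-liner in code; the check below uses the fact that on a circular orbit r = a, so it must reduce to the circular-orbit speed √(μ/r):

```python
import math

def vis_viva(r, a, mu):
    """Orbital speed at distance r on an orbit with semi-major axis a.
    For a hyperbolic trajectory pass a < 0 and the same formula applies."""
    return math.sqrt(mu * (2 / r - 1 / a))

MU_EARTH = 3.986004418e14   # m^3/s^2
r = 7.0e6                   # illustrative radius, m
v_circular = vis_viva(r, r, MU_EARTH)   # r == a: circular case
```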
Energy
Under standard assumptions, the specific orbital energy (ε) of an elliptic orbit is negative and the orbital energy conservation equation (the vis-viva equation) for this orbit can take the form:
ε = v²/2 − μ/r = −μ/(2a)
where:
v is the speed of the orbiting body,
r is the distance of the orbiting body from the center of mass of the central body,
a is the semi-major axis,
μ is the standard gravitational parameter.
Conclusions:
For a given semi-major axis the specific orbital energy is independent of the eccentricity.
Using the virial theorem we find:
the time-average of the specific potential energy is equal to −μ/a,
the time-average of r⁻¹ is a⁻¹,
the time-average of the specific kinetic energy is equal to μ/(2a).
Parabolic orbits
If the eccentricity equals 1, then the orbit equation becomes:
r = (h²/μ) / (1 + cos θ)
where:
r is the radial distance of the orbiting body from the mass center of the central body,
h is the specific angular momentum of the orbiting body,
θ is the true anomaly of the orbiting body,
μ is the standard gravitational parameter.
As the true anomaly θ approaches 180°, the denominator approaches zero, so that r tends towards infinity. Hence, the energy of the trajectory for which e = 1 is zero, and is given by:
ε = v²/2 − μ/r = 0
where:
v is the speed of the orbiting body.
In other words, the speed anywhere on a parabolic path is:
v = √(2μ/r)
Hyperbolic orbits
If e > 1, the orbit formula,
r = (h²/μ) / (1 + e·cos θ)
describes the geometry of the hyperbolic orbit. The system consists of two symmetric curves. The orbiting body occupies one of them; the other one is its empty mathematical image. Clearly, the denominator of the equation above goes to zero when cos θ = −1/e. We denote this value of true anomaly
θ∞ = cos⁻¹(−1/e)
since the radial distance approaches infinity as the true anomaly approaches θ∞, known as the true anomaly of the asymptote. Observe that θ∞ lies between 90° and 180°. From the trigonometric identity sin²θ + cos²θ = 1 it follows that:
sin θ∞ = √(e² − 1)/e
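The true anomaly of the asymptote is a simple inverse cosine, and the stated bound (between 90° and 180°) follows because −1/e is negative for any e > 1:

```python
import math

def true_anomaly_of_asymptote(e):
    """theta_inf = arccos(-1/e) for a hyperbolic orbit (e > 1), in degrees."""
    return math.degrees(math.acos(-1.0 / e))

theta_inf = true_anomaly_of_asymptote(1.5)   # illustrative eccentricity
```

For e = 1.5 this gives about 131.8°; as e grows toward infinity the asymptote angle falls toward 90°, and as e approaches 1 it opens toward 180° (the parabolic limit).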
Energy
Under standard assumptions, the specific orbital energy (ε) of a hyperbolic trajectory is greater than zero and the orbital energy conservation equation for this kind of trajectory takes the form:
ε = v²/2 − μ/r = −μ/(2a)
where:
v is the orbital velocity of the orbiting body,
r is the radial distance of the orbiting body from the central body,
a is the negative semi-major axis of the orbit's hyperbola,
μ is the standard gravitational parameter.
Hyperbolic excess velocity
Under standard assumptions the body traveling along a hyperbolic trajectory will attain at infinity an orbital velocity called hyperbolic excess velocity (v∞) that can be computed as:
v∞ = √(−μ/a)
where:
μ is the standard gravitational parameter,
a is the negative semi-major axis of the orbit's hyperbola.
The hyperbolic excess velocity is related to the specific orbital energy or characteristic energy by
2ε = C₃ = v∞²
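Combining the energy equation with the C₃ relation gives a direct way to compute v∞ from a speed measured at any radius; the 11.5 km/s at ~200 km altitude below is an assumed illustrative departure state:

```python
import math

MU_EARTH = 3.986004418e14   # m^3/s^2

def v_infinity(v, r, mu=MU_EARTH):
    """Hyperbolic excess speed from speed v at radius r.
    C3 = v^2 - 2*mu/r is twice the specific orbital energy (must be > 0)."""
    c3 = v**2 - 2 * mu / r
    if c3 <= 0:
        raise ValueError("trajectory is not hyperbolic")
    return math.sqrt(c3)

# Assumed example: 11.5 km/s at r ~ 6571 km (about 200 km altitude):
v_inf = v_infinity(11500.0, 6.571e6)
```

Here the excess speed comes out near 3.3 km/s: the spacecraft keeps only the energy left over above the local escape speed.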
Calculating trajectories
Kepler's equation
One approach to calculating orbits (mainly used historically) is to use Kepler's equation:
M = E − e·sin E
where M is the mean anomaly, E is the eccentric anomaly, and e is the eccentricity.
With Kepler's formula, finding the time-of-flight to reach an angle (true anomaly) of θ from periapsis is broken into two steps:
Compute the eccentric anomaly E from the true anomaly θ
Compute the time-of-flight from the eccentric anomaly E
Finding the eccentric anomaly at a given time (the inverse problem) is more difficult. Kepler's equation is transcendental in E, meaning it cannot be solved for E algebraically. Kepler's equation can be solved for E analytically by inversion; one such solution, valid for eccentricities 0 ≤ e < 1, is the Kapteyn series of Bessel functions:
E = M + Σ_{n=1}^{∞} (2/n)·J_n(ne)·sin(nM)
where J_n is the Bessel function of the first kind.
Alternatively, Kepler's Equation can be solved numerically. First one must guess a value of and solve for time-of-flight; then adjust as necessary to bring the computed time-of-flight closer to the desired value until the required precision is achieved. Usually, Newton's method is used to achieve relatively fast convergence.
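The numerical route described above can be sketched with Newton's method; the starting-guess rule is a common convention, not prescribed by the source:

```python
import math

def solve_kepler(M, e, tol=1e-12, max_iter=100):
    """Solve Kepler's equation M = E - e*sin(E) for the eccentric
    anomaly E by Newton's method (elliptic case, 0 <= e < 1)."""
    E = M if e < 0.8 else math.pi          # common starting guess
    for _ in range(max_iter):
        # Newton step on f(E) = E - e*sin(E) - M, with f'(E) = 1 - e*cos(E)
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            return E
    raise RuntimeError("Kepler solver did not converge")

E = solve_kepler(M=1.0, e=0.3)
```

For moderate eccentricities this converges in a handful of iterations; the residual E − e·sin E − M can be checked directly.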
The main difficulty with this approach is that it can take prohibitively long to converge for extreme elliptical orbits. For near-parabolic orbits, eccentricity e is nearly 1, and substituting e ≈ 1 into the formula for mean anomaly, M = E − e·sin E, we find ourselves subtracting two nearly equal values, and accuracy suffers. For near-circular orbits, it is hard to find the periapsis in the first place (and truly circular orbits have no periapsis at all). Furthermore, the equation was derived on the assumption of an elliptical orbit, and so it does not hold for parabolic or hyperbolic orbits. These difficulties are what led to the development of the universal variable formulation, described below.
Conic orbits
For simple procedures, such as computing the delta-v for coplanar transfer ellipses, traditional approaches are fairly effective. Others, such as time-of-flight are far more complicated, especially for near-circular and hyperbolic orbits.
The patched conic approximation
The Hohmann transfer orbit alone is a poor approximation for interplanetary trajectories because it neglects the planets' own gravity. Planetary gravity dominates the behavior of the spacecraft in the vicinity of a planet, and in most cases Hohmann severely overestimates delta-v and produces highly inaccurate prescriptions for burn timings. A relatively simple way to get a first-order approximation of delta-v is based on the "patched conic approximation" technique. One must choose the one dominant gravitating body in each region of space through which the trajectory will pass, and model only that body's effects in that region. For instance, on a trajectory from the Earth to Mars, one would begin by considering only the Earth's gravity until the trajectory reaches a distance where the Earth's gravity no longer dominates that of the Sun. The spacecraft would be given escape velocity to send it on its way to interplanetary space. Next, one would consider only the Sun's gravity until the trajectory reaches the neighborhood of Mars. During this stage, the transfer orbit model is appropriate. Finally, only Mars's gravity is considered during the final portion of the trajectory where Mars's gravity dominates the spacecraft's behavior. The spacecraft would approach Mars on a hyperbolic orbit, and a final retrograde burn would slow the spacecraft enough to be captured by Mars. Friedrich Zander was one of the first to apply the patched-conics approach for astrodynamics purposes, when proposing the use of intermediary bodies' gravity for interplanetary travel, in what is known today as a gravity assist.
The size of the "neighborhoods" (or spheres of influence) varies with the radius r_SOI:
r_SOI ≈ a_p·(m_p/m_S)^(2/5)
where a_p is the semimajor axis of the planet's orbit relative to the Sun; m_p and m_S are the masses of the planet and Sun, respectively.
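The sphere-of-influence estimate is a single expression; the masses and Earth–Sun distance below are assumed approximate values:

```python
# Laplace sphere-of-influence estimate: r_SOI ~ a * (m_planet / m_Sun)**(2/5).
M_SUN = 1.989e30      # assumed approximate solar mass, kg
M_EARTH = 5.972e24    # assumed approximate Earth mass, kg
AU = 1.496e11         # assumed Earth-Sun distance, m

r_soi_earth = AU * (M_EARTH / M_SUN) ** (2 / 5)
```

This gives a little under a million kilometers for Earth, which is why lunar trajectories (at ~384,000 km) still fall well inside Earth's "neighborhood" in a patched-conic model.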
This simplification is sufficient to compute rough estimates of fuel requirements, and rough time-of-flight estimates, but it is not generally accurate enough to guide a spacecraft to its destination. For that, numerical methods are required.
The universal variable formulation
To address computational shortcomings of traditional approaches for solving the 2-body problem, the universal variable formulation was developed. It works equally well for the circular, elliptical, parabolic, and hyperbolic cases, the differential equations converging well when integrated for any orbit. It also generalizes well to problems incorporating perturbation theory.
Perturbations
The universal variable formulation works well with the variation of parameters technique, except now, instead of the six Keplerian orbital elements, we use a different set of orbital elements: namely, the satellite's initial position and velocity vectors r₀ and v₀ at a given epoch t = 0. In a two-body simulation, these elements are sufficient to compute the satellite's position and velocity at any time in the future, using the universal variable formulation. Conversely, at any moment in the satellite's orbit, we can measure its position and velocity, and then use the universal variable approach to determine what its initial position and velocity would have been at the epoch. In perfect two-body motion, these orbital elements would be invariant (just like the Keplerian elements would be).
However, perturbations cause the orbital elements to change over time. Hence, the position element is written as r₀(t) and the velocity element as v₀(t), indicating that they vary with time. The technique to compute the effect of perturbations becomes one of finding expressions, either exact or approximate, for the functions r₀(t) and v₀(t).
The following are some effects which make real orbits differ from the simple models based on a spherical Earth. Most of them can be handled on short timescales (perhaps less than a few thousand orbits) by perturbation theory because they are small relative to the corresponding two-body effects.
Equatorial bulges cause precession of the node and the perigee
Tesseral harmonics of the gravity field introduce additional perturbations
Lunar and solar gravity perturbations alter the orbits
Atmospheric drag reduces the semi-major axis unless make-up thrust is used
Over very long timescales (perhaps millions of orbits), even small perturbations can dominate, and the behavior can become chaotic. On the other hand, the various perturbations can be orchestrated by clever astrodynamicists to assist with orbit maintenance tasks, such as station-keeping, ground track maintenance or adjustment, or phasing of perigee to cover selected targets at low altitude.
Orbital maneuver
In spaceflight, an orbital maneuver is the use of propulsion systems to change the orbit of a spacecraft. For spacecraft far from Earth—for example those in orbits around the Sun—an orbital maneuver is called a deep-space maneuver (DSM).
Orbital transfer
Transfer orbits are usually elliptical orbits that allow spacecraft to move from one (usually substantially circular) orbit to another. Usually they require a burn at the start, a burn at the end, and sometimes one or more burns in the middle.
The Hohmann transfer orbit requires a minimal delta-v.
A bi-elliptic transfer can require less energy than the Hohmann transfer, if the ratio of the orbital radii is 11.94 or greater, but comes at the cost of increased trip time over the Hohmann transfer.
Faster transfers may use any orbit that intersects both the original and destination orbits, at the cost of higher delta-v.
Using low-thrust engines (such as electric propulsion), if the initial orbit is supersynchronous to the final desired circular orbit, then the optimal transfer orbit is achieved by thrusting continuously in the direction of the velocity at apogee. However, this method takes much longer due to the low thrust.
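The two impulsive burns of a Hohmann transfer follow directly from the vis-viva equation; the LEO-to-geostationary radii below are assumed illustrative values:

```python
import math

def hohmann_delta_v(r1, r2, mu):
    """Total delta-v of the two burns of a Hohmann transfer between
    coplanar circular orbits of radii r1 and r2."""
    a_t = (r1 + r2) / 2   # semi-major axis of the transfer ellipse
    # Burn 1: circular speed at r1 -> transfer-ellipse speed at periapsis
    dv1 = abs(math.sqrt(mu * (2 / r1 - 1 / a_t)) - math.sqrt(mu / r1))
    # Burn 2: transfer-ellipse speed at apoapsis -> circular speed at r2
    dv2 = abs(math.sqrt(mu / r2) - math.sqrt(mu * (2 / r2 - 1 / a_t)))
    return dv1 + dv2

MU_EARTH = 3.986004418e14
# Assumed example: ~300 km LEO to geostationary radius:
dv_total = hohmann_delta_v(6.678e6, 4.2164e7, MU_EARTH)
```

For this pair of orbits the total comes out near 3.9 km/s, a commonly quoted ballpark for LEO-to-GEO transfers.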
For the case of orbital transfer between non-coplanar orbits, the change-of-plane thrust must be made at the point where the orbital planes intersect (the "node"). As the objective is to change the direction of the velocity vector by an angle equal to the angle between the planes, almost all of this thrust should be made when the spacecraft is at the node near the apoapse, when the magnitude of the velocity vector is at its lowest. However, a small fraction of the orbital inclination change can be made at the node near the periapse, by slightly angling the transfer orbit injection thrust in the direction of the desired inclination change. This works because the cosine of a small angle is very nearly one, resulting in the small plane change being effectively "free" despite the high velocity of the spacecraft near periapse, as the Oberth Effect due to the increased, slightly angled thrust exceeds the cost of the thrust in the orbit-normal axis.
Gravity assist and the Oberth effect
In a gravity assist, a spacecraft swings by a planet and leaves in a different direction, at a different speed. This is useful to speed or slow a spacecraft instead of carrying more fuel.
This maneuver can be approximated by an elastic collision at large distances, though the flyby does not involve any physical contact. Due to Newton's Third Law (equal and opposite reaction), any momentum gained by a spacecraft must be lost by the planet, or vice versa. However, because the planet is much, much more massive than the spacecraft, the effect on the planet's orbit is negligible.
The Oberth effect can be employed, particularly during a gravity assist operation. Because a burn at higher speed produces a greater change in kinetic energy for the same propellant, course changes are best done when close to a gravitating body, where the spacecraft is moving fastest; this can multiply the effective delta-v.
Interplanetary Transport Network and fuzzy orbits
It is now possible to use computers to search for routes using the nonlinearities in the gravity of the planets and moons of the Solar System. For example, it is possible to plot an orbit from high Earth orbit to Mars, passing close to one of the Earth's Trojan points. Collectively referred to as the Interplanetary Transport Network, these highly perturbative, even chaotic, orbital trajectories in principle need no fuel beyond that needed to reach the Lagrange point (in practice, keeping to the trajectory requires some course corrections). The biggest problem with them is that they can be exceedingly slow, taking many years. In addition, launch windows can be very far apart.
They have, however, been employed on projects such as Genesis. This spacecraft visited the Earth–Sun L1 point and returned using very little propellant.
See also
Celestial mechanics
Chaos theory
Kepler orbit
Lagrange point
Mechanical engineering
N-body problem
Roche limit
Spacecraft propulsion
Universal variable formulation
References
Further reading
Many of the options, procedures, and supporting theory are covered in standard works such as:
External links
ORBITAL MECHANICS (Rocket and Space Technology)
Java Astrodynamics Toolkit
Astrodynamics-based Space Traffic and Event Knowledge Graph
Differential heat treatment (also called selective heat treatment or local heat treatment) is a technique used during heat treating of steel to harden or soften certain areas of an object, creating a difference in hardness between these areas. There are many techniques for creating a difference in properties, but most can be defined as either differential hardening or differential tempering. These were common heat treatment techniques used historically in Europe and Asia, with possibly the most widely known example being from Japanese swordsmithing. Some modern varieties were developed in the twentieth century as metallurgical knowledge and technology rapidly increased.
Differential hardening is done by either of two methods. One of them is heating the steel evenly to a red-hot temperature and then cooling part of it quickly, turning that part into very hard martensite while the rest cools more slowly and becomes softer pearlite. The other is heating only part of the steel very quickly to red-hot and then rapidly cooling it by quenching, again turning that part into martensite, but leaving the rest unchanged. Conversely, one may selectively harden steel by differential tempering, that is, by heating it evenly to red-hot and then quenching it, turning it into martensite, and then tempering part of it by heating it to a much lower temperature, softening only that part.
Introduction
Differential heat treatment is a method used to alter the properties of various parts of a steel object differently, producing areas that are harder or softer than others. This creates greater toughness in the parts of the object where it is needed, such as the tang or spine of a sword, but produces greater hardness at the edge or other areas where greater impact resistance, wear resistance, and strength is needed. Differential heat treatment can often make certain areas harder than could be allowed if the steel was uniformly treated, or "through treated". There are several techniques used to differentially heat treat steel, but they can usually be divided into differential hardening and differential tempering methods.
During heat treating, when red-hot steel is quenched, it becomes very hard. However, it will be too hard, becoming very brittle, like glass. Quenched steel is usually heated again, slowly and evenly, in a process called tempering, to soften the metal, thereby increasing the toughness. However, although this softening of the metal makes the blade less prone to breaking, it makes the edge more susceptible to deformation such as dulling, peening, or curling.
Differential hardening is a method used in heat treating swords and knives to increase the hardness of the edge without making the whole blade brittle. To achieve this, the edge is cooled faster than the spine by adding a heat insulator to the spine before quenching. Clay or another material is used for insulation. To prevent cracking and loss of surface carbon, quenching is usually performed before beveling, shaping, and sharpening the edge.
It can also be achieved by carefully pouring water (perhaps already heated) onto the edge of a blade, as is the case in the manufacture of some kukri. Differential hardening technology originated in China and later spread to Korea and Japan. This technique is mainly used in later Chinese jian, Chinese dao, and the katana, the traditional Japanese sword, and the khukuri, the traditional Nepalese knife. Most blades made with this technique have visible temper lines. Earlier Chinese jian from the ancient era (e.g. Warring States to Han dynasty) used tempering rather than differential heat treatment. This method is sometimes called differential tempering, but this term more accurately refers to a different technique, which originated with the broadswords of Europe.
Modern versions of differential hardening were developed when sources of rapidly heating the metal were devised, such as an oxy-acetylene torch or induction heating. With flame hardening and induction hardening techniques, the steel is quickly heated to red-hot in a localized area and then quenched. This hardens only part of the object, but leaves the rest unaltered.
Differential tempering was more commonly used to make cutting tools, although it was sometimes used on knives and swords as well. Differential tempering is obtained by quenching the sword uniformly, then tempering one part of it, such as the spine or the center portion of double edged blades. This is usually done with a torch or some other directed heat source. The heated portion of the metal is softened by this process, leaving the edge at the higher hardness.
Differential hardening
Bladesmithing
Differential hardening (also called differential quenching, selective quenching, selective hardening, or local hardening) is most commonly used in bladesmithing to increase the toughness of a blade while keeping very high hardness and strength at the edge. This helps to make the blade very resistant to breaking, by making the spine very soft and bendable, but allows greater hardness at the edge than would be possible if the blade was uniformly quenched and tempered. This helps to create a tough blade that will maintain a very sharp, wear-resistant edge, even during rough use such as found in combat.
Insulation coatings
A differentially hardened blade will usually be coated with an insulating layer, like clay, but leaving the edge exposed. When it is heated to red-hot and quenched, the edge cools quickly, becoming very hard, but the rest cools slowly, becoming much softer. The insulation layer is quite often a mixture of clays, ashes, polishing stone powder, and salts, which protects the back of the blade from cooling very quickly when quenched. The clay is often applied by painting it on, coating the blade very thickly around the center and spine, but leaving the edge exposed. This allows the edge to cool very quickly, turning it into a very hard microstructure called martensite, but causes the rest of the blade to cool slowly, turning it into a soft microstructure called pearlite. This produces an edge that is exceptionally hard and brittle, but is backed up by softer, tougher metal. The edge, however, will usually be too hard, so after quenching the entire blade is usually tempered for a short time, to bring the hardness of the edge down to around HRc60 on the Rockwell hardness scale.
The exact composition of the clay mixture, the thickness of the coating, and even the temperature of the water were often closely guarded secrets of the various bladesmithing schools. With the clay mixture, the main goal was to find a mixture that would withstand high temperatures and adhere to the blade without shrinking, cracking, or peeling as it dried. Sometimes the back of the blade was coated with clay, leaving the edge exposed. Other times the entire blade was coated and then the clay was cut away from the edge. Another method was to apply the clay thickly at the back but thinly at the edge, providing a lesser amount of insulation. By controlling the thickness of the edge-coating along with the temperature of the water, the cooling rate of each part of the blade can be controlled to produce the proper hardness upon quenching without the need for further tempering.
Quenching
Once the coating has dried, the blade is heated slowly and evenly, to prevent the coating from cracking or falling off. After the blade is heated to the proper temperature, which is usually judged by the cherry-red glow (blackbody radiation) of the blade, it will change into a phase called austenite. Both to help prevent cracking and to produce uniformity in the hardness of each area, the smith will need to ensure that the temperature is even, lacking any hot spots from sitting next to the coals. To prevent this, the blade is usually kept in motion while heating, to distribute the heat more evenly. Quenching is often done in low-light conditions, to help accurately judge the color of the glow. Typically, the smith will also try to avoid overheating the blade to prevent the metallic crystals from growing too large. At this time the blade will usually be plunged into a vat of water or oil, to quickly remove the heat from the edge. The clay, in turn, insulates the back of the blade, causing it to cool slower than the edge.
When the edge cools fast a diffusionless transformation occurs, turning the austenite into very hard martensite. This requires a temperature drop from around 750 °C (cherry-red) to 450 °C (at which point the transformation is complete) in less than a second to prevent the formation of soft pearlite. Because the rest of the blade cools slowly, the carbon in the austenite has time to precipitate, becoming pearlite. The diffusionless transformation causes the edge to "freeze" suddenly in a thermally expanded state, but allows the back to contract as it cools slower. This typically causes the blade to bend or curve during quenching, as the back contracts more than the edge. This gives swords like katana and wakizashi their characteristic curved shapes. The blade is usually straight when heated but then bows as it cools; first curving toward the edge as it contracts, and then away from the edge as the spine contracts more. With slashing-type swords, this curvature helps to facilitate cutting, but increases the chances of cracking during the procedure. Up to one third of all swords are ruined during the quenching process. However, when the sword does not crack, the internal stresses created help increase the toughness of the blade, similar to the increased toughness in tempered glass. The sword may need further shaping after quenching and tempering, to achieve the desired curvature.
Care must be taken to plunge the sword quickly and vertically (edge first), for if one side enters the quenching fluid before the other the cooling may be asymmetric and cause the blade to bend sideways (warp). Because quenching in water tends to cause a sudden loss of surface carbon, the sword will usually be quenched before the edge is beveled and sharpened. After quenching and tempering, the blade was traditionally given a rough shape with a metal-cutting draw knife (sen) before sending to a polisher for sharpening, although in modern times an electric belt sander is often used instead.
Metallography
Differential hardening will produce two different zones of hardness, which respond differently to grinding, sharpening, and polishing. The back and center of the blade will grind away much quicker than the edge, so the polisher will need to carefully control the angle of the edge, which will affect the geometry of the blade. An inexperienced polisher can quickly ruin a blade by applying too much pressure to the softened areas, rapidly altering the blade's shape without much change to the hardened zone.
Although both the pearlite and martensite can be polished to a mirror-like shine, only the back and spine are usually polished to such an extent. The hardened portion of the blade (yakiba) and the center portion (hira) are often given a matte finish instead, to make the differences in the hardness stand out. This causes the various microstructures to reflect light differently when viewed from different angles. The pearlite takes on longer, deeper scratches, and either appears shiny and bright, or sometimes dark depending on the viewing angle. The martensite is harder to scratch, so the microscopic abrasions are smaller. The martensite usually appears brighter yet flatter than the pearlite, and this is less dependent on the viewing angle. When polished or etched with acid to reveal these features, a distinct boundary is observed between the martensite portion of the blade and the pearlite. This boundary is often called the "temper line", or the commonly used Japanese term, the "hamon". Between the hardened edge and the hamon lies an intermediate zone, called the "nioi" in Japanese, which is usually only visible at long angles. The nioi is about a millimeter or two wide, following the hamon, which is made up of individual martensite grains (niye) surrounded by pearlite. The nioi provides a very tough boundary between the yakiba and the hira.
Decorative hardening
In Japan, from the legendary time of the famous smith Amakuni, hamons were originally straight and parallel to the edge, but by the twelfth century AD, smiths such as Shintogo Kunimitsu began producing hamons with very irregular shapes, which provided both mechanical and decorative benefits. By the sixteenth century AD, the Japanese smiths often overheated their swords slightly before quenching, to produce rather large niye for aesthetic purposes, even though a larger grain size tended to weaken the sword a bit. During this time, great attention began to be paid in Japan to making decorative hamons, by carefully shaping the clay. It became very common during this era to find swords with wavy hamons, flowers or clovers depicted in the temper line, rat's feet, trees, or other shapes. By the eighteenth century, decorative hamons were often being combined with decorative folding techniques to produce entire landscapes, complete with specific islands, crashing waves, hills, mountains, rivers, and sometimes low spots were cut in the clay to produce niye far away from the hamon, creating effects such as birds in the sky.
Benefits and drawbacks
Although differential hardening produces a very hard edge, it also leaves the rest of the sword rather soft, which can make it prone to bending under heavy loads, such as parrying a hard blow. It can also make the edge more susceptible to chipping or cracking. Swords of this type can usually only be resharpened a few times before reaching the softer metal underneath the edge. However, if properly protected and maintained, these blades can usually hold an edge for long periods of time, even after slicing through bone and flesh, or heavily matted bamboo to simulate cutting through body parts, as is done in iaido.
Modern differential hardening
Flame hardening
Flame hardening is often used to harden only a portion of an object, by quickly heating it with a very hot flame in a localized area, and then quenching the steel. This turns the heated portion into very hard martensite, but leaves the rest unchanged. Usually, an oxy-gas torch is used to provide such high temperatures. Flame hardening is a very common surface hardening technique, which is often used to provide a very wear-resistant surface. A common use is for hardening the surface of gears, making the teeth more resistant to erosion. The gear will usually be quenched and tempered to a specific hardness first, making a majority of the gear tough, and then the teeth are quickly heated and immediately quenched, hardening only the surface. Afterward, it may or may not be tempered again to achieve the final differential hardness.
This process is often used for knife making, by heating only the edge of a previously quenched and tempered blade. When the edge reaches the proper temperature, as judged by its color, it is quenched, hardening only the edge but leaving most of the rest of the blade at the lower hardness. The knife is then tempered again to produce the final differential hardness. However, unlike a blade that has been evenly heated and differentially quenched, flame hardening creates a heat-affected zone. Unlike the gradual nioi, the sharp boundary between the hot and cold metal in this heat-affected zone cools extremely rapidly when quenched. Combined with the stresses that form, this creates a very brittle zone between the hard and softer metal, which usually makes this method unsuitable for swords or tools that may be subjected to shear and impact stresses.
Induction hardening
Induction hardening is a surface hardening technique which uses induction coils to provide a very rapid means of heating the metal. With induction heating, the steel can be heated very quickly to red-hot at the surface, before the heat can penetrate any distance into the metal. The surface is then quenched, hardening it, and is often used without further tempering. This makes the surface very resistant to wear, but provides tougher metal directly underneath it, leaving the majority of the object unchanged. A common use for induction hardening is for hardening the bearing surfaces, or "journals", on automotive crankshafts or the rods of hydraulic cylinders.
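The depth to which induction heating concentrates at the surface can be estimated with the standard electromagnetic skin-depth formula, δ = √(ρ / (π·f·μ)). A minimal sketch in Python; the resistivity and relative permeability used here are assumed, order-of-magnitude values for a magnetic carbon steel, not figures from the text:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def skin_depth(resistivity, rel_permeability, frequency_hz):
    """Electromagnetic skin depth: delta = sqrt(rho / (pi * f * mu))."""
    mu = MU0 * rel_permeability
    return math.sqrt(resistivity / (math.pi * frequency_hz * mu))

# Illustrative (assumed) values for a magnetic carbon steel:
rho = 2.0e-7   # resistivity, ohm-metres
mu_r = 100     # relative permeability (drops toward 1 above the Curie point)

for f in (1e3, 10e3, 100e3):
    d = skin_depth(rho, mu_r, f) * 1000  # convert m -> mm
    print(f"{f/1000:6.0f} kHz -> skin depth ~{d:.2f} mm")
```

Higher frequencies concentrate the heating in a thinner surface layer, which is why induction hardening can harden a shallow case while leaving the core untouched.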
Differential tempering
Differential tempering (also called graded tempering, selective tempering or local tempering) is the inverse of differential hardening, though it ultimately produces similar results. Differential tempering begins with steel that has been uniformly quenched and hardened, which is then heated in localized areas to reduce the hardness. The process is often used in blacksmithing for tempering cutting instruments, softening the back, shaft, or spine while leaving the edge at a very high hardness. The process was very common in ancient Europe for making tools, and was soon applied to knives and swords as well.
Blacksmithing
The most common use for differential tempering was for heat treating cutting tools, such as axes and chisels, where an extremely hard edge is desired, but some malleability and springiness is needed in the rest of the tool. A chisel with a very hard edge can maintain that edge longer and cut harder materials, but, if the entire chisel was too hard, it would shatter under the hammer blows. Differential tempering was often used to provide a very hard cutting edge, but to soften parts of the tool that are subject to impact and shock loading.
Before a tool is differentially tempered, it is first heated to red-hot and then quenched, hardening the entire tool. This makes the tool much too hard for normal use, so it is tempered to reduce the hardness to a more suitable point. However, unlike normal tempering, the tool is not heated evenly. Instead, heat is applied to only a part of the tool and allowed to conduct toward the cooler cutting edge. Before heating, the quenched steel is sanded or polished to remove any residual oxidation, revealing the bare metal underneath. The steel is then heated in a localized area, such as the hammering end of a chisel or the handle end of an axe. The smith then carefully gauges the temperature by watching the tempering colors of the steel. As the steel is heated, these colors form, ranging from yellow to brown, purple, and blue, with many shades in between, indicating the temperature of the steel. The colors first appear near the heat source and then slowly move across the tool, following the heat as it conducts toward the edge.
Before the yellow or "light-straw" color reaches the edge, the smith removes the heat. The heat continues to conduct, moving the colors toward the edge for a short time after the heat source is removed. When the light-straw color reaches the edge, the smith will usually dip the steel in water to stop the process. This generally produces a very hard edge, around HRC 58–60 on the Rockwell C scale, but leaves the opposite end of the tool much softer. The hardness of the cutting edge is generally controlled by the chosen color, but is also strongly affected by the carbon content of the steel and a variety of other factors. The exact hardness of the soft end depends on many factors, but chiefly on the speed at which the steel was heated, or how far the colors spread out. The light-straw color indicates very hard, brittle steel, while light-blue indicates softer, very springy steel. Beyond the blue color, when the steel turns grey, it is likely to be very malleable, which is usually undesirable in a chisel. If the steel is too soft it can bend or mushroom, plastically deforming under the force of the hammer.
Grade of temper
Unlike differential hardening, differential tempering produces no distinct boundary between the harder and softer metals; the change from hard to soft is very gradual, forming a continuum, or "grade" (gradient), of hardness. Higher heating temperatures cause the colors to spread less, creating a much steeper grade, while lower temperatures make the change more gradual, using a smaller portion of the entire continuum. The tempering colors only represent a fraction of the entire grade, because the metal turns grey beyond the light-blue range, making it difficult to judge the temperature, yet the hardness continues to decrease as the temperature rises.
Guiding the heat
Heating in just one area, like the flat end of a center punch, will cause the grade to spread evenly down the length of the tool. Because a continuous grade along the entire tool is not always desired, methods of concentrating the change have been devised. A tool like a chisel may be heated quickly but evenly along the entire shaft, tempering it to a purple or blue color, while allowing the residual heat to quickly conduct the short distance to the edge. Another method is to hold the edge in water, keeping it cool while the rest of the tool is tempered; when the proper color is reached, the edge is removed from the water and allowed to temper from the residual heat, and the entire tool is plunged into the water once the edge turns the proper color. However, heating localized areas to such low temperatures may be difficult with larger items, like an axe or a splitting maul, because the steel may lose too much heat before it can conduct to the edge. Sometimes the steel is heated evenly to just below the desired temperature and then differentially tempered, making it easier to control the temperature change. Another way is to partially embed the steel in an insulator, like sand or lime, preventing too much heat loss during tempering.
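The way heat applied at one end conducts toward the edge, producing the smooth gradient described above, can be illustrated with a toy one-dimensional heat-conduction model. This is a hedged sketch: the diffusivity, geometry, temperatures, and boundary conditions are illustrative assumptions, not measurements of any particular tool.

```python
# Minimal 1-D heat-conduction sketch of heat flowing from the struck end
# of a chisel toward its edge (explicit finite differences).

ALPHA = 1.2e-5   # thermal diffusivity of steel, m^2/s (typical order of magnitude)
LENGTH = 0.15    # chisel length, m
N = 30           # grid points along the shaft
DX = LENGTH / (N - 1)
DT = 0.4 * DX * DX / ALPHA   # time step chosen so r = ALPHA*DT/DX^2 = 0.4 <= 0.5 (stable)

def simulate(seconds, hot_end=300.0, ambient=20.0):
    temps = [ambient] * N
    temps[0] = hot_end                      # torch held at the struck end
    for _ in range(int(seconds / DT)):
        new = temps[:]
        for i in range(1, N - 1):
            # explicit update: T_i += r * (T_{i-1} - 2*T_i + T_{i+1}), r = 0.4
            new[i] = temps[i] + 0.4 * (temps[i-1] - 2*temps[i] + temps[i+1])
        new[0] = hot_end                    # boundary kept at torch temperature
        new[-1] = temps[-1] + 0.4 * (temps[-2] - temps[-1])  # insulated edge
        temps = new
    return temps

profile = simulate(60)
# Temperatures fall smoothly from the heated end to the edge; the "grade"
# of hardness follows this same monotonic gradient.
```

The slower the heating, the shallower this gradient becomes, which is why heating speed controls how far the tempering colors spread.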
Bladesmithing
Eventually, this process was applied to swords and knives, to produce mechanical effects that were similar to differential hardening, but with some important differences. To differentially temper a blade, it is first quenched to harden the entire blade evenly. The blade is then heated in a localized area, allowing the heat to flow toward the edge. With single-edged blades, the blade may be tempered with fire or a torch. The blade is heated along the spine and tang only, allowing the heat to conduct to the edge. The heat will need to be applied evenly, allowing the colors to spread evenly across the blade. However, with double-edged blades, the heat source will usually need to be more precisely localized because the heat must be applied evenly along the center of the blade, allowing it to conduct to both edges. Often, a red or yellow-hot bar is used to supply the heat, placing it along the center of the blade, typically fitted into a fuller. Modern gas torches often have the ability to produce very precise flames. To prevent too much heat loss in the blade, it may be preheated, partially insulated, or sandwiched between two red-hot bars. When the proper color reaches the edge, it is immersed in water to stop the process.
Guiding the heat
Differential tempering can be made more difficult by the shape of the blade. When tempering a double-edged sword that tapers along its length, the tip may reach the proper temperature before the shank does. The smith may need to control the temperature by methods like pouring water along certain parts of the edge, or cooling it with ice, so that the entire edge reaches the proper temperature at the same time. Although this method is less time-consuming than differential hardening with clay, once the process starts the smith must remain vigilant, carefully guiding the heat. This leaves little room for error, and mistakes in shaping the hardened zone cannot easily be corrected. The task is even more difficult if the knife or sword has a curve, an odd shape, or a sharply tapered tip. Swords tempered in this manner, especially double-edged swords, generally need to be rather wide, allowing room for a gradient to form. Unlike differential hardening, however, differential tempering does not alter the blade's shape.
Metallurgy
When a sword, knife or tool is evenly quenched, the entire object turns into martensite, which is extremely hard, without the formation of soft pearlite. Tempering reduces the hardness in the steel by gradually changing the martensite into a microstructure of various carbides, such as cementite, and softer ferrite (iron), forming a microstructure called "tempered martensite". When tempering high-carbon steel in the blacksmith method, the color provides a general indication of the final hardness, although some trial and error is usually required to match the right color to the type of steel, because the carbon content, the heating speed, and even the type of heat source will affect the outcome. Without the formation of pearlite, the steel can be incrementally tempered to achieve the proper hardness in each area, ensuring that no area is too soft. In arming swords, for instance, because the blade is typically rather wide and thin, the blade can be prone to bending during combat. If the center of the blade is too soft, this bending is likely to be permanent. However, if the sword is tempered to a springy hardness, it will be more likely to return to its original shape.
Benefits and drawbacks
A sword tempered this way cannot usually have an edge as hard as that of a differentially hardened sword, like a katana, because there is no softer metal directly underneath the edge to back up the harder metal. This makes the edge more likely to chip away in larger pieces. Therefore, such an extremely hard edge is not always desirable, as greater hardness makes the edge more brittle and less resistant to impacts, such as cutting through bones or the shafts of pole-arms, hitting shields, or blocking and parrying. The sword will often be tempered to slightly higher temperatures to increase impact resistance, at a cost in the ability to hold a sharp edge when cutting. The edge may need to be tempered to dark-straw or brown to achieve this, and the center to a blue or purple color. This may leave very little difference between the edge and the center, and the benefits of this method over tempering the sword evenly at some intermediate point may not be very substantial. When a sword tempered in this way is resharpened, the hardness will decrease with each sharpening, although the reduction will usually not be noticeable until a large amount of steel has been removed.
See also
Case hardening
Shot peening
References
Bibliography
External links
Claying blades – Differential hardening with clay
Heat treating

Heat treating (or heat treatment) is a group of industrial, thermal and metalworking processes used to alter the physical, and sometimes chemical, properties of a material. The most common application is metallurgical. Heat treatments are also used in the manufacture of many other materials, such as glass. Heat treatment involves the use of heating or chilling, normally to extreme temperatures, to achieve the desired result such as hardening or softening of a material. Heat treatment techniques include annealing, case hardening, precipitation strengthening, tempering, carburizing, normalizing and quenching. Although the term heat treatment applies only to processes where the heating and cooling are done for the specific purpose of altering properties intentionally, heating and cooling often occur incidentally during other manufacturing processes such as hot forming or welding.
Physical processes
Metallic materials consist of a microstructure of small crystals called "grains" or crystallites. The nature of the grains (i.e. grain size and composition) is one of the most effective factors that can determine the overall mechanical behavior of the metal. Heat treatment provides an efficient way to manipulate the properties of the metal by controlling the rate of diffusion and the rate of cooling within the microstructure. Heat treating is often used to alter the mechanical properties of a metallic alloy, manipulating properties such as the hardness, strength, toughness, ductility, and elasticity.
There are two mechanisms that may change an alloy's properties during heat treatment: the formation of martensite causes the crystals to deform intrinsically, and the diffusion mechanism causes changes in the homogeneity of the alloy.
The crystal structure consists of atoms that are grouped in a very specific arrangement, called a lattice. In most elements, this order will rearrange itself, depending on conditions like temperature and pressure. This rearrangement, called allotropy or polymorphism, may occur several times, at many different temperatures for a particular metal. In alloys, this rearrangement may cause an element that will not normally dissolve into the base metal to suddenly become soluble, while a reversal of the allotropy will make the elements either partially or completely insoluble.
When in the soluble state, the process of diffusion causes the atoms of the dissolved element to spread out, attempting to form a homogenous distribution within the crystals of the base metal. If the alloy is cooled to an insoluble state, the atoms of the dissolved constituents (solutes) may migrate out of the solution. This type of diffusion, called precipitation, leads to nucleation, where the migrating atoms group together at the grain boundaries. This forms a microstructure generally consisting of two or more distinct phases. For instance, steel that has been heated above the austenizing temperature (red to orange-hot, depending on carbon content) and then cooled slowly forms a laminated structure composed of alternating layers of ferrite and cementite, becoming soft pearlite. After heating the steel to the austenite phase and then quenching it in water, the microstructure will be martensitic, because the rapid cooling transforms the austenite directly into martensite. Some pearlite or ferrite may be present if the quench did not rapidly cool all of the steel.
Unlike iron-based alloys, most heat-treatable alloys do not experience a ferrite transformation. In these alloys, the nucleation at the grain-boundaries often reinforces the structure of the crystal matrix. These metals harden by precipitation. Typically a slow process, depending on temperature, this is often referred to as "age hardening".
Many metals and non-metals exhibit a martensite transformation when cooled quickly (with external media like oil, polymer, water, etc.). When a metal is cooled very quickly, the insoluble atoms may not be able to migrate out of the solution in time. This is called a "diffusionless transformation." When the crystal matrix changes to its low-temperature arrangement, the atoms of the solute become trapped within the lattice. The trapped atoms prevent the crystal matrix from completely changing into its low-temperature allotrope, creating shearing stresses within the lattice. When some alloys are cooled quickly, such as steel, the martensite transformation hardens the metal, while in others, like aluminum, the alloy becomes softer.
Effects of composition
The specific composition of an alloy system will usually have a great effect on the results of heat treating. If the percentage of each constituent is just right, the alloy will form a single, continuous microstructure upon cooling. Such a mixture is said to be eutectoid. However, if the percentage of the solutes varies from the eutectoid mixture, two or more different microstructures will usually form simultaneously. A hypoeutectoid solution contains less of the solute than the eutectoid mix, while a hypereutectoid solution contains more.
Eutectoid alloys
A eutectoid (eutectic-like) alloy is similar in behavior to a eutectic alloy. A eutectic alloy is characterized by having a single melting point. This melting point is lower than that of any of the constituents, and no change in the mixture will lower the melting point any further. When a molten eutectic alloy is cooled, all of the constituents will crystallize into their respective phases at the same temperature.
A eutectoid alloy is similar, but the phase change occurs, not from a liquid, but from a solid solution. Upon cooling a eutectoid alloy from the solution temperature, the constituents will separate into different crystal phases, forming a single microstructure. A eutectoid steel, for example, contains 0.77% carbon. Upon cooling slowly, the solution of iron and carbon (a single phase called austenite) will separate into platelets of the phases ferrite and cementite. This forms a layered microstructure called pearlite.
Since pearlite is harder than iron, the degree of softness achievable is typically limited to that produced by the pearlite. Similarly, the hardenability is limited by the continuous martensitic microstructure formed when cooled very fast.
Hypoeutectoid alloys
A hypoeutectic alloy has two separate melting points. Both are above the eutectic melting point for the system but are below the melting points of any constituent forming the system. Between these two melting points, the alloy will exist as part solid and part liquid. The constituent with the higher melting point will solidify first. When completely solidified, a hypoeutectic alloy will often be in a solid solution.
Similarly, a hypoeutectoid alloy has two critical temperatures, called "arrests". Between these two temperatures, the alloy will exist partly as the solution and partly as a separate crystallizing phase, called the "proeutectoid phase". These two temperatures are called the upper (A3) and lower (A1) transformation temperatures. As the solution cools from the upper transformation temperature toward an insoluble state, the excess base metal will often be forced to "crystallize out", becoming the proeutectoid. This will occur until the remaining concentration of solutes reaches the eutectoid level, which will then crystallize as a separate microstructure.
For example, a hypoeutectoid steel contains less than 0.77% carbon. Upon cooling a hypoeutectoid steel from the austenite transformation temperature, small islands of proeutectoid-ferrite will form. These will continue to grow and the carbon will recede until the eutectoid concentration in the rest of the steel is reached. This eutectoid mixture will then crystallize as a microstructure of pearlite. Since ferrite is softer than pearlite, the two microstructures combine to increase the ductility of the alloy. Consequently, the hardenability of the alloy is lowered.
Hypereutectoid alloys
A hypereutectic alloy also has different melting points. However, between these points, it is the constituent with the higher melting point that will be solid. Similarly, a hypereutectoid alloy has two critical temperatures. When cooling a hypereutectoid alloy from the upper transformation temperature, it will usually be the excess solutes that crystallize out first, forming the proeutectoid. This continues until the concentration in the remaining alloy becomes eutectoid, which then crystallizes into a separate microstructure.
A hypereutectoid steel contains more than 0.77% carbon. When slowly cooling hypereutectoid steel, the cementite will begin to crystallize first. When the remaining steel becomes eutectoid in composition, it will crystallize into pearlite. Since cementite is much harder than pearlite, the alloy has greater hardenability at a cost in ductility.
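The proportions of proeutectoid phase and pearlite described above follow from the lever rule on the iron–carbon diagram. A small sketch using the standard reference compositions (about 0.022% C for ferrite at the eutectoid temperature, 0.77% C at the eutectoid point, 6.67% C for cementite):

```python
EUTECTOID = 0.77      # wt% carbon at the eutectoid point
FERRITE_MAX = 0.022   # max carbon solubility in ferrite at the eutectoid temperature
CEMENTITE = 6.67      # carbon content of cementite (Fe3C)

def slow_cooled_fractions(carbon_wt_pct):
    """Lever-rule phase fractions just below the A1 temperature."""
    c = carbon_wt_pct
    if c < EUTECTOID:    # hypoeutectoid: proeutectoid ferrite + pearlite
        ferrite = (EUTECTOID - c) / (EUTECTOID - FERRITE_MAX)
        return {"proeutectoid ferrite": ferrite, "pearlite": 1 - ferrite}
    if c > EUTECTOID:    # hypereutectoid: proeutectoid cementite + pearlite
        cem = (c - EUTECTOID) / (CEMENTITE - EUTECTOID)
        return {"proeutectoid cementite": cem, "pearlite": 1 - cem}
    return {"pearlite": 1.0}  # eutectoid steel is all pearlite

print(slow_cooled_fractions(0.40))  # a 1040-type steel: roughly half pearlite
print(slow_cooled_fractions(0.77))
print(slow_cooled_fractions(1.00))
```

Note how even a strongly hypereutectoid 1.0% C steel yields only a few percent proeutectoid cementite, because cementite is so carbon-rich that a little of it accounts for all the excess carbon.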
Effects of time and temperature
Proper heat treating requires precise control over temperature, time held at a certain temperature and cooling rate.
With the exception of stress-relieving, tempering, and aging, most heat treatments begin by heating an alloy beyond a certain transformation, or arrest (A), temperature. This temperature is referred to as an "arrest" because at the A temperature the metal experiences a period of hysteresis. At this point, all of the heat energy is used to cause the crystal change, so the temperature stops rising for a short time (arrests) and then continues climbing once the change is complete. Therefore, the alloy must be heated above the critical temperature for a transformation to occur. The alloy will usually be held at this temperature long enough for the heat to completely penetrate the alloy, thereby bringing it into a complete solid solution. Iron, for example, has four critical temperatures, depending on carbon content. Pure iron in its alpha (room temperature) state changes to nonmagnetic gamma iron at its A2 temperature, and to weldable delta iron at its A4 temperature. However, as carbon is added, becoming steel, the A2 temperature splits into the A3 temperature, also called the austenizing temperature (all phases become austenite, a solution of gamma iron and carbon), and the A1 temperature (austenite changes into pearlite upon cooling). Between these upper and lower temperatures, the proeutectoid phase forms upon cooling.
Because a smaller grain size usually enhances mechanical properties, such as toughness, shear strength and tensile strength, these metals are often heated to a temperature that is just above the upper critical temperature, in order to prevent the grains of solution from growing too large. For instance, when steel is heated above the upper critical-temperature, small grains of austenite form. These grow larger as the temperature is increased. When cooled very quickly, during a martensite transformation, the austenite grain-size directly affects the martensitic grain-size. Larger grains have large grain-boundaries, which serve as weak spots in the structure. The grain size is usually controlled to reduce the probability of breakage.
The diffusion transformation is very time-dependent. Cooling a metal will usually suppress the precipitation to a much lower temperature. Austenite, for example, usually only exists above the upper critical temperature. However, if the austenite is cooled quickly enough, the transformation may be suppressed for hundreds of degrees below the lower critical temperature. Such austenite is highly unstable and, if given enough time, will precipitate into various microstructures of ferrite and cementite. The cooling rate can be used to control the rate of grain growth or can even be used to produce partially martensitic microstructures. However, the martensite transformation is time-independent. If the alloy is cooled to the martensite transformation (Ms) temperature before other microstructures can fully form, the transformation will usually occur at just under the speed of sound.
When austenite is cooled but kept above the martensite start temperature Ms so that a martensite transformation does not occur, the austenite grain size will have an effect on the rate of nucleation, but it is generally temperature and the rate of cooling that controls the grain size and microstructure. When austenite is cooled extremely slowly, it will form large ferrite crystals filled with spherical inclusions of cementite. This microstructure is referred to as "sphereoidite". If cooled a little faster, then coarse pearlite will form. Even faster, and fine pearlite will form. If cooled even faster, bainite will form, with more complete bainite transformation occurring depending on the time held above martensite start Ms. Similarly, these microstructures will also form, if cooled to a specific temperature and then held there for a certain time.
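The progression from spheroidite through pearlite and bainite to martensite can be sketched as a lookup keyed on cooling rate. The cut-off rates below are invented placeholders for illustration only; real values must be read from the TTT/CCT diagram of the specific alloy being treated.

```python
def microstructure(cooling_rate_c_per_s):
    """Very rough mapping from cooling rate to the microstructures named
    above. The threshold rates are illustrative placeholders, not data
    for any real steel -- consult the alloy's TTT/CCT diagram."""
    if cooling_rate_c_per_s < 0.01:
        return "spheroidite"       # extremely slow cooling
    if cooling_rate_c_per_s < 1:
        return "coarse pearlite"
    if cooling_rate_c_per_s < 30:
        return "fine pearlite"
    if cooling_rate_c_per_s < 150:
        return "bainite"
    return "martensite"            # quench faster than the critical rate

for rate in (0.001, 0.1, 10, 100, 1000):
    print(f"{rate:>8} degC/s -> {microstructure(rate)}")
```

The same ordering holds for isothermal treatments: holding just below the critical temperature for progressively shorter times before quenching walks through the same sequence of microstructures.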
Most non-ferrous alloys are also heated in order to form a solution. Most often, these are then cooled very quickly to produce a martensite transformation, putting the solution into a supersaturated state. The alloy, being in a much softer state, may then be cold worked. This causes work hardening that increases the strength and hardness of the alloy. Moreover, the defects caused by plastic deformation tend to speed up precipitation, increasing the hardness beyond what is normal for the alloy. Even if not cold worked, the solutes in these alloys will usually precipitate, although the process may take much longer. Sometimes these metals are then heated to a temperature that is below the lower critical (A1) temperature, preventing recrystallization, in order to speed-up the precipitation.
Types of heat treatment
Complex heat treating schedules, or "cycles", are often devised by metallurgists to optimize an alloy's mechanical properties. In the aerospace industry, a superalloy may undergo five or more different heat treating operations to develop the desired properties. This can lead to quality problems depending on the accuracy of the furnace's temperature controls and timer. These operations can usually be divided into several basic techniques.
Annealing
Annealing consists of heating a metal to a specific temperature and then cooling at a rate that will produce a refined microstructure, either fully or partially separating the constituents. The rate of cooling is generally slow. Annealing is most often used to soften a metal for cold working, to improve machinability, or to enhance properties like electrical conductivity.
In ferrous alloys, annealing is usually accomplished by heating the metal beyond the upper critical temperature and then cooling very slowly, resulting in the formation of pearlite. In both pure metals and many alloys that cannot be heat treated, annealing is used to remove the hardness caused by cold working. The metal is heated to a temperature where recrystallization can occur, thereby repairing the defects caused by plastic deformation. In these metals, the rate of cooling will usually have little effect. Most non-ferrous alloys that are heat-treatable are also annealed to relieve the hardness of cold working. These may be slowly cooled to allow full precipitation of the constituents and produce a refined microstructure.
Ferrous alloys are usually either "full annealed" or "process annealed". Full annealing requires very slow cooling rates, in order to form coarse pearlite. In process annealing, the cooling rate may be faster, up to and including the rate used in normalizing. The main goal of process annealing is to produce a uniform microstructure. Non-ferrous alloys are often subjected to a variety of annealing techniques, including "recrystallization annealing", "partial annealing", "full annealing", and "final annealing". Not all annealing techniques involve recrystallization; stress relieving, for example, does not.
Normalizing
Normalizing is a technique used to provide uniformity in grain size and composition (equiaxed crystals) throughout an alloy. The term is often used for ferrous alloys that have been austenitized and then cooled in the open air. Normalizing not only produces pearlite but also martensite and sometimes bainite, which gives harder and stronger steel but with less ductility for the same composition than full annealing.
In the normalizing process the steel is heated to about 40 degrees Celsius above its upper critical temperature limit, held at this temperature for some time, and then cooled in air.
Stress relieving
Stress-relieving is a technique to remove or reduce the internal stresses created in metal. These stresses may be caused in a number of ways, ranging from cold working to non-uniform cooling. Stress-relieving is usually accomplished by heating a metal below the lower critical temperature and then cooling uniformly. Stress relieving is commonly used on items like air tanks, boilers and other pressure vessels, to remove a portion of the stresses created during the welding process.
Aging
Some metals are classified as precipitation hardening metals. When a precipitation hardening alloy is quenched, its alloying elements will be trapped in solution, resulting in a soft metal. Aging a "solutionized" metal will allow the alloying elements to diffuse through the microstructure and form intermetallic particles. These intermetallic particles will nucleate, fall out of the solution and act as a reinforcing phase, thereby increasing the strength of the alloy. Alloys may age "naturally", meaning that the precipitates form at room temperature, or they may age "artificially", when precipitates only form at elevated temperatures. In some applications, naturally aging alloys may be stored in a freezer to prevent hardening until after further operations; assembly of rivets, for example, may be easier with a softer part.
Examples of precipitation hardening alloys include 2000 series, 6000 series, and 7000 series aluminium alloys, as well as some superalloys and some stainless steels. Steels that harden by aging are typically referred to as maraging steels, a contraction of "martensite aging".
Quenching
Quenching is a process of cooling a metal at a rapid rate. This is most often done to produce a martensite transformation. In ferrous alloys, this will often produce a harder metal, while non-ferrous alloys will usually become softer than normal.
To harden by quenching, a metal (usually steel or cast iron) must be heated above the upper critical temperature (for steel, above 815–900 °C) and then quickly cooled. Depending on the alloy and other considerations (such as concern for maximum hardness vs. cracking and distortion), cooling may be done with forced air or other gases (such as nitrogen). Liquids may be used, due to their better thermal conductivity, such as oil, water, a polymer dissolved in water, or a brine. Upon being rapidly cooled, a portion of the austenite (dependent on alloy composition) will transform to martensite, a hard, brittle crystalline structure. The quenched hardness of a metal depends on its chemical composition and quenching method. Cooling speeds, from fastest to slowest, go from brine, polymer (i.e. mixtures of water and glycol polymers), fresh water, oil, to forced air. However, quenching certain steels too fast can result in cracking, which is why high-tensile steels such as AISI 4140 should be quenched in oil, tool steels such as ISO 1.2767 or H13 hot work tool steel should be quenched in forced air, and low alloy or medium-tensile steels such as XK1320 or AISI 1040 should be quenched in brine.
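The relative cooling power of quench media is often expressed with Grossmann severity factors (H), normalized so that still water is 1.0. The values below are ballpark textbook figures, not measurements, and polymer quenchants are omitted because their severity varies widely with concentration and agitation:

```python
# Ballpark Grossmann quench-severity factors (H), still water = 1.0.
# Actual severity depends heavily on agitation and bath temperature.
QUENCH_SEVERITY = {
    "brine":       2.0,
    "fresh water": 1.0,
    "oil":         0.3,
    "forced air":  0.05,
    "still air":   0.02,
}

fastest_first = sorted(QUENCH_SEVERITY, key=QUENCH_SEVERITY.get, reverse=True)
print(fastest_first)  # reproduces the brine > water > oil > air ordering
```

Matching the medium to the alloy, as described above for 4140 (oil) versus 1040 (brine), is essentially a matter of choosing the lowest severity that still beats the steel's critical cooling rate.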
Some beta titanium-based alloys have also shown similar trends of increased strength through rapid cooling. However, most non-ferrous metals, such as alloys of copper, aluminium, or nickel, and some high-alloy steels such as austenitic stainless steel (304, 316), produce the opposite effect when quenched: they soften. Austenitic stainless steels must be quenched to become fully corrosion resistant, as they work-harden significantly.
Tempering
Untempered martensitic steel, while very hard, is too brittle to be useful for most applications. The method for alleviating this problem is called tempering. Most applications require that quenched parts be tempered. Tempering consists of heating steel below the lower critical temperature (often from 400˚F to 1105˚F or 205˚C to 595˚C, depending on the desired results) to impart some toughness. Higher tempering temperatures (up to 1,300˚F or 700˚C, depending on the alloy and application) are sometimes used to impart further ductility, although some yield strength is lost.
Tempering may also be performed on normalized steels. Other methods of tempering consist of quenching to a specific temperature, which is above the martensite start temperature, and then holding it there until pure bainite can form or internal stresses can be relieved. These include austempering and martempering.
Tempering colors
Steel that has been freshly ground or polished will form oxide layers when heated. At a very specific temperature, the iron oxide will form a layer with a very specific thickness, causing thin-film interference. This causes colors to appear on the surface of the steel. As the temperature is increased, the iron oxide layer grows in thickness, changing the color. These colors, called tempering colors, have been used for centuries to gauge the temperature of the metal.
350˚F (176˚C), light yellowish
400˚F (204˚C), light-straw
440˚F (226˚C), dark-straw
500˚F (260˚C), brown
540˚F (282˚C), purple
590˚F (310˚C), deep blue
640˚F (337˚C), light blue
The tempering colors can be used to judge the final properties of the tempered steel. Very hard tools are often tempered in the light- to dark-straw range, whereas springs are often tempered to blue. However, the final hardness of the tempered steel will vary, depending on the composition of the steel. Higher-carbon tool steel will remain much harder after tempering than spring steel (of slightly less carbon) when tempered at the same temperature. The oxide film will also increase in thickness over time. Therefore, steel that has been held at 400˚F for a very long time may turn brown or purple, even though the temperature never exceeded that needed to produce a light straw color. Other factors affecting the final outcome are oil films on the surface and the type of heat source used.
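The color table above can be turned into a simple lookup. The function below is a sketch using the listed thresholds; as the text notes, oxide growth over time, oil films, and the heat source all shift the colors actually observed.

```python
def tempering_color(temp_f):
    """Approximate tempering color for freshly polished plain steel.

    Thresholds follow the table in the text; boundaries between bands
    are approximate and composition-dependent.
    """
    bands = [
        (350, "light yellowish"),
        (400, "light straw"),
        (440, "dark straw"),
        (500, "brown"),
        (540, "purple"),
        (590, "deep blue"),
        (640, "light blue"),
    ]
    if temp_f < bands[0][0]:
        return "no visible color"
    color = bands[0][1]
    # Walk up the table and keep the highest band reached.
    for threshold, name in bands:
        if temp_f >= threshold:
            color = name
    return color

print(tempering_color(410))  # light straw
print(tempering_color(600))  # deep blue
```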
Selective heat treating
Many heat treating methods have been developed to alter the properties of only a portion of an object. These tend to consist of either cooling different areas of an alloy at different rates, by quickly heating in a localized area and then quenching, by thermochemical diffusion, or by tempering different areas of an object at different temperatures, such as in differential tempering.
Differential hardening
Some techniques allow different areas of a single object to receive different heat treatments. This is called differential hardening. It is common in high quality knives and swords. The Chinese jian is one of the earliest known examples of this, and the Japanese katana may be the most widely known. The Nepalese Khukuri is another example. This technique uses an insulating layer, like layers of clay, to cover the areas that are to remain soft. The areas to be hardened are left exposed, allowing only certain parts of the steel to fully harden when quenched.
Flame hardening
Flame hardening is used to harden only a portion of the metal. Unlike differential hardening, where the entire piece is heated and then cooled at different rates, in flame hardening, only a portion of the metal is heated before quenching. This is usually easier than differential hardening, but often produces an extremely brittle zone between the heated metal and the unheated metal, as cooling at the edge of this heat-affected zone is extremely rapid.
Induction hardening
Induction hardening is a surface hardening technique in which the surface of the metal is heated very quickly, using a no-contact method of induction heating. The alloy is then quenched, producing a martensite transformation at the surface while leaving the underlying metal unchanged. This creates a very hard, wear-resistant surface while maintaining the proper toughness in the majority of the object. Crankshaft journals are a good example of an induction hardened surface.
Case hardening
Case hardening is a thermochemical diffusion process in which an alloying element, most commonly carbon or nitrogen, diffuses into the surface of a monolithic metal. The resulting interstitial solid solution is harder than the base material, which improves wear resistance without sacrificing toughness.
Laser surface engineering is a highly versatile and selective surface treatment that can produce novel properties. Because the cooling rate in laser treatment is very high, even metastable phases such as metallic glass can be obtained by this method.
Cold and cryogenic treating
Although quenching steel causes the austenite to transform into martensite, all of the austenite usually does not transform. Some austenite crystals will remain unchanged even after quenching below the martensite finish (Mf) temperature. Further transformation of the austenite into martensite can be induced by slowly cooling the metal to extremely low temperatures. Cold treating generally consists of cooling the steel to around -115˚F (-81˚C), but does not eliminate all of the austenite. Cryogenic treating usually consists of cooling to much lower temperatures, often in the range of -315˚F (-192˚C), to transform most of the austenite into martensite.
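The fraction of austenite transformed at a given quench temperature is commonly estimated with the empirical Koistinen–Marburger relation. This relation is not mentioned in the text above; it is a standard model added here as a sketch, and the martensite start temperature and rate constant below are assumed example values, since the real values depend strongly on composition.

```python
import math

def martensite_fraction(quench_temp_c, ms_c=300.0, alpha=0.011):
    """Koistinen-Marburger estimate of martensite volume fraction.

    f = 1 - exp(-alpha * (Ms - T)) for T below the martensite start
    temperature Ms. alpha ~ 0.011 per kelvin is a commonly quoted value
    for carbon steels; Ms = 300 C is an assumed example value.
    """
    if quench_temp_c >= ms_c:
        return 0.0
    return 1.0 - math.exp(-alpha * (ms_c - quench_temp_c))

# Room-temperature quench, cold treatment, and cryogenic treatment:
for t in (25.0, -81.0, -192.0):
    print(f"{t:7.1f} C -> {martensite_fraction(t):.3f} martensite fraction")
```

With these assumed values, quenching to room temperature leaves roughly 5% retained austenite, and cooling to cryogenic temperatures transforms most of the remainder, consistent with the behavior described in the text.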
Cold and cryogenic treatments are typically done immediately after quenching, before any tempering, and will increase hardness and wear resistance and reduce internal stresses in the metal. However, because such treatment is really an extension of the quenching process, it may increase the chance of cracking during the procedure. The process is often used for tools, bearings, or other items that require good wear resistance. It is usually only effective in high-carbon or high-alloy steels in which more than 10% austenite is retained after quenching.
Decarburization
The heating of steel is sometimes used as a method to alter the carbon content. When steel is heated in an oxidizing environment, the oxygen combines with the iron to form an iron-oxide layer, which protects the steel from decarburization. When the steel turns to austenite, however, the oxygen combines with iron to form a slag, which provides no protection from decarburization. The formation of slag and scale actually increases decarburization, because the iron oxide keeps oxygen in contact with the decarburization zone even after the steel is moved into an oxygen-free environment, such as the coals of a forge. Thus, the carbon atoms begin combining with the surrounding scale and slag to form both carbon monoxide and carbon dioxide, which is released into the air.
Steel contains a relatively small percentage of carbon, which can migrate freely within the gamma iron. When austenitized steel is exposed to air for long periods of time, the carbon content in the steel can be lowered. This is the opposite of what happens when steel is heated in a reducing environment, in which carbon slowly diffuses further into the metal. In an oxidizing environment, the carbon can readily diffuse outwardly, so austenitized steel is very susceptible to decarburization. This is often used for cast steel, where a high carbon content is needed for casting, but a lower carbon content is desired in the finished product. It is often used on cast irons to produce malleable cast iron, in a process called "white tempering". This tendency to decarburize is often a problem in other operations, such as blacksmithing, where it becomes desirable to austenitize the steel for the shortest amount of time possible to prevent too much decarburization.
Specification of heat treatment
Usually the end condition is specified instead of the process used in heat treatment.
Case hardening
Case hardening is specified by "hardness" and "case depth". The case depth can be specified in two ways: total case depth or effective case depth. The total case depth is the true depth of the case. For most alloys, the effective case depth is the depth of the case that has a hardness equivalent of HRC50; however, some alloys specify a different hardness (40-60 HRC) at effective case depth; this is checked on a Tukon microhardness tester. This value can be roughly approximated as 65% of the total case depth; however, the chemical composition and hardenability can affect this approximation. If neither type of case depth is specified, the total case depth is assumed.
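The 65% rule of thumb above can be captured in a one-line helper. As the text warns, composition and hardenability shift the real ratio, so this is an estimate only.

```python
def approx_effective_case_depth(total_depth_mm, fraction=0.65):
    """Estimate effective case depth from total case depth.

    Uses the ~65% rule of thumb from the text; chemical composition
    and hardenability affect the actual ratio.
    """
    return total_depth_mm * fraction

print(approx_effective_case_depth(1.0))  # 0.65
```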
For case hardened parts the specification should have a tolerance of at least ±. If the part is to be ground after heat treatment, the case depth is assumed to be after grinding.
The Rockwell hardness scale used for the specification depends on the depth of the total case depth, as shown in the table below. Usually, hardness is measured on the Rockwell "C" scale, but the load used on the scale will penetrate through the case if the case is less than . Using Rockwell "C" for a thinner case will result in a false reading.
For cases that are less than thick, a Rockwell scale cannot reliably be used, so file hard is specified instead. File hard is approximately equivalent to 58 HRC.
When specifying the hardness either a range should be given or the minimum hardness specified. If a range is specified at least 5 points should be given.
Through hardening
Only hardness is listed for through hardening. It is usually in the form of HRC with at least a five-point range.
Annealing
The hardness for an annealing process is usually listed on the HRB scale as a maximum value. It is a process to refine grain size, improve strength, remove residual stress, and affect the electromagnetic properties.
Types of furnaces
Furnaces used for heat treatment can be split into two broad categories: batch furnaces and continuous furnaces. Batch furnaces are usually manually loaded and unloaded, whereas continuous furnaces have an automatic conveying system to provide a constant load into the furnace chamber.
Batch furnaces
Batch systems usually consist of an insulated chamber with a steel shell, a heating system, and an access door to the chamber.
Box-type furnace
Many basic box-type furnaces have been upgraded to a semi-continuous batch furnace with the addition of integrated quench tanks and slow-cool chambers. These upgraded furnaces are a very commonly used piece of equipment for heat-treating.
Car-type furnace
Also known as a "bogie hearth", the car furnace is an extremely large batch furnace. The floor is constructed as an insulated movable car that is moved in and out of the furnace for loading and unloading. The car is usually sealed using sand seals or solid seals when in position. Due to the difficulty in getting a sufficient seal, car furnaces are usually used for non-atmosphere processes.
Elevator-type furnace
Similar in type to the car furnace, except that the car and hearth are rolled into position beneath the furnace and raised by means of a motor-driven mechanism, elevator furnaces can handle large heavy loads and often eliminate the need for any external cranes and transfer mechanisms.
Bell-type furnace
Bell furnaces have removable covers called bells, which are lowered over the load and hearth by crane. An inner bell is placed over the hearth and sealed to supply a protective atmosphere. An outer bell is lowered to provide the heat supply.
Pit furnaces
Furnaces that are constructed in a pit and extend to floor level or slightly above are called pit furnaces. Workpieces can be suspended from fixtures, held in baskets, or placed on bases in the furnace. Pit furnaces are suited to heating long tubes, shafts, and rods by holding them in a vertical position. This manner of loading provides minimal distortion.
Salt bath furnaces
Salt baths are used in a wide variety of heat treatment processes including neutral hardening, liquid carburising, liquid nitriding, austempering, martempering and tempering.
Parts are loaded into a pot of molten salt, where they are heated by conduction, giving a very readily available source of heat. In a salt bath, the core of a part rises in temperature at approximately the same rate as its surface.
Salt baths utilize a variety of salts for heat treatment, with cyanide salts being the most extensively used. Concerns about the associated occupational health and safety risks, and the expensive waste management and disposal required because of their environmental effects, have made the use of salt baths less attractive in recent years. Consequently, many salt baths are being replaced by more environmentally friendly fluidised bed furnaces.
Fluidised bed furnaces
A fluidised bed consists of a cylindrical retort made from a high-temperature alloy, filled with sand-like aluminium oxide particulate. Gas (air or nitrogen) is bubbled through the oxide, and the particles move in such a way that the bed exhibits fluid-like behavior, hence the term fluidised. The solid-to-solid contact of the oxide particles gives very high thermal conductivity and excellent temperature uniformity throughout the furnace, comparable to that seen in a salt bath.
See also
Carbon steel
Carbonizing
Diffusion hardening
Induction hardening
Retrogression heat treatment
Nitriding
References
Further reading
International Heat Treatment Magazine in English
Metallurgy
Metalworking
Physical phenomena | Heat treating | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 6,933 | [
"Physical phenomena",
"Metallurgical processes",
"Metallurgy",
"Materials science",
"nan",
"Metal heat treatments"
] |
182,255 | https://en.wikipedia.org/wiki/Trench | A trench is a type of excavation or depression in the ground that is generally deeper than it is wide (as opposed to a swale or a bar ditch), and narrow compared with its length (as opposed to a simple hole or pit).
In geology, trenches result from erosion by rivers or by geological movement of tectonic plates. In civil engineering, trenches are often created to install underground utilities such as gas, water, power and communication lines. In construction, trenches are dug for foundations of buildings, retaining walls and dams, and for cut-and-cover construction of tunnels. In archaeology, the "trench method" is used for searching and excavating ancient ruins or to dig into strata of sedimented material. In geotechnical engineering, trench investigations locate faults and investigate deep soil properties. In trench warfare, soldiers occupy trenches to protect them against weapons fire and artillery.
Trenches are dug using manual tools such as shovel and pickaxe or heavy equipment such as backhoe, trencher, and excavator.
For deep trenches, the instability of steep earthen walls requires engineering and safety techniques such as shoring. Trenches are usually considered temporary structures that are backfilled with soil after construction or abandoned after use. Some trenches are stabilized using durable materials such as concrete to create open passages such as canal and sunken roadways.
Geology
Some trenches are created as a result of erosion by running water or by glaciers (which may have long since disappeared). Others, such as rift valleys or oceanic trenches, are created by geological movement of tectonic plates. Some oceanic trenches include the Mariana Trench and the Aleutian Trench. The former geoform is relatively deep (approximately ), linear and narrow, and is formed by plate subduction when plates converge.
Civil engineering
In the civil engineering fields of construction and maintenance of infrastructure, trenches play a major role. They are used for installation of underground infrastructure or utilities (such as gas mains, water mains, communication lines and pipelines) that would be obstructive or easily damaged if placed above ground. Trenches are needed later for access to these installations for service. They may be created to search for pipes and other infrastructure whose exact location is no longer known ("search trench" or "search slit"). Finally, trenches may be created as the first step of creating a foundation wall. Trench shoring is often used in trenchworks to protect workers and stabilise the steep walls.
An alternative to digging trenches is to create a utility tunnel. Such a tunnel may be dug by boring or by using a trench for cut-and-cover construction. The advantages of utility tunnels are the reduction of maintenance manholes, one-time relocation, and less excavation and repair, compared with separate cable ducts for each service. When they are well mapped, they also allow rapid access to all utilities without having to dig access trenches or resort to confused and often inaccurate utility maps.
An important advantage to placing utilities underground is public safety. Underground power lines, whether in common or separate channels, prevent downed utility cables from blocking roads, thus speeding emergency access after natural disasters such as earthquakes, hurricanes, and tsunamis.
In some cases, a large trench is dug and deliberately preserved (not filled in), often for transport purposes. This is typically done to install depressed motorways, open railway cuttings, or canals. However, these large, permanent trenches are significant barriers to other forms of travel, and often become de facto boundaries between neighborhoods or other spaces.
Military engineering
Trenches have often been dug for military purposes. In the pre-firearm era, they were mainly a type of hindrance to an attacker of a fortified location, such as the moat around a castle (this is technically called a ditch). An early example of this can be seen in the Battle of the Trench, a religious war, one of the early battles fought by Muhammad.
With the advent of accurate firearms, trenches were used to shelter troops. Trench warfare and tactics evolved further in the Crimean War, the American Civil War and World War I, until systems of extensive main trenches, backup trenches (in case the first lines were overrun) and communication trenches often stretched dozens of kilometres along a front without interruption, and some kilometres further back from the front line. The area of land between trenches in trench warfare is known as "No Man's Land" because it often offers no protection from enemy fire. After World War I concluded, the trench became a symbol of the war and its horrors.
Gallery
Archaeology
Trenches are used for searching and excavating ancient ruins or to dig into strata of sedimented material to get a sideways (layered) view of the deposits – with a hope of being able to place found objects or materials in a chronological order. The advantage of this method is that it destroys only a small part of the site (those areas where the trenches, often arranged in a grid pattern, are located). However, this method also has the disadvantage of only revealing small slices of the whole volume, and modern archeological digs usually employ combination methods.
Safety
Trenches that are deeper than about 1.5 m present safety risks arising from their steep walls and confined space. These risks are similar to those from pits or any steep-walled excavations. The risks include falling, injury from cave-in (wall collapse), inability to escape the trench, drowning and asphyxiation.
Falling into the trench. Mitigation methods include barriers such as railings or fencing.
Injury from cave-in, meaning collapse of a steep wall. Mitigation includes construction of sloped walls (sloped trench) or stepped walls (benched trench). For vertical walls, trench shoring stabilizes the walls, and trench shielding provides a barrier against collapsed material. The risk of cave-in increases from surcharge load, which is any weight placed outside the trench near its edge. These loads include the spoil pile (soil excavated from the trench) or heavy equipment. These add extra stress to the walls of the trench.
Inability to escape the trench because of steep and unstable walls, which may be difficult to climb. Ladders, stairs, or ramps allow exit. Cranes may assist rescue.
Drowning in water or mud that has accumulated in the trench from rain, seepage, or leaking water pipes.
Asphyxiation, poisoning, fire and explosion from gases that are denser than air and have settled in a trench. These may come from nearby industrial processing of these gases, intentional use within the trench, or leakage from nearby plumbing. These present an asphyxiation hazard and may also be toxic. Burnable gases such as natural gas present a fire and explosion risk. Oxidizers such as pure oxygen increase the risk of fire from other fuels present in the trench. Gases such as pure nitrogen and natural gas have densities similar to air but are denser when cold, for example when they have evaporated from liquid form, and may creep along the ground and fill the trench. Ventilation fans and ducts reduce the risk. Oxygen sensors and other gas sensors detect the danger; alarms from the sensors can warn the occupants.
See also
Abyssal plain
Cut (earthmoving)
Cut and fill
Ditch
Gully
Sunken lane#Erosion
Trench (album)
Trench coat
Trench fever
Trench foot
Trench mouth
Trench warfare
Tunnel
Tunnel warfare
Underground city
Underground living
Utility tunnel
References
External links
Trenching and Excavation (a NIOSH Safety and Health Topic, Centers for Disease Control and Prevention)
Trench Safety Awareness (a NIOSH Publication, Centers for Disease Control and Prevention)
Earth structures | Trench | [
"Engineering"
] | 1,538 | [
"Construction",
"Earth structures"
] |
182,283 | https://en.wikipedia.org/wiki/Miles%20per%20hour | Miles per hour (mph, m.p.h., MPH, or mi/h) is a British imperial and United States customary unit of speed expressing the number of miles travelled in one hour. It is used in the United Kingdom, the United States, and a number of smaller countries, most of which are UK or US territories, or have close historical ties with the UK or US.
Usage
Road traffic
Speed limits and road traffic speeds are given in miles per hour in the following jurisdictions:
Antigua and Barbuda
Bahamas
Belize
Dominica
Grenada
Liberia (occasionally)
Marshall Islands
Micronesia
Palau
Saint Kitts and Nevis
Saint Lucia
Saint Vincent and the Grenadines
United Kingdom
The following British Overseas Territories:
Anguilla
British Virgin Islands
British Indian Ocean Territory
Cayman Islands
Falkland Islands
Montserrat
Saint Helena, Ascension and Tristan da Cunha
Turks and Caicos Islands
The Crown dependencies:
Bailiwick of Guernsey
Isle of Man
Jersey
United States
The following United States overseas dependencies:
American Samoa
Guam
Northern Mariana Islands
Puerto Rico
United States Virgin Islands
Rail networks
Miles per hour is the unit used on the US, Canadian and Irish rail systems. Miles per hour is also used on British rail systems, excluding trams, some light metro systems, the Channel Tunnel and High Speed 1.
Nautical and aeronautical usage
Nautical and aeronautical applications favour the knot as a common unit of speed. (One knot is one nautical mile per hour, with a nautical mile being exactly 1,852 metres or about 6,076 feet.)
Other usage
In some countries mph may be used to express the speed of delivery of a ball in sporting events such as cricket, tennis and baseball.
Conversions
1 mph = 0.44704 m/s (exactly)
1 mph = 1.609344 km/h (exactly)
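These conversion factors follow directly from the definition of the international mile as exactly 1,609.344 metres; a minimal sketch:

```python
# 1 international mile is defined as exactly 1,609.344 metres, which
# fixes the conversion factors below exactly.
M_PER_MILE = 1609.344

def mph_to_mps(mph):
    """Miles per hour to metres per second (factor 0.44704)."""
    return mph * M_PER_MILE / 3600.0

def mph_to_kmh(mph):
    """Miles per hour to kilometres per hour (factor 1.609344)."""
    return mph * M_PER_MILE / 1000.0

print(f"{mph_to_mps(1):.5f} m/s")    # 0.44704 m/s
print(f"{mph_to_kmh(60):.5f} km/h")  # 96.56064 km/h
```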
See also
Kilometres per hour
Acceleration
Velocity
References
Units of velocity
Imperial units
Customary units of measurement in the United States | Miles per hour | [
"Mathematics"
] | 389 | [
"Quantity",
"Units of velocity",
"Units of measurement"
] |